id | title | abstract | authors | published_date | link | markdown
---|---|---|---|---|---|---
2306.00526
|
Layout and Task Aware Instruction Prompt for Zero-shot Document Image
Question Answering
|
Layout-aware pre-trained models have achieved significant progress on document
image question answering. They introduce extra learnable modules into existing
language models to capture layout information within document images from text
bounding box coordinates obtained by OCR tools. However, extra modules
necessitate pre-training on extensive document images. This prevents these
methods from directly utilizing off-the-shelf instruction-tuning language
foundation models, which have recently shown promising potential in zero-shot
learning. Instead, in this paper, we find that instruction-tuning language
models like Claude and ChatGPT can understand layout by spaces and line breaks.
Based on this observation, we propose the LAyout and Task aware Instruction
Prompt (LATIN-Prompt), which consists of layout-aware document content and
task-aware instruction. Specifically, the former uses appropriate spaces and
line breaks to recover the layout information among text segments obtained by
OCR tools, and the latter ensures that generated answers adhere to formatting
requirements. Moreover, we propose the LAyout and Task aware Instruction Tuning
(LATIN-Tuning) to improve the performance of small instruction-tuning models
like Alpaca. Experimental results show that LATIN-Prompt enables zero-shot
performance of Claude and ChatGPT to be comparable to the fine-tuning
performance of SOTAs on document image question answering, and LATIN-Tuning
enhances the zero-shot performance of Alpaca significantly. For example,
LATIN-Prompt improves the performance of Claude and ChatGPT on DocVQA by 263%
and 20% respectively. LATIN-Tuning improves the performance of Alpaca on DocVQA
by 87.7%. Quantitative and qualitative analyses demonstrate the effectiveness
of LATIN-Prompt and LATIN-Tuning. We provide the code in the supplementary material and will
release it to facilitate future research.
|
Wenjin Wang, Yunhao Li, Yixin Ou, Yin Zhang
|
2023-06-01T10:28:12Z
|
http://arxiv.org/abs/2306.00526v4
|
# Layout and Task Aware Instruction Prompt for Zero-shot Document Image Question Answering
###### Abstract
Layout-aware pre-trained models have achieved significant progress on document image question answering. They introduce extra learnable modules into existing language models to capture layout information within document images from text bounding box coordinates obtained by OCR tools. However, extra modules necessitate pre-training on extensive document images. This prevents these methods from directly utilizing off-the-shelf instruction-tuning language foundation models, which have recently shown promising potential in zero-shot learning. Instead, in this paper, we find that _instruction-tuning language models like Claude and ChatGPT can understand layout by spaces and line breaks_. Based on this observation, we propose the **LA**yout and **T**ask aware **I**nstruction **Prompt (LATIN-Prompt)**, which consists of layout-aware document content and task-aware instruction. Specifically, the former uses appropriate spaces and line breaks to recover the layout information among text segments obtained by OCR tools, and the latter ensures that generated answers adhere to formatting requirements. Moreover, we propose the **LA**yout and **T**ask aware **I**nstruction **Tuning (LATIN-Tuning)** to improve the performance of small instruction-tuning models like Alpaca. Experimental results show that LATIN-Prompt enables zero-shot performance of Claude and ChatGPT to be comparable to the fine-tuning performance of SOTAs on document image question answering, and LATIN-Tuning enhances the zero-shot performance of Alpaca significantly. For example, LATIN-Prompt improves the performance of Claude and ChatGPT on DocVQA by \(263\%\) and \(20\%\) respectively. LATIN-Tuning improves the performance of Alpaca on DocVQA by \(87.7\%\). Quantitative and qualitative analyses demonstrate the effectiveness of LATIN-Prompt and LATIN-Tuning. We release our code to facilitate future research1.
Footnote 1: [https://github.com/WenjinW/LATIN-Prompt](https://github.com/WenjinW/LATIN-Prompt)
## 1 Introduction
Intelligent document image question answering, as an important application of document intelligence, aims to develop AI systems to automatically answer natural language questions based on the understanding of document images. Compared with text documents, document images contain textual, visual, and layout information, which pose unique challenges for machine comprehension.
Recently, layout-aware pre-trained models have achieved significant progress on document image question answering. They introduce extra learnable modules on top of language models [14, 15, 16, 17, 18] to capture layout information within document images from text bounding box coordinates obtained by OCR tools (Fig. 1(a)). LayoutLM [14] introduces coordinate information into the model input via 2D position embeddings. LayoutLMv2 [14], LayoutLMv3 [12], and ERNIE-Layout [15] capture the layout information from coordinates via layout-aware attention mechanisms. These methods conduct pre-training on extensive document images for the newly introduced layout-aware modules.
However, the need for pre-training prevents these methods from directly utilizing off-the-shelf instruction-tuning language foundation models, which have recently shown promising potential in zero-shot learning. On the one hand, commercial large instruction-tuning models like Claude [13] and ChatGPT [15] are closed-source, impeding further pre-training. On the other hand, open-source instruction-tuning models are of
Figure 1: (a) Existing methods introduce layout-aware modules into language models to capture layout information within document images from text bounding box coordinates obtained by OCR tools. They need further pre-training on extensive document images. (b) Our method allows instruction-tuning language models to _capture layout by spaces and line breaks_ and can conduct _zero-shot inference_ on document image question-answering.
much larger scale than traditional models. For example, Alpaca [14] consists of 7 billion parameters, whereas BERT\({}_{\text{LARGE}}\) [13] only comprises 300+ million parameters. Existing methods [22, 23, 12, 13] select over 10 million pages from the IIT-CDIP Test Collection dataset [11] for pre-training. But pre-training instruction-tuning models like Alpaca on 10 million pages is prohibitively expensive.
Instead, in this work, we find that _instruction-tuning language models like Claude and ChatGPT can understand layout by spaces and line breaks_ (Fig. 1(b)). Based on this observation, we propose the **LA**yout and **T**ask aware **I**nstruction **Prompt (LATIN-Prompt)**, which consists of layout-aware document content and task-aware instruction. Specifically, given the OCR results, we use appropriate spaces and line breaks to connect all the text segments together, resulting in the layout-aware document content. The layout information contained within the coordinates is translated into spaces and line breaks. Further, we integrate task instruction into layout-aware document content, ensuring that the model generates answers that adhere to the formatting requirements. Although simple, our method is intuitive and consistent with human behavior. Humans employ whitespace (blank) regions to represent and comprehend layout.
We also find that small instruction-tuning language foundation models like Alpaca are not good at understanding layout by spaces. So we propose the **LA**yout and **T**ask aware **I**nstruction **Tuning (LATIN-Tuning)** to improve their performance. We convert CSV-format tables into strings containing spaces and line breaks and construct instruction-tuning data from these strings by Claude.
Our contributions are summarized as follows:
* We find that instruction-tuning models like Claude and ChatGPT can capture layout by spaces and line breaks, and propose LATIN-Prompt to conduct zero-shot inference on document image question-answering tasks.
* We propose the LATIN-Tuning to enhance the ability of Alpaca to comprehend layout by spaces and line breaks.
* Experimental results on three datasets show that LATIN-Prompt enables zero-shot performance of Claude and ChatGPT to be comparable to the fine-tuning performance of SOTAs on document image question answering, and LATIN-Tuning enhances the zero-shot performance of Alpaca significantly. Quantitative and qualitative analyses demonstrate the effectiveness of LATIN-Prompt and LATIN-Tuning.
## 2 Related Work
### Visually-rich document understanding
Visually-rich Document Understanding (VrDU) focuses on recognizing and understanding scanned or digital-born document images with language, vision, and layout information. Traditional works in VrDU employ CNNs [15, 16, 17, 18, 19, 20, 21].
## 3 Method
We propose the LATIN-Prompt and LATIN-Tuning for instruction-tuning language models to conduct zero-shot inference on document image question-answering tasks.
### LATIN-Prompt
The key ideas of LATIN-Prompt are as follows: (1) capture the layout information by spaces and line breaks; (2) generate answers that adhere to formatting requirements by task instruction. Fig. 2 illustrates the process of LATIN-Prompt. Given OCR results of a document image, we recover layout information within it by using appropriate spaces and line breaks to connect all the text segments together, resulting in the layout-aware document content. Then we insert the layout-aware document content and question into the task instruction prompt template. The instruction-tuning language model takes the filled template as input and predicts the answer for the question in the required format.
Formally, given a document image \(D\) and a question answer pair \(q\) and \(a\), we process the document image by an OCR tool. The extracted text segments and corresponding bounding boxes are denoted as \(\bar{S}=\{s_{1},s_{2},\dots,s_{n}\}\) and \(B=\{b_{1},b_{2},\dots,b_{n}\}\), where \(n\) represents the number of text segments.
**Layout Aware Document** We employ appropriate spaces and line breaks to connect all text segments together, resulting in layout-aware document content. The process is as follows:
Step 1. Re-arrange the text segments and bounding boxes in the order from top to bottom and from left to right based on the coordinates.
Step 2. According to the coordinates, place the text segments and bounding boxes in the \(i\)-th row into the list \(S_{i}\) and \(B_{i}\) respectively from left to right, and calculate the character count \(c_{i}\) and the width \(w_{i}\) of \(i\)-th row. The \(w_{i}\) equals the width of the union of bounding boxes in the list \(B_{i}\).
Step 3. Calculate the character width of document \(D\), which is defined as follows:
\[\bar{c}:=w_{i^{*}}/c_{i^{*}},\quad i^{*}=\operatorname{argmax}_{i}c_{i}, \tag{1}\]
where \(i^{*}\)-row has the maximum character count among all rows.
Step 4. Join text segments in the same row from left to right by spaces. Given two adjacent text segments \(S_{i,j}\) and \(S_{i,k}\), the number of spaces joining them is equal to
\begin{table}
\begin{tabular}{c l} \hline \#Line & Prompt \\ \hline
1 & You are asked to answer questions asked on a document image. \\
2 & The answers to questions are short text spans taken verbatim from the document. This means that the answers comprise a set of contiguous text tokens present in the document. \\
3 & Document: \\
4 & \{Layout Aware Document placeholder\} \\
5 & \\
6 & Question: \{Question placeholder\} \\
7 & \\
8 & Directly extract the answer of the question from the document with as few words as possible. \\
9 & \\
10 & Answer: \\ \hline \end{tabular}
\end{table}
Table 1: DocVQA Prompt Template. The \(\{\}\) represents the placeholder.
Figure 2: The overview of LATIN-Prompt (Sec. 3.1). Given a document image and the corresponding question, we recover the layout information within the document image from OCR results using appropriate spaces and line breaks, and then insert the layout aware document content and question into the task instruction prompt template together. The instruction-tuning large language foundation model takes the filled template as input and predicts the answer to the question in the required format.
\(h_{i,jk}/\bar{c}\) where \(h_{i,jk}\) is the horizontal distance between the two bounding boxes \(B_{i,j}\) and \(B_{i,k}\).
Step 5. Join different rows by line breaks to obtain the layout-aware document content (denoted as \(S^{\prime}\)).
Recovering layout information by spaces and line breaks is simple but intuitive. In fact, people do represent and understand layout through blank areas between text elements, rather than precise bounding box coordinates.
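To make Steps 1 to 5 concrete, the sketch below is a minimal illustration rather than the released implementation; it assumes OCR boxes given as \((x_{0},y_{0},x_{1},y_{1})\) pixel coordinates and uses a simple vertical-proximity threshold (`row_tol`) to group segments into rows.

```python
# Minimal sketch of Steps 1-5 (Sec. 3.1). Assumes boxes are (x0, y0, x1, y1)
# in pixels; row grouping by vertical proximity is a simplifying assumption.
def layout_aware_document(segments, boxes, row_tol=10):
    # Step 1: sort top-to-bottom, then left-to-right.
    items = sorted(zip(segments, boxes), key=lambda it: (it[1][1], it[1][0]))

    # Step 2: group items into rows by vertical proximity of box tops.
    rows = []
    for text, box in items:
        if rows and abs(box[1] - rows[-1][-1][1][1]) <= row_tol:
            rows[-1].append((text, box))
        else:
            rows.append([(text, box)])
    for row in rows:
        row.sort(key=lambda it: it[1][0])  # left-to-right within a row

    # Step 3: character width from the row with the largest character count.
    def row_stats(row):
        chars = sum(len(t) for t, _ in row)
        width = max(b[2] for _, b in row) - min(b[0] for _, b in row)
        return chars, width
    chars, width = max((row_stats(r) for r in rows), key=lambda s: s[0])
    char_w = max(width / max(chars, 1), 1e-6)

    # Steps 4-5: join segments with proportional spaces, rows with line breaks.
    lines = []
    for row in rows:
        line, prev_right = "", None
        for text, box in row:
            if prev_right is not None:
                gap = max(box[0] - prev_right, 0)
                line += " " * max(int(round(gap / char_w)), 1)
            line += text
            prev_right = box[2]
        lines.append(line)
    return "\n".join(lines)
```

The proportional-space rule in Step 4 is what lets the blank regions of the page survive as runs of spaces in the resulting string.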
**Task Aware Instruction** Different from open-ended question-answering, document image question-answering typically involves explicit requirements for the answer format. For example, DocVQA [22] is an extractive QA task that requires answers to be extracted from the document. However, with only the layout-aware document content and question, the model can easily generate answers that are not in the document and generate unnecessary descriptions or explanations.
So we integrate task instruction into layout-aware document content, ensuring that the model generates answers that adhere to the formatting requirements. Specifically, we manually designed different instruction prompt templates for different tasks. Each template \(P(S^{\prime},q)\) contains the requirement of the task as well as placeholders for the layout-aware document content \(S^{\prime}\) and question \(q\).
Table 1 shows the prompt template for DocVQA. In the first and second lines, we explain the meaning of extraction in detail to the model according to the task description in DocVQA. Lines 3 to 6 provide placeholders for the layout-aware document and question. To avoid the model forgetting its task due to the interference of document content, the 8th line summarizes and reiterates the task requirements. Please refer to the supplementary materials for the prompt templates of InfographicVQA [22] and MP-DocVQA [23].
**Zero-shot Inference** Finally, the instruction-tuning language model \(M\) takes the filled template \(P(S^{\prime},q)\) as input and predicts the answer as follows:
\[a^{\prime}=f_{M}(P(S^{\prime},q)), \tag{2}\]
where \(a^{\prime}\) represents the prediction and the \(f_{M}\) represents the decoding process of model \(M\).
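As an illustration of Eq. (2), the fragment below fills the DocVQA template of Table 1 with the layout-aware document \(S^{\prime}\) and the question \(q\); `call_llm` is a hypothetical placeholder for whichever completion API (Claude, ChatGPT, or a local Alpaca) is being evaluated.

```python
# Hypothetical sketch of Eq. (2): fill the task template P(S', q) and decode.
DOCVQA_TEMPLATE = (
    "You are asked to answer questions asked on a document image.\n"
    "The answers to questions are short text spans taken verbatim from the "
    "document. This means that the answers comprise a set of contiguous text "
    "tokens present in the document.\n"
    "Document:\n{layout_aware_document}\n\n"
    "Question: {question}\n\n"
    "Directly extract the answer of the question from the document with as "
    "few words as possible.\n\n"
    "Answer:"
)

def zero_shot_answer(layout_aware_document, question, call_llm):
    # call_llm is a placeholder: any function mapping a prompt string to text.
    prompt = DOCVQA_TEMPLATE.format(
        layout_aware_document=layout_aware_document, question=question)
    return call_llm(prompt).strip()
```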
### LATIN-Tuning
Although instruction-tuning models like Claude and ChatGPT can comprehend and utilize LATIN-Prompt well, we found that the performance of smaller models like Alpaca (7B) was not up to par. So we propose LATIN-Tuning to enhance their ability to comprehend layout by spaces and line breaks. As shown in Fig. 3, we employ Pandas2 and Claude to construct an instruction-tuning dataset from CSV-format tables. The process is as follows:
Footnote 2: [https://pandas.pydata.org](https://pandas.pydata.org)
(1) For each CSV table, we convert it into a document string with spaces and line breaks using Pandas. Please refer to the appendix for the code implementation. (2) We insert the document string into the Question Generation Prompt Template and generate a question-answer pair by Claude. (3) We insert the document string and question into the Instruction Prompt Template to form the input, with the answer serving as the target. Refer to the appendix for the details of the Question Generation Prompt Template and the Instruction Prompt Template.
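The conversion in step (1) can be realized with pandas' fixed-width rendering; the snippet below is an illustrative sketch rather than the code from the paper's appendix.

```python
import pandas as pd

# Illustrative sketch: render a CSV table as a whitespace-aligned string,
# so column structure is expressed only through spaces and line breaks.
def table_to_layout_string(csv_path):
    df = pd.read_csv(csv_path)
    # to_string pads every column to a fixed width with spaces and separates
    # rows with line breaks, mirroring the layout-aware document format.
    return df.to_string(index=False)
```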
At last, we fine-tune the Alpaca on the instruction-tuning dataset to enhance its ability to comprehend layout by spaces and line breaks.
## 4 Experiment
### Experiment Settings
**Datasets** We evaluate our method on three document image question answering datasets: DocVQA [22] is an extractive question answering task and consists of 50,000 questions defined on 12,767 document images; InfographicVQA [22] consists of 5,485 infographics, which convey information through text, graphics, and visual elements together; MP-DocVQA [23] extends DocVQA to more realistic multi-page scenarios where a document typically consists of multiple pages that should be processed together. Following common practice, we use Azure OCR results for DocVQA provided by DUE [1] and use official OCR results3
Figure 3: Construction of LATIN-Tuning data (Sec. 3.2). (1) Convert the CSV-format table into document string with spaces and line breaks by Pandas. (2) Insert document string into the Question Generation Prompt Template and generate a question-answer pair by Claude. (3) Insert document string and question into the Instruction Prompt Template to form the input, with the answer serving as the target.
for InfographicVQA and MP-DocVQA. For all datasets, we adopt the Average Normalized Levenshtein Similarity (ANLS) [2] as the evaluation metric.
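For reference, ANLS scores each prediction by its normalized Levenshtein similarity to the closest ground-truth answer and zeroes out matches whose normalized distance reaches the threshold (0.5 in the standard protocol); a minimal sketch is given below.

```python
# Minimal ANLS sketch (threshold 0.5 as in the standard DocVQA protocol).
def levenshtein(a, b):
    # Classic dynamic-programming edit distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def anls(predictions, ground_truths, threshold=0.5):
    # predictions: list of strings; ground_truths: list of lists of strings.
    scores = []
    for pred, answers in zip(predictions, ground_truths):
        best = 0.0
        for ans in answers:
            p, a = pred.strip().lower(), ans.strip().lower()
            nl = levenshtein(p, a) / max(len(p), len(a), 1)
            best = max(best, 1.0 - nl if nl < threshold else 0.0)
        scores.append(best)
    return sum(scores) / len(scores)
```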
**LATIN-Prompt Baselines** To evaluate the zero-shot performance of LATIN-Prompt, we compare it with Plain Prompt on three instruction-tuning language models: Claude4, ChatGPT-3.55, and Alpaca. The template of Plain Prompt is as follows: “Document: {document}\nQuestion: {question}\nDirectly extract the answer of the question from the document.\nAnswer:”, where {document} and {question} are placeholders for the original text segments from OCR tools and for the question, respectively. We also compare LATIN-Prompt's zero-shot performance with the fine-tuning performance of pre-training-fine-tuning methods. Moreover, we report the result of multimodal GPT-4 presented in the OpenAI blog [3]. Due to API permission restrictions, we leave the exploration of GPT-4+LATIN-Prompt as future work.
Footnote 4: We use the claude-v1.3 API.
Footnote 5: We use the gpt-3.5-turbo API from Azure OpenAI.
We only evaluate Alpaca on DocVQA because the other two tasks are too complex for it. Alpaca cannot follow the task instructions of these two tasks. In fact, Alpaca performs poorly on InfographicVQA (refer to Tab. 5) and cannot generate answers meeting the format requirement of MP-DocVQA. We exclude ChatGPT-3.5 on MP-DocVQA because it would need to process too many document pages, and the experimental cost would exceed our budget.
**LATIN-Tuning** We randomly sample 5000 CSV-format tables from WikiTableQuestions [3] with replacement to create the instruction-tuning dataset. We fine-tune Alpaca on the created dataset for 3 epochs using the AdamW [14] optimizer with a warmup ratio of 0.03, following Alpaca [15]. We use a batch size of \(64\) and a learning rate of \(2e-5\). The resulting model is denoted as Alpaca+LATIN-Tuning and we compare it with Alpaca to evaluate the performance of LATIN-Tuning.
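These hyperparameters translate into a straightforward configuration; the sketch below uses Hugging Face `transformers` and is illustrative only (the authors' actual training script and data loader are not reproduced here).

```python
from transformers import TrainingArguments

# Illustrative LATIN-Tuning configuration (Sec. 4.1): 3 epochs, AdamW,
# warmup ratio 0.03, effective batch size 64, learning rate 2e-5.
training_args = TrainingArguments(
    output_dir="alpaca-latin-tuning",
    num_train_epochs=3,
    learning_rate=2e-5,
    warmup_ratio=0.03,
    per_device_train_batch_size=8,   # 8 x 8 accumulation = effective batch 64
    gradient_accumulation_steps=8,
    optim="adamw_torch",
    logging_steps=10,
)
```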
### Performance of LATIN-Prompt
Table 2 presents the experimental results on DocVQA. (1) In the pre-training-fine-tuning paradigm, the layout-aware multimodal pre-trained model performs better than the pure language model. (2) Further, increasing the amount of fine-tuning data can improve the performance of models. (3) The instruction-tuning language models perform poorly with the
\begin{table}
\begin{tabular}{l l c c c c c c c} \hline \hline Paradigm & Method & Parameters & Text & Vision & Layout & Fine-tuning Set & ANLS & \(\Delta\)ANLS \\ \hline \multirow{9}{*}{Fine-tuning} & BERT\({}_{\text{LARGE}}\) & 340M & ✓ & & & train & 0.6768 & \\ & RoBERTa\({}_{\text{LARGE}}\) & 355M & ✓ & & & train & 0.6952 & \\ & UniLMv2\({}_{\text{LARGE}}\) & 340M & ✓ & & & train & 0.7709 & \\ \cline{2-9} & LayoutLM\({}_{\text{LARGE}}\) & 343M & ✓ & & ✓ & train & 0.7259 & \\ & LayoutLMv2\({}_{\text{LARGE}}\) & 426M & ✓ & ✓ & ✓ & train & 0.8348 & \\ & LayoutLMv3\({}_{\text{LARGE}}\) & 368M & ✓ & ✓ & ✓ & train & 0.8337 & \\ & ERNIE-Layout\({}_{\text{LARGE}}\) & 507M & ✓ & ✓ & ✓ & train & 0.8321 & \\ & LayoutLMv2\({}_{\text{LARGE}}\) & 426M & ✓ & ✓ & ✓ & train + dev & 0.8529 & \\ & ERNIE-Layout\({}_{\text{LARGE}}\) & 507M & ✓ & ✓ & ✓ & train + dev & 0.8486 & \\ \hline \multirow{7}{*}{Zero-shot} & Alpaca+Plain Prompt & \multirow{2}{*}{7B} & ✓ & & & - & 0.3567 & \\ & Alpaca+LATIN-Prompt & & ✓ & & ✓ & - & 0.4200 & +0.0633 \\ \cline{2-9} & Claude+Plain Prompt & \multirow{2}{*}{Unknown} & ✓ & & & - & 0.2298 & \\ & Claude+LATIN-Prompt & & ✓ & & ✓ & - & 0.8336 & +0.6038 \\ \cline{2-9} & ChatGPT-3.5+Plain Prompt & \multirow{2}{*}{Unknown} & ✓ & & & - & 0.6866 & \\ & ChatGPT-3.5+LATIN-Prompt & & ✓ & & ✓ & - & 0.8255 & +0.1389 \\ \cline{2-9} & GPT-4\({}^{*}\) & Unknown & \multicolumn{3}{c}{not clearly described} & - & 0.8840 & \\ \hline \hline \end{tabular}
* represents that we report the result of GPT-4 presented in the OpenAI blog [3]. Although lacking a description of technical details, compared with Claude and ChatGPT-3.5, GPT-4 utilizes visual information. The LATIN-Prompt is orthogonal to GPT-4 and can be used to further improve the performance of GPT-4. However, due to API permission limitations, we are unable to evaluate the performance of GPT-4 + LATIN-Prompt and leave it for future work.
\end{table}
Table 2: Performance on test dataset of DocVQA. Text, Vision, and Layout represent the modal information used by the model. The \(\Delta\)ANLS represents the gain of LATIN-Prompt compared to Plain Prompt. Unknown indicates missing relevant details.
Figure 4: The impact of the size of instruction fine-tuning dataset on LATIN-Tuning. The performance of LATIN-Tuning improves as the number of samples increases.
plain prompt based on the original text segments obtained by OCR tools. (4) The LATIN-Prompt proposed in this paper significantly improves the zero-shot performance of instruction-tuning language models. It enables the zero-shot performance of Claude and ChatGPT-3.5 to significantly outperform the fine-tuned layout-aware LayoutLM. In addition, despite only using text and layout information, their zero-shot performance is comparable to the performance of fine-tuned layout-aware multimodal pre-trained models. (5) Although their exact sizes are not disclosed, Claude and GPT-3.5 presumably have far more parameters than Alpaca. The experimental results show that the final zero-shot performance is positively correlated with the size and ability of the instruction-tuning models. (6) The zero-shot performance of GPT-4 matches the best fine-tuned performance. Although technical details are not described, compared with Claude and GPT-3.5, GPT-4 utilizes visual information, reflecting the importance of visual information for document image understanding. LATIN-Prompt is orthogonal to GPT-4. However, due to API permission restrictions, we leave equipping GPT-4 with LATIN-Prompt for future work.
Table 3 presents results on InfographicVQA. Experimental results show that LATIN-Prompt enables the zero-shot performance of Claude and GPT-3.5 to exceed the performance of all fine-tuned baselines except TILT. We find that Claude performs poorly with Plain Prompt, but its performance improves significantly when using LATIN-Prompt.
Table 4 presents results on MP-DocVQA. It shows that, with LATIN-Prompt, Claude's zero-shot performance exceeds fine-tuning performance of Longformer [11] and Big Bird [11] designed for long sequences. Furthermore, its zero-shot performance is comparable to the fine-tuning performance of Hi-VT5 [12], a layout-aware multimodal model for multi-page document images.
### Performance of LATIN-Tuning
Table 5 demonstrates that LATIN-Tuning improves the performance of Alpaca on DocVQA by \(87.7\%\) and on InfographicVQA by \(113\%\). Nevertheless, its performance still lags behind Claude. We will explore more effective instruction-tuning methods in the future.
### Quantitative and Qualitative Analyses
**Effect of components of LATIN-Prompt** LATIN-Prompt consists of layout-aware document content (Layout) and task instruction (Task). Table 6 presents the results of the ablation study of LATIN-Prompt with Claude and ChatGPT-3.5 on DocVQA and InfographicVQA. The results show that both the layout-aware document content and the task instruction can significantly improve the zero-shot performance of Claude and ChatGPT-3.5. The improvement brought by task
\begin{table}
\begin{tabular}{l l|c c|c c c c} \hline \hline \multirow{2}{*}{Paradigm} & \multirow{2}{*}{Method} & \multicolumn{2}{c|}{Overall} & \multicolumn{4}{c}{Answer type} \\ & & ANLS & \(\Delta\)ANLS & Image span & Question span & Multiple spans & Non span \\ \hline \multirow{6}{*}{Fine-tuning} & BERT & 0.2078 & & 0.2625 & 0.2333 & 0.0739 & 0.0259 \\ & LayoutLM & 0.2720 & & 0.3278 & 0.2386 & 0.0450 & 0.1371 \\ & LayoutLMv2 & 0.2829 & & 0.3430 & 0.2763 & 0.0641 & 0.1114 \\ & BROS & 0.3219 & & 0.3997 & 0.2317 & 0.1064 & 0.1068 \\ & pix2struct & 0.4001 & & 0.4308 & 0.4839 & 0.2059 & 0.3173 \\ & TILT & **0.6120** & & **0.6765** & **0.6419** & **0.4391** & **0.3832** \\ \hline \multirow{4}{*}{Zero-shot} & Claude + Plain Prompt & 0.0798 & & 0.0951 & 0.0913 & 0.0203 & 0.0280 \\ & Claude + LATIN-Prompt & 0.5451 & +0.4653 & 0.5992 & 0.5861 & 0.3985 & 0.3544 \\ & ChatGPT-3.5 + Plain Prompt & 0.3335 & & 0.3749 & 0.4505 & 0.0950 & 0.1822 \\ & ChatGPT-3.5 + LATIN-Prompt & 0.4898 & +0.1563 & 0.5457 & 0.5639 & 0.3458 & 0.2798 \\ \hline \hline \multirow{2}{*}{Method} & \multicolumn{5}{c|}{Evidence} & \multicolumn{3}{c}{Operation} \\ & Table/List & Textual & Visual object & Figure & Map & Comparison & Arithmetic & Counting \\ \hline BERT & 0.1852 & 0.2995 & 0.0896 & 0.1942 & 0.1709 & 0.1805 & 0.0160 & 0.0436 \\ LayoutLM & 0.2400 & 0.3626 & 0.1705 & 0.2551 & 0.2205 & 0.1836 & 0.1559 & 0.1140 \\ LayoutLMv2 & 0.2449 & 0.3855 & 0.1440 & 0.2601 & 0.3110 & 0.1897 & 0.1130 & 0.1158 \\ BROS & 0.2653 & 0.4488 & 0.1878 & 0.3095 & 0.3231 & 0.2020 & 0.1480 & 0.0695 \\ pix2struct & 0.3833 & 0.5256 & 0.2572 & 0.3726 & 0.3283 & 0.2762 & 0.4198 & 0.2017 \\ TILT & **0.5917** & **0.7916** & 0.4545 & **0.5654** & 0.4480 & **0.4801** & **0.4958** & 0.2652 \\ \hline Claude + Plain Prompt & 0.0849 & 0.1099 & 0.0858 & 0.0695 & 0.0496 & 0.0589 & 0.0271 & 0.0368 \\ Claude + LATIN-Prompt & 0.5421 & 0.6725 & **0.4897** & 0.5027 & **0.4982** & 0.4598 & 0.4311 & **0.2708** \\ \hline ChatGPT-3.5 + Plain Prompt & 0.3481 & 0.3893 & 0.3670 & 0.3114 & 0.1843 & 0.2349 & 0.1466 & 0.2320 \\ ChatGPT-3.5 + LATIN-Prompt & 0.4917 & 0.6016 & 0.4491 & 0.4585 & 0.3614 & 0.4312 & 0.3157 & 0.2660 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Performance on test dataset of InfographicVQA. The questions in InfographicVQA can be grouped according to answer type, evidence source, and operation. We list both the overall performance of the model and its performance on different groups. All performances are evaluated by ANLS. The \(\Delta\)ANLS represents the gain of LATIN-Prompt compared to Plain Prompt. The highest and second-highest scores are bolded and underlined.
instruction is more significant in Claude because it ensures that the format of the answers generated by the model meets the task requirements. On the basis of the correct format, the layout-aware document content further improves the performance of the model because it enables the model to utilize the layout information among text segments.
**Effect of instruction-tuning data size for LATIN-Tuning** Figure 4 shows that the performance of LATIN-Tuning improves as the number of samples increases. The improvement rate slows down when the sample count exceeds 2000.
**Case study of LATIN-Prompt** Figure 5 provides cases of Claude on DocVQA. Compared with Plain Prompt, LATIN-Prompt enables the model to comprehend layout more effectively and generate answers meeting the format requirement.
**Case study of LATIN-Tuning** Case study on DocVQA shows that LATIN-Tuning enables Alpaca to understand layout by spaces. Please refer to the appendix for details.
## 5 Conclusion
In this work, we present a new perspective on comprehending layout information within document images. Instead of capturing layout by bounding box coordinates, we find that instruction-tuning language models like Claude and ChatGPT can understand layout by spaces and line breaks. Based on this observation, we propose LATIN-Prompt, which enables the zero-shot performance of Claude and ChatGPT to be comparable to the fine-tuning performance of SOTAs on document image question answering. Moreover, we propose LATIN-Tuning, which enhances the ability of Alpaca
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{Prompt} & \multicolumn{2}{c}{DocVQA} & \multicolumn{2}{c}{InfographicVQA} \\ & Claude & ChatGPT & Claude & ChatGPT \\ \hline LATIN-Prompt & 0.8311 & 0.8135 & 0.5218 & 0.4708 \\ w/o Layout & 0.7825 & 0.7491 & 0.4638 & 0.4341 \\ w/o Task & 0.3637 & 0.7561 & 0.1234 & 0.4296 \\ w/o Task+Layout & 0.2144 & 0.6795 & 0.0702 & 0.3103 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Ablation of LATIN-Prompt on validation data of DocVQA and InfographicVQA.
\begin{table}
\begin{tabular}{l l c c} \hline \hline Paradigm & Method & Setup & ANLS \\ \hline \multirow{8}{*}{Fine-tuning} & BERT & Max Conf & 0.5347 \\ & & Concat & 0.4183 \\ \cline{2-4} & Longformer & Max Conf & 0.5506 \\ & & Concat & 0.5287 \\ \cline{2-4} & Big Bird & Max Conf & 0.5854 \\ & & Concat & 0.4929 \\ \cline{2-4} & LayoutLMv3 & Max Conf & 0.5513 \\ & & Concat & 0.4538 \\ \cline{2-4} & T5 & Max Conf & 0.4028 \\ & & Concat & 0.5050 \\ \cline{2-4} & Hi-VT5 & Multipage & **0.6201** \\ \hline Zero-shot & Claude+LATIN-Prompt & Max Conf & 0.6129 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Performance on test dataset of MP-DocVQA. Concat indicates concatenating multi-page content. Max Conf indicates processing pages separately and choosing an answer based on confidence. We only evaluate Claude with LATIN-Prompt in the Max Conf setting because Plain Prompt does not apply to Max Conf, and Concat exceeds the length limit.
Figure 5: Case study of Claude on DocVQA. Due to the lack of task instruction, Plain Prompt generates unnecessary words (in blue), violating the extraction requirement. Moreover, Plain Prompt cannot capture layout information and generates incorrect answers (in green). In Case (A), it regards “Coffee” and “Chocolate”, which are semantically similar to “choco”, as the answer. In Case (B), it regards “61” as the answer since it directly follows “romance and mystery”. In Case (C), it fails to comprehend the document and engages in erroneous reasoning. In contrast, LATIN-Prompt can understand layout relationships and generate the correct answer (in red). The document images of these cases are complex. Due to limited space, we only display a portion of the original document images here. Please refer to the appendix for the original document images and more cases.
to comprehend layout by spaces and line breaks. In the future, we will explore to incorporate visual information into LATIN-Prompt and create more effective instruction-tuning dataset for LATIN-Tuning.
|
2307.08253
|
Kibble-Zurek scaling in the quantum Ising chain with a time-periodic
perturbation
|
We consider the time-dependent transverse field Ising chain with
time-periodic perturbations. Without perturbations, this model is one of the
famous models that obeys the scaling in the adiabatic limit predicted by the
quantum Kibble-Zurek mechanism (QKZM). However, it is known that when
oscillations are added to the system, the non-perturbative contribution becomes
larger and the scaling may break down even if the perturbation is small.
Therefore, we analytically analyze the density of defects in the model and
discuss how much the oscillations affect the scaling. As a result, although the
non-perturbative contribution does not become zero in the adiabatic limit, the
scaling does not change from the prediction of the QKZM. This indicates that
the QKZM is robust to the perturbations.
|
Takayuki Suzuki, Kaito Iwamura
|
2023-07-17T05:37:54Z
|
http://arxiv.org/abs/2307.08253v1
|
# Kibble-Zurek scaling in the quantum Ising chain with a time-periodic perturbation
###### Abstract
We consider the time-dependent transverse field Ising chain with time-periodic perturbations. Without perturbations, this model is one of the famous models that obeys the scaling in the adiabatic limit predicted by the quantum Kibble-Zurek mechanism (QKZM). However, it is known that when oscillations are added to the system, the non-perturbative contribution becomes larger and the scaling may break down even if the perturbation is small. Therefore, we analytically analyze the density of defects in the model and discuss how much the oscillations affect the scaling. As a result, although the non-perturbative contribution does not become zero in the adiabatic limit, the scaling does not change from the prediction of the QKZM. This indicates that the QKZM is robust to the perturbations.
## I Introduction
The Kibble-Zurek mechanism (KZM) is a fundamental concept that explains the formation of topological defects during non-equilibrium phase transitions. The original theory was proposed in the context of cosmology, where the universe underwent a symmetry-breaking phase transition in the early stages of its evolution [1; 2]. Since then, the KZM has been adapted to condensed matter systems, especially in the study of quantum phase transitions [3; 4; 5]. The KZM has been experimentally validated in a variety of systems, such as the superfluid helium experiments [6] and the superconductor experiments [7; 8; 9].
The quantum Kibble-Zurek mechanism (QKZM) is an extension of the KZM that incorporates quantum effects. In the context of phase transitions, quantum corrections can lead to significant modifications in the physics near the critical point, giving rise to novel phenomena. The QKZM has been developed to investigate how these quantum corrections affect the predictions of the KZM and take into account the quantum fluctuations near the critical point. The QKZM has already been studied [10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51] and observed in many experiments [52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68; 69; 70; 71; 72; 73; 74]. In the study of the QKZM, a theoretical approach based on the one-dimensional transverse field Ising model is sometimes used [25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68; 69; 71; 73; 74]. The system begins with all spins aligned, which corresponds to the ground state in the infinite past. As the system's energy parameter is ramped linearly in time, e.g., as \(v(t-t_{\rm c})\), a phase transition occurs, resulting in the emergence of defects. According to the QKZM, the density of defects is generally given by \(n\propto v^{d\nu/(1+z\nu)}\), where \(d\) is the dimension of the system, \(z\) is the dynamic exponent, and \(\nu\) is the correlation length exponent. In the one-dimensional transverse field Ising model, these exponents are given by \(z=\nu=1\) [10; 45].
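For instance, substituting \(d=1\) and \(z=\nu=1\) into this relation gives \(n\propto v^{1/2}\) for the transverse field Ising chain, so the defect density vanishes as the square root of the ramp speed in the adiabatic limit \(v\to 0\).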
This scaling is an estimate of the computational time for quantum annealing, since it corresponds to the probability of successfully obtaining the ground state. Therefore, it is important to investigate what happens to the scaling when the ramp deviates from linearity or perturbations are added. The robustness to these changes has been investigated in several previous studies. For example, when the spin-spin coupling is changed alternately, the density of defects includes a factor that decays exponentially and is subject to large corrections [48]. Furthermore, numerical simulation shows that the density of defects increases due to the effect of white noise [49]. It is also known to exhibit nontrivial behavior when oscillations are added as perturbations, but the effect of the perturbation on the scaling is not derived analytically [51].
This nontrivial behavior, caused by adding an oscillation term to a linear ramp, is also observed in other fields. The Franz-Keldysh effect, originally proposed in the 1950s [75; 76; 77; 78], is an important phenomenon observed in semiconductors when subjected to strong electric fields. This analysis method is also applied to the dynamically assisted Schwinger mechanism [79], the extension of the Schwinger mechanism [80; 81; 82] which explains the phenomenon in quantum electrodynamics where electron-positron pairs are generated in a vacuum by the application of an electric field. Recent research on the dynamically assisted Schwinger mechanism calculates the particle pair creation rate analytically using the Furry picture (FP) [83] for a system in which an oscillating electric field is perturbatively added to a strong constant electric field. It has been suggested that the perturbed electric field allows non-adiabatic contributions to appear [84; 85].
In this paper, we consider the transverse field Ising model which depends linearly on time, with time-periodic perturbations and investigate how the addition of oscillations affects the phase transition behavior in the QKZM framework. The analytical expression of the density of defects is derived using the Landau-Zener-Stukelberg-Majorana (LZSM) model [86; 87; 88; 89]. The LZSM model describes a two-level system whose Hamiltonian has diagonal elements that are linearly dependent on time, while the off-diagonal elements are time-independent. The calculations are performed using the perturbation and FP formulation to derive analytical solutions with approximations. The perturbation approximation is valid for the non-adiabatic region, while the FP formulation is valid for the adiabatic region.
The structure of this paper is as follows. In Sec. II,
we analyze the contribution of time-periodic perturbations to the two-level system for the calculations in the next section. In this section, we introduce the LZSM model and analyze the dynamics of the system when time-periodic perturbations are added, using the perturbation theory and the FP formalism. Furthermore, we confirm that these approximate solutions are in good agreement with numerical calculations. In Sec. III, we consider a time-dependent transverse field Ising chain with a time-periodic perturbation. We determine how the density of defects changes when a time-periodic perturbation is applied to the diagonal or off-diagonal elements, and compare the results with those obtained by the QKZM. In Sec. IV, we summarize the discussion so far.
## II Transition probabilities of two-level system
This section focuses on the treatment of the LZSM model in the presence of an external oscillating field, as a preparation for the analysis of many-body systems later. There have been some previous studies on this topic [90; 91]. Here, we first introduce the LZSM model, and then the perturbation theory and the FP formulation for it. Finally, we evaluate the validity of these approximations.
### LZSM model
The LZSM model is described by the two-level Hamiltonian
\[H_{\text{LZSM}}(t)=\frac{1}{2}vt\sigma^{z}+\Delta\sigma^{x}.\]
In this model, if a state was an instantaneous eigenstate in the infinite past, the probability of transitioning to another instantaneous eigenstate in the infinite future is given by
\[P_{\text{LZSM}}=\exp{\left(-\frac{2\pi\Delta^{2}}{v}\right)}, \tag{1}\]
where natural units are used. When \(\Delta\) is significantly larger than \(\sqrt{v}\), the system is considered adiabatic, resulting in a small transition probability. Conversely, when \(\Delta\) is significantly smaller than \(\sqrt{v}\), the system is characterized as non-adiabatic, leading to a large transition probability. Recent studies of the LZSM model have investigated the dynamics under various conditions, including the presence of external oscillating perturbations [90; 91; 92]. The perturbation approach derives the approximate formula for the LZSM model when the off-diagonal elements of the Hamiltonian are small [90].
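As a quick sanity check of Eq. (1), the sketch below integrates the two-level Schrödinger equation for \(H_{\text{LZSM}}\) with a fixed-step RK4 scheme and compares the surviving \(|\uparrow\rangle\) population with the closed-form probability; the time window and step size are illustrative choices.

```python
import numpy as np

# Numerically check Eq. (1): P_LZSM = exp(-2*pi*Delta^2 / v).
def lzsm_survival(v=1.0, delta=0.3, t_max=100.0, dt=2e-3):
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    sx = np.array([[0, 1], [1, 0]], dtype=complex)

    def rhs(t, psi):
        h = 0.5 * v * t * sz + delta * sx
        return -1j * (h @ psi)

    psi = np.array([1.0 + 0j, 0.0 + 0j])   # start in |up>, the diabatic state
    t = -t_max
    while t < t_max:                        # fixed-step RK4 integration
        k1 = rhs(t, psi)
        k2 = rhs(t + dt / 2, psi + dt / 2 * k1)
        k3 = rhs(t + dt / 2, psi + dt / 2 * k2)
        k4 = rhs(t + dt, psi + dt * k3)
        psi = psi + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return abs(psi[0]) ** 2                 # probability of remaining in |up>

print(lzsm_survival(), np.exp(-2 * np.pi * 0.3 ** 2))  # both close to 0.57
```

With \(\Delta=0.3\) and \(v=1\) the analytic value is \(\exp(-2\pi\cdot 0.09)\approx 0.57\), which the numerical survival probability approaches as the integration window grows.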
### Perturbation theory
In this section, we consider the two-level time-dependent Hamiltonian
\[H(t) =H_{z}(t)+H_{x}(t), \tag{2}\] \[H_{z}(t) =\frac{1}{2}(vt+\varepsilon-A\cos(\omega t))\sigma^{z},\] \[H_{x}(t) =\left(\Delta+\frac{B}{2}\cos\omega t\right)\sigma^{x},\]
which is the LZSM model with oscillations of magnitude \(A\) and \(B\) in the diagonal and off-diagonal elements, respectively. The initial state is assumed to be \(\ket{\psi(-\infty)}\propto\ket{\uparrow}\), where \(\sigma_{z}\ket{\uparrow}=\ket{\uparrow}\) holds. The goal is to obtain the transition probability at the final time \(p(\infty)=|\bra{\psi(\infty)}\ket{\uparrow}|^{2}\). We note that the transition probabilities were obtained approximately when either \(A\) or \(B\) is \(0\) with perturbation theory [90]. Changing the frame with the unitary operator
\[U_{z}(t)=\exp{\left(-i\int_{0}^{t}dt^{\prime}H_{z}(t^{\prime})\right)},\]
the Schrödinger equation \(i\dot{U}(t)=H(t)U(t)\) becomes
\[i\frac{d}{dt}\hat{U}_{x}(t)=\hat{H}_{x}(t)\hat{U}_{x}(t),\]
where we define \(\hat{U}_{x}(t)=U_{z}^{\dagger}(t)U(t)\) and
\[\hat{H}_{x}(t) =U_{z}^{\dagger}(t)H_{x}(t)U_{z}(t)\] \[=\left(\Delta+\frac{B}{2}\cos\omega t\right)\sum_{n=-\infty}^{ \infty}\begin{pmatrix}0&J_{n}\big{(}\frac{A}{\omega}\big{)}e^{\frac{i}{2}vt^{2 }+i\varepsilon t-in\omega t}\\ J_{n}\big{(}\frac{A}{\omega}\big{)}e^{-\frac{i}{2}vt^{2}-i\varepsilon t+in \omega t}&0\end{pmatrix}.\]
Here, we used the formula
\[e^{ix\sin\tau}=\sum_{n=-\infty}^{\infty}J_{n}(x)e^{in\tau},\]
where \(J_{n}(x)\) is the Bessel function of the first kind and the basis of the matrix is \(\{\ket{\uparrow},\ket{\downarrow}\}\), where \(\sigma_{z}\ket{\downarrow}=-\ket{\downarrow}\) holds. We express the state in this basis as \(\ket{\tilde{\psi}(t)}=\begin{pmatrix}C_{\uparrow}(t)&C_{\downarrow}(t)\end{pmatrix}^ {\mathrm{T}}\), and these variables satisfy
\[i\dot{C}_{\uparrow}(t) =\left(\Delta+\frac{B}{2}\cos\omega t\right)\sum_{n=-\infty}^{\infty}J_{n}\!\left(\frac{A}{\omega}\right)e^{\frac{i}{2}vt^{2}+i\varepsilon t-in\omega t}C_{\downarrow}(t),\] \[i\dot{C}_{\downarrow}(t) =\left(\Delta+\frac{B}{2}\cos\omega t\right)\sum_{n=-\infty}^{\infty}J_{n}\!\left(\frac{A}{\omega}\right)e^{-\frac{i}{2}vt^{2}-i\varepsilon t+in\omega t}C_{\uparrow}(t).\]
The initial conditions on these variables can be regarded as \(C_{\uparrow}(-\infty)=1\) and \(C_{\downarrow}(-\infty)=0\), and the transition probability \(p(\infty)\) can be expressed as \(|C_{\uparrow}(\infty)|^{2}\). We introduce the dimensionless parameters \(\tau=\sqrt{v}t\) and \(\eta=A/\omega\), and we denote rescaled quantities with a tilde, \(\tilde{\circ}=\circ/\sqrt{v}\) (e.g., \(\tilde{\Delta}=\Delta/\sqrt{v}\) and \(\tilde{\omega}=\omega/\sqrt{v}\)). By successive substitutions, we obtain the following result
\[C_{\uparrow}(\infty)-1 \simeq-\sum_{n=-\infty}^{\infty}\sum_{m=-\infty}^{\infty}J_{n}(\eta)J_{m}(\eta)\] \[\qquad\times\int_{-\infty}^{\infty}d\tau\int_{-\infty}^{\tau}d\tau^{\prime}\,\left(\tilde{\Delta}+\frac{\tilde{B}}{2}\cos\tilde{\omega}\tau\right)\!\!\left(\tilde{\Delta}+\frac{\tilde{B}}{2}\cos\tilde{\omega}\tau^{\prime}\right)\!\!e^{\frac{i}{2}(\tau^{2}-\tau^{\prime 2})+i\tilde{\varepsilon}(\tau-\tau^{\prime})-i\tilde{\omega}(n\tau-m\tau^{\prime})}\] \[=-2\pi\sum_{n=-\infty}^{\infty}\sum_{m=-\infty}^{\infty}\Bigg{(}\tilde{\Delta}J_{n}(\eta)+\frac{\tilde{B}}{4}J_{n+1}(\eta)+\frac{\tilde{B}}{4}J_{n-1}(\eta)\Bigg{)}\Bigg{(}\tilde{\Delta}J_{m}(\eta)+\frac{\tilde{B}}{4}J_{m+1}(\eta)+\frac{\tilde{B}}{4}J_{m-1}(\eta)\Bigg{)}\] \[\qquad\times\exp\left(-\frac{i}{2}\tilde{\omega}(n^{2}\tilde{\omega}-2n\tilde{\varepsilon})+\frac{i}{2}\tilde{\omega}(m^{2}\tilde{\omega}-2m\tilde{\varepsilon})\right)\!\theta(n-m),\]
where we define the step function
\[\theta(x)=\left\{\begin{array}{ll}1&(x>0)\\ 1/2&(x=0)\\ 0&(x<0)\end{array}\right..\]
Here, we assume \(\tilde{\Delta}\) and \(\tilde{B}\) are small enough that this approximation is valid in the non-adiabatic region. Finally, the transition probability is approximately
\[p(\infty) \simeq\exp\!\left(-4\pi\sum_{n=-\infty}^{\infty}\sum_{m=-\infty}^ {\infty}\Bigg{(}\tilde{\Delta}J_{n}(\eta)+\frac{\tilde{B}}{4}J_{n+1}(\eta)+ \frac{\tilde{B}}{4}J_{n-1}(\eta)\Bigg{)}\Bigg{(}\tilde{\Delta}J_{m}(\eta)+ \frac{\tilde{B}}{4}J_{m+1}(\eta)+\frac{\tilde{B}}{4}J_{m-1}(\eta)\Bigg{)}\right.\] \[\qquad\times\cos\Bigg{(}\frac{1}{2}\tilde{\omega}(n^{2}\tilde{ \omega}-2n\tilde{\varepsilon})-\frac{1}{2}\tilde{\omega}(m^{2}\tilde{\omega} -2m\tilde{\varepsilon})\Bigg{)}\theta(n-m)\Bigg{)}\] \[=:P_{\mathrm{PT}}. \tag{3}\]
We note that the transition probability \(p(\infty)\) must be periodic with \(\varepsilon\) because
\[H\Big{(}t,\varepsilon-2\pi n\frac{v}{\omega}\Big{)}=H\bigg{(}t-n\frac{2\pi}{ \omega},\varepsilon\bigg{)},\quad\forall n\in\mathbb{Z}\]
holds and the period is \(2\pi/\tilde{\omega}\).
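Equation (3) is a double sum over Bessel functions that is straightforward to evaluate numerically; the sketch below truncates the sum at \(|n|,|m|\leq 10\) (the same truncation used for the figures later), with `scipy.special.jv` supplying \(J_{n}\). Setting \(\eta=0\) reduces it to Eq. (5), which serves as a consistency check.

```python
import numpy as np
from scipy.special import jv

# Evaluate the perturbative transition probability P_PT of Eq. (3),
# truncating the double Bessel sum at |n|, |m| <= n_max.
def p_pt(delta_t, b_t, eps_t, omega_t, eta, n_max=10):
    ns = np.arange(-n_max, n_max + 1)
    coeff = delta_t * jv(ns, eta) + 0.25 * b_t * (jv(ns + 1, eta) + jv(ns - 1, eta))
    phase = 0.5 * omega_t * (ns ** 2 * omega_t - 2 * ns * eps_t)
    exponent = 0.0
    for i, n in enumerate(ns):
        for j, m in enumerate(ns):
            theta = 1.0 if n > m else (0.5 if n == m else 0.0)  # step function
            exponent += coeff[i] * coeff[j] * np.cos(phase[i] - phase[j]) * theta
    return np.exp(-4 * np.pi * exponent)

# With eta = 0 this reproduces the closed form of Eq. (5).
print(p_pt(delta_t=0.2, b_t=0.2, eps_t=0.5, omega_t=1.0, eta=0.0))
```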
### Furry Picture
Next, we decompose the Hamiltonian (2) as in
\[H(t) =H_{0}(t)+H_{1}(t),\] \[H_{0}(t) =\frac{1}{2}(vt+\varepsilon)\sigma_{z}+\Delta\sigma_{x},\] \[H_{1}(t) =\frac{1}{2}\cos\omega t(-A\sigma_{z}+B\sigma_{x}),\]
where \(H_{0}(t)\) is the Hamiltonian of the LZSM model. Let \(U_{0}(t)\) be the time-evolution operator of \(H_{0}(t)\) and we define \(\hat{H}_{1}(t)=U_{0}^{\dagger}(t)H_{1}(t)U_{0}(t)\). If the \(\hat{H}_{1}(t)\) is sufficiently small, we can approximate the time-evolution operator
by first-order:
\[U(t)\simeq U_{0}(t)\bigg{(}I-i\int_{t_{0}}^{t}dt^{\prime}\,\hat{H}_{1}(t^{\prime}) \bigg{)}.\]
Then, the transition probability becomes
\[p(\infty) \simeq\bigg{|}\langle\uparrow|\,U_{0}(\infty)\,|\uparrow\rangle-i \,\langle\uparrow|\,U_{0}(\infty)\int_{-\infty}^{\infty}dt^{\prime}\,\hat{H}_ {1}(t^{\prime})\,|\uparrow\rangle\bigg{|}^{2}\] \[=:P_{\text{FP}}.\]
We note that we need to consider up to the second-order perturbation if we approximate the transition probability by the second-order of \(\hat{H}_{1}\):
\[p(\infty) \simeq P_{\text{FP}}-2\operatorname{Re}\biggl{(}\langle\uparrow|U_{0}( \infty)|\uparrow\rangle^{*}\] \[\quad\times\langle\uparrow|U_{0}(\infty)\int_{-\infty}^{\infty} dt\int_{-\infty}^{t}dt^{\prime}\hat{H}_{1}(t)\hat{H}_{1}\,(t^{\prime})\,|\uparrow \rangle\biggr{)}.\]
However, we assume that the last term is negligible. This assumption is justified in the adiabatic limit because the term contains an exponentially small term in the limit. The above method is called the Furry picture formalism [83].
With this formalism, the transition probability in the adiabatic limit is
\[P_{\text{FP}} \simeq\bigg{|}\langle\downarrow|\int_{-\infty}^{\infty}dt\hat{H}_{1}(t)\,|\uparrow\rangle\bigg{|}^{2}\] \[\simeq\pi^{2}e^{-2\pi\tilde{\Delta}^{2}}\bigg{|}\eta\,{}_{1}\tilde{F}_{1}\Bigl{(}-i\tilde{\Delta}^{2},0,i\tilde{\omega}^{2}\Bigr{)}\] \[\quad-i\tilde{B}\frac{|\tilde{\Delta}|}{2}\biggl{(}{}_{1}\tilde{F}_{1}\left(-i\tilde{\Delta}^{2},1,i\tilde{\omega}^{2}\right)\] \[\quad+{}_{1}\tilde{F}_{1}\left(-i\tilde{\Delta}^{2}+1,1,i\tilde{\omega}^{2}\right)\biggr{)}\bigg{|}^{2}, \tag{4}\]
where \({}_{1}\tilde{F}_{1}(a,b,x)\) is the regularized confluent hypergeometric function of the first kind. We derive this probability in Appendix A, and the expression without taking the adiabatic limit is given in (16). The first-order approximation in this method is valid in the region where both \(\eta\) and \(\tilde{B}\) are small. Unlike perturbation theory, this method does not treat \(\tilde{\Delta}\) as a small quantity, but rather assumes that it is large. In this way, this method captures non-perturbative effects in \(\tilde{\Delta}\).
### Numerical calculation
In this subsection, we compare the approximate formulae (3) and (16) (or (4) in the adiabatic limit) with numerical solutions of the Schrödinger equation.
First, consider the case of the non-adiabatic limit and the case \(\tilde{A}=0\). In this case, only the off-diagonal component has an oscillating term and the transition probability (3) can be expressed as
\[P_{\text{PT}} =\exp\biggl{(}-2\pi\tilde{\Delta}^{2}-\frac{\pi}{2}\tilde{B}^{2} \cos^{2}\left(\tilde{\omega}\tilde{\varepsilon}\right)\] \[\quad-2\pi\tilde{B}\tilde{\Delta}\cos\left(\frac{\tilde{\omega}^{ 2}}{2}\right)\cos\left(\tilde{\omega}\tilde{\varepsilon}\right)\biggr{)}. \tag{5}\]
The result (5) is also derived in the previous study [90]. We note, however, that a factor of \(1/4\) is missing in the third term of equation (7) in [90].
The numerical results in this case are shown in Fig. 1(a). In the region where \(\tilde{B}\) is small, the numerical calculation and the approximate expression (5) are in good agreement. On the other hand, as \(\tilde{B}\) increases, the contribution of \(O(\tilde{B}^{3})\), which is ignored in the approximate expression (5), increases, resulting in deviation from the numerical calculation.
Next, consider the case \(\tilde{B}=0\). From (3), the transition probability can be expressed as
\[P_{\text{PT}} =\exp\biggl{(}-4\pi\tilde{\Delta}^{2}\sum_{n=-\infty}^{\infty} \sum_{m=-\infty}^{\infty}J_{n}(\eta)J_{m}(\eta)\] \[\times\cos\left(\tilde{\omega}\biggl{(}\frac{1}{2}(n^{2}-m^{2}) \tilde{\omega}-(n-m)\tilde{\varepsilon}\biggr{)}\right)\theta(n-m)\biggr{)}. \tag{6}\]
The numerical results in this case are shown in Fig. 1(b). Here, the sum was calculated in the range of \(-10\leq n,m\leq 10\). In this case, the truncated sum retains enough terms that even if \(\eta=A/\omega\) is not small, the numerical calculation and the approximate expression (6) are in good agreement. In Fig. 1(b), the transition probabilities show a simple behavior as \(\tilde{\omega}\) increases. This corresponds to the region where \(\eta\) is sufficiently small. In this limit, (6) yields
\[P_{\text{PT}}\simeq\exp\biggl{(}-2\pi\tilde{\Delta}^{2}\biggl{(}1+2\eta\sin( \tilde{\omega}\tilde{\varepsilon})\sin\frac{\tilde{\omega}^{2}}{2}\biggr{)} \biggr{)}.\]
This result is also derived in the previous study [90].
Finally, consider the case \(\tilde{A}\neq 0\) and \(\tilde{B}\neq 0\). The numerical results for this case are shown in Fig. 1(c). In this case, the numerical calculation and the approximate formula (3) agree well even for large values of \(\eta\) because the truncated sums retain sufficiently many terms.
Next, we show the validity of (16) in the adiabatic process. First, in the case of \(\tilde{A}=0\), the results of the numerical calculations are compared with those of the expression (16) in Fig. 2(a). It can be seen that in the region where \(\tilde{B}\) is small, the results are in good agreement with the numerical calculations. The dashed line in the figure represents the LZSM transition probability (1). Although this probability is sufficiently small in the adiabatic limit, it can be seen that there are parameter regions where the transition probabilities are much larger than the LZSM transition probability for \(\tilde{B}\neq 0\) due to the effects of the oscillations.
Next, in the case of \(\tilde{B}=0\), the results of the numerical calculations are compared with those of (16) in Fig. 2(b). It can be seen that in the region where \(\eta\) is sufficiently small, the results are in good agreement with the numerical calculations. In this case, as in the previous case, there are parameter regions where the transition probabilities are much larger than the LZSM transition probability due to the oscillations.
In addition, Fig. 2(c) compares the results of the numerical calculations with those of (16) in the case of \(A\neq 0\) and \(B\neq 0\). In this case, we can see that (16) is in good agreement with the numerical calculation in the region where \(\eta\) is sufficiently small due to the small value of \(\tilde{B}\).
Finally, we check that (4) is consistent with (16) in the adiabatic limit. The results are shown in Fig. 3. In this case, it can be seen that (16) is consistent with (4), especially in regions where \(\eta\) is sufficiently small.
## III Transverse Ising chain with time-periodic perturbation
Next, we consider the transverse field Ising model which depends linearly on time, with time-periodic perturbations. For this model, there is a previous study that investigated the model numerically [51]. However, this study only shows that the transfer matrix method [91] agrees with the numerical calculations. In the following, we consider the case where the perturbations are uniformly contained in the diagonal or off-diagonal elements.
### Perturbation in the diagonal elements
We consider the time-dependent Hamiltonian
\[H_{D}(t) =-\sum_{j=1}^{N}\bigg{(}\frac{J}{2}\sigma_{j}^{x}\sigma_{j+1}^{x }+g(t)\sigma_{j}^{z}\bigg{)},\] \[g(t) =\frac{1}{4}(vt+\varepsilon^{\prime}-A\cos(\omega t)),\]
where we impose the periodic boundary condition
\[\sigma_{N+j}^{a}=\sigma_{j}^{a}.\]
This Hamiltonian has \(\mathbb{Z}_{2}\) symmetry and only the space to which the ground state belongs will be considered from now on. Here, we introduce the spinless fermion operators \(c_{j}\) using the Jordan-Wigner (JW) transformation
\[\sigma_{j}^{z}=1-2c_{j}^{\dagger}c_{j},\quad\sigma_{j}^{x}=\Big{(}c_{j}^{ \dagger}+c_{j}\Big{)}\prod_{l<j}(-\sigma_{l}^{z}),\]
and we consider the Fourier expansion of the operators
\[c_{j}=\frac{1}{\sqrt{N}}e^{-i\frac{\pi}{4}}\sum_{q}e^{iqj}c_{q},\]
Figure 1: Numerical calculations (solid) and analytical approximate solutions (dotted) are plotted. Inset is a magnified view of the vertical axis. Parameters are (a) \(\tilde{\Delta}=0.2,\ \tilde{\varepsilon}=0.5,\ \tilde{A}=0,\) (b) \(\tilde{\Delta}=0.75,\ \tilde{\varepsilon}=0.5,\ \tilde{B}=0,\) and (c) \(\tilde{\Delta}=0.2,\ \tilde{\varepsilon}=0.5,\ \tilde{B}=0.2\). In (b) and (c), the sum was calculated in the range of \(-10\leq n,m\leq 10\). We can see that the numerical calculation and the approximate formulae agree well in the region where \(\tilde{B}\) is small.
where \(q=\pm(2n-1)\pi/N,\ n\in\{1,\cdots,N/2\}\). In the Heisenberg picture, these operators satisfy
\[i\frac{d}{dt}\begin{pmatrix}c_{q}(t)\\ c_{-q}^{\dagger}(t)\end{pmatrix} =\begin{pmatrix}E_{q}(t)&\delta_{q}\\ \delta_{q}&-E_{q}(t)\end{pmatrix}\begin{pmatrix}c_{q}(t)\\ c_{-q}^{\dagger}(t)\end{pmatrix}, \tag{7}\] \[E_{q}(t) =J\cos q+\frac{1}{2}(vt+\varepsilon^{\prime}-A\cos(\omega t)),\] \[\delta_{q} =-J\sin q.\]
The eigenvalues of the Hamiltonian in (7) are shown in Fig. 4. It can be seen that when \(q\simeq 0,\pm\pi\), the energy gap is small, corresponding to the non-adiabatic region where non-adiabatic transitions occur, while in the other region, the energy gap is large, corresponding to the adiabatic region.
The initial state is set to be the ground state at \(t=-\infty\) : \(\left|\psi(-\infty)\right\rangle=\left|\downarrow\right\rangle^{\otimes N}\). The final time is set to \(t=t_{F}\) and we calculate the expectation value
\[\left\langle\mathcal{N}\right\rangle =\frac{1}{2}\left\langle\psi(-\infty)\right|\sum_{j}\left(1-\sigma_{j}^{z}(t_{F})\right)\left|\psi(-\infty)\right\rangle\] \[=\sum_{j=1}^{N}\left\langle\psi(-\infty)\right|c_{j}^{\dagger}(t_{F})c_{j}(t_{F})\left|\psi(-\infty)\right\rangle\] \[=\sum_{q}\left\langle 1(q)\right|c_{q}^{\dagger}(t_{F})c_{q}(t_{F})\left|1(q)\right\rangle,\]
where \(c_{q}^{\dagger}(-\infty)c_{q}(-\infty)\left|1(q)\right\rangle=\left|1(q)\right\rangle\) and we used the fact that the initial state can be written as \(\bigotimes_{q}\left|1(q)\right\rangle\). The solution of (7) can be expressed as
\[\begin{pmatrix}c_{q}(t)\\ c_{-q}^{\dagger}(t)\end{pmatrix}=\begin{pmatrix}u_{q}(t)&-v_{q}^{*}(t)\\ v_{q}(t)&u_{q}^{*}(t)\end{pmatrix}\begin{pmatrix}c_{q}(-\infty)\\ c_{-q}^{\dagger}(-\infty)\end{pmatrix},\]
Figure 3: Numerical calculations (solid), analytical approximate solutions (dotted,(A1)), and analytical approximate solutions in the adiabatic limit (dashed,(4)) are plotted. Inset is a magnified view of the vertical axis. Parameters are \(\tilde{\Delta}=1.0,\ \tilde{\varepsilon}=0.5,\ \tilde{B}=0.3\). It can be seen that the dotted lines (A1) and the dashed lines (4) agree well, especially in regions where \(\eta\) is sufficiently small.
Figure 2: Numerical calculations (solid), analytical approximate solutions (dotted), and LZSM transition probability \(P_{LZ}\) (dashed) are plotted. Inset is a magnified view of the vertical axis. Parameters are (a) \(\tilde{\Delta}=0.75,\ \tilde{\varepsilon}=0.5,\ \tilde{A}=0,\) (b) \(\tilde{\Delta}=0.75,\ \tilde{\varepsilon}=0.5,\ \tilde{B}=0,\) and (c) \(\tilde{\Delta}=0.75,\ \tilde{\varepsilon}=0.5,\ \tilde{B}=0.3\). We can see that the numerical calculation and the approximate formula agree well in the region where \(\eta\) and \(\tilde{B}\) are small.
\[|u_{q}(\infty)|^{2}\simeq\exp\biggl{(}-4\pi\kappa_{q}\sum_{n=-\infty}^{\infty}\sum_ {m=-\infty}^{\infty}J_{n}(\eta)J_{m}(\eta)\cos\bigg{(}\tilde{\omega}\biggl{(} \frac{1}{2}(n^{2}-m^{2})\tilde{\omega}-(n-m)(\tilde{\varepsilon}^{\prime}+2 \tilde{J})\biggr{)}\biggr{)}\theta(n-m)\biggr{)}, \tag{8}\]
where we define \(\eta=A/\omega\). In the adiabatic limit where \(\tilde{J}\) is sufficiently large, there is a non-adiabatic region only near \(q=0,\pm\pi\). In the vicinity of \(q=0\), we obtain
\[|u_{q}(\infty)|^{2} \simeq\exp\biggl{(}-2\pi\tilde{J}^{2}q^{2}-4\pi\tilde{J}^{2}q^{2} \sum_{n>m}J_{n}(\eta)J_{m}(\eta)\cos\bigg{(}\tilde{\omega}\biggl{(}\frac{1}{2} (n^{2}-m^{2})\tilde{\omega}-(n-m)(\tilde{\varepsilon}^{\prime}+2\tilde{J}) \biggr{)}\biggr{)}\biggr{)}\] \[=e^{-\alpha q^{2}},\]
and in the vicinity of \(q=\pm\pi\), we get
\[|u_{q}(\infty)|^{2} \simeq\exp\biggl{(}-2\pi\tilde{J}^{2}(q\mp\pi)^{2}-4\pi\tilde{J}^ {2}(q\mp\pi)^{2}\sum_{n>m}J_{n}(\eta)J_{m}(\eta)\cos\bigg{(}\tilde{\omega} \biggl{(}\frac{1}{2}(n^{2}-m^{2})\tilde{\omega}-(n-m)(\tilde{\varepsilon}^{ \prime}-2\tilde{J})\biggr{)}\biggr{)}\biggr{)}\] \[=e^{-\beta(q\mp\pi)^{2}}.\]
We note that in this region \(|u_{q}(\infty)|^{2}\) takes an appreciable value only near \(q=0,\pm\pi\), provided that \(\alpha,\beta\propto\tilde{J}^{2}\) are large enough.
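The non-adiabatic-region expression (8) itself can be evaluated by truncating the double Bessel sum. The sketch below uses a symmetric cutoff \(|n|,|m|\le n_{\max}\) and the convention \(\theta(0)=1/2\), which reproduces the quoted \(q\simeq 0\) limit; both choices are assumptions made only for illustration.

```python
import numpy as np
from scipy.special import jv

def p_nonadiabatic(q, J_t, eta, omega_t, eps_t, n_max=10):
    """Evaluate the truncated Bessel double sum in (8) for |u_q(infinity)|^2.

    J_t, omega_t, eps_t are the dimensionless J, omega, epsilon'; the cutoff n_max
    and the theta(0) = 1/2 convention are assumptions made for this sketch.
    """
    kappa_q = (J_t * np.sin(q)) ** 2
    n = np.arange(-n_max, n_max + 1)
    Jn = jv(n, eta)                                # Bessel functions J_n(eta)
    dn = n[:, None] - n[None, :]
    phase = omega_t * (0.5 * (n[:, None] ** 2 - n[None, :] ** 2) * omega_t
                       - dn * (eps_t + 2.0 * J_t))
    weight = np.heaviside(dn, 0.5)                 # theta(n - m) with theta(0) = 1/2
    s = np.sum(Jn[:, None] * Jn[None, :] * np.cos(phase) * weight)
    return np.exp(-4.0 * np.pi * kappa_q * s)
```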
Next, in the adiabatic region \(\kappa_{q}\gg 1\), we obtain
\[|u_{q}(\infty)|^{2} \simeq\pi^{2}\eta^{2}e^{-2\pi\kappa_{q}}\Big{|}_{1}\tilde{F}_{1} \bigl{(}-i\kappa_{q};0;i\tilde{\omega}^{2}\bigr{)}\biggr{|}^{2}\] \[=:P_{\rm FP}(q) \tag{9}\]
from (4).
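For reference, the numerical values of \(|u_{q}(\infty)|^{2}\) against which these approximations are compared can be obtained by integrating the mode equation (7) directly. The sketch below assumes the dimensionless parametrization \(\tilde{J}=J/\sqrt{v}\), \(\tilde{\omega}=\omega/\sqrt{v}\), \(\tilde{\varepsilon}^{\prime}=\varepsilon^{\prime}/\sqrt{v}\), \(\eta=A/\omega\), and a finite integration window; it is an illustration, not the code used for the figures.

```python
import numpy as np
from scipy.integrate import solve_ivp

def u_q_infinity(q, J_t, eta, omega_t, eps_t, tau_max=500.0):
    """Integrate the mode equation (7) in dimensionless time tau = sqrt(v) t and
    return |u_q(+infinity)|^2 for a single momentum q.
    """
    def rhs(tau, y):
        u = y[0] + 1j * y[1]
        v = y[2] + 1j * y[3]
        E = J_t * np.cos(q) + 0.5 * (tau + eps_t - eta * omega_t * np.cos(omega_t * tau))
        D = -J_t * np.sin(q)
        du = -1j * (E * u + D * v)
        dv = -1j * (D * u - E * v)
        return [du.real, du.imag, dv.real, dv.imag]

    # The Bogoliubov coefficients start from (u_q, v_q) = (1, 0) at tau -> -infinity.
    sol = solve_ivp(rhs, (-tau_max, tau_max), [1.0, 0.0, 0.0, 0.0], rtol=1e-8, atol=1e-10)
    return sol.y[0, -1] ** 2 + sol.y[1, -1] ** 2
```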
The distribution of \(|u_{q}(\infty)|^{2}\) is shown in Fig. 5. This figure shows that, in addition to the transitions at \(q=0,\pi\) predicted by the KZ mechanism, other transitions occur around them, which result from the time-periodic perturbation. We note that if \(\tilde{J}\) is larger, the transitions in the adiabatic region occur closer to these vicinities as long as \(\tilde{\omega}\) and \(\eta\) are fixed.
Figure 4: Time dependence of instantaneous eigenvalues of the Hamiltonian in (7) for \(\tilde{J}=7.0,\ \eta=0.05,\ \tilde{\omega}=6.0,\ \tilde{\varepsilon}^{\prime}=0.5\). It is plotted for \(N=50\). The two-level system represented by the lines (orange, red) corresponds to a non-adiabatic region with a narrow gap, while the lines (green, purple) correspond to an adiabatic region with a wide gap.
From the above discussion, we obtain the expectation value approximately
\[n(\infty) \simeq \int_{-\infty}^{\infty}\frac{dq}{2\pi}\Big{(}e^{-\alpha q^{2}}+e^{- \beta q^{2}}\Big{)}+\int_{-\pi}^{\pi}\frac{dq}{2\pi}P_{\rm FP}(q) \tag{10}\] \[= \frac{1}{2\sqrt{\pi}}\bigg{(}\frac{1}{\sqrt{\alpha}}+\frac{1}{ \sqrt{\beta}}\bigg{)}+n_{\rm FP},\] \[n_{\rm FP} = \int_{-\pi}^{\pi}\frac{dq}{2\pi}P_{\rm FP}(q) \tag{11}\]
From (10), we see that the first term is proportional to \(\tilde{J}^{-1}\). This corresponds to the part of the QKZM where perturbative oscillatory effects are added to the non-adiabatic transition. The second term \(n_{\rm FP}\) is the contribution from the non-perturbative effect. In fact, this term is also proportional to \(\tilde{J}^{-1}\), which can be seen as follows. First, for sufficiently large \(\tilde{J}\), \(P_{\rm FP}(q)\) is non-negligible only near \(q\simeq 0,\pm\pi\), so \(n_{\rm FP}\) can be transformed to
\[n_{\rm FP}\simeq 2\pi\eta^{2}\int_{0}^{\pi/2}dq\ e^{-2\pi\tilde{J}^{2}q^{2 }}\Big{|}_{1}\tilde{F}_{1}\Big{(}-i\tilde{J}^{2}q^{2},0,i\tilde{\omega}^{2} \Big{)}\Big{|}^{2}.\]
Transforming to \(\tilde{J}q=x\) and extending the upper bound of the integral to \(\infty\), which is justified because the integrand is negligible at \(q=\pi/2\), we obtain
\[n_{\rm FP}\simeq\frac{2\pi\eta^{2}}{\tilde{J}}\int_{0}^{\infty}dx\ e^{-2\pi x ^{2}}\Big{|}_{1}\tilde{F}_{1}\big{(}-ix^{2},0,i\tilde{\omega}^{2}\big{)}\Big{|} ^{2}. \tag{12}\]
This approximation and the original definition (11) are plotted in Fig. 6. The fact that both agree where \(\tilde{J}\) is large indicates that this approximation is valid and that the non-perturbative contribution also scales as \(\tilde{J}^{-1}\). It can also be seen numerically that the peak of \(n_{\rm FP}\) appears where \(\tilde{\omega}=2\tilde{J}\) is satisfied. This can be interpreted as a result of a resonance phenomenon. Fig. 7 shows the dependence of the coefficient of \(\tilde{J}^{-1}\) in (12) on \(\tilde{\omega}\). It can be seen that the contribution of this non-perturbative term increases as the frequency and the amplitude increase.
To confirm that the derived equation (10) is a valid approximation, we finally check it against numerical calculations for finite \(N\), as shown in Fig. 8. Because the contribution of \(n_{\rm FP}\) is large, the density of defects behaves differently from the case without oscillations. As discussed, however, the scaling \(\tilde{J}^{-1}\propto\sqrt{v}\) does not change, because it is the same in the non-adiabatic and adiabatic regions. This means that the QKZM is robust to the time-periodic perturbations. To verify that the finite \(N\) discussed here is sufficiently large, we compared the integral (11) with the sum of (9); the result is shown in Fig. 9. It can be seen that \(N=200\) is sufficient to be regarded as the thermodynamic limit.
### Perturbation in the off-diagonal elements
Next, we consider another time-dependent Hamiltonian
\[H_{O}(t) = -\sum_{j=1}^{N}\big{(}J(t)\sigma_{j}^{x}\sigma_{j+1}^{x}+g(t) \sigma_{j}^{z}\big{)},\] \[g(t) = \frac{1}{4}(vt+\varepsilon^{\prime}),\] \[J(t) = \frac{1}{2}\Delta^{\prime}+\frac{B^{\prime}}{4}\cos\omega t,\]
where we impose the periodic boundary condition.
As before, introducing spinless fermions by the JW transformation yields
\[i\frac{d}{dt}\begin{pmatrix}c_{q}(t)\\ c_{-q}^{\dagger}(t)\end{pmatrix} =\begin{pmatrix}E_{q}(t)&\delta_{q}(t)\\ \delta_{q}(t)&-E_{q}(t)\end{pmatrix}\begin{pmatrix}c_{q}(t)\\ c_{-q}^{\dagger}(t)\end{pmatrix},\] \[E_{q}(t) =2J(t)\cos q+\frac{1}{2}(vt+\varepsilon^{\prime}),\] \[\delta_{q}(t) =-2J(t)\sin q.\]
First, we consider a non-adiabatic region: \(\kappa_{q}=\tilde{\Delta}^{\prime 2}\sin^{2}q\ll 1\). In this region, the transition amplitude becomes
Figure 5: Comparison of numerical calculations and approximate expressions of \(|u_{q}(\infty)|^{2}\) for \(\tilde{J}=7.0,\ \eta=0.05,\ \tilde{\omega}=6.0,\ \tilde{\varepsilon}^{\prime}=0.5\). It is plotted as \(N=200\). The solid line shows the numerical calculation of \(|u_{q}(\infty)|^{2}\), while the green dotted line is the result of plotting the approximate expression in the non-adiabatic region (8). The pink dash-dot line shows the result without oscillation: \(A=0\). The instantaneous eigenvalues corresponding to the maximum value in (8) are represented by orange and red lines in Fig. 4. The orange dashed line is the result of plotting the approximate expression in the adiabatic region (9). The instantaneous eigenvalues corresponding to the maximum value in (9) are represented by green and purple lines in Fig. 4.
\[|u_{q}(\infty)|^{2}\simeq \exp\Biggl{(}-4\pi\sin^{2}q\sum_{n=-\infty}^{\infty}\sum_{m=-\infty }^{\infty}\left(\tilde{\Delta}^{\prime}J_{n}(\eta_{B}\cos q)+\frac{\tilde{B}^{ \prime}}{4}J_{n+1}(\eta_{B}\cos q)+\frac{\tilde{B}^{\prime}}{4}J_{n-1}(\eta_{B} \cos q)\right)\] \[\quad\times\left(\tilde{\Delta}^{\prime}J_{m}(\eta_{B}\cos q)+ \frac{\tilde{B}^{\prime}}{4}J_{m+1}(\eta_{B}\cos q)+\frac{\tilde{B}^{\prime}}{4 }J_{m-1}(\eta_{B}\cos q)\right)\] \[\quad\times\cos\left(\tilde{\omega}\biggl{(}\frac{1}{2}(n^{2}-m^ {2})\tilde{\omega}-(n-m)(\tilde{\varepsilon}^{\prime}+2\tilde{\Delta}^{\prime }\cos q)\biggr{)}\right)\theta(n-m)\Biggr{)}, \tag{13}\]
where we define \(\eta_{B}=B^{\prime}/\omega\). We note that this expression takes the same form as (8) in the adiabatic limit if the amplitude is small enough.
Figure 6: Comparison of the numerical calculation of (11) (solid) and the approximate expression (dashed) for \(\eta=0.05\), \(\tilde{\varepsilon}^{\prime}=0.5\). The dashed lines represent (12). It can be seen that (12) is a good approximation where \(\tilde{J}\) is sufficiently large and agrees well with the numerical calculation. This shows that the spin number density due to the non-perturbative effect \(n_{\text{FP}}\) also scales with \(\tilde{J}^{-1}\).
Figure 7: The dependence of the coefficient of \(\tilde{J}^{-1}\) in (12) on \(\tilde{\omega}\). The coefficient increases as \(\tilde{\omega}\) and \(\eta\) increase. This means that the effect of the perturbative oscillation becomes dominant as \(\tilde{\omega}\) and \(\eta\) increase.
On the other hand, in the adiabatic region \(\kappa_{q}=\tilde{\Delta}^{\prime 2}\sin^{2}q\gg 1\), we obtain
\[|u_{q}(\infty)|^{2}\simeq\pi^{2}e^{-2\pi\kappa_{q}}\bigg{|}\eta_{B}\cos q\,{}_{1}\tilde{F}_{1}\big{(}-i\kappa_{q},0,i\tilde{\omega}^{2}\big{)}-i\tilde{B}^{\prime}\sin q\frac{\sqrt{\kappa_{q}}}{2}\Big{(}{}_{1}\tilde{F}_{1}\left(-i\kappa_{q},1,i\tilde{\omega}^{2}\right)+{}_{1}\tilde{F}_{1}\left(-i\kappa_{q}+1,1,i\tilde{\omega}^{2}\right)\Big{)}\bigg{|}^{2}. \tag{14}\]
These expressions for \(|u_{q}(\infty)|^{2}\) take appreciable values only near \(q\simeq 0,\pm\pi\) if \(\tilde{\Delta}^{\prime}\) is sufficiently large (Fig. 10), which is easy to see from the asymptotic expansion of \(\kappa_{q}\). Furthermore, the first term inside the absolute value in (14) gives the largest contribution because we focus on the regions \(q\simeq 0,\pm\pi\). This shows that the integral of \(|u_{q}(\infty)|^{2}\) scales as \(\tilde{\Delta}^{\prime-1}\), as for \(H_{\rm D}(t)\). From the above discussion, the analytical expression for the density of defects \(n(\infty)\) can be expressed as in (10).
We check this against a numerical calculation for finite \(N\). In Fig. 11, we compare the density of defects obtained by numerically solving the Schrödinger equation with the approximate analytical expression. As in the previous subsection, \(N=200\) can be regarded as the thermodynamic limit. The figure shows that the density of defects behaves differently compared to the case without oscillations. However, the scaling of the non-perturbative contribution is \(\tilde{\Delta}^{\prime-1}\propto\sqrt{v}\), which does not differ from the scaling predicted by the QKZM, indicating the robustness of the scaling. In addition, the resonance phenomenon observed for the diagonal oscillation is canceled out, when the oscillation is in the off-diagonal elements, by the contribution of the second term inside the absolute value in (14).
## IV Conclusion
The QKZM is currently attracting attention, and scaling laws for models beyond the simple setting of an isolated system and a linear sweep are also of interest. In this paper, we consider a model in which an oscillating external field is perturbatively added on top of the usual linear sweep. In such a setting, it was found that it is necessary to consider not only a perturbative correction term for transitions in the non-adiabatic region, as in the usual QKZM, but also a non-perturbative correction in the adiabatic region. Moreover, although the power spectrum of the transition probability differs between the cases with and without oscillation, the non-perturbative correction term also scales as \(\sqrt{v}\) in the adiabatic limit, as in the usual QKZM, indicating the robustness of the QKZM with respect to the scaling law.
Figure 8: Comparison of numerical calculations and approximate expressions of the density of defects in case \(N\in\{50,100,200\}\), \(\eta=0.05\), \(\tilde{\omega}=6.0\), \(\tilde{\varepsilon}^{\prime}=0.5\). The solid line is the result of solving the Schrödinger equation numerically, the dotted line (square) corresponds to the approximate expression (10), and the pink dashed line corresponds to the result of the QKZM which is the same in \(A=0\). When solving the Schrödinger equation numerically, the initial and final times are set to \(\tau=-500,500\), respectively. It can be seen that the numerical and approximate results are in good agreement. Moreover, even when \(N\) is sufficiently large, the density of defects differs from that without oscillation. However, with or without oscillation, both are found to be scaled by \(\tilde{J}^{-1}\propto\sqrt{v}\). This means that the QKZM is robust to the time-periodic perturbation.
In the present study, the high symmetry of the model allows for an analytical discussion. The scaling laws of the QKZM have also been investigated for other models such as the spin glass model [102; 103; 104]. The relation between symmetry and the effect of time-periodic perturbations on the QKZM is a subject for future work. Furthermore, we need to investigate the robustness of other quantities, such as kink-kink correlations [105; 106; 107; 108].
## Acknowledgement
We thank H. Nakazato, M. Fujiwara, and G. Kato for helpful discussions.
## Appendix A Furry Picture
Figure 11: Comparison of numerical calculations and approximate expressions of the density of defects in case \(N\in\{50,100,200\},\ \tilde{B}^{\prime}=0.3,\ \tilde{\omega}=5.0,\ \tilde{ \varepsilon}^{\prime}=0.5\). The solid line is the result of solving the Schrödinger equation numerically, the dotted line (square) corresponds to the approximate expression (10), and the dashed line (pink) corresponds to the result of the QKZM which is the same in \(B=0\). When solving the Schrödinger equation numerically, the initial and final times are set to \(\tau=-500,500\), respectively. It can be seen that the numerical and approximate results are in good agreement. As in the diagonal oscillation, the density of defects differs from that without oscillation. However, with or without oscillation, both are found to be scaled by \(\tilde{J}^{-1}\propto\sqrt{v}\). This means that the QKZM is robust to the time-periodic perturbation.
Figure 10: Comparison of numerical calculations and approximate expressions of \(|u_{q}(\infty)|^{2}\) for \(\tilde{\Delta}^{\prime}=7.0,\ \tilde{B}^{\prime}=0.05,\ \tilde{\omega}=5.0,\ \tilde{ \varepsilon}^{\prime}=0.5\). It is plotted as \(N=200\). The solid line shows the numerical calculation of \(|u_{q}(\infty)|^{2}\), while the green dotted line is the result of plotting the approximate expression in the non-adiabatic region (13) and the orange dashed line is the result of plotting the approximate expression in the adiabatic region (14). The pink dash-dot line shows the result without oscillation: \(B^{\prime}=0\).
Figure 9: Comparison of the integral result for the density of defects due to the non-perturbative effect with the approximate calculation by summing over a finite \(N\) for \(\eta=0.05,\ \tilde{\omega}^{\prime}=6.0,\ \tilde{\varepsilon}^{\prime}=0.5\). The solid line corresponds to the sum and the dashed line to the integral numerically. The sum for \(N=200\) is consistent with the integral result and is sufficiently large to be considered as the thermodynamic limit.
In this section, we use the FP to obtain the transition probability (4). In the following discussion, we use these relations
\[\int_{-\infty}^{\infty}d\tau\,e^{i\tilde{\omega}\tau}D_{\nu_{1}}\big{(}e ^{i\frac{\pi}{4}}\tau\big{)}D_{\nu_{2}}\big{(}e^{-i\frac{\pi}{4}}\tau\big{)} =\frac{2\pi}{\Gamma(-\nu_{1})}e^{-i\frac{\pi}{4}(\nu_{1}-\nu_{2})} e^{-i\frac{\tilde{\omega}^{2}}{2}}\tilde{\omega}^{-\nu_{1}-\nu_{2}-1}U \left(-\nu_{2},-\nu_{1}-\nu_{2},i\tilde{\omega}^{2}\right),\] \[\int_{-\infty}^{\infty}d\tau\,e^{i\tilde{\omega}\tau}D_{\nu_{1}} \big{(}e^{i\frac{\pi}{4}}\tau\big{)}D_{\nu_{2}}\big{(}e^{i\frac{\pi}{4}}\tau \big{)} =\frac{\sqrt{2\pi}\Gamma(\nu_{2}+1)}{\Gamma(-\nu_{1})}e^{-i\frac{ \pi}{4}(\nu_{1}+3\nu_{2}+1)}e^{-i\frac{\tilde{\omega}^{2}}{2}}\tilde{\omega}^{- \nu_{1}+\nu_{2}}U\left(\nu_{2}+1,-\nu_{1}+\nu_{2}+1,i\tilde{\omega}^{2}\right)\] \[\quad+\sqrt{2\pi}\Gamma(\nu_{2}+1)e^{-i\frac{\pi}{4}(\nu_{1}-\nu _{2}+1)}e^{-i\frac{\tilde{\omega}^{2}}{2}}\tilde{\omega}^{-\nu_{1}+\nu_{2}} {}_{1}\tilde{F}_{1}\left(\nu_{2}+1,-\nu_{1}+\nu_{2}+1,i\tilde{\omega}^{2} \right),\] \[\int_{-\infty}^{\infty}d\tau\,e^{i\tilde{\omega}\tau}D_{\nu_{1}} \big{(}e^{-i\frac{\pi}{4}}\tau\big{)}D_{\nu_{2}}\big{(}e^{-i\frac{\pi}{4}}\tau \big{)} =\sqrt{2\pi}e^{i\frac{\pi}{4}(\nu_{1}+3\nu_{2}+1)}e^{-i\frac{\tilde {\omega}^{2}}{2}}\tilde{\omega}^{-\nu_{1}+\nu_{2}}U\left(-\nu_{1},-\nu_{1}+ \nu_{2}+1,i\tilde{\omega}^{2}\right),\]
where \(D_{\nu}(z)\) is the parabolic cylinder function, \({}_{1}\tilde{F}_{1}(a,b,x)\) is the regularized confluent hypergeometric function of the first kind, and \(U(a,b,x)\) is the confluent hypergeometric function of the second kind. We note that these relations are derived from the integral expressions of the special functions [109] and are applicable only when \(\tilde{\omega}>0\).
We consider the dimensionless Hamiltonian
\[H(\tau)=\frac{1}{2}(\tau-\tilde{A}\cos\tilde{\omega}\tau+\tilde{ \varepsilon})\sigma_{z}+\Bigg{(}\tilde{\Delta}+\frac{\tilde{B}}{2}\cos\tilde{ \omega}\tau\Bigg{)}\sigma_{x},\]
where we define \(\tau=\sqrt{v}\,t\) and denote dimensionless quantities by a tilde, \(\tilde{\circ}=\circ/\sqrt{v}\).
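For a numerical cross-check of the formulas derived below, the transition probability for this dimensionless two-level Hamiltonian can be obtained by direct integration. A minimal sketch (the integration window and tolerances are arbitrary choices):

```python
import numpy as np
from scipy.integrate import solve_ivp

def survival_probability(delta_t, eps_t, A_t, B_t, omega_t, tau_max=500.0):
    """Return |<up| U(tau_max, -tau_max) |up>|^2 for the dimensionless Hamiltonian H(tau)."""
    def rhs(tau, y):
        psi = y[:2] + 1j * y[2:]
        hz = 0.5 * (tau - A_t * np.cos(omega_t * tau) + eps_t)
        hx = delta_t + 0.5 * B_t * np.cos(omega_t * tau)
        dpsi = -1j * (np.array([[hz, hx], [hx, -hz]]) @ psi)
        return np.concatenate([dpsi.real, dpsi.imag])

    # Start in |up> at tau -> -infinity (approximated by -tau_max).
    sol = solve_ivp(rhs, (-tau_max, tau_max), [1.0, 0.0, 0.0, 0.0], rtol=1e-8, atol=1e-10)
    psi_up = sol.y[0, -1] + 1j * sol.y[2, -1]
    return abs(psi_up) ** 2
```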
\[U_{0}(\tau,\tau_{0}) =\begin{pmatrix}f(\tau,\tau_{0})&-g^{*}(\tau,\tau_{0})\\ g(\tau,\tau_{0})&f^{*}(\tau,\tau_{0})\end{pmatrix},\] \[f\left(\tau,\tau_{0}\right) =e^{-\frac{\pi}{2}\kappa}D_{i\kappa}\big{(}e^{-\frac{\pi}{4}i}( \tau_{0}+\tilde{\varepsilon})\big{)}D_{-i\kappa}\big{(}e^{\frac{\pi}{4}i}( \tau+\tilde{\varepsilon})\big{)}+e^{-\frac{\pi}{4}\kappa}\kappa D_{-i\kappa-1 }\big{(}e^{\frac{\pi}{4}i}(\tau_{0}+\tilde{\varepsilon})\big{)}D_{i\kappa-1} \big{(}e^{-\frac{\pi}{4}i}(\tau+\tilde{\varepsilon})\big{)}\] \[=f_{1}\left(\tau_{0}\right)D_{-i\kappa}\big{(}e^{\frac{\pi}{4}i}( \tau+\tilde{\varepsilon})\big{)}+f_{2}\left(\tau_{0}\right)\sqrt{\kappa}D_{i \kappa-1}\big{(}e^{-\frac{\pi}{4}i}(\tau+\tilde{\varepsilon})\big{)},\] \[g\left(\tau,\tau_{0}\right) =e^{-\frac{\pi}{4}\kappa}e^{\frac{\pi}{4}i}\sqrt{\kappa}D_{i \kappa}\big{(}e^{-\frac{\pi}{4}i}(\tau_{0}+\tilde{\varepsilon})\big{)}D_{-i \kappa-1}\big{(}e^{\frac{\pi}{4}i}(\tau+\tilde{\varepsilon})\big{)}-e^{-\frac{ \pi}{4}\kappa}e^{\frac{\pi}{4}i}\sqrt{\kappa}D_{-i\kappa-1}\big{(}e^{\frac{ \pi}{4}i}(\tau_{0}+\tilde{\varepsilon})\big{)}D_{i\kappa}\big{(}e^{-\frac{ \pi}{4}i}(\tau+\tilde{\varepsilon})\big{)}\] \[=g_{1}\left(\tau_{0}\right)\sqrt{\kappa}D_{-i\kappa-1}\big{(}e^{ \frac{\pi}{4}i}(\tau+\tilde{\varepsilon})\big{)}+g_{2}\left(\tau_{0}\right)D_{i \kappa}\left(e^{-\frac{\pi}{4}i}(\tau+\tilde{\varepsilon})\right).\]
We note that
\[g_{1}(\tau_{0})=e^{i\frac{\pi}{4}}f_{1}(\tau_{0}),\quad g_{2}(\tau_{0})=-e^{i \frac{\pi}{4}}f_{2}(\tau_{0})\]
hold.
From this, the perturbation term can be written as
\[\hat{H}_{1}(\tau) =U_{0}^{\dagger}(\tau,\tau_{0})H_{1}(\tau)U_{0}(\tau,\tau_{0})\] \[=\frac{1}{2}\cos\tilde{\omega}\tau\begin{pmatrix}f^{*}(\tau,\tau_{ 0})&g^{*}(\tau,\tau_{0})\\ -g(\tau,\tau_{0})&f(\tau,\tau_{0})\end{pmatrix}\left(-\tilde{A}\sigma_{z}+\tilde{B} \sigma_{x}\right)\begin{pmatrix}f(\tau,\tau_{0})&-g^{*}(\tau,\tau_{0})\\ g(\tau,\tau_{0})&f^{*}(\tau,\tau_{0})\end{pmatrix}\] \[=-\frac{\tilde{A}}{2}\cos\tilde{\omega}\tau\begin{pmatrix}|f|^{2}-|g |^{2}&-2f^{*}g^{*}\\ -2fg&|g|^{2}-|f|^{2}\end{pmatrix}+\frac{\tilde{B}}{2}\cos\tilde{\omega}\tau\begin{pmatrix} 2\operatorname{Re}(fg^{*})&-(g^{*})^{2}+(f^{*})^{2}\\ f^{2}-g^{2}&-2\operatorname{Re}(fg^{*})\end{pmatrix},\]
where the argument \((\tau,\tau_{0})\) was omitted. To obtain the transition probability, we need to calculate
\[\int_{-\infty}^{\infty}d\tau\hat{H}_{1}(\tau).\]
Hereafter, the argument (\(\tau_{0}\)) is omitted. When \(\tilde{B}=0\), the diagonal part of the integral becomes
\[\int_{-\infty}^{\infty}d\tau\,(\hat{H}_{1}(\tau))_{11} =-\sqrt{2\pi}\eta\Bigg{(}(|f_{1}|^{2}-|f_{2}|^{2})K_{1}(\kappa, \tilde{\omega})+\sqrt{\kappa}\,\text{Re}\Bigg{(}if_{1}f_{2}^{*}K_{2}(\kappa, \tilde{\omega})\Bigg{)}\Bigg{)},\] \[K_{1}(\kappa,\tilde{\omega}) =\sqrt{2\pi}e^{-\frac{\kappa\kappa}{2}}\,\text{Re}\left(\frac{e^ {-i\Omega}}{\Gamma(i\kappa)}U\big{(}-i\kappa,0,i\tilde{\omega}^{2}\big{)} \right),\] \[K_{2}(\kappa,\tilde{\omega}) =e^{-\pi\kappa}e^{i\Omega}U\big{(}i\kappa,0,-i\tilde{\omega}^{2} \big{)}+e^{-i\Omega}\Gamma(-i\kappa)\left(\frac{e^{-\pi\kappa}}{\Gamma(i \kappa)}U\left(-i\kappa,0,i\tilde{\omega}^{2}\right)-{}_{1}\tilde{F}_{1} \left(-i\kappa,0,i\tilde{\omega}^{2}\right)\right),\] \[\Omega =\tilde{\omega}\tilde{\varepsilon}+\frac{\tilde{\omega}^{2}}{2}.\]
Similarly, the off-diagonal part of the integral becomes
\[\int_{-\infty}^{\infty}d\tau\,(\hat{H}_{1}(\tau))_{21}=\sqrt{2\pi}\eta e^{i \frac{\pi}{4}}\bigg{(}-2f_{1}f_{2}K_{1}(\kappa,\tilde{\omega})+\frac{i}{2} \sqrt{\kappa}\big{(}f_{2}^{2}K_{2}^{*}(\kappa,\tilde{\omega})+f_{1}^{2}K_{2} (\kappa,\tilde{\omega})\big{)}\bigg{)}.\]
In the adiabatic limit \(|\langle 0\,|U_{0}(\infty)|\,0\rangle|^{2}=e^{-2\pi\kappa}\simeq 0\), the off-diagonal part can be written more simply as
\[\int_{-\infty}^{\infty}d\tau\,(\hat{H}_{1}(\tau))_{21}\simeq\sqrt{\frac{\pi \kappa}{2}}\eta e^{-i\frac{\pi}{4}}f_{2}^{2}e^{i\Omega}\Gamma(i\kappa)_{1} \tilde{F}_{1}^{*}\big{(}-i\kappa,0,i\tilde{\omega}^{2}\big{)}.\]
Next, we consider the case \(\tilde{A}=0\). In this case, the integrals of diagonal part and off-diagonal part becomes
\[\int_{-\infty}^{\infty}d\tau\,(\hat{H}_{1}(\tau))_{11} =\frac{\tilde{B}}{2}\,\text{Re}\bigg{(}e^{-i\Omega}K_{3}(\kappa,\tilde{\omega})+e^{i\Omega}K_{6}(\kappa,\tilde{\omega})\bigg{)},\] \[\int_{-\infty}^{\infty}d\tau\,(\hat{H}_{1}(\tau))_{21} =\frac{\tilde{B}}{4}\bigg{(}e^{-i\Omega}K_{4}(\kappa,\tilde{ \omega})+e^{i\Omega}K_{7}(\kappa,\tilde{\omega})-e^{-i\Omega}K_{5}(\kappa, \tilde{\omega})-e^{i\Omega}K_{8}(\kappa,\tilde{\omega})\bigg{)},\]
where we define
\[K_{3}(\kappa,\tilde{\omega}) =-i(|f_{1}|^{2}-|f_{2}|^{2})\sqrt{\kappa}\frac{2\pi e^{-\frac{\pi}{ 2}\kappa}}{\Gamma(i\kappa)}U\left(-i\kappa+1,1,i\tilde{\omega}^{2}\right)\] \[\quad+\left(f_{1}f_{2}^{*}\frac{\Gamma(-i\kappa)}{\Gamma(i \kappa)}-f_{2}f_{1}^{*}\right)\kappa\sqrt{2\pi}e^{-\pi\kappa}U\left(-i\kappa+1,1,i\tilde{\omega}^{2}\right)+if_{1}f_{2}^{*}\sqrt{2\pi}\Gamma(-i\kappa+1)_{1} \tilde{F}_{1}\left(-i\kappa+1,1,i\tilde{\omega}^{2}\right),\]
\[K_{4}(\kappa,\tilde{\omega}) =\left(f_{1}^{2}\frac{\Gamma(-i\kappa)}{\Gamma(i\kappa)}-f_{2}^{ 2}\right)\!e^{-i\frac{3\pi}{4}}\kappa\sqrt{2\pi}e^{-\pi\kappa}U\left(-i\kappa+ 1,1,i\tilde{\omega}^{2}\right)\] \[\quad+f_{1}^{2}\sqrt{2\pi}\Gamma(-i\kappa+1)e^{-i\frac{\pi}{4}} {}_{1}\tilde{F}_{1}\left(-i\kappa+1,1,i\tilde{\omega}^{2}\right)+4\pi f_{1} f_{2}\sqrt{\kappa}\frac{e^{-\frac{\pi}{2}\kappa}e^{-i\frac{\pi}{4}}}{\Gamma(i \kappa)}U\left(-i\kappa+1,1,i\tilde{\omega}^{2}\right),\] \[K_{5}(\kappa,\tilde{\omega}) =-f_{1}^{2}\sqrt{2\pi}\Gamma(-i\kappa+1)e^{-i\frac{\pi}{4}}{}_{1 }\tilde{F}_{1}\left(-i\kappa,1,i\tilde{\omega}^{2}\right)\] \[\quad-4i\pi f_{1}f_{2}\sqrt{\kappa}\frac{e^{-\frac{\pi}{2}\kappa }e^{-i\frac{\pi}{4}}}{\Gamma(i\kappa+1)}U\left(-i\kappa,1,i\tilde{\omega}^{2} \right)+\left(f_{2}^{2}+f_{1}^{2}\frac{\Gamma(-i\kappa)}{\Gamma(i\kappa)} \right)\!\sqrt{2\pi}e^{-\pi\kappa}e^{i\frac{3\pi}{4}}U\left(-i\kappa,1,i \tilde{\omega}^{2}\right),\] \[K_{6}(\kappa,\tilde{\omega}) =-i\bigg{(}(|f_{1}|^{2}-|f_{2}|^{2})\sqrt{\kappa}\frac{2\pi e^{ \frac{-\pi}{2}\kappa}}{\Gamma(i\kappa+1)}U\left(-i\kappa,1,i\tilde{\omega}^{2} \right)\] \[\quad+\sqrt{2\pi}e^{-\pi\kappa}\bigg{(}-f_{1}^{*}f_{2}+f_{2}^{*}f _{1}\frac{\Gamma(-i\kappa)}{\Gamma(i\kappa)}\bigg{)}U\left(-i\kappa,1,i \tilde{\omega}^{2}\right)+f_{2}^{*}f_{1}\sqrt{2\pi}\Gamma(-i\kappa+1)_{1} \tilde{F}_{1}\left(-i\kappa,1,i\tilde{\omega}^{2}\right)\bigg{)}^{*},\] \[K_{7}(\kappa,\tilde{\omega}) =\Bigg{(}\bigg{(}(f_{1}^{2})^{*}+(f_{2}^{2})^{*}\frac{\Gamma(-i \kappa)}{\Gamma(i\kappa)}\bigg{)}\sqrt{2\pi}e^{-\pi\kappa}e^{i\frac{\pi}{4}}U \left(-i\kappa,1,i\tilde{\omega}^{2}\right)\] \[\quad+4\pi f_{1}^{*}f_{2}^{*}\sqrt{\kappa}\frac{e^{-\frac{\pi}{2} \kappa}e^{i\frac{\pi}{4}}}{\Gamma(i\kappa+1)}U\left(-i\kappa,1,i\tilde{\omega }^{2}\right)+(f_{2}^{2})^{*}\sqrt{2\pi}\kappa\Gamma(-i\kappa)e^{-\frac{i\pi}{4 }}{}_{1}\tilde{F}_{1}\left(-i\kappa,1,i\tilde{\omega}^{2}\right)\bigg{)}^{*},\] \[K_{8}(\kappa,\tilde{\omega}) =\bigg{(}\kappa\sqrt{2\pi}e^{-\pi\kappa}e^{i\frac{3\pi}{4}}\bigg{(} (f_{1}^{2})^{*}+(f_{2}^{2})^{*}\frac{\Gamma(-i\kappa)}{\Gamma(i\kappa)}\bigg{)} U\left(-i\kappa+1,1,i\tilde{\omega}^{2}\right)\] \[\quad+4i\pi f_{1}^{*}f_{2}^{*}\sqrt{\kappa}\frac{e^{-i\frac{\pi}{ 2}\kappa}e^{-\frac{\pi}{2}\kappa}}{\Gamma(i\kappa)}U\left(-i\kappa+1,1,i \tilde{\omega}^{2}\right)-i(f_{2}^{2})^{*}\sqrt{2\pi}\Gamma(-i\kappa+1)e^{-i \frac{\pi}{4}}{}_{1}\tilde{F}_{1}\left(-i\kappa+1,1,i\tilde{\omega}^{2}\right) \bigg{)}^{*}.\]
In the adiabatic limit, the off-diagonal part becomes
\[\int_{-\infty}^{\infty}d\tau(\hat{H}_{1}(t))_{21}\simeq\frac{\tilde{B}}{4}f_{2 }^{2}e^{i\Omega}\sqrt{2\pi}\kappa\Gamma(i\kappa)e^{i\frac{\pi}{4}}\bigg{(}{}_{1 }\tilde{F}_{1}\left(-i\kappa,1,i\tilde{\omega}^{2}\right)+{}_{1}\tilde{F}_{1} \left(-i\kappa+1,1,i\tilde{\omega}^{2}\right)\bigg{)}^{*}.\]
From the above discussion, the transition probability in the adiabatic limit under \(\tilde{A}\neq 0\), \(\tilde{B}\neq 0\), and \(\tau_{0}\rightarrow-\infty\) becomes
\[|\langle\uparrow|U(\infty)|\uparrow\rangle|^{2}\] \[\simeq\bigg{|}\langle\downarrow|\int_{-\infty}^{\infty}dt\tilde{H} _{1}(t)\,|\uparrow\rangle\bigg{|}^{2}\] \[\simeq\frac{\pi\kappa}{2}e^{-\pi\kappa}|\Gamma(i\kappa)|^{2} \bigg{|}\eta_{1}\tilde{F}_{1}^{*}\big{(}-i\kappa,0,i\tilde{\omega}^{2}\big{)}\] \[\quad+i\tilde{B}\sqrt{\kappa}\frac{1}{2}\bigg{(}{}_{1}\tilde{F}_{1} \left(-i\kappa,1,i\tilde{\omega}^{2}\right)+{}_{1}\tilde{F}_{1}\left(-i\kappa+1,1,i\tilde{\omega}^{2}\right)\bigg{)}^{*}\bigg{|}^{2}\] \[\simeq\pi^{2}e^{-2\pi\kappa}\bigg{|}\eta_{1}\tilde{F}_{1}\big{(}-i \kappa,0,i\tilde{\omega}^{2}\big{)}\] \[\quad-i\tilde{B}\frac{\sqrt{\kappa}}{2}\bigg{(}{}_{1}\tilde{F}_{1} \left(-i\kappa,1,i\tilde{\omega}^{2}\right)+{}_{1}\tilde{F}_{1}\left(-i\kappa+1,1,i\tilde{\omega}^{2}\right)\bigg{)}\bigg{|}^{2},\]
where we use the relations
\[f_{1}(\tau_{0}) \to e^{-\frac{5\pi}{4}\kappa}e^{\frac{i}{4}|\tau_{0}|^{2}}|\tau_{0}|^{i\kappa},\] \[f_{2}(\tau_{0}) \to e^{-\frac{\pi}{4}\kappa}\sqrt{1-e^{-2\pi\kappa}}e^{i\arg\Gamma(1 -i\kappa)}e^{\frac{i}{4}|\tau_{0}|^{2}}|\tau_{0}|^{i\kappa},\]
which hold in \(\tau_{0}\rightarrow-\infty\).
|
2306.07249
|
Generalized Power Attacks against Crypto Hardware using Long-Range Deep
Learning
|
To make cryptographic processors more resilient against side-channel attacks,
engineers have developed various countermeasures. However, the effectiveness of
these countermeasures is often uncertain, as it depends on the complex
interplay between software and hardware. Assessing a countermeasure's
effectiveness using profiling techniques or machine learning so far requires
significant expertise and effort to be adapted to new targets which makes those
assessments expensive. We argue that including cost-effective automated attacks
will help chip design teams to quickly evaluate their countermeasures during
the development phase, paving the way to more secure chips.
In this paper, we lay the foundations toward such automated system by
proposing GPAM, the first deep-learning system for power side-channel analysis
that generalizes across multiple cryptographic algorithms, implementations, and
side-channel countermeasures without the need for manual tuning or trace
preprocessing. We demonstrate GPAM's capability by successfully attacking four
hardened hardware-accelerated elliptic-curve digital-signature implementations.
We showcase GPAM's ability to generalize across multiple algorithms by
attacking a protected AES implementation and achieving comparable performance
to state-of-the-art attacks, but without manual trace curation and within a
limited budget. We release our data and models as an open-source contribution
to allow the community to independently replicate our results and build on
them.
|
Elie Bursztein, Luca Invernizzi, Karel Král, Daniel Moghimi, Jean-Michel Picod, Marina Zhang
|
2023-06-12T17:16:26Z
|
http://arxiv.org/abs/2306.07249v2
|
# Generic Attacks against
###### Abstract.
Hardware-based cryptographic implementations utilize countermeasures to resist side-channel attacks. In this paper, we propose a novel deep-learning architecture for side-channel analysis called SCANET that generalizes across multiple implementations and algorithms without manual tuning or trace pre-processing. We achieve this by combining a novel input processing technique with several advanced deep learning techniques including transformer blocks and multi-task learning. We demonstrate the generality of our approach by successfully attacking four hardware-accelerated countermeasures for elliptic curve digital signatures in an end-to-end manner without human tuning.
Additionally, we showcase SCANET's ability to generalize across multiple algorithms by successfully replicating state-of-the-art attacks against protected AES without the need for trace preprocessing, hand-tuning, or model architectural changes. These results offer promising prospects for generic and automated side-channel leakage evaluation without manual effort.
public/private key pair. Public parameters include an elliptic curve \(E\), a point \(G\) on the curve, and the integer order \(n\) of \(G\) over \(E\). The secret key \(d\) is a random integer satisfying \(1<d<n-1\). The public key is calculated as \(Q\!=\!d\!\times\!G\) (\(\times\) is the scalar multiplication operation supported by curve \(E\)). As relevant to this paper, to generate a signature for a message hash \(h\), the ECDSA algorithm chooses a random secret \(k\) such that \(1<k<n-1\), computes \((x,y)\!=\!k\!\times\!G\) and \(r\!=\!x\!\mod n\), computes \(s\!=\!k^{-1}(h\!+\!r\!\cdot\!d)\mod n\), and outputs \((r,s)\) as the signature pair.
It is critical for the private key \(d\) and the per-message random secret \(k\) to remain secret. An attacker who acquires one instance of \(k\) for a known signature can simply calculate the private key as \(d\!=\!r^{-1}(s\!\cdot\!k\!-\!h)\mod n\). An attack that recovers only part of \(k\) cannot directly extract the private key \(d\) using the above calculation, but lattice attacks can recover the private key from partial knowledge of \(k\) gathered across several signature generation operations.
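To make the algebra concrete, the sketch below performs the single-signature key recovery once the nonce is known. The modulus shown is the NIST P-256 group order used purely as an example, and the other numbers are arbitrary toy values used only for a round-trip check.

```python
# Group order n of the curve (here the NIST P-256 order, used only as an example).
n = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551

def recover_private_key(r, s, h, k, n):
    """Recover d from one signature (r, s) on hash h once the per-message nonce k leaks."""
    return pow(r, -1, n) * (s * k - h) % n

# Round-trip check with arbitrary toy values (not a real signature):
d, k, h, r = 0xC0FFEE, 0x123456789, 0xABCDEF, 0x42
s = pow(k, -1, n) * (h + r * d) % n          # s = k^-1 (h + r*d) mod n
assert recover_private_key(r, s, h, k, n) == d
```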
### Side-Channel Attack and Defense
Side-channel attacks (SCA) target the execution of cryptographic algorithms (Srivastava et al., 2017; Wang et al., 2018). During the execution, certain physical signals may be generated by intermediate computations that depend on the secret data bits being processed. The attacker can build a distinguisher that identifies which signals are related to which secret bits. This is typically accomplished by attacking each segment of the key separately. For instance, we build a distinguisher that can find the correct key byte for AES, which can be repeated for each of the 16 different key bytes of the 128-bit AES key.
There are two primary scenarios for constructing a distinguisher. In direct attacks, such as SPA (Wang et al., 2018) or DPA (Srivastava et al., 2017), the attacker attempts to retrieve the key from traces without prior modeling of the target. In profiling-based attacks, such as template attacks (Beng et al., 2018), the attacker first constructs a model based on previous observations of the target (or a similar one). We focus on profiling-based attacks, which are valuable for attacking real devices and assessing the security of an implementation against side-channel attacks.
One can use masking countermeasures to mitigate side-channel attacks by disrupting the statistical correlation between intermediate values and the physical signal, e.g., power consumption. To achieve this, implementations can generate a random value and combine it with secret parameters and intermediate values during computation. As a result, the computation is carried out using blinded secrets instead of cleartext ones.
A common protection for ECC implementations is to randomize the secret integer (\(d\) or \(k\)) during scalar multiplication (Kang et al., 2018). For this, implementations can add a random multiple of the curve order \(n\) to the private integer \(k\) as \(k^{\prime}\!\to\!k\!+\!r\!\cdot\!n\). Later on, when computing the scalar multiplication \(k^{\prime}\times G\) as in the signature generation, the result is \((k+r\cdot n)\times G=k\times G+r\cdot n\times G\). Since \(n\times G\) is equal to the point at infinity (the identity element), the expression simplifies to \(k\times G\). Randomizing the secret integer can also be achieved using the euclidean division \(k=\lfloor k/r\rfloor\cdot r+(k\bmod r)\), or the secret integer can be divided into multiple random shares for extra security: \(k=k_{1}+...+k_{m}\). In this paper, we attack ECC masking with single and double shares, which is considered secure when the random share \(r\) is chosen such that \(\|r\|\geq\|n\|/2\) (see (Kang et al., 2018; Wang et al., 2018; Wang et al., 2018)), where \(\|n\|\) stands for the bit-length of a natural number \(n\). These implementations include hardware-accelerated constant-time scalar multiplication (CM0), additive masking (CM1), multiplicative masking (CM2), and a combination of the previous two (CM3); for details see Section 5.1.
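As a minimal illustration of these blinding strategies (the mask sizes and the example group order are assumptions, not the targets' actual configuration), both representations reduce to the same scalar modulo the group order while the values actually processed by the hardware change from run to run:

```python
import secrets

# Group order n (NIST P-256 order, shown only as an example value).
n = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551

def additive_mask(k, n, mask_bits=128):
    """Group-order blinding: k -> k + r*n, which maps to the same point k x G."""
    return k + secrets.randbits(mask_bits) * n

def euclidean_split(k, mask_bits=128):
    """Euclidean splitting: represent k as (floor(k/r), r, k mod r)."""
    r = secrets.randbits(mask_bits) | 1          # keep r nonzero
    return k // r, r, k % r

k = secrets.randbelow(n - 1) + 1
assert additive_mask(k, n) % n == k              # same scalar modulo the group order
quot, r, rem = euclidean_split(k)
assert quot * r + rem == k                       # exact reconstruction of k
```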
### Transformers
Recent breakthroughs in deep learning, e.g., large language models (LLMs), rely on the _Transformer_ architecture (Srivastava et al., 2017). They are trained on vast amounts of text data and fine-tuned for specific downstream tasks, such as language translation, question answering, and text completion, pushing the boundaries of language understanding and generation.
_Self-attention_(Wang et al., 2018) introduced by Transformers captures global relationships and context in text and generates more coherent and context-aware responses. In traditional models like recurrent neural networks (RNNs) and convolutional neural networks (CNNs), words or tokens are processed sequentially without considering their connections. Transformers instead leverage self-attention to simultaneously analyze all tokens and assign different importance to each based on its relevance to the others. This enables them to effectively understand long-range dependencies and capture contextual information. We believe that this ability is particularly valuable in constructing generic side-channel attacks for practical targets that include capturing a large amount of lengthy time-series data, where data leaks often occur through interconnected relationships between distant data points.
In our work, we implement and use Gated Attention Units (GAUs) as described by Hua et al. (Hua et al., 2018). GAUs enhance the Transformer architecture by incorporating gating mechanisms within the attention mechanism. In a GAU, the input is split into two branches: the key branch and the value branch. The key branch calculates the attention weights based on the similarity between the input and a learnable key vector, while the value branch applies a non-linear transformation to the input. The gating mechanism then combines the outputs of the key and value branches, allowing the model to selectively attend to relevant information and suppress irrelevant or noisy features. This gating mechanism helps to improve the efficiency and effectiveness of the attention mechanism, enabling faster training and improved performance in Transformer models.
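As a rough sketch of this idea (not the paper's implementation), a simplified GAU-style layer can be expressed in Keras as follows; the expansion factor, the shared query/key dimension, and the relu-squared attention kernel follow the general recipe of Hua et al., but the specific sizes and details here are assumptions.

```python
import tensorflow as tf

class SimpleGAU(tf.keras.layers.Layer):
    """Simplified Gated Attention Unit, loosely following the GAU construction of Hua et al.

    Illustrative sketch only: dimensions and the relu^2 attention kernel are assumptions.
    """

    def __init__(self, dim, expansion=2, qk_dim=64, **kwargs):
        super().__init__(**kwargs)
        self.to_u = tf.keras.layers.Dense(dim * expansion, activation="swish")  # gate branch
        self.to_v = tf.keras.layers.Dense(dim * expansion, activation="swish")  # value branch
        self.to_z = tf.keras.layers.Dense(qk_dim)                               # shared q/k basis
        self.out = tf.keras.layers.Dense(dim)
        self.norm = tf.keras.layers.LayerNormalization()
        # Cheap per-dimension scale/offset pairs turning z into queries and keys.
        self.gamma = self.add_weight(name="gamma", shape=(2, qk_dim), initializer="ones")
        self.beta = self.add_weight(name="beta", shape=(2, qk_dim), initializer="zeros")

    def call(self, x):
        shortcut = x
        x = self.norm(x)
        u, v, z = self.to_u(x), self.to_v(x), self.to_z(x)
        q = z * self.gamma[0] + self.beta[0]
        k = z * self.gamma[1] + self.beta[1]
        n = tf.cast(tf.shape(x)[1], x.dtype)
        attn = tf.nn.relu(tf.einsum("bns,bms->bnm", q, k) / n) ** 2   # relu^2 attention weights
        o = u * tf.einsum("bnm,bme->bne", attn, v)                    # gate the attended values
        return shortcut + self.out(o)
```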
Throughout the paper, we assume a certain familiarity with terms around neural networks. The terminology we use is defined in widely available textbooks (for instance, (Kang et al., 2018) and (Kang et al., 2018)).
## 3. Threat Model
We assume that the attacker has access to a clone of the targeted hardware device being analyzed and has the necessary tooling to collect power traces from it. This is a reasonable assumption for profiling-based SCA, in which the attack is performed in two phases: _training_ and _attack_(Beng et al., 2018). During training, we consider two threat scenarios: (1) the attacker has no knowledge of the deployed countermeasures and only knows the input and output of the cryptographic primitive (i.e., black box). (2) the attacker has knowledge of the countermeasures and can control all the random parameters used in the countermeasure (i.e., white box). This latter assumption is reasonable if the attacker has access to the debug interface of the clone device, such as if the device is undergoing certification and needs side-channel testing, or during the assessment of a new implementation.
During the attack phase, the model exclusively processes the raw traces of the targeted device (not the clone) to predict the expected output. During that phase the model does not have access to counter-measure parameters and cannot assume that they are correlated with the ones seen during training. To mimic this scenario we use two different chips to collect our datasets one for creating training data and the other to create the holdout dataset used for the final evaluation.
## 4. Scanet
In this section, we present the SCANET architecture and discuss how we train it. We start by discussing the training objective used alongside the metrics used to evaluate convergence. Next we detail SCANET model architecture. Finally we discuss relevant implementation details.
### Training objective and metrics
#### Training objective
SCANET's training objective is to predict the value of a specific key byte \(k_{i}\). Following standard machine learning practice, we cast this as a classification problem where the model is tasked to produce the correct value out of 256. In practice, this translates to the model outputting softmax probabilities \(P\) for every possible key candidate \(c\), i.e., \(P[k_{i}=c]\) for \(c=\texttt{0x00},\ldots,\texttt{0xFF}\), each indicating the likelihood that the predicted key byte \(k_{i}\) is equal to \(c\). We use the categorical cross-entropy loss function to optimize our model.
**Metrics** We use the following metrics to evaluate SCANET performance:
* **Accuracy** is our main metric. It is defined as the categorical accuracy of the output, meaning the ratio at which the model predicts the correct output. The baseline accuracy for a random guess is \(1/256=0.39\%\). We will indicate it in the rest of the paper with a dedicated symbol.
* **Rank** is the position of the correct value in the ranking of predicted byte values, sorted in descending order by probability. A rank of zero is assigned to the highest probability, and 255 is assigned to the least-likely probability (given that a byte has 256 possible values).
* **MaxRank** is the maximum rank over a set of model predictions for a batch of examples. The baseline MaxRank is 255. A lower value implies that the key space needed to brute force the algorithm, in the worst case, has been successfully reduced as each correct prediction is contained in a smaller range of values.
* **MeanRank** is the average rank of predictions. The baseline MeanRank for a random guess is 127.5. A lower value implies that the key space was successfully reduced and that on average the correct value is within a smaller range of values.
* **Confidence** is the difference between the two highest predicted probabilities (prediction[-1] - prediction[-2]). Intuitively, this metric indicates how much the neural network is confident that the predicted value is the right one.
These metrics provide a framework to evaluate how well SCANET is performing and generalizing. However for end-to-end attacks, such as recovering an AES key as discussed in Section 6, an attacker can combine several predictions by encrypting several different plaintexts with the same key and averaging the predictions. This is why in addition to reporting the above metrics, it is useful to evaluate models as part of end-to-end attacks to understand their real-world performance. To that effect we evaluate how effective SCANET is at attacking ECDSA end-to-end in Section 5.8.
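For concreteness, the rank-based metrics above can be computed from a batch of softmax outputs as in the following NumPy sketch (an illustration of the definitions, not the paper's evaluation code).

```python
import numpy as np

def rank_metrics(probs, true_bytes):
    """Compute accuracy, MeanRank, MaxRank, and confidence from softmax outputs.

    probs: (batch, 256) array of predicted probabilities.
    true_bytes: (batch,) array with the correct byte value for each trace.
    """
    order = np.argsort(-probs, axis=1)                        # candidates sorted by probability
    ranks = np.argmax(order == true_bytes[:, None], axis=1)   # position of the correct byte
    top = np.sort(probs, axis=1)
    return {
        "accuracy": float(np.mean(ranks == 0)),
        "meanrank": float(np.mean(ranks)),
        "maxrank": int(np.max(ranks)),
        "confidence": float(np.mean(top[:, -1] - top[:, -2])),
    }
```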
### Architecture
At a high-level SCANET is composed of three functional components depicted in Figure 1: a _temporal patchification_ stem designed to group traces into sequence that preserves the temporal inductive bias, a _trunk_ that attends to the temporal sequence using transformer encoder blocks to extract information, and a multi-headed directed acyclic graph that is used to implement multi-task learning (predicting multiple values at once).
#### 4.2.1. Temporal Patchification
The temporal patchification stem has two goals:
1. Group the trace into blocks of N contiguous non-overlapping chunks, or "patches", to preserve the temporal inductive bias while making the sequence easier to process by transformer encoder blocks, which expect relatively short sequences of dense data. This approach, while slightly different, is inspired by state-of-the-art image patchification techniques (see the sketch below).
2. Potentially, inject global positional encoding information into the sequence, as transformer encoder blocks require positional information to perform efficiently (Kumar et al., 2017).
Figure 1. SCANET architecture for predicting \(k_{0}\) in CM1. The attacked key byte is \(k_{0}\), with \(k_{0}\) having related outputs \(km_{0}\) and \(r_{0}\).
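A minimal sketch of the patchification step is shown below; the patch size and the handling of any trailing remainder are assumptions, and the actual stem may also add positional encodings.

```python
import tensorflow as tf

def patchify(trace, patch_size):
    """Split 1-D power traces into non-overlapping temporal patches.

    trace: (batch, trace_len) tensor.
    Returns a (batch, trace_len // patch_size, patch_size) tensor of patches.
    """
    n_patches = tf.shape(trace)[1] // patch_size
    trace = trace[:, : n_patches * patch_size]      # drop any trailing remainder
    return tf.reshape(trace, [tf.shape(trace)[0], n_patches, patch_size])
```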
#### 4.2.2. Trunk
The trunk's main function is to attend to the sequence and extract from the patchified traces the latent representation needed by the heads to predict the targeted values. To do so, SCANET uses a trunk made of three state-of-the-art GAU transformer encoder blocks (Kumar et al., 2017) that are able to isolate and process long-range interacting features. In addition to the transformer encoders, the trunk also includes a combiner module made of convolutional layers that is meant to combine the output of the three encoder blocks into a unified latent representation. Combining the outputs of the encoder blocks, instead of using only the output of the last one as traditionally done in NLP, is useful because each block extracts features at a different level of "abstraction". Such multi-level representations are commonly used in other signal-processing applications such as speech recognition (Kumar et al., 2017). As reported in one of the ablation studies (Section 5.6), the latent representation created by the trunk is critical to the model performance: without it the accuracy drops by over 50% in certain cases.
#### 4.2.3. Heads and Relational Outputs
The last component of SCANET is its multi-heads DAG (directed acyclic graph). This component is designed to achieve two goals:
1. **Allow multi-task learning**: Against masked implementation SCANET relies on multi-task learning (Kumar et al., 2017) to perform efficiently as reported in Section 5.4. Multi-task learning is accomplished by not only predicting the targeted byte value but also jointly predicting the value of intermediate values such as the mask and random nonce values.
2. **Inject domain expertise**: Standard multi-task learning involves jointly predicting values without establishing relations between the outputs. We found, as reported in Section 5.4, that we can increase SCANET's performance by instead representing the outputs as a DAG where intermediate outputs feed into the byte prediction output, as depicted in Figure 1. This allows the model to benefit from expert understanding and makes it easier for it to learn which intermediate values are useful to compute a given output. We note that defining those relations is fully configuration driven and does not require changing the model architecture or fiddling with the code.
#### 4.2.4. Heads design
Unlike standard transformer architectures, where the output is a single layer, we discovered during our architecture search that a deeper head architecture improves model performance. As visible in Figure 2, the SCANET head architecture comprises several dense layers and a single dropout layer. Extensive initial testing during the architectural development revealed that adding normalization layers, residual connections, or more dropout layers does not seem to improve model performance or convergence speed.
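The following sketch shows how such a head and the relational wiring of Figure 1 could look in the Keras functional API; the layer sizes, dropout rate, and the way intermediate outputs are concatenated are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

def byte_head(latent, name, extra_inputs=()):
    """One prediction head: a small dense stack ending in a 256-way softmax."""
    x = layers.Concatenate(name=f"{name}_concat")([latent, *extra_inputs]) if extra_inputs else latent
    x = layers.Dense(256, activation="swish")(x)
    x = layers.Dropout(0.1)(x)
    x = layers.Dense(256, activation="swish")(x)
    return layers.Dense(256, activation="softmax", name=name)(x)

# latent: shared trunk representation, here a hypothetical (batch, 512) tensor.
latent = layers.Input(shape=(512,), name="trunk_latent")
r0 = byte_head(latent, "r_0")                           # random share byte
km0 = byte_head(latent, "km_0")                         # masked key byte
k0 = byte_head(latent, "k_0", extra_inputs=(r0, km0))   # target byte, fed by the intermediates
model = tf.keras.Model(latent, [r0, km0, k0])
```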
### Hyper-parameter tuning
SCANET is designed from the ground-up to be hyper-tunable automatically using Keras-tuner to be quickly adapted to new implementations or attacks. This tunable first approach led us to focus on reducing the tuner search space to the minimum by performing an extensive architecture search early on to isolate which parameters should be hyper-tunable and which should be considered canonical. For example, the activation function used (Swish), the number of layers per head, and the number of GAU blocks (3) all proved to be close to optimal choice across our various use-cases and are therefore not tunable.
The end result of our combined effort to create a generic architecture and isolate the parameters that really matter for hyper-tuning is that SCANET only requires 8 parameters to be tuned to work on multiple algorithms and countermeasures. The values of those 8 parameters for all the ECC and AES implementations targeted in this paper are reported in Table 1. As visible in the table, many of those parameters deal with the size of the input (trace length, step size) and the optimizer (learning rate, batch size, number of epochs), as those are fully dependent on the speed of the algorithm, the capture sample rate, and the size of the dataset.
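A hedged sketch of how such a small search space could be exposed to Keras Tuner is shown below. The `build_scanet` constructor is a hypothetical stand-in for the real model, and the parameter names and ranges only mirror Table 1 for illustration.

```python
import keras_tuner as kt
import tensorflow as tf

def build_scanet(step_size, merge_filter_1, merge_filter_2, learning_rate):
    # Stand-in model: a real implementation would build the patchify / trunk / heads stack.
    model = tf.keras.Sequential([
        tf.keras.layers.Conv1D(merge_filter_1, 3, padding="same",
                               input_shape=(step_size, 1)),
        tf.keras.layers.Conv1D(merge_filter_2, 3, padding="same"),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(256, activation="softmax"),
    ])
    model.compile(tf.keras.optimizers.Adam(learning_rate),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model

def build_from_hp(hp):
    return build_scanet(
        step_size=hp.Choice("step_size", [400, 1200, 2048, 4096]),
        merge_filter_1=hp.Choice("merge_filter_1", [8, 16]),
        merge_filter_2=hp.Choice("merge_filter_2", [8, 16]),
        learning_rate=hp.Float("target_learning_rate", 5e-5, 6e-4, sampling="log"),
    )

tuner = kt.RandomSearch(build_from_hp, objective="val_accuracy", max_trials=20)
# The remaining tunable values (batch size, steps per epoch, epochs) and the trace
# length are supplied when calling tuner.search(...) on the prepared dataset.
```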
#### 4.3.1. Initialization Weights Influence
The influence of weight initialization is a well-known issue in deep learning in general, and for transformers in particular, where an unlucky initialization can lead to a model collapse or subpar performance (see for instance (Kumar et al., 2017)). SCANET, being transformer based, is also very sensitive to weight initialization. Following best practices, we mitigate this issue by using a custom learning-rate scheduler that starts from a low value, ramps it up to its target value, and follows it with a cosine decay [(40)].
| Hyper-parameter | CM0 | CM1 | CM2 | CM3 | ASCADv2 |
| --- | --- | --- | --- | --- | --- |
| Batch size | 128 | 64 | 64 | 32 | 64 |
| Steps per epoch | 200 | 200 | 200 | 400 | 1,000 |
| Epochs | 25 | 500 | 500 | 500 | 150 |
| Target learning rate | 0.0006 | 0.0006 | 0.0006 | 0.0003 | 0.00005 |
| Merge filter 1 | 16 | 16 | 16 | 16 | 0 |
| Merge filter 2 | 8 | 8 | 8 | 8 | 0 |
| Trace length | 1,620,000 | 4,194,304 | 8,388,608 | 16,777,216 | 1,000,000 |
| Step size | 1,200 | 2,048 | 4,096 | 4,096 | 400 |

Table 1. Model hyper-parameters for each targeted implementation used in this paper.
Figure 2. Single head output.
During the warm-up phase, a low learning rate allows the model to perform smaller steps along the gradient, reducing the influence of the initialization.
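A minimal sketch of the warm-up plus cosine-decay schedule described above is shown below; the warm-up length, total step count, and target rate are illustrative assumptions.

```python
import math
import tensorflow as tf

class WarmupCosine(tf.keras.optimizers.schedules.LearningRateSchedule):
    """Linear warm-up to the target rate, followed by a cosine decay."""

    def __init__(self, target_lr, warmup_steps, total_steps):
        super().__init__()
        self.target_lr = target_lr
        self.warmup_steps = float(warmup_steps)
        self.total_steps = float(total_steps)

    def __call__(self, step):
        step = tf.cast(step, tf.float32)
        warmup = self.target_lr * step / self.warmup_steps
        progress = (step - self.warmup_steps) / (self.total_steps - self.warmup_steps)
        cosine = self.target_lr * 0.5 * (1.0 + tf.cos(math.pi * progress))
        return tf.where(step < self.warmup_steps, warmup, cosine)

optimizer = tf.keras.optimizers.Adam(WarmupCosine(target_lr=6e-4,
                                                  warmup_steps=1_000,
                                                  total_steps=100_000))
```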
### Implementation
We implement the SCANET architecture and conduct training using TensorFlow ([1]) and the Keras API ([(16)]). We use Keras Tuner to hyper-tune SCANET automatically. The temporal patchification code was specifically designed and implemented for SCANET. The GAU layer is a custom implementation based on the pseudo-code provided in [(30)]. The relational output technique and its implementation were also designed specifically for SCANET. Overall, the SCANET code base is about 1,000 lines of Python.
The estimated training times for the various datasets used in the paper, using an NVidia RTX 4090 as baseline, are reported in Table 2. In practice we used multiple servers with various configurations, as the total computation required to complete all our experiments exceeds 1 year of computation. As mentioned throughout the paper, we avoided running unnecessary experiments to minimize carbon emissions wherever we could.
## 5. Generic SCANET attacks against hardware-protected ECC
In this section, we evaluate SCANET's ability to generalize to multiple hardware-protected implementations by successfully performing power side-channel attacks against the scalar multiplication of four distinct implementations of ECDSA. The targeted hardware implementations include a constant-time implementation and three distinct algebraic masking implementations that are considered state-of-the-art protections [(5; 20)]. We start by describing the targeted implementations, then describe our collection process. Next, we evaluate SCANET's performance against those datasets, and finally we discuss how we can attack the ECDSA signature scheme by combining the partial nonce \(K\) leakage recovered via SCANET with a lattice attack.
Through the course of this evaluation section, we answer the following questions:
1. Does SCANET generalize to multiple hardware implementations? (Section 5.3)
2. Is multi-task learning needed to attack protected implementations and, if so, which tasks are needed? (Section 5.4 and Section 5.5)
3. How critical is the transformer blocks' long-range modeling ability to attacking ECC? (Section 5.6)
4. How many traces are needed for SCANET to successfully attack the various implementations? (Section 5.2)
### Targeted hardware implementations
Given our goal to evaluate SCANET against highly-protected hardware implementations, we use as a base for all implementations the NXP K82F dedicated cryptographic accelerator (LTC - LP Trusted Crypto) to perform constant-time hardware-accelerated scalar multiplication and point addition. Relying on this accelerator ensures that all our implementations are not vulnerable to timing attacks or software-based leakages. All our ECDSA implementations use the elliptic curve FRP256v1 from ANSSI, but our results apply equally to other curves (e.g., NIST P-256) as the scalar multiplication algorithm is typically the same for all Weierstrass curve implementations.
Footnote 1: [https://neuromancer.sk/std/amssi/FRP256v1](https://neuromancer.sk/std/amssi/FRP256v1)
**Countermeasure implementations**
We evaluate the following four implementations to highlight the effectiveness of SCANET against increasingly stronger protections. In the pseudo code below, we annotate which computations are done in software and which are done on the chip. The \(k\leftarrow\delta_{L}(N)\) notation indicates the generation of a random number of \(N\) bits where each bit is selected uniformly and independently at random. The four implementations considered in this section are:
1. **Constant-time implementation (CM0).** This is a simple countermeasure effective against timing attacks, but not power side channels, exclusively relying on our chip's constant-time accelerated scalar multiplication, without randomizing the secret multiplier.
2. **Additive masking (CM1):** This implementation is significantly more resistant to power side channel attacks compared to CM0, thanks to the addition of multiplier masking. A random integer \(r\) is added to \(k\) so the scalar multiplication executes on the blinded secret scalar. More formally, it: 1. chooses an independent 256-bit random mask \(r\); 2. computes the difference \(km\) of the secret multiplier \(k\) (the ECDSA nonce) and the mask \(r\); 3. on chip, computes \(P_{km}=km\times G\) and \(P_{r}=r\times G\); 4. on chip, computes \(P_{km}+P_{r}\) and returns this value. Here is the pseudo code used to implement this scheme: \[\begin{aligned} k &\leftarrow \delta_{L}(256) && \text{(secret multiplier)}\\ r &\leftarrow \delta_{L}(256) && \text{(random mask)}\\ km &= (k-r) \bmod n\\ P_{km} &= km\times G && \text{(on chip)}\\ P_{r} &= r\times G && \text{(on chip)}\\ Result &= P_{km}+P_{r} && \text{(on chip, equal to } k\times G\text{)} \end{aligned}\]
3. **Multiplicative masking (CM2):** This implementation blinds the secret multiplier with a 128-bit random mask \(r\) using Euclidean division: \(k\) is split into \(km=\lfloor k/r\rfloor\) and \(rem=k\bmod r\), and \(k\times G\) is computed on chip as \(r\times(km\times G)+rem\times G\), so the unblinded \(k\) is never used directly in a scalar multiplication.
4. **Combined countermeasure (CM3):** This final implementation combines the CM1 and CM2 techniques in an attempt to increase security further with higher-order masking. This algorithm is straightforwardly implemented as follows (a small scalar-level sanity check of this decomposition is given after this list): \[\begin{aligned} k &\leftarrow \delta_{L}(256) && \text{(secret multiplier)}\\ r_{1} &\leftarrow \delta_{L}(256) && \text{(random mask for CM1)}\\ r_{2} &\leftarrow \delta_{L}(128) && \text{(random mask for CM2)}\\ r_{3} &\leftarrow \delta_{L}(128) && \text{(random mask for CM2)}\\ km_{1} &= (k-r_{1}) \bmod n && \text{(CM1)}\\ km_{2} &= \lfloor km_{1}/r_{2}\rfloor && \text{(CM2, instead of } P_{km_{1}}=km_{1}\times G\text{)}\\ rem_{2} &= km_{1} \bmod r_{2}\\ P_{km_{2}} &= km_{2}\times G\\ P_{k_{2}} &= r_{2}\times P_{km_{2}}\\ P_{r_{2}} &= rem_{2}\times G\\ P_{km_{1}} &= P_{r_{2}}+P_{k_{2}}\\ km_{3} &= \lfloor r_{1}/r_{3}\rfloor && \text{(CM2, instead of } P_{r_{1}}=r_{1}\times G\text{)}\\ rem_{3} &= r_{1} \bmod r_{3}\\ P_{km_{3}} &= km_{3}\times G\\ P_{k_{3}} &= r_{3}\times P_{km_{3}}\\ P_{r_{3}} &= rem_{3}\times G\\ P_{r_{1}} &= P_{r_{3}}+P_{k_{3}}\\ Result &= P_{km_{1}}+P_{r_{1}} && \text{(equal to } k\times G\text{)} \end{aligned}\]
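As a quick sanity check of the algebra above, the following plain-Python snippet verifies at the scalar level that the blinded pieces recombine to the secret multiplier \(k\), and hence that the sum of the corresponding points equals \(k\times G\). The NIST P-256 group order is used here only as an example modulus, and the random values are toy assumptions.

```python
import secrets

# NIST P-256 group order, used here only as an example modulus; any curve order works.
n = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551

k = secrets.randbelow(n)                  # secret multiplier (ECDSA nonce)
r1 = secrets.randbelow(1 << 256)          # CM1 additive mask
r2 = secrets.randbelow(1 << 128) or 1     # CM2 multiplicative masks (kept non-zero)
r3 = secrets.randbelow(1 << 128) or 1

km1 = (k - r1) % n                        # CM1: blinded scalar
km2, rem2 = divmod(km1, r2)               # CM2 split of km1
km3, rem3 = divmod(r1, r3)                # CM2 split of r1

# km1*G is computed as r2*(km2*G) + rem2*G and r1*G as r3*(km3*G) + rem3*G, so at the
# scalar level the pieces must recombine to k modulo n:
assert (km2 * r2 + rem2 + km3 * r3 + rem3) % n == k
```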
### Random masks selection
We now discuss the choice of the random mask \(r\) (or \(r_{1}\), \(r_{2}\), \(r_{3}\) in the case of CM3). For the additive masks, we choose \(r\) such that \(\|r\|=\|n\|\), i.e., 256 bits. For the multiplicative masks, we have to choose \(r\) such that \(\|r\|<\|k\|\); otherwise, the mask would not be effective. We choose the \(r_{i}\) such that \(\|r_{i}\|=\|n\|/2\), i.e., 128 bits, which meets this property and also achieves the level of resistance to side-channel attacks suggested by prior work [59, 26].
We also choose \(k\) and \(r\) (or \(r_{1}\), \(r_{2}\), \(r_{3}\) for CM3) uniformly and independently at random for each computation to ensure we are attacking implementations that do not have a bias in their randomness.
### Power traces collection
Our capture setup consists of a ChipWhisperer CW308 UFO board with a CW308T-K82F target board connected to it. The firmware of the target chip was solely responsible for curve addition and scalar multiplication; we did not rely on any software implementation for curve arithmetic. Scalar multiplication and point addition operations were performed by sending an integer and a point, or two points, respectively, to the LTC. For model training purposes, we additionally record on the host computer the secret multiplier and the random masking parameters used for each trace.
**Capture setup** We collect power measurements using the _Teledyne LeCroy WavePro 404HD-MS_[63] oscilloscope connected to the embedded resistor shunt on the _ChipWhisperer NAE-CW308T-K82F_[47] target board. The oscilloscope probe is hooked to the test point TP5 of the CW308 UFO board to measure the current, while digital channel D0 is connected to the GPIO4/TRIGGER pin to get the trigger signal which starts the capture. We insert a 7.37MHz crystal into the X1 socket and adjust the clock source selection jumper J3 to the CRYSTAL position to provide the clock signal. This configuration ensures that there is no correlation between the target chip clock and the oscilloscope sampling clock, resulting in asynchronous measurements. The oscilloscope channel is set to AC coupling with a bandwidth limited to 200MHz. The first scalar multiplication is always aligned using a trigger signal for each operation, and no additional alignment is performed. The trigger signal was configured to stay high during each operation performed by the LTC. We use the first rising edge of the trigger signal as the oscilloscope trigger to start capturing.
**Experimental leakages** We ensure that no UART communication is leaking by conducting one experiment where we replace captured points in the training set with Gaussian noise when the trigger signal is low. After training a model, we observe no performance loss compared to training on raw traces, indicating that no discernible UART leakage is occurring in the replaced points. Note that all other experiments are conducted with raw traces.
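A minimal sketch of this hygiene check is shown below, assuming the traces and the sampled trigger signal are available as NumPy arrays of the same shape (names and shapes are illustrative assumptions).

```python
import numpy as np

def blank_out_idle_samples(traces, trigger, seed=0):
    """traces: (n_traces, trace_len) floats; trigger: same shape, >0 while the LTC is active.
    Points captured while the trigger is low are replaced with Gaussian noise."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(loc=traces.mean(), scale=traces.std(), size=traces.shape)
    return np.where(trigger > 0, traces, noise)
```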
**Datasets Collected** Table 3 provides a technical summary of the datasets generated using the implementations discussed in Section 5.1. We use the SCAAML ([9]) dataset library to store our traces and attack point values as TFRecord files. We ensure that no key is reused between the splits by tracking which keys were previously used. Overall, each dataset collection process takes anywhere from several weeks (CM0) to months (CM3) to complete.
Note that, to ensure that the attack generalizes between chips of the same family despite potential subtle hardware variations, we use a different chip to collect the training/test splits and the holdout split. The holdout splits, following machine learning best practices, are never used to tune the models or during experimentation. Instead, they are reserved for the final evaluation presented in Section 5.3.
Note that Table 3 also reports the ASCADv2 dataset that we use to evaluate SCANET generalization across multiple algorithms in Section 6. This dataset was made public in [23]; we simply convert it to the SCAAML dataset format. Since there was no apparent restriction on how to divide the samples into splits (train, test, holdout), we took one portion of consecutive samples for train, one for test, and one for holdout.
### Generalization over multiple implementations
Overall, SCANET is able to successfully attack all four ECC hardware implementations in white-box settings using multi-task relational
outputs training as reported in Table 4. These results are computed on the holdout splits which, as discussed previously, are captured on a different chip of the same family and were not used for model tuning or any other experiments discussed later in this section. As discussed in the threat model section (Section 3), white-box setting means that the model had access to the intermediate values (masks and random values) during training.
Note that to avoid creating unnecessary carbon emissions due to the high-cost of training models (see Table 2), we only evaluate SCANET on the initial (most-significant byte \(k_{0}\)), middle (\(k_{15}\)), and last (least-significant byte \(k_{31}\)) byte of each implementation as those bytes are representative of SCANET performance against those implementations.
As expected, as the strength of the protection increases, the model accuracy decreases, to the point where for CM3 only the initial byte can be attacked successfully. Note that an accuracy of 0.4% is close to random chance. We hypothesize that increasing SCANET's performance against stronger countermeasures requires more training data, not increased capacity or a different architecture. This is empirically supported by the experiment detailed in Section 5.7, which looks at model accuracy as a function of the number of traces used in training. This experiment shows that SCANET's accuracy against CM3 only starts to rise past 100k traces. Furthermore, looking at the max rank results, it is clear that the model did not fully generalize for CM1, CM2 and CM3, as the max rank is very close to its upper bound of 255.
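For reference, the accuracy, MeanRank, and MaxRank numbers reported in these tables can be computed from per-trace probability vectors as sketched below (a common convention in SCA evaluation; the array shapes are assumptions of this sketch).

```python
import numpy as np

def rank_metrics(probs, true_bytes):
    """probs: (n_traces, 256) softmax outputs; true_bytes: (n_traces,) ground-truth byte values."""
    order = np.argsort(-probs, axis=1)                        # most likely value first
    ranks = np.argmax(order == true_bytes[:, None], axis=1)   # 0 means the top guess is correct
    return {"accuracy": float(np.mean(ranks == 0)),
            "mean_rank": float(ranks.mean()),
            "max_rank": int(ranks.max())}
```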
In Table 5, we report SCANET results under black-box settings on the holdout splits. Unlike in the white-box settings, SCANET is not always able to recover even the initial byte \(k_{0}\). Interestingly, SCANET fails to recover CM1 \(k_{0}\) but is able to recover CM2 \(k_{0}\), which is surprising given that in the white-box setting CM1 appears to be easier than CM2. We hypothesize that CM1 is harder in the black-box setting because its random mask uses 256 bits whereas CM2 only uses 128 bits. Having access to the intermediate values seems to make the size of the random mask irrelevant in the white-box setting, whereas it is a strong defense in the black-box setting.
### Multi-task effectiveness evaluation
In the following set of experiments, we study whether using multi-task learning improves SCANET's accuracy. In particular, we are interested in understanding which additional tasks beyond the key-byte prediction, if any, improve model accuracy. To not taint our holdout splits, the results reported in this section are computed on the test splits. Once again, we only perform experiments on representative bytes to limit carbon emissions, namely the initial bytes (\(k_{0}\), \(k_{1}\), \(k_{2}\)), the middle one (\(k_{15}\)), and the final ones (\(k_{29}\), \(k_{30}\), \(k_{31}\)). For the same reason, we also only target the middle-of-the-road dataset CM2 and reserve the study of CM1 for the ablation study discussed later in the paper in Section 5.5.
Overall, there are two types of additional tasks that can be included in the training: _Adjacency predictions_ and _Intermediate predictions_. _Adjacency predictions_ ask the model to predict the key bytes on the left and the right of the targeted bytes with the hope it helps with carry issues and generalization. _Intermediate predictions_ involve the model predicting the value of intermediate computation points including mask and random nonces values.
The model can operate in two modes when performing multi-task learning using intermediate values: the _multi-outputs_ mode and the _relational outputs_ mode.
| Dataset | Attack point | Accuracy [%] | MeanRank | MaxRank |
| --- | --- | --- | --- | --- |
| CM0 | \(k_{0}\) | 100.00 | 0 | 0 |
| CM0 | \(k_{15}\) | 100.00 | 0 | 0 |
| CM0 | \(k_{31}\) | 100.00 | 0 | 0 |
| CM1 | \(k_{0}\) | 78.80 | 0.75 | 192 |
| CM1 | \(k_{15}\) | 93.20 | 0.31 | 253 |
| CM1 | \(k_{31}\) | 92.98 | 0.24 | 218 |
| CM2 | \(k_{0}\) | 66.22 | 1.40 | 254 |
| CM2 | \(k_{15}\) | 0.30 | 127.29 | 255 |
| CM2 | \(k_{31}\) | 11.31 | 8.76 | 233 |
| CM3 | \(k_{0}\) | 8.60 | 19.77 | 255 |
| CM3 | \(k_{15}\) | – | – | – |
| CM3 | \(k_{31}\) | 0.37 | 127.89 | 255 |

Table 4. SCANET white-box key byte recovery success rate on the four ECC hardware-protected implementations (holdout splits).
| Name | Trace length | Train length | Test | Holdout | File size [TB] |
| --- | --- | --- | --- | --- | --- |
| ECC CM0 | 1.6M | 57,344 | 8,192 | 8,192 | 0.2 |
| ECC CM1 | 5M | 194,544 | 8,192 | 8,192 | 1.5 |
| ECC CM2 | 10M | 122,880 | 8,192 | 8,192 | 2.1 |
| ECC CM3 | 17.5M | 122,880 | 8,192 | 8,192 | 3.7 |
| ASCADv2 | 1M | 640,000 | 80,000 | 80,000 | 0.9 |

Table 3. List of ECC evaluation datasets used in this study to evaluate SCANET generalization to multiple hardware implementations. The names used refer to the protected implementations described in Section 5.1. The table also includes the ASCADv2 dataset collected in (Kumar et al., 2019) that is used in Section 6 to evaluate SCANET generality across multiple algorithms.
In the _multi-outputs_ mode, the model outputs all the requested values without any interaction between the outputs. This is the classical form of multi-task learning used by many models to boost generality and accuracy. In the _relational output_ mode, as illustrated in Figure 1, we create a directed acyclic graph between the heads to model expert knowledge of how the outputs relate to each other according to the protecting algorithm. Obviously, this type of knowledge is only available in white-box testing conditions.
**Notation**: To make the results tables easier to understand we are using the following visual convention to distinguish between the various relations conditions:
* Circles are byte indexes centered at the column index (if the column byte index is \(i\) then the circles represent \([i-1,i,i+1]\)).
* A white circle \(\bigcirc\) at position \(j\) means not leveraging multi-task learning.
* A grey circle \(\bigcirc\) at position \(j\) means using multi-task learning.
* A black circle \(\bigcirc\) means using relational outputs learning.
Here are a few examples of such notations for the column \(k_{2}\) of Table 6:
_Results_. Overall we observe that multi-task learning is needed for the attack to succeed as reported on CM1 in Table 6. Without relation, denoted using the \(\bigcirc\) symbol, the model predictions are unable to exceed random chance (\(k_{0}\), \(k_{1}\), \(k_{2}\), \(k_{15}\), \(k_{29}\)) or barely exceed it (\(k_{30}\) and \(k_{31}\)). Using the simplest form of multi-task learning, denoted using the \(\bigcirc\) symbol, the model obtains high accuracy for almost all the bytes except \(k_{15}\).
Using relational-output, denoted using the \(\bigcirc\) symbol, the model accuracy overall improves further compared to using multi-task learning.
When it comes to the effectiveness of adjacency relations, the results are more mixed. As reported in Table 6, they introduce model instability, with the accuracy slightly increasing on some bytes (e.g., \(k_{0}\), \(k_{30}\), \(k_{31}\)) but decreasing significantly on others (e.g., \(k_{15}\)). Given the marginal gains offered by using adjacency relations and our goal to have a stable, end-to-end, fully automated attack, we decided not to use adjacency relations. We leave making better use of them for future work.
_Effect of multi-task learning on model convergence_. The positive impact of multi-task learning is best visualized by looking at how each of the output accuracy improves as training progresses. Regardless of the hardware implementation, we observe that outputs start to converge one after the other. For example, as visible in Figure 3 we observe that the mask prediction (\(km_{0}\)) accuracy rises before the random nonce (\(r_{0}\)) prediction accuracy improves and that the key value (\(k_{0}\)) prediction starts converging only after both the mask and the random nonce have reached a high accuracy. The same effect is observed for CM2 (Figure 4) and CM3 (Figure 5).
| Relations | \(k_{0}\) | \(k_{1}\) | \(k_{2}\) | \(k_{15}\) | \(k_{29}\) | \(k_{30}\) | \(k_{31}\) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| \(\bigcirc\) | 0.39 | 0.39 | 0.20 | 0.20 | 0.39 | 0.78 | 0.59 |
| \(\bigcirc\) | 81.50 | **79.20** | 85.45 | 43.36 | **86.62** | 86.13 | 92.68 |
| \(\bigcirc\) | 85.35 | 71.88 | **88.09** | **85.64** | 85.64 | 87.21 | 92.58 |
| \(\bigcirc\) | – | 67.68 | 81.45 | 40.82 | 82.52 | 83.38 | **93.65** |
| \(\bigcirc\) | **87.40** | 74.80 | 83.98 | 63.67 | 82.03 | 87.70 | – |
| \(\bigcirc\) | – | 63.38 | 80.96 | 41.11 | 84.08 | **88.48** | – |

Table 6. SCANET accuracy in % on the CM1 test dataset when trained using various forms of multi-task learning.
Figure 4. ECC CM2 validation accuracy for \(k_{0}\) when using relational outputs.
Figure 3. CM1 validation accuracy for \(k_{0}\) when using relational outputs.
Additionally, we observe that in each case the model first learns to predict the mask values and then the random nonces. This behavior is consistent with the hypothesis that multi-task learning is critical to generalized SCAAML attacks, as it allows models to learn to "unpack" the protection one step at a time. It also supports the hypothesis that black-box attacks are significantly harder, because models greatly benefit from this extra information. Last but not least, this behavior seems to confirm the effectiveness of higher-order masks against advanced side-channel attacks such as SCAAML.
### Multi-task ablation study
In this section, we perform an ablation study to better understand which intermediate values are needed for the attacks to succeed on CM1 and CM2. We exclude CM0, as it has no intermediate values, and CM3, as SCANET's relatively low accuracy on this dataset makes it hard to separate the results confidently and its experiments would take roughly 16 months of computation.
For CM1, as reported in Table 7, removing the prediction of the mask (\(r_{*}\)) at training time results in the model being unable to successfully attack CM1. Removing the prediction of the random nonce (\(km_{*}\)) drastically reduces the accuracy of the model for the middle key byte but has no effect on the initial and last byte. We are not sure why this happens.
We only target \(k_{0}\) and \(k_{31}\) in Table 8 as the model is able to only target the most and least significant bytes of CM2.
### Trunk ablation study
In this ablation study, we evaluate whether SCANET needs the transformer encoder blocks' long-range interaction properties to perform well. To validate this hypothesis, we try to predict \(k_{15}\) using only the output of the \(km_{15}\) and \(r_{15}\) heads, not the output of the trunk. Additionally, we prevent the model from cheating and encoding \(k_{15}\) prediction information in the intermediate outputs by applying a stop gradient on the \(km_{15}\) and \(r_{15}\) output layers. For this ablation study, the head outputting \(k_{15}\) looks like the one in Figure 2 when one removes the "Trunk" input.
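A minimal sketch of this ablation wiring is shown below, assuming the intermediate heads' output tensors are available; the stop-gradient prevents the \(k_{15}\) loss from leaking information back into the \(km_{15}\) and \(r_{15}\) heads (layer sizes and names are illustrative assumptions).

```python
import tensorflow as tf
from tensorflow.keras import layers

def key_head_without_trunk(km_out, r_out, dim=256):
    """Predict k_15 from the intermediate heads only (no trunk input); stop_gradient
    prevents the k_15 loss from back-propagating into km_15 / r_15."""
    frozen = [layers.Lambda(tf.stop_gradient)(t) for t in (km_out, r_out)]
    x = layers.Concatenate()(frozen)
    x = layers.Dense(256, activation="swish")(x)
    return layers.Dense(dim, activation="softmax", name="k_15")(x)
```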
When the key byte prediction head is not directly connected to the trunk the model exhibits an almost 50% accuracy drop as reported in Table 9. Those results strongly support the hypothesis that the transformer encoder blocks' latent representation and compute power is critical for accurate prediction.
### Dataset size impact evaluation
In this section, following the insights of (Nagarajan et al., 2017), which showcase that transformer models are bound either by capacity, compute, or data, we attempt to determine which of those factors is limiting SCANET's performance. We already know, thanks to the experiment run in Section 5.4, that SCANET is not bounded by compute, as the performance plateaus after a few hundred epochs (see Figure 3 for example).
| Target | Dependency | Accuracy [%] | MeanRank | MaxRank |
| --- | --- | --- | --- | --- |
| \(k_{0}\) | \(km_{0}\), \(r_{0}\) | **86.26** | **0.17** | **9** |
| \(k_{0}\) | \(km_{0}\) | 0.39 | 130.00 | 255 |
| \(k_{0}\) | \(r_{0}\) | 86.60 | 0.14 | 4 |
| \(k_{0}\) | Nothing | 0.29 | 127.90 | 255 |
| \(k_{15}\) | \(km_{15}\), \(r_{15}\) | **44.00** | **1.07** | **58** |
| \(k_{15}\) | \(km_{15}\) | 0.48 | 125.60 | 255 |
| \(k_{15}\) | \(r_{15}\) | 0.29 | 126.20 | 255 |
| \(k_{15}\) | Nothing | 0.48 | 125.69 | 255 |
| \(k_{31}\) | \(km_{31}\), \(r_{31}\) | 92.87 | 0.09 | 15 |
| \(k_{31}\) | \(km_{31}\) | 0.48 | 123.46 | 255 |
| \(k_{31}\) | \(r_{31}\) | **93.85** | **0.07** | **7** |
| \(k_{31}\) | Nothing | 0.20 | 126.94 | 255 |

Table 7. CM1 relational outputs ablation.
\begin{table}
\begin{tabular}{l r r r}
**Target** & **Dependency** & **Accuracy [\%]** & **MeanRank** & **MaxRank** \\ \hline \(k_{0}\) & Nothing & 18.26 & 8.44 & 235 \\ \(k_{0}\) & Outputs: \(r_{0}\),\(km_{16}\),\(rem_{16}\) & 52.44 & **1.78** & 235 \\ \(k_{0}\) & \(r_{0}\),\(km_{16}\),\(rem_{16}\) & 66.50 & 1.99 & 252 \\ \(k_{0}\) & \(r_{0}\),\(km_{16}\) & 66.50 & 2.14 & 254 \\ \(k_{0}\) & \(r_{0}\),\(rem_{16}\) & 39.16 & 4.47 & 244 \\ \(k_{0}\) & \(km_{16}\),\(rem_{16}\) & 32.40 & 4.21 & 223 \\ \(k_{0}\) & \(r_{0}\) & 39.26 & 4.43 & 247 \\ \(k_{0}\) & \(km_{16}\) & 36.23 & 2.63 & **160** \\ \(k_{0}\) & \(rem_{16}\) & 11.72 & 15.37 & 203 \\ \hline \(k_{31}\) & Nothing & 0.39 & 125.60 & 255 \\ \(k_{31}\) & Outputs: \(r_{15}\),\(km_{15}\),\(r_{31}\) & 90.00 & **6.74** & 192 \\ \(k_{31}\) & \(r_{15}\),\(km_{31}\),\(rem_{31}\) & 10.40 & 10.60 & 238 \\ \(k_{31}\) & \(r_{15}\),\(km_{31}\) & **10.74** & 9.90 & **142** \\ \(k_{31}\) & \(r_{15}\),\(rem_{31}\) & 8.49 & 10.33 & 207 \\ \(k_{31}\) & \(km_{31}\) & 8.59 & 8.58 & 195 \\ \(k_{31}\) & \(r_{15}\) & 9.27 & 8.99 & 155 \\ \(k_{31}\) & \(km_{31}\) & 1.36 & 41.38 & 215 \\ \(k_{31}\) & \(rem_{31}\) & 8.59 & 9.90 & 165 \\ \end{tabular}
\end{table}
Table 8. CM2 relational outputs ablation.
Figure 5. ECC CM3 validation accuracy for \(k_{0}\) when using relational outputs.
Accordingly, to decide whether or not the model is capped by the amount of data available or the model capacity/architecture, we train the same model on an increasing number of traces for CM1, CM2, and CM3. We take 10%, 20%,... of the 122,880 available examples and learn to predict \(k_{0}\).
Overall, we found out, as visible in Figure 6 and Figure 7, that SCANET is most likely bounded by the lack of data, as the accuracy keeps rising as the number of training examples increases. In particular, for CM3, SCANET starts to generalize only when at least 110,000 examples are used, suggesting that increasing the dataset size would most likely lead to significant accuracy gains. However, the CM3 dataset already requires 3.7TB of storage, so we decided against increasing the dataset size further, as SCANET is already able to successfully attack CM3 and increased accuracy doesn't bring significant additional benefits.
### ECDSA attack
We apply a standard lattice attack (Zhou et al., 2017; Zhang et al., 2018) to leverage the partial leakage from ECC scalar multiplication protected by either CM1, CM2 (even black-box), or CM3 to recover the private key from ECDSA (Zhou et al., 2017). To simulate a realistic attack, we take our holdout split (captured on a different physical chip than the training data), treat it as the ECDSA nonce multiplication \(k\times G\) as explained in Section 2.1, and predict the most significant byte. One caveat is that our multipliers in the holdout split are chosen to test the deep-learning model, so each byte is chosen uniformly and independently at random between 0 and 255. Since for ECDSA we require the nonce \(k\) to satisfy \(1\leq k<n\), where \(n\) is the order of the elliptic curve group, we exclude all measurements that have the multiplier outside of this range, after which we are left with roughly 7,800 examples (depending on the dataset: CM1, CM2, CM3).
For the lattice attack, we can only use signatures with correct predictions of the most significant bits (Zhou et al., 2017), although previous work reports that lattice attacks can handle a small amount of noisy signatures (Zhou et al., 2017). After trial and error with the predictions, our intuition is that if we turn byte predictions into predictions of the four most significant bits (MSBs), we will achieve the highest accuracy. To compute the probability of a given value of 4 MSB of the nonce for a given signature, we sum the probabilities of the 16 possible byte values with the same 4 MSBs. Table 10 shows the prediction accuracy of the four most significant bits. As we can see, we have a much higher accuracy in this case.
A lattice attack using the four MSBs of the nonce requires roughly 80 signatures (Zhou et al., 2017; Zhang et al., 2018). Still, random sampling of 80 signatures will have a high chance of having erroneous signatures, especially when the accuracy is still under 90%. However, we notice that in cases where the correct key 4 MSBs are detected with high accuracy, the prediction value has a higher _confidence_ (highest probability - second highest probability). We use the confidence of predictions as weights when randomly sampling (using the parameter weights of random.choices in Python). This approach retrieves the secret key after several retries for CM1 and CM2.
For the case of CM3, we employ the following heuristics to succeed in the attack. First, we give more weight to samples with higher confidence; we achieve this by using the confidence to the power of eight as weights when sampling the subset used in the lattice attack. The constant eight was chosen by profiling the attack on the validation set (also roughly 8,200 examples). Second, we discard samples whose byte-value prediction confidence is too high (more than 0.25, chosen by trial and error), discarding roughly 10% of examples. This excludes overconfident predictions, which would otherwise end up in most random samples even when they are wrong.
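The post-processing just described can be sketched as follows, assuming the model's 256-way byte predictions are available as a NumPy array. The power-of-eight weighting and the 0.25 cutoff are the CM3 heuristics mentioned above; for simplicity, the cutoff is applied here to the 4-MSB confidence and the sampling is done with replacement.

```python
import random
import numpy as np

def msb4_probabilities(byte_probs):
    """(n_sigs, 256) byte probabilities -> (n_sigs, 16), summing the 16 byte values
    that share each 4-MSB pattern."""
    return byte_probs.reshape(-1, 16, 16).sum(axis=2)

def confidence(msb_probs):
    top2 = np.sort(msb_probs, axis=1)[:, -2:]
    return top2[:, 1] - top2[:, 0]            # highest minus second-highest probability

def sample_for_lattice(msb_probs, n=80, power=8, cutoff=0.25, seed=0):
    """Confidence-weighted sampling of signature indices for the lattice attack."""
    random.seed(seed)
    conf = confidence(msb_probs)
    keep = np.where(conf <= cutoff)[0]        # drop over-confident predictions (CM3 heuristic)
    weights = (conf[keep] ** power).tolist()
    return random.choices(list(keep), weights=weights, k=n)
```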
We use the lattice construction of (Becker et al., 2016) and (Zhang et al., 2018) with the (Zhang et al., 2018) implementation of the BKZ algorithm (Bkzai et al., 2018). The attacks take on the order of minutes. The longest is the black-box attack on ECDSA using CM2, which takes roughly 15 minutes on a desktop computer with an AMD Ryzen 9 CPU.
| Experiment | Accuracy [%] | MeanRank |
| --- | --- | --- |
| CM1 | 94.83 | 0.06 |
| CM2 | 96.39 | 0.07 |
| CM2 black-box | 85.75 | 0.20 |
| CM3 | 71.86 | 0.87 |

Table 10. Results of predicting the 4 most significant bits (on holdout).
Figure 6. ECC \(k_{0}\) accuracy while using only a fraction of the datasets. Results evaluated on the test split.
Figure 7. ECC \(k_{0}\) MeanRank while using only a part of the dataset. Results evaluated on the test split.
This is due to the several retries needed to get all 80 samples correct.
## 6. Generalizing SCANET to AES
In this section, we show that SCANET generalizes to other cryptographic algorithms by demonstrating its effectiveness at attacking a software-protected AES implementation, namely the publicly available ASCADv2 dataset (Kumar et al., 2017), in an end-to-end manner without trace preprocessing. We start by providing an overview of this dataset, next we discuss the attack scenario considered, and finally we report SCANET's performance.
### ASCADv2 dataset
The ASCADv2 dataset (Kumar et al., 2017) is comprised of 800,000 power traces collected from a Cortex M4 microcontroller manufactured by ST Microelectronics (STM32F303RCT7) while it was performing AES-128 encryptions. The firmware implements affine masking and shuffling to protect the AES encryption computation from side-channel attacks. More details about the dataset can be found in (Bordes et al., 2017).
### Attack Scenario
To compare with previous approaches, we replicate the "First Threat Scenario" described in (Kumar et al., 2017). In this scenario, the attacker tries to attack the following equations from the AES S-BOX masked operation (Kumar et al., 2017):
\[c[i]=r_{m}\times Sbox[pt[p[i]]\oplus k[p[i]]]\oplus r_{out}\]
where
* \(\times\) and \(\oplus\) stand for multiplication and addition in the Rijndael finite field (Rijndael et al., 2017),
* \(Sbox\) is the AES S-BOX,
* \(r_{m}\) and \(r_{out}\) are affine mask bytes.
* \(p[i]\) is the permutation index,
* \(pt[i]\) is the byte of the plaintext.
* \(k[i]\) is the byte of the AES round key.
Critically, unlike previous work (Kumar et al., 2017), we perform an end-to-end attack by using the \(1,000,000\) points of each trace without preprocessing, instead of using an SNR (signal-to-noise ratio) analysis to cut and use only 15,000 points (1.5%) out of the total length. Additionally, we do not modify the SCANET architecture to perform the attack and rely solely on hyper-parameter tuning to adjust the model to this new target.
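To make the leakage model concrete, the following self-contained snippet implements Rijndael-field multiplication, derives the AES S-box from field inversion plus the affine map, and computes the protected value \(c[i]\) for one byte. All inputs are toy values, and this is only an illustration of the equation above, not the ASCADv2 firmware.

```python
def gf_mul(a, b):
    """Multiplication in GF(2^8) modulo x^8 + x^4 + x^3 + x + 1 (Rijndael field)."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1B
        b >>= 1
    return p

def gf_inv(a):
    """a^254 = a^-1 in GF(2^8), with the convention 0 -> 0."""
    r = 1
    for _ in range(254):
        r = gf_mul(r, a)
    return r

def sbox(x):
    """AES S-box: field inversion followed by the affine transform."""
    inv, s = gf_inv(x), 0x63
    for shift in range(1, 5):
        s ^= ((inv << shift) | (inv >> (8 - shift))) & 0xFF
    return s ^ inv

def masked_sbox_output(pt, k, r_m, r_out):
    """c[i] from the equation above, for one byte."""
    return gf_mul(r_m, sbox(pt ^ k)) ^ r_out

assert sbox(0x00) == 0x63 and sbox(0x01) == 0x7C   # spot-check against the AES S-box
```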
### AES attack evaluation
Table 11 shows the results for recovering the permutation index \(p[i]\) and Table 12 shows the results for recovering the masked S-BOX output \(c[i]\). For \(c[i]\), we report the best of 7 runs (due to the influence of the initialization weights, see Section 4.3.1). That is, we train a model seven times and, for each attack point, pick the model with the highest accuracy on the validation set; we then use that model to evaluate the performance on the holdout split.
**Comparison with ASCADv2 paper (Kumar et al., 2017)** Following the methodology in (Kumar et al., 2017), we turn predictions of \(c[i]\) into predictions of \(k[j]\) by performing the inverse of the operation in \(c[i]\)'s definition (see above). To do so, we compute the weighted predictions of \(c[i]-r_{out}\) and the predictions of \(\frac{c[i]-r_{out}}{r_{m}}\), with the operations taken in the Rijndael finite field. This allows us to compute the weighted predictions of the un-permuted input of the S-Box, which is the (known) plaintext XORed with a key byte. Additionally, we simulate a fixed-key scenario where the key is zeroed out. Finally, we sample \(N\) such predictions and sum their logarithms; the index with the highest value is our predicted value. We do this 10,000 times. When we sample \(N=80\) such predictions, we get a mean rank below 1 for all key bytes, which is consistent with the definition of a successful attack in (Kumar et al., 2017).
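A small sketch of the aggregation step: given per-trace probability vectors over the key-byte hypotheses, we sum log-probabilities over \(N\) sampled traces and keep the arg max. The array shape and the sampling strategy are assumptions of this sketch.

```python
import numpy as np

def recover_key_byte(per_trace_key_probs, n=80, eps=1e-40, seed=0):
    """per_trace_key_probs: (n_traces, 256) probability assigned to each key-byte value."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(per_trace_key_probs), size=n, replace=False)
    log_likelihood = np.log(per_trace_key_probs[idx] + eps).sum(axis=0)   # shape (256,)
    return int(np.argmax(log_likelihood))
```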
It is worth noting that with \(N=80\) each byte of the key is correctly recovered with probability 6.57%. We can get a better success probability by allowing ourselves to enumerate several key possibilities. Enumerating the top 10 predictions for each key byte (which would take on the order of months on a powerful desktop computer) gives us the following results: for \(N=80\) we have a successful enumeration attack in 79.3% of cases, for \(N=60\) in 43.73% of cases, and for \(N=50\) in 21.27% of cases.
Table 13 shows a comparison of our results and results of (Kumar et al., 2017). For the sake of brevity we average results with multiple indexes. We need roughly 80 traces to recover the secret key, comparable to (Kumar et al., 2017) who need roughly 60 traces, but use preprocessing.
SCANET's competitive results on ASCADv2 demonstrate its ability to generalize to multiple algorithms and, more generally, that end-to-end SCAAML attacks are a viable and simpler alternative to previous hand-crafted approaches.
| Attack point | Acc [%] | MeanRank | MaxRank | Confidence |
| --- | --- | --- | --- | --- |
| \(p[0]\) | 98.87 | **0.011** | **3** | 0.98 |
| \(p[1]\) | 93.83 | 0.069 | 5 | 0.91 |
| \(p[2]\) | 96.77 | 0.036 | 4 | 0.95 |
| \(p[3]\) | 90.80 | 0.106 | 4 | 0.89 |
| \(p[4]\) | 95.48 | 0.049 | 5 | 0.93 |
| \(p[5]\) | 96.78 | 0.034 | 3 | 0.94 |
| \(p[6]\) | 95.11 | 0.055 | 4 | 0.93 |
| \(p[7]\) | 90.84 | 0.108 | 6 | 0.89 |
| \(p[8]\) | 97.40 | 0.028 | 4 | 0.95 |
| \(p[9]\) | 95.80 | 0.046 | 5 | 0.93 |
| \(p[10]\) | 95.55 | 0.050 | 5 | 0.93 |
| \(p[11]\) | 97.38 | 0.029 | 4 | 0.95 |
| \(p[12]\) | 96.49 | 0.038 | 6 | 0.93 |
| \(p[13]\) | 89.23 | 0.124 | 5 | 0.87 |
| \(p[14]\) | 96.91 | 0.033 | 3 | 0.95 |
| \(p[15]\) | 93.85 | 0.068 | 5 | 0.93 |

Table 11. Permutation indexes (holdout, 80,000 examples).
| Attack point | Acc [%] | MeanRank | MaxRank | Confidence |
| --- | --- | --- | --- | --- |
| \(c[0]\) | 1.18 | 79.07 | 255 | 0.0042 |
| \(c[1]\) | 1.12 | 81.72 | 255 | 0.0045 |
| \(c[2]\) | 1.15 | 81.76 | 255 | 0.0040 |
| \(c[3]\) | 1.16 | 83.30 | 255 | 0.0044 |
| \(c[4]\) | 1.21 | 81.23 | 255 | 0.0058 |
| \(c[5]\) | 1.17 | 80.50 | 255 | 0.0049 |
| \(c[6]\) | **1.24** | 78.72 | 255 | 0.0038 |
| \(c[7]\) | 1.17 | 79.45 | 255 | 0.0043 |
| \(c[8]\) | 1.15 | 79.30 | 255 | 0.0039 |
| \(c[9]\) | 1.21 | **78.48** | 255 | 0.0048 |
| \(c[10]\) | 1.14 | 79.32 | 255 | 0.0041 |
| \(c[11]\) | 1.20 | 82.52 | 255 | 0.0057 |
| \(c[12]\) | 1.23 | 81.80 | 255 | 0.0049 |
| \(c[13]\) | 1.10 | 83.46 | 255 | 0.0057 |
| \(c[14]\) | 1.23 | 79.09 | 255 | 0.0041 |
| \(c[15]\) | 1.14 | 80.64 | 255 | 0.0045 |

Table 12. Masked permuted values (holdout, 80,000 examples). Best of 7 runs.
## 7. Related Work
### Machine-learning side channel attacks
Machine learning (ML) is a viable approach to SCA. Lerman et al. (Lerman et al., 2017) applied classical ML algorithms like the support vector machine (SVM) to outperform the famous template attack in recovering keys from masked implementations of AES (Maghebi et al., 2017). Maghebi et al. (Maghebi et al., 2017) later applied deep learning algorithms, including CNN and LSTM, to attack AES. Bursztein et al. (Bursztein et al., 2017, 2018) proved the feasibility of full-trace attacks with deep learning.
Since these early results, several follow-up efforts have focused on developing neural network architectures based on MLPs and CNNs to increase the attack efficiency in terms of the number of traces required (Golovolovolov et al., 2010; Golovolovolov and LeCun, 2011; Golovolovolov and LeCun, 2011).
Despite these advances, state-of-the-art SCAAML attacks have the following limitations that make them impractical for assessing real-world hardware countermeasures:
**(1) Preprocessing:** They focus on finding an optimal solution for a given dataset (e.g., ASCAD (Maghebi et al., 2017, 2017)) whereas in practice, traces collected from hardware need to be preprocessed by other tools or experts. Won et al. (Won et al., 2017) developed a framework based on multi-scale CNN to enable the integration of user-defined preprocessing phases into SCA. Hettwer et al. (Hettwer et al., 2017) explored various image classification metrics for finding points of interest in the signal. Zhou and Standaert (Zhou and Standaert, 2017) proposed a technique based on residual networks for aligning SCA traces. Wu and Pieck (Wu and Pieck, 2017) used auto-encoders to filter out noise added by mitigations such as clock jitter and random delays.
**(2) Generalizability:** For every new device, the analyst has to search for the optimal network architecture--hyper-parameter tuning (Wang et al., 2017; Zhou and Standaert, 2017). This is incredibly inefficient when it comes to attacking masked implementations, in that attackers require additional traces, which results in a lack of generalization. Several works aim at improving architecture search based on various tools such as Information Theory (Bursztein et al., 2017), Bayesian Optimization (Wang et al., 2017), and Gradient Visualization (Golovolovolov et al., 2010). Pernin et al. (Pernin et al., 2017) use an ensemble of ML models based on average class probabilities to improve generalization.
**(3) Portability:** Lastly, these algorithms fundamentally do not account for variations across hardware devices, either identical or near-identical (different revisions), which has encouraged several efforts into incorporating device-to-device variation into SCAAML training (Bursztein et al., 2017; Wang et al., 2017; Li et al., 2017).
Although, in theory, a combination of the proposed techniques could overcome the practical limitations of SCAAML, in reality it is unclear which techniques should be used in which scenario.
In comparison, we take a drastically different approach and develop a universal model that can attack real traces collected from several hardware countermeasures, automatically scale to other devices, and even to other algorithms. Lu et al. (Lu et al., 2017) followed a similar philosophy and developed a neural-network architecture that consists of autoencoders and attention mechanisms to achieve end-to-end profiling-based analysis. Although they demonstrated that attention mechanisms can be beneficial in achieving generalization, their results are limited to public datasets of software countermeasures. In contrast, we are the first to apply state-of-the-art GAUs to develop a universal model that can effectively overcome hardware countermeasures.
### Side-channel attacks on ECC
Single-trace side-channel attacks on ECC aim to recover most of the secret bits in one execution of scalar multiplication, an ideal choice especially for a signature scheme like ECDSA that performs scalar multiplication on a fresh integer every time. However, these attacks (Wang et al., 2017; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017) have only been successful when scalar blinding has low entropy. Most recently, Pernin et al. (Pernin et al., 2017) showed that DL could be used to perform unsupervised attacks and recover 90% of the secret bits when scalar blinding is performed using 32 or 64 randomly generated bits. (Wang et al., 2017) achieved better records in a scenario that is only applicable to ECC key generation, where the attacker collects multiple traces of scalar multiplication for the same secret. They suggest that the random share \(r\) be chosen such that \(\|r\|\geq\|n\|/2\) for an effective side-channel countermeasure.
Lattice-based attacks have been shown to be effective in recovering keys from partial leakage from real cryptographic chips when no masking countermeasure is applied (Wang et al., 2017; Wang et al., 2017; Wang et al., 2017). However, the effectiveness of SCAAML in assisting lattice attacks and bypassing stronger countermeasures is unknown. Goudarzi et al. (Goudarzi et al., 2017) combine lattice attacks with side channels (using a hypothetical SNR analysis) and show that attacks are feasible when \(\|r\|\) is 16, 32, or 64 bits. Based on these works, the current understanding in the community is that \(\|r\|\geq\|n\|/2\) is a safe countermeasure.
In contrast, we are the first to show that SCAAML-driven lattice attacks using our universal model can recover the ECDSA key even when \(\|r\|\geq\|n\|/2\), and even for higher-order masking with more than one share.
## 8. Conclusion
In this paper we presented SCANET, the first deep-learning architecture that can perform power side-channel attacks against multiple cryptographic algorithms, namely ECC and AES, in an end-to-end manner. We also demonstrated SCANET's ability to generalize by successfully attacking several highly protected ECC implementations without changing the model architecture. Last but not least, we open-source our models and datasets to enable reproducibility. Our results suggest that using fully end-to-end automated side-channel attacks to test new countermeasures is within reach. They also suggest that many countermeasures currently considered sufficient to thwart side-channel attacks are not enough to defend against SCAAML attacks.
| Attack point | Accuracy [%] | MeanRank | MaxRank | Acc [%] [44] | MeanRank [44] |
| --- | --- | --- | --- | --- | --- |
| \(r_{m}\) | **100.00** | 0 | 0 | 99.2 | – |
| \(r_{out}\) | 18.25 | 4.78 | 65 | **21.1** | – |
| \(c[i]\) (average) | 1.18 | 80.65 | 255 | **1.6** | **80** |
| \(p[i]\) (average) | **95.07** | 0.05 | 4.44 | 88.9 | – |

Table 13. ASCADv2 results (\(c[i]\) are best out of 7 runs) compared with the results of (Lu et al., 2017). Measured on the holdout dataset with 80,000 examples. We average results when there are multiple indexes.
Accordingly, there is a pressing need to devise new protections that are resilient to deep-learning attacks.
|
2305.06407
|
Combo of Thinking and Observing for Outside-Knowledge VQA
|
Outside-knowledge visual question answering is a challenging task that
requires both the acquisition and the use of open-ended real-world knowledge.
Some existing solutions draw external knowledge into the cross-modality space
which overlooks the much vaster textual knowledge in natural-language space,
while others transform the image into a text that further fuses with the
textual knowledge into the natural-language space and completely abandons the
use of visual features. In this paper, we are inspired to constrain the
cross-modality space into the same space of natural-language space which makes
the visual features preserved directly, and the model still benefits from the
vast knowledge in natural-language space. To this end, we propose a novel
framework consisting of a multimodal encoder, a textual encoder and an answer
decoder. Such structure allows us to introduce more types of knowledge
including explicit and implicit multimodal and textual knowledge. Extensive
experiments validate the superiority of the proposed method which outperforms
the state-of-the-art by 6.17% accuracy. We also conduct comprehensive ablations
of each component, and systematically study the roles of varying types of
knowledge. Codes and knowledge data can be found at
https://github.com/PhoebusSi/Thinking-while-Observing.
|
Qingyi Si, Yuchen Mo, Zheng Lin, Huishan Ji, Weiping Wang
|
2023-05-10T18:32:32Z
|
http://arxiv.org/abs/2305.06407v1
|
# Combo of Thinking and Observing for Outside-Knowledge VQA
###### Abstract
Outside-knowledge visual question answering is a challenging task that requires both the acquisition and the use of open-ended real-world knowledge. Some existing solutions draw external knowledge into the cross-modality space which overlooks the much vaster textual knowledge in natural-language space, while others transform the image into a text that further fuses with the textual knowledge into the natural-language space and completely abandons the use of visual features. In this paper, we are inspired to constrain the cross-modality space into the same space of natural-language space which makes the visual features preserved directly, and the model still benefits from the vast knowledge in natural-language space. To this end, we propose a novel framework consisting of a multimodal encoder, a textual encoder and an answer decoder. Such structure allows us to introduce more types of knowledge including explicit and implicit multimodal and textual knowledge. Extensive experiments validate the superiority of the proposed method which outperforms the state-of-the-art by 6.17% accuracy. We also conduct comprehensive ablations of each component, and systematically study the roles of varying types of knowledge. Codes and knowledge data can be found at [https://github.com/PhoebusSi/Thinking-while-Observing.1](https://github.com/PhoebusSi/Thinking-while-Observing.1)
Footnote 1: Joint work with ByteDance AI Lab.
## 1 Introduction
Conventional visual question answering (VQA) [11] tasks require models to answer questions based on image content. Such tasks have been thoroughly studied [13, 14, 15] on conventional VQA datasets VQAv2 [16]. However, real-world questions often rely on a certain amount of knowledge beyond images. Therefore, Knowledge Base Question Answering (KB-VQA) tasks [15, 17, 18, 19] always require models to answer questions by referring to the corresponding knowledge facts in a specific pre-defined knowledge base. Yet any pre-defined knowledge base is far from covering real-world knowledge. Recently, the outside-knowledge visual question answering (OK-VQA) task has been proposed [16] and provides the most open VQA setting. That is, any knowledge resource can be used to answer its challenging and diverse questions.
Most previous work [14, 15, 16] on OK-VQA follows the conventional VQA paradigm (as shown in Figure 1 (a)) based on visual-language pre-trained (VLP) models, and injects knowledge into the same cross-modality space afterward.
Figure 1: Comparison with previous paradigms. Orange lines indicate processes involving cross-modality space. (a) The conventional VQA paradigm fuses image and question text into the cross-modality space, and then predicts answers in a close-set classification manner. (b) Language-centric paradigm applies captioning and tagging tools to describe the visual context, and abandons the visual features to convert the VQA task into an open-ended generative QA task. (c) The proposed paradigm intends to constrain the cross-modality space into the same space as natural-language space so that models can directly decode both text and multimodal embeddings.
However, knowledge in cross-modality space is much less abundant than that in natural-language space (Gao et al.). This paradigm excels at visual understanding but refers to little knowledge, like a human who focuses on _observing_ but does not _think_ enough.
To take advantage of the vast knowledge in natural-language space, state-of-the-art methods Gao et al. (2022); Yang et al. (2022); Gui et al. (2021) on OK-VQA follow the language-centric paradigm (as shown in Figure 1 (b)) based on pre-trained language models (PLMs). However, although more knowledge can be introduced, this paradigm is counter-intuitive because many visual details are lost when converting an image into text. It is therefore like a human who starts _thinking_ after only brief _observing_.
For a human, a feasible solution to OK-VQA is the combination of _Thinking while Observing_. To this end, we propose **TwO**, a framework consisting of a multimodal encoder, a textual encoder and an answer decoder. As shown in Figure 1(c), the multimodal encoder directly encodes the visual features and acts as the _observer_, while the textual encoder encodes a range of knowledge resources and acts as the _thinker_. Finally, the answer decoder decodes the latent embeddings from both encoders to generate the final answer. In addition, a pre-training stage is added to help constrain the output of both encoders to the same latent space.
Previous methods Gui et al. (2021); Gao et al. (2022); Wu et al. (2022) have thoroughly studied explicit textual knowledge such as Wikipedia, as well as implicit textual knowledge in GPT-3 Brown et al. (2020). However, the discussion of multimodal knowledge, which further utilizes visual features, is still in its infancy in OK-VQA. In this paper, we accumulate explicit multimodal knowledge during pre-training on VQAv2 Ding et al. (2022). Besides, inspired by prompting GPT-3 Yang et al. (2022) for implicit textual knowledge, we use prompts to bring in the implicit multimodal knowledge stored in the unifying VLP model OFA Wang et al. (2022). Moreover, we refine a taxonomy of existing methods by knowledge type (refer to Figure 2), where our method is the first to bring in all types of knowledge.
To summarize, our contributions are as follows:
(1) We propose a simple and effective paradigm that combines the advantages of both conventional VQA and language-centric paradigms.
(2) Our method can deal with more comprehensive types of knowledge, and is the first to bring in implicit multimodal knowledge through a prompt-learning fashion. In addition, we empirically analyze the roles of different types of knowledge.
(3) Experimental results show the effectiveness of our method, which establishes a new SoTA accuracy on OK-VQA with a 6.17% gain.
## 2 Background
### Outside-Knowledge Visual Question Answering (OK-VQA)
In addition to dividing existing methods according to latent space, namely multimodal-space methods Ding et al. (2022); Garderes et al. (2020); Zhu et al. (2020); Yu et al. (2020); Zheng et al. (2021); Marino et al. (2021) and textual-space methods Yang et al. (2022); Gui et al. (2021); Gao et al. (2022), existing methods can also be roughly categorized into two lines by whether GPT-3 is used. Most of the GPT-3 based methods Gui et al. (2021); Lin et al. (2022) outperform non-GPT ones by large margins, since the huge-parameter-capacity GPT-3 can store abundant implicit textual knowledge. The vast implicit knowledge in GPT-3 can be easily retrieved in a prompt manner. For example, PICa Yang et al. (2022) uses text prompts of in-context examples to query GPT-3 for answers directly. However, most existing methods for OK-VQA are non-GPT-3 based and do not directly compare with GPT-3 based methods, for fairness. For completeness, we explore our model performance with and without GPT-3, respectively.
Previous work has generally improved model performance in OK-VQA in two ways: one is to
Figure 2: Taxonomy of OK-VQA methods by knowledge types. Green, purple, blue and red fonts represent the introduction of one, two, three and four types of knowledge. No existing work introduces four types of knowledge in a unified framework, but ours.
introduce more knowledge sources (see Figure 4), and the other is to optimize the model paradigm (see Figure 1). For example, MAVEx Wu et al. (2022) follows the former way and introduces more knowledge sources such as Wikipedia, ConceptNet Speer et al. (2017) and Google images to boost model performance; VRR-EReader Luo et al. (2021) follows the latter way and replaces the classifier with an extraction reader to solve the generalization problem of classification manner. Our method goes further in both directions: On the one hand, we explore more comprehensive types of knowledge. On the other hand, we refine the paradigm to make the visual features retained, and the model still benefits from natural language space. We list the relationship between our method and previous work in Appendix A.1.
### Taxonomy of OK-VQA Methods by Knowledge Types
With an in-depth look at the types of knowledge involved in each existing method, we propose a complete taxonomy of OK-VQA methods shown in Figure 2. We divide all knowledge into four types: explicit textual knowledge, explicit multimodal knowledge, implicit textual knowledge, and implicit multimodal knowledge.
From Figure 2, we find that (1) most GPT-3 based methods Yang et al. (2022); Gui et al. (2021) appear in the two circles of "Textual" because they adopt the language-centric paradigm. (2) Few methods use explicit multimodal knowledge, which is more challenging to introduce into models than explicit textual knowledge. Among them, Marino et al. (2022) propose accumulating this knowledge through pre-training, while Wu et al. use Google Images to provide similar images. (3) Recent work is usually distributed in the two circles of "Implicit". This shows that VLP models or PLMs have become one of the vital components of models for OK-VQA. Appendix A.2 and A.3 show more related work about VLP models and PLMs.
## 3 Method
### Visual Description Module
Given an image \(I_{i}\), following Gao et al. (2022), we adopt a coarse-to-fine transformation strategy to describe it as comprehensively as possible, and obtain three parts as follows.
1. Image-level caption \(C_{i}\), given by the SoTA VLP model OFA Wang et al. (2022).
2. Object-level attribution description \(L_{i}\) from the VinVL Zhang et al. (2021) detector.
3. Token-level Optical Character Recognition (OCR) results \(O_{i}\) from easyOCR2.
Footnote 2: [https://github.com/JaideAI/EasyOCR](https://github.com/JaideAI/EasyOCR)
To simplify, we refer to the three as visual context \(V_{i}=(C_{i},L_{i},O_{i})\). The generated visual descriptions are in the following forms:
\[C_{i}=\left\{(w_{0}^{cap},...,w_{j}^{cap})\right\},\qquad L_{i}=\left\{phrase_{0}^{lab},...,phrase_{m}^{lab}\right\},\qquad O_{i}=\left\{(w_{0}^{ocr},...,w_{k}^{ocr})\right\}\]
knowledge source, and our model accumulates multimodal knowledge in advance through pre-training on VQAv2.
### Implicit Knowledge Retrieval
Recently, the GPT-3 LLM has shown its strength in generating open domain knowledge Gui et al. (2021); Yang et al. (2022) in a prompt-learning manner, and is widely used in OK-VQA as a source of implicit textual knowledge. However, the text descriptions of given images in prompts may lack important visual information, resulting in incomplete or irrelevant knowledge output from GPT-3. To overcome such drawbacks, we propose to view the unifying VLP model OFA as a source of implicit multimodal knowledge. Different from GPT-3, OFA can be queried directly by visual features with text prompts.
Implicit Textual Knowledge in GPT-3.Following the prompt tuning procedure of KAT Gui et al. (2021), we retrieve implicit textual knowledge in GPT-3 with supporting evidence. Specifically, we use the combination of the question, caption, and object labeling as a prompt \(X_{gpt}\) for each image-question pair. Then we add carefully designed instruction text and semantically similar samples as the in-context examples at the beginning of \(X_{gpt}\). That is, \(X_{gpt}\) is "\(\langle instructions\rangle\) \(\langle in\text{-}context\ examples\rangle\) Context: \(\langle caption\ C_{i}\rangle\) + \(\langle object\ labeling\ L_{i}\rangle\). Q: \(\langle question\ Q_{i}\rangle\) A:". \(X_{gpt}\) queries a tentative answer \(A_{i}^{gpt}\), and we then query GPT-3 with another prompt \(Y_{gpt}\) = "\(\langle question\ Q_{i}\rangle\ \langle answer\ A_{i}^{gpt}\rangle\). This is because" for supporting evidence \(E_{i}^{gpt}\). The final obtained implicit textual knowledge is \(T_{i}=\left\{A_{i}^{gpt},E_{i}^{gpt}\right\}\).
Implicit Multimodal Knowledge in OFA.Instruction-guided pre-training enables OFA to perform zero-shot generalization for different prompts, although it does not have a huge parameter capacity like GPT-3. To generate the tentative answer \(A_{i}^{ofa}\), we directly feed OFA the visual features and the question as the prompt \(X_{ofa}\). In addition, "This is because" in \(Y_{gpt}\) is no longer applicable to prompt OFA to generate the evidence, as OFA excels at question-form prompts rather than writing a continuation like GPT-3. We therefore design a question-form prompt \(Y_{ofa}\) = "\(\langle question\ Q_{i}\rangle\) why \(\langle answer\ A_{i}^{ofa}\rangle\)?" to query OFA for supporting evidence \(E_{i}^{ofa}\). The final obtained implicit multimodal knowledge is \(M_{i}=\left\{A_{i}^{ofa},E_{i}^{ofa}\right\}\).
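As an illustration, a minimal sketch of how the two prompt formats above could be assembled; the `query_gpt3` and `query_ofa` helpers are hypothetical wrappers around the respective model calls, not part of our released code:

```python
def retrieve_implicit_textual_knowledge(question, caption, labels,
                                        instructions, in_context_examples,
                                        query_gpt3):
    """Build X_gpt / Y_gpt and return T_i = {A_i^gpt, E_i^gpt}."""
    context = f"{caption} {' '.join(labels)}"
    x_gpt = (f"{instructions}\n{in_context_examples}\n"
             f"Context: {context}. Q: {question} A:")
    tentative_answer = query_gpt3(x_gpt)                       # A_i^gpt
    y_gpt = f"{question} {tentative_answer}. This is because"  # evidence prompt
    evidence = query_gpt3(y_gpt)                               # E_i^gpt
    return tentative_answer, evidence


def retrieve_implicit_multimodal_knowledge(question, image, query_ofa):
    """Query OFA directly with visual features, then with a question-form
    evidence prompt, returning M_i = {A_i^ofa, E_i^ofa}."""
    tentative_answer = query_ofa(image, question)              # A_i^ofa
    y_ofa = f"{question} why {tentative_answer}?"              # question-form prompt
    evidence = query_ofa(image, y_ofa)                         # E_i^ofa
    return tentative_answer, evidence
```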
### Model Structure of TwO
We have designed the modules above for different types of knowledge, and then, as shown in Figure 3, transfer the acquired knowledge to our model, which contains the following modules:
Multimodal Encoder.We directly adopt an existing VLP model as our multimodal encoder. This paper mainly uses LXMERT, the most widely used one in VQA. LXMERT encodes question \(Q_{i}\) and image \(I_{i}\) to obtain the language hidden states \(\hat{H_{i}^{l}}\) and vision hidden states \(\hat{H_{i}^{v}}\) that have fully interacted with each other.
\[\hat{H_{i}^{l}},\hat{H_{i}^{v}}=enc_{mm}(Q_{i},I_{i}) \tag{3}\]
where \(\hat{H_{i}^{l}}\in\mathbb{R}^{L_{q}*\hat{h}}\), \(\hat{H_{i}^{v}}\in\mathbb{R}^{L_{v}*\hat{h}}\), \(L_{q}\) is the length of the question, \(L_{v}\) is the number of objects, and \(\hat{h}\) is the size of the hidden embedding. This encoder acts like _"observing"_ where visual features can interact well with questions.
Textual Encoder.We use T5's encoder as the textual encoder, and feed in all possible textual
Figure 3: The flowchart of our method shows how we obtain four types of knowledge (red fonts) and feed them into the proposed model, which consists of a multimodal encoder, a textual encoder and an answer decoder.
information, i.e., \(Q_{i}\), \(V_{i}\), \(M_{i}\)(, \(T_{i}\))3 and \(P_{i}\) as input. Due to the large number of relevant Wikipedia passages, we concatenate each passage \(p_{i,k}\) that iterates over \(P_{i}\) with other inputs, and then feed each concatenated sequence into the textual encoder as:
Footnote 3: Unless compared with GPT-3 based methods, \(T_{i}\) extracted from GPT-3 is not included by default, due to the much energy consumption of GPT-3.
\[Z_{i}^{k}=enc_{txt}(Q_{i},V_{i},M_{i},p_{i,k}) \tag{4}\]
Here, we obtain the hidden embedding sequence \(Z_{i}^{k}=(z_{0},z_{1},...,z_{t})\), where \(z_{t}\) represents the \(t_{th}\) token embedding, \(Z_{i}^{k}\in\mathbb{R}^{L_{t}*h}\), \(L_{t}=|(Q_{i},V_{i},M_{i},p_{i,k})|\) is the length of the sequence and \(h\) is the size of the hidden embedding. This encoder acts like "_thinking_" where vast knowledge can interact well with questions.
Combo of Both Encoders.To combine the hidden embeddings of both encoders, we map the embedding of the multimodal encoder into the same dimensional space as the textual encoder:
\[H_{i}^{l},H_{i}^{v}=FC_{2}(relu(FC_{1}([\hat{H_{i}^{l}},\hat{H_{i}^{v}}]))) \tag{5}\]
where \(H_{i}^{l}\in\mathbb{R}^{L_{q}*h}\), \(H_{i}^{v}\in\mathbb{R}^{L_{v}*h}\). The final multimodal embedding sequence is \(H_{i}=(H_{i}^{l},H_{i}^{v})\). Then we combine the multimodal and textual embedding sequence together to obtain a hybrid embedding sequence \(S_{i}^{k}=(H_{i},Z_{i}^{k})\). Subsequently, we iterate all \(k\) passages with the same encoding process to generate \(k\) hybrid embedding sequences:
\[S_{i}=(S_{i}^{0},S_{i}^{1},...,S_{i}^{k}) \tag{6}\]
where \(S_{i}\in\mathbb{R}^{((L_{q}+L_{v}+L_{t})\cdot k)\times h}\) is the concatenation of all \(k\) sequences. Taking into account both visual features and vast knowledge, we come to a combo of "_thinking and observing_".
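A PyTorch-style sketch of the combination step in Eqs. (5)-(6); the module name and hidden sizes (LXMERT: 768, T5-large: 1024) are illustrative assumptions, not the authors' released implementation:

```python
import torch
import torch.nn as nn


class EncoderCombo(nn.Module):
    """Project multimodal hidden states into the textual encoder's space
    (Eq. 5) and prepend them to every passage's embeddings (Eq. 6)."""

    def __init__(self, mm_dim: int = 768, txt_dim: int = 1024):
        super().__init__()
        self.fc1 = nn.Linear(mm_dim, txt_dim)
        self.fc2 = nn.Linear(txt_dim, txt_dim)

    def forward(self, h_lang, h_vis, passage_embeds):
        # h_lang: (B, L_q, mm_dim), h_vis: (B, L_v, mm_dim)
        # passage_embeds: list of k tensors, each (B, L_t, txt_dim)
        h_mm = torch.cat([h_lang, h_vis], dim=1)        # language + vision states
        h_mm = self.fc2(torch.relu(self.fc1(h_mm)))     # Eq. (5)
        hybrid = [torch.cat([h_mm, z_k], dim=1) for z_k in passage_embeds]
        return torch.cat(hybrid, dim=1)                 # S_i, Eq. (6)
```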
Answer Decoder.We apply T5's decoder as the answer decoder, and feed in the embedding sequence \(S_{i}\) to generate the final answer according to the prediction probability \(P()\) over the vocabulary space \(|V|\) for each answer token:
\[P(a_{i}^{1}),...,P(a_{i}^{l})=softmax(dec(\mathbf{S_{i}})) \tag{7}\]
where \(l\) is the length of the answer. Finally, we adopt teacher-enforcing to train the model with auto-regressive cross-entropy objective:
\[L_{ans}=\frac{-1}{N\cdot l\cdot|V|}\sum_{i=1}^{N}\sum_{j=1}^{l}\sum_{w=1}^{| V|}A_{i}^{j,w}\log(P(a_{i}^{j,w})) \tag{8}\]
where \(N\) is the size of the whole training set.
Pre-training and Fine-tuning.In addition to accumulating explicit multimodal knowledge in VQAv2, the pre-training stage also makes the answer decoder suitable for decoding two different encoders. Note that the implicit knowledge \(T_{i}\) and \(M_{i}\) are not used during pre-training, while the forms of other inputs are consistent with fine-tuning. To employ model ensemble, a common practice in OK-VQA, we take ensembles of six models trained with different seeds, and select the most frequent predictions as the final answers.
## 4 Experiments
### Experimental Setup
OK-VQA Dataset.This paper conducts extensive experiments on the OK-VQA dataset (Marino et al., 2019), the most open VQA dataset, where each question requires outside knowledge beyond the image to answer correctly. Since all questions are manually annotated with no fixed template or knowledge base, this dataset allows the use of any external knowledge source that can help answer.
Evaluation Metric and Implementation Details.We evaluate performance by the standard VQA evaluation metric (Goyal et al., 2017) (denoted by Acc) and Exact Match (Gao et al., 2022) (denoted by EM). Acc defines a soft score (between 0 and 1) for each annotated answer according to a voting mechanism, reflecting the consensus subjectively of multiple annotators. In contrast, EM treats all annotated answers to a question equally as the ground truth, which is a looser metric.
We adopt _lxmert-base-uncased_ or _visualbert-vqa_ (Li et al., 2019) and _T5-large_ models to initialize our model. We pre-train and fine-tune the models on 12 and 8 A100-80GB GPUs respectively for 3 epochs with a batch size of 1. More details are shown in Appendix B.
### Comparison with Existing Approaches
Comparison with SoTAs.Table 1 reports the performance of our proposed method and state-of-the-art models, from which we can derive several observations: (1) Comparing the second and third lines with the first line, we find that implicit knowledge in VLP models or PLMs, used for model initialization, further improves model performance. This was rarely discussed in previous work. (2) MuKEA and TriG are the best-performing methods to implement OK-VQA in cross-modal space
and natural-language space, respectively. By comparing their performance, we find that OK-VQA solutions in natural-language space perform significantly better than those in cross-modal space. This is because squeezing the rich representation of natural-language knowledge (billion-degree pre-training corpus) into a much smaller cross-modal space (million-degree pre-training corpus) leads to a severe loss of knowledge. (3) Our method is compatible with various VLP encoders, and beats the previous SoTAs TRiG by 6.17% Acc and 6.59% EM. (4) It can be seen from the middle two columns that, compared to previous work, our method is the first to utilize all four types of knowledge at the same time, which is one of the reasons why our method is effective. Moreover, as shown in Appendix C.1, our method can outperform TRiG using 100 Wikipedia passages by 4.37% Acc even using only 5 passages, which substantially reduces computing consumption.
Comparison with GPT-3 Based Methods.We also compare our method with recent GPT-3 based methods. As shown in Table 2, GPT-3 Based methods are significantly superior to non-GPT-3 baselines shown in Table 1. However, even without GPT-3 (175B), we can achieve competitive results with OFA (0.93B). To compare fairly, we further improve our model performance by incorporating GPT-3, and clearly surpass all GPT-3 based SoTAs.
### Ablation Study
Ablation of Pretrain-finetune Strategy.In Figure 4, we evaluate the contribution of pre-training and fine-tuning in our method. The decline in performance caused by "w/o pre-train" confirms the necessity of pre-training. Although 'w/o fine-tune' is far worse than the final performance, it is still competitive compared with previous methods. This further verifies that multimodal knowledge in VQAv2 is helpful in solving OK-VQA.
Ablation of Model Structure.To prove the complementary benefits of applying the two encoders, we conduct experiments and report results in Table 3. The findings can be summarized as follows: (1) As shown in the "Input Form" column,
| Method | Venue | Implicit Knowledge | Explicit Knowledge Resources | EM | Acc |
| --- | --- | --- | --- | --- | --- |
| BAN | NeurIPS (2019) | — | — | — | 25.17 |
| +AN | CVPR (2019) | — | Wikipedia | — | 25.61 |
| +KG-AUC | MM (2020a) | — | Wikipedia + ConceptNet | — | 26.71 |
| MUTAN | ICCV (2017) | — | — | — | 26.41 |
| +AN | CVPR (2019) | — | Wikipedia | — | 27.84 |
| Mucko | IJCAI (2020) | — | ConceptNet | — | 29.20 |
| GRUC | PR (2020) | — | ConceptNet | — | 29.87 |
| KM4 | Inf. Fusion (2021) | — | multimodal knowledge from OK-VQA | — | 31.32 |
| ViLBERT | NeurIPS (2019) | ViLBERT | — | — | 31.35 |
| LXMERT | EMNLP (2019) | LXMERT | — | — | 32.04 |
| VRR-CReader | EMNLP (2021) | LXMERT | Google Search | — | 36.78 |
| RVLESK | LANTERN (2021) | LXMERT | ConceptNet | — | 39.04 |
| MAVEx | AAAI (2022) | ViLBERT | Wikipedia + ConceptNet + Google Images | — | 41.37 |
| MuKEA | CVPR (2022) | LXMERT | multimodal knowledge from VQAv2 and OK-VQA | — | 42.59 |
| ConceptBert | EMNLP (2020) | BERT | ConceptNet | — | 33.66 |
| KRISP (w/o mm p.c.) | CVPR (2021) | BERT | DBpedia + ConceptNet + VisualGenome + haspartKB | — | 32.31 |
| KRISP (w/ mm p.c.) | CVPR (2021) | BERT | ditto + VQAv2 | — | 38.90 |
| VRR-EReader | EMNLP (2021) | RoBERTa | Google Search | — | 39.20 |
| TRiG | CVPR (2022) | T5 | Wikipedia | 53.59 | 49.35 |
| TRiG, E | CVPR (2022) | T5 | Wikipedia | 54.73 | 50.50 |
| Ours | — | LXMERT+OFA+T5 | VQAv2 + Wikipedia | **59.85** | **55.33** |
| Ours, E | — | LXMERT+OFA+T5 | VQAv2 + Wikipedia | **61.12** | **56.49** |
| Ours | — | VisualBERT+OFA+T5 | VQAv2 + Wikipedia | **60.17** | **55.52** |
| Ours, E | — | VisualBERT+OFA+T5 | VQAv2 + Wikipedia | **61.32** | **56.67** |

Table 1: Results comparison with existing methods. The middle two columns report the implicit knowledge and explicit knowledge sources involved in each method respectively. The middle two row blocks show the methods based on VLP models and PLMs respectively. \(\mathbf{E}\) denotes the model ensemble.
| Method | Knowledge in Input Text | Acc |
| --- | --- | --- |
| PICa | Frozen GPT-3 (175B) | 46.50 |
| PICa, E | Frozen GPT-3 (175B) | 48.00 |
| KAT | Wikidata + Frozen GPT-3 (175B) | 53.10 |
| KAT, E | Wikidata + Frozen GPT-3 (175B) | 54.40 |
| REVIVE | Wikidata + Frozen GPT-3 (175B) | 56.60 |
| REVIVE, E | Wikidata + Frozen GPT-3 (175B) | 58.00 |
| ours | Wikipedia + Frozen OFA (0.93B) | 55.33 |
| ours, E | Wikipedia + Frozen OFA (0.93B) | 56.49 |
| ours w/ GPT-3 | ditto + Frozen GPT-3 (175B) | **57.57** |
| ours w/ GPT-3, E | ditto + Frozen GPT-3 (175B) | **58.72** |

Table 2: Results comparison with existing GPT-3 based methods. \(\mathbf{E}\) denotes the model ensemble.
combining both textual and multimodal encoders allows our method to handle both visual features and textual input simultaneously. (2) 'w/o txt enc' consistently underperforms 'w/o mm enc', because the natural-language space of the textual encoder contains more knowledge, which is critical to OK-VQA. (3) The upper part shows that, without pre-training, 'w/o textual enc' performs worse than LXMERT, as the answer decoder, initialized with T5, cannot directly fit the encoder initialized with LXMERT. (4) Similarly, removing the multimodal encoder without pre-training will instead result in a slight performance improvement for the same reason. (5) As shown in the lower part, adopting pre-training contributes to ameliorating the above phenomenon. That is, the performance of 'ours' is superior to both 'w/o txt enc' and 'w/o mm enc' by clear margins. This proves that pre-training can help make the answer decoder suitable for decoding both encoders, thus combining the advantages of both encoders.
Ablation of Four Types of Knowledge.Table 4 shows that the absence of any type of knowledge will lead to a significant drop in performance (1.39%-4.86% Acc and 1.53%-5.20% EM), which proves the complementary benefits among the four types of knowledge. Among the four types of knowledge, implicit knowledge in OFA contributes the most and explicit knowledge of Wikipedia contributes the least. We will discuss this phenomenon in Appendix D.1. In addition, in Appendix C.3, we also perform ablations from a dependence perspective to prove the indispensability of each encoder and knowledge.
Performance of Knowledge Retrieval.From Table 5, it can be seen that: (1) The combination of all the knowledge retrieved in our method can cover the answers corresponding to 95.30% of the samples. The high \(hit\) guarantees a high upper bound, allowing the model to generalize better. (2) \(Hit\) of prompting OFA significantly outperforms that of prompting GPT-3, indicating that implicit multi-modal knowledge may be more effective than implicit textual knowledge in OK-VQA. (3) The supporting evidence can clearly improve \(hit\) of the tentative answers, especially for OFA (from 61.59% to 66.75%). (4) Wikipedia's high \(hit\) demonstrates the effectiveness of our adopted DPR model in retrieval. As shown in Appendix C.1, as the number of Wikipedia passages increases, Acc/EM of our model rises first and then falls because noise is introduced when the number of passages is large.
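For reference, a simple sketch of how the \(hit\) statistic reported in Table 5 could be computed; the case-insensitive substring criterion is an assumption about the exact matching rule:

```python
def hit_rate(retrieved_texts, annotated_answers):
    """Percentage of samples whose retrieved knowledge string contains any
    annotated answer (case-insensitive substring match)."""
    hits = 0
    for text, answers in zip(retrieved_texts, annotated_answers):
        lowered = text.lower()
        if any(ans.lower() in lowered for ans in answers):
            hits += 1
    return 100.0 * hits / len(retrieved_texts)
```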
In Appendix D.1, we also conduct experiments to further explore the extent to which the model
| Model | Knowledge Type | EM | Acc |
| --- | --- | --- | --- |
| ours | all four types | **62.33** | **57.57** |
| w/o pre. | explicit multimodal | 59.93 | 55.44 |
| w/o Wiki | explicit textual | 60.80 | 56.18 |
| w/o GPT-3 | implicit textual | 59.65 | 55.28 |
| w/o OFA | implicit multimodal | 57.13 | 52.71 |

Table 4: Ablation study on the four types of knowledge; each row removes one knowledge source (and thus the corresponding knowledge type) from the full model.
Figure 4: Ablation study on the pre-training and fine-tuning stages. ’w/o finetune’ denotes that after pre-training on VQAv2, the model will be evaluated directly on the OK-VQA test set without further fine-tuning.
| Knowledge Source | \(hit\) (Train) | \(hit\) (Test) | Knowledge Source | \(hit\) (Train) | \(hit\) (Test) |
| --- | --- | --- | --- | --- | --- |
| **GPT-3 ans + evi** | **56.59** | **61.51** | **OFA ans + evi** | **63.36** | **66.75** |
| GPT-3 ans | 54.02 | 59.27 | OFA ans | 57.63 | 61.59 |
| GPT-3 evi | 34.09 | 37.26 | OFA evi | 57.84 | 61.47 |
| **Visual Context** | **32.28** | **32.92** | **Wikipedia (75)** | **28.58** | **85.26** |
| captions | 22.34 | 22.81 | Wikipedia (50) | 80.34 | 82.62 |
| labels | 23.62 | 24.18 | Wikipedia (25) | 74.28 | 76.56 |
| OCR | 0.44 | 0.32 | Wikipedia (10) | 63.20 | 64.74 |
| **all** | **93.18** | **95.30** | Wikipedia (5) | 51.88 | 54.12 |

Table 5: \(Hit\) of each component in our model’s inputs. \(Hit\) is defined as the percentage of samples in the whole dataset that get a \(hit\) on any corresponding annotated answer by the retrieved knowledge. "ans" and "evi" denote tentative answers and supporting evidence, respectively.
makes use of each type of knowledge. We find that compared with explicit knowledge, implicit knowledge has a higher conversion rate from knowledge to correct answers. We also qualitatively analyze the impact on OK-VQA of different versions of OFA in Appendix D.2.
## 5 Qualitative Analysis
Case Study on Two Encoders.To explore the respective roles of the two encoders, the upper part of Figure 5 shows the examples that can be answered correctly by one of the two single-encoder models. Plot (a) and (b) of Figure 5 show that **ours-mm** excels at answering questions that need comprehension about image scenes and objects. For example, the orientation and the relative position between TV and sofa in plot (b) help generate the answer "watch tv". Such scene information is easily omitted by a single textual encoder. This further validates that the multimodal encoder supplements the missing image information, and makes better use of the image when combining knowledge.
Plots (c) and (d) show that **ours-txt** is an expert in answering questions that require focusing more on external knowledge rather than image understanding, since the textual encoder is the primary channel for receiving knowledge from multiple sources.
Case Study on Varying Types of Knowledge.As shown in the lower plots in Figure 5, we further analyze the circumstances under which each type of knowledge is essential, respectively. Plot (e) shows that the model would hardly generate correct answers, even those that have been recalled by knowledge, once pre-training is removed. This demonstrates that explicit multimodal knowledge accumulated during pre-training enhances the ability to use the recalled knowledge according to image content. Plot (f) shows that when a question is deeply dependent on image content (e.g., bird type detection), implicit multimodal knowledge in OFA can directly provide tentative answers from the image, which strengthens the visual understanding. Plot (g) shows that implicit textual knowledge in GPT-3 is essential for questions that require commonsense knowledge. Plot (h) shows that when a question is highly open, even if both GPT-3 and OFA fail to recall the corresponding knowledge, the retrieved Wikipedia passage can still provide enough knowledge (see Figure 4), e.g., enumerating the most plane models. In Appendix D.3, we also compare our method qualitatively against the previous methods.
## 6 Conclusion and Future Work
This paper proposes a simple and effective method that mimics human behavior "_thinking while observing_", i.e., benefiting from the vast knowledge in natural-language space while making the most of the visual features for better image understanding. Our method establishes a new SoTA accuracy of
Figure 5: Examples of our prediction together with all the supporting knowledge when (Upper) only using a single encoder or (Lower) respectively removing each type of knowledge from our method. **Pred** denotes our predicted answer. **ours-mm** and **ours-txt** represent the model that combines only multimodal encoder or textual encoder with answer decoder, respectively.
56.67% with a 6.17% improvement on OK-VQA. Moreover, we consider more comprehensive types of knowledge, and systematically analyze the role of each type of knowledge in detail. We hope our work can stimulate followers to explore OK-VQA further along the direction of how to fuse both nature-language and cross-modality spaces better.
## Limitations
First, considering the unique most open setting of OK-VQA, following most previous work Gao et al. (2022); Wu et al. (2022); Yang et al. (2022); Gui et al. (2021); Lin et al. (2022), we only evaluate our method on this dataset. Second, although the proposed method has verified the feasibility of the idea that constrains both natural-language and cross-modality spaces together, it is still necessary to explore more ways to better combine the output of two encoders. Third, our method involves multiple offline knowledge retrieval processes, such as retrieving relevant Wikipedia passages, which will make it difficult to deploy our model as an online model.
|
2307.07160
|
Do not Mask Randomly: Effective Domain-adaptive Pre-training by Masking
In-domain Keywords
|
We propose a novel task-agnostic in-domain pre-training method that sits
between generic pre-training and fine-tuning. Our approach selectively masks
in-domain keywords, i.e., words that provide a compact representation of the
target domain. We identify such keywords using KeyBERT (Grootendorst, 2020). We
evaluate our approach using six different settings: three datasets combined
with two distinct pre-trained language models (PLMs). Our results reveal that
the fine-tuned PLMs adapted using our in-domain pre-training strategy
outperform PLMs that used in-domain pre-training with random masking as well as
those that followed the common pre-train-then-fine-tune paradigm. Further, the
overhead of identifying in-domain keywords is reasonable, e.g., 7-15% of the
pre-training time (for two epochs) for BERT Large (Devlin et al., 2019).
|
Shahriar Golchin, Mihai Surdeanu, Nazgol Tavabi, Ata Kiapour
|
2023-07-14T05:09:04Z
|
http://arxiv.org/abs/2307.07160v1
|
# Do not Mask Randomly: Effective Domain-adaptive Pre-training by Masking In-domain Keywords
###### Abstract
We propose a novel task-agnostic in-domain pre-training method that sits between generic pre-training and fine-tuning. Our approach selectively masks _in-domain keywords_, i.e., words that provide a compact representation of the target domain. We identify such keywords using KeyBERT (Grootendorst, 2020). We evaluate our approach using six different settings: three datasets combined with two distinct pre-trained language models (PLMs). Our results reveal that the fine-tuned PLMs adapted using our in-domain pre-training strategy outperform PLMs that used in-domain pre-training with random masking as well as those that followed the common pre-train-then-fine-tune paradigm. Further, the overhead of identifying in-domain keywords is reasonable, e.g., 7-15% of the pre-training time (for two epochs) for BERT Large (Devlin et al., 2019).1
Footnote 1: The code for all of our experiments is available at [https://github.com/shahiargolchin/do-not-mask-randomly](https://github.com/shahiargolchin/do-not-mask-randomly).
## 1 Introduction
Employing large pre-trained language models (PLMs) is currently a common practice for most natural language processing (NLP) tasks (Tunstall et al., 2022). A two-stage pre-train-then-fine-tune framework is usually used to adapt/fine-tune PLMs to downstream tasks (Devlin et al., 2019). However, motivated by ULMFiT (Howard and Ruder, 2018) and ELMo (Peters et al., 2018), Gururangan et al. (2020) showed that incorporating in-domain pre-training (also known as domain-adaptive pre-training) between generic pre-training and fine-tuning stages can lead to further performance improvements in downstream tasks because it "pulls" the PLM towards the target domain. At this intermediate stage, the domain adaptation for PLMs is typically handled by continuing pre-training in the same way, i.e., using randomly-masked tokens on unstructured in-domain data (Devlin et al., 2019). Here, we argue that this intermediate pre-training should be performed differently, i.e., masking should focus on _words that are representative of target domain_ to streamline the adaptation process.
We propose a novel task-independent in-domain pre-training approach for adapting PLMs that increases domain fit by focusing on _keywords_ in the target domain, where keywords are defined as "a sequence of one or more words that offers a compact representation of a document's content" (Rose et al., 2010). By applying token masking only to in-domain keywords, the meaningful information in the target domain is more directly captured by the PLM. This is in contrast to the classic pre-training strategy that randomly masks tokens (Devlin et al., 2019), which may overlook domain-meaningful information, or the in-domain pre-training methods that selectively mask tokens deemed important given the downstream task (Gu et al., 2020, inter alia), which require incorporating information from the downstream task into the pre-training stage. We empirically show that our method offers a better transmission of high-quality information from the target domain into PLMs, yielding better generalizability for the downstream tasks.
The key contributions of this paper are:
**(1)** We propose the first task-agnostic selective masking technique for domain adaptation of PLMs that relies solely on in-domain keywords. In particular, we first extract contextually-relevant keywords from each available document in the target domain using KeyBERT (Grootendorst, 2020) and keep the most frequently occurring keywords to be masked during the adaptation phase.
**(2)** We evaluate our proposed strategy by measuring the performance of fine-tuned PLMs in six different settings. We leverage three different datasets for text classification from multiple domains: IMDB movie reviews (Maas et al., 2011),
Amazon pet product reviews from Kaggle,2 and PUBHEALTH (Kotonya and Toni, 2020). Our experiments show that the classifiers trained on top of two PLMs--in our case, Bidirectional Encoder Representations from Transformers (BERT) Base and Large (Vaswani et al., 2017; Devlin et al., 2019)--that are adapted based on our suggested approach outperform all baselines, including the fine-tuned BERT with no in-domain adaptation, and fine-tuned BERT adapted by random masking. Further, the overhead of identifying in-domain keywords is reasonable, e.g., 7-15% of the pre-training time (for two epochs of data) for BERT Large.
Footnote 2: [https://www.kaggle.com/datasets/kashnitsky/exploring-transfer-learning-for-nlp](https://www.kaggle.com/datasets/kashnitsky/exploring-transfer-learning-for-nlp)
## 2 Related Work
Bidirectional Encoder Representations from Transformers (BERT) brought pre-training to transformer networks (Vaswani et al., 2017) through masked language modeling (MLM) (Devlin et al., 2019). They showed that a simple two-step paradigm of generic pre-training followed by fine-tuning to the target domain can significantly improve performance on a variety of tasks.
However, after showing that infusing an intermediate pre-training stage (commonly known as in-domain pre-training) can help pre-trained Long Short-Term Memory models learn domain-specific patterns better (Howard and Ruder, 2018; Peters et al., 2018), Gururangan et al. (2020) found that the same advantage applies to PLMs as well. Since then, several efforts proposed different domain-adaptive pre-training strategies.
Unsurprisingly, one of the most extensively utilized in-domain pre-training methodologies has been to employ classic random masking to adapt PLMs into several domains (Lee et al., 2020; Beltagy et al., 2019; Alsentzer et al., 2019; Tavabi et al., 2022b, a; Araci, 2019). Following this, Zheng et al. (2020) introduced the fully-explored MLM in which random masking is applied to specific non-overlapping segments of the input sequence. The limitation of random masking that we aim to address is that it may put unnecessary focus on tokens that are not representative of the target domain.
In contrast, task-specific selective masking methods mask tokens that are important to the downstream task. For each task, "importance" is defined differently: Gu et al. (2020) let an additional neural model learn important tokens given the task at hand; Ziyadi et al. (2020) defined importance by masking entities for the named entity recognition task; and Feng et al. (2018) found important tokens by input reduction--maintaining the model's confidence in the original prediction while reducing the input--and were left with a few (potentially nonsensical) tokens that were treated as important to the model. Similarly, Li et al. (2020) designed a task-dependent objective for dialogue adaptation, and Ke et al. (2019) proposed label-aware MLM for a sentiment analysis task. In the same vein, token selection in certain domains, e.g., biomedical and clinical domains, was performed based on the entities relevant to the domain (Lin et al., 2021; Zhang et al., 2020; Pergola et al., 2021).
Note that other MLM-based pre-training strategies focused on training a language model from scratch (Zhang et al., 2020; Joshi et al., 2020; Sun et al., 2019, inter alia). However, since our work focuses on in-domain pre-training, we skip this part for brevity.
In this study, we propose an information-based domain-adaptive pre-training that, without being aware of the downstream task, selectively masks words that are information-dense with respect to the target domain. As a result, PLMs adapted using our mechanism outperform baselines adapted with random masking or fine-tuned directly. In the following sections, we refer to our approach as "keyword masking pre-training."
## 3 Approach
### Extracting In-domain Keywords
In order to extract keywords relevant to the domain of interest, we use KeyBERT (Grootendorst, 2020). In a nutshell, KeyBERT uses BERT's (Devlin et al., 2019) contextualized embeddings to find the \(n\)-grams (in our scenario, unigrams) that concisely describe a given document. In particular, word embeddings with the highest cosine similarity to the overall document-level representation are identified as keywords that best represent the entire document. We configure KeyBERT to extract up to 10 keywords from each input document. Note that we do not pre-train or fine-tune BERT as the underlying model for KeyBERT.
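A minimal sketch of this extraction step using the public KeyBERT API; the embedding backbone (KeyBERT's default sentence-transformer here) and the MMR diversity value are assumptions based on the configuration reported in Section 4:

```python
from keybert import KeyBERT

kw_model = KeyBERT()  # default sentence-transformers backbone; a BERT backbone is also possible


def extract_doc_keywords(document: str, top_n: int = 10):
    """Return up to `top_n` unigram keywords for one in-domain document."""
    keywords = kw_model.extract_keywords(
        document,
        keyphrase_ngram_range=(1, 1),  # unigrams only
        stop_words="english",
        use_mmr=True,                  # Maximal Marginal Relevance
        diversity=0.8,
        top_n=top_n,
    )
    return [kw for kw, _score in keywords]  # drop the similarity scores
```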
### Removing Noisy Keywords
After extracting domain-specific keywords, we compute the frequency of each specific word that has been recognized as a keyword in all in-domain
documents. Subsequently, we sort them in descending order of their frequency and keep only the most frequent ones. This simple strategy allows us to remove keywords that are likely to be noisy or irrelevant to the target domain.
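A sketch of this frequency-based filtering step; the function and variable names are illustrative, and the example threshold is the one we report for PUBHEALTH below:

```python
from collections import Counter


def filter_noisy_keywords(per_doc_keywords, min_count=8):
    """Keep only words detected as keywords at least `min_count` times across
    all in-domain documents (e.g., 8 for PUBHEALTH); the rest are treated as noise."""
    counts = Counter(kw for doc_kws in per_doc_keywords for kw in doc_kws)
    return {kw for kw, freq in counts.items() if freq >= min_count}
```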
Figure 1 summarizes the noisy keyword removal process for the PUBHEALTH dataset (see Appendix B for other domains). Note that the actual figure has a very long tail on the right, indicating that the actual in-domain keywords (or parts where information is condensed in the target domain) are frequently repeated. The graph displays terms along with the number of times they are identified as keywords. In the PUBHEALTH dataset, for example, more than 10,000 words were detected as keywords only once. Thus, we select the cut-off point where the curve starts to leap up, and keywords with repetition counts below this threshold are excluded from the list of domain-relevant keywords.3 Namely, in the PUBHEALTH dataset, all words detected fewer than eight times as a keyword were removed from the list of in-domain keywords and, consequently, from keyword masking pre-training. The examples on the graph in Figure 1 give a qualitative indication that KeyBERT, coupled with our frequency-based heuristic, selects meaningful domain-specific keywords. For example, our approach identifies relevant keywords (e.g., _health_, _coronavirus_), while skipping other less relevant ones (e.g., _gym_, _gift_).
Footnote 3: The threshold is adjusted via three points: an empirically chosen point from the graph, a point before, and a point after it. Following keyword masking based on each of these three thresholds, we choose the one that resulted in the highest F1 score on the validation split as the final threshold.
### Keyword Masking Pre-training
We pair the list of retrieved candidate keywords with all target domain documents to perform keyword masking pre-training. If any of the keywords from the list appear in the input documents, the tokens corresponding to those keywords get masked given the masking probability. In our pre-training strategy, we use a constant (rather than linear) learning rate scheduler together with a high masking probability to force the majority of tokens associated with keywords to be masked while continuously learning from the surrounding tokens. As our approach inherits from MLM Devlin et al. (2019), the tokens related to keywords are masked 80% of the time, replaced 10% of the time with other tokens, and left unchanged 10% of the time. Note that during pre-training, masking only applies to tokens that match the candidate keywords; therefore, the remaining tokens contribute no masked-prediction targets.
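A simplified, token-level sketch of the keyword masking step (the actual implementation builds on the Hugging Face whole-word-masking collator; matching keywords at the single-token level and the helper names here are assumptions):

```python
import torch


def mask_keyword_tokens(input_ids, keyword_token_ids, tokenizer, mask_prob=0.75):
    """Mask only tokens that belong to in-domain keywords, following the
    80/10/10 rule inherited from MLM; all other tokens receive no loss."""
    input_ids = input_ids.clone()
    labels = input_ids.clone()

    is_keyword = torch.zeros_like(input_ids, dtype=torch.bool)
    for tid in keyword_token_ids:          # ids of the tokenized candidate keywords
        is_keyword |= input_ids == tid

    selected = is_keyword & (torch.rand(input_ids.shape) < mask_prob)
    labels[~selected] = -100               # ignore non-selected tokens in the loss

    rand = torch.rand(input_ids.shape)
    mask_idx = selected & (rand < 0.8)                      # 80%: [MASK]
    replace_idx = selected & (rand >= 0.8) & (rand < 0.9)   # 10%: random token; rest unchanged
    input_ids[mask_idx] = tokenizer.mask_token_id
    input_ids[replace_idx] = torch.randint(len(tokenizer), (int(replace_idx.sum()),))
    return input_ids, labels
```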
### Fine-tuning and Baselines
We compare the performance of all fine-tuned PLMs adapted using our technique with two other baselines: fine-tuned PLMs adapted using random masking, and fine-tuned PLMs with no in-domain adaptation. For all these settings we employ both BERT Base and BERT Large Devlin et al. (2019).
## 4 Experimental Setup
**Data:** In our experiments, we chose tasks and datasets with sufficient amounts of unlabeled data for the domain adaptation stage in order to observe the effects of keyword selection.4 In particular, we evaluate our method on three text classification datasets: PUBHEALTH Kotonya and Toni (2020), which contains public health claims associated with veracity labels, IMDB movie reviews dataset Maas et al. (2011), and Amazon pet product reviews dataset (from a Kaggle competition).5
Footnote 4: For example, we did not use the GLUE dataset Wang et al. (2018) because the included texts are short.
Footnote 5: [https://www.kaggle.com/datasets/kashnitsky/exploring-transfer-learning-for-nlp](https://www.kaggle.com/datasets/kashnitsky/exploring-transfer-learning-for-nlp)
Based on the thresholds we studied for filtering out the noisy keywords (see Section 3.2), we gathered 2,116, 7,274, and 6,881 domain-specific keywords from the PUBHEALTH dataset, IMDB dataset, and Amazon dataset, respectively.
**Settings:** We use KeyBERT Grootendorst (2020) to extract up to 10 unigram keywords per input document utilizing contextualized word embeddings of BERT Base Devlin et al. (2019), stratified by the Maximal Marginal Relevance (MMR) Carbonell and Goldstein (1998) with a threshold of 0.8.
To perform keyword masking pre-training, we set the masking probability to 0.75 with a constant learning scheduler. The other hyperparameters are left at their default values from the Hugging Face data collator for whole word masking Wolf et al. (2020). For random masking pre-training, we set the masking probability to 0.15, which is a standard value for continual MLM pre-training, and left the remaining hyperparameters at the values provided by the Hugging Face data collator for language modeling Wolf et al. (2020). Note that the default learning rate scheduler is linear. Further, in all settings, pre-training is limited to two epochs, and
a batch size of 16 is adopted during both the adaptation and fine-tuning stages.
With the learning rate set to 2e-5 and the weight decay set to 0.01 (Devlin et al., 2019), we fine-tune the whole network for all of our adapted models and baselines for up to four epochs in all datasets, while keeping the other hyperparameters at the default value of Hugging Face (Wolf et al., 2020). The models that obtained the highest F1 score in the validation partition are then chosen and evaluated on the test split of the datasets.
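For concreteness, a sketch of this fine-tuning configuration with Hugging Face `TrainingArguments`; the output directory and strategy names are illustrative, and only the numeric hyperparameters come from the setup described above:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="keyword-masked-bert-finetune",  # illustrative name
    learning_rate=2e-5,
    weight_decay=0.01,
    per_device_train_batch_size=16,
    num_train_epochs=4,
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,   # keep the checkpoint with the best validation F1
    metric_for_best_model="f1",
)
```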
## 5 Results and Discussion
Table 1 and 2 report the performance of fine-tuned models that used multiple domain-adaptive pre-training methods for each of our settings: three different datasets and two distinct PLMs. Table 1 contains the results for _BERT Base_ as underlying PLM; Table 2 uses _BERT Large_.
In particular, each table contrasts the performance of two fine-tuned baselines--one without adaptation/in-domain pre-training and one with random masking in-domain pre-training--to a fine
_PUBHEALTH Dataset_

| Adaptation Method | Accuracy (%) | F1 Score (%) |
| --- | --- | --- |
| No Adaptation | 64.80 | 63.23 |
| Random Masking | 65.77 | 64.94 |
| Our Keyword Masking | **66.09**\* | **65.40**\* |

_IMDB Movie Reviews Dataset_

| Adaptation Method | Accuracy (%) | F1 Score (%) |
| --- | --- | --- |
| No Adaptation | 94.44 | 94.43 |
| Random Masking | 94.96 | 94.95 |
| Our Keyword Masking | **95.36**\* | **95.35**\* |

_Amazon Pet Product Reviews Dataset_

| Adaptation Method | Accuracy (%) | F1 Score (%) |
| --- | --- | --- |
| No Adaptation | 85.89 | 85.73 |
| Random Masking | 86.33 | 86.31 |
| Our Keyword Masking | **87.14**\* | **86.98**\* |

Table 1: A comparison between the performance of fine-tuning adapted PLMs using our keyword masking and other baselines when _BERT Base_ is used as the PLM. The best results are shown in **bold** and statistically significant results compared to random masking are indicated by an asterisk (*) (see Appendix E).
Figure 1: The graph shows the frequency of the last 50 most frequent keywords in the PUBHEALTH domain and the cut-off line for removing noisy keywords.
_PUBHEALTH Dataset_

| Adaptation Method | Accuracy (%) | F1 Score (%) |
| --- | --- | --- |
| No Adaptation | 66.42 | **65.08** |
| Random Masking | 63.90 | 64.74 |
| Our Keyword Masking | **66.66**\* | 64.74 |

_IMDB Movie Reviews Dataset_

| Adaptation Method | Accuracy (%) | F1 Score (%) |
| --- | --- | --- |
| No Adaptation | 95.38 | 95.37 |
| Random Masking | 95.50 | 95.49 |
| Our Keyword Masking | **95.52** | **95.51** |

_Amazon Pet Product Reviews Dataset_

| Adaptation Method | Accuracy (%) | F1 Score (%) |
| --- | --- | --- |
| No Adaptation | 85.69 | 85.71 |
| Random Masking | 86.84 | 86.72 |
| Our Keyword Masking | **87.58**\* | **87.51**\* |

Table 2: A comparison between the performance of fine-tuning adapted PLMs using our keyword masking and other baselines when _BERT Large_ is used as the PLM. The best results are shown in **bold** and statistically significant results compared to random masking are indicated by an asterisk (*) (see Appendix E).
tuned model adapted using our keyword masking.
Both tables show that our approach outperforms all other baselines in all six settings. The improvements are statistically significant in four out of six settings (Appendix E). This highlights the importance of selecting information-carrying keywords for masking during the in-domain pre-training.
The results reveal that our suggested in-domain pre-training technique outperforms alternative settings with or without standard in-domain pre-training on target domain unlabeled data. Although the benefits of continual pre-training vary depending on the domain and the task at hand Gururangan et al. (2020), our adaptation strategy always has a greater impact on PLMs in capturing domain-specific patterns compared to typical random masking when in-domain adaptation has a positive impact on downstream tasks. This indicates that our pre-training method indeed exposes the PLMs to relevant in-domain representations.
Given the superior outcomes seen in our six different experiments, we can argue that our selective masking strategy, which is as task-agnostic as random masking yet more effective, could potentially replace random masking widely in the intermediate pre-training stage for a variety of NLP tasks. Beyond performance, our method is simple and has no "pathological behavior" Feng et al. (2018) (see Appendix C). Additionally, our method takes 2 to 10 minutes of computational overhead to extract keywords, which accounts for 7% to 39% of the pre-training time for only two epochs (Appendix A).
## 6 Conclusion
We proposed the first task-agnostic selective masking pre-training approach, dubbed "keyword masking," to adapt PLMs to the target domains. For keyword masking, we first extract in-domain keywords from the target domain using KeyBERT Grootendorst (2020), and after excluding the noisy ones, we only mask the selected keywords during adaptation.
We evaluated our methodology using six different settings. The results revealed that when in-domain pre-training is conducted using our approach, all fine-tuned PLMs outperform those with no adaptation or adapted using random masking. Further, we observed that our pre-training approach was superior for difficult tasks, i.e., datasets with many labels and more complexity. Lastly, keyword masking pre-training can widely replace random masking for domain adaptation in NLP tasks since it is task-independent, as simple to use as random masking, and more effective.
## 7 Limitations
Although all pre-training approaches require a sufficient amount of data, given how we defined keywords, longer sequences suit our approach better than short ones for studying the effects of keyword selection. Further, as shown in this study, our findings strongly imply that the strategy we suggested for adapting PLMs can effectively enhance their performance on text classification as the downstream task. To determine whether these findings can translate to other NLP applications, however, further experiments are required.
## 8 Ethics Statement
Although keyword extraction may amplify bias depending on the input documents and the way it extracts keywords, KeyBERT Grootendorst (2020) has not been reported to exhibit this behavior. Further work may be necessary to thoroughly explore the potential of introducing undesired bias.
|
2306.06157
|
Fault Localization for Buggy Deep Learning Framework Conversions in
Image Recognition
|
When deploying Deep Neural Networks (DNNs), developers often convert models
from one deep learning framework to another (e.g., TensorFlow to PyTorch).
However, this process is error-prone and can impact target model accuracy. To
identify the extent of such impact, we perform and briefly present a
differential analysis against three DNNs widely used for image recognition
(MobileNetV2, ResNet101, and InceptionV3) converted across four well-known deep
learning frameworks (PyTorch, Keras, TensorFlow (TF), and TFLite), which
revealed numerous model crashes and output label discrepancies of up to 100%.
To mitigate such errors, we present a novel approach towards fault localization
and repair of buggy deep learning framework conversions, focusing on
pre-trained image recognition models. Our technique consists of four stages of
analysis: 1) conversion tools, 2) model parameters, 3) model hyperparameters,
and 4) graph representation. In addition, we propose various strategies towards
fault repair of the faults detected. We implement our technique on top of the
Apache TVM deep learning compiler, and we test it by conducting a preliminary
fault localization analysis for the conversion of InceptionV3 from TF to
TFLite. Our approach detected a fault in a common DNN converter tool, which
introduced precision errors in weights, reducing model accuracy. After our
fault localization, we repaired the issue, reducing our conversion error to
zero.
|
Nikolaos Louloudakis, Perry Gibson, José Cano, Ajitha Rajan
|
2023-06-10T23:50:02Z
|
http://arxiv.org/abs/2306.06157v5
|
# Fault Localization for Buggy Deep Learning Framework Conversions in Image Recognition
###### Abstract
When deploying Deep Neural Networks (DNNs), developers often convert models from one deep learning framework to another (e.g., TensorFlow to PyTorch). However, this process is error-prone and can impact target model accuracy. To identify the extent of such impact, we perform and briefly present a differential analysis against three DNNs widely used for image recognition (MobileNetV2, ResNet101, and InceptionV3) converted across four well-known deep learning frameworks (PyTorch, Keras, TensorFlow (TF), and TFLite), which revealed numerous model crashes and output label discrepancies of up to 72%. To mitigate such errors, we present a novel approach towards fault localization and repair of buggy deep learning framework conversions, focusing on pre-trained image recognition models. Our technique consists of four stages of analysis: 1) conversion tools, 2) model parameters, 3) model hyperparameters, and 4) graph representation. In addition, we propose various strategies towards fault repair of the faults detected. We implement our technique on top of the Apache TVM deep learning compiler, and we test it by conducting a preliminary fault localization analysis for the conversion of InceptionV3 from TF to TFLite. Our approach detected a fault in a common DNN converter tool, which introduced precision errors in weights, reducing model accuracy. After our fault localization, we repaired the issue, reducing our conversion error to zero.
## I Introduction
Deep Neural Network (DNN) models, trained using a given deep learning (DL) framework (such as PyTorch [1], TensorFlow (TF) [2]), can be converted to a different DL framework (such as Keras [3]). Common reasons for this conversion include 1) deployment on resource-constrained environments such as IoT devices, which may require lightweight DL frameworks (e.g., TFLite), and 2) support for a wider set of features, that allow more in-depth model modification and optimization, such as explicitly defining forward propagation implementation. Conversion of DNN models between DL frameworks is facilitated by automated conversion processes using tools such as tf2onnx [4], onnx2keras [5], onnx2torch [6], and MMdnn [7]. However, this conversion process can introduce faults [8, 9, 10, 11], which can make the converted models undeployable or reduce performance on their target task [12, 13].
In order to mitigate this problem, we propose an automated approach for fault localization and repair of faults introduced by the DL framework conversion process. We focus on DL framework conversion used in deployment of pre-trained image recognition models, utilized for image classification tasks. Note that our methodology is agnostic to the DNN architecture and can be applied to other tasks such as image segmentation. Our approach detects faults introduced in model parameters, hyperparameters, and the model graph, as the primary coefficients that define DNN model behaviour. The proposed approach performs analysis and comparison against source and target model parameters and hyperparameters, as well as comparison of layer activations for inputs resulting in output label discrepancies against the source and the target model. Additionally, we explore potential discrepancies introduced by graph transformations between the source and the target model during the conversion process. Then, we propose a set of strategies to mitigate conversion faults such as the replacement of model parameters of the target model with those from source, and applying graph transformations that eliminate the error from the converted model. Finally, we present an evaluation example of the conversion process for the InceptionV3 model converted from TensorFlow to TFLite. Our technique is able to detect precision errors in weights related to convolutional layers introduced by the TFLiteConverter tool, with value deviations of up to \(0.01\) between _Source_ and _Target_, which, although small, affected the model performance.
Overall, the main contributions of this paper are: 1) A novel method to systematically localize faults in DL framework conversion processes, and 2) repair strategies for said faults.
## II Related Work
A number of studies have been conducted related to faults introduced in the deployment process of DNNs. For instance, a study of 3023 Stack Overflow posts built a taxonomy of faults and highlighted the difficulty of DNN deployment [12]. Another study explores the effect of DNN faults on mobile devices by identifying 304 faults from GitHub and Stack Overflow [13], while other studies provide surveys on existing contributions towards machine learning testing components, workflow and application scenarios [14]. In addition, there are works related to exploring the test oracle problem in the context of machine learning [15, 16]. In terms of fault localization, DeepCover [17] attempts to apply a statistical fault localization approach, focusing on the extraction of heatmap explanations from DNN inputs. DeepFault [18] focuses on a suspiciousness-oriented spectrum analysis algorithm in order to detect parts of the DNN that can be responsible for faults, while it also proposes a method for adversarial input generation. DeepLocalize [19] attempts to detect faults in DNNs by converting them to an imperative representation
and then performing dynamic analysis on top of its execution traces. Regarding fault localization in DL framework specifically, CRADLE [20] tries to detect faults introduced by DL frameworks by performing model execution graph analysis. LEMON [21] leverages the metrics used by CRADLE for its analysis to apply mutation testing.
Although the above works attempt to overcome fault localization challenges for DNNs, none of them considers model conversions as a factor of fault introduction in DNNs and, therefore, no previous work explores this problem. However, several tools exist to ease the DL model conversion process, including MMdnn [7], tf2onnx [4], onnx2keras [5], onnx2torch [6], and tflite2onnx [22]. There are also some native APIs for DL framework conversion to ONNX found within PyTorch [1] and TFLite [2]. These tools are extensively used, as they all have more than 100 stars on their GitHub repositories. In addition, a recent study by Openja et al. [23] highlights the challenges of the conversion process, while our preliminary work [24, 8] explores the robustness of DNNs against different computational environment aspects, including DL framework conversions. However, the impact of DL framework conversions on DNN model correctness is not explored in-depth in the literature.
To the best of our knowledge, this paper is the first attempt focusing on the error proneness, fault localization, and repair of DL framework conversions for DNN models. We focus on image recognition models as a starting point, but our work is applicable to DNNs used in other domains.
## III Motivation
To observe the potential impact of DNN model conversions, we conducted an initial evaluation using three widely used image recognition models of varying size and architectural complexity: MobileNetV2 [25], ResNet101 [26], and InceptionV3 [27]. For each model, we used pre-trained versions from official repositories of four different DL frameworks: TensorFlow [2], TFLite [2], Keras [28], and PyTorch [1]. We refer to the pre-trained model of each DL framework as the _Source_ model. As a result, we have 4 _Source_ versions for each of our 3 models. We then convert each _Source_ model to use a different DL framework; we refer to the converted model as _Target_. To implement the conversion, we use tools that convert the _Source_ model either directly to _Target_, or to the ONNX [29] format, a popular model representation format that is designed to act as a common interchange format between frameworks. Some DL frameworks, such as PyTorch and TFLite, have native tools for this conversion; whereas for others, such as TensorFlow, we leverage popular third-party conversion tools like tf2onnx [4]. We then convert from ONNX to _Target_ using a number of widely used libraries, such as onnx2keras [5] and onnx2torch [6]. Following the conversion process, we perform pairwise comparison between _Source_ and _Target_ model inferences using the ILSVRC 2017 object detection test dataset [30], in order to detect discrepancies in classification introduced by the conversion process.
For each image of the dataset, we compare the output labels of _Target_ against _Source_ to check if any errors were introduced by the model conversion. The proportion of output label dissimilarities between _Source, Target_ pairs across all images in the dataset is shown in Figure 1. As can be seen from the empty grey cells, the conversion tool crashes in 10 out of the 36 conversions across the three DNN models, indicating that the conversion process failed. This happened due to compatibility issues between the conversion tool and a given model architecture, or the _Source_ or _Target_ DL framework. Additionally, we observe a further 10 cases where the conversion succeeded without crashing, but the _Target_ model gave considerable label discrepancies in comparison to the _Source_ model (over 35%), with a maximum observed discrepancy of 72% in the output labels when converting the ResNet101 model from PyTorch to TF. The conversion of TensorFlow models to Keras gives varying results across models, with MobileNetV2 having a considerable amount of dissimilarity (49%), ResNet101 having 4% dissimilarity, and InceptionV3 leading to a crash. This points to weaknesses in the conversion tool with certain model architectures. Finally, for conversions between TF or TFLite to PyTorch no conversion errors were observed, while when converting TF to TFLite across all models we see relatively small discrepancies, 0-10%, demonstrating a more reliable conversion. However, even small discrepancies may have non-negligible impact when these models are used in safety critical applications.
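A sketch of the per-image pairwise comparison used to produce Figure 1; the function name and the assumption that both models' outputs are collected as logit arrays over the same dataset are ours:

```python
import numpy as np


def label_discrepancy(source_logits, target_logits):
    """Percentage of images whose top-1 label differs between the Source model
    and the converted Target model; inputs are (num_images, num_classes) arrays."""
    src_labels = np.argmax(source_logits, axis=1)
    tgt_labels = np.argmax(target_logits, axis=1)
    return 100.0 * float(np.mean(src_labels != tgt_labels))
```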
From Figure 1, it is clear that the conversion process is error-prone and there is a need for a technique to localize and fix faults introduced by DL frameworks converters. We discuss our approach for fault localization and repair in Section IV.
## IV Methodology
The stages in our proposed approach for fault localization and repair are shown in Figure 2. It starts by converting a given model from the _Source_ to the _Target_ DL framework, then performs inference with both models over an input dataset, compares output labels to identify the parts of the dataset that led to different outputs, and finally performs a fault localization and repair process where possible and appropriate. We describe the fault localization and repair steps below.
Fig. 1: Pairwise comparison of output labels between _Source_ and converted _Target_ models.
### _Fault Localization & Repair_
We start by examining the tools involved in the DNN model conversion to identify if the fault is introduced during conversion from _Source_ to the ONNX format, or from ONNX to _Target_. We then complement this analysis by examining differences for three key DNN model architecture aspects: 1) Hyperparameters (such as kernel and batch size), 2) parameters (such as weights and biases), and 3) the model's graph structure (such as operations and their connections). We describe these steps below.
#### Iii-A1 Conversion Tools Analysis
Following the DNN model conversion process, when discrepancies are observed between the _Source_ and the _Target_ model, it is important to identify which part of the conversion process was responsible. The conversion process typically uses more than one tool, e.g., one for conversion to ONNX format from _Source_, and another to convert from ONNX to _Target_. We explore this over the subset of dataset inputs that presented different outputs between the _Source_ and _Target_ models, while also considering the intermediate ONNX representation: we perform inference using the ONNX intermediate representation from the conversion process and compare its outputs against those of _Source_ and _Target_. If the conversion process involves multiple steps, we repeat the comparison for all intermediate steps, so that we can better localize where the fault is introduced.
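As a sketch of this step, the intermediate ONNX model can be run with ONNX Runtime on the discrepancy-inducing inputs and its top-1 labels compared against both _Source_ and _Target_; the file name and input layout below are assumptions.

```python
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("intermediate.onnx")   # assumed intermediate model file
input_name = sess.get_inputs()[0].name

def onnx_top1(batch: np.ndarray) -> np.ndarray:
    """Top-1 labels of the intermediate ONNX model for a preprocessed batch."""
    logits = sess.run(None, {input_name: batch.astype(np.float32)})[0]
    return np.argmax(logits, axis=1)

# Agreement of onnx_top1(batch) with the Source labels but not the Target labels
# (or vice versa) indicates whether the fault was introduced before or after the
# ONNX stage of the conversion.
```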
#### Iii-A2 Parameter Analysis
A correct DNN model conversion should result in a target model having the same parameters, and producing the same output as the source model. However, if for some reason the parameters are altered (e.g., due to a precision error in the conversion process), this could potentially affect the model's correctness.
To detect this fault, we take the _Source_ and _Target_ model variants, and extract their parameters (e.g., weights and biases). We then compare the parameters between model variants across layers of the same type (e.g., convolutions, bias additions, etc), by computing \(\mathrm{mean(abs(}P_{source}-P_{target}))\), where \(P_{source}\), \(P_{target}\) are the parameters of the source and converted target models, respectively. The value of the mean difference is expected to be zero when the model parameters are unaffected, and any other value indicates that there is a difference across the parameters in a specific layer, which is a potential cause for bugs.
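A sketch of this comparison, assuming the per-layer parameters of both models have already been extracted into dictionaries keyed by matched layer names:

```python
import numpy as np

def param_difference(p_source, p_target) -> float:
    """mean(abs(P_source - P_target)) for one layer; zero means the parameters
    survived the conversion unchanged."""
    return float(np.mean(np.abs(np.asarray(p_source) - np.asarray(p_target))))

def suspicious_layers(source_params: dict, target_params: dict, tol: float = 0.0) -> dict:
    """Return the layers whose mean absolute parameter difference exceeds tol."""
    diffs = {}
    for name, p_src in source_params.items():
        d = param_difference(p_src, target_params[name])
        if d > tol:
            diffs[name] = d
    return diffs
```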
#### Iii-A3 Hyperparameter Analysis
Much like parameters, incorrectly converted hyperparameters are another potential source of error. For example, we would expect for a convolutional layer, the padding, strides, dilation, and other configurations would remain unchanged during a conversion. However, a difference could indicate a potential source of error and is marked for further evaluation in our fault localization approach.
#### Iii-A4 Layer Analysis
To detect faults that occur on a specific layer, we propose a layer-based dynamic analysis approach. Using a small, indicative subset of 5 images from the dataset that present output label discrepancies between _Source_ and _Target_ models, we perform inference and compare per-layer activations between the models. For each input, we compute the mean of differences found across activations for each layer. We then further examine the layers affected sequentially, starting from the first layer and moving forward. We focus on errors in the graph representation of that layer, as well as on implementation details. In particular, we examine if a layer or its graph neighbors are implemented in a different manner or are using different but equivalent operations (e.g., _reshape_ and _flatten_) between the _Source_ and _Target_ model.
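A sketch of the per-layer comparison, assuming the activations of both models for one input have been dumped (e.g., with the TVM debugger) into dictionaries keyed by layer name, with layer names listed in execution order:

```python
import numpy as np

def first_divergent_layer(src_acts: dict, tgt_acts: dict, layer_order: list, tol: float = 1e-5):
    """Return the earliest layer whose mean absolute activation difference exceeds
    tol, together with that difference, or None if all layers agree within tol."""
    for name in layer_order:
        diff = float(np.mean(np.abs(src_acts[name] - tgt_acts[name])))
        if diff > tol:
            return name, diff
    return None
```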
#### Iii-A5 Fault Repair
Once a difference is detected, we attempt one of the following options based on the location of the difference for fault repair.
(a) For differences in model hyperparameters, weights, and biases, the respective values in the target model can be replaced with those from the source model. Since the conversion process should preserve these values, the replacement in the target model should resolve the observed differences.
(b) For differences detected in layer activations, there are a number of measures that can be applied. First, a set of mappings can be applied to perform in-place replacement of parts of the graph that should behave similarly, but whose differences in implementation (such as the selection of a different layer type, or the addition of extra redundant layers) could cause differences in layer outputs. For example, we observed cases (e.g., MobileNetV2, PyTorch-to-Keras conversion) where the _flatten_ layer was replaced by a _reshape_ layer by the converter tools. In such cases we replace the layer in the target model with one matching the source model, adjusting tensor inputs and outputs to preserve model validity. In addition, if extra nodes are added close to the affected layer, they can be modified or removed in an attempt to eliminate errors. For instance, we observed the addition of some padding layers to the target model for a number of conversions (e.g., MobileNetV2, TF-to-PyTorch conversion). A potential fix is to simply remove this node.
Our current approach has limitations for cases where whole sub-graphs in the _Target_ model have completely different structure than the _Source_. A replacement in this scenario is non-trivial and is subject to consideration for future work. Once a fix is applied, inference is performed with the target model against the inputs causing discrepancies, and the behavior is monitored. If an improved result is detected for some or all of the images, then the fix is considered successful.
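For repair option (a), replacing a corrupted tensor of an ONNX-format _Target_ model with the corresponding _Source_ value can be sketched with the ONNX Python API; the file and initializer names below are hypothetical.

```python
import numpy as np
import onnx
from onnx import numpy_helper

def replace_initializer(model: onnx.ModelProto, name: str, source_array: np.ndarray) -> bool:
    """Overwrite one weight tensor of the Target model with the Source value."""
    for init in model.graph.initializer:
        if init.name == name:
            dtype = numpy_helper.to_array(init).dtype
            init.CopyFrom(numpy_helper.from_array(source_array.astype(dtype), name))
            return True
    return False

model = onnx.load("target.onnx")                                 # assumed Target file
# replace_initializer(model, "conv1/weight", source_weights)     # hypothetical tensor name
onnx.checker.check_model(model)
onnx.save(model, "target_repaired.onnx")
```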
Fig. 2: Fault Localization & Repair Pipeline.
### _Implementation Details_
Our methodology is implemented using Apache TVM [31], a cross-platform machine learning compiler framework. We use TVM to build and perform inference for the _Source_ and _Target_ models, and we extract each model's weights, biases, hyperparameters, and graph structure from the static parameters and graph description metadata generated during the build process. We also use ONNXRuntime [32] to perform inference with the intermediate representation. In addition, we utilize the TVM Debugger to extract layer activations during inference, as well as to set specific inputs and extract targeted outputs from hidden layers. The TVM Debugger was also used to apply model repair strategies, such as replacing weights, biases, and hyperparameters. For graph modification, we utilized the ONNX [29] API in combination with ONNX-Modifier [33]. We also used Netron [34] to inspect DNN graphs.
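A minimal sketch of how a converted ONNX model might be built and run with TVM's Relay frontend and graph executor for the comparisons above; the file name, input name, shape, and CPU target are assumptions.

```python
import numpy as np
import onnx
import tvm
from tvm import relay
from tvm.contrib import graph_executor

onnx_model = onnx.load("target.onnx")                                 # assumed model file
mod, params = relay.frontend.from_onnx(onnx_model, shape={"input": (1, 3, 224, 224)})
lib = relay.build(mod, target="llvm", params=params)                  # CPU build for illustration

module = graph_executor.GraphModule(lib["default"](tvm.cpu()))
module.set_input("input", np.zeros((1, 3, 224, 224), dtype="float32"))
module.run()
logits = module.get_output(0).numpy()
```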
### _Preliminary Evaluation_
As an initial case study, we consider the conversion of InceptionV3 using TensorFlow (TF) as _Source_ and converting it to TFLite as _Target_. The conversion involved two utilities, the native API of _TFLite_ (_TFLiteConverter_), and _tf2onnx_. We observed label differences between _Source_ and _Target_ models for 4% of the input images (240 out of 5500 images). We were interested in this particular case study because the conversion error is low but still present in a small number of images. This "subtle" failure is of particular interest for safety critical applications, where edge case behavior is more important.
For fault localization, we start by performing an analysis of the conversion tools on the images showing label differences. As seen in Table I, for three sample images label differences occur when converting models from TF to TFLite, but not in the conversion to ONNX. As a result, we find that the TFLiteConverter is the problematic part of this particular conversion process. We perform further investigation of this tool in the next steps.
We then proceed with _parameters_ and _layers_ analysis between _Source_ and _Target_ to further examine the effects and the potential reasons for the problem. We consider an image presenting no discrepancies and two images presenting minor and major label discrepancies (by calculating and comparing Kendall's Tau coefficient [35] for the top-5 inference labels). We present the results in Figure 3, where the _Parameters_ line indicates the mean of differences per-layer (x-axis) in parameters for two types of layers, convolutions and bias additions. The remaining lines depict the differences in activations (mean of tensor values comparison) for each layer between _Source_ and _Target_. Image 1 presented no discrepancies, Image 2 presented small discrepancies, and Image 3 presented major discrepancies, measured using Kendall's Tau coefficient. We observe layer 2 started presenting discrepancies between _Source_ and _Target_ for all images under test, affecting the model early in the process. Additionally, there is a spike in the difference observed in layers 170 onwards for Image 3 (which presented large discrepancies between _Source_ and _Target_). We examined if the cause of the discrepancy was errors introduced in the model weights while using TFLiteConverter in the conversion process. In particular, we performed a manual _Source_ and _Target_ model parameters inspection using Netron [34], which confirmed the fault localization finding, as we observed precision errors in the generated ONNX graph from _Source_.
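As a sketch of the per-image discrepancy measure mentioned above, Kendall's Tau over the top-5 labels of a single image can be computed as follows, treating the _Source_ top-5 classes as the reference ranking; this is one plausible formulation rather than necessarily the exact one used.

```python
import numpy as np
from scipy.stats import kendalltau

def top5_tau(source_logits: np.ndarray, target_logits: np.ndarray) -> float:
    """Kendall's Tau between the rankings that Source and Target assign to the
    Source model's top-5 classes for a single image."""
    top5 = np.argsort(source_logits)[::-1][:5]            # Source's top-5 class ids
    source_rank = np.arange(5)                            # 0..4 by construction
    target_rank = np.argsort(np.argsort(-target_logits[top5]))
    tau, _ = kendalltau(source_rank, target_rank)
    return float(tau)
```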
In order to fix the error, we replaced the model weights of the _Target_ model with those from the _Source_, and performed inference against the subset of images presenting discrepancies between the models. The outputs of the updated _Target_ were identical to the original _Source_, resolving the issue, and proving its cause.
## V Conclusions and Future Work
We presented a novel fault localization approach for errors encountered during DL framework conversion in image recognition models. It focuses on key DNN model elements such as parameters, hyperparameters, and graph architecture. We also propose strategies to repair the detected errors, such as correcting corrupted model weights. As an example, we examined InceptionV3 when converted from TF to TFLite, which resulted in discrepancies for a small fraction of the input images. We used our approach to localize the conversion bug and fix it. As future work, we aim to evaluate our approach against all conversion tools in Figure 1 and other image recognition models. We will also apply it to other DL tasks such as object detection. Finally, we plan to expand our fault repair strategies to address conversion errors that cause significant changes in _Source_ and _Target_ model graphs.
Fig. 3: Layer-wise evaluation of the differences between InceptionV3 model sourced from TensorFlow, and converted to TFLite. _Parameters_ shows the mean difference between their weights and biases for convolutional and bias addition layers. _Image 1_, _Image 2_, and _Image 3_ show models’ differences in activations for two inputs across _Source_ and _Target_ models.
|
2301.02396
|
Don't follow the leader: Independent thinkers create scientific
innovation
|
Academic success is distributed unequally; a few top scientists receive the
bulk of attention, citations, and resources. However, do these ``superstars"
foster leadership in scientific innovation? We introduce three
information-theoretic measures that quantify novelty, innovation, and impact
from scholarly citation networks, and compare the scholarly output of
scientists who are either not connected or strongly connected to superstar
scientists. We find that while connected scientists do indeed publish more,
garner more citations, and produce more diverse content, this comes at a cost
of lower innovation and higher redundancy of ideas. Further, once one removes
papers co-authored with superstars, the academic output of these connected
scientists diminishes. In contrast, authors that produce innovative content
without the benefit of collaborations with scientific superstars produce papers
that connect a greater diversity of concepts, publish more, and have comparable
citation rates, once one controls for transferred prestige of superstars. On
balance, our results indicate that academia pays a price by focusing attention
and resources on superstars.
|
Sean Kelty, Raiyan Abdul Baten, Adiba Mahbub Proma, Ehsan Hoque, Johan Bollen, Gourab Ghoshal
|
2023-01-06T06:17:06Z
|
http://arxiv.org/abs/2301.02396v1
|
# Don't follow the leader: Independent thinkers create scientific innovation
###### Abstract
Academic success is distributed unequally; a few top scientists receive the bulk of attention, citations, and resources. However, do these "superstars" foster leadership in scientific innovation? We introduce three information-theoretic measures that quantify novelty, innovation, and impact from scholarly citation networks, and compare the scholarly output of scientists who are either not connected or strongly connected to superstar scientists. We find that while connected scientists do indeed publish more, garner more citations, and produce more diverse content, this comes at a cost of lower innovation and higher redundancy of ideas. Further, once one removes papers co-authored with superstars, the academic output of these connected scientists diminishes. In contrast, authors that produce innovative content without the benefit of collaborations with scientific superstars produce papers that connect a greater diversity of concepts, publish more, and have comparable citation rates, once one controls for transferred prestige of superstars. On balance, our results indicate that academia pays a price by focusing attention and resources on superstars.
## I Introduction
"To truly make an apple pie from scratch you must first invent the universe"--a quote attributed to Carl Sagan [1]--illustrates the idea that the process by which individuals create is contingent upon the elements on which that creation is based. Whether creating a new piece of music, going about daily routines, or engaging in scientific research, people's actions are founded on the information, experiences, and relationships that they have established by themselves and through others [2; 3; 4; 5]. Each person has their own basis of knowledge that stems from their own lived experiences while also existing in a network of relationships through which they share experiences and knowledge with each other, thereby informing a collective understanding among a network of connected individuals [6]. Within such networks, hierarchies can emerge in which some actors exert greater social influence over the network and thus the creative process that it supports, while others may influence only those closest to them or no one at all [7]. This social hierarchy is common in the societal dynamics of government and politics, where some individuals and institutions exert a great degree of influence over the flow of information in the system and opinion formation [8; 9; 10].
Academia is not immune from the emergence of social hierarchies; some academics can function as figures of authority due to the merit and influence of their work and their prominent position in a network of academic collaborations. Citations as an indicator of academic influence [11] have long been known to be distributed very unequally[12], with a minority of a few scientists receiving most citations. Such inequality may be increasing at a global level[13], at least with respect to citation numbers. In academic publishing, biasing effects like this have been studied under the lens of the Matthew Effect, where success begets more success and early success compounds into a cumulative advantage as the "rich get richer" [14]. There are arguments that this effect is beneficial for academia; the rewards of top researchers are proportional to their contributions, which ensures the "epistemic security" of the field [15]. This thinking is aligned with the notion that science should operate as a meritocracy; those who contribute the most are also valued the most, and will therefore be most influential. Indeed, there is a high degree of trust in our most successful academics and the value of their mentorship. For instance, junior researchers collaborating with top scientists at the early stages of their career are likely to become top-cited scientists themselves,
especially those at less prestigious universities [16]. Inexperienced academics can benefit from apprenticeships with top scientists; the "chaperoning" of early-career scientists leads to higher rates of publication in high-impact journals [17]. These relationships are frequently mutually beneficial. Less visible authors benefit from more opportunities to publish papers in high quality journals that attract larger audiences, whereas top scientists gain collaborators with unique skills to produce more high quality work [18]. Close collaboration of less visible academics with those in the upper echelons can furthermore create opportunities for a first-mover advantage, inducing a positive feedback loop and early bandwagoning of innovative ideas [19].
While top academics (sometimes referred to as "superstars") may make consistent and high-impact contributions that benefit their field and collaborators, their status as superstars may also have deleterious effects due to the subsequent concentration of resources and attention. For instance, it has been shown that the collaborators of academic superstars experience a 5 to 9% drop in publication rates after the sudden death of that superstar [20], highlighting their dependence on the superstar's collaboration. In fact, it is unclear whether collaborating with superstars truly fosters independent career development [21; 22]. Furthermore, superstars can induce a high degree of inequality in the distribution of research funding due to a funding Matthew-effect. Those who receive funding accumulate twice as much research funding afterwards compared to those who submitted similarly valued proposals but found themselves, by chance, just below the funding threshold. There is no evidence that this accumulation of research funding is due to actual achievements enabled by previous funding [23; 24]. If successful collaborations with superstars lead to early funding success, this can induce a superstar-fueled funding cycle that increasingly widens the gap between scientific haves and have-nots.
The topology, structure, and characteristics of scientific collaboration networks may play an important role in these effects since they shape both the production and dissemination of ideas, potentially with conflicting outcomes. Tightly connected networks could be more efficient in distributing and leveraging knowledge, thereby yielding higher productivity, but may at the same time lead to a decline of diversity, reducing exploration and discovery [25; 26; 27]. Although some spillover effects may occur, i.e. collaborators of highly-acclaimed authors benefit by proxy [28], it is not clear whether the concentration of attention and resources
towards superstars yields more novel and innovative research. This is a particularly relevant issue with the rise of interdisciplinary research which relies on the ability of scientists to collaborate in equitable teams that foster creativity and innovation across various research fields [29].
To investigate the effects of superstar influence on academic productivity, impact, and innovation, we perform a comprehensive analysis of the American Physical Society corpus. Following [20], we define superstars as academics who are among the top 0.1% in terms of their h-index [30; 31]. We extract the semantic content of over 250,000 abstracts, defining a number of information-theoretic measures to quantify the novelty and innovation of each paper. We augment this with analysis of publication and citation rates, and examine the difference in academic output between researchers who collaborate with or frequently cite papers by superstars against those with little-to-no connection to such superstars. We find that at the individual level, collaborators and frequent citers of superstars publish more, garner higher citations and produce papers with more diverse content compared to other academics. However, their work is no more innovative than the rest of the corpus and its content is more redundant. Further, once one excludes papers co-authored with superstars, their publication and citation output are no different from the rest of the corpus and in some cases output is lower.
Focusing on early career researchers, we find that those who frequently collaborate with superstars in the beginning of their careers do eventually go on to produce impressive academic output, although once the collaboration is removed, their output in terms of publication rates, citation impact, and innovation is significantly diminished. On the other hand, early career researchers that produce innovative content without the benefit of early superstar collaboration continue to produce such content over the rest of their careers. They publish more than early collaborators of superstars and accrue similar citation numbers, once one controls for the collaboration itself.
## II Results
### Data
We use the American Physical Society (APS) corpus [32] that contains articles published in APS journals since 1893. The data set contains full citation data, i.e. the citations pointing from the references of one article to another, allowing a reconstruction of the full citation network among all articles, including article-specific fields such as DOI, journal, volume, issue, first page and last page or article id and number of pages, title, authors, affiliations, publication history, PACS codes, table of contents heading, article type, and copyright information. Given that the data does not include article abstracts, we used a web-scraping algorithm [33] to collect abstracts for 250,628 articles corresponding to 35-40% of all published papers across the different APS journals (Fig. S1). We note that around 1% of these articles have references not contained in the APS citation network, and on average we scraped abstracts for 38% of paper references. The distributions of citations and h-index are both heavy-tailed (Fig. S2), with the average number of citations being 14.4 and the average h-index 1.74. Author disambiguation was done using a rule-based scoring method [34] (cf. Sec. S1.2). We consider authors who first publish on or after 1970, and define superstars as those with the top 0.1% of h-index in the corpus, corresponding to an h-index threshold of 21. This yields 303 superstars among 292,394 authors. The summary statistics can be found in Tab. S1.
In order to extract topics from the collected abstracts, we use an unsupervised Latent Dirichlet Allocation (LDA) algorithm on phrases (P-LDA) [35] to establish vector embeddings for phrases and documents within our corpus. Stop words in the corpus were removed, all words were lemmatized, and phrases were determined based on a significance score that determined whether or not phrases occurred due to random chance. These vector embeddings have dimensionality \(k\) corresponding to the number of topics defined for our corpus. P-LDA utilizes Gibbs Sampling to generate distributions of topics over phrases as well as documents [36], from which novelty scores can be extracted based on topic-spread. We choose a number of topics \(k\) based on the UMass coherence measure [37], the value of which first stabilizes at \(k=25\) topics (Fig. S3). Tab. S2 shows the top 10 terms per topic. The resulting output for each document \(u\) is a \(k\)-dimensional vector \(\mathbf{v}^{u}\) whose elements
correspond to the frequency of topic \(i\) extracted from its abstract (example in Tab. S3).
### Novelty, innovation and redundancy
Novelty detection in the literature has been implemented in a variety of ways [38], such as contextualizing novelty in machine learning as information retrieval [39; 40], distant combinations of ideas via citation relations [41], first-pass combinations of concepts never before connected [42], knowledge-graphs of concepts within social networks [26], and agent-based simulations of social and individual learning [27].
Here we rely on document-level embeddings that represent a distribution of all topics contained within the abstract of given paper, using which one can define the topic diversity in terms of a paper, its references, and articles that cite the paper. Using this, we define a variety of metrics capturing different aspects of novelty and innovation.
Coupling connections between authors and the content of their works can then elucidate the influence that superstars have on the success of and novelty produced by other academics.
_Entropy:_ For a given document \(u\), we define the Shannon entropy as
\[I_{u}^{(S)}=-\sum_{i=1}^{k}v_{i}^{u}\ln v_{i}^{u}, \tag{1}\]
The expression quantifies the average level of "surprise" or uncertainty over the outcomes of a random variable [43]. In this context, papers focusing on a limited number of topics in their abstracts will yield low values of \(I_{u}^{(S)}\), whereas those with a wide diversity of topics will yield a larger value of the entropy.
_Reference and Citation Diversity:_ While \(I_{u}^{(S)}\) measures the "surprise" with respect to a paper's content, in this case its abstract, references and citations refer to the degree to which the ideas in a given paper were inspired by other papers (references) or of inspiration to other papers (citations). We can thus measure the novelty of a paper, or its Information Diversity [44], by evaluating the dispersion of the topics of its references or the citations it receives. The greater the variance of the topic distribution, the higher the information diversity. For a set \(X^{u}\) that can represent either the references in paper \(u\) or citations to paper \(u\), we
define the quantity,
\[I_{u}^{(X)}=\frac{1}{|X^{u}|}\sum_{l\in X^{u}}\left[1-\cos\left(\mathbf{v}^{l}, \overline{X^{u}}\right)\right] \tag{2}\]
where \(\cos\left(\mathbf{v}^{l},\overline{X^{u}}\right)\) is the cosine similarity of the vector embedding of a particular reference/citation \(\mathbf{v}^{l}\) with the average over the vector embeddings of all references/citations in the set \(X^{u}\). We can as such define _reference diversity_ and _citation diversity_ as the information diversity over the references from a paper and citations to the paper respectively.
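As a minimal numpy sketch of Eqs. (1) and (2), assuming each document is represented by its \(k\)-dimensional topic vector:

```python
import numpy as np

def shannon_entropy(v: np.ndarray) -> float:
    """Eq. (1): entropy of a document's topic distribution (zero entries are skipped)."""
    p = v[v > 0]
    return float(-np.sum(p * np.log(p)))

def information_diversity(X: np.ndarray) -> float:
    """Eq. (2): mean cosine distance of each reference/citation topic vector
    (a row of X) from the average vector of the set."""
    mean_vec = X.mean(axis=0)
    denom = np.linalg.norm(X, axis=1) * np.linalg.norm(mean_vec) + 1e-12  # guard against zero norm
    cos = (X @ mean_vec) / denom
    return float(np.mean(1.0 - cos))
```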
_Innovation:_ The metrics defined thus far are based on topic models expressed as topic distributions per document derived from the words in their content (abstracts). These metrics capture the topic diversity of the paper itself, or of its influences, but do not express the degree to which the paper expanded the literature through innovation. In other words, they express what documents themselves are about, but not whether this adds to the diversity of the literature. We therefore define Innovation as the degree to which the document adds topics in new combinations to the literature [45; 46]. Specifically, innovation in this context is a measurement of when terms were first introduced or combined in the corpus (cf. Sec. S1.4 and Fig. S4). Coupled with the novelty measures, this allows us to track how the diversity of ideas correlates with new conceptual recombinations and co-occurrences of terms. Following this logic, we define the Innovativeness of paper \(u\) as
\[I_{u}^{(I)}=\frac{1}{2}\sum_{w_{1}\neq w_{2}\in u}\mathcal{I}(w_{1},w_{2};u) \tag{3}\]
where \(w_{1}\) and \(w_{2}\) are distinct terms in paper \(u\), \(\mathcal{I}(w_{1},w_{2};u)\) is an indicator function that is 1 if terms \(w_{1}\) and \(w_{2}\) are first seen within the corpus in paper \(u\) and 0 otherwise, and the \(\frac{1}{2}\) prefix accounts for double counting. To remove spurious conceptual links due to chance or extreme rarity, we calculate a point-wise mutual information for all links as the log ratio of co-occurrence probability over the individual probabilities of each concept [46]. In Fig. S5 we determine the Pearson's \(r\) correlation coefficients between each measure and find only weak correlations, indicating that each measure captures a different aspect of academic output.
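A sketch of Eq. (3), assuming papers are processed in chronological order and each is represented by its set of extracted terms; the point-wise mutual information filtering described above is omitted here.

```python
import itertools

def innovation_scores(papers):
    """Eq. (3): for each (paper_id, set_of_terms) in chronological order, count the
    term pairs that co-occur for the first time in that paper."""
    seen_pairs, scores = set(), {}
    for pid, terms in papers:
        pairs = {frozenset(p) for p in itertools.combinations(sorted(terms), 2)}
        new_pairs = pairs - seen_pairs
        scores[pid] = len(new_pairs)
        seen_pairs |= new_pairs
    return scores
```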
_Redundancy:_ Finally, in a related context, in the field of creative ideation, it has been reported that inspirees stimulated by highly creative alters tend to generate more creative ideas [47; 48; 49]. However, as a group, the inspirees' ideas were found to be similar to each other,
leading to redundancy in generated ideas over time at the group level. To check whether a similar effect manifests in academic publishing, we compute the cosine similarity score between papers \(u,u^{\prime}\) in the set \(P(G,s,t)\) thus
\[\text{Sim}(G,s,t)=\frac{2}{\left|P(G,s,t)\right|\left(\left|P(G,s,t)\right|-1 \right)}\sum_{u,u^{\prime}\in P(G,s,t)}\cos(\mathbf{v}^{u},\mathbf{v}^{u^{ \prime}}). \tag{4}\]
### Superstar statistics
We next examine whether the novelty and innovation produced by superstars are significantly different from the rest of the academic corpus. In Fig. 1 we plot the Reference and Citation diversity (Eq. (2)), the Shannon entropy (Eq. (1)) and Innovation (Eq. (3)) comparing the set of superstar academics against the rest of the authors in the corpus. In terms of reference diversity, citation diversity and Shannon entropy, superstars outperform the remaining academics by 20%, 15%, and 2% respectively. That is, superstars are inspired by a higher diversity of content, publish works that are more conceptually diverse, and inspire a wider array of publications than non-superstars. The starkest contrast can be seen in terms of Innovation, where there is a factor of ten difference between superstars and other academics, indicating that the former are more prolific in introducing new combinations of terms. We note that there is a monotonic dependence of the metrics on the number of publications for all academics, although the effect is more pronounced for superstars (Fig. S6). Furthermore, there is also a monotonic dependence between citations received by a paper \(u\) and the novelty/innovation metrics (once again more pronounced for superstars), indicating that an increase in conceptual diversity and the ability to connect concepts for the first time is rewarded in terms of more attention paid to that paper (Fig. S7).
### Superstar influence
Having established that superstars outperform other academics in terms of our metrics, we next determine to what degree superstars affect the academic output of their collaborators and their "inspirees" (those inspired by their work). Inspirees are authors that cite a superstar's papers, for whom we determine the degree of inspiration by the frequency of
citations. We examine inspirees both at the group- and individual-levels. At the group-level, we center the superstar in a network of inspirees where the degree of inspiration is the number of times a researcher cites the superstar. We then partition the inspirees into groups based on their degree of inspiration, where the upper bounds for each bin are the top 10% of inspirees, 20%, 30%, 50%, and 100%. These groups represent increasingly weakening ties to a given superstar; those in the top 10 percent are the most actively inspired, while the bottom 50 percent typically cite the superstar only once. Note that some inspirees in the bottom 50% group of one superstar may be in the top group of another superstar. The increasing bin sizes are chosen to account for the decreasing frequency of inspired citations among the least-inspired inspirees, such that there is a sufficient number of papers to compare between groups.
Given that we are interested in the temporal evolution of superstar influence on the novelty and innovation of the inspirees, we denote the year of the first superstar publication as \(t_{0}=0\) and for every subsequent year \(t>t_{0}\), we consider the set of publications by the inspirees who cite the superstar. For each partitioned group, we calculate the average novelty of all of the publications in year \(t\) per partition. Denoting the set of papers inspired by superstar
Figure 1: **Average author-level statistics of novelty and innovation A** Reference Diversity, **B** Citation Diversity, **C** Shannon Entropy, **D** Innovation. The orange bar is for superstars (h-index \(\geq\) 21) and the blue bars correspond to all other authors in the corpus.
\(s\) for partition \(G\) at year \(t\) as \(P(G,s,t)\), the average novelty scores are computed as
\[\langle I_{u}^{(l)}\rangle_{G,s,t}=\frac{1}{|P(G,s,t)|}\sum_{u\in P(G,s,t)}I_{u}^ {(l)} \tag{5}\]
where \(I_{u}^{(l)}\) with \(l\in\{S,X,I\}\) is the corresponding novelty or innovation score of paper \(u\).
We plot the results of our analysis in Fig. 2. In terms of the temporal evolution of the Shannon entropy, while there is a monotonic increase--reflecting an increase in the body of knowledge with time (Fig. S8)--we find little-to-no differences across the groups as seen in Fig. 2**A**. Averaging over the entire temporal range also indicates a flat trend (Fig. 2**D**). Similar trends are seen for the reference diversity both in terms of its temporal evolution (upper panel of Fig. S9**A,B**) as well as their temporally averaged values (lower panel). Unlike the entropy or reference diversity, there is a decreasing trend in time for the citation diversity. We observe a 5% decrease in the measure between those in the top 10% as compared to the bottom 50%. Figure 2**B,E** indicates the same trend for Innovation which also decreases in time across all groups, reflecting a saturation in the number of combinations of new terms
Figure 2: **Novelty and Innovation statistics at the group-level** Temporal trajectory of average paper-level statistics. **A**: Shannon Entropy, **B**: Innovation, **C**: Citations per-paper. Aggregated group-level statistics **D**: Shannon Entropy, **E**: Innovation, **F**: Citations per-paper. Curves indicate averages, shaded area 95% confidence interval.
that are combined by authors as their career progresses. The difference between the top and bottom groups is now around 15%. Finally, citations to papers experience an initial boost and then decrease in time as seen in Fig. 2**C**, with now much clearer differences between the groups. Indeed, there is a 40% difference in citations per paper between the most and least inspired groups as seen in Fig. 2**F**.
In terms of redundancy, in Fig. S9**C** we plot the cosine similarity (Eq. (4)). As the figure indicates, across all groups there is a decreasing trend in the temporal evolution of the similarity, yet a clear difference exists, whereby papers published by the top 10% are on average 8% more similar to each other in terms of content when compared to the bottom 50%. Taken together, the results indicate that groups of authors who frequently cite superstar papers do get a citation boost as compared to other sets of authors. However, their output is modestly more innovative and equally novel as compared to the rest of the corpus. Rather, their content is more redundant than that of the remaining sets of authors.
Next, we dis-aggregate the group-level results and examine the degree of superstar influence at the individual author level. In Fig. 3 we plot the averages of the novelty and innovation metrics as well as citations and publication counts across authors as a function of the fraction of their papers that cite superstars. Given that many authors co-publish
Figure 3: **Novelty and Innovation statistics at the individual author-level**. **A** Reference Diversity, **B** Citation Diversity, **C** Shannon Entropy, **D** Innovation, **E** Average citation count, **F** Average publication count.
with superstars, the blue curve indicates the results when including such papers, while the orange curve shows the results excluding these papers. Figure 3**A-C** indicates that as authors cite more superstars they experience an increase in reference and citation diversity as well as the Shannon entropy, irrespective of whether one includes their collaborations with superstars. While we see no indications of novelty of content being driven by superstar influence at the group level, at the individual level the benefits are clear. On the other hand, when looking at Innovation (Fig. 3**D**), the trend is flat when including all papers and decreasing when co-authored publications are excluded. Indeed, it appears that the more authors cite superstars, the _less innovative_ their own publications become (i.e. those not co-authored with a superstar). The benefit of collaborating with a superstar becomes even more apparent when looking at citations (Fig. 3**E**) and number of publications (Fig. 3**F**). For the former, when including collaborations there is a dramatic benefit in terms of garnered citations (approximately 67% more citations on average) that drops considerably when excluding collaborations. Indeed, the citation benefit appears to be driven primarily by being collaborators of superstars, who by definition have the largest number of citations to their papers. The same appears to be the case for the latter, with the number of publications increasing when including collaborations, and decreasing when excluded.
### Early Collaborators and Early Innovators
The results thus far provide evidence that academics inspired by superstars produce output with diverse content that receives visibility via citations, while not necessarily being innovative in the sense of tying together new concepts. On the other hand, there is also evidence that these features are significantly boosted by direct collaboration with superstars, and when left to their own devices their publication output, novelty and innovation are lower than the rest of the corpus. Indeed, this raises the question of whether superstars foster independent individual success or rather inhibit it. For instance, as shown, at the aggregate level, the group of authors that cite superstars the most often tend to publish on mostly the same topics.
To further probe this we restrict our analysis to early-career scientists. Given that findings from prior studies have shown that collaboration with successful scientists provides a boost
for early career researchers [16], and that early success generates a cumulative advantage of long-term career success [14], we define _early collaborators_ as those authors who collaborate with superstars in at least half of their papers in the first five years of their career. As a point of comparison, we define another set of authors who do not collaborate with, or cite superstar papers, but are in the top 10% of the corpus in terms of Innovation as measured by their first five years of publications. We term these authors _early innovators_. We use innovation as a metric, given that this is the measure by which superstars outperform other academics the most (Fig. 1**D**) and therefore might serve as a robust indicator of academic potential.
Figure 4: **Citations and Innovation for frequent collaborators and early innovators A** Citations per paper when including superstar papers, **B** The same when excluding superstar papers. **C** Temporal evolution of Innovation. **D** The same when excluding superstar papers. The horizontal axis \(t-t_{0}\) indicates the time elapsed from the \(t_{0}\) the time of first publication for authors in either group.
For academics in each group we track the temporal evolution of citations per paper, the number of publications, and Innovation, measured from the date of first publication \(t_{0}\) for authors in either group. Early collaborators get more citations per paper (Fig. 4**A**) and publish more than early innovators (Fig. S10**A**), particularly within the first ten years of their career. However, when one removes superstar publications, the trend reverses: early innovators publish more (Fig. S10**B**) and garner a rate of citations comparable to the other group (Fig. 4**B**). Additionally, the early innovators maintain a higher degree of Innovation throughout their careers as compared to early collaborators (Fig. 4**C, D**), with or without including collaborations with superstars. Thus the evidence suggests that while early career scientists indeed get a boost from collaborating with superstars, their own academic output is less innovative and equally visible in terms of citations, as compared to other early career scientists who produce innovative output without the benefit of such collaborations.
## III Conclusion and Discussion
In the exponentially growing knowledge-base of academia, in which visibility and funding are increasingly biased towards top academics and institutions, we examine the influence that superstar academics have on the community as a whole and in terms of novelty and career success. Superstars provide an irreplaceable source of novel ideas and contributions at rates that exceed those of other academics in the corpus; our metrics support that their accolades are well deserved and should be rewarded as such. We find superstars are highly novel and inspire a higher diversity of concepts among their followers and collaborators. However, they do inhibit innovation potential. Those academics most inspired by a superstar are individually more diverse in their papers, but at the group level add little more intrinsic novelty than groups more weakly inspired by the superstar, even though they achieve higher citations.
Additionally, we find indications of a strong Matthew Effect whereby academics who cite a superstar highly receive higher citations when collaborating with the superstar than without, despite higher gains in concept diversity than academic counterparts. Though collaboration with successful academics can stimulate a successful career path, we find these collaborations
can stifle innovation and may not provide the best indicator of long-term independent career success.
Collaboration is a requirement to tackle increasingly difficult interdisciplinary problems. Superstars are well-positioned to foster interdisciplinary research efforts by supporting early-career researchers. Although the latter receive a citation boost when collaborating with a superstar, this does not imply that they are developing more novel work than their colleagues who are less connected to top academics. In fact, our results indicate that those closest to a superstar show the lowest innovation potential. This is slightly surprising given that the literature has shown that junior researchers who collaborate with superstars are more likely to publish in high quality journals and have increased chances of engaging in high quality research with other top scientists. On balance, however, we find that this does not stimulate long-term independent career success. This could be an indication of individuals getting lost in the wake of a superstar, meaning these researchers "bandwagon" off the ideas and visibility of their respective superstars and iterate on the superstar's work. Although there is value in iterating upon already developed research questions, this may not foster innovative work and stimulate individual careers. Indeed, very recently it has been shown that there is a decline in disruptive ideas in both scientific publications and patents [50]. The authors attribute this to an ever increasing reliance on a narrower set of extant scientific knowledge on which to build ideas, a finding very much in line with our observation that followers of superstars produce redundant and less innovative content as a group.
The observed effects could be a consequence of superstars' strong hold over their respective fields. It has been shown that paradigm shifts in thinking occur after the sudden deaths of superstars. Collaborators of superstars suffer a drop in publication rate after their superstar's death, and the field may experience a surge of contributions by outsiders who are disproportionately likely to be highly-cited [51]. One can infer that collaborators of superstars are successful because they are collaborating with superstars. Care should be taken when considering these proteges themselves for matters of funding and academic hiring. If the goal is to foster highly novel work, elements outside of prestige and social connection, such as efficacy, equity, and innovation, should be considered.
Our findings are not limited solely to early innovators, collaborators, and inspirees. Though we provide early innovators as an example, many other groups [52] can be isolated and studied in the way we have done here to identify promising academics based on early signatures of novelty or a range of social parameters. We outlined multiple different definitions of novelty in the introduction which we have not further developed in this study. Implementing the different definitions and distinguishing different types of novelty can elucidate what types of novelty are stifled or enhanced by different social configurations.
A subject that we have not probed but is directly relevant to our discussion is the matter of funding. In recent times, funding has increasingly become more biased towards top institutions [53], with 90% of NSF funding in 2018 going to 22% of funded institutions, serving 43% of all institutions and 34% of underrepresented minorities [54]. This is coupled with a history of funding disparities with respect to race and underrepresented communities [55; 56; 57]. Additionally, underrepresented groups produce novel works at higher rates yet are taken up by other scholars at lower rates than novel contributions by gender and racial majorities [46]. Equitable funding programs have been shown to enhance research infrastructure, investigator capabilities, and intra- and inter-university collaborations at less prominent institutions [58]. As we have shown, those that are least influenced by superstars innovate the most and consequently have higher citation rates. Coupling these results with added attention to equitable funding practices [59] we believe will reduce the growing inequality in academia and stimulate novel and innovative research.
Finally, we note that our investigation necessarily comes with limitations. Given our sole focus on the APS body of literature, one should be careful when extrapolating these findings to other academic disciplines. Moreover, our scraped abstracts cover only an incomplete subset of the APS corpus, so a full corpus with the entire citation network would give a more accurate picture.
|
2306.01638
|
Do we become wiser with time? On causal equivalence with tiered
background knowledge
|
Equivalence classes of DAGs (represented by CPDAGs) may be too large to
provide useful causal information. Here, we address incorporating tiered
background knowledge yielding restricted equivalence classes represented by
'tiered MPDAGs'. Tiered knowledge leads to considerable gains in
informativeness and computational efficiency: We show that construction of
tiered MPDAGs only requires application of Meek's 1st rule, and that tiered
MPDAGs (unlike general MPDAGs) are chain graphs with chordal components. This
entails simplifications e.g. of determining valid adjustment sets for causal
effect estimation. Further, we characterise when one tiered ordering is more
informative than another, providing insights into useful aspects of background
knowledge.
|
Christine W. Bang, Vanessa Didelez
|
2023-06-02T15:58:22Z
|
http://arxiv.org/abs/2306.01638v1
|
# Do we become wiser with time?
###### Abstract
Equivalence classes of DAGs (represented by CPDAGs) may be too large to provide useful causal information. Here, we address incorporating _tiered_ background knowledge yielding restricted equivalence classes represented by 'tiered MPDAGs'. Tiered knowledge leads to considerable gains in informativeness and computational efficiency: We show that construction of tiered MPDAGs only requires application of Meek's 1st rule, and that tiered MPDAGs (unlike general MPDAGs) are chain graphs with chordal components. This entails simplifications e.g. of determining valid adjustment sets for causal effect estimation. Further, we characterise when one tiered ordering is more informative than another, providing insights into useful aspects of background knowledge.
## 1 Introduction
We consider equivalence classes of DAGs represented by completed partially directed acyclic graphs (CPDAGs), occurring as outputs of causal discovery algorithms. A first characterisation of equivalent DAGs was given by Verma and Pearl (1990) and a full characterisation of CPDAGs by Andersson et al. (1997). Often, domain expertise provides additional information about shared features of the graphs in a class. Restricting the equivalence class by background knowledge yields a type of partially directed acyclic graph (PDAG) that is potentially much more informative due to more induced edge orientations. Meek (1995) provided a set of orientation rules to obtain a graph encoding the maximal implied information, and the resulting graph is then a maximally oriented partially directed acyclic graph (MPDAG) (Perkovic et al., 2017). While a CPDAG represents an independence model common to all DAGs in the equivalence class, an MPDAG represents an independence model as well as additional causal or directional information that is common to all DAGs in a restricted equivalence class. DAGs and CPDAGs are special cases of MPDAGs; DAGs are MPDAGs with full (or sufficient) background knowledge, while CPDAGs are MPDAGs with no (or redundant) background knowledge. A general characterisation of MPDAGs was given by Fang et al. (2022). The interpretation of MPDAGs was described in detail by Perkovic et al. (2017) and is considerably more involved than that of CPDAGs.
Background knowledge can be induced by, e.g., well-established causal or logical relations. Some kinds of knowledge, e.g. temporal or sequential structures, imply that the nodes can be partitioned into ordered tiers. This is the case in many settings where longitudinal data is collected, e.g. cohort or panel studies, common in sociology, epidemiology etc. In particular, this kind of data structure is used in the field of life course epidemiology (Kuh and Ben-Shlomo, 2004). Tiered background knowledge is typically unambiguous and it is intuitively obvious that it must be useful. Indeed, implementations of algorithms for constraint-based causal discovery with a given tiered structure exist (e.g. Scheines et al. (1998)), and have been applied to cohort data for life course analyses (Petersen et al., 2021; Foraita et al., 2022), but the general properties of these restricted model classes have not yet been investigated. Here, we provide the first formal in-depth analysis of equivalence classes restricted by tiered knowledge. We show that there are several desirable properties, distinguishing tiered from other kinds of background knowledge. Thus, we focus on MPDAGs arising from imposing a tiered ordering, which we term 'tiered MPDAGs'. We show that under the key properties of completeness and transitivity, tiered background knowledge cannot induce partially directed cycles, and, moreover, that tiered MPDAGs are chain graphs with chordal chain components. This allows us to, e.g., apply common methods for identifying causal effects using CPDAGs to tiered MPDAGs without any additional processing.
While temporal structures will often be the main source for tiered background knowledge, tiers are slightly more
general. Information about logical causal directions may be available, for instance between environment and individual or between cells and molecules. When eliciting such background knowledge, achieving more detail may require more effort, or it may be possible but more costly to achieve by designing, say, a cohort study with finer waves. Existing data from cohort studies are organised in a readily available tiered structure. But within the same wave it may be possible to further subdivide the nodes using logical, temporal or similar expertise; e.g. the first wave of a children's cohort may be composed of variables before and after birth, pertaining to mother or child etc. With a view to eliciting such details or designing a cohort study, it is therefore interesting to characterise when different tiered restrictions are redundant versus when they are most informative.
In Section 2 we formalise the concepts of (tiered) background knowledge and restricted equivalence classes. We then provide a formal characterisation of tiered MPDAGs: Section 3 describes some of their properties, and Section 4 compares different tiered orderings in terms of informativeness. In Section 4.3, we illustrate which types of tiered knowledge are particularly informative via simulation, and in Section 4.4 we provide a practical example. Section 5 addresses how tiered information structurally differs from other types of background knowledge. Throughout, we rely on standard notation for (causal) graphical models; an overview of relevant definitions can be found in Section A of the Supplement, and all proofs can be found in Sections D and E of the Supplement.
## 2 Background Knowledge
While the DAGs in an equivalence class have exactly the same conditional independencies, they can still have vastly different causal implications. In this section we introduce a smaller, and possibly more informative, subclass of causal graphs using background knowledge.
We define _background knowledge_\(\mathcal{K}=(\mathcal{R},\mathcal{F})\) as consisting of a set of _required edges_\(\mathcal{R}\) and a set of _forbidden edges_\(\mathcal{F}\).
**Definition 1** (Encoding background knowledge).: _A graph \(\mathcal{G}\) encodes background knowledge \(\mathcal{K}=(\mathcal{R},\mathcal{F})\) if all of the edges in \(\mathcal{R}\) and none of the edges in \(\mathcal{F}\) are present in \(\mathcal{G}\)._
### Restricted Equivalence Classes
In our setup we only consider correct background knowledge in the sense that it agrees with an underlying (unknown) true DAG:
**Assumption 1**.: _The given background knowledge is correct._
Let \(\mathcal{C}\) be a CPDAG, then by \([\mathcal{C}]\) we denote the equivalence class of DAGs represented by \(\mathcal{C}\).
**Definition 2** (Meek [1995]).: _A CPDAG \(\mathcal{C}\) and background knowledge \(\mathcal{K}=(\mathcal{R},\mathcal{F})\) are consistent if and only if there exists a DAG \(\mathcal{D}\in[\mathcal{C}]\) such that all of the edges in \(\mathcal{R}\) and none of the edges in \(\mathcal{F}\) are in \(\mathcal{D}\)._
Since our focus is on equivalence classes, we will assume throughout:
**Assumption 2**.: _The given CPDAG is correct._
Combining Assumption 1 and 2, background knowledge will be consistent with the CPDAG. In actual practice, inconsistencies might occur, e.g. due to statistical errors when first learning the CPDAG. These are separate issues which we address elsewhere.
```
input  : CPDAG \(\mathcal{C}=(\mathbf{V},\mathbf{E})\) and consistent background knowledge \(\mathcal{K}=(\mathcal{R},\mathcal{F})\).
output : PDAG \(\mathcal{C}^{\mathcal{K}}=(\mathbf{V},\mathbf{E}^{\prime})\)
1  \(\mathbf{E}^{\prime}=\mathbf{E}\)
2  forall \(\{V_{i}-V_{j}\}\in\mathbf{E}\) do
3      if \(\{V_{i}\to V_{j}\}\in\mathcal{F}\) then
4          replace \(\{V_{i}-V_{j}\}\) with \(\{V_{i}\gets V_{j}\}\) in \(\mathbf{E}^{\prime}\)
5      else if \(\{V_{i}\to V_{j}\}\in\mathcal{R}\) then
6          replace \(\{V_{i}-V_{j}\}\) with \(\{V_{i}\to V_{j}\}\) in \(\mathbf{E}^{\prime}\)
7      end if
8  end forall
```
**Algorithm 1** Constructing \(\mathcal{C}^{\mathcal{K}}\)
Consistent background knowledge is imposed on a CPDAG by orienting the corresponding undirected edges. In turn, this may allow us to orient further undirected edges, e.g. to avoid directed cycles. Meek [1995] showed that maximal edge orientations implied by given background knowledge are obtained under a set of four orientation rules, also known as Meek's rules (see Figure B.1 in the Supplement). The resulting graph then no longer represents an equivalence class, but rather a _restricted equivalence class_.
More formally, the construction is as follows: Let \(\mathcal{C}=(\mathbf{V},\mathbf{E})\) be a CPDAG and let \(\mathcal{K}=(\mathcal{R},\mathcal{F})\) be background knowledge consistent with \(\mathcal{C}\). First, orient edges in \(\mathcal{C}\) according to \(\mathcal{K}\) as in Algorithm 1, and let \(\mathcal{C}^{\mathcal{K}}\) denote the PDAG obtained from this procedure. Second, orient additional edges by repeated application of Meek's rules 1-4 until no further change; let \(\mathcal{G}\) denote the resulting PDAG. Then \(\mathcal{G}\) is the _maximally oriented partially directed acyclic graph_ (MPDAG) obtained from \(\mathcal{C}\) relative to \(\mathcal{K}\)[Meek, 1995].
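To make the first step concrete, the following is a minimal Python sketch of Algorithm 1 under an illustrative edge representation (undirected edges as frozensets, directed edges as ordered pairs); the representation and function name are our own choices and not part of the original formulation.

```python
# Sketch of Algorithm 1: orient undirected edges of a CPDAG according to
# background knowledge K = (R, F). Undirected edges are frozensets {Vi, Vj};
# directed edges are tuples (Vi, Vj), meaning Vi -> Vj.

def impose_background_knowledge(undirected, directed, required, forbidden):
    undirected, directed = set(undirected), set(directed)
    # A forbidden edge Vi -> Vj forces the orientation Vj -> Vi;
    # a required edge Vi -> Vj forces the orientation Vi -> Vj.
    for vi, vj in [(b, a) for (a, b) in forbidden] + list(required):
        if frozenset({vi, vj}) in undirected:
            undirected.discard(frozenset({vi, vj}))
            directed.add((vi, vj))
    return undirected, directed
```

Meek's rules are then applied repeatedly to the returned PDAG until no further edges can be oriented, yielding the MPDAG.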
For a given CPDAG \(\mathcal{C}\) and background knowledge \(\mathcal{K}\), the MPDAG obtained from \(\mathcal{C}\) relative to \(\mathcal{K}\) is unique. However, the origin of an MPDAG is not unique: Let \(\mathcal{C}\) be a CPDAG, and \(\mathcal{K}_{1}\) and \(\mathcal{K}_{2}\) two distinct sets of background knowledge. If repeated applications of Meek's rules to \(\mathcal{C}^{\mathcal{K}_{1}}\) and \(\mathcal{C}^{\mathcal{K}_{2}}\) lead to the same MPDAG, then \(\mathcal{K}_{1}\) and \(\mathcal{K}_{2}\) are _equivalent_ given \(\mathcal{C}\). We say that \(\mathcal{K}_{2}\) is _redundant_ relative to \(\mathcal{K}_{1}\) if \(\mathcal{K}_{1}\subseteq\mathcal{K}_{2}\) and \(\mathcal{K}_{1}\) and \(\mathcal{K}_{2}\) are equivalent. We say that a PDAG \(\mathcal{G}_{1}\) is _contained in_ another PDAG \(\mathcal{G}_{2}\) if they have the same skeleton and every directed edge in \(\mathcal{G}_{2}\) is also in \(\mathcal{G}_{1}\). We say that \(\mathcal{K}_{1}\) is more _informative_ than \(\mathcal{K}_{2}\) if the MPDAG relative to \(\mathcal{K}_{1}\) is contained in the MPDAG relative to \(\mathcal{K}_{2}\).
### Tiered Background Knowledge
In this work we focus on background knowledge about tiered structures.
**Definition 3** (Partial tiered ordering).: _Let \(\mathcal{G}\) be a PDAG with node set \(\mathbf{V}\) of size \(p\), and let \(T\in\mathbb{N}\), \(T\leq p\). A (partial) tiered ordering of the nodes in \(\mathbf{V}\) is a map \(\tau:\mathbf{V}\mapsto\{1,\ldots,T\}^{p}\) that assigns each node \(V\in\mathbf{V}\) to a unique tier \(t\in\{1,\ldots,T\}\)._
A tiered ordering is partial if multiple nodes are assigned to the same tier. The following properties are implied by the definition and reflect what kind of background knowledge is encoded in a tiered ordering:
1. A node is assigned to no more than one tier.
2. Every node belongs to a tier.
3. If \(\tau(A)\leq\tau(B)\) and \(\tau(B)\leq\tau(C)\) then \(\tau(A)\leq\tau(C)\).
A tiered ordering imposes background knowledge on a graph by demanding that no directed edges point from later tiers into earlier tiers, i.e. specifying the forbidden edges accordingly \(\mathcal{F}=\{\{A\gets B\}:\tau(A)<\tau(B),A,B\in\mathbf{V}\}\). Thus, a tiered ordering provides information on the absence of ancestral relations, but not on their presence. Combining tiered knowledge with a PDAG, it might be possible to construct some ancestral relations which allow us to orient undirected edges as illustrated in Example 1.
**Example 1**.: _Assume that we are given \(\mathbf{V}=\{A,B\}\) and tiered ordering \(\tau\) with \(\tau(A)<\tau(B)\). This corresponds to the background knowledge \(\mathcal{K}\) with forbidden set \(\mathcal{F}=\{A\gets B\}\) and no required edges: In this case it could be possible that \(A\) is a parent of \(B\) or that there is no edge between them. However, if we additionally knew that \(A\) and \(B\) are adjacent, then our background knowledge would result in the edge orientation \(\{A\to B\}\)._
As illustrated, the cross-tier edges play an important role and are defined as follows.
**Definition 4** (Cross-tier edge).: _Let \(\mathcal{G}=(\mathbf{V},\mathbf{E})\) be a PDAG and \(\tau\) a tiered ordering of \(\mathbf{V}\). An edge \(\{A\to B\}\in\mathbf{E}\) is a cross-tier edge (relative to \(\tau\)) if \(\tau(A)<\tau(B)\)._
With tiered knowledge \(\tau\), all cross-tier edges of a PDAG will be directed. Conversely, \(A-B\) only occurs if \(\tau(A)=\tau(B)\). Since a tiered ordering \(\tau\) of a node set \(\mathbf{V}\) unambiguously implies a forbidden edge set, we can refer to the MPDAG obtained from a CPDAG relative to a tiered ordering \(\tau\), rather than referring to the forbidden edges implied by \(\tau\). In view of Assumptions 1 and 2, any tiered ordering that does not contradict the directed edges of the CPDAG will be consistent; to establish that there is no contradiction we therefore only need to verify the cross-tier edges.
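Continuing the illustrative representation above (a tiered ordering as a dict from node to tier; the function names are ours), the forbidden edges implied by \(\tau\), the orientation of cross-tier edges, and the consistency check might be sketched as follows:

```python
def forbidden_from_tiers(tau):
    # Forbidden edges {A <- B} whenever tau(A) < tau(B), encoded as pairs (B, A).
    return {(b, a) for a in tau for b in tau if tau[a] < tau[b]}

def orient_cross_tier(undirected, directed, tau):
    undirected, directed = set(undirected), set(directed)
    for edge in list(undirected):
        a, b = tuple(edge)
        if tau[a] != tau[b]:                        # cross-tier edge
            earlier, later = (a, b) if tau[a] < tau[b] else (b, a)
            undirected.discard(edge)
            directed.add((earlier, later))          # must point towards the later tier
    return undirected, directed

def is_consistent(directed, tau):
    # Only the already directed (in particular cross-tier) edges need checking.
    return all(tau[a] <= tau[b] for (a, b) in directed)
```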
We will refer to MPDAGs relative to exclusively tiered background knowledge as 'tiered MPDAGs'. This is in contrast to general MPDAGs, which can arise from any kind of background knowledge.
**Example 2** (Equivalence class restricted by tiered ordering).: _Figure 1 shows a DAG \(\mathcal{D}\) and a tiered ordering \(\tau\) of the nodes in \(\mathcal{D}\). The differences between the equivalence class of \(\mathcal{D}\) and the restricted equivalence class of \(\mathcal{D}\) relative to \(\tau\) are illustrated in Figure 2. The CPDAG \(\mathcal{C}\) represents the equivalence class of \(\mathcal{D}\); as only two out of seven edges are directed the conditional independencies alone do not contain much causal information. Meanwhile, the restricted equivalence class represented by tiered MPDAG \(\mathcal{G}\) relative to \(\tau\) is much smaller, here six out of seven edges are oriented. \(\mathcal{G}\) will naturally contain the same adjacencies and \(v\)-structures as \(\mathcal{C}\), the dashed cross-tier edges \(A\to C\) and \(C\to F\) are implied by the forbidden directions, and the dotted edges \(C\to D\) and \(F\to G\) are consequences of Meek's 1st rule which prohibits new \(v\)-structures. Due to these last additionally implied orientations of previously undirected edges, restricted equivalence classes given tiered background knowledge might be even smaller, and thus more informative, than one might initially expect._
## 3 Properties of Tiered MPDAGs
When incorporating general background knowledge into a CPDAG, it might be necessary to apply all of Meek's rules 1-4 in order to obtain a maximally informative graph. Meek's 1st rule ensures that no new v-structures are created, while rules 2-4 all concern preventing directed cycles. By construction, tiered knowledge imposes an ordering of the nodes; using that this ordering is transitive and complete,
Figure 1: DAG \(\mathcal{D}=(\mathbf{V},\mathbf{E})\) with tiered ordering \(\tau\) and three tiers: \(A\) and \(B\) are assigned to tier 1, while \(C\), \(D\) and \(E\) are assigned to tier 2, and \(F\) and \(G\) are assigned to tier 3.
the following lemma shows that Meek's 1st rule is sufficient to construct a maximally informative graph. This is a strong result, which will help us prove further results in this and the following sections.
**Lemma 1**.: _Let \(\mathcal{C}=(\mathbf{V},\mathbf{E})\) be a CPDAG and let \(\tau\) be a tiered ordering of the nodes \(\mathbf{V}\). Let \(\mathcal{C}^{\tau}\) be the PDAG obtained according to Algorithm 1 and let \(\mathcal{G}\) be the PDAG obtained by repeatedly orienting edges in \(\mathcal{C}^{\tau}\) according to Meek's rule 1 until no further change occurs. Then \(\mathcal{G}\) is the MPDAG obtained from \(\mathcal{C}\) relative to \(\tau\)._
Note that so far we have taken a given CPDAG as the starting point to which tiered background knowledge is added. Alternatively, we can start at an earlier stage with a PDAG \(\mathcal{G}\) whose directed edges are exactly those belonging to v-structures, with all other edges undirected. In this case, the tiered ordering can be incorporated into \(\mathcal{G}\) by orienting the cross-tier edges and then applying all four of Meek's rules to achieve maximality. Lemma 1 therefore highlights the extra orientations implied by tiered knowledge on top of the usual orientations.
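Lemma 1 can be illustrated with a short sketch that, starting from the cross-tier orientations above, repeatedly applies only Meek's rule 1 (if \(A\to B\), \(B-C\), and \(A\), \(C\) are non-adjacent, orient \(B\to C\)); the representation continues the earlier sketches and is ours.

```python
def adjacent(a, b, undirected, directed):
    return frozenset({a, b}) in undirected or (a, b) in directed or (b, a) in directed

def meek_rule1_closure(undirected, directed):
    undirected, directed = set(undirected), set(directed)
    changed = True
    while changed:
        changed = False
        for a, b in list(directed):                       # A -> B
            for edge in list(undirected):                 # candidate B - C
                if b in edge:
                    c = next(iter(edge - {b}))
                    if c != a and not adjacent(a, c, undirected, directed):
                        undirected.discard(edge)
                        directed.add((b, c))              # orient B -> C
                        changed = True
    return undirected, directed

# Tiered MPDAG of a CPDAG (U, D) relative to tau (sketch):
# meek_rule1_closure(*orient_cross_tier(U, D, tau))
```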
While general MPDAGs might contain partially directed cycles, it turns out that when background knowledge arises from tiered structures the completeness and transitivity of tiered orderings ensure that no partially directed cycles can occur:
**Theorem 1**.: _Let \(\mathcal{G}=(\mathbf{V},\mathbf{E})\) be a tiered MPDAG, then \(\mathcal{G}\) does not have any partially directed cycles._
The implications of Theorem 1 allow us to work with tiered MPDAGs in a similar way as with CPDAGs. The same does not hold for MPDAGs in general (Perkovic et al., 2017); we will elaborate on this in the section below.
It was shown by Andersson et al. (1997) that CPDAGs are chain graphs with chordal chain components, which is useful for many purposes. Given Theorem 1, it becomes straightforward to show that the same holds for tiered MPDAGs:
**Corollary 1**.: _Let \(\mathcal{G}=(\mathbf{V},\mathbf{E})\) be a tiered MPDAG, then \(\mathcal{G}\) is a chain graph with chordal chain components._
Similarly, Wang et al. (2022) obtain graphs with chordal undirected components under a different type of background knowledge: Local background knowledge for a node \(A\in\mathbf{V}\) is defined as the knowledge of whether \(A\) is a cause of \(V\), for each \(V\in\mathrm{adj}(A)\). Although not explicit, this induces a transitivity among the nodes in \(\mathrm{adj}(A)\).
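Corollary 1 can also be checked directly on a given graph; a small sketch using networkx (the graph representation follows the earlier illustrative examples and is an assumption on our part):

```python
import networkx as nx

def has_chordal_chain_components(undirected_edges):
    # Chain components are the connected components of the undirected part.
    g = nx.Graph()
    g.add_edges_from(tuple(e) for e in undirected_edges)
    return all(nx.is_chordal(g.subgraph(c)) for c in nx.connected_components(g))
```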
### Interpretation of Undirected Paths
In a CPDAG \(\mathcal{C}=(\mathbf{V},\mathbf{E})\), an undirected path \(\pi\) between two nodes \(A,B\in\mathbf{V}\) indicates that there exist a DAG \(\mathcal{D}_{1}\in[\mathcal{C}]\) and another DAG \(\mathcal{D}_{2}\in[\mathcal{C}]\) such that \(A\) is an ancestor of \(B\) in \(\mathcal{D}_{1}\) and \(B\) is an ancestor of \(A\) in \(\mathcal{D}_{2}\). This is not necessarily the case in a general MPDAG \(\mathcal{G}=(\mathbf{V},\mathbf{E}^{\prime})\): If there is an edge \(A\to B\) in \(\mathcal{G}\) not on \(\pi\), i.e. \(\mathcal{G}\) has a partially directed cycle, then there exists a DAG \(\mathcal{D}_{1}\) in the class represented by \(\mathcal{G}\) in which the path corresponding to \(\pi\) is directed from \(A\) to \(B\), but there cannot be a DAG \(\mathcal{D}_{2}\) in the class represented by \(\mathcal{G}\) in which \(B\) is an ancestor of \(A\), since this would create a cycle. Hence, an undirected path between two nodes in an MPDAG does not necessarily mean that the path can be directed either way, and it is necessary to check multiple paths in the graph in order to determine whether a given path can be directed. Since partially directed cycles do not occur in tiered MPDAGs, this issue does not arise, and the interpretation of undirected paths in tiered MPDAGs is the same as in CPDAGs.
### Adjustment in Tiered MPDAGs
The _generalised adjustment criterion_ (Perkovic et al., 2018) determines whether a set of nodes in a CPDAG constitutes a valid adjustment set in every DAG in the equivalence class. This criterion checks for possibly causal paths, i.e. paths for which there are DAGs in the equivalence class in which these paths are causal. Hence, any undirected path is possibly causal. However, as described in Section 3.1, this interpretation does not hold for general MPDAGs, and the notion of possibly causal does not transfer directly from CPDAGs to MPDAGs. In order to tackle this issue, Perkovic et al. (2017) introduced the notion of _b-possibly causal_ paths, which is a stronger requirement but ensures that there are in fact DAGs in the class in which these paths are causal. With this notion an adjustment criterion for general MPDAGs, the _b-adjustment criterion_, can be given (Perkovic et al., 2017). To determine whether a path is b-possibly causal,
Figure 2: The CPDAG \(\mathcal{C}\) representing the equivalence class of \(\mathcal{D}\), and the tiered MPDAG \(\mathcal{G}\) representing the restricted equivalence class of \(\mathcal{D}\) relative to \(\tau\).
one needs to check multiple paths in the graph, which can be computationally heavy for large and dense graphs. For tiered MPDAGs, the definition of b-possibly causal paths simplifies to the definition of possibly causal paths, and the generalised adjustment criterion for CPDAGs is valid for tiered MPDAGs as well:
**Corollary 2**.: _Let \(\mathcal{G}=(\mathbf{V},\mathbf{E})\) be a tiered MPDAG, and let \(\mathbf{X},\mathbf{Y},\mathbf{Z}\subseteq\mathbf{V}\) be pairwise disjoint node sets. Then \(\mathbf{Z}\) satisfies the generalised adjustment criterion relative to \((\mathbf{X},\mathbf{Y})\) in \(\mathcal{G}\) if and only if it satisfies the b-adjustment criterion relative to \((\mathbf{X},\mathbf{Y})\) in \(\mathcal{G}\)._
A related result is shown in van der Zander and Liskiewicz (2016): They introduce a class of graphs called _restricted chain graphs_, which are chain graphs with (1) chordal chain components, and (2) no unshielded triples of the form \(A\to B-C\). They give a sound and complete adjustment criterion for graphs of this type, and they provide an algorithm to find adjustment sets. Clearly, a tiered MPDAG is a type of restricted chain graph, and the results of van der Zander and Liskiewicz (2016) hold for tiered MPDAGs.
### IDA for Tiered MPDAGs
Covariate adjustment in a CPDAG (MPDAG) requires an adjustment set to be valid in every DAG in the (restricted) equivalence class. The IDA-algorithm (Maathuis et al., 2009), instead, finds an adjustment set for each DAG in the class. Enumerating all DAGs in an equivalence class is a computationally heavy task, but it can be done in polynomial time for chain graphs with chordal chain components (Wienobst et al., 2021), hence also for tiered MPDAGs.
The local IDA-algorithm utilises the fact that if a valid adjustment set exists, then the parent set is always valid (Pearl, 2009) and considers the possible parents, i.e. all sets that are parent sets in some DAG in the equivalence class. However, as general MPDAGs can contain partially directed cycles, it cannot be verified locally whether a set of nodes is a possible parent set, and a semi-local version was introduced to tackle this (Perkovic et al., 2017). The joint IDA-algorithm determines the joint parent sets semi-locally by orienting subgraphs (Nandy et al., 2017). Similar to the local IDA-algorithm, this approach fails for general MPDAGs due to potential partially directed cycles. To tackle this issue Perkovic et al. (2017) introduced an additional step to check whether the oriented subgraphs are valid. In contrast, for tiered MPDAGs no additional steps are needed, and the original local and joint IDA both remain valid:
**Corollary 3**.: _Let \(\mathcal{G}=(\mathbf{V},\mathbf{E})\) be a tiered MPDAG. Let \(\mathbf{PA}_{\mathcal{G}}(X)\) denote the multiset of parent sets of \(X\) in all DAGs represented by \(\mathcal{G}\), and let \(\mathbf{PA}_{\mathcal{G}}^{\mathrm{local}}(X)\) denote the multiset of parent sets of \(X\) obtained from the local IDA algorithm. Then \(\mathbf{PA}_{\mathcal{G}}(X)\) and \(\mathbf{PA}_{\mathcal{G}}^{\mathrm{local}}(X)\) contain the same distinct elements. Moreover, let \(\mathbf{PA}_{\mathcal{G}}^{\mathrm{joint}}(X)\) denote the multiset of parent sets of \(X\) obtained from the joint IDA algorithm. Then \(\mathbf{PA}_{\mathcal{G}}(X)\) and \(\mathbf{PA}_{\mathcal{G}}^{\mathrm{joint}}(X)\) contain the same distinct elements and the ratios of multiplicities of any two elements are the same._
In order to adapt the local IDA-algorithm to general MPDAGs, Fang and He (2020) introduced a set of local orientation rules to verify whether a set of nodes is a possible parent set of a given node, yielding a fully local version of the IDA that can handle general MPDAGs. While this reduces computation time for general MPDAGs, it is not necessary for tiered MPDAGs due to Corollary 3.
Finally, the optimal IDA-algorithm (Witte et al., 2020) is only semi-local and it includes an additional step to check other parts of the graph: This algorithm is essentially not local as the optimal adjustment set is not likely to be the parent set (Witte et al., 2020; Henckel et al., 2022). In this case, tiered MPDAGs do not have an advantage over general MPDAGs. However, the definition of the optimal adjustment set does simplify in a similar fashion as in Section 3.2. A minimal version of the IDA-algorithm is proposed by Guo and Perkovic (2021); like the optimal IDA, this method is non-local by construction, and tiered MPDAGs do not provide an advantage in this case.
## 4 Comparing tiered background knowledge
In this section, we investigate how different tiered background knowledge can be compared. This is relevant in situations where different experts are consulted or where eliciting more detailed knowledge may require more effort. It also provides insights into what kind of background knowledge is especially valuable. In the case of a cohort study spanning a whole lifetime, we have a clear tiered structure due to the time-ordering, but the tiers might be large, and we need to consult different experts in order to refine the tiers depending on their expertise in, e.g., children's health, lifestyle factors, or diseases common among the elderly. Such expert input can be very costly and time-consuming to obtain, and it is then beneficial to know how to prioritise. Moreover, time-ordering has the advantage of being correct, whereas other ways of motivating tiers might be less certain. The results of this section could therefore also be used at the design stage of, say, a cohort study: As our results will show, the finer we can reliably partition early variables into tiers by the design alone, the more informative it will be for causal structure learning.
Throughout, we will only compare tiered orderings that are compatible in the sense that they do not contradict each other on the ordering of the nodes; this is in line with Assumption 1. Hence, the orderings can only disagree on the status of an edge being a cross-tier edge, not the direction of it.
Consider two different tiered orderings \(\tau_{i}\) and \(\tau_{j}\). If for all \(A,B\in\mathbf{V}:\tau_{i}(A)<\tau_{i}(B)\Rightarrow\tau_{j}(A)<\tau_{j}(B)\), then \(\tau_{j}\) is _finer_ than \(\tau_{i}\), and \(\tau_{i}\) is _coarser_ than \(\tau_{j}\). In this case we have for the respective sets of forbidden edges that \(\mathcal{F}_{i}\subseteq\mathcal{F}_{j}\).
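With orderings again represented as dicts from node to tier (an illustrative choice of ours), this relation can be checked directly:

```python
from itertools import permutations

def is_finer(tau_j, tau_i):
    # tau_j is finer than tau_i if every strict precedence in tau_i is preserved in tau_j.
    return all(tau_j[a] < tau_j[b]
               for a, b in permutations(tau_i, 2) if tau_i[a] < tau_i[b])
```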
Note that a finer tiered ordering can be redundant compared to the coarser one; otherwise it must be more informative. However, two tiered orderings can be different without one being finer or coarser than the other. They can then either be equivalent, or one can be more informative than the other, or they are incomparable.
To compare tiered orderings on a given CPDAG, we must compare the resulting tiered MPDAGs. We give a graphical criterion for their equivalence in Section 4.1. Evidently, this provides a criterion for redundancy. More interestingly, this also provides insight into when one ordering is more informative than another as addressed in Section 4.2.
### Equivalence of tiered MPDAGs
First, we need some further terminology:
**Definition 5** (Earlier path).: _Let \(\mathcal{G}=(\mathbf{V},\mathbf{E})\) be a PDAG, let \(\tau\) be a tiered ordering of the nodes \(\mathbf{V}\), and let \(\pi_{1}\) and \(\pi_{2}\) be two arbitrary paths in \(\mathcal{G}\). If \(\pi_{1}\) contains a node \(V\) with \(\tau(V)<\tau(W)\) for all nodes \(W\) on \(\pi_{2}\), we say that \(\pi_{1}\) is earlier than \(\pi_{2}\); correspondingly, \(\pi_{2}\) is later than \(\pi_{1}\). A path is earliest if it does not contain subpaths of any earlier paths._
We say that an edge is _fully shielded_ if it does not occur on any unshielded path.
\(\tau_{1}\) with_
\[\tau_{1}(A)=\tau_{1}(B)=1,\qquad\tau_{1}(C)=2,\qquad\tau_{1}(D)=\tau_{1}(E)=3,\qquad\tau_{1}(F)=4,\qquad\tau_{1}(G)=5.\]
_Clearly, \(\tau_{1}\) results in the same MPDAG as \(\tau\) and \(\tau^{\prime}\); since \(\tau_{1}\) is finer than both \(\tau\) and \(\tau^{\prime}\) it is redundant given \(\mathcal{C}\). Consider now the tiered ordering \(\tau_{2}\) of \(\mathbf{V}\) with_
\[\tau_{2}(A)=\tau_{2}(B)=\tau_{2}(C)=1,\qquad\tau_{2}(D)=\tau_{2}(E)=2,\qquad\tau_{2}(F)=3,\qquad\tau_{2}(G)=4.\]
_Let \(\mathcal{G}^{\prime}\) be the MPDAG relative to \(\tau_{2}\) given \(\mathcal{C}\). Then \(\mathcal{G}^{\prime}\) has the same edge orientations as the MPDAG relative to \(\tau\), \(\tau^{\prime}\) and \(\tau_{1}\), except for \(\{A\to C\}\) which remains undirected in \(\mathcal{G}^{\prime}\). Here, \(\tau_{1}\) is finer than \(\tau_{2}\), but \(\tau_{1}\) is not redundant. In fact, \(\tau_{1}\) is more informative than \(\tau_{2}\) due to condition (iii) of Corollary 4. In addition, \(\tau\) and \(\tau^{\prime}\) are both more informative than \(\tau_{2}\), even though \(\tau_{2}\) assigns the nodes to more tiers._
Consider condition (iv) in Corollary 4. This suggests that every fully shielded cross-tier edge provides unique information. Hence, there is an immediate gain in information from each additional fully shielded cross-tier edge in \(\mathcal{C}_{u}\), since edges of this type cannot be oriented by Meek's 1st rule. In particular, in the complete subgraphs of \(\mathcal{C}_{u}\), no tiered background knowledge is redundant.
**Example 5** (Fully shielded edges).: _Consider the simple case of a CPDAG \(\mathcal{C}=(\mathbf{V},\mathbf{E})\) with three nodes \(\mathbf{V}=\{A,B,C\}\), where \(\mathcal{C}\) is complete: \(\mathbf{E}=\{\{A-B\},\{B-C\},\{A-C\}\}\). Assume that the true ordering \(\tau_{\alpha}\) assigns the nodes to individual tiers with \(\tau_{\alpha}(A)<\tau_{\alpha}(B)<\tau_{\alpha}(C)\). There are then three types of partial orderings that are compatible with \(\tau_{\alpha}\): orderings that assign all nodes to the same tier, e.g. an ordering \(\tau_{\beta}\) with \(\tau_{\beta}(A)=\tau_{\beta}(B)=\tau_{\beta}(C)\), or orderings that assign two nodes to the same tier and the third to an individual tier, e.g. \(\tau_{\gamma}\) and \(\tau_{\delta}\) with \(\tau_{\gamma}(A)<\tau_{\gamma}(B)=\tau_{\gamma}(C)\) and \(\tau_{\delta}(A)=\tau_{\delta}(B)<\tau_{\delta}(C)\)._
_Figure 4 shows that the MPDAGs \(\mathcal{G}_{\alpha}\) (relative to \(\tau_{\alpha}\)), \(\mathcal{G}_{\beta}\) (relative to \(\tau_{\beta}\)), \(\mathcal{G}_{\gamma}\) (relative to \(\tau_{\gamma}\)), and \(\mathcal{G}_{\delta}\) (relative to \(\tau_{\delta}\)) are distinct. All oriented edges are implied by the tiered background knowledge and no edge has been oriented as a consequence of Meek's 1st rule. Here, \(\tau_{\delta}\) and \(\tau_{\gamma}\) are incomparable, and they are both more informative than \(\tau_{\beta}\). Moreover, \(\tau_{\alpha}\) is more informative than the other orderings._
### Simulation Study
Corollary 4 shows that in graphs with many unshielded paths we can potentially obtain a large amount of additional information, and the earlier we are able to identify the direction of a causal path, the more information we can gain. In summary: (1) early knowledge is in general more beneficial than late knowledge, even if the late knowledge is more detailed (c.f. Example 4), and (2) since we expect unshielded paths to occur more frequently in sparse graphs, the effect of Meek's 1st rule is expected to be more pronounced in sparse than in dense graphs. In order to investigate to which degree these two features occur in practice, we conducted a simulation study.
To adhere to Assumption 1, we generated random DAGs and for each random DAG, we considered five different, consistent tiered orderings: full knowledge, early detailed knowledge, late detailed knowledge, early simple knowledge and late simple knowledge. For each DAG, we constructed its CPDAG, and for each combination of DAG and
Figure 5: Results of one setting of the simulation. 6000 random DAGs with 25 nodes were generated; half of them sparse, the other half dense. For each DAG and tiered ordering, the tiered MPDAG was constructed and the difference in number of directed edges to its corresponding CPDAG was computed, divided by the total number of edges.
tiered ordering, we constructed the tiered MPDAG. To adhere to Assumption 2, the CPDAGs and MPDAGs were constructed based on the independence models encoded by the DAGs; hence, finite sample issues did not occur. For each MPDAG, we counted the difference in number of directed edges between the MPDAG and its corresponding CPDAG, and divided this by the total number of edges; this measures the fraction of edges that cannot be oriented in the CPDAG, but can be oriented in the tiered MPDAG. Since we compare oracle CPDAGs to oracle MPDAGs, this measures exactly the (relative) gain in informativeness. A detailed description of the study can be found in Section C in the Supplement.
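The measure itself is simple to compute; a sketch under the same illustrative edge representation as before:

```python
def orientation_gain(cpdag, tiered_mpdag):
    """Fraction of edges directed in the tiered MPDAG but not in the CPDAG.

    Both arguments are (undirected, directed) edge-set pairs over the same skeleton.
    """
    (c_undir, c_dir), (_, m_dir) = cpdag, tiered_mpdag
    total_edges = len(c_undir) + len(c_dir)
    return (len(m_dir) - len(c_dir)) / total_edges
```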
Figure 5 shows that using the full knowledge of the orderings unsurprisingly provides most new orientations. Moreover, we see that early knowledge is more beneficial than late, even though they are equally detailed, which is in line with (1) above. Interestingly, in dense graphs early knowledge can also induce more oriented edges than late knowledge, even if the later knowledge is more detailed, which is also in line with (1). Additionally, the advantage of adding tiered background knowledge is relatively larger in sparse graphs than dense graphs, which is in line with (2).
We performed the same procedure for random DAGs with node sets of size 10, 50 and 100, for which we found analogous results. These results are depicted in Figure C.2 in the Supplement.
### Practical example
Figure 6 is a simplified example based on cohort data analysed in Foraita et al. (2022); it shows the MPDAG obtained from the time-ordering of waves 1 and 2. The corresponding CPDAG has only two fewer directed edges (the dashed and dotted ones), so the time-ordering does not provide much new information. The induced subgraph over the first wave remains undirected. In order to obtain a more informative graph, we should consult early-life experts rather than children's lifestyle experts; alternatively, the cohort study should have been designed such that early life factors were measured at different time points. While mother's age naturally is determined before smoking during pregnancy, which again occurs before birth, experts could disagree on the causal order of pregnancy duration and birth weight, which are defined at the exact same time. However, it is not necessary to order these particular two nodes here: Any ordering \(\tau\) with \(\tau\)(mother's age) \(<\tau\)(smoking during pregnancy) \(<\tau\)(remaining nodes) allows for the _entire_ graph to be oriented in this case.
## 5 Relation to other work
Subject matter background knowledge can come from different sources and take different forms, and previous work provides results for types of knowledge other than tiered orderings. A distinct type of causal background knowledge is, for instance, obtainable when experimentation is possible; see Hauser and Buhlmann (2012) for a characterisation of interventional equivalence classes of DAGs. While a tiered ordering is given before learning a graph, experiments can be performed iteratively, and the choice of the most informative interventions might depend on the given intermediate graphical structure. It has been shown in Eberhardt (2008) and Hauser and Buhlmann (2014) that the most informative strategy is to intervene on nodes in the largest undirected complete subgraphs: this yields the most new edge orientations, including those following from Meek's rules. Since the knowledge obtained from an intervention is local, this type of background knowledge lacks the completeness of tiered knowledge. This means that all four of Meek's rules might apply after orienting edges according to interventional knowledge, resulting in additional orientations within a complete subgraph, in contrast to tiered background knowledge.
Mooij et al. (2020) considered context variables. These can be seen as a special case of tiered knowledge: Some of the variables, the context variables, form an earlier tier, and others, the system variables, form later tiers. However, there can be additional knowledge about presence/absence of relations between context variables, or their causal relations may not be of interest: In these cases we are no longer in the tiered framework.
Background knowledge about non-ancestral (pairwise) relations, considered by Fang and He (2020), can be seen as a non-complete version of tiered knowledge. They show that knowledge of non-ancestral relations can be translated to a set of direct causal relations, i.e. directed edges. In contrast, the completeness and transitivity of tiered knowledge
Figure 6: Simplified example of a cohort study. Early life factors are measured at wave 1, childhood health factors at wave 2. Edges are oriented by the v-structures, time-ordering and Meek’s rules. Expert knowledge allows the first tier to be subdivided into three new tiers.
subsumes such relations through the orientations of cross-tier edges. A more general representation of background knowledge is provided in Fang et al. (2022), where ancestral background knowledge on node pairs is considered; this is surprisingly different from tiers and cannot necessarily be encoded graphically. The authors provide a criterion for checking equivalence of background knowledge, which is more general than the one provided here since tiered background knowledge can be considered a complete version of pairwise causal constraints. However, unlike Fang et al. (2022), our criterion can be checked on the graph, and due to the properties of tiered orderings, it is rather simple.
Multivariate time series, as repeated measurements of the same variables over time (Malinsky and Spirtes, 2018; Runge et al., 2019), have a very obvious and unambiguous tiered ordering, and in this sense our results extend to time series. But because time series are observations on a single unit over a long time instead of multiple i.i.d. observations, the models typically impose additional structure. For instance, each variable is typically assumed to depend on its own past (thus forcing edges), the memory is assumed to be limited (e.g. to order \(k\), thus disallowing edges), and, importantly, stationarity or a slow/smooth change of the structure is enforced; none of these restrictions are covered by tiered background knowledge, and we have not considered them here. Our results on the ensuing equivalence classes, such as the absence of partially directed cycles, still apply under these additional structural assumptions as they restrict the skeleton of the true CPDAG, and the tiered (in this case temporal) background knowledge complements them. Considering informativeness, it may sometimes be possible to impose additional tiers within time-slices if there is a known order for the contemporaneous variables, e.g. due to biological processes underlying medical time series; this could be useful in obtaining more edge orientations.
A different line of work on using background knowledge relaxes the assumption of causal sufficiency. Latent variables can be accommodated in maximal ancestral graphs (MAGs) (Richardson and Spirtes, 2002), and the corresponding equivalence classes are represented by partial ancestral graphs (PAGs); see the characterisation by Ali et al. (2009). Different orientation rules are needed, and a set of ten rules was introduced by Zhang (2008) to ensure a maximally informative PAG. For added background knowledge it has not yet been shown, in general, that these ten rules yield a maximally informative graph. However, this has been shown for tiered background knowledge under the extra assumptions of no cross-tier confounding and no selection bias (Andrews et al., 2020). Moreover, in this case it turns out that not all ten orientation rules are needed, similarly to our Lemma 1. We therefore conjecture that analogous results to those of Section 3 extend to PAGs with tiered background knowledge under the assumptions of no selection bias and no cross-tier confounding. Further work is still needed to relax the often implausible assumption of no cross-tier confounding.
## 6 Discussion
By formalising equivalence classes restricted by tiered orderings, we provided some new insights: Tiered MPDAGs do not have partially directed cycles and are chain graphs with chordal chain components; this makes them easier to handle and interpret, e.g. for causal effect estimation. We have given a characterisation of tiered MPDAGs which clarified what can be gained by adding tiered knowledge and what will still remain unknown. Sparse graphs, in particular, will benefit much from edge orientations implied by the tiered ordering; further, eliciting background knowledge to separate out early tiers is especially informative. Hence, this is when we do become 'wiser with time'.
In summary, we believe that oftentimes background knowledge comes in the form of a tiered ordering. Moreover, tiered knowledge can be expected to be reliable, especially when based on temporal information. A benefit of tiered orderings is that doubts or disagreements about the ordering may be resolved by coarsening the tiers, thus arriving at, say, a consensus among differing expert opinions. Tiered MPDAGs are, therefore, at least as plausible as their corresponding CPDAGs without background knowledge. In addition, tiered MPDAGs will also be at least as informative as their corresponding CPDAGs - in practice they will often be much more informative. While it is self-evident that any background knowledge should be exploited for causal structure learning, we have illustrated which specific aspects of tiered background knowledge drive the information gain, and shown that this gain goes well beyond the orientation of cross-tier edges.
## Acknowledgements
This project was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Project 281474342/GRK2224/2. Support from DFG (SFB 1320 EASE) is also acknowledged. We would like to thank the reviewers for their helpful and constructive comments.
|
2305.00076
|
HausaNLP at SemEval-2023 Task 10: Transfer Learning, Synthetic Data and
Side-Information for Multi-Level Sexism Classification
|
We present the findings of our participation in the SemEval-2023 Task 10:
Explainable Detection of Online Sexism (EDOS) task, a shared task on offensive
language (sexism) detection on English Gab and Reddit dataset. We investigated
the effects of transferring two language models: XLM-T (sentiment
classification) and HateBERT (same domain -- Reddit) for multi-level
classification into Sexist or not Sexist, and other subsequent
sub-classifications of the sexist data. We also use synthetic classification of
unlabelled dataset and intermediary class information to maximize the
performance of our models. We submitted a system in Task A, and it ranked 49th
with F1-score of 0.82. This result showed to be competitive as it only
under-performed the best system by 0.052% F1-score.
|
Saminu Mohammad Aliyu, Idris Abdulmumin, Shamsuddeen Hassan Muhammad, Ibrahim Said Ahmad, Saheed Abdullahi Salahudeen, Aliyu Yusuf, Falalu Ibrahim Lawan
|
2023-04-28T20:03:46Z
|
http://arxiv.org/abs/2305.00076v1
|
HausaNLP at SemEval-2023 Task 10: Transfer Learning, Synthetic Data and Side-Information for Multi-Level Sexism Classification
###### Abstract
We present the findings of our participation in the SemEval-2023 Task 10: Explainable Detection of Online Sexism (EDOS) task, a shared task on offensive language (sexism) detection on English Gab and Reddit dataset. We investigated the effects of transferring two language models: XLM-T (sentiment classification) and HateBERT (same domain - Reddit) for multi-level classification into Sexist or not Sexist, and other subsequent sub-classifications of the sexist data. We also use synthetic classification of unlabelled dataset and intermediary class information to maximize the performance of our models. We submitted a system in Task A, and it ranked 49\({}^{th}\) with F1-score of 0.82. This result showed to be competitive as it only under-performed the best system by 0.052% F1-score.
Content warning: All examples of sexism comments used are for illustrative purpose only.
## 1 Introduction
Sexism is a form of written or verbal attack on women based on their gender and other aspects of their identity (Kirk et al., 2023). In general, there is a global concern about the prevalence of hate on social media. Consequently, many studies have attempted to ensure that social media remain safe for everyone (Jiang et al., 2022). In this paper, we describe our experiments for the SemEval-2023 Task 10: Explainable Detection of Online Sexism (EDOS). The shared task was divided into 3 sub-tasks that are all aimed at predicting fine-grained information about the type of sexism that exists on the social media sites Gab and Reddit. More information is provided in Section 3 and the task description paper (Kirk et al., 2023).
Our models were fine-tuned on two pre-trained language models, namely XLM-T (Barbieri et al., 2022) and HateBERT (Caselli et al., 2020). XLM-T is a pretrained language model trained on 198 million tweets, while HateBERT is a pretrained language model based on BERT (Devlin et al., 2018) that was further pretrained on a large-scale English Reddit dataset for the detection of abusive language. We used only the dataset provided by the task organizers in our experiments. See Kirk et al. (2023) for a description of the dataset. We made a submission only to Task A during the competition phase, and our system achieved a competitive performance of 0.82 F1-score. However, we also describe our attempts at creating competitive models for Tasks B and C.
The main contributions of this paper are as follows:
1. We investigated the effectiveness of transferring two language models, namely, XLM-T and HateBERT for binary sexism classification.
2. We explored the use of synthetic classification of an unlabelled dataset and intermediary class information for maximizing the performance of multi-level sexism classification models.
## 2 Related Works
There is an abundance of literature on the detection of hate speech online. However, only a fraction of such studies focused on sexism detection Aliyu et al. (2022).
Jiang et al. (2022) proposed a Chinese dataset and lexicon for the detection of sexism on social media. The proposed dataset and lexicon were created from comments and posts collected on the Sina Weibo Chinese microblogging site and annotated into sexism or non-sexism. Those labelled as sexism were further annotated into four categories. Finally, each of these categories was labelled according to the target. Preliminary results show that context-based models outperform linguistics-based
models. Waseem and Hovy (2016) created a dataset of 16,000 tweets, of which 3,383 tweets were sexist. Character n-gram feature was investigated alongside others such as gender and location. The best result was obtained with the combination of char n-gram and gender.
At the IberEval 2018 task for the identification of misogyny from Spanish and English corpora, Canos (2018) used TF-IDF feature vectors and an SVM with a linear kernel to develop a system that classifies tweets as misogynistic or not misogynistic for subtask A, and the type and target of the misogynistic tweet for subtask B. The best result was obtained from the Spanish dataset for both tasks. Another study used TF-IDF, user-based, network-based, and text-based features together with classical and deep learning algorithms to automatically detect sexism in tweets. Logistic regression (LR), support vector machine (SVM) and random forest (RF) were used as the classical machine learning algorithms, and bidirectional long short-term memory (Bi-LSTM) and multilingual bidirectional encoder representations from transformers (mBERT) as the deep learning algorithms. The mBERT with text features gave the best performance with an accuracy of 0.74.
Rodriguez-Sanchez et al. (2020) conducted a review on racist and sexist hate speech detection, with special emphasis on the datasets, features and approaches used. According to their findings, the Waseem and Hovy (2016) dataset was the most used, deep learning features perform best, and deep learning algorithms outperform the classical algorithms. Istaiteh et al. (2020) likewise reviewed studies on the detection of racist and sexist hate speech with special emphasis on the datasets, features and approaches used. The study concluded that the Waseem and Hovy (2016) dataset is the most widely used dataset and that deep learning achieves better performance for the classification task.
## 3 Task Overview
The SemEval-2023 subtasks aim to create models for the detection of sexist posts from Gab and Reddit. There are three subtasks: Task A - a binary classification of statements as either sexist or not; Task B - classification of sexist statements into four groups, namely threats, derogation, animosity and prejudiced discussions; Task C - classification of the sexist statements into 11 fine-grained vectors. This is illustrated in Figure 1.
### Train and development datasets
It can be observed from Table 1 that the dataset is imbalanced, with "Not Sexist" having more than 75% in both train and dev splits. Both the train and dev datasets have similar distributions, with each having a minimum, maximum and average token counts of 1, 55 and about 23.5 respectively, based on space character (" ") tokenization.
## 4 System Overview
In this section, we will elaborate on the main methods for the binary sexism task.
### Pre-trained language models
For all the models, we fine-tuned two pre-trained language models (PLMs): XLM-T (Barbieri et al., 2022) and HateBERT (Caselli et al., 2020).
Figure 1: Different levels of classification as provided in the shared task
**XLM-T** This model was trained on 198 million tweets that were collected over a 2-year period starting from May 2018. The model was trained from a checkpoint of the base model, XLM-R (Conneau et al., 2019), until convergence1. This model was selected based on its excellent performance on sentiment analysis on social media datasets (Barbieri et al., 2022; Aliyu et al., 2022).
Footnote 1: [https://github.com/cardiffnlp/xlm-t](https://github.com/cardiffnlp/xlm-t)
**HateBERT** This model is a BERT (Devlin et al., 2018) model that was retrained on a large-scale English Reddit dataset2 (Caselli et al., 2020) for the detection of abusive language. The retraining dataset consists of posts from communities banned for abusive, hateful, and offensive content. We used this model because sexism is a form of offensive and hateful language, but also because the re-training dataset was from the same social platform as the provided training dataset.
Footnote 2: [https://huggingface.co/GroNLP/hateBERT](https://huggingface.co/GroNLP/hateBERT)
### Training Strategies
For Task A - the binary classification task - two models were trained by fine-tuning the pre-trained language models (PLMs) mentioned above. We used the best model in this task to select sentences that are potentially sexist from the provided unlabelled data. In Task B, we first used the training data provided to fine-tune the XLM-T PLM. Subsequently, we used the fine-tuned model to generate the automatic classification of the potentially sexist sentences that were selected using the model in Task A. We then mixed this bigger synthetic dataset with the authentic data to train another classification model. This is illustrated in Figure 2.
Finally, for Task C, we leveraged the parent classification of the input sentence as provided for Task B to help the model narrow the expected final sexism classification, as illustrated in Figure 3. We utilized this information because it is provided in the shared task. For real-world application, we understand that this information may not be available. For this, we anticipate using a model to predict the parent classes, before using the synthetic data as the side information.
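A rough sketch of this pairing step with the Hugging Face tokenizer is shown below; this is our illustration rather than the authors' code, and the exact order and format in which the parent class string is supplied is an assumption.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("cardiffnlp/twitter-xlm-roberta-base-sentiment")

def encode_with_parent_class(sentence, parent_class, max_length=128):
    # The parent class (e.g. "derogation") is supplied as the second segment of a
    # sentence pair so the Task C model can condition on it.
    return tokenizer(sentence, parent_class, truncation=True,
                     max_length=max_length, padding="max_length")
```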
## 5 Experimental Setup and Results
### Dataset
For this task, we only used the dataset provided by the organizers. We used a subset of the training data (10%) to develop the systems and maintained the same set during the competition phase. We added the released development data to the other 90% of the training dataset to create a new train set, while maintaining the original development split. Some useful statistics of these datasets are provided in Table 2. For the test set, a total of 4,000 sentences were provided
| **Sentiment** | **#Dev** | **#Train** | **Example** |
| --- | --- | --- | --- |
| Sexist | 340 | 3,544 | Western women are truly the most entitled people to have ever lived on this earth. |
| Not Sexist | 1,060 | 11,056 | Our daughter was murdered, but at least we're not racists! |

Table 1: Dataset Description and Distribution of Sentiment Labels
Figure 3: Task C. Each input sentence is paired with its parent class ["Threats", "Derogation", "Animosity", "Prejudiced Discussion"] before tokenization.
Figure 2: Task B. Using authentic and synthetic training datasets.
for the competition, and we used them as they are to evaluate the performances of the various models.
### Models
For the models, we used the publicly available checkpoints in Huggingface,3,4 training them for 20 epochs using a training batch size of 32 and a maximum sequence length of 128. We used the code5 and default model hyper-parameters as provided in Shode et al. (2022) to train the models. The configuration uses Adam (Kingma and Ba, 2015) for optimization, an initial learning rate of \(5e-5\), and \(1e-8\) epsilon for the Adam optimizer.
Footnote 3: [https://huggingface.co/GroNLP/hateBERT](https://huggingface.co/GroNLP/hateBERT)
Footnote 4: [https://huggingface.co/cardiffnlp/twitter-xlm-roberta-base-sentiment](https://huggingface.co/cardiffnlp/twitter-xlm-roberta-base-sentiment)
Footnote 5: [https://github.com/IyanuSh/YOSM](https://github.com/IyanuSh/YOSM)
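For reference, a comparable setup with the Transformers `Trainer` might look as follows; this is a sketch mirroring the stated hyper-parameters, with dataset preparation omitted, and it is not the authors' actual training script.

```python
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

checkpoint = "GroNLP/hateBERT"   # or "cardiffnlp/twitter-xlm-roberta-base-sentiment"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

args = TrainingArguments(
    output_dir="edos-task-a",
    num_train_epochs=20,
    per_device_train_batch_size=32,
    learning_rate=5e-5,
    adam_epsilon=1e-8,
)

# trainer = Trainer(model=model, args=args,
#                   train_dataset=train_ds, eval_dataset=dev_ds)  # datasets assumed
# trainer.train()
```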
### Results
Our experimental results are presented in Table 3. From these results, we found that the XLM-T model achieved the best performance on the binary classification task. Even though the HateBERT model was retrained on the abusive and banned data from Reddit (the same domain as some of the training and evaluation data), it was not able to outperform XLM-T. We will conduct more in-depth experiments to determine the actual reason for this anomaly.
For Task B, a slight performance improvement was realized after adding the synthetic data, even though the quality of such data is lower than that of the labelled data. This further reinforces the fact that more data is more often than not beneficial to neural models.
For Task C, we recorded a substantial improvement in the performance of the model after supplying the side information (the parent class) to influence its prediction. Rather than just passing a sentence and expecting the model to predict over 11 classes, the model performed better when the parent class was made known to it at the tokenization stage. We anticipate a slight drop in performance if the parent class information is synthetic rather than, in this case, authentic. However, we cannot substantiate the truthfulness or not of this claim, or the extent of the effect, without conducting further experiments.
### Competition rank
We submitted only the XLM-T model in Task A to the competition, and although our model ranked 49\({}^{th}\), the best model in the competition track only outperformed our model by 0.052 in F1-score, as indicated in Table 4.
## 6 Conclusion
In this system description paper, we describe our submission for the subtask A binary classification of comments into sexist and not sexist, submitted to the SemEval-2023 Task 10 - Explainable Detection of Online Sexism (EDOS). We fine-tuned models based on the pre-trained XLM-T and HateBERT language models on the English Gab and Reddit dataset. Our model achieved a competitive result of 0.82 F1-score, slightly below the leader with 0.87. However, this performance is not conclusive, as we observed a great imbalance in the distribution of the dataset, which could have influenced the results. Furthermore, we described our attempts at building competitive models
| # | Team | F1 |
| --- | --- | --- |
| 1 | PingAnLifeInsurance | 0.8746 |
| 2 | stce | 0.8740 |
| 3 | FiRC-NLP | 0.8740 |
| 4 | PALI | 0.8717 |
| 5 | GZHU / UCAS-IIE | 0.8692 |
| \(\vdots\) | \(\vdots\) | \(\vdots\) |
| 49 | HausNLP | 0.8228 |
| \(\vdots\) | \(\vdots\) | \(\vdots\) |
| 84 | NLP_CHRISTINE | 0.5029 |

Table 4: Task A official ranking
| **data** | **# sentences** | **# tokens** |
| --- | --- | --- |
| train | 14,600 | 407,857 |
| dev | 1,400 | 39,326 |
| test | 4,000 | 110,282 |

Table 2: Data split
| Task | Method | F1 |
| --- | --- | --- |
| A | XLM-T | 0.8228 |
| A | HateBERT | 0.8172 |
| B | XLM-T | 0.5981 |
| B | XLM-T+synth | 0.6012 |
| C | XLM-T | 0.3565 |
| C | XLM-T+parent class | 0.4151 |

Table 3: Result of the different tasks.
for Tasks B and C (which we did not submit to the competition). We utilised synthetic data and parent class information to improve the performances of the two models in the respective tasks, and some improvements were observed. Finally, we intend to improve the performances of these models by targeting the data imbalance constraint using data augmentation strategies.
|
2304.00439
|
SoftED: Metrics for Soft Evaluation of Time Series Event Detection
|
Time series event detection methods are evaluated mainly by standard
classification metrics that focus solely on detection accuracy. However,
inaccuracy in detecting an event can often result from its preceding or delayed
effects reflected in neighboring detections. These detections are valuable to
trigger necessary actions or help mitigate unwelcome consequences. In this
context, current metrics are insufficient and inadequate for the context of
event detection. There is a demand for metrics that incorporate both the
concept of time and temporal tolerance for neighboring detections. This paper
introduces SoftED metrics, a new set of metrics designed for soft evaluating
event detection methods. They enable the evaluation of both detection accuracy
and the degree to which their detections represent events. They improved event
detection evaluation by associating events and their representative detections,
incorporating temporal tolerance in over 36\% of experiments compared to the
usual classification metrics. SoftED metrics were validated by domain
specialists that indicated their contribution to detection evaluation and
method selection.
|
Rebecca Salles, Janio Lima, Rafaelli Coutinho, Esther Pacitti, Florent Masseglia, Reza Akbarinia, Chao Chen, Jonathan Garibaldi, Fabio Porto, Eduardo Ogasawara
|
2023-04-02T03:27:31Z
|
http://arxiv.org/abs/2304.00439v2
|
# SoftED: Metrics for Soft Evaluation of Time Series Event Detection
###### Abstract
Time series event detection methods are evaluated mainly by standard classification metrics that focus solely on detection accuracy. However, inaccuracy in detecting an event can often result from its preceding or delayed effects reflected in neighboring detections. These detections are valuable to trigger necessary actions or help mitigate unwelcome consequences. In this context, current metrics are insufficient and inadequate for the context of event detection. There is a demand for metrics that incorporate both the concept of time and temporal tolerance for neighboring detections. This paper introduces SoftED metrics, a new set of metrics designed for soft evaluating event detection methods. They enable the evaluation of both detection accuracy and the degree to which their detections represent events. They improved event detection evaluation by associating events and their representative detections, incorporating temporal tolerance in over 36% of experiments compared to the usual classification metrics. SoftED metrics were validated by domain specialists that indicated their contribution to detection evaluation and method selection.
Time Series Event Detection Evaluation Metrics Soft Computing
## 1 Introduction
In time series analysis, it is often possible to observe a significant change in behavior at a certain point or time interval. Such behavior change generally characterizes the occurrence of an event [29]. An event can represent a phenomenon with a defined meaning in a domain. Event detection is the process of identifying events in time series. With this process, we may be interested in learning/identifying past events [45, 20, 2, 3, 54, 71, 43, 62], identifying events in real-time (online detection) [69, 1, 5, 41], or even predicting future events before they happen (event prediction) [67, 50, 39, 26, 70]. It is recognized as a basic function in surveillance and monitoring systems and has gained much attention in research for application domains involving large datasets from critical systems [45].
To address the task of time series event detection, several methods have been developed and are surveyed in the literature [31, 12, 28, 15, 16, 10, 6, 53]. Each detection method specializes in time series that present different characteristics or make assumptions about the data distribution. Therefore, the assessment of their detection performance is important to infer their adequacy to a particular application [24]. In this case, detection performance refers to how accurate an event detection method is at identifying events in a time series. Detection performance is generally measured by classification metrics [30].
Currently, standard classification metrics (around since the 1950s), including Recall, Precision, and F1, are usually adopted [35, 59]. Although Accuracy is a specific metric [30], the expression detection accuracy is henceforth used to refer to the ability of a method to detect events correctly. Classification metrics focus mainly on an analysis of detection accuracy. On the other hand, inaccuracy in event detection does not always indicate a bad result, especially when detections are sufficiently close to events.
### Motivating example and problem definition
This section gives an example of the problem of evaluating inaccurate event detections and defines the problem of the paper regarding the demand for adequate detection performance metrics. Consider, for example, a time series \(X\) containing an event at time \(t\), represented in Figure 1. Given detection methods A and B applied to \(X\), a user must select one of them as the most adequate for the underlying application. Method A detects an event at time \(t+k_{1}\), while Method B detects an event at time \(t-k_{2}\) (\(k_{2}>k_{1}\)). As none of the methods could correctly detect the event at time \(t\), based on the usual detection accuracy evaluation, the user would deem both as inaccurate and disposable.
However, inaccuracy in detecting an event can often result from its preceding or lingering effects. Take the adoption of a new policy in a business. While a domain specialist may consider the moment of policy enforcement as a company event, its effects on profit may only be detectable a few months later. On the other hand, preparations for policy adoption may be detectable in the antecedent months. Moreover, when accurate detections are not achievable, which is common, detection applications demand events to be identified as soon as possible [35], or early enough to allow necessary actions to be taken, mitigating possible critical system failures or helping mitigate urban problems resulting from extreme weather events, for example. In this context, the results of Methods A and B would be valuable to the user. Note that while Method B seems to anticipate the event, its detection is made after the event's occurrence. On the other hand, the detection of Method A came temporally closer to the event, possibly more representative of its effects.
In this context, evaluating event detection is particularly challenging, and the detection accuracy metrics usually adopted are insufficient and inadequate for the task [56]. Standard classification metrics do not consider the concept of time, which is fundamental in the context of time series analysis, and do not reward early detection [1], for example, or any relevant neighboring detections. For the remainder of this paper, neighboring or close detections refer to detections whose temporal distance to events is within a desired threshold. Current metrics only reward true positives (exact matches in event detection). All other results are "harshly" and equally discredited.
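As a concrete illustration of this 'harsh' scoring, a minimal sketch of exact-match evaluation follows; representing events and detections as sets of time indices is our own simplification.

```python
def hard_scores(events, detections):
    # events, detections: sets of time indices; only exact matches count.
    tp = len(events & detections)
    fp = len(detections - events)
    fn = len(events - detections)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

Under such scoring, the detections of Methods A and B at \(t+k_{1}\) and \(t-k_{2}\) receive no credit at all, no matter how close they are to \(t\).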
In this case, there is a demand to soften the usual concept of detection accuracy and evaluate the methods while considering neighboring detections. However, state-of-the-art metrics designed for scoring anomaly detection [35] are still limited [56], while also being biased towards results preceding events, such as the ones produced by Method B.
Figure 1: Example regarding the problem of evaluating the detection of an event at time \(t\). Method A detects an event at time \(t+k_{1}\), while Method B detects an event at time \(t-k_{2}\) (\(k_{2}>k_{1}\)).
To the best of our knowledge, there are no metrics available in the literature that consider both the concept of time and tolerance for detections that are sufficiently close to time series events. This paper focuses on addressing this demand.
### Contribution
This paper introduces the SoftED metrics, a new set of metrics for evaluating event detection methods regarding both their detection accuracy and ability to produce neighboring detections to events. Inspired by soft (approximate) computing, SoftED metrics are designed for soft evaluation, assessing the degree to which a detection represents a particular event. Hence, they incorporate both the concept of time and temporal tolerance for inaccuracy in event detection evaluation, a scenario that domain specialists and users often face, with no standard or adequate evaluation metrics until now. SoftED metrics soften the standard classification metrics, which are considered in this paper as hard metrics, to support the decision-making regarding the most appropriate method for a given application.
Computational experiments were conducted to analyze the contribution of the developed metrics against the usual hard and state-of-the-art metrics [35]. Results indicate that the developed SoftED metrics improved event detection evaluation by associating events and their representative (neighboring) detections, incorporating temporal tolerance in over 36% of the conducted experiments compared to usual hard metrics. More importantly, surveyed domain specialists validated the contribution of SoftED metrics to the problem of detection method evaluation.
The remainder of this paper is organized as follows. Section 2 provides concepts on time series event detection and reviews the literature on detection performance metrics for detection evaluation. Section 3 formalizes the developed SoftED metrics. Section 4 presents a quantitative and qualitative experimental evaluation of the developed metrics and their empirical results. Finally, conclusions are made in Section 5.
## 2 Literature review
This section provides relevant concepts on time series events and their detection and reviews the literature on detection performance metrics and related works. Events are pervasive in real-world time series, especially in the presence of nonstationarity [29, 51]. Commonly, the occurrence of an event can be detected by observing anomalies or change points. Most event detection methods in the literature specialize in identifying a specific type of event. There exist methods that can detect multiple events in time series, generally involving the detection of both anomalies and change points [36, 4, 70]. Nonetheless, these methods are still scarce. This paper approaches methods for detecting anomalies, change points or both.
### Time series events
Events correspond to a phenomenon, generally pre-defined in a particular domain, with an inconstant or irregular occurrence relevant to an application. In the context of time series, events represent significant changes in expected behavior at a certain time or interval [29]. In general, punctual events of a given time series \(X=<x_{1},x_{2},x_{3},\cdots,x_{n}>\) can be identified in a simplified way by \(e(X,k,\sigma)\) using Eq. 1, where \(k\) represents the number of nearby observations considered. If an observation \(x_{t}\) escapes the expected behavior based on previous \(\{x_{t-k},\ldots,x_{t-1}\}\) or later \(\{x_{t+1},\ldots,x_{t+k}\}\) observations (above a threshold \(\sigma\)), it can be considered an event1.
Footnote 1: Due to limited space, the general formalization of event intervals lies outside the scope of this paper.
\[\begin{split} e(X,k,\sigma)=\{t,|x_{t}-E(x_{t}|\{x_{t-k},\ldots, x_{t-1}\})|>\sigma\ \vee\\ |x_{t}-E(x_{t}|\{x_{t+1},\ldots,x_{t+k}\})|>\sigma\}\end{split} \tag{1}\]
**Anomalies** Most commonly, events detected in time series refer to anomalies. Anomalies appear not to be generated by the same process as most of the observations in the time series [12]. Thus, anomalies can be modeled as isolated observations of the remaining nearby data. In this case, an event identified in \(x_{t}\) can be considered an anomaly if it escapes expected behavior both before and after time point \(t\) according to \(a(X,k,\sigma)\) in Eq. 2. Generally, anomalies are identified by deviations from the time series inherent trend. However, anomalies may also present themselves as data volatility variations.
\[\begin{split} a(X,k,\sigma)=\{t,|x_{t}-E(x_{t}|\{x_{t-k},\ldots, x_{t-1}\})|>\sigma\ \wedge\\ |x_{t}-E(x_{t}|\{x_{t+1},\ldots,x_{t+k}\})|>\sigma\}\end{split} \tag{2}\]
**Change points** Change points in a time series are the points or intervals in time that represent a transition between different states in a process that generates the time series [57]. In this case, a change point event identified in \(x_{t}\) follows
the expected behavior observed before or after the time point \(t\), but not both at the same time according to \(cp(X,k,\sigma)\) in Eq. 3.
\[\begin{split} cp(X,k,\sigma)=\{t,|x_{t}-E(x_{t}|\{x_{t-k},\ldots,x_{t-1}\})|>\sigma\ \ \underline{\vee}\\ |x_{t}-E(x_{t}|\{x_{t+1},\ldots,x_{t+k}\})|>\sigma\}\end{split} \tag{3}\]
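To make the three definitions concrete, the sketch below renders Eqs. 1-3 in Python. It is illustrative only: the conditional expectation \(E(\cdot)\) is assumed here to be the mean of the \(k\) neighboring observations, whereas in practice any model of expected behavior may be used, and the function names are our own.

```python
# Illustrative sketch of Eqs. 1-3. Assumption: E() is approximated by the mean
# of the k neighboring observations; any forecasting model could replace it.
def expected(window):
    return sum(window) / len(window)

def detect(x, k, sigma):
    """Return time points flagged as events (Eq. 1), anomalies (Eq. 2), and
    change points (Eq. 3) for a series x, given neighborhood k and threshold sigma."""
    events, anomalies, change_points = [], [], []
    for t in range(k, len(x) - k):
        before = abs(x[t] - expected(x[t - k:t]))          # deviation w.r.t. previous observations
        after = abs(x[t] - expected(x[t + 1:t + k + 1]))    # deviation w.r.t. later observations
        if before > sigma or after > sigma:                 # Eq. 1: event
            events.append(t)
        if before > sigma and after > sigma:                # Eq. 2: anomaly
            anomalies.append(t)
        if (before > sigma) != (after > sigma):             # Eq. 3: change point (exclusive or)
            change_points.append(t)
    return events, anomalies, change_points
```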
### Event detection
Event detection is the process of identifying the occurrence of such events based on data analysis. It is recognized as a basic function in surveillance and monitoring systems. Moreover, it becomes even more relevant for applications based on time series and sensor data analysis [45]. Event detection methods found in the literature are usually based on model deviation analysis, classification-based analysis, clustering-based analysis, domain-based analysis, or statistical techniques [12, 30, 45, 3]. Regardless of the adopted detection strategy, an important aspect of any event detection method is how the events are reported. Typically, the outputs produced by event detection methods are either scores or labels. Scoring detection methods assign an anomaly score to each instance in the data depending on the degree to which that instance is considered an anomaly. On the other hand, labeling detection methods assign a label (normal or anomalous) to each data instance. Such methods are the most commonly found in the literature [12].
### Detection performance metrics
Detection methods may vary in performance across different time series [23]. Therefore, there is a demand for comparing the results provided by them. Such a comparison aims to guide the choice of suitable methods for detecting events of a time series in a particular application. For comparing event detection methods, standard classification metrics, such as F1, Precision, and Recall, are usually adopted [30, 35, 59].
As usual, the standard classification metrics depend on measures of true positives (\(TP\)), true negatives (\(TN\)), false positives (\(FP\)), and false negatives (\(FN\)) [30]. In event detection, the \(TP\) refers to the number of events correctly detected (labeled) by the method. Analogously, \(TN\) is the number of observations that are correctly not detected. On the other hand, the measure \(FP\) is the number of detections that did not match any event, that is, false alarms. Analogously, \(FN\) is the number of undetected events. Among the standard classification metrics, Precision and Recall are widely adopted. Precision reflects the percentage of detections corresponding to time series events (exactness), whereas Recall reflects the percentage of correctly detected events (completeness). Precision and Recall are combined in the F\({}_{\beta}\) metrics [30]. The F1 metric is also widely used to help gauge the quality of event detection balancing Precision and Recall [59].
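As a reference for the softened versions introduced later, the sketch below computes these hard metrics for a set of labeled detections. It is an illustrative rendering: the function name and the convention that only exact time-point matches count as \(TP\)s are our own choices here.

```python
def hard_metrics(events, detections, series_length, beta=1.0):
    """Standard (hard) classification metrics: only exact matches count as TPs."""
    events, detections = set(events), set(detections)
    tp = len(events & detections)                 # correctly detected events
    fp = len(detections - events)                 # false alarms
    fn = len(events - detections)                 # undetected events
    tn = series_length - len(events | detections) # correctly not detected observations
    precision = tp / (tp + fp) if (tp + fp) else None   # exactness
    recall = tp / (tp + fn) if (tp + fn) else None      # completeness
    f_beta = None
    if precision and recall:
        f_beta = (1 + beta**2) * precision * recall / (beta**2 * precision + recall)
    return {"TP": tp, "FP": fp, "TN": tn, "FN": fn,
            "precision": precision, "recall": recall, "F_beta": f_beta}
```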
Event detection is particularly challenging to evaluate, as discussed in Section 1.1. In this context, the Numenta Anomaly Benchmark (NAB) provided a common scoring algorithm for evaluating and comparing the efficacy of anomaly detection methods [35]. The NAB score metric is computed based on anomaly windows of observations centered around each event in a time series. Given an anomaly window, NAB uses the sigmoidal scoring function to compute the weights of each anomaly detection. It rewards earlier detections within a given window and penalizes \(FP\)s. Also, NAB allows the definition of application profiles: standard, reward low \(FP\)s, and reward low \(FN\)s. The standard profile gives relative weights to \(TP\)s, \(FP\)s, and \(FN\)s based on the window size.
Nonetheless, the NAB scoring system presents challenges for its usage in real-world applications. For example, the anomaly window size is automatically defined as 10% of the time series size, divided by the number of events it contains, values that are generally not known in advance, especially in streaming environments. Furthermore, Singh and Olinsky [56] pointed out poor definitions and arbitrary constants in the scoring equations. Finally, score values increase with the number of events and detections. Every user can tweak the weights in application profiles, making it difficult to interpret and benchmark results obtained by other users or setups.
In addition to NAB [35], this section presents other works related to the problem of analyzing and comparing event detection performance [12]. Recent works focus on the development of benchmarks to evaluate univariate time series anomaly detection methods [34, 8, 66]. Jacob et al. [34] provide a comprehensive benchmark for explainable anomaly detection over high-dimensional time series, while the benchmark developed by Boniol et al. [8] allows the user to assess the advantages and limitations of both anomaly detection methods and detection accuracy metrics.
Standard classification metrics are generally used for evaluating the ability of an algorithm to distinguish normal from abnormal data samples [12, 59]. Aminikhanghahi and Cook [4] review traditional metrics for change point detection evaluation, such as Sensitivity, G-mean, F-Measure, ROC, PR-Curve, and MSE. Detection evaluation measures have also been investigated in the areas of sequence data anomaly detection [13], time series mining and representation [19], and sensor-based human activity learning [17, 64].
Metrics found in the literature are mainly designed to evaluate the detection of punctual anomalies. However, many real-world event occurrences extend over an interval (range-based). Motivated by this, Tatbul et al. [59] and Paparrizos et al. [44] extend the well-known Precision and Recall metrics, and the AUC-based metrics, respectively, to measure the accuracy of detection algorithms over range-based anomalies. Other recent metrics developed for the task of detecting range-based time series anomalies are also included in the benchmark of Boniol et al. [8]. In addition, Wenig et al. [66] published a benchmarking toolkit for algorithms designed for detecting anomalous subsequences in time series [7, 9].
To the best of our knowledge, few works opt to evaluate event detection algorithms based on other than traditional metrics. For example, Wang, Vuran, and Goddard [63] calculate the delay until an individual node and the delivery delay in a transmission network detect an event. The work presents a framework for capturing delays in detecting events in large-scale WSN networks with a time-space simulation. Conversely, Tatbul et al. [59] also observe the neighborhood of event detections, not to calculate detection delays, but to evaluate positional tendency in anomaly ranges. Finally, our previous work uses the delay measure to evaluate the ability of algorithms to detect real events in time series [22]. Escobar et al. [22] study the time distance between event detections and identified events, furthering a qualitative analysis of the tendency of algorithms to detect before or after the occurrence of an event.
Under these circumstances, there is still a demand for event detection performance metrics that incorporate both the concept of time and tolerance for detections that are sufficiently close to time series events. Therefore, this paper contributes by introducing new metrics for evaluating methods regarding their detection accuracy while also considering neighboring detections, incorporating temporal tolerance for inaccuracy in event detection.
## 3 SoftED
This paper adopts a distance-based approach to develop novel metrics designed to evaluate the performance of methods for detecting events in time series. The inspiration for the proposed solution is found in soft (or approximate) computing. Soft computing is a collection of methodologies that exploit tolerance for inaccuracy, uncertainty, and partial truth to achieve tractability, robustness, and low solution cost [60]. In this context, the main proposed idea is to soften the hard metrics (standard classification metrics) to incorporate temporal tolerance or inaccuracy in event detection. Such metrics seek to support the decision-making of the most appropriate method for a given application with a basis not only on the usual analysis of the detection accuracy but also on the analysis of the ability of a method to produce detections that are close enough to actual time series events. Henceforth, the proposed approach is named Soft classification metrics for Event Detection, or SoftED. This section formalizes the SoftED metrics.
Figure 2 gives a general idea of the proposed approach, illustrating the key difference between the standard hard evaluation and the proposed soft evaluation. Blue rhombuses represent actual time series events. Circles correspond to detections produced by a particular detection method. The hard evaluation concerns a binary value regarding whether detection is a perfect match to the actual event. In this case, circles are green when they perfectly match the events and red when they do not. Conversely, soft evaluation assesses the degree to which detection relates to a particular event.
### Defining an event membership function
In order to soften the standard hard metrics, we incorporate a distance-based temporal tolerance for events. It is done by defining the relevance of a particular detection to an event. This section formalizes the proposed approach. Table 1 defines the main variables used in the formalization of SoftED. Consider a time series \(X\) containing a set of \(m\) events, \(E=\{e_{1},e_{2},\ldots,e_{m}\}\), where \(e_{j}\), \(j=1,\ldots,m\), is the j-th event in \(E\) occurring at time point \(t_{e_{j}}\). A particular detection method applied to \(X\) produces a set of \(n\) detections, \(D=\{d_{1},d_{2},\ldots,d_{n}\}\), where \(d_{i}\), \(i=1,\ldots,n\), is the i-th detection in \(D\) indicating the time point \(t_{d_{i}}\) as a detection occurrence.
The degree to which a detection \(d_{i}\) is relevant to a particular event \(e_{j}\) is given by an event membership function \(\mu_{e_{j}}(t)\) as defined in Equation 4 and illustrated in Figure 3(a). This solution was inspired by Fuzzy sets, where we innovate by fuzzifying the time dimension rather than the time series observations [68]. The definition of \(\mu_{e_{j}}(t)\) considers the acceptable tolerance for inaccuracy in event detection for a particular domain application. The acceptable time range in which an event detection is relevant for allowing an adequate response reaction to a domain event is given by the constant \(k\).
\[\mu_{e_{j}}(t_{d_{i}})=max\left(min\left(\frac{t_{d_{i}}-(t_{e_{j}}-k)}{k}, \frac{(t_{e_{j}}+k)-t_{d_{i}}}{k}\right),0\right) \tag{4}\]
Figure 3(b) represents the evaluation of \(\mu_{e_{j}}(t)\) for two detections, \(d_{1}\) and \(d_{2}\), produced by a particular detection method. In this context, \(\mu_{e_{j}}(t_{d_{i}})\) gives the extent to which a detection \(d_{i}\) represents event \(e_{j}\), or, in other words, its temporal closeness to a hard true positive (TP) regarding \(e_{j}\). In that case, detection \(d_{1}\) is closer to a TP, and \(d_{2}\) lies outside the tolerance range given by \(k\) and could be considered a false positive.
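The event membership function of Eq. 4 can be sketched as follows. This is an illustrative Python rendering only (the official SoftED implementation is in R), with the function name chosen here for clarity.

```python
def membership(t_d, t_e, k):
    """Eq. 4: membership of a detection at time t_d to an event at time t_e.

    The value is 1 for an exact match and decays linearly to 0 at a
    temporal distance of k observations (the tolerance constant)."""
    return max(min((t_d - (t_e - k)) / k, ((t_e + k) - t_d) / k), 0)

# For an event at t = 100 and tolerance k = 15:
#   membership(100, 100, 15) -> 1.0   (hard true positive)
#   membership(110, 100, 15) -> 1/3   (close detection, partially rewarded)
#   membership(120, 100, 15) -> 0.0   (outside the tolerance range)
```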
### Maintaining integrity with hard metrics
We are interested that the SoftED metrics still preserve concepts applicable to traditional (hard) metrics. In particular, the SoftED metrics are designed to express the same properties as their hard correspondents. Moreover, they are designed to maintain the reference to the perfect detection performance (score of 1 as in hard metrics) and indicate how close a detection method came to it. In order to achieve this goal, this approach defines constraints necessary for maintaining integrity concerning the standard hard metrics:
1. A given detection \(d_{i}\) must have only one associated score.
2. The total score associated with a given event \(e_{j}\) must not surpass 1.
The first constraint comes from the idea that the detection \(d_{i}\) should not be rewarded more than once. It avoids the possibility of the total score for \(d_{i}\) surpassing the perfect reference score of 1. Take, for example, the first scenario, presented in Figure 3(c), in which we have one detection and many close events. The detection \(d_{1}\) is evaluated for events
\begin{table}
\begin{tabular}{l l l}
\hline \hline
Var. & Value & Description \\
\hline
\(E\) & \(\{e_{1},e_{2},\dots,e_{m}\}\) & set of time series events \\
\(m\) & \(|E|\) & number of events \\
\(j\) & \(1,\dots,m\) & event index \\
\(e_{j}\) & – & the j-th event in \(E\) \\
\(t_{e_{j}}\) & time point & time point where \(e_{j}\) occurs \\
\hline
\(D\) & \(\{d_{1},d_{2},\dots,d_{n}\}\) & set of detections \\
\(n\) & \(|D|\) & number of detections \\
\(i\) & \(1,\dots,n\) & detection index \\
\(d_{i}\) & – & the i-th detection in \(D\) \\
\(t_{d_{i}}\) & time point & time point where \(d_{i}\) occurs \\
\hline \hline
\(k\) & time duration & constant of temporal tolerance for event detections \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Definition of main variables for the formalization of SoftED
Figure 2: The general idea behind the proposed approach comparing the standard “hard” evaluation and the “soft” evaluation of the event detection.
\(e_{1}\), \(e_{2}\), and \(e_{3}\), resulting in three different membership evaluations, \(\mu_{e_{1}}(t_{d_{1}})\), \(\mu_{e_{2}}(t_{d_{1}})\), and \(\mu_{e_{3}}(t_{d_{1}})\), respectively. Nevertheless, to maintain integrity with hard metrics, a given detection \(d_{1}\) must not have more than one score. Otherwise, \(d_{1}\) would be rewarded three times, and its total score could surpass the score of a perfect match, which would be 1.
To address this issue, we devise a strategy for attributing each detection \(d_{i}\) to a particular event \(e_{j}\). The adopted attribution strategy is based on the temporal distance between \(d_{i}\) and \(e_{j}\). It facilitates interpretation and avoids the need for solving an optimization problem for each detection. In that case, we attribute \(d_{i}\) to the event \(e_{j}\) that maximizes the membership evaluation \(\mu_{e_{j}}(t_{d_{i}})\). This attribution is given by \(E_{d_{i}}\) defined in Equation 5. According to Figure 3c, \(d_{1}\) is attributed to event \(e_{1}\) given the maximum membership evaluation of \(\mu_{e_{1}}(t_{d_{1}})\). In case there is a tie for the maximum membership evaluation of two or more events, \(E_{d_{i}}\) represents the set of events to which \(d_{i}\) is attributed. As a consequence, we can also derive the set of detections attributed to each event \(e_{j}\), \(D_{e_{j}}\), defined by Equation 6. The addition of a detection \(d_{i}\) to the set \(D_{e_{j}}\) is further conditioned by the tolerance range, that is, a membership evaluation greater than \(0\) (\(\mu_{e_{j}}(t_{d_{i}})>0\)).
\[E_{d_{i}}=\operatorname*{arg\,max}_{e_{j}}\left(\mu_{e_{j}}(t_{d_{i}})\right) \tag{5}\] \[D_{e_{j}}=\{d_{i}\mid e_{j}\in E_{d_{i}}\land\mu_{e_{j}}(t_{d_{i}})>0\} \tag{6}\]
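A possible rendering of this attribution step is sketched below. It reuses the illustrative membership() helper from the previous sketch and allows ties, as in Equation 5; the function name and data structures are our own.

```python
def attribute(events, detections, k):
    """Eqs. 5-6: attribute detections to events by maximal membership.

    Returns E_d (the events each detection is attributed to, allowing ties) and
    D_e (the detections attributed to each event within the tolerance range)."""
    E_d = {}
    for d in detections:
        best = max(membership(d, e, k) for e in events)
        E_d[d] = {e for e in events if membership(d, e, k) == best}
    D_e = {e: [d for d in detections
               if e in E_d[d] and membership(d, e, k) > 0]
           for e in events}
    return E_d, D_e
```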
The second constraint defined by this approach comes from the idea that a particular detection method should not be rewarded more than once for detecting the same event \(e_{j}\). It assures that the total score of detections for event \(e_{j}\) does not surpass the perfect reference score of 1. Take, for example, the second scenario, presented in Figure 3d, in which we have many detections attributed to the same event. The event \(e_{1}\) is present in the sets \(E_{d_{1}}\), \(E_{d_{2}}\), \(E_{d_{3}}\) and \(E_{d_{4}}\). Moreover, \(D_{e_{1}}\) contains \(d_{1}\), \(d_{2}\), and \(d_{3}\). But in order to maintain integrity with hard metrics, the total score for \(D_{e_{1}}\) (\(\mu_{e_{1}}(t_{d_{1}})+\mu_{e_{1}}(t_{d_{2}})+\mu_{e_{1}}(t_{d_{3}})\)) must not surpass the score of a perfect match.
To address this issue we devise an analogous distance-based strategy for attributing a representative detection \(d_{i}\) to each event \(e_{j}\). In that case, we attribute to event \(e_{j}\) the detection \(d_{i}\), contained in \(D_{e_{j}}\), that maximizes the membership evaluation \(\mu_{e_{j}}(t_{d_{i}})\). This attribution is given by \(\hat{d}_{e_{j}}\) defined in Equation 7. According to Figure 3d, \(e_{1}\) is best represented by detection \(d_{1}\) given the maximum membership evaluation of \(\mu_{e_{1}}(t_{d_{1}})\). As a consequence, we can compute the associated score for each event \(e_{j}\) as \(es(e_{j})\), defined by Equation 8.
\[\hat{d}_{e_{j}}=\operatorname*{arg\,max}_{d_{i}}\left(\{\mu_{e_{j}}(t_{d_{i} })\mid d_{i}\in D_{e_{j}}\}\right) \tag{7}\]
\[es(e_{j})=\mu_{e_{j}}(t_{\hat{d}_{e_{j}}}) \tag{8}\]
Figure 3: Auxiliary plots for comprehension of SoftED. (a) represents an event membership function \(\mu_{e_{j}}(t)\). (b) represents \(\mu_{e_{j}}(t)\) for detections \(d_{1}\) and \(d_{2}\). (c) depicts the example scenario containing one detection to many events, motivating the first constraint of SoftED. (d) depicts the example scenario containing many detections to a single event, motivating the second constraint of SoftED.
Finally, each detection \(d_{i}\) produced by a particular detection method is scored by \(ds(d_{i})\) defined in Equation 9. Representative detections (\(d_{i}=\hat{d}_{e_{j}}\)) are scored based on \(es(e_{j})\). All other detections are scored 0. This definition ensures that the total score for the detections of a particular method does not surpass the number of real events \(m\) contained in the time series \(X\). Equation 10 thus holds, maintaining the reference to the perfect Recall score of the usual hard metrics. Furthermore, it penalizes false positives and multiple detections for the same event \(e_{j}\).
\[ds(d_{i})=\begin{cases}es(e_{j}),&\text{if }\exists e_{j}\in E\mid d_{i}=\hat{d} _{e_{j}}\\ 0,&\text{otherwise}\end{cases} \tag{9}\]
\[\sum_{i=1}^{n}ds(d_{i})\leq m \tag{10}\]
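The scoring rule of Equations 7-9 can be sketched as follows, again reusing the illustrative helpers defined above. Only the best representative of each event is rewarded, so the bound of Equation 10 holds by construction.

```python
def detection_scores(events, detections, k):
    """Eqs. 7-9: score each detection; only the representative detection of an
    event receives that event's score es(e_j), every other detection scores 0."""
    _, D_e = attribute(events, detections, k)
    scores = {d: 0.0 for d in detections}
    for e, attributed in D_e.items():
        if attributed:
            d_hat = max(attributed, key=lambda d: membership(d, e, k))   # Eq. 7
            scores[d_hat] = max(scores[d_hat], membership(d_hat, e, k))  # Eqs. 8-9
    return scores  # sum(scores.values()) <= len(events), i.e., Eq. 10
```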
### Computing the SoftED metrics
The scores computed for each detection \(d_{i}\), \(ds(d_{i})\), are used to create soft versions of the hard metrics TP, FP, TN, and FN, as formalized in Table 2. In particular, while the value of TP gives the number of detections that perfectly matched an event (score of 1), the sum of \(ds(d_{i})\) scores indicates the degree to which the detections of a method approximate the \(m\) events contained in time series \(X\) given the temporal tolerance of \(k\) observations. Hence, the soft version of the TP metric, \(\text{TP}_{s}\), is given by \(\sum_{i=1}^{n}ds(d_{i})\). Conversely, the soft version of FN, \(\text{FN}_{s}\), indicates the degree to which a detection method could not approximate the \(m\) events in an acceptable time range. The \(\text{FN}_{s}\) can then be defined as the difference between the perfect Recall score \(m\) and \(\text{TP}_{s}\), that is, \(m-\text{TP}_{s}\).
On the other hand, while the value of FP gives the number of detections that did not match an event (score of 0), its soft version, \(\text{FP}_{s}\), indicates how far the detections of a method came from the events contained in time series \(X\) given the temporal tolerance of \(k\) observations. In that sense, \(\text{FP}_{s}\) is the complement of \(\text{TP}_{s}\) and can be defined by \(\sum_{i=1}^{n}\left(1-ds(d_{i})\right)\). Finally, the soft version of TN, \(\text{TN}_{s}\), indicates the degree to which a detection method could avoid nonevent observations of \(X\) (\(|t|-m\)). The \(\text{TN}_{s}\) is given by the difference between the perfect specificity score \(|t|-m\) and \(\text{FP}_{s}\), that is, \(\left(|t|-m\right)-\text{FP}_{s}\).
Due to the imposed constraints described in Section 3.2, the defined SoftED metrics \(\text{TP}_{s}\), \(\text{FP}_{s}\), \(\text{TN}_{s}\), and \(\text{FN}_{s}\) hold the same properties and the same scale as the traditional hard metrics. Consequently, using their same characteristic formulas, they can derive soft versions of traditional scoring methods, such as Sensitivity, Specificity, Precision, Recall, and F1. Moreover, SoftED scoring methods still provide the same interpretation while including temporal tolerance for inaccuracy, which is pervasive in time series event detection applications. An implementation of the SoftED metrics in R is made publicly available on GitHub2.
Footnote 2: SoftED implementation, datasets and experiment codes: [https://github.com/cefet-rj-dal/softed](https://github.com/cefet-rj-dal/softed)
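The official implementation referenced above is in R; as an illustration only, the sketch below combines the previous Python helpers to compute the SoftED counts of Table 2 and the derived soft Precision, Recall, and F1.

```python
def softed_metrics(events, detections, series_length, k):
    """SoftED versions of TP, FP, TN, FN (Table 2) and derived soft scores."""
    ds = detection_scores(events, detections, k)
    m = len(events)
    tp_s = sum(ds.values())                  # degree to which detections approximate events
    fp_s = sum(1 - s for s in ds.values())   # complement of TP_s over the n detections
    fn_s = m - tp_s
    tn_s = (series_length - m) - fp_s
    precision = tp_s / (tp_s + fp_s) if (tp_s + fp_s) else None
    recall = tp_s / (tp_s + fn_s) if (tp_s + fn_s) else None
    f1 = (2 * precision * recall / (precision + recall)
          if precision and recall else None)
    return {"TPs": tp_s, "FPs": fp_s, "TNs": tn_s, "FNs": fn_s,
            "precision": precision, "recall": recall, "F1": f1}

# Example: one event at t = 100, detections at t = 90 and t = 300, k = 15.
# The detection at 90 is partially rewarded (membership 1/3); the one at 300 is not.
# softed_metrics([100], [90, 300], 1000, 15)
```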
## 4 Experimental evaluation
SoftED metrics were submitted to an experimental evaluation to analyze their contribution against the traditional hard metrics and the NAB score, both being the current state-of-the-art detection scoring methods [35]. The proposed metrics are evaluated based on two complementary analyses: (i) a quantitative analysis of the effects of the incorporated temporal tolerance in event detection evaluation; and (ii) a qualitative analysis of its contribution under different scenarios. For that, a large set of computational experiments was performed with the application of several different methods for event detection in real-world and synthetic time series datasets containing ground truth event data. Detection results were evaluated based on SoftED, hard, and NAB metrics. First, this section describes the adopted time series datasets and experimental settings. Then, the quantitative and qualitative results are presented.
\begin{table}
\begin{tabular}{l l l l}
\hline \hline
\(\text{TP}_{s}=\) & \(\sum_{i=1}^{n}ds(d_{i})\) & \(\text{FN}_{s}=\) & \(m-\text{TP}_{s}\) \\
\(\text{FP}_{s}=\) & \(\sum_{i=1}^{n}\left(1-ds(d_{i})\right)\) & \(\text{TN}_{s}=\) & \(\left(|t|-m\right)-\text{FP}_{s}\) \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Formalization of SoftED metrics
### Datasets
This section presents the datasets selected for evaluating the SoftED metrics. The selected datasets are widely available in the literature and are composed of simulated and real-world time series regarding several different domain applications such as water quality monitoring (GECCO) [48]3, network service traffic (Yahoo) [65], social media (NAB) [1], oil well exploration (3W) [61], and public health (NMR)4, among others. The selected datasets present over six hundred representative time series containing different types of events. In particular, GECCO, Yahoo, and NAB contain mostly anomalies, while 3W and NMR contain mostly change points. Moreover, the datasets present different types of nonstationarity and statistical properties, allowing a more thorough discussion of the effects of the incorporated temporal tolerance on event detection evaluation over diverse datasets.
Footnote 3: The GECCO dataset is provided by the R-package EventDetectR [40].
Footnote 4: The NMR dataset was produced by Fiocruz and comprised data on neonatal mortality in Brazilian health facilities from 2005 to 2017. It is publicly available at [https://doi.org/10.7303/sym23651701](https://doi.org/10.7303/sym23651701).
### Experimental settings
For evaluating the SoftED metrics, a set of up to 12 different event detection methods was applied to all time series in the adopted datasets, totaling \(4,026\) event detection experiments. Each experiment comprised an offline detection application, where the methods had access to the entire time series given as input. The applied detection methods are implemented and publicly available in the _Harbinger_ framework [52]. It integrates and enables the benchmarking of different state-of-the-art event detection methods. These methods encompass searching for anomalies and change points through different techniques, including statistical, volatility, proximity, and machine learning methods. The adopted methods are described in detail by Escobar et al. [22], namely: the Forward and Backward Inertial Anomaly Detector (FBIAD) [38], K-Nearest Neighbors (KNN-CAD) [25], anomalize (based on time series decomposition [21, 55]) [28, 18], and GARCH [11], for anomaly detection; the Exponentially Weighted Moving Average (EWMA) [47], the seminal method of detecting change points (SCP) [29], and ChangeFinder (CF) [57], for change point detection; and the machine learning methods based on Feed-Forward Neural Networks (NNET) [49], Convolutional Neural Networks (CNN) [27, 37], Support Vector Machines (SVM) [14, 46], Extreme Learning Machines (ELM) [32, 58], and K-MEANS [42], for general purpose event detection.
In each experiment, detection methods were evaluated using the hard metrics Precision, Recall, and F1. Among them, the F1 was the main metric used for comparison. The NAB score was also computed with the standard application profile for each detection method result. The NAB scoring algorithm is implemented and publicly available in the R-package _otstad_[33]. Anomaly window sizes were automatically set. During the computation of the NAB score, confusion matrix metrics are built. Based on these metrics, the F1 metric of the NAB scoring approach was also computed. Finally, the SoftED metrics were computed for the soft evaluation of the applied event detection methods. In particular, as the constant of temporal tolerance, \(k\), is domain-dependent, it was set to 15 in this experimental evaluation unless stated otherwise, defining a tolerance window of 30 observations, which is enough to hold the central limit theorem. Nevertheless, values of \(k\) in \([30,45,60]\) were also experimented with for sensitivity analysis. The datasets and codes used in this experimental evaluation were made available for reproducibility.
### Quantitative analysis
This section presents a quantitative analysis of the SoftED metrics. The main goal of this analysis is to assess the effects of temporal tolerance incorporated by SoftED in event detection evaluation. Four experiments were conducted to compare SoftED metrics against hard metrics and the NAB score.
**Experiment 1** The first experiment assesses the number of times SoftED metrics considered more _TP_s while evaluating detection methods, to answer whether SoftED can incorporate temporal tolerance into event detection evaluation. For that, we are interested in comparing the SoftED F1 and hard F1 metrics. Time series detections where SoftED F1 was higher than its corresponding hard metric represent the incorporation of temporal tolerance. In contrast, detection results that maintained an unchanged F1 score had their evaluation confirmed, representing either perfect Recall scenarios in which no tolerance is needed or scenarios with a low rate of neighboring detections in which there are few opportunities for tolerance. Finally, there are inaccurate results, with detections that did not allow temporal tolerance: they present zero Precision/Recall, and no detections were sufficiently close to events given the defined tolerance level (\(k=15\)). In the latter case, the F1 metric cannot be computed.
Figure 4(a) compares the SoftED F1 and hard F1 metrics for each adopted dataset. In blue are presented the percentages of time series detections where SoftED F1 was able to incorporate temporal tolerance. SoftED metrics incorporated temporal tolerance in over 43% (NMR) and at least 25% (3W) of detection method evaluations in all datasets. In total,
36% of the overall conducted time series detections were more tolerantly evaluated (in blue). Furthermore, 45% of all detection results had their evaluation confirmed (in gray), maintaining an unchanged F1 score, reaching a maximum of 64% for the NAB dataset and a minimum of 6% for the NMR dataset. Finally, the other 19% of the overall results corresponded to inaccurate detections that did not allow temporal tolerance (in red). The percentages of inaccurate results (F1 n/a) for each dataset are also given in Figure 4(a) in red.
Figure 4(b) shows the cases of incorporated temporal tolerance in detail. The datasets NAB and Yahoo presented an increase in F1 in 11% and 29% of the cases, respectively (lighter blue). The other respective 15% and 9% are cases in which methods got no \(TP\)s, presenting zero Precision/Recall and non-applicable F1 based on hard metrics (darker blue). Nonetheless, SoftED could score sufficiently close detections, enabling the evaluation of such methods. This is also the case for almost all evaluations of the 3W and NMR datasets that had incorporated temporal tolerance. In fact, in total 17% of the overall conducted detection evaluations could not have been made without SoftED metrics incorporating temporal tolerance.
It is possible to observe in Figure 4 that the datasets containing more anomalies (NAB and Yahoo) got more accurate detection results. This occurs because most adopted methods are designed for anomaly detection. Also, the number of anomaly events in their time series gives several opportunities for incorporating temporal tolerance and increasing F1. On the other hand, the 3W and NMR datasets, containing only one or two change points per series, got a higher rate of inaccurate detections. These results indicate that change points pose a particular challenge for detection evaluation. SoftED metrics contribute by incorporating temporal tolerance whenever possible and scoring methods that could otherwise be disregarded.
**Experiment 2** The second experiment focuses on whether the temporal tolerance incorporated by SoftED can affect the selection of different detection methods. For that, we measured the number of times the use of SoftED metrics as criteria changed the ranking of the best-evaluated detection methods. Figure 5 presents the changes in the top-ranked methods for each time series based on the SoftED F1 metric compared to the hard F1. For all datasets, there were changes in the best-evaluated detection method (Top 1) in up to 74% (NMR) and at least 6% (Yahoo) of the cases (in blue), affecting the recommendation of the most suitable detection method for their time series. While the most accurate results maintained their top position (in dark gray), over all adopted time series, 31% of detection methods that could have been dismissed became the most prone to selection.
Furthermore, SoftED metrics also caused changes in the second (Top 2) and third-best (Top 3) evaluated methods. Percentages for each dataset are depicted in Figure 5; however, over all adopted time series, 24% of the methods in the Top 2 climbed to that position, while 16% dropped to that position when other methods assumed the Top 1 (in light gray). For methods in the Top 3, 23% climbed to the position, and in 24% of the cases, they were pushed down by methods that climbed to the first two rank positions. Due to the higher rates of perfect Recall results in the NAB and Yahoo datasets (Figure 4), most of the methods applied maintained their ranking positions at the top. In contrast, the 3W and NMR datasets presented more changes in ranking based on the SoftED metrics, affecting the selection of suitable methods, especially for change point detection.
**Experiment 3** Having analyzed the incorporated temporal tolerance and its effects on the ranking of detection methods, the third experiment encompasses a sensitivity analysis to answer the question of how SoftED is affected by different
Figure 4: Incorporated temporal tolerance from SoftED F1 metric evaluation of event detection methods compared to hard F1 metric.
levels of temporal tolerance. The temporal tolerance level of SoftED metrics is given by the \(k\) constant set to 30, 45, and 60, besides the minimum value of 15 as in the previous experiments. Figure 6 presents the average difference between SoftED and hard Precision and Recall metrics given the different levels of temporal tolerance for each dataset. Overall, as temporal tolerance increases, more \(TP\)s were considered, and metrics increased in value, which means the detection methods were more tolerantly evaluated. In particular, higher levels of temporal tolerance lead to a decrease in the number of \(FN\)s, which most directly affected Recall values.
**Experiment 4** The last quantitative experiment aims to answer whether there is a difference between the temporal tolerance incorporated by SoftED and the tolerance incorporated by NAB score anomaly windows. For that, we measured the number of times the NAB F1 metrics, derived from the NAB scoring algorithm, considered more \(TP\)s than hard F1 metrics while evaluating detection methods. This measure is compared against the tolerance incorporated by SoftED metrics presented in Experiment 1 (Figure 4a). For the datasets 3W, NAB, and Yahoo, NAB increased the incorporated tolerance by 42%, 40%, and 39%, respectively, whereas for the NMR dataset the percentage of incorporated tolerance decreased by 6%.
NAB metrics were more tolerant than SoftED in method evaluations over most datasets, which does not necessarily mean better. The tolerance level incorporated by NAB depends directly on the anomaly window size, which is automatically set by the algorithm. Table 3 presents the interval and the average of the anomaly window sizes set for the time series of each dataset. While the tolerance level given by SoftED was consistently set by \(k=15\), giving a tolerance window
Figure 5: Changes in the ranking of top evaluated event detection methods based on the SoftED F1 metric compared to hard F1 metric
Figure 6: Average difference between SoftED and hard Precision and Recall metrics given different levels of temporal tolerance
of \(30\) observations, the NAB anomaly windows were mostly wider, reaching a maximum of \(12,626\) observations, or \(1,357\) on average, for the 3W dataset. Wider anomaly windows allow a greater number of hard \(FP\)s to be considered \(TP\)s, which causes F1 metrics to increase in value. It is similar to what was discussed in Experiment 3, explaining the increase in tolerance opportunities. The inverse is also true, as exemplified by the NMR dataset, for which anomaly windows did not surpass \(14\) observations, decreasing the number of tolerance opportunities compared to SoftED.
Overall, wider anomaly windows caused the NAB score to increase its computation time by three orders of magnitude on average compared to hard metrics (\(1\) versus \(1\times 10^{-3}\) seconds). In contrast, SoftED increased metrics computation time by only one order of magnitude over the hard metrics (\(1\times 10^{-2}\) versus \(1\times 10^{-3}\) seconds). Moreover, it is important to note from Table 3 that the anomaly window size computation proposed by the NAB algorithm allows the definition of zero-sized windows, which give no tolerance to inaccuracy (as in hard metrics), or narrow windows, which are not wide enough to hold the central limit theorem, guaranteed in the SoftED results. On the other hand, the automatic definition of the NAB window size is not domain-dependent. Consequently, domain specialists may find windows too wide or too narrow for their detection application, making the incorporated tolerance and metric results non-applicable or at least difficult to interpret. In this context, SoftED contributes by allowing domain specialists to define the desired temporal tolerance level for their detection method results.
### Qualitative analysis
This section presents a qualitative analysis of the SoftED metrics and the scenarios in which they bring the most contribution compared to hard metrics and the NAB score. To this end, we have surveyed \(13\) specialists from three domains: oil exploration, public health, and weather monitoring. We interviewed \(3\) specialists from Petrobras (the Brazilian oil company), \(5\) specialists from the Oswaldo Cruz Foundation (Fiocruz), linked to the Brazilian Ministry of Health and the most prominent institution of science and technology applied to health in Latin America, and \(5\) weather forecast specialists from the Rio Operations Center (COR) of the City Hall of Rio de Janeiro. All interviewed specialists work on the problem of time series analysis and event detection daily. Furthermore, we also surveyed another \(57\) student volunteers from the Federal Center for Technological Education of Rio de Janeiro (CEFET/RJ) and the National Laboratory for Scientific Computing (LNCC), totaling \(70\) participants.
The survey addressed the problem of selecting the most suitable method in six experiments, each representing a particular event detection scenario. Two event detection methods (A and B) were applied to a representative time series of the GECCO, \(3\)W, or NMR datasets for each experiment. The plots of the detection results were presented to participants as in Figure 7, where blue dots represent events, red dots represent detections, and green dots represent detections that match events. Moreover, we presented the participants with Table 4, containing detection evaluation metrics computed for methods A and B for each experiment scenario, namely the F1 metric, in its hard and SoftED versions and the NAB score. Values that maximize each metric and could be used for recommending a particular method are underlined.
Given the results of both methods, we asked the participants to analyze the plots in Figure 7 and answer, for each experiment, the first question of the survey:
Question 1 - Which event detection method performed better?
Question 1 was closed with three disjoint options: _Method A_, _Method B_, or _None_. The main goal of Question 1 was to get the specialists' intuitive and personal opinions of the most suitable detection method for selection on that particular application scenario.
Next, we asked the participants to analyze the metrics in Table 4 and answer, for each experiment, the second question of the survey:
Question 2 - Which metric corroborates with your opinion?
\begin{table}
\begin{tabular}{l l l}
\hline \hline
\multirow{2}{*}{**Dataset**} & \multicolumn{2}{c}{**Anomaly window sizes**} \\
\cline{2-3}
 & **Interval** & **Mean** \\
\hline
3W & [52, 12626] & \(1357\) \\
NAB & [0, 902] & \(286\) \\
Yahoo & [0, 168] & \(38\) \\
NMR & [0, 14] & \(13\) \\
\hline \hline
\end{tabular}
\end{table}
Table 3: Summary of anomaly window sizes automatically set by the NAB scoring algorithm for each dataset
Question 2 was also closed with three joint options: _F1_, _NAB score_, or _Other_. The main goal of Question 2 was to assess the metrics (and corresponding evaluation approach) that would further the selection of the most suitable detection method in that particular application scenario, according to specialist opinion.
\begin{table}
\begin{tabular}{c c c c c}
\hline \hline
\multirow{2}{*}{**Experiment**} & \multirow{2}{*}{**Method**} & \multicolumn{3}{c}{**Metric**} \\
\cline{3-5}
 & & \multicolumn{2}{c}{**F1**} & \multirow{2}{*}{**NAB score**} \\
 & & **Hard** & **SoftED** & \\
\hline
\multirow{2}{*}{**5**} & A & 0.43 & 0.43 & 16.05 \\
 & B & \(\underline{1}\) & \(\underline{1}\) & \(\underline{35.87}\) \\
\hline
\multirow{2}{*}{**6**} & A & n/a & 0.07 & -36.53 \\
 & B & n/a & 0.01 & -50.17 \\
\hline
\multirow{2}{*}{**7**} & A & n/a & 0.87 & 0.94 \\
 & B & n/a & 0.87 & 0.77 \\
\hline
\multirow{2}{*}{**8**} & A & n/a & 0.12 & 0.85 \\
 & B & n/a & 0.6 & 0.85 \\
\hline
\multirow{2}{*}{**9**} & A & n/a & 0.07 & 1.89 \\
 & B & n/a & 0.87 & 1.76 \\
\hline
\multirow{2}{*}{**10**} & A & 1 & 1 & 0.88 \\
 & B & n/a & 0.53 & \(\underline{1}\) \\
\hline \hline
\end{tabular}
\end{table}
Table 4: Event detection metrics for methods A and B for each experiment scenario. Values that could be used for recommending a particular method are underlined.
Figure 7: Detection results of experiments for qualitative analysis, each representing a different scenario of comparison of two given event detection methods (A and B). Blue dots refer to time series events, red dots refer to method detections, and green dots refer to detections that coincide with events. (a) and (b) show time series of the GECCO dataset (variables Trueb and pH). (c) and (f) show time series of the NMR dataset (health facilities code 2080052 and 2295407). (d) and (e) show time series of the 3W dataset (event type 2, variable P-PDG and event type 6, variable P-MON-CKP).
**Experiment 5** The first survey experiment refers to a scenario of perfect Recall, where all events contained in a time series of the GECCO dataset were detected by both Method A and Method B. However, Method A presents more detections (in red). From Table 5, we observe that almost all specialists (12/13) agreed that Method B performed better. According to them, Method B managed to minimize _FP_s, presenting a higher Precision rate, indicated by the F1 metric, which was also the winning response for Question 2 with 12/13 votes. For this experiment, both hard and SoftED F1 give the same evaluation of Method B so that both approaches can be used for recommendation.
Nonetheless, 6 specialists (46%) also selected the NAB score, which also corroborates with the recommendation of Method B. Other specialists said they preferred not to select the NAB score, as they were unfamiliar with the metric and wanted to avoid drawing any conclusions with this experiment. Overall, all computed metrics corroborated with specialists' opinions recommending Method B as the most suitable for the application. This result indicates that the evaluation of decidedly good detection performances based on SoftED and other state-of-the-art metrics available in the literature is still valid.
**Experiment 6** The second survey experiment is based on another time series taken from the GECCO dataset. It addresses the scenario in which Method A and Method B presented detections that, despite not coinciding with the events contained in the series, are in the surroundings or the neighborhood of the events. Furthermore, Method A and Method B detections differ in the distance to events. In this case, most specialists (11/13) agreed to select Method A as giving the best detection performance, as its detections are temporally closer to the events. Both metrics, F1 and NAB score, corroborated with specialists' opinions recommending Method A, while F1 was the winning response to Question 2. At this point, it is important to note that the hard approach to F1 computation can no longer give an evaluation for the methods, as both results had no Precision or Recall. Hence, the winning response for Question 2 regards the F1 metric produced by the SoftED approach as the one that furthers the selection of the best detection performance according to specialists.
**Experiment 7** The third survey experiment is based on a time series from the NMR dataset containing monthly neonatal mortality rates for a healthcare facility in Brazil over the years. In this scenario, Method A and Method B produced only one detection close to the event contained in the series. The detections of Method A and Method B are symmetric. They have the same distance from the event and differ only in whether they come before or after it, respectively. In this experiment, almost all specialists (12/13) responded that Method A gave the best detection performance, as it seems to anticipate the event, allowing time to take prior needed actions. Furthermore, as there was a tie regarding the F1 metrics, the NAB score was the winning response, corroborating with specialists' opinions.
However, a public health specialist from Fiocruz disagreed and responded that none of the methods performed better, which is corroborated by the F1 metrics. For example, consider implementing a public health policy in which a human milk bank is supposed to decrease neonatal mortality rates. Although it makes sense to detect the first effects of preparing for the implementation of the policy, it may not be reasonable to give greater weight to anticipated detections rather than the detection of the effects after the implementation. They defend:
\begin{table}
\begin{tabular}{c c c c c c c c c}
\hline \hline
\multirow{3}{*}{**Experiment**} & \multicolumn{6}{c}{**Specialists responses**} & \multicolumn{2}{c}{**Volunteer winning responses**} \\
\cline{2-9}
 & \multicolumn{3}{c}{**Question 1**} & \multicolumn{3}{c}{**Question 2**} & \multirow{2}{*}{**Question 1**} & \multirow{2}{*}{**Question 2**} \\
\cline{2-7}
 & **Method A** & **Method B** & **None** & **F1** & **NAB score** & **Other** & & \\
\hline
**5** & 1 (8\%) & 12 (92\%) & 0 (0\%) & 12 (92\%) & 6 (46\%) & 1 (8\%) & Method B (84\%) & F1 (96\%) \\
**6** & 11 (84\%) & 1 (8\%) & 1 (8\%) & 12 (92\%) & 7 (54\%) & 1 (8\%) & Method A (88\%) & F1 (96\%) \\
**7** & 12 (92\%) & 0 (0\%) & 1 (8\%) & 4 (31\%) & 12 (92\%) & 0 (0\%) & Method A (84\%) & NAB Score (82\%) \\
**8** & 0 (0\%) & 12 (92\%) & 1 (8\%) & 13 (100\%) & 1 (8\%) & 0 (0\%) & Method B (86\%) & F1 (98\%) \\
**9** & 3 (23\%) & 10 (77\%) & 0 (0\%) & 11 (85\%) & 5 (38\%) & 0 (0\%) & Method B (74\%) & F1 (86\%) \\
**10** & 10 (77\%) & 3 (23\%) & 0 (0\%) & 10 (77\%) & 6 (46\%) & 0 (0\%) & Method A (82\%) & F1 (89\%) \\ \hline \hline \end{tabular}
\end{table}
Table 5: Domain specialists’ responses to the survey questions for each experiment scenario. The winning responses are underlined. Volunteers winning responses are also given for comparison with the non-specialist common sense.
It is important to deepen the understanding of the context of the event and the reach of its effects (before and after).
**Experiment 8** The fourth survey experiment is based on a time series taken from the 3W dataset produced by Petrobras. In this scenario, Method A and Method B presented detections close to the event contained in the series, differing only in the number of detections made. The closest detections for both methods have the same distance from the event. For this experiment, except for one specialist who responded _None_ to Question 1, all specialists agreed that Method B performed better. As it minimizes the overall \(FP\)s, it increases Precision, which conditions F1, the winning response of Question 2, selected by 100% of the specialists. The NAB score indicates a tie between both methods, therefore not penalizing the excess \(FP\)s, and the hard F1 does not provide any method evaluation. In this case, the SoftED F1 metric is the only one that corroborates with specialists' opinions.
**Experiment 9** The fifth survey experiment is based on another time series taken from the 3W dataset. This scenario addresses the problem of evaluating methods based on their detection proximity to events. In this experiment, both Method A and Method B presented a detection close and antecedent to the two events contained in the series. Method A and Method B differ only concerning the distance of their detections to the events. Most specialists (10/13) agreed that Method B performed better, as they say:
It seems reasonable to give greater weight to detections closer to the actual events.
Again, the only metric that corroborated with the specialists' opinion was the F1 from SoftED. Furthermore, specialists mentioned that SoftED F1 was approximately 12 times greater for Method B than for Method A, while the difference in the NAB score did not seem high enough to give the same confidence in results from Method A.
**Experiment 10** Finally, we used another time series of neonatal mortality rates from the NMR dataset for the sixth and final survey experiment. This experiment addresses the problem of detection bias in detection evaluation. Method A and Method B produced a detection related to the event contained in the series. However, Method A and Method B detections differ regarding their distance to the event. Method A managed to correctly detect the time series event, while the detection of Method B came close before the event. Most specialists (10/13) agreed that Method A performed better since it produced, for all intents and purposes, a \(TP\), presenting perfect Recall and perfect Precision. On the other hand, the evaluation of Method B depended solely on the incorporation of temporal tolerance.
As the metrics disagree on the recommendation, the F1 metric again corroborates the specialists' opinion, being the winning response for Question 2. In particular, the SoftED F1 metric is the only approach that recommends Method A. The hard F1 metric cannot be computed for Method B, being incomparable. The difference in metric values of SoftED is also greater than for the NAB score, increasing confidence in the recommendation.
### Summary of results and discussion
Given different detection evaluation scenarios, the majority of the surveyed domain specialists agreed that the most desired detection method for selection was the one that minimizes \(FP\)s and \(FN\)s, giving higher Precision and Recall rates, while also producing detections that are temporally closer to the events. In this context, F1 was the metric that most corroborated specialists' opinions in 5 of the 6 experiments. In particular, the SoftED F1 metric was the only one that furthered the selection of the most desired detection method according to specialists in four experiments, only tying with hard F1 in Experiment 5. To elaborate, a domain specialist from Fiocruz argued that:
For health policies, for example, the SoftED approach seems to make more sense since the hard and NAB approaches do not seem adequate for events that produce prior and subsequent effects that may have a gradual and even non-monotonous evolution.
Volunteer winning responses in Table 5 also indicate that common sense does not differ from specialist opinion, which means the contribution of SoftED metrics is noticeable even to a wider and non-specialist research public.
There was still one experiment (Experiment 7) where the NAB score was the metric that most corroborated with specialists. They claimed that for their usual detection application, it is interesting to have \(FP\)s (warnings) before the event, so there is a time window for measures to be taken to prevent any of its unwelcome effects. Also, the longer the time window set by \(FP\)s preceding the event, the better, as there is more valuable time to take preventive actions. This argument was consistently presented by all specialists who either disagreed regarding Question 1 or gave the NAB score as a response to Question 2 across the experiments. This argument demands a deeper discussion.
At this point, it is important to mention that to avoid bias in the responses, the discussion regarding our motivating example of Section 1.1 was not presented prior to the interviews. Hence, detections that preceded the events were
misconceived as event predictions [70]. However, this problem was not in the scope of our experiments. Also, as discussed in Section 1.1, detections preceding events can be made past their occurrence. Evaluating methods that anticipate events is not about how temporally distant a preceding detection is from the events. Rather, it is about the time lag necessary for a method to detect the event accurately. The misconception regarding detections that preceded the events was addressed in detail by the end of the survey interviews.
After discussion and deliberation, all disagreeing specialists changed their opinion and sided with the majority that evaluated methods regarding contexts (i) and (ii). Also, all specialists rethought their responses for Experiment 7. Finally, all domain specialists agreed that:
SoftED metrics contribute to the problem of detection method evaluation and selection in different domains.
They allow the assessment of the adequacy of a method for a time series event detection application regarding the quality of its detections and its ability to approximate the events compared to other methods. Furthermore, the specialists see the evaluation regarding detection lags, that is, the analysis of its ability to anticipate (or not) the events as complementary.
## 5 Conclusions
This paper introduced the SoftED metrics, new softened versions of the standard classification metrics designed to incorporate temporal tolerance in evaluating the performance of methods for detecting events in time series applications. SoftED metrics support the comparative analysis of methods based on their ability to produce detections of interest to the user, given the desired tolerance level, both accurately and in the neighborhood of the events.
The SoftED metrics were quantitatively and qualitatively evaluated and compared against the current state-of-the-art in detection scoring methods. They incorporated temporal tolerance in event detection, enabling evaluations that could not have been made without them while also confirming accurate results. Consequently, SoftED metrics changed evaluation rankings, causing detection methods that could be disregarded to become the top best evaluated and most prone to selection. Moreover, surveyed domain specialists noted the contribution of SoftED metrics to the problem of detection method evaluation in different domains. In particular, SoftED metrics were the only metrics able to improve the selection of the most desired detection method, according to specialists in most experimental scenarios.
Specialists suggest that SoftED metrics are particularly adequate for evaluating detections of domain events that produce prior and subsequent effects of gradual or non-monotonous evolution. At the same time, they can also be used to benchmark different initial conditions, parameters, and threshold values, whose definition is one of the main challenges for event detection algorithms [39]. Furthermore, analyzing a method's ability to anticipate (or not) the events is complementary after the evaluation enabled by SoftED metrics.
## Acknowledgments
The authors thank CNPq, CAPES (finance code 001), FAPERJ, and CEFET/RJ for partially funding this research.
## Conflict of interest
On behalf of all authors, the corresponding author states that there is no conflict of interest.
|
2305.00710
|
Multi-Fidelity Data-Driven Design and Analysis of Reactor and Tube
Simulations
|
The development of new manufacturing techniques such as 3D printing have
enabled the creation of previously infeasible chemical reactor designs.
Systematically optimizing the highly parameterized geometries involved in these
new classes of reactor is vital to ensure enhanced mixing characteristics and
feasible manufacturability. Here we present a framework to rapidly solve this
nonlinear, computationally expensive, and derivative-free problem, enabling the
fast prototype of novel reactor parameterizations. We take advantage of
Gaussian processes to adaptively learn a multi-fidelity model of reactor
simulations across a number of different continuous mesh fidelities. The search
space of reactor geometries is explored through an amalgam of different,
potentially lower, fidelity simulations which are chosen for evaluation based
on weighted acquisition function, trading off information gain with cost of
simulation. Within our framework we derive a novel criteria for monitoring the
progress and dictating the termination of multi-fidelity Bayesian optimization,
ensuring a high fidelity solution is returned before experimental budget is
exhausted. The class of reactor we investigate are helical-tube reactors under
pulsed-flow conditions, which have demonstrated outstanding mixing
characteristics, have the potential to be highly parameterized, and are easily
manufactured using 3D printing. To validate our results, we 3D print and
experimentally validate the optimal reactor geometry, confirming its mixing
performance. In doing so we demonstrate our design framework to be extensible
to a broad variety of expensive simulation-based optimization problems,
supporting the design of the next generation of highly parameterized chemical
reactors.
|
Tom Savage, Nausheen Basha, Jonathan McDonough, Omar K Matar, Ehecatl Antonio del Rio Chanona
|
2023-05-01T08:20:13Z
|
http://arxiv.org/abs/2305.00710v3
|
# Multi-Fidelity Data-Driven Design and Analysis of Reactor and Tube Simulations
###### Abstract
Optimizing complex reactor geometries is vital to promote enhanced efficiency. We present a framework to solve this nonlinear, computationally expensive, and derivative-free problem. Gaussian processes are used to learn a multi-fidelity model of reactor simulations correlating multiple continuous mesh fidelities. The search space of reactor geometries is explored through lower fidelity simulations, evaluated based on a weighted acquisition function, trading off information gain with cost. Within our framework, DARTS, we derive a novel criterion for dictating optimization termination, ensuring a high fidelity solution is returned before budget is exhausted. We investigate the design of helical-tube reactors under pulsed-flow conditions, which have demonstrated outstanding mixing characteristics. To validate our results, we 3D print and experimentally validate the optimal reactor geometry, confirming mixing performance. Our framework is applicable to a wide variety of expensive simulation-based optimization problems, supporting the design of the next generation of highly parameterized chemical reactors.
## 1 Introduction
Processes to transform raw materials into valuable products will always need to occur; as markets grow, practitioners seek to identify novel economically and environmentally friendly alternatives to existing solutions. At the core of these processes, chemical reactors have long been studied and optimized for industrial chemical synthesis. However, as batch processes are being made continuous [42; 37], and specialty products such as biologically-derived chemicals enter industrial viability [50], there is a need for the design of novel case-specific reactors.
Novel reactor configurations have increasingly been considered for chemical synthesis, for example, microfluidic reactors [8; 25], and mesoscale reactors [41]. Microfluidic reactors can enable finer
control over local conditions resulting in increased product selectivity, and improved heat transfer resulting in more sustainable processes [8]. 3D printed mesoscale reactors have been proposed as next-generation alternatives to traditionally manufactured designs, owing to their large potential design space. 3D printed reactors for biodiesel production have been shown to provide high yields [41], and helical coil-in-coil reactors, unable to be manufactured otherwise, have been shown to demonstrate 'excellent plug-flow behavior' [31, 28, 29].
The modeling and design of traditional chemical reactors has largely been considered an art [15], with small improvements in performance resulting in wider impacts on product yield, sustainability, and economic costs. However, with the promise of new reactors comes the need for new analytical techniques to model and optimize in increasingly complex design spaces.
Chemical reactors have been investigated through computational fluid dynamics (CFD) simulations, where systems of partial differential equations (PDEs) with large degrees of freedom are solved iteratively, resulting in large computational costs. In addition to being expensive, gradient information is practically unavailable. In some simulation-based scenarios, optimization quantities can be obtained directly from a simulation [24]. As such, the adjoint method can be applied to derive gradient quantities which can be used within a gradient-based local optimization scheme such as the Newton method or BFGS [22, 33, 36]. However, in chemical reactor-based domains, performance is often quantified through a secondary quantity. For example, a tracer is simulated to pass through the reactor, and a value\({}^{2}\) is then derived from this concentration profile through an additional fitting procedure [44]. This results in a scenario in which the gradient of a simulation is practically inaccessible, and derivative-free optimization must be applied.
Footnote 2: For example the number of equivalent tanks-in-series
Derivative-free optimization has found significant application in domains where mathematical expressions or gradients are unavailable. Examples include the optimization of proprietary chemical process software [43, 4], chemical reaction optimization [10], real time optimization [51, 7], and topology optimization of two-dimensional chemical reactor channels [5]. With the advent of new technologies in reactor design, reactor geometries are becoming highly-parameterized, resulting in higher-dimensional, more complex derivative-free optimization problems. As such, there exists significant scope for a robust, domain-specific approach for the optimization of simulated chemical reactors to support the next generation of sustainable chemical processes.
In many real-world and simulated engineering systems, differing quality evaluations of quantities of interest exist. Reactor performance can be quantified by a correlation of dimensionless numbers, a CFD simulation, or a pilot-scale experiment. These all attempt to capture the true underlying performance of a system, with differing accuracies and associated costs. Taking the view that only the industrial-scale reactor in its intended setting will provide a true evaluation of performance: any approximation to this, including pilot and lab-scale experiments, simulations, and basic calculations all become valid lower-fidelity evaluations which may be used simultaneously for design and optimization. For CFD simulations of a reactor, fidelities are most often associated with the number of finite element cells in a simulation as they dictate the accuracy and computational cost. By motivating the notion that all predicted, unmeasured quantities derive from a lower-fidelity approximation to a desired high-fidelity function, it becomes pertinent to investigate methodologies that apply these approximations to learn about the true system of interest.
The purpose of this article is to formulate the design of a simulated helical-tube reactor as a multi-fidelity black-box optimization problem. We present a novel approach building upon previous work and demonstrate our framework for the Design and Analysis of Reactor and Tube Simulations, DARTS, by applying it for the simultaneous optimization of a helical-tube reactor geometry and operating conditions. In motivating our methodology, we derive a number of new criteria for monitoring the progress and dictating the termination of multi-fidelity Bayesian optimization. Our approach is extensible to a large number of simulation-based design problems, and our industrially relevant application is among the largest presented in multi-fidelity Bayesian optimization literature in terms of the number of independent fidelities as well as decision variables. The optimal reactor geometry is 3D printed and experimentally validated using associated optimal operating conditions.
The rest of this article is structured as follows. Section 2 provides background around approaches for multi-fidelity Bayesian optimization. Section 3 outlines our methodology, starting with the problem setting and including details about simulation properties, reactor parameterization and fidelities. We
then detail our specific approach to the multi-fidelity Bayesian optimization of simulated chemical reactors. Section 4 presents our results including experimental validation via 3D printing. Finally, Section 5 outlines our conclusions and future work.
## 2 Background
### Notation
For the rest of this work, we apply the following notation. \(\mathbf{x}\in\mathcal{X}\subseteq\mathbb{R}^{n}\) are decision variables, or inputs, where \(\mathcal{X}\) represents the region of feasible inputs. Similarly, \(\mathbf{z}\in\mathcal{Z}\subseteq\mathbb{R}^{m}\) are fidelity parameters, defined within the continuous or discrete set \(\mathcal{Z}\); \(m\) may potentially be greater than one, and we place no restriction on the number of continuous/discrete fidelities available 3. To make this distinction clearer, when \(\mathbf{z}\in\mathbb{R}^{1}\), it is denoted in 'normal' type, \(z\). In the case that \(m>1\), \(\mathcal{Z}\) is a non-ordered set. Therefore, we denote \(\mathbf{z}_{\bullet}\) as the component-wise vector of highest fidelities, following the convention of Kandasamy et al. [21]. We denote the function to be optimized as \(f^{*}\), that takes arguments \(\mathbf{x}\) and \(\mathbf{z}\) and returns an objective value \(y\) with associated computational cost \(c\). The set of previously evaluated inputs, fidelities, objective values, corresponding to the total number of data \(D\) available at a given iteration \(t\) is denoted as \(\mathcal{D}_{t}:=\{(\mathbf{x}_{i},\mathbf{z}_{i},y_{i},c_{i})\}_{i=1}^{D}\). A model of objective \(f\) at iteration \(t\) is denoted \(\hat{f}_{t}\), and a model of cost at iteration \(t\) is denoted \(\lambda_{t}\). Given that in this work \(\lambda\) and \(\hat{f}\) will be modeled using Gaussian processes, the mean and standard deviation are denoted as \(\mu\) and \(\sigma\), respectively, and are indexed by their respective model. For example, the mean of the posterior Gaussian process modeling \(f\) is denoted \(\mu_{f_{t}}\).
Footnote 3: Using the traditional naming convention of ‘multi-fidelity’, in the case that \(m>1\), approaches could feasibly be referred to as ‘multi-multi-fidelity’ though here we maintain the traditional convention.
### Gaussian Processes
A Gaussian process is an infinite-dimension generalization of a multi-variate Gaussian distribution [53]. The mean vector and covariance matrix are replaced by mean and kernel functions, respectively. A Gaussian process can be described as
\[f(x)\sim\mathcal{GP}(m(\mathbf{x}),k(\mathbf{x},\mathbf{x}^{{}^{\prime}})).\]
The kernel function \(k\) dictates the behavior of functions from this distribution, and can be parameterized by hyper-parameters including the length scale. By conditioning a Gaussian process on a dataset of previously evaluated inputs \(\mathbf{X}_{*}\) with corresponding function values \(\mathbf{y}\), a posterior distribution of functions can be obtained. At query inputs \(\mathbf{X}\), the posterior predictive mean and covariance become
\[\mu_{f}(\mathbf{X}) =K(\mathbf{X},\mathbf{X}_{*})K(\mathbf{X}_{*},\mathbf{X}_{*})^{-1}\mathbf{y}\] \[\sigma_{f}^{2}(\mathbf{X}) =K(\mathbf{X},\mathbf{X})-K(\mathbf{X},\mathbf{X}_{*})K(\mathbf{X}_{*},\mathbf{X}_{*})^{-1}K(\mathbf{X}_{*},\mathbf{X})\]
where \(K\) denotes a covariance matrix derived from the kernel function \(k\). The ability to derive analytical probabilistic predictions makes Gaussian processes an attractive modeling framework.
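As an illustration only (not part of the DARTS implementation), the posterior equations above can be computed in a few lines of numpy; the squared-exponential kernel, fixed hyper-parameters, and direct matrix inversion are simplifying assumptions made purely for the sketch:

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel: k(a, b) = variance * exp(-||a - b||^2 / (2 * lengthscale^2))
    sq_dists = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
    return variance * np.exp(-0.5 * sq_dists / lengthscale**2)

def gp_posterior(X_train, y_train, X_query, noise=1e-8):
    # Posterior mean and standard deviation of a zero-mean GP conditioned on (X_train, y_train)
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_query, X_train)
    K_ss = rbf_kernel(X_query, X_query)
    K_inv = np.linalg.inv(K)
    mu = K_s @ K_inv @ y_train
    cov = K_ss - K_s @ K_inv @ K_s.T
    return mu, np.sqrt(np.clip(np.diag(cov), 0.0, None))
```

In practice a Cholesky factorization and maximum-likelihood fitting of the kernel hyper-parameters would replace the explicit inverse and the fixed length scale used here.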
### Bayesian Optimization
Bayesian optimization is a model-based approach for the solution of expensive black-box optimization problems. The black-box optimization problem is stated as
\[\mathbf{x}^{*}=\operatorname*{argmax}_{\mathbf{x}\in\mathcal{X}}\ f(\mathbf{x }), \tag{1}\]
where the black-box function \(f\) is evaluated with decision variables \(\mathbf{x}\), and bounded by the set \(\mathcal{X}\).
A number of approaches can be applied in order to solve Eq.1. These include direct methods, which solely rely on function evaluations to inform which value of \(\mathbf{x}\in\mathcal{X}\) is selected for evaluation next [23]. Alternatively, model-based methods leverage previous function evaluations to learn a model, \(\hat{f}\), of \(f\). Given that \(\hat{f}\) is often cheaper to evaluate than \(f\), and gradient evaluations of \(\hat{f}\) are available by design, \(\hat{f}\) can be optimized tractably and used to inform the selection of the next point. Model-based methods are known to be more efficient on a function-evaluation basis than direct methods,
and are applied when \(f\) is computationally expensive and a large amount of information is required from each evaluation [23; 51; 52; 2].
Within Bayesian optimization, a Gaussian process is trained as the surrogate model of \(f\) from an initial data set of potential solutions [13]. The selection of the next point to be evaluated is then made based on a combination of the expected value, and the predicted variance of \(f\), as a Gaussian process provides a posterior probability distribution of function values. Equation 2 demonstrates the Upper Confidence Bound (UCB) criterion for selecting the next sampled decision variable \(x_{t+1}\):
\[\mathbf{x}_{t+1}=\operatorname*{argmax}_{\mathbf{x}\in\mathcal{X}}\ \mu_{\hat{f}_{t}}(\mathbf{x})+\beta^{1/2}\sigma_{\hat{f}_{t}}(\mathbf{x}), \tag{2}\]
where \(\beta^{1/2}\) is a hyper-parameter controlling the trade-off between exploration (\(\sigma\)) and exploitation (\(\mu\)). Other acquisition functions have been proposed and applied including probability of improvement [3], expected improvement [47], and entropy search [18]. Garnett [13] provides an excellent overview into alternative Bayesian optimization approaches.
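As a small illustration of Eq. 2 (again a sketch rather than the authors' implementation, reusing the `gp_posterior` helper assumed above), the acquisition can be maximized approximately over randomly sampled candidates within the box bounds:

```python
import numpy as np

def ucb_acquisition(X_candidates, X_train, y_train, beta=2.5):
    # Upper Confidence Bound (Eq. 2): mu(x) + beta^{1/2} * sigma(x)
    mu, sigma = gp_posterior(X_train, y_train, X_candidates)
    return mu + np.sqrt(beta) * sigma

def suggest_next(X_train, y_train, lower, upper, n_candidates=2048, seed=0):
    # Approximate the argmax of the acquisition with random candidates inside the bounds
    rng = np.random.default_rng(seed)
    X_candidates = rng.uniform(lower, upper, size=(n_candidates, len(lower)))
    scores = ucb_acquisition(X_candidates, X_train, y_train)
    return X_candidates[np.argmax(scores)]
```

A gradient-based or multi-start optimizer over the acquisition surface would typically replace the random-candidate search used here.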
Recently, domain-specific adaptations to Bayesian optimization have been developed, leading to improved performance in specific applications [34; 14; 12]. For example, alternative formulations have been proposed to facilitate the setting where function evaluations can be made in parallel resulting in larger information gain and potentially faster convergence [14; 3; 35]. Other domain-specific adaptations to Bayesian optimization include approaches for scenarios where there is an additional cost incurred for evaluating decision variables that are 'far' from previous evaluations [12], various approaches for high-dimensional [9], and multi-objective [6] Bayesian optimization have also been developed.
### Multi-fidelity Bayesian Optimization
Multi-fidelity approaches take advantage of one or more cheaper to evaluate, but potentially biased and more uncertain, function evaluations. Correctly leveraging this information can lead to accelerating design and optimization. In a multi-fidelity context, the black-box optimization problem presented in Eq.1 becomes
\[\mathbf{x}^{*}=\operatorname*{argmax}_{\mathbf{x}\in\mathcal{X}}\ f(\mathbf{x },\mathbf{z}_{\bullet}), \tag{3}\]
where fidelity parameters \(\mathbf{z}\) may influence both the accuracy and cost, computational or otherwise, of a function evaluation. Equation 3 highlights the emphasis on optimizing for the highest fidelity \(\mathbf{z}_{\bullet}\), given this is the system considered'most true'. Multi-fidelity Bayesian optimization algorithms to solve Eq.3 have been developed and applied across a number of domains [21; 17; 48; 11; 44; 39; 54; 1]. These approaches all have a common characteristic in that both \(\mathbf{x}\) and \(\mathbf{z}\) need to be selected at each iteration. How these decisions are made, with respect to the trade off between information gained, and expense incurred, are key aspects of multi-fidelity Bayesian optimization algorithms.
Savage et al. [44] apply an algorithm in a setting where a single discrete fidelity parameter dictates the cost and information trade-off, applying a deep Gaussian process (DGP) to model the objective at each discrete fidelity. The DGP formulation, and other multi-fidelity Bayesian optimization formulations that rely on sequential multi-fidelity models are inherently limited to discrete fidelities. Likewise, due to modeling limitations, multiple independent fidelities cannot be considered. Savage et al. [44] make the recommendation that when using a discrete multi-fidelity Bayesian optimization method, the number of fidelities should be kept to 2 or 3. Increasing the number of fidelities available to the algorithm may result in a more complex and difficult to train multi-fidelity model, with the potential for a less accurate description of the design space. In addition, simulations have to be performed at every discrete fidelity in order to gain an initial data set for optimization. As the number of discrete fidelities increases more initial simulations have to be performed, reducing the efficiency of the overall algorithm.
Kandasamy et al. [21] assume that both inputs and fidelities are smoothly varying in a continuous space, modeling both within a joint space \(\mathcal{Z}\times\mathcal{X}\) using a single Gaussian process. As a result, the modeling approach serves as a continuous model, or continuous approximation, of \(\mathcal{Z}\), not limiting the setting to discrete fidelities. By modeling multi-fidelity data within a joint space, fidelities are able to influence the prediction of the objective at other fidelities. A two-step approach is applied to select \(\mathbf{x}_{t+1}\) and \(\mathbf{z}_{t+1}\). First, decision variables are selected by performing a standard UCB step, conditioned at the highest fidelity. To select \(\mathbf{z}_{t+1}\), the authors create a set of candidate fidelities \(\mathcal{Z}_{t+1}\).
From this set, the lowest cost fidelity is selected as \(\mathbf{z}_{t+1}\). This approach can optimize functions parameterized by an arbitrary number of continuous fidelities. However, as the number of fidelities increases the selection of candidate fidelities \(\mathcal{Z}_{t+1}\) becomes more difficult as the size of \(\mathcal{Z}\) grows exponentially. Additionally, the approach assumes that the cost of a simulation at a given fidelity, \(\lambda(\mathbf{z})\) is independent of \(\mathbf{x}\).
He et al. [17] assume that the cost of a function evaluation is both a function of \(\mathbf{x}\) and \(\mathbf{z}\), presenting the case study of a finite-element thermal and fluid dynamics model of alloy casting with each simulation taking approximately 10 minutes to complete. To solve the multi-fidelity optimization problem, the authors present an augmented acquisition function, weighted by the cost of a simulation. However, all case studies considered are 2, 3, or 4 dimensional, and each case study consists only of a single discrete fidelity parameter. Similarly, in obtaining results the authors relax the assumption that \(\mathbf{x}\) influences \(\lambda\). Thodoroff et al. [49] present a cost-adjusted criterion to select both \(\mathbf{x}_{t}\) and \(\mathbf{z}_{t}\) in a single step, in the context of ice-sheet simulation experimental design. \(\mathbf{x}\) and \(\mathbf{z}\) are modeled in a joint space by a single Gaussian process \(\hat{f}(\mathbf{x},\mathbf{z})\). Simulation cost is a separately modeled function of both \(\mathbf{x}\) and \(\mathbf{z}\), with \(\mathbf{x}_{t}\) and \(\mathbf{z}_{t}\) chosen in a single step. Experimental design criteria are modified to weight expected information gain by simulation cost, and both decision variables and fidelity are selected within the same step, necessary if cost \(\lambda\) is a function of both \(\mathbf{x}\) and \(\mathbf{z}\). Serani et al. [45] apply multi-fidelity optimization within the CFD domain to benchmark problems including airfoil and boat hull design. Huang et al. [19] applied a Bayesian optimization-based technique to a sequentially-trained multi-fidelity model to optimize a metal-forming finite-element simulation. March et al. [27] perform multi-fidelity optimization of a typical airfoil design problem using a trust-region approach. Mansour et al. [26] optimize the geometry of a coiled tube reactor without pulsed-flow conditions using a genetic algorithm approach. The work assumes a single simulation fidelity, with a result taking over two weeks to produce.
In this article, we build upon previous work for the optimal design and operation of simulated chemical reactors. Our contributions are three-fold. Firstly, we present a novel framework (DARTS) for the multi-fidelity Bayesian optimization of expensive reactor and tube simulations. Our methodology chooses a simulation fidelity at each iteration based on a cost-adjusted estimated information density, with the overall aim to optimize for the highest fidelity given a time budget. In addition, it utilizes a novel stopping criterion, ensuring that a high-fidelity solution is returned. Secondly, we apply our framework to the industrially relevant problem of simultaneously optimizing the geometry and operating conditions of a simulated pulsed-flow helical-tube reactor. Unlike the majority of approaches, our framework allows for the cost of a simulation to be modeled as a function of both simulation fidelities and decision variables. We also account for two independent fidelities, the axial and radial cell count of a simulation, both of which are treated as jointly continuous. Thirdly, we validate assumptions made within the methodology using experimental data. The optimal geometry is 3D printed and evaluated using associated optimal pulsed-flow operating conditions, validating both the performance of the framework as well as our multi-fidelity model. Our approach is extensible to a large number of potential design and optimization problems, ranging from microscale to industrial reactors. We make all the code available to facilitate reproducibility.
## 3 Method
### Parameterization
A helical-tube reactor is parameterized by a coil radius, coil pitch, and inversion. Coil pitch, denoted by \(\phi\), controls how extended the helical tube is; coil radius, denoted by \(\rho\), controls how tight the coils are within the helical tube; and the inversion parameter, denoted by \(\delta\), controls the change in coil direction. Inversions within helical-tube reactors have been shown to provide effective mixing properties by McDonough et al. [31], Rossi et al. [40], Singh and Nigam [46]; \(\delta\) takes a value between 0 and 1, and specifies where along the coil the inversion takes place. The length of the coil is maintained as fixed, resulting in all parameterized coils having the same volume. Within the parameterization, we include a fixed-length inlet and outlet to the coil. The inlet and outlet are horizontal, and a smooth interpolation is used to ensure that the transitions from inlet to coil and from coil to outlet are smooth. Figure 1 demonstrates the parameterization we apply within this work.
The reactor inlet flow is at a Reynolds number of 50 for which relatively insignificant mixing is expected to take place. A superimposition of oscillatory velocity is, therefore, needed to operate
under a wide range of plug flow conditions [31]. This oscillatory velocity \(v_{o}\) is achieved through parameters representing oscillation amplitude \(a\) and frequency \(f\). Oscillatory velocity is defined as
\[v_{o}=2\pi fa\sin(2\pi ft).\]
We vary oscillatory parameters along with the design parameters within the optimization procedure.
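For concreteness, the oscillatory component above is straightforward to evaluate; the following illustrative snippet (the time grid and parameter values are only examples, chosen to match the pulsed-flow conditions reported as optimal later in the paper) evaluates the superimposed velocity over one second:

```python
import numpy as np

def oscillatory_velocity(t, f, a):
    # v_o(t) = 2*pi*f*a*sin(2*pi*f*t); f is the oscillation frequency [Hz], a the amplitude [m]
    return 2.0 * np.pi * f * a * np.sin(2.0 * np.pi * f * t)

# Example: amplitude a = 1 mm, frequency f = 2 Hz
t = np.linspace(0.0, 1.0, 500)
v = oscillatory_velocity(t, f=2.0, a=1e-3)   # peak velocity 2*pi*f*a ~= 0.0126 m/s
```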
### Fidelities
In addition to geometric design and oscillatory parameters, the output of a simulation is also influenced by one or more fidelities \(\mathbf{z}\). A typical fidelity used within CFD simulations is the number of discrete finite elements that are contained within the mesh, known as cell count. Fluid behavior within pulsed-flow helical-tube reactors is complex due to the transient formation and unravelling vortex structures affecting axial and radial mixing. Therefore, it is important to capture simulation resolution both axially, through the length of the tube, as well as radially throughout the cross-section of the tube for a high-quality simulation. In this work, as opposed to applying cell count as a single scalar fidelity parameter, we instead maintain the ability to adapt the resolution of a simulation both axially and radially resulting in two independent fidelities. We implement a custom meshing procedure that allows for axial and radial fidelity to be varied independently. The simulation that serves as the highest fidelity \(\mathbf{z_{\bullet}}\) corresponds to when both the axial and radial fidelities are greatest. Whilst both axial and radial fidelities are discrete due to their meaning within a finite element context, we treat them as continuous parameters. During mesh creation values of both fidelities are rounded to the nearest integer. Within the solution to CFD problems, a number of solver options may be considered as valid fidelities. However, as many of these take on non-continuous or Boolean values, we leave their integration for future work.
Figure 2 demonstrates the final mesh affected by axial fidelity, as well as providing an outline of how meshing is performed. Increased mesh density near the inlet and outlet wall is induced throughout all radial fidelity values to reduce numerical diffusion errors that can compromise the accuracy of the CFD solution. Figure 3 demonstrates how the final mesh is affected by radial fidelity, as well as providing an outline of the meshing procedure through the creation of a block-structured mesh. We refine the mesh near the walls to better capture the boundary layer and thereby velocity gradients affecting flow characteristics.
### Simulation
To perform an evaluation of a given reactor mesh and set of operating conditions, a simulation is performed using the open-source code OpenFOAM [20]. An impulse tracer is injected as a scalar field at the inlet of the reactor until \(t=0.15\) s into water as the medium. The concentration of the tracer is tracked by solving a convection-diffusion equation through scalarTransportFoam. Here, the diffusion coefficient is set as a constant with a small value of \(1\times 10^{-10}\ m^{2}/s\), as diffusion in liquids is slow and transport is dominated by convection. The pressure-velocity coupled, transient pimpleFoam solver is used for solving the unsteady momentum equations as time-dependent oscillatory velocities are introduced. The pimpleFoam solver is integrated with scalarTransportFoam through 'Solver function Objects'. The convection flux on the computational cells was calculated using second-order discretization schemes to ensure the numerical accuracy of the solution. The groovyBC boundary condition is used for imposing the oscillatory velocity through the swak4Foam library [16]. This oscillatory velocity, along with the steady velocity, was initialized at the inlet as a Hagen-Poiseuille parabolic velocity profile to cut down on the coil length needed for flow development and the computational cost. Additionally, we terminate the solution by monitoring the tracer concentration at the outlet; the result is processed when the tracer concentration drops below \(1\times 10^{-7}\) for 10 consecutive iterations. This variable, output-based early-stopping criterion accelerates the optimization procedure, unlike other studies where a fixed termination after a set number of iterations is enforced [26], but introduces another time dependence on the simulation cost. This OpenFOAM solver is integrated with the optimization algorithm via the PyFoam Python library.
Figure 1: _Left_: A side-view of how coil radius, pitch, and the inversion parameter affect a given coil, with an additional horizontal inlet and outlet. _Right_: When parameterizing a coil with horizontal inlet and outlet, we include a smooth transition based on a quadratic interpolation of points.
The output from a simulation returned from PyFOAM is a set of concentration values and respective times at the outlet of the reactor. This represents the residence time distribution of the reactor. To convert this distribution to a single optimization objective, the distribution is transformed to an equivalent number of tanks-in-series, \(N\). First, the time and concentration values are converted to dimensionless quantities, \(\theta\) and \(E(\theta)\), respectively. Equation 4 shows how \(E(\theta)\) can be represented as a function of \(\theta\) and \(N\)[31]:
\[E(\theta)=\frac{N(N\theta)^{N-1}}{(N-1)!}e^{-N\theta}. \tag{4}\]
A fitting procedure is performed to obtain a value of \(N\) from \(E(\theta)\) and \(\theta\). This may be done based on the least-squares error (\(L_{2}\)-norm); however, we find that, due to non-idealities and the assumptions made to derive Eq. 4, using the difference between the maximum predicted value and the maximum value returned from the simulation results in a more robust fit. The estimated number of tanks in series, \(N^{*}\), is therefore calculated as
\[N^{*}=\arg\min_{N}\ \left|\max\left[E(\theta)\right]-\max\left[\frac{N(N \theta)^{N-1}}{(N-1)!}e^{-N\theta}\right]\right|. \tag{5}\]
Figure 3: A cross-section of the mesh radially. First, a coarse finite element topology is created. Subsequently, the number of cells near the tube wall are increased in order to better capture the effects of fluid flow near the boundary layer. Finally the mesh is further subdivided based on the radial fidelity value.
Figure 2: A side-view of the meshing procedure of a coil based on axial fidelity. We first define a central path based on geometric parameters. Subsequently, circles (tube cross-sections) are defined along this path the number of which depends on the axial fidelity. Finally, the mesh density near the inlet and outlet walls is increased by raising the number of circles to track the tracer concentration with higher accuracy.
Figure 4 demonstrates \(E(\theta)\) against \(\theta\) as derived from fitting values of \(N\) using Eq. 5, for two sets of experimental data.
The case file used can be found within the code for this article. Though the current case considers the flow with two non-reactive, miscible liquids with identical fluid physical properties, this model can be extended to problems with fluids of differing physical properties modeled as multiphase flows or with reactions where chemical-kinetics can be coupled to CFD simulations in the future.
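As an illustrative sketch (not the authors' code), the peak-matching fit of Eqs. 4–5 can be implemented with a coarse grid search over \(N\); the grid range and the use of scipy's gamma function to evaluate \((N-1)!\) for non-integer \(N\) are assumptions made for the example, and the dimensionless time grid is assumed to cover the distribution peak near \(\theta\approx 1\):

```python
import numpy as np
from scipy.special import gamma

def ideal_rtd(theta, N):
    # Ideal tanks-in-series residence time distribution (Eq. 4); (N-1)! is evaluated as Gamma(N)
    return N * (N * theta) ** (N - 1) / gamma(N) * np.exp(-N * theta)

def fit_tanks_in_series(theta, E_theta, N_grid=None):
    # Peak-matching fit (Eq. 5): choose N whose ideal RTD peak is closest to the simulated peak
    if N_grid is None:
        N_grid = np.linspace(1.5, 200.0, 2000)
    simulated_peak = np.max(E_theta)
    errors = [abs(simulated_peak - np.max(ideal_rtd(theta, N))) for N in N_grid]
    return N_grid[int(np.argmin(errors))]
```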
### Multi-fidelity Bayesian Optimization
In addition to dependence on simulation fidelity, the cost of a simulation is also dependent on \(\mathbf{x}\) in chemical reactor simulation domains. Variable step-size CFD solvers, along with early stopping criteria both contribute to how \(\mathbf{x}\) can influence simulation cost, particularly in cases where operating conditions are optimized over. Therefore, the two-step approach presented by Kandasamy et al. [21] cannot be applied, where \(\mathbf{x}\) is selected first, and then \(\mathbf{z}\), as the second-step where information and cost is traded off becomes dependent on \(\mathbf{x}\). Therefore, we follow an approach most similar to He et al. [17] and Thodoroff et al. [49]; by utilizing a cost-adjusted acquisition function to trade off information gain and cost in a single step. To do so, the optimization objectives \(\mathbf{y}\) and cost of simulations \(\mathbf{c}\) are modeled by two separate Gaussian processes, \(\hat{f}\) and \(\lambda\), respectively. The cost-adjusted acquisition function is as follows:
\[\mathbf{x}_{t+1},\mathbf{z}_{t+1}=\operatorname*{argmax}_{(\mathbf{x}, \mathbf{z})\in\mathcal{X}\times\mathcal{Z}}\ \frac{\mu_{\hat{f}_{t}}(\mathbf{x},\mathbf{z}_{\bullet})+\beta^{1/2}\sigma_{ \hat{f}_{t}}(\mathbf{x},\mathbf{z}_{\bullet})}{\gamma\mu_{\lambda_{t}}( \mathbf{x},\mathbf{z})\sqrt{1-k((\mathbf{x},\mathbf{z}),(\mathbf{x},\mathbf{z} _{\bullet}))^{2}}}. \tag{6}\]
The numerator on the right-hand side represents expected information gained at the highest fidelity since this is the system of interest (\(\mathbf{z}=\mathbf{z}_{\bullet}\)). The denominator then weights this expected information by the predicted cost of that simulation, enabling lower fidelity solutions to be selected. The term \(\sqrt{1-k((\mathbf{x},\mathbf{z}),(\mathbf{x},\mathbf{z}_{\bullet}))^{2}}\) quantifies how much information is lost when evaluating at a lower fidelity. The parameter \(\gamma\) weights how attractive cheaper solutions are against the information they provide, and \(\beta^{1/2}\) weights the exploration-exploitation trade-off as is standard in UCB Bayesian optimization. However, with a cost-adjusted acquisition function, there is no guarantee that a high-fidelity simulation will be selected for evaluation. We address this in the following subsection.
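A minimal sketch of Eq. 6 is given below, assuming two fitted surrogate objects `f_hat` and `lam` that expose a `predict` method returning posterior mean and standard deviation arrays, a normalized `kernel` function, and 1-D numpy arrays for `x`, `z`, and `z_best`; these names and interfaces are assumptions rather than the DARTS implementation, while \(\gamma=1.5\) and \(\beta=2.5\) follow the values used in the results section:

```python
import numpy as np

def cost_adjusted_acquisition(x, z, z_best, f_hat, lam, kernel, beta=2.5, gamma_w=1.5):
    xz = np.concatenate([x, z])[None, :]
    xz_best = np.concatenate([x, z_best])[None, :]

    # Numerator of Eq. 6: UCB of the objective evaluated at the highest fidelity z_best
    mu_f, sigma_f = f_hat.predict(xz_best)
    ucb_high = mu_f[0] + np.sqrt(beta) * sigma_f[0]

    # Denominator: predicted cost of simulating at (x, z), scaled by the information lost
    # relative to z_best; the kernel is assumed normalized so that k(., .) lies in [0, 1]
    mu_cost, _ = lam.predict(xz)
    corr = kernel(xz, xz_best)[0, 0]
    info_loss = np.sqrt(max(1.0 - corr**2, 1e-12))
    return ucb_high / (gamma_w * mu_cost[0] * info_loss)
```

In use, this score would be maximized jointly over candidate \((\mathbf{x},\mathbf{z})\) pairs at each iteration, exactly as Eq. 6 prescribes.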
### Stopping criteria
In traditional Bayesian optimization, function evaluations are terminated when the computational budget is exhausted, and the evaluated data point with the highest objective is selected as a final solution. In a multi-fidelity framework, this criterion contains subtle nuances. As the underlying function of interest is \(f(\mathbf{x},\mathbf{z}_{\bullet})\), the optimal solution returned should be the highest evaluated function-value where \(\mathbf{z}=\mathbf{z}_{\bullet}\). Throughout optimization \(f(\mathbf{x},\mathbf{z}_{\bullet})\) is maximized through lower fidelity simulations, with no guarantee that simulations at the highest fidelity will be selected to be evaluated. To mitigate this detail in their two-step approach Kandasamy et al. [21] includes criteria to ensure that in some scenarios \(\mathbf{z}_{\bullet}\) is selected in favor of a lower fidelity. It follows that when
Figure 4: Experimental residence time distribution against the residence time distribution modeled using the ideal tank-in-series relation, given by Eq. 4, with a value of equivalent tanks in series derived from Eq. 5
\(f(\mathbf{x},\mathbf{z_{\bullet}})\) has been optimized 'as much as possible' and the most information has been gained, _but before all computational budget has been exhausted_, a function evaluation of the highest fidelity needs to take place, to ensure a valid final solution is returned. Therefore, we derive the notion of a multi-fidelity stopping criteria for Bayesian optimization. The three key criteria for the return of a final highest-fidelity solution are as follows:
1. This solution need-not be generated by trading off mean and variance (as within a standard acquisition function), and therefore can be performed greedily given that it will be the final evaluation.
2. This solution should be the last solution selected, taking advantage of the maximum amount of information.
3. There must be enough time remaining to evaluate this solution.
Depending on the inclination of the decision-maker, the third criterion may be relaxed and a final solution may be returned without having been evaluated. However, we assume that a valid solution must have been evaluated on the true, high-fidelity function for which \(\mathbf{z}=\mathbf{z_{\bullet}}\). At each iteration, this greedy, potentially final, solution is obtained by solving the following equation given by
\[\mathbf{x}_{g}=\arg\max_{\mathbf{x}\in\mathcal{X}}\;\mu_{f_{t}}(\mathbf{x}, \mathbf{z_{\bullet}}). \tag{7}\]
where \(\mathbf{x}_{g}\) is a solution that fulfills the first criterion for a final solution. In order to ensure that the second and third criteria are adhered to, the cost of evaluating \(\mathbf{x}_{g}\), and the cost of evaluating \(\mathbf{x}_{t+1}\) are compared with the time or computational budget remaining. As the cost of evaluating a simulation is modeled using a Gaussian process, we apply both the mean and the standard deviation resulting in a probabilistic upper-bound of each evaluation cost. Equation 8 details the maximum time required to evaluate both the next explorative solution as well as the greedy high-fidelity solution
\[c_{\max}=\underbrace{\mu_{\lambda_{t}}(\mathbf{x}_{t+1},\mathbf{z}_{t+1})+p_{ \lambda}\sigma_{\lambda_{t}}(\mathbf{x}_{t+1},\mathbf{z}_{t+1})}_{\text{ Predicted maximum cost of}\atop\text{next `standard evaluation}}+\overbrace{\mu_{\lambda_{t}}(\mathbf{x}_{g},\mathbf{z_{\bullet}})+p_{ \lambda}\sigma_{\lambda_{t}}(\mathbf{x}_{g},\mathbf{z_{\bullet}})}^{\text{ Predicted maximum cost of}\atop\text{high-fidelity' execution}} \tag{8}\]
where \(p_{\lambda}\) is a parameter weighting the standard deviation, providing the ability to be more conservative and ensure greater probability \(\mathbf{x}_{g}\) can be evaluated. If the value of \(c_{\max}\) is greater than the remaining evaluation budget, then \((\mathbf{x}_{g},\mathbf{z_{\bullet}})\) should be evaluated as a final solution. Otherwise \((\mathbf{x}_{t+1},\mathbf{z}_{t+1})\) should be evaluated as usual, as it is highly probable that there is enough time remaining for both a standard evaluation as well as a greedy evaluation.
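Under the same assumed `predict` interface for the cost model `lam`, the budget check of Eq. 8 reduces to a few lines; the value \(p_{\lambda}=2\) matches the choice reported in the results, and the helper name is illustrative only:

```python
import numpy as np

def should_stop(x_next, z_next, x_greedy, z_best, lam, budget_remaining, p_lambda=2.0):
    # Eq. 8: probabilistic upper bound on the cost of one more 'standard' evaluation plus
    # the final greedy evaluation at the highest fidelity; stop if it exceeds the budget
    mu_n, sd_n = lam.predict(np.concatenate([x_next, z_next])[None, :])
    mu_g, sd_g = lam.predict(np.concatenate([x_greedy, z_best])[None, :])
    c_max = (mu_n[0] + p_lambda * sd_n[0]) + (mu_g[0] + p_lambda * sd_g[0])
    return c_max > budget_remaining
```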
In addition, the objective value returned from the solution of Eq. 7, \(\mu_{\hat{f}}(\mathbf{x}_{g},\mathbf{z_{\bullet}})\), provides a proxy for monitoring the progress of multi-fidelity Bayesian optimization. As function values across different fidelities cannot be compared like-for-like, it enables practitioners to observe the progress of optimization. Algorithm 1 demonstrates our approach for multi-fidelity Bayesian optimization.
```
1: Initialization: evaluate an initial dataset D = {(x_i, z_i, y_i, c_i)} across fidelities
2: while budget remains: fit GPs f_hat and lambda; select (x_{t+1}, z_{t+1}) by maximizing Eq. 6; compute x_g from Eq. 7
3: if c_max from Eq. 8 exceeds the remaining budget: evaluate (x_g, z_best) and terminate; else evaluate (x_{t+1}, z_{t+1}) and update D
```
bounds ensure that tube self-intersections cannot occur, allowing the problem to remain unconstrained. Figures 5, 6, 7 demonstrate the effect of \(\delta\), \(\phi\), and \(\rho\) on the helical coil tube geometry, respectively. The mesh for a given set of parameters and fidelities is generated using a custom mesh generation scheme in Python using the classy_blocks library, available at [https://github.com/OptiMaL-PSE-Lab/pulsed-reactor-optimization/](https://github.com/OptiMaL-PSE-Lab/pulsed-reactor-optimization/).
The lowest axial and radial fidelity values are 20 and 1, and the corresponding highest fidelity values are 60 and 5, respectively. Figure 8 demonstrates the effect of both axial and radial fidelities on the final mesh, given a fixed set of design parameters.
### Model Validation
In order to gain insight into the effect of a simulation fidelity on the accuracy of \(f(\mathbf{x},\mathbf{z})\) for a fixed geometry and operating conditions, simulations with five discrete fidelities were performed
Figure 5: The effect of inversion parameter \(\delta\) for a helical coil tube with fixed length, top view. From left to right: \(\delta\) is evaluated at 0, 0.15, 0.3, 0.6, and 0.75 with coil radius and pitch remaining constant.
Figure 6: The effect of pitch \(\phi\) for a helical coil tube with a fixed length, side view. From left to right: \(\phi\) is evaluated from 7.5mm to 15mm with coil radius and inversion location remaining constant.
for fixed \(\mathbf{x}\) and compared with experimental data. In a multi-fidelity context, experimental measurements may themselves be considered the highest-fidelity evaluations available. However, we leave this extension for future work and maintain our assumption that the highest fidelity simulation is the function of interest. The set of fidelities validated are \(\{(z_{\text{axial}}=20,z_{\text{radial}}=1),(z_{\text{axial}}=30,z_{\text{radial }}=2),(z_{\text{axial}}=40,z_{\text{radial}}=3),(z_{\text{axial}}=50,z_{\text{radial }}=4),(z_{\text{axial}}=60,z_{\text{radial}}=5)\}\). Figure 9 demonstrates the tracer concentration profile of simulations using these five selected fidelity combinations against two sets of experimental data. The experimental data were generated from a 3D-printed reactor with a length of 75.3mm, coil radius of 12mm, coil pitch of 10mm, and no inversion. Pulsed-flow operating conditions were induced with a frequency of 5 Hz, the Reynolds number was equal to 50, and the two experiments differed by applying a flow amplitude of 2mm and 4mm. An experimental value of equivalent tanks-in-series, denoted by \(\hat{N}\), was calculated for both experiments from each resulting residence-time distribution.
Figure 9 also demonstrates how increasing overall fidelity (and therefore cell count) results in a closer approximation to the experimental value of \(N\), derived from each concentration profile. For both experiments, the nature with which \(N\) approaches \(\hat{N}\) when the overall fidelity is increased implies that \(f(\cdot,\mathbf{z})\) is smoothly varying. The information that fidelity smoothly influences \(f\) validates the assumption that \(\mathbf{z}\in\mathcal{Z}\) varies smoothly and continuously enabling a Gaussian process to be applied as a multi-fidelity model.
Figure 8: An instance of helical-tube reactor geometry as affected by axial and radial fidelity.
Figure 7: The effect of coil radius \(\rho\) for a helical coil tube with a fixed length, side view. From left to right: \(\rho\) is evaluated from 3mm to 12.5mm with inversion location and pitch remaining constant.
Figure 9: Validation of five discrete mesh fidelities corresponding to different cell counts, across two sets of experimental data under different conditions.
According to the first steps of Algorithm 1, we generate an initial data set of solutions \(\mathcal{D}=\{(\mathbf{x}_{i},\mathbf{z}_{i},y_{i},c_{i})\}_{i=1}^{D}\). To do so, we perform a Latin hypercube design of experiments scheme [32] to maximize the initial information gained, with 25 solutions (\(D=25\)). This initial number represents a hyper-parameter. How best to distribute initial samples across variables and fidelities within multi-fidelity Bayesian optimization, and what constitutes a reasonable number of samples, is yet to be investigated, and we leave this for future investigation. We normalise all inputs and outputs to have zero mean and a standard deviation of unity at the beginning of each iteration. Before solutions are evaluated, we re-scale inputs back to their physical quantities. Initially, we set the hyper-parameters of Eq. 6 as \(\gamma=1.5\) and \(\beta=2.5\), and in Eq. 8 we set \(p_{\lambda}=2\) resulting in a maximum cost prediction with 95% confidence. The maximum computational budget we specify is 64 hours, not including the budget spent sampling initial solutions, though this may be included. Figure 10 shows the optimization progress throughout the 64 hours. The time to generate, and objective values of initial solutions generated are denoted by using negative iteration and wall-clock time values. Therefore, optimization itself is denoted as starting at iteration 0, and at a wall-clock time of 0.
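As an illustration of the initialization and scaling steps described above (the use of scipy's `qmc` module and the helper names are assumptions for the sketch, not the authors' implementation):

```python
import numpy as np
from scipy.stats import qmc

def initial_design(lower, upper, n_samples=25, seed=0):
    # Latin hypercube sample over the joint decision-variable / fidelity box
    sampler = qmc.LatinHypercube(d=len(lower), seed=seed)
    return qmc.scale(sampler.random(n=n_samples), lower, upper)

def standardize(A):
    # Zero mean and unit standard deviation per column, recomputed at each iteration
    mean, std = A.mean(axis=0), A.std(axis=0)
    std = np.where(std > 0.0, std, 1.0)
    return (A - mean) / std, mean, std
```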
Simulations performed to generate the initial data set of solutions are plotted, and are designated to end at iteration and time 0. When optimization begins, the search space can be seen to be explored through a number of lower-cost simulations. As the value of the objective function at different fidelities is assumed to be biased, the objective values cannot be compared like-for-like. However, Fig. 10 provides insight into the progression of the algorithm.
Figure 11 shows the combinations of axial and radial fidelity evaluated throughout optimization, alongside respective simulation costs. Generally, simulations performed with lower radial and axial fidelities have lower costs. However, as this relationship is also dependent on reactor geometry and operating conditions, some simulations have higher costs at lower fidelities. The framework tends to evaluate simulations at the lower half of axial fidelity at all five discrete radial fidelity values. This indicates that simulations with different axial fidelities are more correlated than simulations with differing radial fidelities. Less variance across the axial fidelity will result in the acquisition function favouring lower axial fidelity values, as information about the highest fidelity can be gained due to the higher correlation. The final high-fidelity solution can be seen at the component-wise highest-fidelity at the top-right of Fig 11. The quantities tracked throughout optimization in order to ensure this solution is returned are demonstrated in Fig 12.
The actual evaluated costs of each simulation generally fall beneath their predicted maximum cost, with the exception of a few simulations. As a result, the Gaussian process that models simulation cost can be considered accurate. When the time remaining falls beneath the maximum predicted time to perform a standard iteration as well as a greedy final iteration, the greedy high-fidelity
Figure 10: The number of equivalent tanks-in-series evaluated colored by the respective cost of simulation. The upper half of the figure shows these quantities against iteration and the lower half shows these quantities against wall-clock time, highlighting the importance of lower-cost simulations.
solution is evaluated and the algorithm terminates. The high-fidelity solution returned at the end of optimization is demonstrated in Figure 13. The optimal coil geometry has a pitch of 1.04cm, radius of 1.25cm, and an inversion that occurs 66% along the coil length. The associated optimal operating conditions are pulsed-flow with an amplitude of 1mm, frequency of 2 Hz and a Reynolds number of 50.
Figure 14 highlights the specific flow behavior within the optimal reactor throughout a single oscillatory cycle. Streamlines are coloured with tracer concentration to show the movement through the coil. For the positive part of the oscillatory cycle, the tracer moves in the forward direction in a streamlined manner towards the outlet. The important features start to occur in the negative part of the oscillation cycle where the secondary flow begins to emerge. Cross-sectional flow streamlines transitioning to Dean-type vortices due to the centrifugal forces at the coil turns are shown. These counter-rotating Dean vortices promote radial mixing of the tracer with the water medium and close to the walls of the computational domain; no tracer dead zones are left behind. Additionally, along with the reverse flow, the swirling motion developed during the negative oscillation cycle redirects the flow in the tangential direction which limits the axial dispersion of the tracer. This combination
Figure 11: The fidelities selected throughout optimization within \(\mathcal{Z}\). The upper plot demonstrates the average simulation cost at each fidelity. As we apply discrete fidelities which are continuously approximated, we present the lower plot which demonstrates the number of times each discrete fidelity is evaluated. Simulation cost is confirmed to be not only a function of fidelity, but \(\mathbf{x}\) as well, as there is no clear distribution of simulation costs across \(\mathcal{Z}\).
Figure 12: The maximum predicted time to complete a ‘standard’ iteration compared with the maximum predicted time to complete both a standard iteration as well as a greedy iteration, the actual simulation time, and the remaining time. As the blue line passes above the black line, a greedy iteration must be performed as there is not enough remaining budget to perform both a greedy and standard evaluation
of promoting radial mixing and inhibiting axial mixing results in high plug flow performance. This flow cycle is periodic in nature for the amplitude of 1 mm and frequency of 2 Hz, and the simulation is continued until the tracer leaves the computational domain.
In the optimal configuration shown in Figs. 13 and 14, the inversion just before the outlet helps in maintaining the Dean vortex structure until the flow leaves the domain. Whereas, in a standard configuration, the Dean vortices closer to the outlet could be disappearing gradually. In addition, the inversion helps in the release of a highly concentrated tracer that is trapped between the vortices to undergo further radial mixing. Therefore, enhanced radial mixing is achieved with this configuration due to the inversion, leading to an optimal plug flow performance.
### Solution Validation
The optimal reactor geometry was exported into an STL file format and modified into a 3D printable model by adding a bounding box to give the part volume. The straight sections at the inlet and outlet were given 8 mm and 10 mm OD tube fittings for connection to the experiment apparatus, and the bounding box was then trimmed to reduce the amount of resin needed for the print. The inlet region was also extended by 20 mm to provide additional development length to ensure the experiment matched as closely as possible the parabolic inlet velocity used in the simulations. The optimal geometry was printed on a FormLabs Form3+ with Clear V4 resin and the default settings. Post-processing involved washing in IPA for 20 min, drying for 24 hours, and post-curing in a FormCure at 60\({}^{\circ}\)C for 30 min. Figure 15 shows the raw STL file, modified geometry, and printed geometry.
We implemented the same RTD method reported by McDonough et al. [31]. In brief, RTDs were measured by injecting a 0.1 M KCl aqueous tracer solution into the geometry and measuring the conductivity over time at the outlet. The net flow of deionized water, oscillations, and tracer injection were controlled using three separate OEM syringe pumps (C3000, TriContinent) that were hydraulically linked to the reactor via PTFE tubing routed through a custom Swagelok piece (shown
Figure 14: Streamlines indicating tracer concentration within the optimal reactor throughout a single oscillatory pulse, lasting 0.5 seconds. As the oscillatory velocity changes from positive to negative, Dean’s vortices form providing mixing throughout the radial direction. The inversion within the coil also contributes to these vortices shifting across the radial direction, providing stronger mixing characteristics.
Figure 13: Optimal reactor geometry viewed from different perspectives. The optimal geometry contains an inversion.
in Figure 15c). To minimize the influence of poor mixing inside this Swagelok fitting, the tracer was injected at point C (Figure 15) after the additional 20 mm inlet straight section by routing the PTFE tubing through the fitting (see McDonough et al. [31] for further clarity).
Figure 16 demonstrates the predicted residence-time distribution alongside 3 sets of experimental data obtained from the 3D printed optimal reactor and operating conditions. We conjecture that the RTD method is another example of a lower-fidelity approximation to the desired mixing performance. First, the printed geometry is not perfectly smooth due to the well-known stair-stepping effect; this enhanced roughness (wall friction) will differ around the coil due to variations in the orientation. Second, the conductivity probe was located slightly downstream of the outlet face used in the simulations (point D in Figure 15c). For reference, in the simulation the tracer was injected at point A and measured at point B (Figure 15a). Nevertheless, the simulated optimal RTD closely matches the 'expected' distribution measured in the experiments across all three replicates, indicating good accuracy of the CFD simulations used to inform the optimization process. This provides confidence that the output of the optimization process is meaningful for a reactor geometry operating in a real-world setting.
## 5 Conclusions and Future Work
In this article, we have formulated the design of a helical-tube reactor as a multi-fidelity black-box optimization problem. We have demonstrated a general framework that takes advantage of different quality simulations to enhance the optimization of reactor simulations via a multi-fidelity Bayesian optimization. We have validated our framework by applying it to optimize a helical-tube reactor ge
Figure 16: The residence-time distribution predicted via CFD simulation of the optimal, high-fidelity solution returned from the framework, alongside 3 sets of experimental data obtained via the 3D printed reactor.
Figure 15: Experimental validation of optimal reactor configuration via additive manufacturing and operation under pulsed-flow operating conditions.
ometry and operating conditions. By motivating our framework with the specific goal of returning a high-fidelity solution, we have derived a new criterion for monitoring the progress and dictating the termination of multi-fidelity Bayesian optimization. We show that this criterion enables a high-quality final solution to be returned before the computational budget is exhausted. The optimal reactor geometry and operating conditions are then 3D printed and the performance is experimentally validated. Our approach is extensible to a large number of simulation-based design problems. Future work will apply the framework to investigate alternative reactor parameterizations and longer classes of reactors. In addition, different performance indicators will be optimized to investigate the differences between reactor parameterizations optimized for different quantities.
## Acknowledgements
The authors would like to acknowledge the funding provided by the Engineering & Physical Sciences Research Council, United Kingdom through the PREMIERE (EP/T000414/1). Tom Savage would like to acknowledge the support of the Imperial College President's scholarship, and Ilya Orson Sandoval for providing helpful discussion for this work.
|
2307.11064
|
Banach fixed point property for Steinberg groups over commutative rings
|
The main result of this paper is that all affine isometric actions of higher
rank Steinberg groups over commutative rings on uniformly convex Banach spaces
have a fixed point. We consider Steinberg groups over classical root systems
and our analysis covers almost all such Steinberg groups excluding a single
rank 2 case.
The proof of our main result stems from two independent results - a result
regarding relative fixed point properties of root subgroups of Steinberg groups
and a result regarding passing from relative fixed point properties to a
(global) fixed point property. The latter result is proven in the general
setting of groups graded by root systems and provides a far reaching
generalization of the work of Ershov, Jaikin-Zapirain and Kassabov who proved a
similar result regarding property (T) for such groups.
As an application of our main result, we give new constructions of
super-expanders.
|
Izhar Oppenheim
|
2023-07-20T17:44:15Z
|
http://arxiv.org/abs/2307.11064v1
|
# Banach fixed point property for Steinberg groups over commutative rings
###### Abstract.
The main result of this paper is that all affine isometric actions of higher rank Steinberg groups over commutative rings on uniformly convex Banach spaces have a fixed point. We consider Steinberg groups over classical root systems and our analysis covers almost all such Steinberg groups excluding a single rank 2 case.
The proof of our main result stems from two independent results - a result regarding relative fixed point properties of root subgroups of Steinberg groups and a result regarding passing from relative fixed point properties to a (global) fixed point property. The latter result is proven in the general setting of groups graded by root systems and provides a far reaching generalization of the work of Ershov, Jaikin-Zapirain and Kassabov who proved a similar result regarding property (T) for such groups.
As an application of our main result, we give new constructions of super-expanders.
Department of Mathematics, Ben-Gurion University of the Negev, Be'er Sheva 84105, Israel [email protected]
## 1. Introduction
Given a topological group \(\Gamma\) and a Banach space \(\mathbb{E}\), we say that \(\Gamma\) has property \((F_{\mathbb{E}})\) if every continuous affine isometric action of \(\Gamma\) on \(\mathbb{E}\) admits a fixed point. For a class of Banach spaces \(\mathcal{E}\), we will say that \(\Gamma\) has property \((F_{\mathcal{E}})\), if \(\Gamma\) has property \((F_{\mathbb{E}})\) for every \(\mathbb{E}\in\mathcal{E}\).
The most classical instantiation of property \((F_{\mathcal{E}})\) is property \((FH)\), which is property \((F_{\mathcal{E}})\) for \(\mathcal{E}\) the class of all real Hilbert spaces. For a \(\sigma\)-compact, locally compact group \(\Gamma\), the Delorme-Guichardet Theorem states that property (FH) is equivalent to Kazhdan's property (T).
In this work, we will consider property \((F_{\mathcal{E}_{uc}})\), where \(\mathcal{E}_{uc}\) is the class of all the uniformly convex Banach spaces. We recall that a Banach space \(\mathbb{E}\) is called _uniformly convex_ if there is a function \(\delta:(0,2]\to(0,1]\), called the _modulus of convexity_, such that for every \(\varepsilon\in(0,2]\) and every two unit vectors \(\xi,\eta\in\mathbb{E}\) with \(\|\xi-\eta\|\geq\varepsilon\) it holds that \(\left\|\frac{\xi+\eta}{2}\right\|\leq 1-\delta(\varepsilon)\). For example, Hilbert spaces are uniformly convex (due to the parallelogram equality) and for every \(1<p<\infty\), every \(L^{p}\) space is uniformly convex (due to Clarkson's inequalities).
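Indeed, for a Hilbert space the parallelogram identity gives an explicit modulus: if \(\|\xi\|=\|\eta\|=1\) and \(\|\xi-\eta\|\geq\varepsilon\), then
\[\left\|\frac{\xi+\eta}{2}\right\|^{2}=\frac{\|\xi\|^{2}+\|\eta\|^{2}}{2}-\left\|\frac{\xi-\eta}{2}\right\|^{2}\leq 1-\frac{\varepsilon^{2}}{4},\]
so one can take \(\delta(\varepsilon)=1-\sqrt{1-\varepsilon^{2}/4}\).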
The study of property \((F_{\mathcal{E}_{uc}})\) was initiated in the seminal paper of Bader, Furman, Gelander and Monod [3], in which they proved (among several other results) that higher rank algebraic groups have property \((F_{L^{p}})\) for every \(L^{p}\) space with \(1<p<\infty\). They also conjectured that higher rank algebraic groups should have property \((F_{\mathcal{E}_{uc}})\). This
conjecture can be split into two cases depending on the local field. For non-Archimedian local fields, the conjecture of [1] was settled by the work of V. Lafforgue [10] and the subsequent work of Liao [11] in which stronger results were proven, i.e., in [10, 11] it was proven that higher rank algebraic groups over non-Archimedian local fields have strong Banach property (T) which implies property \((F_{\mathcal{E}_{uc}})\). In the Archimedian case, the conjecture of [1] was settled only recently: in [12], the author made a breakthrough and proved that \(\mathrm{SL}_{n}(\mathbb{R}),n\geq 4\) and its lattices have property \((F_{\mathcal{E}_{uc}})\). The technique of [12] was then generalized by de Laat and de la Salle [1] to yield a proof of property \((F_{\mathcal{E}_{uc}})\) for every real higher rank algebraic group.
In this paper, we turn our attention to the study of property \((F_{\mathcal{E}_{uc}})\) for Steinberg groups and elementary Chevalley groups over commutative rings. For a classical (crystallographic) reduced, irreducible root system \(\Phi\) and a commutative ring \(R\), we denote \(\mathbb{G}_{\Phi}(R)\) to be the simply-connected Chevalley group corresponding to \(\Phi\) over \(R\). We further denote \(\mathrm{EL}_{\Phi}(R)\) to be elementary Chevalley group corresponding to \(\Phi\) over \(R\), i.e., the subgroup of \(\mathbb{G}_{\Phi}(R)\) generated by the root subgroups with respect to the standard torus (we refer the reader to [1] and references therein for more detailed definitions). The Steinberg group \(\mathrm{St}_{\Phi}(R)\) is a group extension of \(\mathrm{EL}_{\Phi}(R)\) that "forgets" all the relations of \(\mathrm{EL}_{\Phi}(R)\) apart from the relations of \(R\) (as an Abelian group) in each root subgroup and the Chevalley commutator formula (a more explicit definition is given in SS2.5 below). For example, in the case where \(\Phi=A_{n}\), the groups discussed above are \(\mathbb{G}_{A_{n}}(R)=\mathrm{SL}_{n+1}(R)\), \(\mathrm{EL}_{A_{n}}(R)=\mathrm{EL}_{n+1}(R)\) and \(\mathrm{St}_{A_{n}}(R)=\mathrm{St}_{n+1}(R)\). With these notations, our main result is:
**Theorem 1.1**.: _Let \(\Phi\) be a classical reduced, irreducible root system of rank \(\geq 2\) such that \(\Phi\neq C_{2}\). For every finitely generated, commutative (unital) ring \(R\), the groups \(\mathrm{St}_{\Phi}(R)\) and \(\mathrm{EL}_{\Phi}(R)\) have property \((F_{\mathcal{E}_{uc}})\)._
Historically, establishing property (T) (or, equivalently, property (FH)) for \(\mathrm{St}_{\Phi}(R)\) and \(\mathrm{EL}_{\Phi}(R)\) was more challenging than establishing property (T) for higher rank algebraic groups. Indeed, property (T) for higher rank algebraic groups was proved a few years after Kazhdan defined property (T). In contrast, property (T) for \(\mathrm{EL}_{n}(R),n\geq 3\) was only established much more recently in the works of Shalom [12] and Vaserstein [13] using a bounded generation machinery of Shalom (partial results were proven in [12, 13]). For \(\mathrm{St}_{n}(R),n\geq 3\), Property (T) was proven by Ershov and Jaikin-Zapirain [1] using the machinery of angles between subspaces (which is very different from the bounded generation machinery of Shalom). This angle machinery was later generalized by Ershov, Jaikin-Zapirain and Kassabov [1] to prove property (T) for \(\mathrm{St}_{\Phi}(R)\) and \(\mathrm{EL}_{\Phi}(R)\) for every classical reduced, irreducible root system \(\Phi\).
The proof of our main result has two components: relative property \((F_{\mathcal{E}_{uc}})\) and synthesis of property \((F_{\mathcal{E}_{uc}})\) (the term "synthesis" is due to Mimura [14]).
### Relative property \((F_{\mathcal{E}_{uc}})\)
For a topological group \(\Gamma\), a subgroup \(K<\Gamma\) and a Banach space \(\mathbb{E}\), we say that the pair \((\Gamma,K)\) has _relative property_\((F_{\mathbb{E}})\) if every continuous, affine, isometric action of \(\Gamma\) on \(\mathbb{E}\) has a \(K\)-fixed point. For a class of Banach spaces, we will say that the pair \((\Gamma,K)\) has _relative property_\((F_{\mathcal{E}})\) if it has relative property \((F_{\mathbb{E}})\) for every \(\mathbb{E}\in\mathcal{E}\).
Our main result regarding relative property \((F_{\mathcal{E}_{uc}})\) is the following:
**Theorem 1.2**.: _Let \(\Phi\neq C_{2}\) be a classical, reduced, irreducible root system of rank \(\geq 2\) and \(R\) a finitely generated commutative (unital) ring. For \(\alpha\in\Phi\), we denote \(K_{\alpha}\) to be the root subgroup of \(\mathrm{St}_{\Phi}(R)\). Then for every \(\alpha\in\Phi\), the pair \((\mathrm{St}_{\Phi}(R),K_{\alpha})\) has relative property \((F_{\mathcal{E}_{uc}})\)._
Most of our work toward proving Theorem 1.2 is proving the relative fix point property in the case where \(\Phi=A_{2}\):
**Theorem 1.3**.: _For every \(m\in\mathbb{N}\) and every \(\alpha\in A_{2}\), the pair \((\mathrm{St}_{A_{2}}(\mathbb{Z}[t_{1},...,t_{m}]),K_{\alpha})\) has relative property \((F_{\mathcal{E}_{uc}})\)._
Our approach for proving Theorem 1.3 is to show that for every \(\mathbb{E}\in\mathcal{E}_{uc}\) and every \(\xi\in\mathbb{E}\), there is \(\widetilde{\xi}\) in the closure of the convex hull of the \(\mathrm{St}_{A_{2}}(\mathbb{Z}[t_{1},...,t_{m}])\)-orbit of \(\xi\) such that \(\widetilde{\xi}\) is fixed by \(K_{\alpha}\). This is done by defining a sequence of finitely supported averages of vectors in the orbit of \(\xi\) and showing that this sequence converges to \(\widetilde{\xi}\) as above. The method of proof is very similar to that in [10] and relies on bounding the differences between averages in the Heisenberg group. However, there is a difficulty that arises from the fact that here we work with the Heisenberg group with entries in \(\mathbb{Z}[t_{1},...,t_{m}]\), while in [10], the analysis was for the Heisenberg group with entries in \(\mathbb{Z}\). Resolving this difficulty requires a non-trivial improvement of the methods developed in [10].
Passing from Theorem 1.3 to Theorem 1.2 when \(\Phi\) is simply laced or \(\Phi=F_{4}\) is straightforward: in those cases every root is contained in an \(A_{2}\)-subsystem, and thus for every \(\alpha\in\Phi\) there is \(\mathrm{St}_{A_{2}}(R)<\mathrm{St}_{\Phi}(R)\) such that \(K_{\alpha}<\mathrm{St}_{A_{2}}(R)\). The cases where \(\Phi=B_{n},C_{n},n\geq 3\) or \(\Phi=G_{2}\) require some additional analysis and are proven via a bounded generation argument.
### Synthesis of property \((F_{\mathcal{E}_{uc}})\)
After establishing relative fixed point properties for the pairs \((\mathrm{St}_{\Phi}(R),K_{\alpha}),\alpha\in\Phi\), we need to "synthesize" the relative fixed point property to a global fixed point property of the entire group \(\mathrm{St}_{\Phi}(R)\). In doing so, we cannot use bounded generation arguments a la Shalom [1], since the bounded generation assumptions do not hold for Steinberg groups. In [1], Ershov, Jaikin-Zapirain and Kassabov gave a synthesis argument for property (T) for Steinberg groups that relied on the notion of angle between subspaces. Although there are some results of the author regarding generalizing the angle machinery to the Banach setting (see [10, 10, 11]), it seems that generalizing the approach of [1] to all uniformly convex Banach spaces is hard to implement. In [12], Mimura gave a "soft" synthesis argument for the groups \(\mathrm{St}_{A_{n}}(R),n\geq 2\) that does not use bounded generation or angle arguments and that can be applied to the class of all uniformly convex Banach spaces. However, the argument in [12] was limited to \(\mathrm{St}_{A_{n}}(R)\).
Our work on synthesis of fixed point properties can be seen as a vast generalization of the basic idea of Mimura [12]. First, we develop a very general machinery for synthesis of fixed point properties that is interesting in its own right:
**Theorem 1.4**.: _Let \(\Gamma\) be a finitely generated group with a finite Abelianization and \(\mathcal{E}\) a class of uniformly convex Banach spaces such that either \(\mathcal{E}\) is closed under passing to ultraproducts
_or \(\mathcal{E}=\mathcal{E}_{uc}\). Also, let \(\mathcal{G}\) be a (non-directed) connected graph with a vertex set \(V\). Assume that for every \(v\in V\), there are subgroups \(N^{v},H_{+}^{v},H_{-}^{v}<\Gamma\) such that the following holds:_
1. _For every_ \(v\in V\)_,_ \(N^{v}\) _normalizes_ \(H_{+}^{v}\) _and_ \(H_{-}^{v}\)_._
2. _For every_ \(v\in V\)_,_ \(\langle H_{+}^{v},H_{-}^{v}\rangle=\Gamma\)_._
3. _If_ \(u,v\in V\) _such that_ \(u\sim v\)_, then_ \(H_{\pm}^{u}<\langle H_{\pm}^{v},N^{v}\rangle,\) _and_ \(H_{\pm}^{v}<\langle H_{\pm}^{u},N^{u}\rangle\)_._
4. _It holds that_ \[\langle H_{+}^{v},N^{v}:v\in V\rangle=\Gamma.\]
5. _For every_ \(v\in V\)_, the pairs_ \((\Gamma,H_{+}^{v}),(\Gamma,H_{-}^{v})\) _have relative property_ \((F_{\mathcal{E}})\)_._
_Then \(\Gamma\) has property \((F_{\mathcal{E}})\)._
Using this machinery, we derive a far reaching generalization of the main result of [1] ([1, Theorem 1.2]):
**Theorem 1.5**.: _Let \(\Phi\) be a classical reduced irreducible root system of rank \(\geq 2\), \(R\) a commutative ring, \(\mathrm{St}_{\Phi}(R)\) the Steinberg group and \(\{K_{\alpha}:\alpha\in\Phi\}\) the root subgroups of \(\mathrm{St}_{\Phi}(R)\)._
_If for every \(\alpha\in\Phi\), the pair \((\mathrm{St}_{\Phi}(R),K_{\alpha})\) has relative property \((F_{\mathcal{E}_{uc}})\), then \(\mathrm{St}_{\Phi}(R)\) has property \((F_{\mathcal{E}_{uc}})\)._
We note that Theorem 1.5 is actually a special case of a more general result: In [1], Ershov, Jaikin-Zapirain and Kassabov defined the general notion of a group that is strongly graded over a root system and showed that Steinberg groups \(\mathrm{St}_{\Phi}(R)\) are a special case of this notion. They then proved a synthesis result for property (T) for groups strongly graded over a root system. In Theorem 7.7 below, we generalize their result and prove a synthesis result in the context of uniformly convex Banach spaces for groups that are strongly graded over root systems.
### Super-expanders
As an application of Theorem 1.1, we derive new constructions of super-expanders (see exact definition in §9 below):
**Theorem 1.6**.: _Let \(n\geq 3,m\in\mathbb{N}\) and let \(\{R_{i}\}_{i\in\mathbb{N}}\) be a sequence of finite commutative (unital) rings such that for each \(i\), \(R_{i}\) is generated by \(p_{0}^{(i)}=1,p_{1}^{(i)},...,p_{m}^{(i)}\in R_{i}\) and \(|R_{i}|\to\infty\). Also, let \(\Phi\neq C_{2}\) be a classical reduced irreducible root system of rank \(\geq 2\). Denote by \(\phi_{i}:\mathrm{St}_{\Phi}(\mathbb{Z}[t_{1},...,t_{m}])\to\mathrm{EL}_{\Phi}(R_{i})\) the epimorphism induced by the ring epimorphism \(\mathbb{Z}[t_{1},...,t_{m}]\to R_{i},1\mapsto p_{0}^{(i)},t_{j}\mapsto p_{j}^{(i)},\forall 1\leq j\leq m\)._
_For a finite generating set \(S\) of \(\mathrm{St}_{\Phi}(\mathbb{Z}[t_{1},...,t_{m}])\) it holds that the family of Cayley graphs of \(\{(\mathrm{EL}_{\Phi}(R_{i}),\phi_{i}(S))\}_{i\in\mathbb{N}}\) is a super-expander family._
**Structure of this paper**. This paper is organized as follows: In §2, we cover some needed preliminaries. In §3, we prove some bounds on the difference of averaging operations for the Heisenberg group that are needed for our relative fixed point result. In §4, we prove a bound on the word norm growth in the \(A_{2}\) Steinberg group. In §5, we prove our result regarding relative fixed point properties of root subgroups in Steinberg groups (Theorem 1.2 above). In §6, we prove Theorem 1.4 (and also some more general versions). In §7, we prove a general synthesis result for groups graded by root systems and derive Theorem 1.5 as a special case. In §8, we prove our main result (Theorem 1.1 above). Lastly, in §9, we show that our main result yields new constructions of super-expanders (Theorem 1.6).
## 2. Preliminaries
### Basic notation
\(\lesssim\) **notation**. We will use the \(\lesssim\) notation as follows: For two expressions \(X,Y\), we will write \(X\lesssim Y\) if there is a universal constant \(C\) such that \(X\leq CY\). We will write \(X\lesssim_{B}Y\) if there is a constant \(C=C(B)\) such that \(X\leq CY\).
**Monomials notation**. For a monomial \(\underline{t}=t_{1}^{n_{1}}...t_{m}^{n_{m}}\) where \(n_{i}\in\mathbb{N}\cup\{0\}\), we will denote \(\deg(\underline{t})=n_{1}+...+n_{m}\). We will also use the convention \(t_{1}^{0}...t_{m}^{0}=1\).
**Graph notation**. We will denote (non-directed) graphs by \(\mathcal{G}\) and directed graphs by \(\vec{\mathcal{G}}\). For two vertices \(u,v\) in \(\mathcal{G}\), we will denote \(u\sim v\) if there is an edge between \(u\) and \(v\) (when there is a chance of ambiguity regarding \(\mathcal{G}\), we will denote \(u\sim^{\mathcal{G}}v\)). For two vertices \(u,v\) in \(\vec{\mathcal{G}}\), we will denote \(u\to v\) if there is a directed edge from \(u\) to \(v\).
### Linear representations and affine isometric actions of a group
Let \(\Gamma\) be a discrete group and \(\mathbb{E}\) a Banach space. An _isometric representation_ of \(\Gamma\) on \(\mathbb{E}\) is a homomorphism \(\pi:\Gamma\to O(\mathbb{E})\), where \(O(\mathbb{E})\) denotes the group of all isometric linear invertible maps from \(\mathbb{E}\) to itself. An _affine isometric action_ of \(\Gamma\) on \(\mathbb{E}\) is a homomorphism \(\rho:\Gamma\to\operatorname{Isom}_{aff}(\mathbb{E})\), where \(\operatorname{Isom}_{aff}(\mathbb{E})\) is the group of bijective affine isometries from \(\mathbb{E}\) to itself. We recall that \(\rho\) is of the form
\[\rho(g)\xi=\pi(g)\xi+c(g),\forall\xi\in\mathbb{E}\]
where \(\pi:\Gamma\to O(\mathbb{E})\) is an isometric linear representation that is called the _linear part of \(\rho\)_ and \(c:\Gamma\to\mathbb{E}\) is a \(1\)-cocycle into \(\pi\), i.e., for every \(g,h\in\Gamma\),
\[c(gh)=c(g)+\pi(g)c(h).\]
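In particular, a vector \(\xi_{0}\in\mathbb{E}\) is fixed by \(\rho\) if and only if
\[c(g)=\xi_{0}-\pi(g)\xi_{0}\quad\text{for every }g\in\Gamma,\]
i.e., if and only if the cocycle \(c\) is a coboundary; thus property \((F_{\mathbb{E}})\) amounts to every \(1\)-cocycle into every isometric linear representation of \(\Gamma\) on \(\mathbb{E}\) being a coboundary.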
Denote \(\mathbb{R}[\Gamma]\) to be the group ring of \(\Gamma\), i.e., the ring of functions \(f:\Gamma\to\mathbb{R}\) with finite support, where multiplication is via convolution. Equivalently, an element in \(\mathbb{R}[\Gamma]\) can be represented as a formal sum \(\sum_{g\in F}a_{g}g\), where \(F\subseteq\Gamma\) is a finite set and \(\{a_{g}\}_{g\in F}\subseteq\mathbb{R}\). In this convention, the multiplication is defined as
\[\left(\sum_{g\in F_{1}}a_{g}g\right)\left(\sum_{g^{\prime}\in F_{2}}b_{g^{ \prime}}g^{\prime}\right)=\sum_{g\in F_{1},g^{\prime}\in F_{2}}a_{g}b_{g^{ \prime}}gg^{\prime}.\]
Below, we will use both conventions of \(\mathbb{R}[\Gamma]\) according to convenience.
For an affine isometric action \((\rho,\mathbb{E})\) and \(f\in\mathbb{R}[\Gamma]\) define
\[\rho(f)\xi=\sum_{g\in\Gamma}f(g)\rho(g)\xi,\forall\xi\in\mathbb{E}\]
(note that the sum above is finite and thus well defined). We note that \(\rho(f)\) need not be an isometry, but it is always an affine map, i.e., for every \(f\in\mathbb{R}[\Gamma]\) there is a bounded linear map \(T:\mathbb{E}\to\mathbb{E}\) and a vector \(\xi_{0}\in\mathbb{E}\) such that for every \(\xi\in\mathbb{E}\),
\[\rho(f)\xi=T\xi+\xi_{0}.\]
We also note that since every isometric linear representation \(\pi:\Gamma\to O(\mathbb{E})\) is an affine isometric action, this definition also extends to \(\pi(f)\) and in that case \(\pi(f)\) is always a bounded linear map.
We define \(\operatorname{Prob}_{c}(\Gamma)\subseteq\mathbb{R}[\Gamma]\) to be the set of finitely supported probability measures on \(\Gamma\), i.e., \(f\in\operatorname{Prob}_{c}(\Gamma)\) if it is finitely supported, \(f(g)\geq 0,\forall g\) and \(\sum_{g}f(g)=1\). Equivalently, \(\operatorname{Prob}_{c}(\Gamma)\) is the set of all the formal sums \(\sum_{g\in F}a_{g}g\), where \(F\subseteq\Gamma\) is a finite set, \(\{a_{g}\}_{g\in F}\subseteq[0,1]\) and \(\sum_{g\in F}a_{g}=1\).
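For example, for \(g\in\Gamma\) the element \(\frac{e+g}{2}\in\operatorname{Prob}_{c}(\Gamma)\) acts by
\[\rho\left(\frac{e+g}{2}\right)\xi=\frac{\xi+\rho(g)\xi}{2},\]
i.e., it sends \(\xi\) to the midpoint of \(\xi\) and \(\rho(g)\xi\); the averaging elements \(X_{k}(p),Y_{k}(p),Z_{k}(p)\) introduced in §3 are exactly of this form.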
**Observation 2.1**.: Let \((\rho,\mathbb{E})\) be an affine isometric action of \(\Gamma\). For every \(g\in\Gamma\), every \(\xi_{1},...,\xi_{n}\in\mathbb{E}\) and every \(a_{1},...,a_{n}\in[0,1]\) such that \(\sum_{i}a_{i}=1\), it holds that
\[\rho(g)\left(\sum_{i=1}^{n}a_{i}\xi_{i}\right)=\sum_{i=1}^{n}a_{i}\rho(g)\xi_{ i}.\]
It follows that for every \(f_{1},f_{2}\in\operatorname{Prob}_{c}(\Gamma)\), it holds that \(\rho(f_{1})\rho(f_{2})=\rho(f_{1}f_{2})\).
**Claim 2.2**.: _Let \((\rho,\mathbb{E})\) be an affine isometric action of \(\Gamma\) with a linear part \(\pi\). Then for every \(f\in\mathbb{R}[\Gamma]\) and every \(\xi_{1},\xi_{2}\in\mathbb{E}\) it holds that_
\[\rho(f)\xi_{1}-\rho(f)\xi_{2}=\pi(f)(\xi_{1}-\xi_{2}).\]
_In particular, for \(f\in\operatorname{Prob}_{c}(\Gamma)\),_
\[\|\rho(f)\xi_{1}-\rho(f)\xi_{2}\|\leq\|\xi_{1}-\xi_{2}\|.\]
Proof.: We note that by the definition of affine map it holds for every \(g\in\Gamma\) that
\[\rho(g)\xi_{1}-\rho(g)\xi_{2}=\pi(g)(\xi_{1}-\xi_{2}).\]
Thus, for every \(f\in\mathbb{R}[\Gamma]\) it holds that
\[\rho(f)\xi_{1}-\rho(f)\xi_{2}=\sum_{g\in\Gamma}f(g)\rho(g)\xi_{1 }-\sum_{g\in\Gamma}f(g)\rho(g)\xi_{2}=\] \[\sum_{g\in\Gamma}f(g)(\rho(g)\xi_{1}-\rho(g)\xi_{2})=\sum_{g\in \Gamma}f(g)\pi(g)(\xi_{1}-\xi_{2})=\pi(f)(\xi_{1}-\xi_{2}),\]
as needed.
**Proposition 2.3**.: _Let \((\rho,\mathbb{E})\) be an affine isometric action of \(\Gamma\). For every integers \(0\leq N\leq n,n>0\), every \(g\in\Gamma\) and every \(\xi\in\mathbb{E}\) it holds that_
\[\left\|\rho\left(\frac{1}{n}\sum_{i=0}^{n-1}g^{i}\right)\xi-\rho(g^{N})\rho \left(\frac{1}{n}\sum_{i=0}^{n-1}g^{i}\right)\xi\right\|\leq\frac{N}{n}\|\xi- \rho(g^{n})\xi\|.\]
Proof.: We note that
\[\rho(g^{N})\rho\left(\frac{1}{n}\sum_{i=0}^{n-1}g^{i}\right)=\rho\left(\frac{ 1}{n}\sum_{i=N}^{n+N-1}g^{i}\right).\]
Thus,
\[\left\|\rho\left(\frac{1}{n}\sum_{i=0}^{n-1}g^{i}\right)\xi-\rho(g^{N})\rho\left(\frac{1}{n}\sum_{i=0}^{n-1}g^{i}\right)\xi\right\|=\left\|\frac{1}{n}\sum_{i=0}^{n-1}\rho(g^{i})\xi-\frac{1}{n}\sum_{i=N}^{n+N-1}\rho(g^{i})\xi\right\|=\] \[\left\|\frac{1}{n}\sum_{i=0}^{N-1}\left(\rho(g^{i})\xi-\rho(g^{i+n})\xi\right)\right\|\leq\] \[\frac{1}{n}\sum_{i=0}^{N-1}\left\|\rho(g^{i})\xi-\rho(g^{i})\rho(g^{n})\xi\right\|\leq^{\text{Claim }2.2}\frac{N}{n}\|\xi-\rho(g^{n})\xi\|,\]
as needed.
Let \(S\) be a symmetric generating set of \(\Gamma\) (we do not assume that \(S\) is necessarily finite). For every \(g\in\Gamma\), denote \(|g|_{S}\) to be the word norm with respect to \(S\), i.e., \(|g|_{S}\) is the distance between \(g\) and \(e\) in the Cayley graph \(\text{Cay}(\Gamma,S)\). For \(f\in\mathbb{R}[\Gamma]\), we denote
\[|f|_{S}=\max\{|g|_{S}\colon f(g)\neq 0\}.\]
**Observation 2.4**.: Observe that for \(f_{1},f_{2}\in\mathbb{R}[\Gamma]\) it holds that \(|f_{1}f_{2}|_{S}\leq|f_{1}|_{S}+|f_{2}|_{S}\).
**Proposition 2.5**.: _Let \(\mathbb{E}\) a Banach space, \(\Gamma\) a group, \(S\) a symmetric generating set of \(\Gamma\) and \(\rho\) an affine isometric action of \(\Gamma\). Assume that_
\[\sup_{s\in S}\lVert\rho(s)0\rVert<\infty.\]
_Then for every \(\xi\in\mathbb{E}\) and every \(g\in\Gamma\) it holds that_
\[\lVert\xi-\rho(g)\xi\rVert\leq(\sup_{s\in S}\lVert\rho(s)0\rVert+2\lVert\xi \rVert)|g|_{S}.\]
_Also, for every \(f,f_{1},f_{2}\in\text{Prob}_{c}(\Gamma)\) and every \(\xi\in\mathbb{E}\) it holds that_
\[\lVert\xi-\rho(f)\xi\rVert\lesssim_{\lVert\xi\rVert,S}|f|_{S},\]
_and_
\[\lVert\rho(f_{1})\xi-\rho(f_{2})\xi\rVert\lesssim_{\lVert\xi\rVert,S}|f_{1}|_{ S}+|f_{2}|_{S}.\]
Proof.: Let \(\pi\) denote the linear part of \(\rho\) and \(c\) denote the \(1\)-cocycle of \(\rho\). We note that
\[\sup_{s\in S}\lVert c(s)\rVert=\sup_{s\in S}\lVert\rho(s)0\rVert<\infty.\]
Fix \(\xi\in\mathbb{E}\) and denote \(C=2\lVert\xi\rVert+\sup_{s\in S}\lVert c(s)\rVert\). Note that
\[\sup_{s\in S}\lVert\xi-\rho(s)\xi\rVert=\sup_{s\in S}\lVert\xi-\pi(s)\xi-c(s) \rVert\leq 2\lVert\xi\rVert+\sup_{s\in S}\lVert c(s)\rVert=C.\]
We will show that for every \(g\in\Gamma\),
\[\lVert\xi-\rho(g)\xi\rVert\leq C|g|_{S}.\]
The proof is by induction on \(|g|_{S}\). If \(|g|_{S}=0\), then \(g=e\) and \(\|\xi-\rho(e)\xi\|=0\), as needed. Assume that the inequality holds for every \(g^{\prime}\in\Gamma\) with \(|g^{\prime}|_{S}=n\) and let \(g\in\Gamma\) with \(|g|_{S}=n+1\). There are \(s_{1},...,s_{n+1}\in S\) such that \(g=s_{1}...s_{n+1}\). Denote \(g^{\prime}=s_{2}...s_{n+1}\). Then
\[\|\xi-\rho(g)\xi\|\leq\|\xi-\rho(s_{1})\xi\|+\|\rho(s_{1})\xi-\rho (s_{1}...s_{n+1})\xi\|\leq\] \[C+\|\rho(s_{1})\xi-\rho(s_{1})\rho(g^{\prime})\xi\|=C+\|\xi-\rho (g^{\prime})\xi\|\leq C+Cn=C(n+1),\]
as needed.
For \(f\in\operatorname{Prob}_{c}(\Gamma)\), it holds that
\[\|\xi-\rho(f)\xi\|=\left\|\xi-\sum_{g}f(g)\rho(g)\xi\right\|= \left\|\sum_{g}f(g)(\xi-\rho(g)\xi)\right\|\leq\] \[\sum_{g}f(g)\|\xi-\rho(g)\xi\|\leq\max_{g,f(g)\neq 0}\|\xi-\rho(g )\xi\|\lesssim_{\|\xi\|,S}|f|_{S}.\]
It follows that for \(f_{1},f_{2}\in\operatorname{Prob}_{c}(\Gamma)\) it holds that
\[\|\rho(f_{1})\xi-\rho(f_{2})\xi\|\leq\|\rho(f_{1})\xi-\xi\|+\|\xi-\rho(f_{2}) \xi\|\lesssim_{\|\xi\|,S}|f_{1}|_{S}+|f_{2}|_{S}.\]
### Bounded generation and Banach fixed point properties
Let \(\Gamma\) be a group with subgroups \(H_{1},...,H_{k},\Gamma^{\prime}\). We say that \(H_{1},...,H_{k}\) _boundedly generate_ \(\Gamma^{\prime}\) if \(\Gamma^{\prime}\subseteq\langle H_{1},...,H_{k}\rangle\) and \(\sup_{g\in\Gamma^{\prime}}|g|_{H_{1}\cup...\cup H_{k}}<\infty\). Shalom [29] observed that bounded generation together with relative property (T) implies global property (T). His argument is very simple and requires very little adaptation to the setting of uniformly convex Banach spaces (this result is well known and the proof is given below for completeness):
**Lemma 2.6** (The bounded generation Lemma).: _Let \(\Gamma\) be a group with subgroups \(H_{1},...,H_{k},\Gamma^{\prime}\) and \(\mathbb{E}\) be a uniformly convex Banach space. Assume that \((\Gamma,H_{1}),...,(\Gamma,H_{k})\) have relative property \((F_{\mathbb{E}})\) and that \(\Gamma^{\prime}\subseteq\langle H_{1},...,H_{k}\rangle\). If \(H_{1},...,H_{k}\) boundedly generate \(\Gamma^{\prime}\), then \((\Gamma,\Gamma^{\prime})\) has relative property \((F_{\mathbb{E}})\)._
Proof.: Let \(\rho\) be an affine isometric action of \(\Gamma\) on \(\mathbb{E}\). Denote \(S=H_{1}\cup...\cup H_{k}\). By the assumption that \((\Gamma,H_{1}),...,(\Gamma,H_{k})\) have relative property \((F_{\mathbb{E}})\) it follows that for every \(\xi\in\mathbb{E}\),
\[\sup_{s\in S}\|\xi-\rho(s)\xi\|<\infty.\]
Thus, by the bounded generation assumption and Proposition 2.5 it follows that for every \(\xi\in\mathbb{E}\),
\[\sup_{g\in\Gamma^{\prime}}\|\xi-\rho(g)\xi\|<\infty,\]
i.e., the \(\Gamma^{\prime}\)-orbit of every \(\xi\) is bounded. It follows from the Ryll-Nardzewski Theorem (or a circumcenter argument) that \(\rho\) has a \(\Gamma^{\prime}\)-fixed point, i.e., that \((\Gamma,\Gamma^{\prime})\) has relative property \((F_{\mathbb{E}})\), as needed.
### Root systems
Here, we will give the basic definitions and results given in [1] regarding root systems.
**Definition 2.7** (Root system).: Let \(E\) be a finite dimensional real vector space. A finite non-empty set \(\Phi\subset E\) is called a _root system_ if
* \(\Phi\) spans \(E\).
* \(\Phi\) does not contain \(0\).
* \(\Phi\) is closed under inversion, i.e., \(\alpha\in\Phi\Rightarrow(-\alpha)\in\Phi\).
The vectors in \(\Phi\) are called the _roots of \(\Phi\)_ and the dimension of \(E\) is called the _rank of \(\Phi\)_.
**Definition 2.8**.: Let \(\Phi\) be a root system in \(E\).
* \(\Phi\) is called _reduced_ if every \(1\)-dimensional subspace of \(E\) contains at most two elements in \(\Phi\).
* \(\Phi\) is called _irreducible_ if there are no disjoint non-empty sets \(\Phi_{1},\Phi_{2}\subset\Phi\) such that \(\Phi=\Phi_{1}\cup\Phi_{2}\) and \(\operatorname{span}(\Phi_{1})\cap\operatorname{span}(\Phi_{2})=\{0\}\).
* A non-empty set \(\Psi\subseteq\Phi\) is called a _root subsystem of \(\Phi\)_ if \(\Phi\cap\operatorname{span}(\Psi)=\Psi\).
* \(\Phi\) is called _regular_ if every root of \(\Phi\) is contained in an irreducible root subsystem of rank \(2\).
We note that these definitions generalize the definition of classical root systems:
**Definition 2.9** (Classical root system).: Let \(\Phi\) be a root system in a space \(E\). \(\Phi\) is called a _classical root system_ if there exists an inner-product \((.,.)\) on \(E\) such that for every \(\alpha,\beta\in\Phi\) it holds that
\[\frac{2(\alpha,\beta)}{(\beta,\beta)}\in\mathbb{Z}\text{ and }\alpha-\frac{2( \alpha,\beta)}{(\beta,\beta)}\beta\in\Phi.\]
We will give only the basic definitions and facts regarding classical root systems - a much more complete account can be found in [1] and references therein. It is well-known that there is a classification of classical reduced irreducible root systems. Namely, every reduced irreducible classical root system is isomorphic to one of the following: \(A_{n},B_{n}(n\geq 3),C_{n}(n\geq 2),D_{n}(n\geq 4),F_{4},E_{6},E_{7},E_{8},G_{2}\). A classical reduced irreducible root system \(\Phi\) is called _simply laced_ if all the roots of \(\Phi\) have the same length/norm. We also note that all the classical reduced irreducible root systems are regular.
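For example, the root system \(A_{2}\), which plays a central role below, can be realized as \(\Phi=\{\pm\alpha,\pm\beta,\pm(\alpha+\beta)\}\) with \((\alpha,\alpha)=(\beta,\beta)=2\) and \((\alpha,\beta)=-1\); it is reduced, irreducible and simply laced, and the defining condition is checked directly, e.g.,
\[\frac{2(\alpha,\beta)}{(\beta,\beta)}=-1\in\mathbb{Z}\quad\text{and}\quad\alpha-\frac{2(\alpha,\beta)}{(\beta,\beta)}\beta=\alpha+\beta\in\Phi.\]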
**Definition 2.10**.: Let \(\Phi\) be a root system in \(E\). A linear functional \(f:E\to\mathbb{R}\) is said to be _in general position with respect to \(\Phi\)_ if
* \(f(\alpha)\neq 0\) for every \(\alpha\in\Phi\).
* \(f(\alpha)\neq f(\beta)\) for every \(\alpha,\beta\in\Phi,\alpha\neq\beta\).
We denote \(\mathfrak{F}(\Phi)\) to be the set of all the linear functionals that are general position with respect to \(\Phi\).
**Definition 2.11** (Borel set, boundary, core).: Let \(\Phi\) be a root system in \(E\) and \(f\in\mathfrak{F}(\Phi)\).
* The _Borel set of \(f\)_ is the set \[\Phi_{f}=\{\alpha\in\Phi:f(\alpha)>0\}.\] (note that it can be that \(f\neq f^{\prime}\) and \(\Phi_{f}=\Phi_{f^{\prime}}\)).
* We denote \(\mathrm{Borel}(\Phi)\) to be the set of all the Borel sets of \(\Phi\) and note that this set is closed under inversion, i.e., \(\Phi_{1}\in\mathrm{Borel}(\Phi)\Rightarrow-\Phi_{1}\in\mathrm{Borel}(\Phi)\).
* For \(\Phi_{1},\Phi_{2}\in\mathrm{Borel}(\Phi)\), \(\Phi_{1}\) and \(\Phi_{2}\) are called _co-minimal_ if \(\Phi_{1}\cap\Phi_{2}\) spans a \(1\)-dimensional space.
* For \(\Phi_{1},\Phi_{2}\in\mathrm{Borel}(\Phi)\), \(\Phi_{1}\) and \(\Phi_{2}\) are called _co-maximal_ if \(\Phi_{1}\) and \(-\Phi_{2}\) are co-minimal.
* The _boundary of the Borel set \(\Phi_{1}\)_ is the set \(\partial\Phi_{1}=\{\alpha\in\Phi_{1}:\exists\Phi_{2}\in\mathrm{Borel}(\Phi)\) such that \(\Phi_{1}\) and \(\Phi_{2}\) are co-minimal and \(\alpha\in\Phi_{1}\cap\Phi_{2}\}\).
* The _core of the Borel set \(\Phi_{1}\)_ is the set \[\mathrm{Core}(\Phi_{1})=\Phi_{1}\setminus\partial\Phi_{1}.\]
Below, we will need the following facts from [1]:
**Lemma 2.12**.: _Let \(\Phi\) be a root system in \(E\)._
1. _[_1_, Lemma 4.4]_ _Let_ \(\Phi_{1},\Phi_{2}\in\mathrm{Borel}(\Phi)\) _such that_ \(\Phi_{1}\) _and_ \(\Phi_{2}\) _are not co-maximal. Then there exists_ \(\Phi_{3}\in\mathrm{Borel}(\Phi)\) _such that_ \(\Phi_{1}\cap\Phi_{2}\subsetneqq\Phi_{1}\cap\Phi_{3}\) _and_ \(\Phi_{1}\cap\Phi_{2}\subsetneqq\Phi_{2}\cap\Phi_{3}\)_._
2. _[_1_, Lemma 4.6]_ _Let_ \(\Phi_{1}\in\mathrm{Borel}(\Phi)\) _and_ \(\alpha,\beta\in\Phi_{1}\) _such that_ \(\alpha,\beta\) _are linearly independent. For every_ \(a,b\in(0,\infty)\)_, if_ \(a\alpha+b\beta\in\Phi\)_, then_ \(a\alpha+b\beta\in\mathrm{Core}(\Phi_{1})\)_._
3. _[_1_, Lemma 4.8]_ _Assume that_ \(\Phi\) _is regular, then for every_ \(\alpha\in\Phi\)_, there is_ \(\Phi_{1}\in\mathrm{Borel}(\Phi)\) _such that_ \(\alpha\in\mathrm{Core}(\Phi_{1})\)_._
Let \(\Phi\) be a root system in \(E\). We define \(\mathcal{G}_{\mathrm{co-max}}\) to be the graph with the vertex set \(V(\mathcal{G}_{\mathrm{co-max}})=\mathrm{Borel}(\Phi)\) such that for \(\Phi_{1},\Phi_{2}\in\mathrm{Borel}(\Phi),\Phi_{1}\neq\Phi_{2}\) it holds that \(\Phi_{1}\sim\Phi_{2}\) if and only if \(\Phi_{1}\) and \(\Phi_{2}\) are co-maximal.
**Lemma 2.13**.: _The graph \(\mathcal{G}_{\text{co-max}}\) is connected._
The proof of this lemma is very similar to the proof given for the connectedness of the small Weyl graph in [1] and we give the proof for completeness.
Proof.: For \(\alpha\in\Phi\), we denote \(\mathbb{R}\alpha\) to be the \(1\)-dimensional subspace spanned by \(\alpha\).
Let \(\Phi_{1},\Phi_{2}\in\mathrm{Borel}(\Phi)\), \(\Phi_{1}\neq\Phi_{2}\). We will show that there is a path in \(\mathcal{G}_{\mathrm{co-max}}\) connecting \(\Phi_{1}\) and \(\Phi_{2}\) by a downward induction on \(|\{\mathbb{R}\alpha:\alpha\in\Phi_{1}\cap\Phi_{2}\}|\). We note that \(|\{\mathbb{R}\alpha:\alpha\in\Phi_{1}\cap\Phi_{2}\}|\) is maximal if and only if
\[|\{\mathbb{R}\alpha:\alpha\in\Phi_{1}\cap\Phi_{2}\}|=|\{\mathbb{R}\alpha: \alpha\in\Phi_{1}\}|-1=|\{\mathbb{R}\alpha:\alpha\in\Phi_{2}\}|-1\]
and this happens if and only if \(\Phi_{1}\) and \(\Phi_{2}\) are co-maximal and thus connected by an edge in \(\mathcal{G}_{\mathrm{co-max}}\). Assume that \(\Phi_{1}\) and \(\Phi_{2}\) are not co-maximal, then by Lemma 2.12 (1), there is \(\Phi_{3}\in\mathrm{Borel}(\Phi)\) such that \(\Phi_{1}\cap\Phi_{2}\subsetneqq\Phi_{1}\cap\Phi_{3}\) and \(\Phi_{1}\cap\Phi_{2}\subsetneqq\Phi_{2}\cap\Phi_{3}\). Thus
\[|\{\mathbb{R}\alpha:\alpha\in\Phi_{1}\cap\Phi_{2}\}|<|\{\mathbb{R}\alpha: \alpha\in\Phi_{1}\cap\Phi_{3}\}|\]
and
\[|\{\mathbb{R}\alpha:\alpha\in\Phi_{1}\cap\Phi_{2}\}|<|\{\mathbb{R}\alpha: \alpha\in\Phi_{2}\cap\Phi_{3}\}|.\]
It follows from the induction assumption that there is a path in \(\mathcal{G}_{\text{co-max}}\) connecting \(\Phi_{1}\) and \(\Phi_{3}\) and a path in \(\mathcal{G}_{\text{co-max}}\) connecting \(\Phi_{2}\) and \(\Phi_{3}\) and thus there is a path connecting \(\Phi_{1}\) and \(\Phi_{2}\).
### Steinberg groups over rings
Steinberg groups can be thought of as abstract groups defined via the Chevalley commutator relation. More explicitly, given a classical reduced irreducible root system \(\Phi\) and a ring \(R\), the group \(\text{St}_{\Phi}(R)\) is the group generated by the set \(\{x_{\alpha}(p):\alpha\in\Phi,p\in R\}\) with the following relations:
1. For every \(\alpha\in\Phi\) and every \(p_{1},p_{2}\in R\), \(x_{\alpha}(p_{1})x_{\alpha}(p_{2})=x_{\alpha}(p_{1}+p_{2})\).
2. For every \(\alpha,\beta\in\Phi\) and every \(p_{1},p_{2}\in R\), \[[x_{\alpha}(p_{1}),x_{\beta}(p_{2})]=\prod_{i,j\in\mathbb{N},\,i\alpha+j\beta\in\Phi}x_{i\alpha+j\beta}(C^{\alpha,\beta}_{i,j}\,p_{1}^{i}p_{2}^{j}),\] where the product is taken in order of increasing \(i+j\) and \(C^{\alpha,\beta}_{i,j}\) are the constants of the Chevalley commutator relation for Chevalley groups over fields (see for instance [1, Section 5.2]).
The subgroups \(K_{\alpha}=\{x_{\alpha}(p):p\in R\}\) will be called the _root subgroups_ of \(\text{St}_{\Phi}(R)\).
The most important Steinberg group for our analysis is \(\text{St}_{A_{2}}(R)\), which can be defined explicitly as follows: For every \(p\in R\), we denote
\[x_{1,2}(p)=x_{\alpha}(p),x_{2,3}(p)=x_{\beta}(p),x_{1,3}(p)=x_{\alpha+\beta}(p),\]
\[x_{2,1}(p)=x_{-\alpha}(p),x_{3,2}(p)=x_{-\beta}(p),x_{3,1}(p)=x_{-(\alpha+ \beta)}(p).\]
With this notation, by [1, Proposition 7.6], the Steinberg group \(\text{St}_{A_{2}}(R)\) is the group generated by the set \(\{x_{i,j}(p):1\leq i,j\leq 3,i\neq j,p\in R\}\) under the relations:
1. For every \(1\leq i,j\leq 3,i\neq j\) and every \(p_{1},p_{2}\in R\), \[x_{i,j}(p_{1})x_{i,j}(p_{2})=x_{i,j}(p_{1}+p_{2}).\]
2. For every \(1\leq i,j,k\leq 3\) that are pairwise distinct and every \(p_{1},p_{2}\in R\), \[[x_{i,j}(p_{1}),x_{j,k}(p_{2})]=x_{i,k}(p_{1}p_{2}).\]
3. For every \(1\leq i,j,k\leq 3\) that are pairwise distinct and every \(p_{1},p_{2}\in R\), \[[x_{i,j}(p_{1}),x_{i,k}(p_{2})]=[x_{j,i}(p_{1}),x_{k,i}(p_{2})]=e.\]
We also denote \(K_{i,j},i\neq j\) to be the root subgroup
\[K_{i,j}=\{x_{i,j}(p):p\in R\}.\]
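For intuition, these relations can be verified in the elementary Chevalley group \(\mathrm{EL}_{A_{2}}(R)=\mathrm{EL}_{3}(R)\): under the natural projection \(\mathrm{St}_{A_{2}}(R)\to\mathrm{EL}_{3}(R)\) sending \(x_{i,j}(p)\) to the elementary matrix \(I+pE_{i,j}\) (where \(E_{i,j}\) denotes the matrix unit), the second and third relations above become the matrix identities
\[[I+p_{1}E_{1,2},\,I+p_{2}E_{2,3}]=I+p_{1}p_{2}E_{1,3},\qquad[I+p_{1}E_{1,2},\,I+p_{2}E_{1,3}]=I,\]
which follow from \(E_{1,2}E_{2,3}=E_{1,3}\) together with the vanishing of all other products of the relevant matrix units.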
We will also need some of the relations for the groups \(\text{St}_{C_{2}}(R)\) and \(\text{St}_{G_{2}}(R)\) (these are not all the relations, but only the relations that are needed for us below):
**Proposition 2.14**.: _[_1_, Proposition 7.6]_ _For \(\Phi=C_{2}=\{\pm\alpha,\pm\beta,\pm(\alpha+\beta),\pm(\alpha+2\beta)\}\) (\(\alpha\) denotes the long root) it holds for every \(p_{1},p_{2}\in R\) that_
\[[x_{\alpha+\beta}(p_{1}),x_{\beta}(p_{2})]=x_{\alpha+2\beta}(2p_{1}p_{2}),\]
\[[x_{\alpha}(p_{1}),x_{\beta}(p_{2})]=x_{\alpha+\beta}(p_{1}p_{2})x_{\alpha+2 \beta}(p_{1}p_{2}^{2}).\]
_For \(\Phi=G_{2}=\{\pm\alpha,\pm\beta,\pm(\alpha+\beta),\pm(\alpha+2\beta),\pm(\alpha+3 \beta),\pm(2\alpha+3\beta)\}\) (\(\alpha\) denotes the long root) it holds for every \(p_{1},p_{2}\in R\) that_
\[[x_{\alpha}(p_{1}),x_{\beta}(p_{2})]=x_{\alpha+\beta}(p_{1}p_{2})x_{\alpha+2 \beta}(p_{1}p_{2}^{2})x_{\alpha+3\beta}(p_{1}p_{2}^{3})x_{2\alpha+3\beta}(p_{1 }^{2}p_{2}^{3}),\]
\[[x_{\alpha}(p_{1}),x_{\alpha+3\beta}(p_{2})]=x_{2\alpha+3\beta}(p_{1}p_{2}).\]
## 3. Action of \(\mathrm{H}_{3}(\mathbb{Z}[\mathrm{t}_{1},...,\mathrm{t}_{\mathrm{m}}])\) on uniformly convex Banach spaces
Throughout this section, \(m\in\mathbb{N}\) is a fixed integer. Below, we will study the action of \(\mathrm{H}_{3}(\mathbb{Z}[\mathrm{t}_{1},...,\mathrm{t}_{\mathrm{m}}])\) on uniformly convex Banach spaces and deduce results similar to those of [10] which dealt with isometric representations of \(\mathrm{H}_{3}(\mathbb{Z})\).
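(Here \(x(p)\), \(y(p)\), \(z(p)\), for \(p\in\mathbb{Z}[t_{1},...,t_{m}]\), denote the standard generators of the Heisenberg group; in the unitriangular matrix model one may take \(x(p)=I+pE_{1,2}\), \(y(p)=I+pE_{2,3}\) and \(z(p)=I+pE_{1,3}\), so that each of \(x,y,z\) is additive in its argument, the elements \(z(p)\) are central and \([x(p_{1}),y(p_{2})]=z(p_{1}p_{2})\).)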
For every \(k\in\mathbb{N}\cup\{0\}\) and every \(p\in\mathbb{Z}[\mathrm{t}_{1},...,\mathrm{t}_{m}]\), we define \(X_{k}(p),Y_{k}(p),Z_{k}(p)\in\mathrm{Prob}_{c}(\mathrm{H}_{3}(\mathbb{Z}[ \mathrm{t}_{1},...,\mathrm{t}_{\mathrm{m}}]))\) by
\[X_{k}(p)=\frac{e+x(2^{k}p)}{2},Y_{k}(p)=\frac{e+y(2^{k}p)}{2},Z_{k}(p)=\frac{ e+z(2^{k}p)}{2}.\]
Also, for \(d\in\mathbb{N}\), we define
\[X^{d}(p)=\frac{1}{2^{d}}\sum_{a=0}^{2^{d}-1}x(ap),\quad Y^{d}(p)=\frac{1}{2^{d}}\sum_{b=0}^{2^{d}-1}y(bp),\quad Z^{d}(p)=\frac{1}{2^{d}}\sum_{c=0}^{2^{d}-1}z(cp).\]
We note that
\[X^{d}(p)=\prod_{a=0}^{d-1}X_{a}(p),Y^{d}(p)=\prod_{b=0}^{d-1}Y_{b}(p),Z^{d}(p) =\prod_{c=0}^{d-1}Z_{c}(p).\]
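Indeed, the elements \(x(2^{a}p)\), \(0\leq a\leq d-1\), commute and \(x(\cdot)\) is additive, so expanding the product and using that every integer \(0\leq c\leq 2^{d}-1\) has a unique binary expansion gives
\[\prod_{a=0}^{d-1}X_{a}(p)=\frac{1}{2^{d}}\sum_{S\subseteq\{0,...,d-1\}}x\left(\sum_{a\in S}2^{a}p\right)=\frac{1}{2^{d}}\sum_{c=0}^{2^{d}-1}x(cp)=X^{d}(p),\]
and similarly for \(Y^{d}(p)\) and \(Z^{d}(p)\).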
Let \(\underline{t}\) denote monomials of the form \(\underline{t}=t_{1}^{n_{1}}...t_{m}^{n_{m}}\) and recall that we denoted \(\deg(t_{1}^{n_{1}}...t_{m}^{n_{m}})=n_{1}+....+n_{m}\). For \(d\in\mathbb{N}\cup\{0\}\), denote \(\mathcal{S}^{d}\) to be
\[\mathcal{S}^{d}=\{\underline{t}:\deg(\underline{t})=d\},\]
e.g., \(\mathcal{S}^{0}=\{1\}\) and \(\mathcal{S}^{1}=\{t_{1},...,t_{m}\}\). Also, denote \(\mathcal{B}^{d}=\bigcup_{k=0}^{d}\mathcal{S}^{k}\). We observe that \(|\mathcal{S}^{d}|=\binom{d+m-1}{d}\) (the number of monomials of degree exactly \(d\) in \(m\) variables) and \(|\mathcal{B}^{d}|=\binom{d+m}{d}\).
Define
\[\underline{X}^{d}=\prod_{\underline{t}\in\mathcal{B}^{d}}X^{d}(\underline{t} ),\underline{Y}^{d}=\prod_{\underline{t}\in\mathcal{B}^{d}}Y^{d}(\underline{t }),\underline{Z}^{d}=\prod_{\underline{t}\in\mathcal{B}^{d}}Z^{d}(\underline{t }).\]
For \(d,d^{\prime}\in\mathbb{N}\), we further define
\[\mathrm{Poly}(d,d^{\prime})=\left\{\sum_{\underline{t}\in\mathcal{B}^{d}}c_{ \underline{t}}\underline{t}\in\mathbb{Z}[\mathrm{t}_{1},...,\mathrm{t}_{m}]:0 \leq c_{\underline{t}}\leq 2^{d^{\prime}}-1\right\}.\]
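For instance, with \(m=1\) and \(d=d^{\prime}=1\) one has \(\mathcal{B}^{1}=\{1,t_{1}\}\) and
\[\mathrm{Poly}(1,1)=\{0,\,1,\,t_{1},\,1+t_{1}\};\]
in general \(|\mathrm{Poly}(d,d^{\prime})|=(2^{d^{\prime}})^{|\mathcal{B}^{d}|}=2^{d^{\prime}\binom{d+m}{d}}\).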
We note that with this notation,
\[\underline{X}^{d}=\frac{1}{|\mathrm{Poly}(d,d)|}\sum_{p\in\mathrm{Poly}(d,d)}x( p),\]
and similar equalities hold for \(\underline{Y}^{d}\) and \(\underline{Z}^{d}\).
The aim of this section is to bound the norms of the following differences:
\[\left\|\rho\left(\underline{X}^{d_{1}}\right)\rho\left(\underline{Z}^{d_{2}} \right)\rho\left(\underline{Y}^{d_{3}}\right)\xi-\rho\left(\underline{Y}^{d_{ 3}}\right)\rho\left(\underline{Z}^{d_{2}}\right)\rho\left(\underline{X}^{d_{1} }\right)\xi\right\|, \tag{1}\]
\[\left\|\rho\left(\underline{X}^{d_{1}}\right)\rho\left(\underline{Z}^{d_{2}} \right)\rho\left(\underline{Y}^{d_{3}}\right)\xi-\rho\left(\underline{X}^{d_{1 }}\right)\rho\left(\underline{Z}^{d_{2}+1}\right)\rho\left(\underline{Y}^{d_{ 3}}\right)\xi\right\|, \tag{2}\]
\[\left\|\rho\left(\underline{Y}^{d_{1}}\right)\rho\left(\underline{Z}^{d_{2}} \right)\rho\left(\underline{X}^{d_{3}}\right)\xi-\rho\left(\underline{Y}^{d_ {1}}\right)\rho\left(\underline{Z}^{d_{2}+1}\right)\rho\left(\underline{X}^{d _{3}}\right)\xi\right\|, \tag{3}\]
where \(\rho\) is an affine isometric action of \(\mathrm{H}_{3}(\mathbb{Z}[\mathrm{t}_{1},...,\mathrm{t}_{\mathrm{m}}])\) on a uniformly convex Banach space \(\mathbb{E}\).
**Lemma 3.1**.: _Let \(\mathbb{E}\) be a Banach space, \(\rho\) an affine isometric action of \(\mathrm{H}_{3}(\mathbb{Z}[\mathrm{t}_{1},...,\mathrm{t}_{\mathrm{m}}])\) on \(\mathbb{E}\) and \(f,f^{\prime},h_{1},...,h_{k}\in\mathrm{Prob}_{c}(\mathrm{H}_{3}(\mathbb{Z}[\mathrm{t}_{1},...,\mathrm{t}_{\mathrm{m}}]))\) such that \(h_{1},...,h_{k}\) are supported on \(\{z(p):p\in\mathbb{Z}[t_{1},...,t_{m}]\}\). Assume that there is \(\epsilon\in\mathbb{R}\) such that for every \(\xi\in\mathbb{E}\) it holds that_
\[\|\rho(f)\xi-\rho(f^{\prime})\xi\|\leq\epsilon\left(\max_{1\leq i\leq k}\|\xi- \rho(h_{i})\xi\|\right).\]
_Then for every \(f_{1},f_{2}\in\mathrm{Prob}_{c}(\mathrm{H}_{3}(\mathbb{Z}[\mathrm{t}_{1},..., \mathrm{t}_{\mathrm{m}}]))\) it holds that_
\[\|\rho(f_{1})\rho(f)\rho(f_{2})\xi-\rho(f_{1})\rho(f^{\prime})\rho(f_{2})\xi\| \leq\epsilon\left(\max_{1\leq i\leq k}\|\xi-\rho(h_{i})\xi\|\right).\]
Proof.: Let \(f_{1},f_{2}\in\mathrm{Prob}_{c}(\mathrm{H}_{3}(\mathbb{Z}[\mathrm{t}_{1},..., \mathrm{t}_{\mathrm{m}}]))\). We note that by Claim 2.2 it holds that
\[\|\rho(f_{1})\rho(f)\rho(f_{2})\xi-\rho(f_{1})\rho(f^{\prime})\rho(f_{2})\xi\| \leq\|\rho(f)\rho(f_{2})\xi-\rho(f^{\prime})\rho(f_{2})\xi\|\leq\]
\[\epsilon\left(\max_{1\leq i\leq k}\|\rho(f_{2})\xi-\rho(h_{i})\rho(f_{2})\xi \|\right)=\epsilon\left(\max_{1\leq i\leq k}\|\rho(f_{2})\xi-\rho(f_{2})\rho(h _{i})\xi\|\right)\leq\epsilon\left(\max_{1\leq i\leq k}\|\xi-\rho(h_{i})\xi\| \right).\]
**Lemma 3.2**.: _Let \(d,d^{\prime},d_{2}\in\mathbb{N}\) be constants such that \(d\leq d_{2}\) and \(d^{\prime}\leq d_{2}\). Then for every Banach space \(\mathbb{E}\), every isometric affine action \(\rho\) of \(\mathrm{H}_{3}(\mathbb{Z}[\mathrm{t}_{1},...,\mathrm{t}_{\mathrm{m}}])\), every \(\xi\in\mathbb{E}\) and every \(p\in\mathrm{Poly}(d,d^{\prime})\) it holds that_
\[\left\|\rho(z(p))\rho(\underline{Z}^{d_{2}})\xi-\rho(\underline{Z}^{d_{2}}) \xi\right\|\lesssim(d+m)^{m}\left(\frac{1}{2}\right)^{d_{2}-d^{\prime}}\max_{ \underline{t}\in\mathcal{B}^{d}}\|\xi-\rho(z(2^{d_{2}}\underline{t}))\xi\|.\]
Proof.: Fix \(p=\sum_{\underline{t}\in\mathcal{B}^{d}}c_{\underline{t}}\underline{t}\) as above.
We fix an order on \(\mathcal{B}^{d}\):
\[\mathcal{B}^{d}=\{\underline{t}(1),\underline{t}(2),...\}\]
and according to this order, we denote the coefficients of \(p\) as \(c_{\underline{t}(i)}=c_{i}\).
We denote \(\sum_{i=1}^{0}c_{i}\underline{t}(i)=0\) and observe that for every \(\xi\) it holds that
\[\left\|\rho(z(p))\rho(\underline{Z}^{d_{2}})\xi-\rho(\underline{Z} ^{d_{2}})\xi\right\|\] \[\leq\sum_{k=1}^{|\mathcal{B}^{d}|}\left\|\rho\left(z\left(\sum_{i= 1}^{k}c_{i}\underline{t}(i)\right)\right)\rho(\underline{Z}^{d_{2}})\xi-\rho \left(z\left(\sum_{i=1}^{k-1}c_{i}\underline{t}(i)\right)\right)\rho( \underline{Z}^{d_{2}})\xi\right\|\] \[\leq^{\text{Claim }2.2}\sum_{k=1}^{|\mathcal{B}^{d}|}\left\|\rho(z(c_ {k}\underline{t}(k)))\rho(\underline{Z}^{d_{2}})\xi-\rho(\underline{Z}^{d_{2} })\xi\right\|.\]
We recall that \(|\mathcal{B}^{d}|=\binom{d+m}{d}\leq(d+m)^{m}\) and thus it is enough to show that for every \(1\leq c\leq 2^{d^{\prime}}-1\) and every \(\underline{t}\in\mathcal{B}^{d}\) it holds that
\[\left\|\rho(z(c\underline{t}))\rho(\underline{Z}^{d_{2}})\xi-\rho(\underline{ Z}^{d_{2}})\xi\right\|\lesssim c\left(\frac{1}{2}\right)^{d_{2}}\|\xi-\rho(z(2^{d_{2} }\underline{t}))\xi\|.\]
Fix \(1\leq c\leq 2^{d^{\prime}}-1\) and \(\underline{t}\in\mathcal{B}^{d}\). By Lemma 3.1, it is enough to show that for every \(\xi\in\mathbb{E}\) it holds that
\[\left\|\rho(z(c\underline{t}))\rho(Z^{d_{2}}(\underline{t}))\xi-\rho(Z^{d_{2} }(\underline{t}))\xi\right\|\lesssim c\left(\frac{1}{2}\right)^{d_{2}}\|\xi- \rho(z(2^{d_{2}}\underline{t}))\xi\|,\]
and this readily follows from Proposition 2.3.
The following result provides a bound on (1):
**Theorem 3.3**.: _Let \(d_{1},d_{1}^{\prime},d_{3},d_{3}^{\prime},d_{2}\in\mathbb{N}\) be constants such that \(d_{1}+d_{3}\leq d_{2}\) and \(d_{1}^{\prime}+d_{3}^{\prime}\leq d_{2}\). Also, let \(f,h\in\mathrm{Prob}_{c}(\mathrm{H}_{3}(\mathbb{Z}[\mathrm{t}_{1},...,\mathrm{t }_{\mathrm{m}}]))\) such that_
\[\mathrm{supp}(f)\subseteq\{x(p):p\in\mathrm{Poly}(d_{1},d_{1}^{\prime})\},\]
_and_
\[\mathrm{supp}(h)\subseteq\{y(p):p\in\mathrm{Poly}(d_{3},d_{3}^{\prime})\}.\]
_Then for every Banach space \(\mathbb{E}\) and every isometric affine action \(\rho\) of \(\mathrm{H}_{3}(\mathbb{Z}[\mathrm{t}_{1},...,\mathrm{t}_{\mathrm{m}}])\) on \(\mathbb{E}\) it holds for every \(\xi\in\mathbb{E}\) that_
\[\left\|\rho(fh)\rho(\underline{Z}^{d_{2}})\xi-\rho(hf)\rho(\underline{Z}^{d_{2 }})\xi\right\|\lesssim(d_{2}+m)^{m}\left(\frac{1}{2}\right)^{d_{2}-d_{1}^{ \prime}-d_{3}^{\prime}}\max_{\underline{t}\in\mathcal{B}^{d_{1}+d_{3}}}\|\xi- \rho(z(2^{d_{2}}\underline{t}))\xi\|.\]
_In particular, for \(d_{1},d_{3},d_{2}\in\mathbb{N}\cup\{0\}\), if \(d_{1}+d_{3}\leq d_{2}\), it holds for every Banach space \(\mathbb{E}\), every isometric affine action \(\rho\) of \(\mathrm{H}_{3}(\mathbb{Z}[\mathrm{t}_{1},...,\mathrm{t}_{\mathrm{m}}])\) on \(\mathbb{E}\) and every \(\xi\in\mathbb{E}\) that_
\[\left\|\rho\left(\underline{X}^{d_{1}}\right)\rho\left(\underline {Z}^{d_{2}}\right)\rho\left(\underline{Y}^{d_{3}}\right)\xi-\rho\left( \underline{Y}^{d_{3}}\right)\rho\left(\underline{Z}^{d_{2}}\right)\rho\left( \underline{X}^{d_{1}}\right)\xi\right\|\lesssim\] \[(d_{2}+m)^{m}\left(\frac{1}{2}\right)^{d_{2}-d_{1}-d_{3}}\max_{ \underline{t}\in\mathcal{B}^{d_{2}}}\|\xi-\rho(z(2^{d_{2}}\underline{t}))\xi\|.\]
Proof.: Fix \(\rho,\mathbb{E}\) and \(\xi\) as above. Let us denote
\[A_{1}=\{p\in\text{Poly}(d_{1},d_{1}^{\prime}):x(p)\in\text{supp}(f)\},\]
\[A_{2}=\{p\in\text{Poly}(d_{3},d_{3}^{\prime}):y(p)\in\text{supp}(h)\}.\]
We note that
\[A_{1}A_{2}\subseteq\text{Poly}(d_{1},d_{1}^{\prime})\,\text{Poly}(d_{3},d_{3}^{ \prime})\subseteq\text{Poly}(d_{1}+d_{3},d_{1}^{\prime}+d_{3}^{\prime}).\]
Thus,
\[\left\|\rho(fh)\rho(\underline{Z}^{d_{2}})\xi-\rho(hf)\rho(\underline{Z}^{d_{2}})\xi\right\|\leq\] \[\sum_{p_{1}\in A_{1},p_{2}\in A_{2}}f(x(p_{1}))h(y(p_{2}))\left\|\rho(x(p_{1})y(p_{2}))\rho(\underline{Z}^{d_{2}})\xi-\rho(y(p_{2})x(p_{1}))\rho(\underline{Z}^{d_{2}})\xi\right\|=\] \[\sum_{p_{1}\in A_{1},p_{2}\in A_{2}}f(x(p_{1}))h(y(p_{2}))\left\|\rho(y(p_{2})x(p_{1}))\rho(z(p_{1}p_{2}))\rho(\underline{Z}^{d_{2}})\xi-\rho(y(p_{2})x(p_{1}))\rho(\underline{Z}^{d_{2}})\xi\right\|\leq^{\text{Claim }2.2}\] \[\sum_{p_{1}\in A_{1},p_{2}\in A_{2}}f(x(p_{1}))h(y(p_{2}))\left\|\rho(z(p_{1}p_{2}))\rho(\underline{Z}^{d_{2}})\xi-\rho(\underline{Z}^{d_{2}})\xi\right\|\lesssim^{\text{Lemma }3.2}\] \[\sum_{p_{1}\in A_{1},p_{2}\in A_{2}}f(x(p_{1}))h(y(p_{2}))(d_{1}+d_{3}+m)^{m}\left(\frac{1}{2}\right)^{d_{2}-d_{1}^{\prime}-d_{3}^{\prime}}\max_{\underline{t}\in\mathcal{B}^{d_{1}+d_{3}}}\|\xi-\rho(z(2^{d_{2}}\underline{t}))\xi\|\leq\] \[(d_{2}+m)^{m}\left(\frac{1}{2}\right)^{d_{2}-d_{1}^{\prime}-d_{3}^{\prime}}\max_{\underline{t}\in\mathcal{B}^{d_{2}}}\|\xi-\rho(z(2^{d_{2}}\underline{t}))\xi\|,\]
as needed.
Bounding (2), (3) will take more work. We start by proving a bound in the case of the Heisenberg group \(\text{H}_{3}(\mathbb{Z})\). To ease the reading, in that case we denote
\[X_{k}(1)=X_{k},\;X^{k}(1)=X^{k},\;Y_{k}(1)=Y_{k},\;\ldots,\;Z^{k}(1)=Z^{k}.\]
In [12], the following result was proven:
**Lemma 3.4**.: _[_12_, Lemma 5.4]_ _For every uniformly convex Banach space \(\mathbb{E}\) with a modulus of convexity \(\delta\), there is a constant \(r_{1}=r_{1}(\delta)\), \(0\leq r_{1}<1\) such that for every isometric linear representation \(\pi:\text{H}_{3}(\mathbb{Z})\rightarrow\text{O}(\mathbb{E})\), every \(n,c_{0}\in\mathbb{N}\) such that \(1\leq c_{0}\leq 2^{n-1}\) and every \(\zeta\in\mathbb{E}\), if \(\|(I-\pi(z(c_{0})))\zeta\|\geq\frac{1}{2}\|\zeta\|\), then_
\[\|\pi\left(X_{0}Y^{n}\right)\zeta\|\leq r_{1}\|\zeta\|.\]
Using this Lemma, we prove the following:
**Theorem 3.5**.: _For every uniformly convex Banach space \(\mathbb{E}\) with a modulus of convexity \(\delta\), there is a constant \(r_{1}=r_{1}(\delta)\), \(0\leq r_{1}<1\) such that for every \(n\in\mathbb{N}\), every affine isometric action \(\rho\) of \(\text{H}_{3}(\mathbb{Z})\) on \(\mathbb{E}\) and every \(\xi\in\mathbb{E}\) it holds that_
\[\left\|\rho\left(X_{0}Y^{n}\right)\xi-\rho\left(X_{0}Y^{n}\right)\rho\left(Z_{0 }\right)\xi\right\|\leq\max\left\{r_{1}\|\xi-\rho(Z_{0})\xi\|,\frac{\|\xi-\rho( z(2^{n-1}))\xi\|}{2^{n-1}}\right\}.\]
Proof.: Let \(\pi\) denote the linear part of \(\rho\).
If \(\|\xi-\rho(Z_{0})\xi\|\leq\frac{\|\xi-\rho(z(2^{n-1}))\xi\|}{2^{n-1}}\), then by Claim 2.2,
\[\|\rho\left(X_{0}Y^{n}\right)\xi-\rho\left(X_{0}Y^{n}\right)\rho\left(Z_{0} \right)\xi\|\leq\|\xi-\rho\left(Z_{0}\right)\xi\|\leq\frac{\|\xi-\rho(z(2^{n-1 }))\xi\|}{2^{n-1}},\]
and we are done.
Assume that \(\|\xi-\rho(Z_{0})\xi\|>\frac{\|\xi-\rho(z(2^{n-1}))\xi\|}{2^{n-1}}\). Then, using Claim 2.2,
\[\left\|\pi(Z^{n-1})\left(\xi-\rho\left(Z_{0}\right)\xi\right)\right\|=\frac{1 }{2}\left\|\pi(Z^{n-1})\left(\xi-\rho\left(z(1)\right)\xi\right)\right\|=\]
\[\frac{1}{2}\left\|\rho(Z^{n-1})\xi-\rho(Z^{n-1})\rho\left(z(1)\right)\xi \right\|=\frac{1}{2}\left\|\frac{1}{2^{n-1}}\sum_{c=0}^{2^{n-1}-1}\rho(z(c)) \xi-\frac{1}{2^{n-1}}\sum_{c=1}^{2^{n-1}}\rho(z(c))\xi\right\|=\]
\[\frac{1}{2}\left\|\frac{\xi-\rho(z(2^{n-1}))\xi}{2^{n-1}}\right\|<\frac{1}{2} \|\xi-\rho(Z_{0})\xi\|.\]
It follows that
\[\frac{1}{2^{n-1}}\sum_{c=0}^{2^{n-1}-1}\|(I-\pi(z(c)))\left(\xi- \rho\left(Z_{0}\right)\xi\right)\|\geq\left\|(I-\pi(Z^{n-1}))\left(\xi-\rho \left(Z_{0}\right)\xi\right)\right\|\geq\] \[\left\|\xi-\rho\left(Z_{0}\right)\xi\right\|-\left\|\pi(Z^{n-1}) \left(\xi-\rho\left(Z_{0}\right)\xi\right)\right\|>\frac{1}{2}\|\xi-\rho(Z_{0} )\xi\|.\]
Thus, there is \(1\leq c_{0}\leq 2^{n-1}\) such that
\[\|(I-\pi(z(c_{0})))\left(\xi-\rho\left(Z_{0}\right)\xi\right)\|\geq\frac{1}{2} \|\xi-\rho(Z_{0})\xi\|,\]
and by Lemma 3.4 applied to \(\zeta=\xi-\rho(Z_{0})\xi\) it follows that
\[\|\rho\left(X_{0}Y^{n}\right)\xi-\rho\left(X_{0}Y^{n}\right)\rho\left(Z_{0}\right)\xi\|=\|\pi\left(X_{0}Y^{n}\right)\left(\xi-\rho\left(Z_{0}\right)\xi\right)\|\leq r_{1}\|\xi-\rho(Z_{0})\xi\|,\]
as needed.
Going back to the group \(\mathrm{H}_{3}(\mathbb{Z}[\mathrm{t}_{1},...,\mathrm{t}_{\mathrm{m}}])\), we deduce the following:
**Corollary 3.6**.: _Let \(\mathbb{E}\) be a uniformly convex Banach space with a modulus of convexity \(\delta\) and \(r_{1}=r_{1}(\delta)\), \(0\leq r_{1}<1\) be the constant of Theorem 3.5. For every \(n\in\mathbb{N}\), every \(a_{0},b_{0}\in\mathbb{N}\cup\{0\}\), every \(p,q\in\mathbb{Z}[t_{1},...,t_{m}]\setminus\{0\}\), every affine isometric action \(\rho\) of \(\mathrm{H}_{3}(\mathbb{Z}[\mathrm{t}_{1},...,\mathrm{t}_{\mathrm{m}}])\) on \(\mathbb{E}\) and every \(\xi\in\mathbb{E}\) it holds that_
\[\left\|\rho\left(X_{a_{0}}(p)\left(\prod_{b=0}^{n-1}Y_{b_{0}+b}( q)\right)\right)\xi-\rho\left(X_{a_{0}}(p)\left(\prod_{b=0}^{n-1}Y_{b_{0}+b}(q) \right)\right)\rho\left(Z_{a_{0}+b_{0}}(pq)\right)\xi\right\|\leq\] \[\max\left\{r_{1}\|\xi-\rho(Z_{a_{0}+b_{0}}(pq))\xi\|,\frac{\|\xi- \rho(z(2^{a_{0}+b_{0}+n-1}pq))\xi\|}{2^{n-1}}\right\}.\]
Proof.: Fix \(\rho,n,a_{0},b_{0},p,q\) as above.
Let \(H<H_{3}(\mathbb{Z}[t_{1},...,t_{m}])\) be the subgroup \(H=\langle x(2^{a_{0}}p),y(2^{b_{0}}q)\rangle\). We note that \(H\) is isomorphic to \(H_{3}(\mathbb{Z})\) via the isomorphism \(\phi:H_{3}(\mathbb{Z})\to H\) induced by \(\phi(x(1))=x(2^{a_{0}}p),\phi(y(1))=y(2^{b_{0}}q)\). Note that (by extending \(\phi\) linearly)
\[\phi(X_{0})=X_{a_{0}}(p),\phi(Y^{n})=\prod_{b=0}^{n-1}Y_{b_{0}+b}(q).\]
Also, note that
\[\phi(z(1))=\phi([x(1),y(1)])=[x(2^{a_{0}}p),y(2^{b_{0}}q)]=z(2^{a_{0}+b_{0}}pq),\]
and thus \(\phi(Z_{0})=Z_{a_{0}+b_{0}}(pq)\).
Define a new isometric affine action \(\rho_{0}\) of \(H_{3}(\mathbb{Z})\) on \(\mathbb{E}\) by \(\rho_{0}=\rho\circ\phi\). Let \(\xi\in\mathbb{E}\), then
\[\left\|\rho\left(X_{a_{0}}(p)\left(\prod_{b=0}^{n-1}Y_{b_{0}+b}(q )\right)\right)\xi-\rho\left(X_{a_{0}}(p)\left(\prod_{b=0}^{n-1}Y_{b_{0}+b}(q) \right)\right)\rho\left(Z_{a_{0}+b_{0}}(pq)\right)\xi\right\| =\] \[\left\|\rho_{0}\left(X_{0}Y^{n}\right)\xi-\rho_{0}\left(X_{0}Y^{n} \right)\rho_{0}\left(Z_{0}\right)\xi\right\| \leq\] \[\max\left\{r_{1}\|\xi-\rho_{0}(Z_{0})\xi\|,\frac{\|\xi-\rho_{0}(z (2^{n-1}))\xi\|}{2^{n-1}}\right\} =\] \[\max\left\{r_{1}\|\xi-\rho(Z_{a_{0}+b_{0}}(pq))\xi\|,\frac{\|\xi- \rho(z(2^{n-1+a_{0}+b_{0}}pq))\xi\|}{2^{n-1}}\right\}\]
as needed.
As a result, we obtain the following:
**Lemma 3.7**.: _Let \(\mathbb{E}\) be a uniformly convex Banach space with a modulus of convexity \(\delta\) and \(r_{1}=r_{1}(\delta)\), \(0\leq r_{1}<1\) be the constant of Theorem 3.5. Also, let \(n,N\in\mathbb{N}\), \(a_{1},...,a_{N},b_{1},...,b_{N}\in\mathbb{N}\cup\{0\}\) and \(p_{1},...,p_{N},q_{1},...,q_{N}\in\mathbb{Z}[t_{1},...,t_{m}]\setminus\{0\}\). Assume there are \(d\in\mathbb{N}\cup\{0\}\) and \(p\in\mathbb{Z}[t_{1},...,t_{m}]\setminus\{0\}\) such that for every \(1\leq k\leq N\) it holds that \(a_{k}+b_{k}=d\) and \(p_{k}q_{k}=p\). For every \(1\leq k\leq N\), we define \(f_{k}\in\mathrm{Prob}_{c}(\mathrm{H}_{3}(\mathbb{Z}[\mathrm{t}_{1},...,\mathrm{t}_{\mathrm{m}}]))\) to be_
\[f_{k}=X_{a_{k}}(p_{k})\left(\prod_{b=0}^{n-1}Y_{b_{k}+b}(q_{k})\right).\]
_Then for every affine isometric action \(\rho\) of \(\mathrm{H}_{3}(\mathbb{Z}[\mathrm{t}_{1},...,\mathrm{t}_{\mathrm{m}}])\) on \(\mathbb{E}\) and every \(\xi\in\mathbb{E}\) it holds that_
\[\left\|\rho\left(\prod_{k=1}^{N}f_{k}\right)\xi-\rho\left(\prod_{k=1}^{N}f_{k} \right)\rho\left(Z_{d}(p)\right)\xi\right\|\leq\max\left\{r_{1}^{N}\|\xi-\rho( Z_{d}(p))\xi\|,\frac{\|\xi-\rho(z(2^{d+n-1}p))\xi\|}{2^{n-1}}\right\}.\]
_In particular, if \(N=n\), then_
\[\left\|\rho\left(\prod_{k=1}^{n}f_{k}\right)\xi-\rho\left(\prod_{k=1 }^{n}f_{k}\right)\rho\left(Z_{d}(p)\right)\xi\right\|\leq\max\left\{r_{1}^{n} \|\xi-\rho(Z_{d}(p))\xi\|,\frac{\|\xi-\rho(z(2^{d+n-1}p))\xi\|}{2^{n-1}}\right\} \lesssim\] \[\left(\max\left\{r_{1},\frac{1}{2}\right\}\right)^{n}\max\left\{\| \xi-\rho(z(2^{d}p))\xi\|,\|\xi-\rho(z(2^{d+n-1}p))\xi\|\right\}.\]
Proof.: Fix \(\rho\) and \(\xi\) as above. The proof of the inequality is by induction on \(N\). The case \(N=1\) was proved in Corollary 3.6.
Assume that the inequality holds for \(N\) and let \(a_{1},...,a_{N+1},b_{1},...,b_{N+1}\in\mathbb{N}\cup\{0\}\) and \(p_{1},...,p_{N+1},q_{1},...,q_{N+1}\in\mathbb{Z}[t_{1},...,t_{m}]\setminus\{0\}\) such that there are \(d\in\mathbb{N}\cup\{0\}\) and \(p\in\mathbb{Z}[t_{1},...,t_{m}]\setminus\{0\}\) such that for every \(1\leq k\leq N+1\) it holds that \(a_{k}+b_{k}=d\) and \(p_{k}q_{k}=p\). Denote
\[f=\prod_{k=2}^{N+1}f_{k}.\]
By Corollary 3.6
\[\left\|\rho\left(\prod_{k=1}^{N+1}f_{k}\right)\xi-\rho\left(\prod_{k=1}^{N+1}f_{k}\right)\rho\left(Z_{d}(p)\right)\xi\right\|\] \[=\left\|\rho\left(f_{1}f\right)\xi-\rho\left(f_{1}f\right)\rho\left(Z_{d}(p)\right)\xi\right\|\] \[=\left\|\rho\left(f_{1}\right)\rho(f)\xi-\rho\left(f_{1}\right)\rho\left(Z_{d}(p)\right)\rho(f)\xi\right\|\] \[\leq\max\left\{r_{1}\|\rho(f)\xi-\rho(Z_{d}(p))\rho(f)\xi\|,\frac{\|\rho(f)\xi-\rho(z(2^{d+n-1}p))\rho(f)\xi\|}{2^{n-1}}\right\}\] \[=\max\left\{r_{1}\|\rho(f)\xi-\rho(f)\rho(Z_{d}(p))\xi\|,\frac{\|\rho(f)\xi-\rho(f)\rho(z(2^{d+n-1}p))\xi\|}{2^{n-1}}\right\}.\]
If \(r_{1}\|\rho(f)\xi-\rho(f)\rho(Z_{d}(p))\xi\|\leq\frac{\|\rho(f)\xi-\rho(f) \rho(z(2^{d+n-1}p))\xi\|}{2^{n-1}}\), we are done by Claim 2.2:
\[\frac{\|\rho(f)\xi-\rho(f)\rho(z(2^{d+n-1}p))\xi\|}{2^{n-1}}\leq\frac{\|\xi- \rho(z(2^{d+n-1}p))\xi\|}{2^{n-1}}.\]
Assume that \(r_{1}\|\rho(f)\xi-\rho(f)\rho(Z_{d}(p))\xi\|>\frac{\|\rho(f)\xi-\rho(f)\rho(z(2 ^{d+n-1}p))\xi\|}{2^{n-1}}\), then by the induction assumption
\[r_{1}\left\|\rho\left(f\right)\xi-\rho\left(f\right)\rho\left(Z_ {d}(p)\right)\xi\right\|\leq r_{1}\max\left\{r_{1}^{N}\|\xi-\rho(Z_{d}(p))\xi \|,\frac{\|\xi-\rho(z(2^{d+n-1}p))\xi\|}{2^{n-1}}\right\}\leq\] \[\max\left\{r_{1}^{N+1}\|\xi-\rho(Z_{d}(p))\xi\|,\frac{\|\xi-\rho( z(2^{d+n-1}p))\xi\|}{2^{n-1}}\right\}\]
as needed.
**Lemma 3.8**.: _Let \(\mathbb{E}\) be a Banach space and \(d_{2}\in\mathbb{N}\) be a constant. Also, let \(n,N\in\mathbb{N},2n<d_{2}\), \(a_{1},...,a_{N},b_{1},...,b_{N}\in\mathbb{N}\cup\{0\}\) and \(p_{1},...,p_{N},q_{1},...,q_{N}\in\mathcal{B}^{d_{2}}\) such that for every \(1\leq k\leq N-1\) the following holds:_
* \(a_{k}\geq a_{k+1}\) _and_ \(b_{k}\leq b_{k+1}\)__
* \(\deg(p_{k})\geq\deg(p_{k+1})\) _and_ \(\deg(q_{k})\leq\deg(q_{k+1})\)__
* \(a_{k+1}+b_{k}\leq d_{2}-2n\) _and_ \(\deg(p_{k+1})+\deg(q_{k})\leq d_{2}\)__
* _Either_ \(a_{k}>a_{k+1}\) _or_ \(\deg(p_{k})>\deg(p_{k+1})\)__
_For every \(1\leq k\leq N\), we define \(h_{k},f_{k}\in\operatorname{Prob}_{c}(\operatorname{H}_{3}(\mathbb{Z}[ \operatorname{t}_{1},...,\operatorname{t}_{\operatorname{m}}]))\) to be_
\[h_{k}=\prod_{b=0}^{n-1}Y_{b_{k}+b}(q_{k}),\]
\[f_{k}=X_{a_{k}}(p_{k})h_{k}=X_{a_{k}}(p_{k})\left(\prod_{b=0}^{n-1}Y_{b_{k}+b}( q_{k})\right).\]
_Then for every affine isometric action \(\rho\) of \(\operatorname{H}_{3}(\mathbb{Z}[\operatorname{t}_{1},...,\operatorname{t}_{ \operatorname{m}}])\) on \(\mathbb{E}\) and every \(\xi\in\mathbb{E}\) it holds that_
\[\left\|\rho\left(\prod_{k=1}^{N}X_{a_{k}}(p_{k})\right)\rho\left( \prod_{k=1}^{N}h_{k}\right)\rho\left(\underline{Z}^{d_{2}}\right)\xi-\rho \left(\prod_{k=1}^{N}f_{k}\right)\rho\left(\underline{Z}^{d_{2}}\right)\xi \right\|\lesssim\] \[(N-1)(d_{2}+m)^{m}\left(\frac{1}{2}\right)^{n}\max_{\underline{t }\in\mathcal{B}^{d_{2}}}\|\xi-\rho(z(2^{d_{2}}\underline{t}))\xi\|.\]
_In particular, if \(N=n\), then_
\[\left\|\rho\left(\prod_{k=1}^{n}X_{a_{k}}(p_{k})\right)\rho\left( \prod_{k=1}^{n}h_{k}\right)\rho\left(\underline{Z}^{d_{2}}\right)\xi-\rho \left(\prod_{k=1}^{n}f_{k}\right)\rho\left(\underline{Z}^{d_{2}}\right)\xi \right\|\lesssim\] \[(n-1)(d_{2}+m)^{m}\left(\frac{1}{2}\right)^{n}\max_{\underline{t }\in\mathcal{B}^{d_{2}}}\|\xi-\rho(z(2^{d_{2}}\underline{t}))\xi\|\lesssim(d_ {2}+m)^{m}\left(\frac{3}{4}\right)^{n}\max_{\underline{t}\in\mathcal{B}^{d_{2} }}\|\xi-\rho(z(2^{d_{2}}\underline{t}))\xi\|.\]
Proof.: By Lemma 3.1 (and the triangle inequality), it is enough to prove that for every \(\rho\), every \(\xi\) and every \(1\leq k_{0}\leq N-1\) it holds that
\[\left\|\rho\left(\prod_{k=k_{0}+1}^{N}X_{a_{k}}(p_{k})\right)\rho \left(h_{k_{0}}\right)\rho\left(\underline{Z}^{d_{2}}\right)\xi-\rho\left(h_{k_ {0}}\right)\rho\left(\prod_{k=k_{0}+1}^{N}X_{a_{k}}(p_{k})\right)\rho\left( \underline{Z}^{d_{2}}\right)\xi\right\|\lesssim\] \[(d_{2}+m)^{m}\left(\frac{1}{2}\right)^{n}\max_{\underline{t}\in \mathcal{B}^{d_{2}}}\|\xi-\rho(z(2^{d_{2}}\underline{t}))\xi\|.\]
Denote \(f=\prod_{k=k_{0}+1}^{N}X_{a_{k}}(p_{k})\). We note that, by assumption, for every \(k\) it holds that \(a_{k}\geq a_{k+1}\) and \(\deg(p_{k})\geq\deg(p_{k+1})\), and at least one of these inequalities is strict. Thus,
\[\operatorname{supp}(f)\subseteq\{x(p):p\in\operatorname{Poly}(\deg(p_{k_{0}+1} ),a_{k_{0}+1}+1)\},\]
and
\[\operatorname{supp}(h_{k_{0}})\subseteq\{y(p):p\in\operatorname{Poly}(\deg(q_{ k_{0}}),b_{k_{0}}+n)\}.\]
By Theorem 3.3 it holds that
\[\left\|\rho\left(f\right)\rho\left(h_{k_{0}}\right)\rho\left(\underline{Z}^{d_{2}}\right)\xi-\rho\left(h_{k_{0}}\right)\rho\left(f\right)\rho\left(\underline{Z}^{d_{2}}\right)\xi\right\|\lesssim\] \[(d_{2}+m)^{m}\left(\frac{1}{2}\right)^{d_{2}-(a_{k_{0}+1}+1)-(b_{k_{0}}+n)}\max_{\underline{t}\in\mathcal{B}^{\deg(p_{k_{0}+1})+\deg(q_{k_{0}})}}\|\xi-\rho(z(2^{d_{2}}\underline{t}))\xi\|\lesssim^{a_{k_{0}+1}+b_{k_{0}}\leq d_{2}-2n}\] \[(d_{2}+m)^{m}\left(\frac{1}{2}\right)^{n}\max_{\underline{t}\in\mathcal{B}^{d_{2}}}\|\xi-\rho(z(2^{d_{2}}\underline{t}))\xi\|.\]
Combining these two lemmas yields the following result:
**Theorem 3.9**.: _Let \(d_{1},d_{2},d_{3},n\in\mathbb{N},d^{\prime}\in\mathbb{N}\cup\{0\}\) be constants and \(\underline{t}^{\prime}\in\mathbb{Z}[t_{1},...,t_{m}]\setminus\{0\}\) be a monomial. Also, let \(\mathbb{E}\) be a uniformly convex Banach space with a modulus of convexity \(\delta\). Denote \(r_{2}=r_{2}(\delta)<1\) to be_
\[r_{2}=\max\left\{r_{1}(\delta),\frac{3}{4}\right\},\]
_where \(r_{1}(\delta)\) is the constant of Theorem 3.5._
_If there are \(n\in\mathbb{N}\), \(2n<d_{2}\), \(a_{1},...,a_{n},b_{1},...,b_{n}\in\mathbb{N}\cup\{0\}\) and \(p_{1},...,p_{n},q_{1},...,q_{n}\in\mathcal{B}^{d_{2}}\) such that the following holds:_
* _For every_ \(1\leq k\leq n\)_,_ \(a_{k}+b_{k}=d^{\prime}\) _and_ \(p_{k}q_{k}=\underline{t}^{\prime}\)__
* _For every_ \(1\leq k\leq n-1\)_,_ \(d_{1}-1\geq a_{k}\geq a_{k+1}\) _and_ \(b_{k}\leq b_{k+1}\leq d_{3}-1\)__
* _For every_ \(1\leq k\leq n-1\)_,_ \(\deg(p_{k})\geq\deg(p_{k+1})\) _and_ \(\deg(q_{k})\leq\deg(q_{k+1})\)__
* _For every_ \(1\leq k\leq n-1\)_,_ \(a_{k+1}+b_{k}\leq d_{2}-2n\) _and_ \(\deg(p_{k+1})+\deg(q_{k})\leq d_{2}\)__
* _For every_ \(1\leq k\leq n-1\)_, either_ \(a_{k}>a_{k+1}\) _or_ \(\deg(p_{k})>\deg(p_{k+1})\)__
_Then for every affine isometric action \(\rho\) of \(\mathrm{H}_{3}(\mathbb{Z}[\mathrm{t}_{1},...,\mathrm{t}_{m}])\) on \(\mathbb{E}\) and every \(\xi\in\mathbb{E}\) it holds that_
\[\left\|\rho\left(\underline{X}^{d_{1}}\right)\rho\left(\underline {Z}^{d_{2}}\right)\rho\left(\underline{Y}^{d_{3}}\right)\xi-\rho\left( \underline{X}^{d_{1}}\right)\rho\left(\underline{Z}^{d_{2}}\right)\rho\left( \underline{Z}_{d^{\prime}}(\underline{t}^{\prime})\right)\rho\left( \underline{Y}^{d_{3}}\right)\xi\right\|\lesssim\] \[(d_{2}+m)^{m}r_{2}^{n}\max\left\{\max_{\underline{t}\in \mathcal{B}^{d_{2}}}\|\xi-\rho(z(2^{d_{2}}\underline{t}))\xi\|,\|\xi-\rho(z(2 ^{d^{\prime}}\underline{t}^{\prime}))\xi\|,\|\xi-\rho(z(2^{d^{\prime}+n-1} \underline{t}^{\prime}))\xi\|\right\}.\]
Proof.: Fix \(\rho\) as above. Denote
\[h_{k}=\prod_{b=0}^{n-1}Y_{b_{k}+b}(q_{k}),\]
\[f_{k}=X_{a_{k}}(p_{k})h_{k}=X_{a_{k}}(p_{k})\left(\prod_{b=0}^{n-1}Y_{b_{k}+b }(q_{k})\right).\]
By Lemma 3.1, it is enough to prove that for every \(\xi\in\mathbb{E}\),
\[\left\|\rho\left(\prod_{k=1}^{n}X_{a_{k}}(p_{k})\right)\rho\left( \underline{Z}^{d_{2}}\right)\rho\left(\prod_{k=1}^{n}h_{k}\right)\xi\right.\] \[\left.\quad-\rho\left(\prod_{k=1}^{n}X_{a_{k}}(p_{k})\right)\rho \left(\underline{Z}^{d_{2}}\right)\rho\left(Z_{d^{\prime}}(\underline{t}^{ \prime})\right)\rho\left(\prod_{k=1}^{n}h_{k}\right)\xi\right\|\lesssim\] \[\left.(d_{2}+m)^{m}r_{2}^{n}\max\left\{\max_{\underline{t}\in \mathcal{B}^{d_{2}}}\|\xi-\rho(z(2^{d_{2}}\underline{t}))\xi\|,\|\xi-\rho(z(2 ^{d^{\prime}}\underline{t}^{\prime}))\xi\|,\|\xi-\rho(z(2^{d^{\prime}+n-1} \underline{t}^{\prime}))\xi\|\right\}.\]
Fix \(\xi\in\mathbb{E}\). Then
\[\left\|\rho\left(\prod_{k=1}^{n}X_{a_{k}}(p_{k})\right)\rho\left(\underline{Z}^{d_{2}}\right)\rho\left(\prod_{k=1}^{n}h_{k}\right)\xi\right.\] \[\left.-\rho\left(\prod_{k=1}^{n}X_{a_{k}}(p_{k})\right)\rho\left(\underline{Z}^{d_{2}}\right)\rho\left(Z_{d^{\prime}}(\underline{t}^{\prime})\right)\rho\left(\prod_{k=1}^{n}h_{k}\right)\xi\right\|\lesssim^{\text{Lemma }3.8}\] \[(d_{2}+m)^{m}r_{2}^{n}\max_{\underline{t}\in\mathcal{B}^{d_{2}}}\|\xi-\rho(z(2^{d_{2}}\underline{t}))\xi\|\] \[\quad+\left\|\rho\left(\underline{Z}^{d_{2}}\right)\rho\left(\prod_{k=1}^{n}f_{k}\right)\xi-\rho\left(\underline{Z}^{d_{2}}\right)\rho\left(\prod_{k=1}^{n}f_{k}\right)\rho\left(Z_{d^{\prime}}(\underline{t}^{\prime})\right)\xi\right\|\leq^{\text{Claim }2.2}\] \[(d_{2}+m)^{m}r_{2}^{n}\max_{\underline{t}\in\mathcal{B}^{d_{2}}}\|\xi-\rho(z(2^{d_{2}}\underline{t}))\xi\|+\left\|\rho\left(\prod_{k=1}^{n}f_{k}\right)\xi-\rho\left(\prod_{k=1}^{n}f_{k}\right)\rho\left(Z_{d^{\prime}}(\underline{t}^{\prime})\right)\xi\right\|\lesssim^{\text{Lemma }3.7}\] \[(d_{2}+m)^{m}r_{2}^{n}\max\left\{\max_{\underline{t}\in\mathcal{B}^{d_{2}}}\|\xi-\rho(z(2^{d_{2}}\underline{t}))\xi\|,\|\xi-\rho(z(2^{d^{\prime}}\underline{t}^{\prime}))\xi\|,\|\xi-\rho(z(2^{d^{\prime}+n-1}\underline{t}^{\prime}))\xi\|\right\}.\]
This result has two useful instantiations:
**Corollary 3.10**.: _Let \(d_{1},d_{2},d_{3}\in\mathbb{N},d_{2}\geq 3\) be constants such that \(d_{1},d_{3}\leq d_{2},d_{1}+d_{3}\geq d_{2}+4\) and let \(\mathbb{E}\) be a uniformly convex Banach space with a modulus of convexity \(\delta\). Also, let \(r_{2}=r_{2}(\delta)\) be the constant of Theorem 3.9._
_Denote \(n=\lfloor\frac{\sqrt{d_{1}+d_{3}-d_{2}}}{2}\rfloor\). Then for every \(\underline{t}^{\prime}\in\mathcal{B}^{d_{2}+1}\), every \(d_{2}-2n\leq d^{\prime}\leq d_{2}\), every affine isometric action \(\rho\) of \(\mathrm{H}_{3}(\mathbb{Z}[\mathrm{t}_{1},...,\mathrm{t}_{\mathrm{m}}])\) on \(\mathbb{E}\) and every \(\xi\in\mathbb{E}\) it holds that_
\[\left\|\rho\left(\underline{X}^{d_{1}}\right)\rho\left(\underline {Z}^{d_{2}}\right)\rho\left(\underline{Y}^{d_{3}}\right)\xi-\rho\left( \underline{X}^{d_{1}}\right)\rho\left(\underline{Z}^{d_{2}}\right)\rho\left( \underline{Z}_{d^{\prime}}(\underline{t}^{\prime})\right)\rho\left( \underline{Y}^{d_{3}}\right)\xi\right\|\lesssim\] \[\left(d_{2}+m)^{m}r_{2}^{n}\max_{0\leq d\leq d_{2}+n-1}\max_{ \underline{t}\in\mathcal{B}^{d_{2}+1}}\|\xi-\rho(z(2^{d}\underline{t}))\xi\|.\]
Proof.: Fix \(\underline{t}^{\prime},d^{\prime},\rho\) and \(\xi\) as above.
The proof is an application of Theorem 3.9. Observe that \(2n\leq\sqrt{2d_{2}}<d_{2}\).
We introduce the following notation: recall that the monomial \(\underline{t}^{\prime}\in\mathcal{B}^{d_{2}+1}\) is of the form \(\underline{t}^{\prime}=t_{1}^{n_{1}}...t_{m}^{n_{m}}\) where \(n_{1},...,n_{m}\in\mathbb{N}\cup\{0\}\) and \(n_{1}+...+n_{m}\leq d_{2}+1\). For \(k\in\mathbb{Z}\), we denote \(\operatorname{prefix}(\underline{t}^{\prime},k)\) to be the \(k\)-th prefix of \(\underline{t}^{\prime}\):
\[\operatorname{prefix}(\underline{t}^{\prime},k)=\begin{cases}1&k\leq 0\\ \underline{t}^{\prime}&k>n_{1}+...+n_{m}\\ t_{1}^{n_{1}}...t_{i}^{n_{i}}t_{i+1}^{k-(n_{1}+...+n_{i})}&n_{1}+...+n_{i}<k \text{ and }n_{1}+...+n_{i+1}\geq k\end{cases}.\]
Similarly, for \(k\in\mathbb{Z}\), we denote \(\operatorname{suffix}(\underline{t}^{\prime},k)\) to be the \(k\)-th suffix of \(\underline{t}^{\prime}\), i.e.,
\[\operatorname{suffix}(\underline{t}^{\prime},k)=\begin{cases}1&k\leq 0\\ \underline{t}^{\prime}&k>n_{1}+...+n_{m}\\ t_{i}^{k-(n_{i+1}+...+n_{m})}t_{i+1}^{n_{i+1}}...t_{m}^{n_{m}}&n_{i+1}+...+n_{m}<k\text{ and }n_{i}+...+n_{m}\geq k\end{cases}.\]
We note that for every \(k\),
\[\operatorname{prefix}(\underline{t}^{\prime},\deg(\underline{t}^{\prime})-k )\operatorname{suffix}(\underline{t}^{\prime},k)=\underline{t}^{\prime}.\]
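For concreteness, an illustration of this notation (spelling out one instance of the definitions): for \(m=2\) and \(\underline{t}^{\prime}=t_{1}^{2}t_{2}\) one computes directly that
\[\operatorname{prefix}(\underline{t}^{\prime},2)=t_{1}^{2},\qquad\operatorname{suffix}(\underline{t}^{\prime},1)=t_{2},\]
and indeed \(\operatorname{prefix}(\underline{t}^{\prime},\deg(\underline{t}^{\prime})-1)\operatorname{suffix}(\underline{t}^{\prime},1)=t_{1}^{2}\cdot t_{2}=\underline{t}^{\prime}\).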
For \(1\leq k\leq n\), we define
\[a_{k}=d_{1}-2nk,b_{k}=d^{\prime}-a_{k},p_{k}=\operatorname{prefix}( \underline{t}^{\prime},\deg(\underline{t}^{\prime})-k),q_{k}=\operatorname{ suffix}(\underline{t}^{\prime},k).\]
We will finish the proof by verifying that for this choice, the conditions of Theorem 3.9 are fulfilled:
* It is obvious that for every \(1\leq k\leq n\), \(a_{k}+b_{k}=d^{\prime}\) and \(p_{k}q_{k}=\underline{t}^{\prime}\).
* For every \(1\leq k\leq n-1\), \(d_{1}-1\geq a_{k}>a_{k+1}\) and we note that \[a_{n} =d_{1}-2n^{2}\] \[\geq d_{1}-2\left(\frac{\sqrt{d_{1}+d_{3}-d_{2}}}{2}\right)^{2}\] \[=\frac{d_{1}}{2}+\frac{d_{2}}{2}-\frac{d_{3}}{2}\geq^{d_{3}\leq d _{2}}0,\] i.e., for every \(k\), \(a_{k}\in\mathbb{N}\cup\{0\}\). Also, for every \(1\leq k\leq n-1\), \(b_{k}<b_{k+1}\) and we note that \[b_{1}=d^{\prime}-a_{1}=d^{\prime}-d_{1}+2n\geq^{d^{\prime}>d_{2}-2n}d_{2}-d_{1} \geq 0.\] Also, \[b_{n}=d^{\prime}-a_{n}\leq d^{\prime}-(\frac{d_{1}}{2}+\frac{d_{2}}{2}- \frac{d_{3}}{2})\leq^{d^{\prime}\leq d_{2}}\frac{d_{3}}{2}+\frac{d_{2}}{2}- \frac{d_{1}}{2}=\] \[d_{3}+\frac{d_{2}-(d_{1}+d_{3})}{2}\leq^{d_{1}+d_{3}\geq d_{2}+ 4}d_{3}-2,\] i.e., for every \(1\leq k\leq n\), \(b_{k}\in\mathbb{N}\cup\{0\}\) and \(b_{k}\leq d_{3}-1\).
* It is obvious that for every \(1\leq k\leq n-1\), \(\deg(p_{k})\geq\deg(p_{k+1})\) and \(\deg(q_{k})\leq\deg(q_{k+1})\).
* For every \(1\leq k\leq n-1\), \[a_{k+1}+b_{k}=d_{1}-2n(k+1)+(d^{\prime}-(d_{1}-2nk))=d^{\prime}-2n\leq d_{2}-2n.\] Last, we need to show that \[\deg(p_{k+1})+\deg(q_{k})\leq d_{2}.\] If \(\deg(\underline{t}^{\prime})\leq n\), then \[\deg(p_{k+1})+\deg(q_{k})\leq 2\deg(\underline{t}^{\prime})\leq 2n<d_{2}.\] If \(d_{2}+1\geq\deg(\underline{t}^{\prime})>n\), then for every \(1\leq k\leq n\), \[\deg(p_{k+1})+\deg(q_{k})=\deg(\underline{t}^{\prime})-(k+1)+k=\deg(\underline{t}^{\prime})-1\leq d_{2}.\]
**Corollary 3.11**.: _Let \(d_{1},d_{2},d_{3}\in\mathbb{N},d_{2}\geq 3\) be constants such that \(d_{1},d_{3}\leq d_{2},d_{1}+d_{3}\geq d_{2}+4\) and let \(\mathbb{E}\) be a uniformly convex Banach space with a modulus of convexity \(\delta\). Also, let \(r_{2}=r_{2}(\delta)\) be the constant of Theorem 3.9._
_Denote \(n=\lfloor\frac{\sqrt{d_{1}+d_{3}-d_{2}}}{2}\rfloor\). Then for every \(\underline{t}^{\prime}\in\mathcal{S}^{d_{2}+1}\), every \(d^{\prime}\in\mathbb{N}\cup\{0\},d^{\prime}\leq d_{2}-2n\), every affine isometric action \(\rho\) of \(\mathrm{H}_{3}(\mathbb{Z}[\mathrm{t}_{1},...,\mathrm{t}_{\mathrm{m}}])\) on \(\mathbb{E}\) and every \(\xi\in\mathbb{E}\) it holds that_
\[\left\|\rho\left(\underline{X}^{d_{1}}\right)\rho\left( \underline{Z}^{d_{2}}\right)\rho\left(\underline{Y}^{d_{3}}\right)\xi-\rho \left(\underline{X}^{d_{1}}\right)\rho\left(\underline{Z}^{d_{2}}\right) \rho\left(Z_{d^{\prime}}(\underline{t}^{\prime})\right)\rho\left(\underline{Y }^{d_{3}}\right)\xi\right\|\lesssim\] \[(d_{2}+m)^{m}r_{2}^{n}\max_{0\leq d\leq d_{2}+n-1}\max_{ \underline{t}\in\mathcal{B}^{d_{2}+1}}\lVert\xi-\rho(z(2^{d}\underline{t})) \xi\rVert.\]
Proof.: Fix \(\rho,\xi,\underline{t}^{\prime},d^{\prime}\) as above.
The proof is an application of Theorem 3.9. Observe that \(2n\leq\sqrt{2d_{2}}<d_{2}\).
By the assumption that \(d_{1}+d_{3}\geq d_{2}+4\) there are \(0\leq a_{0}\leq d_{1}-1\) and \(0\leq b_{0}\leq d_{3}-1\) such that \(a_{0}+b_{0}=d^{\prime}\). Using the notation of \(\mathrm{prefix},\mathrm{suffix}\) as in the proof of the previous corollary, we define
\[a_{k}=a_{0},b_{k}=b_{0},p_{k}=\mathrm{prefix}(\underline{t}^{\prime},d_{2}+1- k),q_{k}=\mathrm{suffix}(\underline{t}^{\prime},k).\]
Checking that the conditions of Theorem 3.9 are fulfilled is straightforward and we leave it to the reader.
Combining these two corollaries leads to the following:
**Theorem 3.12**.: _Let \(d_{1},d_{2},d_{3}\in\mathbb{N},d_{2}\geq 3\) be constants such that \(d_{1},d_{3}\leq d_{2},d_{1}+d_{3}\geq d_{2}+4\) and let \(\mathbb{E}\) be a uniformly convex Banach space with a modulus of convexity \(\delta\). Also, let \(r_{2}=r_{2}(\delta)\) be the constant of Theorem 3.9._
_Denote \(n=\lfloor\frac{\sqrt{d_{1}+d_{3}-d_{2}}}{2}\rfloor\). Then for every affine isometric action \(\rho\) of \(\mathrm{H}_{3}(\mathbb{Z}[\mathrm{t}_{1},...,\mathrm{t}_{\mathrm{m}}])\) on \(\mathbb{E}\) and every \(\xi\in\mathbb{E}\) it holds that_
\[\left\|\rho\left(\underline{X}^{d_{1}}\right)\rho\left(\underline {Z}^{d_{2}}\right)\rho\left(\underline{Y}^{d_{3}}\right)\xi-\rho\left( \underline{X}^{d_{1}}\right)\rho\left(\underline{Z}^{d_{2}+1}\right)\rho \left(\underline{Y}^{d_{3}}\right)\xi\right\|\lesssim\] \[(d_{2}+m)^{2m}r_{2}^{n}\max_{0\leq d\leq d_{2}+n-1}\max_{ \underline{t}\in\mathcal{B}^{d_{2}+1}}\lVert\xi-\rho(z(2^{d}\underline{t})) \xi\rVert.\]
Proof.: Fix \(\rho,\xi\) as above. We note that
\[\underline{Z}^{d_{2}+1}=\underline{Z}^{d_{2}}\left(\prod_{\underline{t^{\prime}} \in\mathcal{B}^{d_{2}}}Z_{d_{2}}(\underline{t^{\prime}})\right)\left(\prod_{ \underline{t^{\prime\prime}}\in\mathcal{S}^{d_{2}+1}}\prod_{d^{\prime}=0}^{d_{ 2}}Z_{d^{\prime}}(\underline{t^{\prime\prime}})\right).\]
Denote
\[Z_{d_{2}}(\mathcal{B}^{d_{2}})=\left(\prod_{\underline{t^{\prime}}\in\mathcal{ B}^{d_{2}}}Z_{d_{2}}(\underline{t^{\prime}})\right),\]
\[Z^{d_{2}+1}(\mathcal{S}^{d_{2}+1})=\left(\prod_{\underline{t^{\prime\prime}} \in\mathcal{S}^{d_{2}+1}}\prod_{d^{\prime}=0}^{d_{2}}Z_{d^{\prime}}(\underline {t^{\prime\prime}})\right).\]
Thus,
\[\begin{split}&\left\|\rho\left(\underline{X}^{d_{1}}\right) \rho\left(\underline{Z}^{d_{2}}\right)\rho\left(\underline{Y}^{d_{3}}\right) \xi-\rho\left(\underline{X}^{d_{1}}\right)\rho\left(\underline{Z}^{d_{2}+1} \right)\rho\left(\underline{Y}^{d_{3}}\right)\xi\right\|\\ &\leq\left\|\rho\left(\underline{X}^{d_{1}}\right)\rho\left( \underline{Z}^{d_{2}}\right)\rho\left(\underline{Y}^{d_{3}}\right)\xi-\rho \left(\underline{X}^{d_{1}}\right)\rho\left(\underline{Z}^{d_{2}}Z_{d_{2}}( \mathcal{B}^{d_{2}})\right)\rho\left(\underline{Y}^{d_{3}}\right)\xi\right\| +\\ &\left\|\rho\left(\underline{X}^{d_{1}}\right)\rho\left(\underline{ Z}^{d_{2}}Z_{d_{2}}(\mathcal{B}^{d_{2}})\right)\rho\left(\underline{Y}^{d_{3}} \right)\xi-\rho\left(\underline{X}^{d_{1}}\right)\rho\left(\underline{Z}^{d_{ 2}}Z_{d_{2}}(\mathcal{B}^{d_{2}})Z^{d_{2}+1}(\mathcal{S}^{d_{2}+1})\right) \rho\left(\underline{Y}^{d_{3}}\right)\xi\right\|\\ &=\left\|\rho\left(\underline{X}^{d_{1}}\right)\rho\left( \underline{Z}^{d_{2}}\right)\rho\left(\underline{Y}^{d_{3}}\right)\xi-\rho \left(\underline{X}^{d_{1}}\right)\rho\left(\underline{Z}^{d_{2}}Z_{d_{2}}( \mathcal{B}^{d_{2}})\right)\rho\left(\underline{Y}^{d_{3}}\right)\xi\right\| +\\ &\left\|\rho\left(Z_{d_{2}}(\mathcal{B}^{d_{2}})\right)\rho\left( \underline{X}^{d_{1}}\right)\rho\left(\underline{Z}^{d_{2}}\right)\rho\left( \underline{Y}^{d_{3}}\right)\xi\right.\\ &\hskip 113.811024pt-\rho\left(Z_{d_{2}}(\mathcal{B}^{d_{2}}) \right)\rho\left(\underline{X}^{d_{1}}\right)\rho\left(\underline{Z}^{d_{2}}Z ^{d_{2}+1}(\mathcal{S}^{d_{2}+1})\right)\rho\left(\underline{Y}^{d_{3}}\right) \xi\right\|\\ \leq^{\text{Claim }\ref{claim:2.2}}\left\|\rho\left( \underline{X}^{d_{1}}\right)\rho\left(\underline{Z}^{d_{2}}\right)\rho\left( \underline{Y}^{d_{3}}\right)\xi-\rho\left(\underline{X}^{d_{1}}\right)\rho \left(\underline{Z}^{d_{2}}Z_{d_{2}}(\mathcal{B}^{d_{2}})\right)\rho\left( \underline{Y}^{d_{3}}\right)\xi\right\|+\\ &\left\|\rho\left(\underline{X}^{d_{1}}\right)\rho\left(\underline{ Z}^{d_{2}}\right)\rho\left(\underline{Y}^{d_{3}}\right)\xi-\rho\left( \underline{X}^{d_{1}}\right)\rho\left(\underline{Z}^{d_{2}}Z^{d_{2}+1}( \mathcal{S}^{d_{2}+1})\right)\rho\left(\underline{Y}^{d_{3}}\right)\xi\right\|.\end{split}\]
It follows that it is enough to prove that
\[\begin{split}&\left\|\rho\left(\underline{X}^{d_{1}}\right)\rho \left(\underline{Z}^{d_{2}}\right)\rho\left(\underline{Y}^{d_{3}}\right)\xi- \rho\left(\underline{X}^{d_{1}}\right)\rho\left(\underline{Z}^{d_{2}}Z_{d_{2}} (\mathcal{B}^{d_{2}})\right)\rho\left(\underline{Y}^{d_{3}}\right)\xi\right\| \lesssim\\ &(d_{2}+m)^{2m+1}r_{2}^{n}\max_{0\leq d\leq d_{2}+n-1}\max_{ \underline{t}\in\mathcal{B}^{d_{2}+1}}\!\!\|\xi-\rho(z(2^{d}\underline{t})) \xi\|\end{split}\]
and
\[\begin{split}&\left\|\rho\left(\underline{X}^{d_{1}}\right)\rho \left(\underline{Z}^{d_{2}}\right)\rho\left(\underline{Y}^{d_{3}}\right)\xi- \rho\left(\underline{X}^{d_{1}}\right)\rho\left(\underline{Z}^{d_{2}}Z^{d_{2}+ 1}(\mathcal{S}^{d_{2}+1})\right)\rho\left(\underline{Y}^{d_{3}}\right)\xi \right\|\lesssim\\ &(d_{2}+m)^{2m+1}r_{2}^{n}\max_{0\leq d\leq d_{2}+n-1}\max_{ \underline{t}\in\mathcal{B}^{d_{2}+1}}\!\!\|\xi-\rho(z(2^{d}\underline{t})) \xi\|.\end{split}\]
We observe that \(|\mathcal{B}^{d_{2}}|=\binom{d_{2}+m}{d_{2}}\). Fix an order on \(\mathcal{B}^{d_{2}}\):
\[\mathcal{B}^{d_{2}}=\{\underline{t}^{\prime}(1),\underline{t}^{\prime}(2),...\}.\]
Thus
\[\left\|\rho\left(\underline{X}^{d_{1}}\right)\rho\left(\underline{Z}^{d_{2}}\right)\rho\left(\underline{Y}^{d_{3}}\right)\xi-\rho\left(\underline{X}^{d_{1}}\right)\rho\left(\underline{Z}^{d_{2}}Z_{d_{2}}(\mathcal{B}^{d_{2}})\right)\rho\left(\underline{Y}^{d_{3}}\right)\xi\right\|\] \[\leq\sum_{k=0}^{|\mathcal{B}^{d_{2}}|-1}\left\|\rho\left(\underline{X}^{d_{1}}\right)\rho\left(\underline{Z}^{d_{2}}\left(\prod_{i=1}^{k}Z_{d_{2}}(\underline{t}^{\prime}(i))\right)\right)\rho\left(\underline{Y}^{d_{3}}\right)\xi\!-\!\rho\left(\underline{X}^{d_{1}}\right)\rho\left(\underline{Z}^{d_{2}}\left(\prod_{i=1}^{k+1}Z_{d_{2}}(\underline{t}^{\prime}(i))\right)\right)\rho\left(\underline{Y}^{d_{3}}\right)\xi\right\|\] \[=\sum_{k=0}^{|\mathcal{B}^{d_{2}}|-1}\left\|\rho\left(\prod_{i=1}^{k}Z_{d_{2}}(\underline{t}^{\prime}(i))\right)\rho\left(\underline{X}^{d_{1}}\right)\rho\left(\underline{Z}^{d_{2}}\right)\rho\left(\underline{Y}^{d_{3}}\right)\xi\right.\] \[\left.-\rho\left(\prod_{i=1}^{k}Z_{d_{2}}(\underline{t}^{\prime}(i))\right)\rho\left(\underline{X}^{d_{1}}\right)\rho\left(\underline{Z}^{d_{2}}Z_{d_{2}}(\underline{t}^{\prime}(k+1))\right)\rho\left(\underline{Y}^{d_{3}}\right)\xi\right\|\] \[\leq^{\text{Claim }2.2}\sum_{k=0}^{|\mathcal{B}^{d_{2}}|-1}\left\|\rho\left(\underline{X}^{d_{1}}\right)\rho\left(\underline{Z}^{d_{2}}\right)\rho\left(\underline{Y}^{d_{3}}\right)\xi\!-\!\rho\left(\underline{X}^{d_{1}}\right)\rho\left(\underline{Z}^{d_{2}}Z_{d_{2}}(\underline{t}^{\prime}(k\!+\!1))\right)\rho\left(\underline{Y}^{d_{3}}\right)\xi\right\|\lesssim^{\text{Corollary }3.10}\] \[\sum_{k=0}^{|\mathcal{B}^{d_{2}}|-1}(d_{2}+m)^{m}r_{2}^{n}\max_{0\leq d\leq d_{2}+n-1}\max_{\underline{t}\in\mathcal{B}^{d_{2}+1}}\|\xi-\rho(z(2^{d}\underline{t}))\xi\|\lesssim\] \[(d_{2}+m)^{2m}r_{2}^{n}\max_{0\leq d\leq d_{2}+n-1}\max_{\underline{t}\in\mathcal{B}^{d_{2}+1}}\|\xi-\rho(z(2^{d}\underline{t}))\xi\|,\]
as needed.
The proof of the second inequality is similar and we allow ourselves to omit some details.
We recall that \(|\mathcal{S}^{d_{2}+1}|\!=\binom{d_{2}+m}{d_{2}+1}\) and fix an order on \(\mathcal{S}^{d_{2}+1}\):
\[\mathcal{S}^{d_{2}+1}=\{\underline{t}^{\prime\prime}(1),\underline{t}^{\prime \prime}(2),...\}.\]
Thus,
\[\left\|\rho\left(\underline{X}^{d_{1}}\right)\rho\left(\underline{Z}^{d_{2}}\right)\rho\left(\underline{Y}^{d_{3}}\right)\xi-\rho\left(\underline{X}^{d_{1}}\right)\rho\left(\underline{Z}^{d_{2}}Z^{d_{2}+1}(\mathcal{S}^{d_{2}+1})\right)\rho\left(\underline{Y}^{d_{3}}\right)\xi\right\|\] \[\leq\sum_{k=0}^{|\mathcal{S}^{d_{2}+1}|-1}\sum_{d^{\prime}=0}^{d_{2}}\left\|\rho\left(\underline{X}^{d_{1}}\right)\rho\left(\underline{Z}^{d_{2}}\right)\rho\left(\underline{Y}^{d_{3}}\right)\xi\!-\!\rho\left(\underline{X}^{d_{1}}\right)\rho\left(\underline{Z}^{d_{2}}Z_{d^{\prime}}(\underline{t}^{\prime\prime}(k\!+\!1))\right)\rho\left(\underline{Y}^{d_{3}}\right)\xi\right\|\] \[\lesssim(d_{2}+m)^{2m}r_{2}^{n}\max_{0\leq d\leq d_{2}+n-1}\max_{\underline{t}\in\mathcal{B}^{d_{2}+1}}\|\xi-\rho(z(2^{d}\underline{t}))\xi\|,\]
as needed.
As a consequence, we can bound (2) and (3):
**Theorem 3.13**.: _Let \(d_{1},d_{2},d_{3}\in\mathbb{N},d_{2}\geq 16\) be constants such that \(\min\{d_{1},d_{3}\}\geq 16\), \(\max\{d_{1},d_{3}\}\geq d_{2}\) and let \(\mathbb{E}\) be a uniformly convex Banach space with a modulus of convexity \(\delta\). Also, let \(r=r(\delta)=\sqrt[4]{r_{2}(\delta)}<1\) where \(r_{2}(\delta)\) is the constant of Theorem 3.9._
_For every affine isometric action \(\rho\) of \(\mathrm{H}_{3}(\mathbb{Z}[\mathrm{t}_{1},...,\mathrm{t}_{\mathrm{m}}])\) on \(\mathbb{E}\) and every \(\xi\in\mathbb{E}\) it holds that_
\[\left\|\rho\left(\underline{X}^{d_{1}}\right)\rho\left(\underline{Z }^{d_{2}}\right)\rho\left(\underline{Y}^{d_{3}}\right)\xi-\rho\left( \underline{X}^{d_{1}}\right)\rho\left(\underline{Z}^{d_{2}+1}\right)\rho\left( \underline{Y}^{d_{3}}\right)\xi\right\|\lesssim\] \[(d_{2}+m)^{2m}r^{\sqrt{\min\{d_{1},d_{3},d_{2}\}}}\max_{0\leq d \leq 2d_{2}}\max_{\underline{t}\in\mathcal{B}^{d_{2}+1}}\lVert\xi-\rho(z(2^{d} \underline{t}))\xi\rVert,\]
_and_
\[\left\|\rho\left(\underline{Y}^{d_{1}}\right)\rho\left(\underline{Z }^{d_{2}}\right)\rho\left(\underline{X}^{d_{3}}\right)\xi-\rho\left( \underline{Y}^{d_{1}}\right)\rho\left(\underline{Z}^{d_{2}+1}\right)\rho\left( \underline{X}^{d_{3}}\right)\xi\right\|\lesssim\] \[(d_{2}+m)^{2m}r^{\sqrt{\min\{d_{1},d_{3},d_{2}\}}}\max_{0\leq d \leq 2d_{2}}\max_{\underline{t}\in\mathcal{B}^{d_{2}+1}}\lVert\xi-\rho(z(2^{d} \underline{t}))\xi\rVert.\]
Proof.: We will start by proving the first inequality. Fix \(\rho\) as above.
We denote \(d_{1}^{\prime}=\min\{d_{1},d_{2}\},d_{3}^{\prime}=\min\{d_{3},d_{2}\}\). By Lemma 3.1, it is enough to prove that for every \(\xi\in\mathbb{E}\) it holds that
\[\left\|\rho\left(\underline{X}^{d_{1}^{\prime}}\right)\rho\left( \underline{Z}^{d_{2}}\right)\rho\left(\underline{Y}^{d_{3}^{\prime}}\right) \xi-\rho\left(\underline{X}^{d_{1}^{\prime}}\right)\rho\left(\underline{Z}^{d _{2}+1}\right)\rho\left(\underline{Y}^{d_{3}^{\prime}}\right)\xi\right\|\lesssim\] \[(d_{2}+m)^{2m}r^{\sqrt{\min\{d_{1},d_{3},d_{2}\}}}\max_{0\leq d \leq 2d_{2}}\max_{\underline{t}\in\mathcal{B}^{d_{2}+1}}\lVert\xi-\rho(z(2^{d} \underline{t}))\xi\rVert.\]
Note that \(d_{1}^{\prime}+d_{3}^{\prime}\geq d_{2}+\min\{d_{1},d_{3},d_{2}\}\geq d_{2}+4\) and thus we can apply Theorem 3.12. Namely, we denote \(n=\lfloor\frac{\sqrt{d_{1}^{\prime}+d_{3}^{\prime}-d_{2}}}{2}\rfloor\) and note that \(n\leq d_{2}\) and that
\[n\geq\lfloor\frac{\sqrt{\min\{d_{1},d_{3},d_{2}\}}}{2}\rfloor\geq^{\min\{d_{1 },d_{3},d_{2}\}\geq 16}\frac{\sqrt{\min\{d_{1},d_{3},d_{2}\}}}{4}.\]
It follows from Theorem 3.12 that for every \(\xi\in\mathbb{E}\) it holds that
\[\left\|\rho\left(\underline{X}^{d_{1}^{\prime}}\right)\rho\left( \underline{Z}^{d_{2}}\right)\rho\left(\underline{Y}^{d_{3}^{\prime}}\right) \xi-\rho\left(\underline{X}^{d_{1}^{\prime}}\right)\rho\left(\underline{Z}^{ d_{2}+1}\right)\rho\left(\underline{Y}^{d_{3}^{\prime}}\right)\xi\right\|\lesssim\] \[(d_{2}+m)^{2m}r_{2}^{n}\max_{0\leq d\leq d_{2}+n-1}\max_{ \underline{t}\in\mathcal{B}^{d_{2}+1}}\lVert\xi-\rho(z(2^{d}\underline{t})) \xi\rVert\leq^{\frac{\sqrt{\min\{d_{1},d_{3},d_{2}\}}}{4}\leq n\leq d_{2}}\] \[(d_{2}+m)^{2m}r_{2}^{\frac{\sqrt{\min\{d_{1},d_{3},d_{2}\}}}{4}} \max_{0\leq d\leq 2d_{2}}\max_{\underline{t}\in\mathcal{B}^{d_{2}+1}}\lVert\xi- \rho(z(2^{d}\underline{t}))\xi\rVert=\] \[(d_{2}+m)^{2m}r^{\sqrt{\min\{d_{1},d_{3},d_{2}\}}}\max_{0\leq d \leq 2d_{2}}\max_{\underline{t}\in\mathcal{B}^{d_{2}+1}}\lVert\xi-\rho(z(2^{d} \underline{t}))\xi\rVert.\]
Next, we will show that the second inequality stated in the theorem follows from the first one. Define \(\phi:\mathrm{H}_{3}(\mathbb{Z}[\mathrm{t}_{1},...,\mathrm{t}_{\mathrm{m}}]) \rightarrow\mathrm{H}_{3}(\mathbb{Z}[\mathrm{t}_{1},...,\mathrm{t}_{\mathrm{m}}])\) to be the isomorphism induced by
\[\phi(y(p))=x(p),\phi(x(p))=y(-p),\forall p\in\mathbb{Z}[t_{1},...,t_{m}].\]
We note that for every \(p\in\mathbb{Z}[t_{1},...,t_{m}]\) it follows that \(\phi(z(p))=z(p)\). Fix \(\rho\) and define \(\rho_{0}\) to be the action \(\rho\circ\phi\). By the inequality already proven, it is enough to prove that
for every \(\xi\in\mathbb{E}\) it holds that
\[\left\|\rho\left(\underline{Y}^{d_{1}}\right)\rho\left(\underline{Z} ^{d_{2}}\right)\rho\left(\underline{X}^{d_{3}}\right)\xi-\rho\left(\underline{Y }^{d_{1}}\right)\rho\left(\underline{Z}^{d_{2}+1}\right)\rho\left(\underline{X} ^{d_{3}}\right)\xi\right\|\leq\] \[\left\|\rho_{0}\left(\underline{X}^{d_{1}}\right)\rho_{0}\left( \underline{Z}^{d_{2}}\right)\rho_{0}\left(\underline{Y}^{d_{3}}\right)\xi- \rho_{0}\left(\underline{X}^{d_{1}}\right)\rho_{0}\left(\underline{Z}^{d_{2}+1 }\right)\rho_{0}\left(\underline{Y}^{d_{3}}\right)\xi\right\|.\]
We note that
\[\rho_{0}(\underline{Y}^{d_{3}})=\rho(\underline{X}^{d_{3}}),\rho_{0}( \underline{Z}^{d_{2}})=\rho(\underline{Z}^{d_{2}}),\rho_{0}(\underline{Z}^{d _{2}+1})=\rho(\underline{Z}^{d_{2}+1}).\]
Also, for every \(\underline{t}\in\mathcal{B}^{d_{1}}\)
\[\rho_{0}(x(-(2^{d_{1}}-1)\underline{t}))\rho_{0}(X^{d_{1}}(\underline{t}))=\rho(y((2^{d_{1}}-1)\underline{t}))\left(\frac{1}{2^{d_{1}}}\sum_{a=0}^{2^{d_{1}}-1}\rho(y(-a\underline{t}))\right)=\rho(Y^{d_{1}}(\underline{t})).\]
Thus,
\[\left(\prod_{\underline{t}\in\mathcal{B}^{d_{1}}}\rho_{0}(x(-(2^{d_{1}}-1) \underline{t}))\right)\rho_{0}(\underline{X}^{d_{1}})=\rho(\underline{Y}^{d_ {1}}).\]
It follows that for every \(\xi\in\mathbb{E}\) it holds that
\[\left\|\rho\left(\underline{Y}^{d_{1}}\right)\rho\left(\underline{Z}^{d_{2}}\right)\rho\left(\underline{X}^{d_{3}}\right)\xi-\rho\left(\underline{Y}^{d_{1}}\right)\rho\left(\underline{Z}^{d_{2}+1}\right)\rho\left(\underline{X}^{d_{3}}\right)\xi\right\|\] \[=\left\|\left(\prod_{\underline{t}\in\mathcal{B}^{d_{1}}}\rho_{0}(x(-(2^{d_{1}}-1)\underline{t}))\right)\rho_{0}\left(\underline{X}^{d_{1}}\right)\rho_{0}\left(\underline{Z}^{d_{2}}\right)\rho_{0}\left(\underline{Y}^{d_{3}}\right)\xi\right.\] \[\left.\qquad-\left(\prod_{\underline{t}\in\mathcal{B}^{d_{1}}}\rho_{0}(x(-(2^{d_{1}}-1)\underline{t}))\right)\rho_{0}\left(\underline{X}^{d_{1}}\right)\rho_{0}\left(\underline{Z}^{d_{2}+1}\right)\rho_{0}\left(\underline{Y}^{d_{3}}\right)\xi\right\|\] \[\leq^{\text{Claim }2.2}\left\|\rho_{0}\left(\underline{X}^{d_{1}}\right)\rho_{0}\left(\underline{Z}^{d_{2}}\right)\rho_{0}\left(\underline{Y}^{d_{3}}\right)\xi-\rho_{0}\left(\underline{X}^{d_{1}}\right)\rho_{0}\left(\underline{Z}^{d_{2}+1}\right)\rho_{0}\left(\underline{Y}^{d_{3}}\right)\xi\right\|,\]
as needed.
## 4. Word norm growth in the \(A_{2}\) Steinberg group
Let \(\operatorname{St}_{A_{2}}(\mathbb{Z}[t_{1},...,t_{m}])\) be the Steinberg group defined above. We note that \(\operatorname{St}_{A_{2}}(\mathbb{Z}[t_{1},...,t_{m}])\) is finitely generated and in particular the set
\[S=\bigcup_{1\leq i,j\leq 3,i\neq j}\{x_{i,j}(\pm 1),x_{i,j}(\pm t_{1}),...,x_{i, j}(\pm t_{m})\},\]
is a finite generating set.
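For concreteness, \(S\) contains \(2(m+1)\) elements from each of the \(6\) root subgroups, so \(|S|=12(m+1)\); e.g., for \(m=1\) the set \(S\) has \(24\) elements.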
Below, we will need the following result regarding the growth rate of the word norm in \(\operatorname{St}_{A_{2}}(\mathbb{Z}[t_{1},...,t_{m}])\):
**Proposition 4.1**.: _Let \(a\in\mathbb{Z}\setminus\{0\},d\in\mathbb{N}\cup\{0\}\) be constants and \(\underline{t}\in\mathbb{Z}[t_{1},...,t_{m}]\) be a monomial. Also, let_
\[S=\bigcup_{1\leq i,j\leq 3,i\neq j}\{x_{i,j}(\pm 1),x_{i,j}(\pm t_{1}),...,x_{i,j}( \pm t_{m})\}.\]
_For every \(1\leq i,j\leq 3,i\neq j\), it holds that_
\[|x_{i,j}(2^{d}\underline{t})|_{S}\lesssim(1+(d+\deg(\underline{t}))^{2})\]
_and_
\[|x_{i,j}(a\underline{t})|_{S}\lesssim(1+\log^{3}(|a|)+(\deg(\underline{t}))^{2})\]
_where \(\log=\log_{2}\)._
Proof.: We will start by showing that for every \(n\in\mathbb{N}\) and every \(p_{1},....,p_{n}\in\mathbb{Z}[t_{1},...,t_{m}]\) it holds that
\[\max_{1\leq i,j\leq 3,i\neq j}\left|x_{i,j}\left(\prod_{k=1}^{n}p_{k}\right) \right|_{S}\leq 4n^{2}\left(\max_{1\leq i,j\leq 3,i\neq j,1\leq k\leq n} |x_{i,j}(p_{k})|_{S}\right). \tag{4}\]
We note that for every \(p_{1},p_{2}\in\mathbb{Z}[t_{1},...,t_{m}]\), it holds that
\[|x_{i,j}(p_{1}p_{2})|_{S}=|[x_{i,k}(p_{1}),x_{k,j}(p_{2})]|_{S}\leq 2(|x_{i,k}( p_{1})|_{S}+|x_{k,j}(p_{2})|_{S}).\]
Thus,
\[\max_{1\leq i,j\leq 3,i\neq j}|x_{i,j}(p_{1}p_{2})|_{S}\leq 2\left(\max_{1\leq i,j\leq 3,i\neq j}|x_{i,j}(p_{1})|_{S}+\max_{1\leq i,j\leq 3,i\neq j}|x_{i,j}(p_{2})|_{ S}\right). \tag{5}\]
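For instance, applying (5) twice to a product of four polynomials grouped as \((p_{1}p_{2})(p_{3}p_{4})\) gives
\[\max_{1\leq i,j\leq 3,i\neq j}|x_{i,j}(p_{1}p_{2}p_{3}p_{4})|_{S}\leq 4\left(\max_{1\leq i,j\leq 3,i\neq j}|x_{i,j}(p_{1})|_{S}+...+\max_{1\leq i,j\leq 3,i\neq j}|x_{i,j}(p_{4})|_{S}\right),\]
which is the case \(N=2\) of (6) below; it is this balanced (dyadic) grouping of the factors that yields the quadratic bound (4).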
Using (5) it follows that for every \(N\in\mathbb{N}\) and every \(p_{1},...,p_{2^{N}}\) it holds that
\[\max_{1\leq i,j\leq 3,i\neq j}\left|x_{i,j}\left(\prod_{k=1}^{2^{N}}p_{ k}\right)\right|_{S}\leq\] \[2\max_{1\leq i,j\leq 3,i\neq j}\left|x_{i,j}\left(\prod_{k=1}^{2^{N -1}}p_{k}\right)\right|_{S}+2\max_{1\leq i,j\leq 3,i\neq j}\left|x_{i,j}\left( \prod_{k=2^{N-1}+1}^{2^{N}}p_{k}\right)\right|_{S},\]
and by induction on \(N\) it follows that
\[\max_{1\leq i,j\leq 3,i\neq j}\left|x_{i,j}\left(\prod_{k=1}^{2^{N}}p_{ k}\right)\right|_{S}\leq\sum_{k=1}^{2^{N}}2^{N}\max_{1\leq i,j\leq 3,i\neq j} \left|x_{i,j}\left(p_{k}\right)\right|_{S}\leq 4^{N}\max_{1\leq i,j\leq 3,i\neq j,1 \leq k\leq n}\left|x_{i,j}\left(p_{k}\right)\right|_{S}. \tag{6}\]
We will show by induction on \(N\) that for every \(1\leq n<2^{N}\) and every \(p_{1},...,p_{n}\in\mathbb{Z}[t_{1},...,t_{m}]\) it holds that
\[\max_{1\leq i,j\leq 3,i\neq j}\left|x_{i,j}\left(\prod_{k=1}^{n}p_{k}\right) \right|_{S}\leq(2^{2N}-2^{N})\left(\max_{1\leq i,j\leq 3,i\neq j,1\leq k\leq n} |x_{i,j}(p_{k})|_{S}\right).\]
If \(N=1\), then \(n=1\) and \(2^{2N}-2^{N}=2^{2}-2^{1}=2\) and the inequality holds. Next, we will assume that the inequality holds for \(N\) and prove it for \(N+1\). Let \(1\leq n<2^{N+1}\) and \(p_{1},...,p_{n}\in\mathbb{Z}[t_{1},...,t_{m}]\). If \(n<2^{N}\), we are done by the induction assumption. Also, if \(n=2^{N}\), then we are done by (6) and the fact that \(4^{N}\leq 2^{2N+2}-2^{N+1}\).
Thus, we can assume that \(2^{N}+1\leq n<2^{N+1}\). Denote \(n^{\prime}=n-2^{N}\). Then \(1\leq n^{\prime}<2^{N}\) and by (5) it holds that
\[\max_{1\leq i,j\leq 3,i\neq j}\left|x_{i,j}\left(\prod_{k=1}^{n}p_{k }\right)\right|_{S}\leq\] \[2\max_{1\leq i,j\leq 3,i\neq j}\left|x_{i,j}\left(\prod_{k=1}^{2^{N }}p_{k}\right)\right|_{S}+2\max_{1\leq i,j\leq 3,i\neq j}\left|x_{i,j}\left( \prod_{k=2^{N}+1}^{2^{N}+n^{\prime}}p_{k}\right)\right|_{S}\leq\] \[\left(2\cdot 4^{N}+2(2^{2N}-2^{N})\right)\left(\max_{1\leq i,j\leq 3,i\neq j,1\leq k\leq n}\left|x_{i,j}(p_{k})\right|_{S}\right)=\] \[(2^{2N+2}-2^{N+1})\left(\max_{1\leq i,j\leq 3,i\neq j,1\leq k \leq n}\left|x_{i,j}(p_{k})\right|_{S}\right),\]
as needed.
It follows that for every \(N\) and every \(2^{N-1}\leq n<2^{N}\) it holds that
\[\max_{1\leq i,j\leq 3,i\neq j}\left|x_{i,j}\left(\prod_{k=1}^{n}p_{k}\right) \right|_{S}\leq 2^{2N}\left(\max_{1\leq i,j\leq 3,i\neq j,1\leq k\leq n} \left|x_{i,j}(p_{k})\right|_{S}\right)\leq 4n^{2}\left(\max_{1\leq i,j\leq 3,i\neq j,1 \leq k\leq n}\left|x_{i,j}(p_{k})\right|_{S}\right)\]
and thus (4) is proven.
As a consequence of (4), we deduce that for a monomial \(\underline{t}=t_{1}^{n_{1}}...t_{m}^{n_{m}}\), if \(d+n_{1}+...+n_{m}>0\), then
\[\max_{1\leq i,j\leq 3,i\neq j}\left|x_{i,j}(2^{d}t_{1}^{n_{1}}...t_{m}^{n_{m}})\right|_{S}\leq 8(d+n_{1}+...+n_{m})^{2}=8(d+\deg(\underline{t}))^{2},\]
which proves the first inequality stated in the Proposition.
In order to prove the second inequality stated in the Proposition, it is enough to show that
\[\max_{1\leq i,j\leq 3,i\neq j}\left|x_{i,j}(a)\right|_{S}\leq 1+8\log^{3}(|a|),\]
for every \(a\in\mathbb{Z}\setminus\{0\}\). Without loss of generality, we can assume that \(a\in\mathbb{N}\). Fix \(a\in\mathbb{N}\) and \(1\leq i,j\leq 3,i\neq j\). We observe that by (4) it holds for every \(l\in\mathbb{N}\) that
\[|x_{i,j}(2^{l})|_{S}\leq 4l^{2}\left(\max_{1\leq i^{\prime},j^{\prime}\leq 3,i^ {\prime}\neq j^{\prime}}\left|x_{i^{\prime},j^{\prime}}(2)\right|_{S}\right) \leq 8l^{2}.\]
Denote \(n=\lfloor\log(a)\rfloor\). Then \(2^{n}\leq a<2^{n+1}\) and there are \(a_{0},...,a_{n}\in\{0,1\}\) such that \(a=\sum_{l}a_{l}2^{l}\). Thus
\[|x_{i,j}(a)|_{S}\leq\sum_{l=0}^{n}|x_{i,j}(a_{l}2^{l})|_{S}\leq a_{0}+\sum_{l= 1}^{n}8a_{l}l^{2}\leq 1+8\sum_{l=1}^{n}l^{2}\leq 1+8n^{3}\leq 1+8\log^{3}(a),\]
as needed. \(\Box\)
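For instance, for \(a=11=2^{3}+2^{1}+2^{0}\) the estimates above give
\[|x_{i,j}(11)|_{S}\leq|x_{i,j}(2^{3})|_{S}+|x_{i,j}(2^{1})|_{S}+|x_{i,j}(1)|_{S}\leq 8\cdot 3^{2}+8\cdot 1^{2}+1=81,\]
which is consistent with the stated bound \(1+8\log^{3}(|a|)\).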
**Corollary 4.2**.: _Using the notations of §3, for \(i,j\in\{1,2,3\},i\neq j\) and \(d\in\mathbb{N}\), we define \(\underline{X}_{i,j}^{d}\in\operatorname{Prob}_{c}(\operatorname{St}_{A_{2}}(\mathbb{Z}[t_{1},...,t_{m}]))\) as_
\[\underline{X}_{i,j}^{d}=\frac{1}{|\operatorname{Poly}(d,d)|}\sum_{p\in \operatorname{Poly}(d,d)}x_{i,j}(p).\]
_Then for_
\[S=\bigcup_{1\leq i,j\leq 3,i\neq j}\{x_{i,j}(\pm 1),x_{i,j}(\pm t_{1}),...,x_{i, j}(\pm t_{m})\}\]
_it holds that_
\[|\underline{X}_{i,j}^{d}|_{S}\lesssim d^{m+3}.\]
Proof.: Fix \(i,j\in\{1,2,3\},i\neq j\). It is enough to show that for every \(p\in\operatorname{Poly}(d,d),p\neq 0\) it holds that
\[|x_{i,j}(p)|_{S}\lesssim(d+m)^{m+3}.\]
Let \(p=\sum_{\underline{t}\in\mathcal{B}^{d}}c_{\underline{t}}\underline{t}\in\operatorname{Poly}(d,d)\setminus\{0\}\). Then,
\[|x_{i,j}(p)|_{S}\leq\sum_{\underline{t}\in\mathcal{B}^{d}}|x_{i,j}(c_{\underline{t}}\underline{t})|_{S}\lesssim^{\text{Proposition }4.1}\sum_{\underline{t}\in\mathcal{B}^{d}}\left(1+\log^{3}(|c_{\underline{t}}|)+(\deg(\underline{t}))^{2}\right)\lesssim(d+m)^{m+3},\]
as needed.
**Lemma 5.1**.: _Let \(d\in\mathbb{N},d\geq D\), \((d_{1},...,d_{6})\in(\mathbb{N}\cap[d,3d])^{6}\) and \(d^{\prime},d^{\prime\prime}\in\mathbb{N}\cap[d,3d]\)._
_For \(j=1,2\) it holds that_
\[\|F_{j}(d_{1},d^{\prime},d^{\prime},d_{4},d_{5},d_{6})-F_{j}(d_{1},d^{\prime \prime},d^{\prime\prime},d_{4},d_{5},d_{6})\|\lesssim d\epsilon(d),\]
\[\|F_{j}(d_{1},d_{2},d^{\prime},d^{\prime},d_{5},d_{6})-F_{j}(d_{1},d_{2},d^{ \prime\prime},d^{\prime\prime},d_{5},d_{6})\|\lesssim d\epsilon(d),\]
\[\|F_{j}(d_{1},d_{2},d_{3},d^{\prime},d^{\prime},d_{6})-F_{j}(d_{1},d_{2},d_{3}, d^{\prime\prime},d^{\prime\prime},d_{6})\|\lesssim d\epsilon(d).\]
Proof.: The proofs of all these inequalities are similar and we will prove only the first one.
Assume without loss of generality that \(d^{\prime}<d^{\prime\prime}\), then it is enough to prove that
\[\|F_{j}(d_{1},d^{\prime},d^{\prime},d_{4},d_{5},d_{6})-F_{j}(d_{1},d^{\prime} +1,d^{\prime}+1,d_{4},d_{5},d_{6})\|\lesssim\epsilon(d).\]
Indeed, by \((\mathcal{A}2)\) it holds that
\[\|F_{j}(d_{1},d^{\prime},d^{\prime},d_{4},d_{5},d_{6})-F_{j}(d_{1 },d^{\prime}+1,d^{\prime}+1,d_{4},d_{5},d_{6})\|\leq\] \[\|F_{j}(d_{1},d^{\prime},d^{\prime},d_{4},d_{5},d_{6})-F_{j}(d_{1 },d^{\prime}+1,d^{\prime},d_{4},d_{5},d_{6})\|+\] \[\|F_{j}(d_{1},d^{\prime}+1,d^{\prime},d_{4},d_{5},d_{6})-F_{j}(d_ {1},d^{\prime}+1,d^{\prime}+1,d_{4},d_{5},d_{6})\|\lesssim\epsilon(d).\]
**Corollary 5.2**.: _Let \(d\in\mathbb{N},d\geq D\) and \(d_{1},d_{6},d^{\prime},d^{\prime\prime}\in\mathbb{N}\cap[d,3d]\)._
_For \(j=1,2\) it holds that_
\[\|F_{j}(d_{1},d^{\prime},d^{\prime},d^{\prime},d^{\prime},d_{6})-F_{j}(d_{1 },d^{\prime\prime},d^{\prime\prime},d^{\prime\prime},d_{6})\|\lesssim d \epsilon(d).\]
Thus, we can prove the following general convergence result:
**Theorem 5.3**.: _Let \(\mathbb{E},D,F_{1},F_{2},\epsilon\) be as above and assume that \(\sum_{d\in\mathbb{N},d\geq D}d\epsilon(d)<\infty\). Define a sequence \(\{\xi_{k}\}_{k\in\mathbb{N}}\subseteq\mathbb{E}\) as follows:_
\[\xi_{k}=\begin{cases}F_{1}(\frac{k+1}{2},...,\frac{k+1}{2})&\text{$k$ is odd}\\ F_{2}(\frac{k}{2},...,\frac{k}{2})&\text{$k$ is even}\end{cases}.\]
_Then this sequence is a Cauchy sequence in \(\mathbb{E}\) and thus converges._
Proof.: By the assumption that \(\sum_{d\in\mathbb{N},d\geq D}d\epsilon(d)<\infty\), it is enough to prove that for \(\{j_{1},j_{2}\}=\{1,2\}\) and \(d\geq D\) it holds that
\[\|F_{j_{1}}(d,...,d)-F_{j_{2}}(d+1,...,d+1)\|\lesssim d\epsilon(d).\]
Fix \(\{j_{1},j_{2}\}=\{1,2\}\) and \(d\geq D\). Then
\[\|F_{j_{1}}(d,...,d)-F_{j_{2}}(d+1,...,d+1)\|\lesssim^{\text{Corollary }5.2}\] \[d\epsilon(d)+\|F_{j_{1}}(d,3d,3d,3d,3d,d)-F_{j_{2}}(d+1,...,d+1)\|\lesssim^{\text{Lemma }5.1}\] \[d\epsilon(d)+\|F_{j_{1}}(d,3d,d+1,d+1,3d,d)-F_{j_{2}}(d+1,...,d+1)\|\lesssim^{(\mathcal{A}1)}\] \[d\epsilon(d)+\|F_{j_{2}}(d+1,3d,d,d,3d,d+1)-F_{j_{2}}(d+1,...,d+1)\|\lesssim^{\text{Lemma }5.1}\] \[d\epsilon(d)+\|F_{j_{2}}(d+1,3d,3d,3d,3d,d+1)-F_{j_{2}}(d+1,...,d+1)\|\lesssim^{\text{Corollary }5.2}d\epsilon(d),\]
as needed.
### Relative fixed point property for \(\mathrm{St}_{A_{2}}(\mathbb{Z}[t_{1},...,t_{m}])\)
We fix \(m\in\mathbb{N}\) and for \(1\leq i,j\leq 3,i\neq j\), we denote \(K_{i,j}\) to be the \((i,j)\)-root subgroup of \(\mathrm{St}_{A_{2}}(\mathbb{Z}[t_{1},...,t_{m}])\) as defined above. Also, for \(\{i,j,k\}=\{1,2,3\}\), we define
\[H_{i,k}=\langle K_{i,j},K_{j,k}\rangle<\mathrm{St}_{A_{2}}(\mathbb{Z}[t_{1},...,t_{m}])\]
and note that \(H_{i,k}\) is isomorphic to the group \(\mathrm{H}_{3}(\mathbb{Z}[t_{1},...,t_{m}])\). A key ingredient of the proof of our relative fixed point property is that we can order the root subgroups such that every three consecutive root subgroups generate a Heisenberg group, with the middle one of the three being its center. Explicitly, we define two maps \(\sigma_{1},\sigma_{2}:\{1,...,6\}\rightarrow\{(i_{1},i_{2}):1\leq i_{1},i_{2}\leq 3,i_{1}\neq i_{2}\}\) as follows:
\[\sigma_{1}(1)=(1,2),\sigma_{1}(2)=(1,3),\sigma_{1}(3)=(2,3),\]
\[\sigma_{1}(4)=(2,1),\sigma_{1}(5)=(3,1),\sigma_{1}(6)=(3,2),\]
and
\[\sigma_{2}(1)=(2,3),\sigma_{2}(2)=(1,3),\sigma_{2}(3)=(1,2),\]
\[\sigma_{2}(4)=(3,2),\sigma_{2}(5)=(3,1),\sigma_{2}(6)=(2,1).\]
We note that for every \(2\leq i\leq 5\) and every \(j=1,2\) it holds that
\[\langle K_{\sigma_{j}(i-1)},K_{\sigma_{j}(i)},K_{\sigma_{j}(i+1)}\rangle=H_{ \sigma_{j}(i)},\]
and that \(K_{\sigma_{j}(i)}\) is the center of \(H_{\sigma_{j}(i)}\) (we use a slight abuse of notation here, i.e., we identify \(K_{i_{1},i_{2}}=K_{(i_{1},i_{2})}\) and \(H_{i_{1},i_{2}}=H_{(i_{1},i_{2})}\)).
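For instance, for \(j=1\) and \(i=2\) we have \(\sigma_{1}(1)=(1,2),\sigma_{1}(2)=(1,3),\sigma_{1}(3)=(2,3)\); the commutator relation \([x_{1,2}(p),x_{2,3}(q)]=x_{1,3}(pq)\) (used, e.g., in the proof of Proposition 4.1), together with the fact that \(K_{1,3}\) commutes with both \(K_{1,2}\) and \(K_{2,3}\), shows that \(\langle K_{1,2},K_{1,3},K_{2,3}\rangle=H_{1,3}\) is a copy of \(\mathrm{H}_{3}(\mathbb{Z}[\mathrm{t}_{1},...,\mathrm{t}_{\mathrm{m}}])\) whose center is \(K_{1,3}\).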
Using the notations of §3, for \(i\in\{1,...,6\},j=1,2\) and \(d\in\mathbb{N}\), we define \(\underline{X}^{d}_{\sigma_{j}(i)}\in\mathrm{Prob}_{c}(\mathrm{St}_{A_{2}}(\mathbb{Z}[t_{1},...,t_{m}]))\) as
\[\underline{X}^{d}_{\sigma_{j}(i)}=\frac{1}{|\mathrm{Poly}(d,d)|}\sum_{p\in \mathrm{Poly}(d,d)}x_{\sigma_{j}(i)}(p).\]
With these notations, we can use the results of §3 and prove the following:
**Theorem 5.4**.: _Let \(\mathbb{E}\) be a uniformly convex Banach space with a modulus of convexity \(\delta:(0,2]\rightarrow(0,1]\) and \(r=r(\delta)<1\) be the constant of Theorem 3.13. Also, let_
\[S=\bigcup_{1\leq i_{1},i_{2}\leq 3,i_{1}\neq i_{2}}\{x_{i_{1},i_{2}}(\pm 1),x_{i_ {1},i_{2}}(\pm t_{1}),...,x_{i_{1},i_{2}}(\pm t_{m})\}.\]
_For every \(d\geq\max\{16,m\}\), every \(f\in\mathrm{Prob}_{c}(\mathrm{St}_{A_{2}}(\mathbb{Z}[t_{1},...,t_{m}]))\), every affine isometric action \(\rho\) of \(\mathrm{St}_{A_{2}}(\mathbb{Z}[t_{1},...,t_{m}])\) on \(\mathbb{E}\) and every unit vector \(\xi\in\mathbb{E}\) the following holds:_
1. _For_ \(i=2,5\) _and_ \(d_{i-1},d_{i},d_{i+1}\in\mathbb{N}\cap[d,3d]\)_, if_ \(d_{i}-(d_{i-1}+d_{i+1})\geq d-1\)_, then_ \[\Big{\|}\rho(\underline{X}^{d_{i-1}}_{\sigma_{1}(i-1)})\rho(\underline{X}^{d_{i }}_{\sigma_{1}(i)})\rho(\underline{X}^{d_{i+1}}_{\sigma_{1}(i+1)})\rho(f)\xi\] \[\quad-\rho(\underline{X}^{d_{i+1}}_{\sigma_{2}(i-1)})\rho( \underline{X}^{d_{i}}_{\sigma_{2}(i)})\rho(\underline{X}^{d_{i-1}}_{\sigma_{2} (i+1)})\rho(f)\xi\Big{\|}\lesssim d^{m}\left(\frac{1}{2}\right)^{d}(d^{2}+|f| _{S}).\]
2. _For_ \(2\leq i\leq 5\)_,_ \(j=1,2\) _and_ \(d_{i-1},d_{i},d_{i+1}\in\mathbb{N}\cap[d,3d]\)_, if_ \(\max\{d_{i-1},d_{i+1}\}\geq d_{i}\)_, then_ \[\Big{\|}\rho(\underline{X}_{\sigma_{j}(i-1)}^{d_{i-1}})\rho( \underline{X}_{\sigma_{j}(i)}^{d_{i}})\rho(\underline{X}_{\sigma_{j}(i+1)}^{d _{i+1}})\rho(f)\xi\] \[\quad-\rho(\underline{X}_{\sigma_{j}(i-1)}^{d_{i-1}})\rho( \underline{X}_{\sigma_{j}(i)}^{d_{i}+1})\rho(\underline{X}_{\sigma_{j}(i+1)}^{ d_{i+1}})\rho(f)\xi\Big{\|}\lesssim d^{2m}r^{\sqrt{d}}(d^{2}+|f|_{S}).\]
Proof.: For \(2\leq i\leq 5\) and \(j=1,2\), we note that \(H_{\sigma_{j}(i)}\cong\mathrm{H}_{3}(\mathbb{Z}[\mathrm{t}_{1},...,\mathrm{t} _{\mathrm{m}}])\) and that the following holds:
* If \(i+j\) is odd, then the isomorphism between \(H_{\sigma_{j}(i)}\) and \(\mathrm{H}_{3}(\mathbb{Z}[\mathrm{t}_{1},...,\mathrm{t}_{\mathrm{m}}])\) is induced by \(x_{\sigma_{j}(i-1)}(p)\mapsto x(p),x_{\sigma_{j}(i+1)}(p)\mapsto y(p),x_{ \sigma_{j}(i)}(p)\mapsto z(p),\forall p\in\mathbb{Z}[t_{1},...,t_{m}]\). Moreover, extending this isomorphism to the group ring yields that \[\underline{X}_{\sigma_{j}(i-1)}^{d_{i-1}}\mapsto\underline{X}^{d_{i-1}}, \underline{X}_{\sigma_{j}(i)}^{d_{i}}\mapsto\underline{Z}^{d_{i}},\underline{ X}_{\sigma_{j}(i+1)}^{d_{i+1}}\mapsto\underline{Y}^{d_{i+1}}.\]
* If \(i+j\) is even, then the isomorphism between \(H_{\sigma_{j}(i)}\) and \(\mathrm{H}_{3}(\mathbb{Z}[\mathrm{t}_{1},...,\mathrm{t}_{\mathrm{m}}])\) is induced by \(x_{\sigma_{j}(i-1)}(p)\mapsto y(p),x_{\sigma_{j}(i+1)}(p)\mapsto x(p),x_{ \sigma_{j}(i)}(p)\mapsto z(p),\forall p\in\mathbb{Z}[t_{1},...,t_{m}]\). Moreover, extending this isomorphism to the group ring yields that \[\underline{X}_{\sigma_{j}(i-1)}^{d_{i-1}}\mapsto\underline{Y}^{d_{i-1}}, \underline{X}_{\sigma_{j}(i)}^{d_{i}}\mapsto\underline{Z}^{d_{i}},\underline{ X}_{\sigma_{j}(i+1)}^{d_{i+1}}\mapsto\underline{X}^{d_{i+1}}.\]
Below, we will use these isomorphisms implicitly in order to apply theorems from SS3 in our setting.
Fix \(d,f,\xi\) as above.
1. Assume that \(i=2,5\), \(d_{i-1},d_{i},d_{i+1}\in\mathbb{N}\cap[d,3d]\) and \(d_{i}-(d_{i-1}+d_{i+1})\geq d-1\). Then \[\Big{\|}\rho(\underline{X}_{\sigma_{1}(i-1)}^{d_{i-1}})\rho( \underline{X}_{\sigma_{1}(i)}^{d_{i}})\rho(\underline{X}_{\sigma_{1}(i+1)}^{d _{i+1}})\rho(f)\xi\] \[-\rho(\underline{X}_{\sigma_{2}(i-1)}^{d_{i+1}})\rho(\underline{X }_{\sigma_{2}(i)}^{d_{i}})\rho(\underline{X}_{\sigma_{2}(i+1)}^{d_{i-1}})\rho( f)\xi\Big{\|}\lesssim^{\text{Theorem \ref{thm:2}}\ref{thm:3},d\geq m}\] \[d^{m}\left(\frac{1}{2}\right)^{d-1}\max_{\underline{t}\in\mathcal{ B}^{d_{i}}}\|\rho(f)\xi-\rho(x_{\sigma_{1}(i)}(2^{d_{i}}t))\rho(f)\xi\|\lesssim^{ \text{Proposition \ref{thm:2}}\ref{thm:3},\|\xi\|=1}\] \[d^{m}\left(\frac{1}{2}\right)^{d}\left(\max_{\underline{t}\in \mathcal{B}^{d_{i}}}\lvert x_{x_{\sigma_{1}(i)}}(2^{d_{i}}\underline{t}) \rvert_{S}+\lvert f\rvert_{S}\right)\lesssim^{\text{Proposition \ref{thm:2}}\ref{thm:3},d_{i}\leq 3d}\] \[d^{m}\left(\frac{1}{2}\right)^{d}\left(d^{2}+\lvert f\rvert_{S}\right),\] as needed.
2. Assume that \(2\leq i\leq 5\), \(d_{i-1},d_{i},d_{i+1}\in\mathbb{N}\cap[d,3d]\) and \(\max\{d_{i-1},d_{i+1}\}\geq d_{i}\). Then \[\left\|\rho(\underline{X}_{\sigma_{j}(i-1)}^{d_{i-1}})\rho(\underline{X}_{\sigma_{j}(i)}^{d_{i}})\rho(\underline{X}_{\sigma_{j}(i+1)}^{d_{i+1}})\rho(f)\xi-\rho(\underline{X}_{\sigma_{j}(i-1)}^{d_{i-1}})\rho(\underline{X}_{\sigma_{j}(i)}^{d_{i}+1})\rho(\underline{X}_{\sigma_{j}(i+1)}^{d_{i+1}})\rho(f)\xi\right\|\lesssim^{\text{Theorem }3.13,\,d\geq m}\] \[d^{2m}r^{\sqrt{d}}\max_{0\leq d^{\prime}\leq 2d_{i}}\max_{\underline{t}\in\mathcal{B}^{d_{i}+1}}\|\rho(f)\xi-\rho(x_{\sigma_{j}(i)}(2^{d^{\prime}}\underline{t}))\rho(f)\xi\|\lesssim^{\text{Proposition }2.5,\,\|\xi\|=1}\] \[d^{2m}r^{\sqrt{d}}\left(\max_{0\leq d^{\prime}\leq 2d_{i}}\max_{\underline{t}\in\mathcal{B}^{d_{i}+1}}|x_{\sigma_{j}(i)}(2^{d^{\prime}}\underline{t})|_{S}+|f|_{S}\right)\lesssim^{\text{Proposition }4.1,\,d_{i}\leq 3d}d^{2m}r^{\sqrt{d}}\left(d^{2}+|f|_{S}\right),\] as needed.
Thus, it is enough to prove that
\[\|F_{1}(d_{1},d_{2},d_{3},d_{4},d_{5},d_{6})-F_{3}(d_{3},d_{2},d_{1},d_{4},d_{5}, d_{6})\|\lesssim\epsilon(d)\]
and
\[\|F_{3}(d_{3},d_{2},d_{1},d_{4},d_{5},d_{6})-F_{2}(d_{3},d_{2},d_{1},d_{6},d_{5}, d_{4})\|\lesssim\epsilon(d).\]
For the first inequality, we denote \(f=\underline{X}^{d_{4}}_{\sigma_{1}(4)}\underline{X}^{d_{5}}_{\sigma_{1}(5)}\underline{X}^{d_{6}}_{\sigma_{1}(6)}\in\operatorname{Prob}_{c}(\operatorname{St}_{A_{2}}(\mathbb{Z}[t_{1},...,t_{m}]))\) and note that
\[|f|_{S}\lesssim^{\operatorname{Corollary}\,4.2,d_{4},d_{5},d_{6}\leq 3d} \,d^{m+3}.\]
Then
\[\|F_{1}(d_{1},d_{2},d_{3},d_{4},d_{5},d_{6})-F_{3}(d_{3},d_{2},d_{ 1},d_{4},d_{5},d_{6})\|=\] \[\left\|\rho(\underline{X}^{d_{1}}_{\sigma_{1}(1)})\rho(\underline {X}^{d_{2}}_{\sigma_{1}(2)})\rho(\underline{X}^{d_{3}}_{\sigma_{1}(3)})\rho( f)\xi-\rho(\underline{X}^{d_{3}}_{\sigma_{2}(1)})\rho(\underline{X}^{d_{2}}_{ \sigma_{2}(2)})\rho(\underline{X}^{d_{1}}_{\sigma_{1}(3)})\rho(f)\xi\right\| \lesssim^{\operatorname{Theorem}\,5.4}\] \[d^{m}\left(\frac{1}{2}\right)^{d}\left(d^{2}+|f|_{S}\right) \lesssim\left(\frac{1}{2}\right)^{d}(d^{m+2}+d^{2m+3})\lesssim\epsilon(d).\]
The proof of the bound on \(\|F_{3}(d_{3},d_{2},d_{1},d_{4},d_{5},d_{6})-F_{2}(d_{3},d_{2},d_{1},d_{6},d_{ 5},d_{4})\|\) is somewhat simpler:
\[\|F_{3}(d_{3},d_{2},d_{1},d_{4},d_{5},d_{6})-F_{2}(d_{3},d_{2},d_ {1},d_{6},d_{5},d_{4})\|\leq^{\operatorname{Claim}\,2.2}\] \[\left\|\rho(\underline{X}^{d_{4}}_{\sigma_{1}(4)})\rho(\underline {X}^{d_{5}}_{\sigma_{1}(5)})\rho(\underline{X}^{d_{6}}_{\sigma_{1}(6)})\xi- \rho(\underline{X}^{d_{6}}_{\sigma_{2}(4)})\rho(\underline{X}^{d_{5}}_{\sigma _{1}(5)})\rho(\underline{X}^{d_{4}}_{\sigma_{1}(6)})\xi\right\|\lesssim^{ \operatorname{Theorem}\,5.4}\] \[d^{m}\left(\frac{1}{2}\right)^{d}d^{2}\leq\epsilon(d).\]
Thus, we proved that \((\mathcal{A}1)\) holds.
Next, we will prove that \((\mathcal{A}2)\) holds. Fix \(j\in\{1,2\}\), \(d\in\mathbb{N},d\geq D\), \((d_{1},...,d_{6})\in(\mathbb{N}\cap[d,3d])^{6}\) and \(2\leq i\leq 5\). Assume that \(\max\{d_{i-1},d_{i+1}\}\geq d_{i}\).
We denote \(f\in\operatorname{Prob}_{c}(\operatorname{St}_{A_{2}}(\mathbb{Z}[t_{1},...,t_{m}]))\) to be
\[f=\underline{X}^{d_{i+2}}_{\sigma_{j}(i+2)}...\underline{X}^{d_{6}}_{\sigma_{j}(6)}\]
(in the case where \(i=5\), \(f=\delta_{e}\)). We note that
\[|f|_{S}\lesssim^{\operatorname{Corollary}\,4.2}d^{m+3}.\]
Then,
\[\|F_{j}(d_{1},...,d_{i},...,d_{6})-F_{j}(d_{1},...,d_{i}+1,...,d_{6})\|\leq^{ \operatorname{Claim}\,2.2}\]
\[\left\|\rho(\underline{X}^{d_{i-1}}_{\sigma_{j}(i-1)})\rho(\underline{X}^{d_ {i}}_{\sigma_{j}(i)})\rho(\underline{X}^{d_{i+1}}_{\sigma_{j}(i+1)})\rho(f) \xi-\rho(\underline{X}^{d_{i-1}}_{\sigma_{j}(i-1)})\rho(\underline{X}^{d_{i} +1}_{\sigma_{j}(i)})\rho(\underline{X}^{d_{i+1}}_{\sigma_{j}(i+1)})\rho(f)\xi \right\|\lesssim^{\operatorname{Theorem}\,5.4}\]
\[d^{2m}r^{\sqrt{d}}\left(d^{2}+|f|_{S}\right)\lesssim d^{2m}r^{\sqrt{d}}(d^{2}+ d^{m+3})\lesssim d^{3m+3}r^{\sqrt{d}}=\epsilon(d).\]
Thus, we proved that \((\mathcal{A}2)\) is fulfilled.
As in Theorem 5.3, we will define
\[\xi_{k}=\begin{cases}F_{1}(\frac{k+1}{2},...,\frac{k+1}{2})&\text{$k$ is odd}\\ F_{2}(\frac{k}{2},...,\frac{k}{2})&\text{$k$ is even}\end{cases}.\]
By Theorem 5.3 it follows that this is a Cauchy sequence and thus converges. Denote \(\widetilde{\xi}=\lim\xi_{k}\). We will show that \(\widetilde{\xi}\in\mathbb{E}^{\rho(H_{1,3})}\) and thus \(\mathbb{E}^{\rho(H_{1,3})}\neq\emptyset\).
Fix \(\underline{t}_{0}\) to be a monomial. For \(d\geq\deg(\underline{t}_{0})\) and \(j=1,2\), we denote \(f_{j}^{d}\in\operatorname{Prob}_{c}(\operatorname{St}_{A_{2}}(\mathbb{Z}[t_{1},...,t_{m}]))\) to be the probability function such that
\[\underline{X}_{\sigma_{j}(1)}^{d}\underline{X}_{\sigma_{j}(2)}^{d}\underline{X }_{\sigma_{j}(3)}^{d}\underline{X}_{\sigma_{j}(4)}^{d}\underline{X}_{\sigma_{j }(5)}^{d}\underline{X}_{\sigma_{j}(6)}^{d}=\left(\frac{1}{2^{d}}\sum_{a=0}^{2^ {d}-1}x_{\sigma_{j}(1)}(a\underline{t}_{0})\right)f_{j}^{d}.\]
Thus,
\[\left\|\rho(x_{\sigma_{j}(1)}(\underline{t}_{0}))F_{j}(d,...,d)-F_{j}(d,...,d)\right\|=\] \[\left\|\rho(x_{\sigma_{j}(1)}(\underline{t}_{0}))\rho\left(\frac{1}{2^{d}}\sum_{a=0}^{2^{d}-1}x_{\sigma_{j}(1)}(a\underline{t}_{0})\right)\rho(f_{j}^{d})\xi\right.\] \[\left.\qquad\qquad\qquad\qquad\qquad-\rho\left(\frac{1}{2^{d}}\sum_{a=0}^{2^{d}-1}x_{\sigma_{j}(1)}(a\underline{t}_{0})\right)\rho(f_{j}^{d})\xi\right\|\lesssim^{\text{Proposition }2.5,\,\|\xi\|=1}\] \[\frac{1}{2^{d}}\left(|x_{\sigma_{j}(1)}(2^{d}\underline{t}_{0})|_{S}+|f_{j}^{d}|_{S}\right)\lesssim^{\text{Proposition }4.1,\ \text{Corollary }4.2}\frac{d^{m+3}}{2^{d}}.\]
It follows that
\[\|\rho(x_{\sigma_{1}(1)}(\underline{t}_{0}))\widetilde{\xi}-\widetilde{\xi}\| =\lim_{d\to\infty}\|\rho(x_{\sigma_{1}(1)}(\underline{t}_{0}))\xi_{2d-1}-\xi _{2d-1}\|=\]
\[\lim_{d\to\infty}\|\rho(x_{\sigma_{1}(1)}(\underline{t}_{0}))F_{1}(d,....,d)- F_{1}(d,....,d)\|\lesssim\lim_{d\to\infty}\frac{d^{m+3}}{2^{d}}=0,\]
and thus \(\rho(x_{\sigma_{1}(1)}(\underline{t}_{0}))\widetilde{\xi}=\widetilde{\xi}\). This holds for every \(\underline{t}_{0}\) and thus \(\widetilde{\xi}\in\mathbb{E}^{\rho(K_{1,2})}\). Similarly,
\[\|\rho(x_{\sigma_{2}(1)}(\underline{t}_{0}))\widetilde{\xi}-\widetilde{\xi}\|=\lim_{d\to\infty}\|\rho(x_{\sigma_{2}(1)}(\underline{t}_{0}))\xi_{2d}-\xi_{2d}\|=\] \[\lim_{d\to\infty}\|\rho(x_{\sigma_{2}(1)}(\underline{t}_{0}))F_{2}(d,...,d)-F_{2}(d,...,d)\|=0,\]
and thus \(\rho(x_{\sigma_{2}(1)}(\underline{t}_{0}))\widetilde{\xi}=\widetilde{\xi}\). This holds for every \(\underline{t}_{0}\) and thus \(\widetilde{\xi}\in\mathbb{E}^{\rho(K_{2,3})}\).
We conclude that
\[\widetilde{\xi}\in\mathbb{E}^{\rho(K_{1,2})}\cap\mathbb{E}^{\rho(K_{2,3})}= \mathbb{E}^{\rho(H_{1,3})}\]
as needed.
### Relative fixed point property for \(\operatorname{St}_{\Phi}\)
Below, for a subgroup \(K<\Gamma\) and \(g\in\Gamma\), we denote \(K^{g}=gKg^{-1}\). Also, given a root \(\gamma\in\Phi\) and a ring \(R\), we will denote
\[K_{\gamma}(R)=\{x_{\gamma}(p):p\in R\}.\]
**Observation 5.6**.: We note that if the pair \((\Gamma,K)\) has relative property \((F_{\mathcal{E}_{uc}})\) and \(\phi:\Gamma\to\Gamma\) is an isomorphism of \(\Gamma\), then the pair \((\Gamma,\phi(K))\) has relative property \((F_{\mathcal{E}_{uc}})\). In particular, if the pair \((\Gamma,K)\) has relative property \((F_{\mathcal{E}_{uc}})\), then for every \(g\in\Gamma\), the pair \((\Gamma,K^{g})\) has relative property \((F_{\mathcal{E}_{uc}})\).
**Theorem 5.7**.: _If for some root \(\gamma_{0}\in C_{2}\), the pair \((\mathrm{St}_{C_{2}}(\mathbb{Z}[t_{1},...,t_{m}]),K_{\gamma_{0}})\) has relative property \((F_{\mathcal{E}_{uc}})\), then for every \(\gamma\in C_{2}\), the pair \((\mathrm{St}_{C_{2}}(\mathbb{Z}[t_{1},...,t_{m}]),K_{\gamma})\) has relative property \((F_{\mathcal{E}_{uc}})\)._
Proof.: We note that for every \(\gamma,\gamma^{\prime}\in C_{2}\) such that \(\gamma\) and \(\gamma^{\prime}\) are both short/long roots, there is an automorphism \(\phi:\mathrm{St}_{C_{2}}(\mathbb{Z}[t_{1},...,t_{m}])\to\mathrm{St}_{C_{2}}(\mathbb{Z}[t_{1},...,t_{m}])\) such that \(\phi(K_{\gamma})=K_{\gamma^{\prime}}\). This implies that if \(\gamma,\gamma^{\prime}\) are both short/long roots, then, by Observation 5.6, the pair \((\mathrm{St}_{C_{2}}(\mathbb{Z}[t_{1},...,t_{m}]),K_{\gamma})\) has relative property \((F_{\mathcal{E}_{uc}})\) if and only if the pair \((\mathrm{St}_{C_{2}}(\mathbb{Z}[t_{1},...,t_{m}]),K_{\gamma^{\prime}})\) has relative property \((F_{\mathcal{E}_{uc}})\).
Let \(\gamma_{0}\in C_{2}\) such that the pair \((\mathrm{St}_{C_{2}}(\mathbb{Z}[t_{1},...,t_{m}]),K_{\gamma_{0}})\) has relative property \((F_{\mathcal{E}_{uc}})\).
**Case 1: \(\gamma_{0}\) is a long root**. By our argument above, it follows that for every long root \(\gamma\), the pair \((\mathrm{St}_{C_{2}}(\mathbb{Z}[t_{1},...,t_{m}]),K_{\gamma})\) has relative property \((F_{\mathcal{E}_{uc}})\). We are left to show that there is a short root \(\gamma\) such that the pair \((\mathrm{St}_{C_{2}}(\mathbb{Z}[t_{1},...,t_{m}]),K_{\gamma})\) has relative property \((F_{\mathcal{E}_{uc}})\).
Explicitly, let \(\alpha\) denote a long root of \(C_{2}\) and \(\beta\) denote a short root of \(C_{2}\) such that the angle between \(\alpha\) and \(\beta\) is \(\frac{3}{4}\pi\). We already showed that \((\mathrm{St}_{C_{2}}(\mathbb{Z}[t_{1},...,t_{m}]),K_{\alpha})\) and \((\mathrm{St}_{C_{2}}(\mathbb{Z}[t_{1},...,t_{m}]),K_{\alpha+2\beta})\) have relative property \((F_{\mathcal{E}_{uc}})\). We will show that the pair \((\mathrm{St}_{C_{2}}(\mathbb{Z}[t_{1},...,t_{m}]),K_{\alpha+\beta})\) has relative property \((F_{\mathcal{E}_{uc}})\).
By Proposition 2.14, it holds for every \(p\in\mathbb{Z}[t_{1},...,t_{m}]\) that
\[[x_{\alpha}(p),x_{\beta}(1)]=x_{\alpha+\beta}(p)x_{\alpha+2\beta}(p).\]
Thus,
\[x_{\alpha+\beta}(p) =[x_{\alpha}(p),x_{\beta}(1)]x_{\alpha+2\beta}(-p)\] \[=x_{\alpha}(-p)(x_{\beta}(-1)x_{\alpha}(p)x_{\beta}(1))x_{\alpha +2\beta}(-p)\] \[\subseteq K_{\alpha}K_{\alpha}^{x_{\beta}(-1)}K_{\alpha+2\beta},\]
and it follows that
\[K_{\alpha+\beta}\subseteq K_{\alpha}K_{\alpha}^{x_{\beta}(-1)}K_{\alpha+2\beta}\]
and in particular, \(K_{\alpha+\beta}\) is boundedly generated by \(K_{\alpha}\), \(K_{\alpha}^{x_{\beta}(-1)}\) and \(K_{\alpha+2\beta}\).
Thus, \(K_{\alpha+\beta}\) is boundedly generated by \(3\) subgroups that all have relative property \((F_{\mathcal{E}_{uc}})\) and it follows from Lemma 2.6 that \((\mathrm{St}_{C_{2}}(\mathbb{Z}[t_{1},...,t_{m}]),K_{\alpha+\beta})\) has relative property \((F_{\mathcal{E}_{uc}})\).
**Case 2: \(\gamma_{0}\) is a short root**. The proof of this case is similar to the proof of the previous case (but it is more involved). By our argument above, it follows that for every short root \(\gamma\), the pair \((\mathrm{St}_{C_{2}}(\mathbb{Z}[t_{1},...,t_{m}]),K_{\gamma})\) has relative property \((F_{\mathcal{E}_{uc}})\). We are left to show that there is a long root \(\gamma\) such that the pair \((\mathrm{St}_{C_{2}}(\mathbb{Z}[t_{1},...,t_{m}]),K_{\gamma})\) has relative property \((F_{\mathcal{E}_{uc}})\).
A monomial \(\underline{t}=t_{1}^{k_{1}}...t_{m}^{k_{m}}\) will be called square-free if \(k_{i}\in\{0,1\}\) for every \(1\leq i\leq m\). We note that for every monomial \(\underline{t}=t_{1}^{k_{1}}...t_{m}^{k_{m}}\), there are a unique square-free monomial, which we will denote by \(\underline{t}_{\rm{sf}}\), and a unique monomial \(\underline{t}_{\rm{rt}}\) such that \(\underline{t}=\underline{t}_{\rm{sf}}\underline{t}_{\rm{rt}}^{2}\). We note that there are exactly \(2^{m}\) different square-free monomials. For a square-free monomial \(\underline{t}^{\prime}\), we denote by \((\mathbb{Z}[t_{1},...,t_{m}])_{\underline{t}^{\prime}}\) the set of polynomials of the form \(a_{1}\underline{t}_{1}+...+a_{n}\underline{t}_{n}\) such that for every \(1\leq i\leq n\), \((\underline{t}_{i})_{\rm{sf}}=\underline{t}^{\prime}\). We define
\[K_{\alpha+2\beta}((\mathbb{Z}[t_{1},...,t_{m}])_{\underline{t}^{\prime}})=\{x _{\alpha+2\beta}(p):p\in(\mathbb{Z}[t_{1},...,t_{m}])_{\underline{t}^{\prime}}\},\]
and note that \(K_{\alpha+2\beta}\) is boundedly generated by \(K_{\alpha+2\beta}((\mathbb{Z}[t_{1},...,t_{m}])_{\underline{t}^{\prime}}), \underline{t}^{\prime}\) square-free. Thus, it is enough to show that for every square-free \(\underline{t}^{\prime}\), it holds that the pair \(({\rm St}_{C_{2}}(\mathbb{Z}[t_{1},...,t_{m}]),K_{\alpha+2\beta}((\mathbb{Z}[ t_{1},...,t_{m}])_{\underline{t}^{\prime}}))\) has relative property \((F_{\mathcal{E}_{uc}})\).
Fix \(\underline{t}^{\prime}\) to be a square-free monomial. By Proposition 2.14, it holds for every \(p_{1}\in\mathbb{Z}[t_{1},...,t_{m}]\) that
\[[x_{\alpha+\beta}(p_{1}),x_{\beta}(1)]=x_{\alpha+2\beta}(2p_{1})\]
and thus \(K_{\alpha+2\beta}(2\mathbb{Z}[t_{1},...,t_{m}])\) is boundedly generated by \(K_{\alpha+\beta}\) and \(K_{\beta}\).
Also, by Proposition 2.14, it holds for every \(p_{2}\in\mathbb{Z}[t_{1},...,t_{m}]\) that
\[[x_{\alpha}(\underline{t}^{\prime}),x_{\beta}(p_{2})]=x_{\alpha+\beta}( \underline{t}^{\prime}p_{2})x_{\alpha+2\beta}(\underline{t}^{\prime}p_{2}^{2}).\]
Thus, the set
\[\{x_{\alpha+2\beta}(\underline{t}^{\prime}p_{2}^{2}):p_{2}\in\mathbb{Z}[t_{1},...,t_{m}]\}\]
is boundedly generated by \(K_{\beta},K_{\alpha+\beta}\) and \(K_{\beta}^{x_{\alpha}(-\underline{t}^{\prime})}\).
It follows that it is enough to show that for every \(p\in(\mathbb{Z}[t_{1},...,t_{m}])_{\underline{t}^{\prime}}\), there are \(p_{1},p_{2}\in\mathbb{Z}[t_{1},...,t_{m}]\) such that \(p=2p_{1}+\underline{t}^{\prime}p_{2}^{2}\). Indeed, if such \(p_{1},p_{2}\) exist for every \(p\in(\mathbb{Z}[t_{1},...,t_{m}])_{\underline{t}^{\prime}}\), then it follows that \(K_{\alpha+2\beta}((\mathbb{Z}[t_{1},...,t_{m}])_{\underline{t}^{\prime}})\) is boundedly generated by \(K_{\beta},K_{\alpha+\beta}\) and \(K_{\beta}^{x_{\alpha}(-\underline{t}^{\prime})}\) and by Lemma 2.6, the pair \(({\rm St}_{C_{2}}(\mathbb{Z}[t_{1},...,t_{m}]),K_{\alpha+2\beta}((\mathbb{Z}[ t_{1},...,t_{m}])_{\underline{t}^{\prime}}))\) has relative property \((F_{\mathcal{E}_{uc}})\).
Fix \(p\in(\mathbb{Z}[t_{1},...,t_{m}])_{\underline{t}^{\prime}}\). We note that there are monomials \(\underline{t}_{1},...,\underline{t}_{n}\in(\mathbb{Z}[t_{1},...,t_{m}])_{ \underline{t}^{\prime}}\) such that \(p+2\mathbb{Z}[t_{1},...,t_{m}]=\sum_{i=1}^{n}\underline{t}_{i}+2\mathbb{Z}[t_{ 1},...,t_{m}]\). As noted above, for every \(i\), there is a monomial \((\underline{t}_{i})_{\rm{rt}}\) such that \(\underline{t}_{i}=\underline{t}^{\prime}(\underline{t}_{i})_{\rm{rt}}^{2}\). Thus,
\[p+2\mathbb{Z}[t_{1},...,t_{m}] =\sum_{i=1}^{n}\underline{t}_{i}+2\mathbb{Z}[t_{1},...,t_{m}]\] \[=\underline{t}^{\prime}\sum_{i=1}^{n}(\underline{t}_{i})_{\rm{rt} }^{2}+2\mathbb{Z}[t_{1},...,t_{m}]\] \[=\underline{t}^{\prime}\left(\left(\sum_{i=1}^{n}(\underline{t}_{i })_{\rm{rt}}\right)^{2}-2\sum_{1\leq i<j\leq n}(\underline{t}_{i})_{\rm{rt}}( \underline{t}_{j})_{\rm{rt}}\right)+2\mathbb{Z}[t_{1},...,t_{m}]\] \[=\underline{t}^{\prime}\left(\sum_{i=1}^{n}(\underline{t}_{i})_{ \rm{rt}}\right)^{2}+2\mathbb{Z}[t_{1},...,t_{m}].\]
It follows that there are \(p_{1},p_{2}\in\mathbb{Z}[t_{1},...,t_{m}]\) such that \(p=2p_{1}+\underline{t}^{\prime}p_{2}^{2}\) as needed.
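The identity established above can be checked symbolically on a small example; the following sketch (ours, assuming the sympy package, with an arbitrarily chosen polynomial) verifies the decomposition \(p=2p_{1}+\underline{t}^{\prime}p_{2}^{2}\) for \(\underline{t}^{\prime}=t_{1}\):

```python
# Illustrative check (ours, not part of the proof): a sum of monomials sharing
# the square-free part t' = t1 can be written as 2*p1 + t' * p2**2 in Z[t1,t2,t3].
from sympy import symbols, expand, simplify

t1, t2, t3 = symbols("t1 t2 t3")

tp = t1                          # the fixed square-free monomial t'
p = t1 * t2**2 + t1 * t3**4      # both monomials have square-free part t1

# "root" parts: t1*t2**2 = t' * (t2)**2 and t1*t3**4 = t' * (t3**2)**2
p2 = t2 + t3**2                  # sum of the root parts
remainder = expand(p - tp * p2**2)
p1 = simplify(remainder / 2)     # the cross terms are divisible by 2

assert expand(2 * p1 + tp * p2**2 - p) == 0
print(p1)                        # -> -t1*t2*t3**2
```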
The proof of the relative fixed point property in the case where \(\Phi=G_{2}\) appeared in [1] in the context of property (T). The adaptation of the proof of [1] to our setting is straightforward and we claim no originality here.
**Theorem 5.8**.: _For every root \(\gamma\in G_{2}\), the pair \((\mathrm{St}_{G_{2}}(\mathbb{Z}[t_{1},...,t_{m}]),K_{\gamma})\) has relative property \((F_{\mathcal{E}_{uc}})\)._
Proof.: Using the notations of Proposition 2.14, we note that \(K_{\{\pm\alpha,\pm(\alpha+3\beta),\pm(2\alpha+3\beta)\}}\cong\mathrm{St}_{A_{2}}(\mathbb{Z}[t_{1},...,t_{m}])\). By Theorem 5.5, we deduce that for every long root \(\gamma\in G_{2}\), it holds that the pair \((\mathrm{St}_{G_{2}}(\mathbb{Z}[t_{1},...,t_{m}]),K_{\gamma})\) has relative property \((F_{\mathcal{E}_{uc}})\). Thus, we are left to prove relative property \((F_{\mathcal{E}_{uc}})\) for subgroups of short roots in \(G_{2}\). By symmetry, it is enough to show that the pair \((\mathrm{St}_{G_{2}}(\mathbb{Z}[t_{1},...,t_{m}]),K_{\alpha+2\beta})\) has relative property \((F_{\mathcal{E}_{uc}})\).
By Proposition 2.14, for every \(p_{1},p_{2}\in\mathbb{Z}[t_{1},...,t_{m}]\) it holds that
\[[x_{\alpha}(p_{1}),x_{\beta}(p_{2})]=x_{\alpha+\beta}(p_{1}p_{2})x_{\alpha+2 \beta}(p_{1}p_{2}^{2})x_{\alpha+3\beta}(p_{1}p_{2}^{3})x_{2\alpha+3\beta}(p_{1 }^{2}p_{2}^{3}),\]
and
\[[x_{\alpha}(p_{1}p_{2}),x_{\beta}(1)]=x_{\alpha+\beta}(p_{1}p_{2})x_{\alpha+2 \beta}(p_{1}p_{2})x_{\alpha+3\beta}(p_{1}p_{2})x_{2\alpha+3\beta}(p_{1}^{2}p_{ 2}^{2}).\]
Thus,
\[x_{\alpha+2\beta}(p_{1}(p_{2}^{2}-p_{2})) \in[x_{\alpha}(p_{1}p_{2}),x_{\beta}(1)]^{-1}[x_{\alpha}(p_{1}),x _{\beta}(p_{2})]K_{\alpha+3\beta}K_{2\alpha+3\beta}\] \[\subseteq K_{\alpha}^{x_{\beta}(1)}K_{\alpha}K_{\alpha}^{x_{\beta }(-p_{2})}K_{\alpha+3\beta}K_{2\alpha+3\beta}.\]
Taking \(p_{1}=p\in\mathbb{Z}[t_{1},...,t_{m}]\) to be a general polynomial and \(p_{2}=2,t_{1},t_{2},...,t_{m}\), we deduce that
\[K_{\alpha+2\beta}(2\mathbb{Z}[t_{1},...,t_{m}])\subseteq K_{\alpha}^{x_{ \beta}(1)}K_{\alpha}K_{\alpha}^{x_{\beta}(-2)}K_{\alpha+3\beta}K_{2\alpha+3 \beta},\]
\[K_{\alpha+2\beta}((t_{i}^{2}-t_{i})\mathbb{Z}[t_{1},...,t_{m}])\subseteq K_{ \alpha}^{x_{\beta}(1)}K_{\alpha}K_{\alpha}^{x_{\beta}(-t_{i})}K_{\alpha+3 \beta}K_{2\alpha+3\beta},\forall 1\leq i\leq m.\]
Denote
\[K_{\alpha+2\beta}^{\prime}=K_{\alpha+2\beta}\left(2\mathbb{Z}[t_{1},...,t_{m} ]+\sum_{i=1}^{m}(t_{i}^{2}-t_{i})\mathbb{Z}[t_{1},...,t_{m}]\right).\]
By the computations above, \(K_{\alpha+2\beta}^{\prime}\) is boundedly generated by \(K_{\alpha},K_{\alpha+3\beta},K_{2\alpha+3\beta},K_{\alpha}^{x_{\beta}(1)},K_{ \alpha}^{x_{\beta}(-2)}\) and \(K_{\alpha}^{x_{\beta}(-t_{i})},i=1,...,m\). It follows from Lemma 2.6 that \((\mathrm{St}_{G_{2}}(\mathbb{Z}[t_{1},...,t_{m}]),K_{\alpha+2\beta}^{\prime})\) has relative property \((F_{\mathcal{E}_{uc}})\). We note that \(K_{\alpha+2\beta}^{\prime}\) is a finite index subgroup of \(K_{\alpha+2\beta}\) (explicitly, \(K_{\alpha+2\beta}/K_{\alpha+2\beta}^{\prime}=\oplus_{i=1}^{2^{m}}(\mathbb{Z}/2 \mathbb{Z})\)) and thus it follows that \((\mathrm{St}_{G_{2}}(\mathbb{Z}[t_{1},...,t_{m}]),K_{\alpha+2\beta})\) has relative property \((F_{\mathcal{E}_{uc}})\) as needed.
Combining these results with Theorem 5.5 gives rise to the following result, which appeared in the introduction:
**Theorem 5.9**.: _Let \(\Phi\neq C_{2}\) be a classical, reduced, irreducible root system of rank \(\geq 2\) and \(R\) a finitely generated commutative (unital) ring. For every \(\alpha\in\Phi\), the pair \((\mathrm{St}_{\Phi}(R),K_{\alpha})\) has relative property \((F_{\mathcal{E}_{uc}})\)._
Proof.: Fix \(\Phi\) and \(R\) as above.
We note that there is an integer \(m\in\mathbb{N}\) such that there is a ring epimorphism \(\phi:\mathbb{Z}[t_{1},...,t_{m}]\to R\). This ring epimorphism induces a group epimorphism \(\widetilde{\phi}:\mathrm{St}_{\Phi}(\mathbb{Z}[t_{1},...,t_{m}])\to\mathrm{St}_{\Phi}(R)\) such that for every \(\alpha\in\Phi\), \(\widetilde{\phi}(K_{\alpha}(\mathbb{Z}[t_{1},...,t_{m}]))=K_{\alpha}(R)\). Thus, it is enough to prove that for every \(\alpha\in\Phi\), the pair \((\mathrm{St}_{\Phi}(\mathbb{Z}[t_{1},...,t_{m}]),K_{\alpha})\) has relative property \((F_{\mathcal{E}_{uc}})\).
**Case 1: \(\Phi\) is simply laced, \(\Phi=F_{4}\) or \(\Phi=G_{2}.\)** In this case, every root \(\alpha\) of \(\Phi\) is either in an \(A_{2}\) subsystem or in a \(G_{2}\) system (if \(\Phi=G_{2}\)) and thus by Theorems 5.5, 5.8, for every \(\alpha\in\Phi\), the pair \((\mathrm{St}_{\Phi}(\mathbb{Z}[t_{1},...,t_{m}]),K_{\alpha})\) has relative property \((F_{\mathcal{E}_{uc}})\).
**Case 2: \(\Phi=B_{n},n\geq 3\) or \(\Phi=C_{n},n\geq 3.\)** In this case, the only roots that are not in an \(A_{2}\) subsystem are the short roots in the \(C_{2}\) subsystem (if \(\Phi=B_{n}\)) or the long roots in the \(C_{2}\) subsystem (if \(\Phi=C_{n}\)). By Theorem 5.7, it follows that for every \(\alpha\) the pair \((\mathrm{St}_{\Phi}(\mathbb{Z}[t_{1},...,t_{m}]),K_{\alpha})\) has relative property \((F_{\mathcal{E}_{uc}})\).
## 6. General synthesis machinery
The aim of this section is to develop a general machinery that allows one to deduce property \((F_{\mathcal{E}_{uc}})\) for a group \(\Gamma\) from relative property \((F_{\mathcal{E}_{uc}})\) given a certain subgroup structure of \(\Gamma\). This machinery was inspired by the work of Mimura [16] (which in turn was inspired by the work of Shalom [14]), but our machinery is much more general than that of [16].
### Ultraproduct of Banach spaces and group actions
We recall that an _ultrafilter_ \(\mathcal{U}\) on \(\mathbb{N}\) is a non-empty collection of subsets of \(\mathbb{N}\) such that the following holds:
* \(\emptyset\notin\mathcal{U}\).
* If \(B\in\mathcal{U}\) and \(B\subseteq A\), then \(A\in\mathcal{U}\).
* If \(A,B\in\mathcal{U}\), then \(A\cap B\in\mathcal{U}\).
* If \(A\notin\mathcal{U}\), then \(\mathbb{N}\setminus A\in\mathcal{U}\).
An ultrafilter \(\mathcal{U}\) on \(\mathbb{N}\) is called _non-principal_ if for every \(n\in\mathbb{N}\), \(\{n\}\notin\mathcal{U}\). The existence of non-principal ultrafilters is proven via the axiom of choice. In the discussion below, we fix \(\mathcal{U}\) to be a non-principal ultrafilter.
Given a sequence of real numbers \(\{a_{n}\}_{n\in\mathbb{N}}\), the limit of the sequence along the ultrafilter \(\mathcal{U}\) is denoted by \(\lim_{n\to\mathcal{U}}a_{n}\) and defined to be a number \(a\in\mathbb{R}\) such that for every \(\varepsilon>0\),
\[\{n:|a-a_{n}|<\varepsilon\}\in\mathcal{U}.\]
One can show that if \(\{a_{n}\}_{n\in\mathbb{N}}\) is a bounded sequence, then this limit always exists and it is unique.
Given a sequence of pointed Banach spaces \(\{\mathbb{E}_{n}\}_{n\in\mathbb{N}}\), the \(\ell^{\infty}\) product \(\ell^{\infty}(\mathbb{N},\{\mathbb{E}_{n}\}_{n\in\mathbb{N}})\) is the space of bounded sequences, i.e.,
\[\ell^{\infty}(\mathbb{N},\{\mathbb{E}_{n}\}_{n\in\mathbb{N}})=\{\{\xi_{n}\in \mathbb{E}_{n}\}_{n\in\mathbb{N}}:\sup_{n}\lVert\xi_{n}\rVert<\infty\}.\]
The _ultraproduct_ of the sequence \(\{\mathbb{E}_{n}\}_{n\in\mathbb{N}}\) is defined as follows: we define
\[N_{\mathcal{U}}=\{\{\xi_{n}\}_{n\in\mathbb{N}}\in\ell^{\infty}(\mathbb{N},\{ \mathbb{E}_{n}\}_{n\in\mathbb{N}}):\lim_{n\to\mathcal{U}}\|\xi_{n}\|=0\},\]
and the ultraproduct is defined as
\[\{\mathbb{E}_{n}\}_{\mathcal{U}}=\ell^{\infty}(\mathbb{N},\{\mathbb{E}_{n}\}_ {n\in\mathbb{N}})/N_{\mathcal{U}},\]
with the norm
\[\|\{\xi_{n}\}_{n\in\mathbb{N}}\|=\lim_{n\to\mathcal{U}}\|\xi_{n}\|.\]
We note that \(\{\mathbb{E}_{n}\}_{\mathcal{U}}\) is a Banach space and for a class of Banach spaces \(\mathcal{E}\), we say that \(\mathcal{E}\) is _closed under passing to ultraproducts_ if for every sequence \(\{\mathbb{E}_{n}\}_{n\in\mathbb{N}}\subseteq\mathcal{E}\) it holds that \(\{\mathbb{E}_{n}\}_{\mathcal{U}}\in\mathcal{E}\).
Let \(\Gamma\) be a finitely generated group and fix a finite generating set \(S\). For an affine isometric action \(\rho\) of \(\Gamma\) on a Banach space \(\mathbb{E}\), we define the displacement function \(\operatorname{disp}_{\rho,S}:\mathbb{E}\to[0,\infty)\) as
\[\operatorname{disp}_{\rho,S}(\xi)=\max_{s\in S}\|\xi-\rho(s)\xi\|.\]
We note that the action \(\rho\) has a fixed point if and only if there is \(\xi_{0}\in\mathbb{E}\) such that \(\operatorname{disp}_{\rho,S}(\xi_{0})=0\).
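As a toy numerical illustration of the displacement function (ours, with made-up generators), consider an affine isometric action on \(\mathbb{R}^{2}\) where each generator acts by an orthogonal matrix followed by a translation:

```python
# Toy illustration (assumed example, not from the paper):
# disp_{rho,S}(xi) = max_{s in S} || xi - rho(s) xi || for an affine
# isometric action rho(s) xi = A_s @ xi + b_s on R^2, with A_s orthogonal.
import numpy as np

def displacement(xi, generators):
    """generators: list of (A, b) with A orthogonal; returns disp_{rho,S}(xi)."""
    return max(np.linalg.norm(xi - (A @ xi + b)) for A, b in generators)

theta = np.pi / 3
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
S = [(rot, np.array([1.0, 0.0])),            # a rotation followed by a shift
     (np.eye(2), np.array([0.0, 2.0]))]      # a pure translation

print(displacement(np.zeros(2), S))   # displacement at the origin
# A fixed point exists iff some xi makes this quantity zero; the pure
# translation above forces disp >= 2, so this toy action has no fixed point.
```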
Let \(\{\mathbb{E}_{n}\}_{n\in\mathbb{N}}\) be a sequence of Banach spaces and \(\{\rho_{n}\}_{n\in\mathbb{N}}\) a sequence of affine isometric actions of \(\Gamma\) on \(\mathbb{E}_{n}\) such that
\[\sup_{n}\operatorname{disp}_{\rho_{n},S}(0_{n})<\infty,\]
where \(0_{n}\in\mathbb{E}_{n}\) denotes the origin of \(\mathbb{E}_{n}\). In this setting, there is an affine isometric action \(\rho_{\mathcal{U}}\) of \(\Gamma\) on the ultraproduct \(\{\mathbb{E}_{n}\}_{\mathcal{U}}\) induced by the action of \(\Gamma\) on \(\ell^{\infty}(\mathbb{N},\{\mathbb{E}_{n}\}_{n\in\mathbb{N}})\).
An affine isometric action \(\rho\) will be called _uniform with respect to \(S\)_ (or _\(S\)-uniform_) if
\[\inf_{\xi\in\mathbb{E}}\operatorname{disp}_{\rho,S}(\xi)\geq 1.\]
Given a class of Banach space \(\mathcal{E}\), we define the class of uniform \(S\)-actions as
\[\mathcal{C}^{S-\operatorname{uniform}}_{\mathcal{E}}=\{(\mathbb{E},\rho): \mathbb{E}\in\mathcal{E},\rho\text{ is uniform with respect to }S\}.\]
The class \(\mathcal{C}^{S-\operatorname{uniform}}_{\mathcal{E}}\) can be thought of as a class of actions that are "uniformly far" from having a fixed point. What will be important to us in the discussion below is that if \(\mathcal{E}\) is closed under passing to ultraproducts, then so is \(\mathcal{C}^{S-\operatorname{uniform}}_{\mathcal{E}}\), i.e., if \(\mathcal{E}\) is closed under passing to ultraproducts, then given a sequence \(\{(\mathbb{E}_{n},\rho_{n})\}_{n\in\mathbb{N}}\subseteq\mathcal{C}^{S- \operatorname{uniform}}_{\mathcal{E}}\) with
\[\sup_{n}\operatorname{disp}_{\rho_{n},S}(0_{n})<\infty,\]
then \((\{\mathbb{E}_{n}\}_{\mathcal{U}},\rho_{\mathcal{U}})\in\mathcal{C}^{S- \operatorname{uniform}}_{\mathcal{E}}\).
The following result is proved (in much greater generality) in [10] (where it is attributed to Gromov):
**Theorem 6.1**.: _[_10_, Theorem 1.5]_ _Let \(\Gamma\) be a finitely generated group with a finite generating set \(S\) and \(\mathcal{E}\) be a class of Banach spaces that is closed under passing to ultraproducts. If there are \(\mathbb{E}\in\mathcal{E}\) and an affine isometric action \(\rho\) of \(\Gamma\) on \(\mathbb{E}\) such that \(\mathbb{E}^{\rho(\Gamma)}=\emptyset\), then \(\mathcal{C}^{S-\operatorname{uniform}}_{\mathcal{E}}\neq\emptyset\)._
_In other words, if \(\mathcal{E}\) is closed under passing to ultraproducts and there is an action of \(\Gamma\) on some \(\mathbb{E}\in\mathcal{E}\) that does not admit a fixed point, then there is an \(S\)-uniform action of \(\Gamma\) on a space in \(\mathcal{E}\)._
### Our synthesis machinery
We start with two results which were proven by Shalom [2] in the Hilbert setting and by Mimura [10] in the Banach setting (we include the proofs for completeness).
**Theorem 6.2**.: _Let \(\Gamma\) be a discrete group with finite Abelianization and \(N,H_{+},H_{-}<\Gamma\) be subgroups such that \(N\) normalizes both \(H_{+}\) and \(H_{-}\) and \(H_{+},H_{-}\) generate \(\Gamma\). Also, let \(\mathbb{E}\) be a uniformly convex Banach space such that the pairs \((\Gamma,H_{+}),(\Gamma,H_{-})\) have relative property \((F_{\mathbb{E}})\). Given an affine isometric action \(\rho\) of \(\Gamma\) denote_
\[D_{\rho}=\inf_{\xi^{+}\in\mathbb{E}^{\rho(H_{+})},\xi^{-}\in\mathbb{E}^{\rho(H_{-})}}\|\xi^{+}-\xi^{-}\|.\]
_Assume that there are \(\xi^{+}\in\mathbb{E}^{\rho(H_{+})},\xi^{-}\in\mathbb{E}^{\rho(H_{-})}\) such that \(\|\xi^{+}-\xi^{-}\|=D_{\rho}\). Then \(\xi^{+}\in\mathbb{E}^{\rho(\langle H_{+},N\rangle)}\) and \(\xi^{-}\in\mathbb{E}^{\rho(\langle H_{-},N\rangle)}\)._
Proof.: Let \(\pi\) denote the linear part of \(\rho\) and \(c\) denote its cocycle. By [1, Proposition 2.6], there are \(\pi\)-invariant subspaces \(\mathbb{E}^{\pi(\Gamma)},\mathbb{E}^{\prime}\) such that \(\mathbb{E}=\mathbb{E}^{\pi(\Gamma)}\oplus\mathbb{E}^{\prime}\). We decompose the cocycle \(c=c_{0}+c^{\prime}\) where \(c_{0}:\Gamma\to\mathbb{E}^{\pi(\Gamma)}\) and \(c^{\prime}:\Gamma\to\mathbb{E}^{\prime}\) (note that \(c_{0},c^{\prime}\) are cocycles). It follows that \(c_{0}:\Gamma\to\mathbb{E}^{\pi(\Gamma)}\) is a group homomorphism between \(\Gamma\) and \(\mathbb{E}^{\pi(\Gamma)}\). Since \(\Gamma\) has a finite Abelianization, it follows that \(c_{0}\equiv 0\). We will first show that for every \(g\in\Gamma\) and every \(\xi\in\mathbb{E}\), it holds that
\[\rho(g)\xi-\xi\in\mathbb{E}^{\prime}\,.\]
Indeed, fix some \(\xi\in\mathbb{E}\) and decompose \(\xi=\xi_{0}+\xi^{\prime}\), where \(\xi_{0}\in\mathbb{E}^{\rho(\Gamma)}\) and \(\xi^{\prime}\in\mathbb{E}^{\prime}\). Then
\[\rho(g)\xi-\xi =\pi(g)\xi+c(g)-\xi\] \[=\pi(g)(\xi_{0}+\xi^{\prime})+c^{\prime}(g)-(\xi_{0}+\xi^{\prime})\] \[=\xi_{0}+\pi(g)\xi^{\prime}+c^{\prime}(g)-(\xi_{0}+\xi^{\prime})\] \[=\pi(g)\xi^{\prime}+c^{\prime}(g)-\xi^{\prime}\in\mathbb{E}^{ \prime}.\]
We need to prove that for every \(g\in N\) it holds that \(\rho(g)\xi^{\pm}=\xi^{\pm}\). By the fact that \(\rho(g)\xi^{\pm}-\xi^{\pm}\in\mathbb{E}^{\prime}\), this is equivalent to showing that for every \(g\in N\), it holds that \(\rho(g)\xi^{\pm}-\xi^{\pm}\in\mathbb{E}^{\pi(\Gamma)}\).
Fix \(g\in N\). We assumed that \(N\) normalizes \(H_{+},H_{-}\) and thus \(\rho(g)\xi^{\pm}\in\mathbb{E}^{\rho(H_{\pm})}\). By the fact that \(\rho\) is an isometry, it follows that
\[\|\rho(g)\xi^{+}-\rho(g)\xi^{-}\|=\|\xi^{+}-\xi^{-}\|=D_{\rho}.\]
We note that if \(\|(\rho(g)\xi^{+}-\rho(g)\xi^{-})-(\xi^{+}-\xi^{-})\|>0\), then by strict convexity it follows that
\[\|\frac{1}{2}(\rho(g)\xi^{+}-\rho(g)\xi^{-})+\frac{1}{2}(\xi^{+}-\xi^{-})\|<D _{\rho}\]
and thus for \(\eta^{\pm}=\frac{\xi^{\pm}+\rho(g)\xi^{\pm}}{2}\in\mathbb{E}^{\rho(H_{\pm})}\) it holds that
\[\|\eta^{+}-\eta^{-}\|<D_{\rho},\]
which is a contradiction to the definition of \(D_{\rho}\). Thus
\[(\rho(g)\xi^{+}-\rho(g)\xi^{-})-(\xi^{+}-\xi^{-})=0\]
and it follows that
\[\rho(g)\xi^{+}-\xi^{+}=\rho(g)\xi^{-}-\xi^{-}.\]
We note that \(\rho(g)\xi^{+},\xi^{+}\in\mathbb{E}^{\rho(H_{+})}\) and thus \(\rho(g)\xi^{+}-\xi^{+}\in\mathbb{E}^{\pi(H_{+})}\). Similarly, \(\rho(g)\xi^{-}-\xi^{-}\in\mathbb{E}^{\pi(H_{-})}\). Thus from the equality \(\rho(g)\xi^{+}-\xi^{+}=\rho(g)\xi^{-}-\xi^{-}\), it follows that
\[\rho(g)\xi^{\pm}-\xi^{\pm}\in\mathbb{E}^{\pi(H_{+})}\cap\mathbb{E}^{\pi(H_{-})}=\mathbb{E}^{\pi(\langle H_{+},H_{-}\rangle)}=\mathbb{E}^{\pi(\Gamma)}\]
as needed.
**Theorem 6.3**.: _Let \(\Gamma\) be a finitely generated group with a finite generating set \(S\) and \(H_{+},H_{-}<\Gamma\) subgroups that generate \(\Gamma\). Also, let \(\mathcal{E}\) be a class of Banach spaces that is closed under passing to ultraproducts such that \(\mathcal{C}^{S-\text{uniform}}_{\mathcal{E}}\neq\emptyset\). Assume that the pairs \((\Gamma,H_{+})\), \((\Gamma,H_{-})\) have relative property \((F_{\mathcal{E}})\). For every \((\mathbb{E},\rho)\in\mathcal{C}^{S-\text{uniform}}_{\mathcal{E}}\) we denote_
\[D_{\rho}=\inf_{\xi^{+}\in\mathbb{E}^{\rho(H_{+})},\xi^{-}\in\mathbb{E}^{\rho( H_{-})}}\|\xi^{+}-\xi^{-}\|\]
_and further denote_
\[D=\inf_{(\mathbb{E},\rho)\in\mathcal{C}^{S-\text{uniform}}_{\mathcal{E}}}D_{ \rho}.\]
_Then there is \((\mathbb{E},\rho)\in\mathcal{C}^{S-\text{uniform}}_{\mathcal{E}}\) and \(\xi^{\pm}\in\mathbb{E}^{\rho(H_{\pm})}\) such that \(\|\xi^{+}-\xi^{-}\|=D\)._
Proof.: Let \(\{(\mathbb{E}_{n},\rho_{n})\}_{n\in\mathbb{N}}\subseteq\mathcal{C}^{S-\text{ uniform}}_{\mathcal{E}}\) such that for every \(n\), \(D_{\rho_{n}}\leq D+\frac{1}{2n}\). Thus, for every \(n\), there are \(\xi^{+}_{n}\in\mathbb{E}^{\rho_{n}(H_{+})},\xi^{-}_{n}\in\mathbb{E}^{\rho_{n}(H _{-})}\) such that
\[\|\xi^{+}_{n}-\xi^{-}_{n}\|\leq D+\frac{1}{n}\leq D+1.\]
By translating the origin of \(\mathbb{E}_{n}\), we can assume that for every \(n\), \(\xi^{+}_{n}=0_{n}\).
Denote \(\mathbb{E}_{\mathcal{U}}=\{\mathbb{E}_{n}\}_{\mathcal{U}}\) to be the ultraproduct. We note that the sequences \(\{\xi^{+}_{n}\}_{n\in\mathbb{N}}\) and \(\{\xi^{-}_{n}\}_{n\in\mathbb{N}}\) are in \(\ell^{\infty}(\mathbb{N},\{\mathbb{E}_{n}\}_{n\in\mathbb{N}})\), i.e., that these are bounded sequences. Indeed, we translated the origin such that for every \(n\), \(\|\xi^{+}_{n}\|=0\) and
\[\|\xi^{-}_{n}\|=\|\xi^{-}_{n}-\xi^{+}_{n}\|\leq D+1.\]
Denote \(\xi^{+}_{\infty}\) and \(\xi^{-}_{\infty}\) to be the \(\mathcal{U}\)-limits of these sequences, then \(\|\xi^{+}_{\infty}-\xi^{-}_{\infty}\|=D\). We are left to check that the action \(\rho_{\mathcal{U}}\) on \(\mathbb{E}_{\mathcal{U}}\) is well-defined (and it will readily follow that \(\xi^{+}_{\infty}\in\mathbb{E}^{\rho_{\mathcal{U}}(H_{+})}_{\mathcal{U}}\) and \(\xi^{-}_{\infty}\in\mathbb{E}^{\rho_{\mathcal{U}}(H_{-})}_{\mathcal{U}}\)). Thus, it is left to verify that for every \(g\in S\),
\[\sup_{n}\|\rho_{n}(g)0_{n}\|<\infty.\]
Fix \(g\in S\). We will show that for every \(h\in H_{+}\cup H_{-}\) it holds for every \(n\) that
\[\|\rho_{n}(h)0_{n}\|\leq 2(D+1).\]
If \(h\in H_{+}\), then \(\rho_{n}(h)0_{n}=0_{n}\) and thus \(\|\rho_{n}(h)0_{n}\|=0\). If \(h\in H_{-}\), then
\[\|\rho_{n}(h)0_{n}\|\leq\|\rho_{n}(h)0_{n}-\rho_{n}(h)\xi^{-}_{n}\|+\|\rho_{n }(h)\xi^{-}_{n}\|=2\|\xi^{-}_{n}\|\leq 2(D+1).\]
Thus, by Proposition 2.5, it follows that
\[\sup_{n\in\mathbb{N}}\lVert\rho_{n}(g)0_{n}\rVert\leq 2(D+1)|g|_{H_{+}\cup H_{-}}\]
as needed.
After this preparation, we prove a general criterion for property \((F_{\mathcal{E}})\):
**Theorem 6.4**.: _Let \(\Gamma\) be a finitely generated group with a finite Abelianization and \(\mathcal{E}\) a class of uniformly convex Banach spaces that is closed under passing to ultraproducts. Also, let \(\vec{\mathcal{G}}\) be a directed graph with a vertex set \(V\) such that for every \(u,v\in V,u\neq v\), there are directed paths from \(u\) to \(v\) and from \(v\) to \(u\). Assume that for every \(v\in V\) there are subgroups \(N^{v},H_{+}^{v},H_{-}^{v}<\Gamma\) such that the following holds:_
1. _For every_ \(v\in V\)_,_ \(N^{v}\) _normalizes_ \(H_{+}^{v}\) _and_ \(H_{-}^{v}\)_._
2. _For every_ \(v\in V\)_,_ \(\langle H_{+}^{v},H_{-}^{v}\rangle=\Gamma\)_._
3. _For_ \(u,v\in V\)_, if_ \(u\to v\)_, then_ \(H_{\pm}^{v}<\langle H_{\pm}^{u},N^{u}\rangle\)_._
4. _It holds that_ \[\langle H_{+}^{v},N^{v}:v\in V\rangle=\Gamma.\]
5. _For every_ \(v\in V\)_, the pairs_ \((\Gamma,H_{+}^{v}),(\Gamma,H_{-}^{v})\) _have relative property_ \((F_{\mathcal{E}})\)_._
_Then \(\Gamma\) has property \((F_{\mathcal{E}})\)._
Proof.: To ease the reading, we will denote \(u\to\to v\) if there is a directed path from \(u\) to \(v\).
Fix \(S\) to be a finite generating set of \(\Gamma\). Assume towards contradiction that \(\Gamma\) does not have property \((F_{\mathcal{E}})\). By Theorem 6.1, it follows that \(\mathcal{C}_{\mathcal{E}}^{S-\mathrm{uniform}}\neq\emptyset\). For every \((\mathbb{E},\rho)\in\mathcal{C}_{\mathcal{E}}^{S-\mathrm{uniform}}\) and every \(v\in V\), we denote
\[D_{\rho}^{v}=\inf_{\xi^{+}\in\mathbb{E}^{\rho(H_{+}^{v})},\xi^{-}\in\mathbb{E} ^{\rho(H_{-}^{v})}}\lVert\xi^{+}-\xi^{-}\rVert\]
and further denote
\[D^{v}=\inf_{(\mathbb{E},\rho)\in\mathcal{C}_{\mathcal{E}}^{S-\mathrm{uniform }}}D_{\rho}^{v}.\]
Let \(u,v\in V\) such that \(u\to v\). By Theorem 6.3, there are \((\mathbb{E},\rho)\in\mathcal{C}_{\mathcal{E}}^{S-\mathrm{uniform}}\), \(\xi^{+}\in\mathbb{E}^{\rho(H_{+}^{u})}\) and \(\xi^{-}\in\mathbb{E}^{\rho(H_{-}^{u})}\) such that \(\lVert\xi^{+}-\xi^{-}\rVert=D^{u}\). By Theorem 6.2, it follows that \(\xi^{\pm}\in\mathbb{E}^{\rho(\langle H_{\pm}^{u},N^{u}\rangle)}\). Since, by our assumptions, \(H_{\pm}^{v}<\langle H_{\pm}^{u},N^{u}\rangle\), it follows that \(\xi^{\pm}\in\mathbb{E}^{\rho(H_{\pm}^{v})}\) and thus
\[D^{u}=\lVert\xi^{+}-\xi^{-}\rVert\geq^{\xi^{\pm}\in\mathbb{E}^{\rho(H_{\pm}^{ v})}}D_{\rho}^{v}\geq D^{v}.\]
By induction, if \(u\to\to v\), then \(D^{u}\geq D^{v}\). By our assumptions, for every \(u,v\in V,u\neq v\), it holds that \(u\to\to v\) and \(v\to\to u\). It follows that for every \(u,v\in V\), \(D^{u}=D^{v}\).
Fix \(v_{0}\in V\). By Theorem 6.3, there are \((\mathbb{E},\rho)\in\mathcal{C}_{\mathcal{E}}^{S-\mathrm{uniform}}\), \(\xi^{+}\in\mathbb{E}^{\rho(H_{+}^{v_{0}})}\) and \(\xi^{-}\in\mathbb{E}^{\rho(H_{-}^{v_{0}})}\) such that \(\lVert\xi^{+}-\xi^{-}\rVert=D^{v_{0}}\). By Theorem 6.2, it follows that \(\xi^{\pm}\in\mathbb{E}^{\rho(\langle H_{\pm}^{v_{0}},N^{v_{0}}\rangle)}\). We will show that for every \(v\in V\), \(\xi^{\pm}\in\mathbb{E}^{\rho(\langle H_{\pm}^{v},N^{v}\rangle)}\).
Let \(v\in V\). We will say that \(\mathrm{dist}_{\vec{\mathcal{G}}}(v_{0},v)=n\) where \(n\) is the smallest integer such that there are \(v_{0},v_{1},...,v_{n}=v\) and \(v_{i}\to v_{i+1}\) for every \(0\leq i\leq n-1\). We will show that for every \(v\in V\), \(\xi^{\pm}\in\mathbb{E}^{\rho(\langle H_{\pm}^{v},N^{v}\rangle)}\) by induction on \(\mathrm{dist}_{\vec{\mathcal{G}}}(v_{0},v)\). For \(n=0\), we already showed that
\(\xi^{\pm}\in\mathbb{E}^{\rho(\langle H^{v_{0}}_{\pm},N^{v_{0}}\rangle)}\). Assume that for \(n-1\), if \(\operatorname{dist}_{\vec{\mathcal{G}}}(v_{0},u)=n-1\), then \(\xi^{\pm}\in\mathbb{E}^{\rho(\langle H^{u}_{\pm},N^{u}\rangle)}\). Let \(v\in V\) such that \(\operatorname{dist}_{\vec{\mathcal{G}}}(v_{0},v)=n\). Then there is \(u\in V\) such that \(\operatorname{dist}_{\vec{\mathcal{G}}}(v_{0},u)=n-1\) and \(u\to v\). Since, by our assumptions, \(H^{v}_{\pm}<\langle H^{u}_{\pm},N^{u}\rangle\), it follows that \(\xi^{\pm}\in\mathbb{E}^{\rho(H^{v}_{\pm})}\) and thus
\[D^{v}=D^{v_{0}}=\|\xi^{+}-\xi^{-}\|{\geq}^{\xi^{\pm}\in\mathbb{E}^{\rho(H^{v}_ {\pm})}}\,D^{v}_{\rho}\geq D^{v}.\]
Thus \(D^{v}_{\rho}=D^{v}\) and by Theorem 6.2, it follows that \(\xi^{\pm}\in\mathbb{E}^{\rho(\langle H^{v}_{\pm},N^{v}\rangle)}\) as needed.
Therefore we showed that
\[\xi^{+}\in\bigcap_{v\in V}\mathbb{E}^{\rho(\langle H^{v}_{+},N^{v}\rangle)}=\mathbb{E}^{\rho(\langle H^{v}_{+},N^{v}:v\in V\rangle)}=\mathbb{E}^{\rho(\Gamma)}\,.\]
This is a contradiction to the fact that \(\rho\) is \(S\)-uniform and thus \(\mathbb{E}^{\rho(\Gamma)}=\emptyset\).
As a consequence, we deduce the following machinery for the class of all uniformly convex Banach spaces:
**Theorem 6.5**.: _Let \(\Gamma\) be a finitely generated group with a finite Abelianization. Also, let \(\vec{\mathcal{G}}\) be a directed graph with a vertex set \(V\) such that for every \(u,v\in V,u\neq v\), there are directed paths from \(u\) to \(v\) and from \(v\) to \(u\). Assume for every \(v\in V\) there are subgroups \(N^{v},H^{v}_{+},H^{v}_{-}<\Gamma\) such that the following holds:_
1. _For every_ \(v\in V\)_,_ \(N^{v}\) _normalizes_ \(H^{v}_{+}\) _and_ \(H^{v}_{-}\)_._
2. _For every_ \(v\in V\)_,_ \(\langle H^{v}_{+},H^{v}_{-}\rangle=\Gamma\)_._
3. _For_ \(u,v\in V\)_, if_ \(u\to v\)_, then_ \(H^{v}_{\pm}<\langle H^{u}_{\pm},N^{u}\rangle\)_._
4. _It holds that_ \[\langle H^{v}_{+},N^{v}:v\in V\rangle=\Gamma.\]
5. _For every_ \(v\in V\)_, the pairs_ \((\Gamma,H^{v}_{+}),(\Gamma,H^{v}_{-})\) _have relative property_ \((F_{\mathcal{E}_{uc}})\)_._
_Then \(\Gamma\) has property \((F_{\mathcal{E}_{uc}})\)._
Proof.: We note that we cannot apply Theorem 6.4 directly for \(\mathcal{E}=\mathcal{E}_{uc}\), because \(\mathcal{E}_{uc}\) is not closed under passing to ultraproducts. However, \(\mathcal{E}_{uc}\) can be written as a union of classes of Banach spaces that are closed under passing to ultraproducts. Explicitly, for \(\delta_{0}:(0,2]\to(0,1]\), we denote \(\mathcal{E}_{uc}(\delta_{0})\) to be the class of all uniformly convex Banach spaces with moduli of convexity \(\geq\delta_{0}\), i.e., \(\mathbb{E}\in\mathcal{E}_{uc}(\delta_{0})\) if it is uniformly convex with modulus of convexity \(\delta:(0,2]\to(0,1]\) such that for every \(0<\varepsilon\leq 2\), it holds that \(\delta(\varepsilon)\geq\delta_{0}(\varepsilon)\). We note that
\[\mathcal{E}_{uc}=\bigcup_{\delta_{0}:(0,2]\to(0,1]}\mathcal{E}_{uc}(\delta_{0})\]
and for every \(\delta_{0}:(0,2]\to(0,1]\), the class \(\mathcal{E}_{uc}(\delta_{0})\) is closed under passing to ultraproducts.
By our assumptions, for every \(\delta_{0}:(0,2]\to(0,1]\), the conditions of Theorem 6.4 hold for \(\mathcal{E}=\mathcal{E}_{uc}(\delta_{0})\) and thus for every \(\delta_{0}:(0,2]\to(0,1]\), \(\Gamma\) has property \((F_{\mathcal{E}_{uc}(\delta_{0})})\). It readily follows that \(\Gamma\) has property \((F_{\mathcal{E}_{uc}})\).
Theorem 6.4 and Theorem 6.5 have the following variation, which uses a non-directed graph and appeared in the introduction (Theorem 1.4 above):
**Theorem 6.6**.: _Let \(\Gamma\) be a finitely generated group with a finite Abelianization and \(\mathcal{E}\) a class of uniformly convex Banach spaces such that either \(\mathcal{E}\) is closed under passing to ultraproducts or \(\mathcal{E}=\mathcal{E}_{uc}\). Also, let \(\mathcal{G}\) be a (non-directed) connected graph with a vertex set \(V\). Assume that for every \(v\in V\) there are subgroups \(N^{v},H^{v}_{+},H^{v}_{-}<\Gamma\) such that the following holds:_
1. _For every_ \(v\in V\)_,_ \(N^{v}\) _normalizes_ \(H^{v}_{+}\) _and_ \(H^{v}_{-}\)_._
2. _For every_ \(v\in V\)_,_ \(\langle H^{v}_{+},H^{v}_{-}\rangle=\Gamma\)_._
3. _If_ \(u,v\in V\) _such that_ \(u\sim v\)_, then_ \(H^{u}_{\pm}<\langle H^{v}_{\pm},N^{v}\rangle,\) _and_ \(H^{v}_{\pm}<\langle H^{u}_{\pm},N^{u}\rangle.\)__
4. _It holds that_ \[\langle H^{v}_{+},N^{v}:v\in V\rangle=\Gamma.\]
5. _For every_ \(v\in V\)_, the pairs_ \((\Gamma,H^{v}_{+}),(\Gamma,H^{v}_{-})\) _have relative property_ \((F_{\mathcal{E}})\)_._
_Then \(\Gamma\) has property \((F_{\mathcal{E}})\)._
Proof.: We define \(\vec{\mathcal{G}}\) to be the directed graph with the vertex set \(V\) such that \(u\to v\) and \(v\to u\) in \(\vec{\mathcal{G}}\) if and only if \(u\sim v\) in \(\mathcal{G}\). For this graph, we can apply Theorem 6.4 when \(\mathcal{E}\) is closed under passing to ultraproducts or Theorem 6.5 when \(\mathcal{E}=\mathcal{E}_{uc}\) and deduce that \(\Gamma\) has property \((F_{\mathcal{E}})\).
## 7. Synthesis of the fixed point property for groups graded by root systems
Groups graded by root systems were defined by Ershov, Jaikin-Zapirain and Kassabov [1] as a generalization of Steinberg groups over (classical) root systems. The aim of this section is to prove that, for groups that are (strongly) graded by a root system, relative property \((F_{\mathcal{E}_{uc}})\) of the root subgroups implies property \((F_{\mathcal{E}_{uc}})\) of the entire group.
**Definition 7.1**.: Let \(\Gamma\) be a group and \(\Phi\) be a root system in \(E\). We say that \(\Gamma\) is _graded by \(\Phi\)_ if there are subgroups \(\{K_{\alpha}\}_{\alpha\in\Phi}\) of \(\Gamma\) such that the following holds:
* \(\langle K_{\alpha}:\alpha\in\Phi\rangle=\Gamma\).
* For every \(\alpha,\beta\in\Phi\), \(\beta\notin\mathbb{R}_{<0}\alpha\), \[[K_{\alpha},K_{\beta}]\subseteq\langle K_{\gamma}:\gamma=a\alpha+b\beta\in\Phi,a,b\geq 1\rangle.\]
If the above holds, we say that \(\langle K_{\alpha}:\alpha\in\Phi\rangle\) is a grading of \(\Gamma\). Moreover, we say that \(\Gamma\) is _strongly graded by \(\Phi\)_ (and that \(\langle K_{\alpha}:\alpha\in\Phi\rangle\) is a strong grading of \(\Gamma\)), if it is graded by \(\Phi\) and for every \(\Phi_{1}\in\mathrm{Borel}(\Phi)\) and every \(\beta\in\mathrm{Core}(\Phi_{1})\) it holds that
\[K_{\beta}\subseteq\langle K_{\gamma}:\gamma\in\Phi_{1}\setminus(\mathbb{R}_{>0 }\beta)\rangle.\]
The main example for groups graded by root systems are the Steinberg groups:
**Proposition 7.2**.: _[_1_, Proposition 7.7]_ _Let \(\Phi\) be a reduced irreducible classical root system of rank \(\geq 2\) and \(R\) a commutative ring. Also, let \(\mathrm{St}_{\Phi}(R)\) be the Steinberg group and \(\{K_{\alpha}(R):\alpha\in\Phi\}\) be the root subgroups of \(\mathrm{St}_{\Phi}(R)\). Then \(\mathrm{St}_{\Phi}(R)\) is strongly graded by \(\{K_{\alpha}(R):\alpha\in\Phi\}\)._
Let \(\Phi\) be a root system in \(E\) and \(\Gamma\) be a group graded by \(\Phi\). For a non-empty subset \(A\subseteq\Phi\), we denote
\[K_{A}=\langle K_{\alpha}:\alpha\in A\rangle.\]
**Lemma 7.3**.: _Let \(\Phi\) be a root system in \(E\) and \(\Gamma\) a group that is graded by \(\Phi\). Also, let \(\Phi_{1}\in\mathrm{Borel}(\Phi)\) and \(\alpha\in\partial\Phi_{1}\). Then \(K_{\Phi\cap\mathbb{R}_{>0}\alpha}\) normalizes \(K_{\Phi_{1}\setminus\mathbb{R}\alpha}\)._
Proof.: Let \(\alpha^{\prime}\in\Phi\cap\mathbb{R}_{>0}\alpha\) and \(\beta\in\Phi_{1}\setminus\mathbb{R}\alpha\). Then
\[[K_{\alpha^{\prime}},K_{\beta}]\subseteq\langle K_{\gamma}:\gamma=a\alpha^{\prime}+b\beta\in\Phi,a,b\geq 1\rangle\subseteq^{\mathrm{Lemma\;2.12}(2)}K_{\mathrm{Core}(\Phi_{1})}\subseteq K_{\Phi_{1}\setminus\mathbb{R}\alpha}.\]
**Corollary 7.4**.: _Let \(\Phi\) be a root system in \(E\) and \(\Gamma\) a group that is graded by \(\Phi\). Also, let \(\Phi_{1},\Phi_{2}\in\mathrm{Borel}(\Phi)\) that are co-maximal. Then \(K_{(\Phi_{1}\cup\Phi_{2})\setminus(\Phi_{1}\cap\Phi_{2})}\) normalizes \(K_{\Phi_{1}\cap\Phi_{2}}\) and \(K_{-(\Phi_{1}\cap\Phi_{2})}\)._
Proof.: We assumed that \(\Phi_{1},\Phi_{2}\) are co-maximal and thus there is \(\alpha\in\partial\Phi_{1}\) such that \(-\alpha\in\partial\Phi_{2}\) and
\[(\Phi_{1}\cup\Phi_{2})\setminus(\Phi_{1}\cap\Phi_{2})=\Phi\cap\mathbb{R}\alpha.\]
By Lemma 7.3, \(K_{\Phi\cap\mathbb{R}_{>0}\alpha}\) normalizes \(K_{\Phi_{1}\setminus\mathbb{R}_{>0}\alpha}=K_{\Phi_{1}\cap\Phi_{2}}\). Also, by Lemma 7.3, \(K_{\Phi\cap\mathbb{R}_{>0}(-\alpha)}=K_{\Phi\cap\mathbb{R}_{<0}\alpha}\) normalizes \(K_{\Phi_{2}\setminus\mathbb{R}_{>0}(-\alpha)}=K_{\Phi_{1}\cap\Phi_{2}}\). Thus, we conclude that \(K_{(\Phi_{1}\cup\Phi_{2})\setminus(\Phi_{1}\cap\Phi_{2})}\) normalizes \(K_{\Phi_{1}\cap\Phi_{2}}\).
It follows that \(K_{(\Phi_{1}\cup\Phi_{2})\setminus(\Phi_{1}\cap\Phi_{2})}\) also normalizes \(K_{-(\Phi_{1}\cap\Phi_{2})}=K_{(-\Phi_{1})\cap(-\Phi_{2})}\), since
\[(\Phi_{1}\cup\Phi_{2})\setminus(\Phi_{1}\cap\Phi_{2})=((-\Phi_{1})\cup(-\Phi _{2}))\setminus((-\Phi_{1})\cap(-\Phi_{2})).\]
**Lemma 7.5**.: _Let \(\Phi\) be a regular root system in \(E\) and \(\Gamma\) a group that is strongly graded by \(\Phi\). Also, let \(\Phi_{1},\Phi_{2}\in\mathrm{Borel}(\Phi)\) that are co-maximal. Then \(\langle K_{\Phi_{1}\cap\Phi_{2}},K_{-(\Phi_{1}\cap\Phi_{2})}\rangle=\Gamma\)._
Proof.: It is sufficient to prove that for every \(\beta\in(\Phi_{1}\cup\Phi_{2})\setminus(\Phi_{1}\cap\Phi_{2})\), it holds that
\[K_{\beta}<\langle K_{\Phi_{1}\cap\Phi_{2}},K_{-(\Phi_{1}\cap\Phi_{2})}\rangle.\]
Fix \(\beta\in(\Phi_{1}\cup\Phi_{2})\setminus(\Phi_{1}\cap\Phi_{2})\). By Lemma 2.12 (3), there is \(\Phi_{3}\in\mathrm{Borel}(\Phi)\) such that \(\beta\in\mathrm{Core}(\Phi_{3})\). By the assumption of strong grading,
\[K_{\beta}\subseteq\langle K_{\gamma}:\gamma\in\Phi_{3}\setminus(\mathbb{R}_{> 0}\beta)\rangle<\langle K_{\Phi_{1}\cap\Phi_{2}},K_{-(\Phi_{1}\cap\Phi_{2})}\rangle\]
as needed.
**Lemma 7.6**.: _Let \(\Phi\) be a root system in \(E\) and \(\Gamma\) be a group graded by \(\Phi\). For every \(\Phi_{1}\in\mathrm{Borel}(\Phi)\), the group \(K_{\Phi_{1}}\) is boundedly generated by \(K_{\alpha},\alpha\in\Phi_{1}\)._
Proof.: Fix \(\Phi_{1}\in\mathrm{Borel}(\Phi)\) and let \(f\in\mathfrak{F}(\Phi_{1})\) such that \(\Phi_{f}=\Phi_{1}\). We index the roots in \(\Phi_{f}\) according to their \(f\)-value: \(\Phi_{f}=\{\alpha_{1},...,\alpha_{n}\}\) such that for every \(1\leq i<j\leq n\), it holds that \(f(\alpha_{i})<f(\alpha_{j})\). We observe that for every \(1\leq i<j\leq n\), it holds that
\[[K_{\alpha_{i}},K_{\alpha_{j}}]\subseteq\langle K_{\gamma}:\gamma=a\alpha_{i}+b\alpha_{j}\in\Phi,a,b\geq 1\rangle<\langle K_{\gamma}:\gamma\in\Phi,f(\gamma)>f(\alpha_{i})\rangle.\]
Thus, \(K_{\alpha_{i}}\) normalizes \(K_{\{\alpha_{i+1},...,\alpha_{n}\}}\) and it follows that
\[K_{\{\alpha_{i},\alpha_{i+1},...,\alpha_{n}\}}\subseteq K_{\alpha_{i}}K_{\{ \alpha_{i+1},...,\alpha_{n}\}}.\]
By induction, we conclude that
\[K_{\Phi_{1}}=K_{\{\alpha_{1},...,\alpha_{n}\}}\subseteq K_{\alpha_{1}}...K_{ \alpha_{n}}\]
and in particular, \(K_{\Phi_{1}}\) is boundedly generated by \(K_{\alpha_{1}},...,K_{\alpha_{n}}\).
Using the lemmas above, we will prove the following theorem that was mentioned in the introduction:
**Theorem 7.7**.: _Let \(\Phi\) be a regular root system in \(E\) and \(\Gamma\) a group that is strongly graded by \(\Phi\). Also, let \(\mathcal{E}\) be a class of uniformly convex Banach spaces such that either \(\mathcal{E}\) is closed under passing to ultraproducts or \(\mathcal{E}=\mathcal{E}_{uc}\). If for every \(\alpha\in\Phi\), the pair \((\Gamma,K_{\alpha})\) has relative property \((F_{\mathcal{E}})\), then \(\Gamma\) has property \((F_{\mathcal{E}})\)._
Proof.: We first note that by Lemma 2.12 (3) and by the assumption that \(\Gamma\) is strongly graded by \(\Phi\), it follows for every \(\alpha\in\Phi\) that \(K_{\alpha}<[\Gamma,\Gamma]\) and since \(\Gamma=\langle K_{\alpha}:\alpha\in\Phi\rangle\) it follows that \(\Gamma\) has a trivial Abelianization.
The idea of the proof is to use the machinery given in Theorem 6.6, i.e., to define a connected graph \(\mathcal{G}\) such that for every vertex \(v\) of \(\mathcal{G}\) there are groups \((H^{v}_{+},H^{v}_{-},N^{v})\) that fulfil conditions (1)-(5) given in Theorem 6.6.
The graph \(\mathcal{G}\) we will use will be the line graph of \(\mathcal{G}_{\text{co-max}}\) defined above. We recall that given a graph \(\mathcal{G}^{\prime}\), the line graph of \(\mathcal{G}^{\prime}\), denoted \(\text{Line}(\mathcal{G}^{\prime})\), is the graph that represents the adjacencies between the edges of \(\mathcal{G}^{\prime}\). Explicitly,
\[V(\text{Line}(\mathcal{G}^{\prime}))=\{\{u,v\}:u,v\in V(\mathcal{G}^{\prime}) \text{ and }u\sim^{\mathcal{G}^{\prime}}v\}\]
and for \(\{u,v\},\{u^{\prime},v^{\prime}\}\in\text{Line}(\mathcal{G}^{\prime}),\{u,v\}\neq\{u^{\prime},v^{\prime}\}\) it holds that \(\{u,v\}\sim^{\text{Line}(\mathcal{G}^{\prime})}\{u^{\prime},v^{\prime}\}\) if and only if \(\{u,v\}\cap\{u^{\prime},v^{\prime}\}\neq\emptyset\). It is not hard to see that if \(\mathcal{G}^{\prime}\) is connected, then so is \(\text{Line}(\mathcal{G}^{\prime})\).
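As a small sanity check of this construction (our sketch, assuming the networkx package is available), the line graph and the preservation of connectivity can be computed directly:

```python
# Small sketch (ours, assuming networkx): build the line graph of a connected
# graph and check that it is connected as well.
import networkx as nx

G = nx.cycle_graph(5)                 # stand-in for G_co-max; any connected graph
L = nx.line_graph(G)                  # vertices of L are the edges {u, v} of G

# Two edges of G are adjacent in L iff they share an endpoint.
assert all(set(e1) & set(e2) for e1, e2 in L.edges())
assert nx.is_connected(G) and nx.is_connected(L)
print(sorted(L.nodes()))
```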
Let \(\mathcal{G}=\text{Line}(\mathcal{G}_{\text{co-max}})\), i.e.,
\[V(\mathcal{G})=\{\{\Phi_{1},\Phi_{2}\}:\Phi_{1},\Phi_{2}\in\text{Borel}(\Phi) \text{ and }\Phi_{1},\Phi_{2}\text{ are co-maximal}\}.\]
We note that by Lemma 2.13, the graph \(\mathcal{G}_{\text{co-max}}\) is connected and thus \(\mathcal{G}\) is connected.
For \(\{\Phi_{1},\Phi_{2}\}\in V(\mathcal{G})\), we define
\[H^{\{\Phi_{1},\Phi_{2}\}}_{+}=K_{\Phi_{1}\cap\Phi_{2}},H^{\{\Phi_{1},\Phi_{2} \}}_{-}=K_{-(\Phi_{1}\cap\Phi_{2})},N^{\{\Phi_{1},\Phi_{2}\}}=K_{(\Phi_{1}\cup \Phi_{2})\setminus(\Phi_{1}\cap\Phi_{2})}.\]
We will complete the proof by checking that these subgroups fulfil the conditions of Theorem 6.6:
1. For every \(\{\Phi_{1},\Phi_{2}\}\in V(\mathcal{G})\), it holds by Corollary 7.4 that \(N^{\{\Phi_{1},\Phi_{2}\}}\) normalizes \(H^{\{\Phi_{1},\Phi_{2}\}}_{+}\) and \(H^{\{\Phi_{1},\Phi_{2}\}}_{-}\).
2. For every \(\{\Phi_{1},\Phi_{2}\}\in V(\mathcal{G})\), it holds by Lemma 7.5 that \(\langle H^{\{\Phi_{1},\Phi_{2}\}}_{+},H^{\{\Phi_{1},\Phi_{2}\}}_{-}\rangle=\Gamma\).
3. Let \(\{\Phi_{1},\Phi_{2}\},\{\Phi^{\prime}_{1},\Phi^{\prime}_{2}\}\in V(\mathcal{G})\) such that \(\{\Phi_{1},\Phi_{2}\}\neq\{\Phi^{\prime}_{1},\Phi^{\prime}_{2}\}\) and \(\{\Phi_{1},\Phi_{2}\}\sim^{\mathcal{G}}\{\Phi^{\prime}_{1},\Phi^{\prime}_{2}\}\). Then without loss of generality, \(\Phi_{1}=\Phi^{\prime}_{1}\). Thus, \[H^{\{\Phi_{1},\Phi_{2}\}}_{\pm}=K_{\pm(\Phi_{1}\cap\Phi_{2})}<K_{\pm\Phi_{1}}<\langle K_{\pm(\Phi_{1}\cap\Phi^{\prime}_{2})},K_{(\Phi_{1}\cup\Phi^{\prime}_{2})\setminus(\Phi_{1}\cap\Phi^{\prime}_{2})}\rangle=\langle H^{\{\Phi_{1},\Phi^{\prime}_{2}\}}_{\pm},N^{\{\Phi_{1},\Phi^{\prime}_{2}\}}\rangle,\] and similarly \[H^{\{\Phi_{1},\Phi^{\prime}_{2}\}}_{\pm}<\langle H^{\{\Phi_{1},\Phi_{2}\}}_{\pm},N^{\{\Phi_{1},\Phi_{2}\}}\rangle.\]
4. Fix some \(\{\Phi_{1}^{\prime},\Phi_{2}^{\prime}\}\in V(\mathcal{G})\). We note that \(\{-\Phi_{1}^{\prime},-\Phi_{2}^{\prime}\}\in V(\mathcal{G})\) and thus \[\langle H_{+}^{\{\Phi_{1},\Phi_{2}\}},N^{\{\Phi_{1},\Phi_{2}\}}:\{ \Phi_{1},\Phi_{2}\}\in V(\mathcal{G})\rangle \supseteq\langle H_{+}^{\{\Phi_{1}^{\prime},\Phi_{2}^{\prime}\}},H_{+}^{\{-\Phi_{1}^{\prime},-\Phi_{2}^{\prime}\}},N^{\{\Phi_{1}^{\prime},\Phi_ {2}^{\prime}\}}\rangle\] \[=\langle K_{\Phi_{1}^{\prime}\cup\Phi_{2}^{\prime}},K_{-(\Phi_{1} ^{\prime}\cap\Phi_{2}^{\prime})},K_{(\Phi_{1}^{\prime}\cup\Phi_{2}^{\prime}) \setminus(\Phi_{1}^{\prime}\cap\Phi_{2}^{\prime})}\rangle\] \[=\Gamma.\]
5. We note that \(H_{+}^{\{\Phi_{1},\Phi_{2}\}}<K_{\Phi_{1}}\) and \(H_{-}^{\{\Phi_{1},\Phi_{2}\}}<K_{-\Phi_{1}}\). Thus, in order to prove that pairs \((\Gamma,H_{+}^{\{\Phi_{1},\Phi_{2}\}})\) and \((\Gamma,H_{-}^{\{\Phi_{1},\Phi_{2}\}})\) have relative property \((F_{\mathcal{E}})\), it is enough to prove that for every \(\Phi_{3}\in\mathrm{Borel}(\Phi)\), the pair \((\Gamma,K_{\Phi_{3}})\) has relative property \((F_{\mathcal{E}})\). Fix \(\Phi_{3}\in\mathrm{Borel}(\Phi)\). By Lemma 7.6, it holds that \(K_{\Phi_{3}}\) is boundedly generated by \(K_{\alpha},\alpha\in\Phi_{3}\). We assumed that for every \(\alpha\in\Phi\), the pair \((\Gamma,K_{\alpha})\) has relative property \((F_{\mathcal{E}})\) and thus, by Lemma 2.6, it follows that the pair \((\Gamma,K_{\Phi_{3}})\) has relative property \((F_{\mathcal{E}})\) as needed.
As an application, we derive the following result for Steinberg groups that appeared in the introduction as Theorem 1.5:
**Theorem 7.8**.: _Let \(\Phi\) be a classical reduced irreducible root system of rank \(\geq 2\), \(R\) a commutative ring, \(\mathrm{St}_{\Phi}(R)\) the Steinberg group and \(\{K_{\alpha}:\alpha\in\Phi\}\) the root subgroups of \(\mathrm{St}_{\Phi}(R)\)._
_If for every \(\alpha\in\Phi\), the pair \((\mathrm{St}_{\Phi}(R),K_{\alpha})\) has relative property \((F_{\mathcal{E}_{uc}})\), then \(\mathrm{St}_{\Phi}(R)\) has property \((F_{\mathcal{E}_{uc}})\)._
Proof.: We note that \(\Phi\) is regular and that by Proposition 7.2, \(\mathrm{St}_{\Phi}(R)\) is strongly graded by the root subgroups. Thus, property \((F_{\mathcal{E}_{uc}})\) follows from Theorem 7.7.
## 8. Banach fixed point properties for Steinberg groups and elementary Chevalley groups
Combining our relative fixed point property results with our synthesis argument implies our main result:
**Theorem 8.1**.: _Let \(\Phi\) be a classical reduced irreducible root system of rank \(\geq 2\) such that \(\Phi\neq C_{2}\). For every finitely generated commutative (unital) ring \(R\), the groups \(\mathrm{St}_{\Phi}(R)\) and \(\mathrm{EL}_{\Phi}(R)\) have property \((F_{\mathcal{E}_{uc}})\)._
Proof.: From Theorem 5.9 and Theorem 7.8 it follows that \(\mathrm{St}_{\Phi}(R)\) has property \((F_{\mathcal{E}_{uc}})\). Since \(\mathrm{EL}_{\Phi}(R)\) is a quotient of \(\mathrm{St}_{\Phi}(R)\) and property \((F_{\mathcal{E}_{uc}})\) is preserved under quotients, it follows that \(\mathrm{EL}_{\Phi}(R)\) has property \((F_{\mathcal{E}_{uc}})\).
## 9. Super-expanders
In [15], Mendel and Naor defined the notion of super-expanders. We recall their definition here. Let \(\mathbb{E}\) be a Banach space and \(\{(V_{i},E_{i})\}_{i\in\mathbb{N}}\) be a sequence of finite graphs with uniformly bounded degree, such that \(\lim_{i}|V_{i}|=\infty\). We say that \(\{(V_{i},E_{i})\}_{i\in\mathbb{N}}\) has a
_Poincare inequality with respect to \(\mathbb{E}\)_ if there are constants \(p,\gamma\in(0,\infty)\) such that for every \(i\in\mathbb{N}\) and every \(f:V_{i}\rightarrow\mathbb{E}\) we have
\[\frac{1}{|V_{i}|^{2}}\sum_{(u,v)\in V_{i}\times V_{i}}\|f(u)-f(v)\|^{p}\leq\frac {\gamma}{|V_{i}|}\sum_{(x,y)\in E_{i}}\|f(x)-f(y)\|^{p}.\]
The sequence \(\{(V_{i},E_{i})\}_{i\in\mathbb{N}}\) is called a _super-expander family_ if it has a Poincare inequality with respect to every uniformly convex Banach space.
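To make the inequality concrete, the following sketch (ours; the graph and the map \(f\) are arbitrary stand-ins) evaluates both sides for a single finite graph with \(p=2\) and \(\mathbb{E}=\mathbb{R}^{d}\), and reports the smallest \(\gamma\) for which the inequality holds for that particular \(f\):

```python
# Illustrative sketch (not from the paper): evaluate both sides of the
# Poincare inequality for one graph, one f: V -> R^d and p = 2.
import numpy as np

def poincare_ratio(edges, f, p=2):
    """edges: list of (x, y); f: array of shape (|V|, d). Returns the smallest
    gamma with (1/|V|^2) sum_{u,v} ||f(u)-f(v)||^p <= (gamma/|V|) sum_E ||.||^p."""
    n = len(f)
    lhs = sum(np.linalg.norm(f[u] - f[v]) ** p for u in range(n) for v in range(n)) / n**2
    rhs = sum(np.linalg.norm(f[x] - f[y]) ** p for x, y in edges) / n
    return lhs / rhs

rng = np.random.default_rng(0)
n, d = 8, 3
edges = [(i, (i + 1) % n) for i in range(n)]      # an 8-cycle as a toy graph
f = rng.normal(size=(n, d))
print(poincare_ratio(edges, f))   # gamma must be at least this value for this f
```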
Given a Banach space \(\mathbb{E}\), property \((T_{\mathbb{E}})\) is a group property defined in [1] as a Banach version of property (T). We will not recall this definition here, but only recall the following result:
**Theorem 9.1**.: _[_1_, Theorem 1.3]_ _Let \(\Gamma\) be a locally compact, second countable group. For a Banach space \(\mathbb{E}\), if \(\Gamma\) has property \((F_{\mathbb{E}})\), then \(\Gamma\) has property \((T_{\mathbb{E}})\)._
For Cayley graphs, the following proposition of Lafforgue gives a relation between the Poincare inequality for Cayley graphs and Banach property \((T)\):
**Proposition 9.2**.: _[_1_, Proposition 5.2]_ _Let \(\Gamma\) be a finitely generated discrete group and let \(\Gamma_{i}\) be a sequence of finite quotients of \(\Gamma\) with epimorphisms \(\phi_{i}:\Gamma\rightarrow\Gamma_{i}\) such that \(|\Gamma_{i}|\rightarrow\infty\). Also, let \(\mathbb{E}\) be a Banach space. If \(\Gamma\) has Banach property \((T_{\mathbb{E}^{\prime}})\) for \(\mathbb{E}^{\prime}=\ell^{2}(\bigcup_{i}\Gamma_{i};\mathbb{E})\), then for every fixed finite symmetric generating set \(S\) of \(\Gamma\), the family of Cayley graphs of \(\{(\Gamma_{i},\phi_{i}(S))\}_{i\in\mathbb{N}}\) has a Poincare inequality with respect to \(\mathbb{E}\)._
Combining this result with our Theorem 1.1 implies the following result that appeared in the introduction (Theorem 1.6):
**Theorem 9.3**.: _Let \(n\geq 3,m\in\mathbb{N}\) and let \(\{R_{i}\}_{i\in\mathbb{N}}\) be a sequence of finite commutative (unital) rings such that for each \(i\), \(R_{i}\) is generated by \(r_{0}^{(i)}=1,r_{1}^{(i)},...,r_{m}^{(i)}\in R_{i}\) and \(|R_{i}|\rightarrow\infty\). Also, let \(\Phi\neq C_{2}\) be a classical reduced irreducible root system of rank \(\geq 2\). Denote by \(\phi_{i}:\mathrm{St}_{\Phi}(\mathbb{Z}[t_{1},...,t_{m}])\rightarrow\mathrm{EL}_{\Phi}(R_{i})\) the epimorphisms induced by the ring epimorphisms \(\mathbb{Z}[t_{1},...,t_{m}]\to R_{i},1\mapsto r_{0}^{(i)},t_{j}\mapsto r_{j}^{(i)},\forall 1\leq j\leq m\)._
_For a finite generating set \(S\) of \(\mathrm{St}_{\Phi}(\mathbb{Z}[t_{1},...,t_{m}])\) it holds that the family of Cayley graphs of \(\{(\mathrm{EL}_{\Phi}(R_{i}),\phi_{i}(S))\}_{i\in\mathbb{N}}\) is a super-expander family._
Proof.: Let \(\mathbb{E}\) be some uniformly convex Banach space. By [1, Theorem 3], \(\ell^{2}(\mathrm{EL}_{\Phi}(R_{i});\mathbb{E})\) is a uniformly convex Banach space. It follows from Theorems 1.1, 9.1 and Proposition 9.2 that the family of Cayley graphs of \(\{(\mathrm{EL}_{\Phi}(R_{i}),\phi_{i}(S))\}_{i\in\mathbb{N}}\) has a Poincare inequality with respect to \(\mathbb{E}\). Since \(\mathbb{E}\) was an arbitrary uniformly convex Banach space, this family is a super-expander family.
|
2305.10503
|
OR-NeRF: Object Removing from 3D Scenes Guided by Multiview Segmentation
with Neural Radiance Fields
|
The emergence of Neural Radiance Fields (NeRF) for novel view synthesis has
increased interest in 3D scene editing. An essential task in editing is
removing objects from a scene while ensuring visual reasonability and multiview
consistency. However, current methods face challenges such as time-consuming
object labeling, limited capability to remove specific targets, and compromised
rendering quality after removal. This paper proposes a novel object-removing
pipeline, named OR-NeRF, that can remove objects from 3D scenes with user-given
points or text prompts on a single view, achieving better performance in less
time than previous works. Our method spreads user annotations to all views
through 3D geometry and sparse correspondence, ensuring 3D consistency with
less processing burden. Then recent 2D segmentation model Segment-Anything
(SAM) is applied to predict masks, and a 2D inpainting model is used to
generate color supervision. Finally, our algorithm applies depth supervision
and perceptual loss to maintain consistency in geometry and appearance after
object removal. Experimental results demonstrate that our method achieves
better editing quality with less time than previous works, considering both
quality and quantity.
|
Youtan Yin, Zhoujie Fu, Fan Yang, Guosheng Lin
|
2023-05-17T18:18:05Z
|
http://arxiv.org/abs/2305.10503v3
|
OR-NeRF: Object Removing from 3D Scenes Guided by Multiview Segmentation with Neural Radiance Fields
###### Abstract
The emergence of Neural Radiance Fields (NeRF) for novel view synthesis has led to increased interest in 3D scene editing. One important task in editing is removing objects from a scene while ensuring visual reasonability and multiview consistency. However, current methods face challenges such as time-consuming object labelling, limited capability to remove specific targets, and compromised rendering quality after removal. This paper proposes a novel object-removing pipeline, named OR-NeRF, that can remove objects from 3D scenes with either **point** or **text** prompts on a **single** view, achieving better performance in less time than previous works. Our method uses a points projection strategy to rapidly spread user annotations to all views, significantly reducing the processing burden. This algorithm allows us to leverage the recent 2D segmentation model Segment-Anything (SAM) to predict masks with improved precision and efficiency. Additionally, we obtain colour and depth priors through 2D inpainting methods. Finally, our algorithm employs depth supervision and perceptual loss for scene reconstruction to maintain consistency in geometry and appearance after object removal. Experimental results demonstrate that our method achieves better editing quality with less time than previous works, considering both quality and quantity.
## 1 Introduction
Neural Radiance Fields (NeRF) [27] has demonstrated excellent results in reconstructing 3D scenes, and recent works [30; 46; 51; 45; 48] have aimed to extend its capabilities to editing 3D scenes. One of the essential editing operations is removing objects from a 3D scene, which has garnered significant interest from the research community [47; 44; 43; 29; 9]. However, the practical application of this task faces several challenges. One of the most significant obstacles is the accurate localization of unwanted objects. Although it is natural for humans to identify unwanted objects, asking users to label every view is impractical. Additionally, ensuring multiview consistency and plausible content after deletion without any ground truth is not trivial.
Several works have tried to address the above problems but remain unsatisfactory. Object-NeRF [47] and ObjectSDF [44] decompose the NeRF [27] training into background and objects branches,
allowing them to render specified objects controlled by object ID. However, because of the lack of supervision for the removed part, neither of these works [47; 44] can guarantee a plausible completion at the removal area. Weder et al. [43] and SPIn-NeRF [29] use the 2D inpainting method [38] to generate colour and depth priors after deletion and reconstruct NeRF [27] from these priors directly. Although editing quality has improved, Weder et al. [43] require all views' masks to realize, while SPIn-NeRF [29] uses a series of segmentation preliminaries [10; 3; 55; 54] which even involves network training to generate masks for each scene with intensive time. DFFs [17] applies pre-trained language models [32; 3] to enable text-prompt editing by training NeRF [27] to align feature vectors extracted from language models [32; 3], eliminating the need for masks. However, it has difficulty locating regions to remove if the pre-trained object detector [32; 3] does not work appropriately.
In this paper, we propose a novel pipeline called OR-NeRF that enables free object removal from 3D scenes using either points or text prompts on a single image, requiring less time for multiview segmentation and achieving better performance than previous methods. To spread the points prompt on a single view to other views, we introduce a point projection strategy that utilizes the COLMAP [35] sparse reconstruction to find correspondences from 2D points to the 3D sparse point cloud and further projects the 3D points to all 2D images with camera parameters. This results in precise sparse points annotations for all scene views, which can be directly input to the recent 2D segmentation model Segment-Anything (SAM) [16] to predict masks. Masks are generated at approximately two frames per second, whereas previous works like SPIn-NeRF [29] require minutes. Following the approach of Weder et al. [43] and SPIn-NeRF [29], we use the 2D inpainting model LaMa [38] to get colour priors for the removal area. We develop our scene object removal algorithm using TensoRF [4] as the backbone with depth supervision and perceptual loss. TensoRF [4] is a state-of-the-art model that improves rendering quality while balancing time and performance. This approach enables us to reconstruct the 3D scene after object removal with superior editing quality compared to existing methods. Fig 1 shows an overview of our OR-NeRF framework.
We evaluate our method on various datasets and analyze its performance in multiview segmentation and scene object removal through quality and quantity analyses. In summary, our contributions are (1) A novel pipeline for efficient object removal from 3D scenes, allowing for both points and text prompts on a single image; and (2) Experimental results demonstrate that our method achieves better editing quality and requires less time for multiview segmentation than previous methods, as evidenced by both quality and quantity analyses.
## 2 Related Work
**Multiview Segmentation.** Though segmentation in 2D is well studied, multiview segmentation for 3D scenes has received less attention despite its non-negligible importance for downstream applications like 3D editing. Several self-supervised methods [7; 24] have been proposed, but they often produce inaccurate masks and have difficulty handling complex scenes. To mitigate these challenges, semi-supervised strategies [40; 8; 54; 29] have emerged that require partial annotations or reasonable prompts from users. Semantic NeRF [54] propagates partial labels to dense semantic segmentation by leveraging a few in-place annotations via predicting semantic labels with volume rendering. Similar to NeRF [27], SPIn-NeRF [29] further constructs a thorough pipeline to generate masks for all views with points prompt on a single image. They [29] use one-shot segmentation [10] to estimate an initial mask, followed by a video segmentation [3; 38] to generate masks for all views by treating the image sequence as a video. Finally, they [29] use Semantic NeRF [54] to refine the masks. However, the above approaches require network training, which consumes considerable resources and does not guarantee an accurate mask, as errors can accumulate with complicated frameworks.
**Scene Object Removal.** NeRF [27] has greatly facilitated the area of 3D scene editing, and research [1; 2; 25; 36] focuses on various editing types emerging in large numbers. Works exist for texture editing [5; 45], geometry editing [30; 46; 51], and object-centred editing [34; 50; 9; 44; 47], such as removal [21; 43; 29], and even enabling multiple manipulations [19; 17; 41; 23; 57; 49; 28; 18]. [44; 47] decompose NeRF [27] training into background and object branches, allowing for rendering specified objects controlled by assigned object IDs. However, they generate 'black holes' at the removal region as there is no supervision or priors for the deletion part during training. [21; 43; 29] utilize the 2D inpainting method [38] to obtain priors for the removal part and directly reconstruct
the scene after deletion from these priors. Though achieving better rendering quality, these methods have demanding preconditions, such as annotating or generating masks for all views, which incur high time and hardware costs. Additionally, [17, 28, 9] combine pre-trained language models [3, 32, 20, 39] to enable text editing, thus bypassing the requirement for masks. Still, the rendering quality in the removal region is poor, as no algorithms are designed for learning pixel values after deletion.
## 3 Background
### Neural Radiance Fields
Given 3D location \(\textbf{x}=(x,y,z)\) and 2D viewing direction \(\textbf{d}=(\theta,\phi)\), NeRF [27] models the 3D scene implicitly with an MLP network which gives a mapping function \(F_{\Theta}:(\textbf{x},\textbf{d})\rightarrow(\textbf{c},\sigma)\). The output **c** stands for the radiance and \(\sigma\) for volume density at this point, respectively. To optimize weights \(\Theta\), the volume rendering approach [15] is introduced as:
\[C(\textbf{r})=\int_{t_{n}}^{t_{f}}T(t)\sigma(\textbf{r}(t))\textbf{c}(\textbf {r}(t),\textbf{d})dt,\text{ where }T(t)=\exp\left(-\int_{t_{n}}^{t}\sigma( \textbf{r}(s))ds\right)\,. \tag{1}\]
In Eq (1), \(C(\textbf{r})\) represents the pixel value and is calculated by integrating the radiance value **c** along the ray \(\textbf{r}(t)=\textbf{o}+t\textbf{d}\) starting from the camera position **o** with direction **d** pointing to the pixel, within near and far bounds \(t_{n}\) and \(t_{f}\). The function \(T(t)\) denotes the accumulated transmittance along the ray from \(t_{n}\) to \(t\). With the above definitions, NeRF [27] trains the network by minimizing the total squared error between rendered pixels and ground truth.
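For intuition, Eq (1) is evaluated in practice by numerical quadrature along each ray; a minimal numpy sketch of this standard discretization (our simplification, with placeholder densities and colours standing in for the MLP outputs) is:

```python
# Minimal sketch of the discretized volume rendering in Eq (1); sigma and c
# stand in for MLP outputs at sampled points along one ray (toy values, ours).
import numpy as np

def render_ray(sigma, color, t):
    """sigma: (N,), color: (N, 3), t: (N,) sample depths along the ray."""
    delta = np.diff(t, append=t[-1] + 1e10)          # distances between samples
    alpha = 1.0 - np.exp(-sigma * delta)             # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # T(t_i)
    weights = trans * alpha
    return (weights[:, None] * color).sum(axis=0)    # estimated pixel value C(r)

t = np.linspace(2.0, 6.0, 64)
sigma = np.ones(64) * 0.5
color = np.tile(np.array([0.8, 0.2, 0.1]), (64, 1))
print(render_ray(sigma, color, t))
```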
### SPIn-NeRF
SPIn-NeRF [29] proposes a comprehensive pipeline for removing objects from 3D scenes. In addition to a set of sparse view images with their corresponding camera parameters, SPIn-NeRF [29] takes a few points on one view annotated by users indicating the unwanted objects as a prompt. With these inputs, SPIn-NeRF [29] first combines a series of segmentation methods [10, 3, 55, 54] to obtain masks for all views. Then a 2D image inpainting model [38] is used to generate colour and depth priors in the mask area. The scene after deletion can be reconstructed with a modified version of
Figure 1: An overview of our OR-NeRF framework. We start with a set of sparse images and either points or text prompts. If a text prompt is used, we convert it into a points prompt by sampling points from the initial mask estimated using Grounded-SAM [13] (Sec 4.1.2). Next, we propagate the points annotations to all views by projecting them from 2D to the 3D point cloud and then back to 2D (Sec 4.1.1). We utilize SAM [16] to predict masks using these points annotations, and LaMa [38] is used to obtain colour and depth priors. Finally, the scene after removal is reconstructed using Neural Radiance Fields supervised by colour (Eq (3)), depth (Eq (5)), and perceptual (Eq (6)) cues simultaneously (Sec 4.2).
vanilla NeRF [27] from these priors directly, which adds depth supervision [6] and perceptual loss [14] to constrain the geometry and appearance consistency across different views.
In the mask generation stage, an initial mask from the single-view annotation is obtained using the one-shot segmentation [10] method. The video segmentation approach [55; 3] that follows provides coarse masks for all views by wrapping images into a video sequence. Finally, the coarse masks are fine-tuned to generate proper masks for all views by fitting the Semantic NeRF [54]. This procedure even requires training the Semantic NeRF [54] from scratch to refine coarse masks obtained from [55; 3], resulting in significant costs in terms of time and hardware. Fig 2 shows the difference in mask generation between our pipeline and SPIn-NeRF [29].
## 4 Method
Given a set of sparse-view images with their corresponding camera poses to be edited, our method requires users to provide either a points or a text prompt indicating the unwanted objects for only one image. Possible prompts are a few points marked on the object or words describing the target. To begin with, we find the masks of the unwanted objects in all images. For points input, we spread the initial points prompt to all images according to 3D geometric correspondences (Sec 4.1.1). For text input, we first apply Grounded-SAM [13] to produce an initial mask for the annotated single view and then sample points in this initial mask to convert the text prompt into the points-prompt pattern (Sec 4.1.2).
To continue, we utilize the SAM model [16] to predict masks with the points prompt and use the masks to guide the 2D inpainting model LaMa [38] to generate colour and depth priors. Finally, we describe our object-removal strategy, which guarantees geometry and appearance consistency across all the views (Sec 4.2). Fig 1 shows an overview of our framework for removing objects from 3D scenes with points or text prompts.
### Multiview Segmentation
#### 4.1.1 Points Prompt
Suppose we have a group of \(n\) images \(\mathcal{I}=\{I_{i}\}_{i=1}^{n}\) and their corresponding camera parameters \(\mathcal{C}=\{C_{i}\}_{i=1}^{n}\) gathered from a 3D scene. We aim to predict masks for all views \(\mathcal{I}\) from only a one-shot annotation. An intuitive approach is to generate annotations for the other images. We carefully investigate the geometric matching relations in 3D scenes and find that a 2D point in one view can be spread to other views by projecting it back to 3D space and then projecting the corresponding 3D point onto the 2D plane of each camera pose. For the 2D-to-3D pass, we can refer to the sparse point cloud reconstructed by COLMAP [35] and its projected discrete points group \(\mathcal{D}=\{D_{i}\}_{i=1}^{n}\) on all 2D images. This information is represented by a dedicated data structure in COLMAP's [35] sparse reconstruction as a unique one-to-one mapping, which allows us to locate points in 3D space by simply querying with 2D coordinates (more details about the data structure are provided in the supplementary material). However, this introduces a new problem: a mapping for the user's arbitrary input is not guaranteed to exist, as the reconstruction is sparse. We solve this by querying with the nearest existing points in the discrete points set \(\mathcal{D}\). Finally, for the 3D-to-2D reverse pass, we reproject the 3D points back onto the 2D plane of each view through its corresponding camera matrices. We can now spread the initial annotation provided by users to all other views safely and quickly, as this algorithm utilizes 3D information, which is
Figure 2: Comparison of mask generation between SPIn-NeRF [29] (first row) and ours (second row). Our method achieves rapid and precise mask generation for all views in a single step, supporting both points and text input. In contrast, SPIn-NeRF [29] exhibits slower speed, lower accuracy, and limited support for points prompt only, requiring three steps, including network training.
self-consistent and does not involve any neural network training. Only matrix computations are needed, and the algorithm generates masks at a speed of about two frames per second.
Specifically, we leverage the 3D geometry correspondence to calculate all views' annotations \(\mathcal{P}_{2d}=\{P_{ij}\}_{i=1,j=1}^{n,m}\) from the single prompt \(P_{1}\) provided by users, where \(P_{ij}=(x_{ij},y_{ij})\) and \(m\) stands for the number of points marked in an image. With \(\mathcal{P}_{2d}\), we can obtain masks \(\mathcal{M}=\{M_{i}\}_{i=1}^{n}\) for all views easily from the SAM model [16] \(\mathcal{F}_{S}\) by making inferences as \(\mathcal{M}=\mathcal{F}_{S}(\mathcal{I},\mathcal{P}_{2d})\). To realise this, we first initialise \(M_{1}\) with \(\mathcal{F}_{S}(I_{1},P_{1})\). Then we acquire points \(\mathcal{P}_{3d}=\{(x_{k},y_{k},z_{k})\}_{k=1}^{l}\) in 3D space by querying the 2D coordinates \(D_{1}^{*}=(M_{1}\cap D_{1})\). Note that \(l\) equals the number of points in \(D_{1}^{*}\), and we only refer to the points belonging to the mask \(M_{1}\) because the points annotation for all views must precisely match the unwanted objects. In practice, the nearest points are calculated after the 3D points have been projected to the 2D planes to ensure the amount and quality of the prompts (see supplementary materials for a detailed analysis).
Considering the 3D to 2D situation, we begin with camera parameters. For each view \(I_{i}\), the associated camera parameters \(C_{i}=\{\textbf{K}_{i},\textbf{P}_{i}\}\) are composed of the intrinsics **K** and extrinsics \(\textbf{P}=[\textbf{R}|\textbf{t}]\). Here, the extrinsic matrix **P** is represented by a \(3\times 3\) rotation matrix **R** (camera orientation) and a \(3\times 1\) translation vector **t** (camera position) that together transform a 3D point from the world coordinate system \(P_{w}=[X_{w},Y_{w},Z_{w}]^{T}\) to the camera coordinate system \(P_{c}=[X_{c},Y_{c},Z_{c}]^{T}=\textbf{R}P_{w}+\textbf{t}\). By substituting \(\mathcal{P}_{3d}\) for \(P_{w}\), we can switch the 3D points \(\mathcal{P}_{3d}\) from the world coordinate system to the camera coordinate system of every view simply as \(\mathcal{P}_{3d}^{*}=\{(x_{ik},y_{ik},z_{ik})\}_{i=1,k=1}^{n,l}=\textbf{R}_{i}\mathcal{P}_{3d}+\textbf{t}_{i}\). Here \(\mathcal{P}_{3d}^{*}\) denotes the camera coordinate system form. And with one little step forward:
\[\mathcal{P}_{2d}^{*}=\{P_{ik}\}_{i=1,k=1}^{n,l}=\{(x_{ik}/z_{ik},\,y_{ik}/z_{ik})\}_{i=1,k=1}^{n,l},\text{ where }(x,y,z)\in\mathcal{P}_{3d}^{*}\,, \tag{2}\]
we project the 3D points \(\mathcal{P}_{3d}^{*}\) back to all 2D views to get the corresponding pixel coordinates \(\mathcal{P}_{2d}^{*}\) in the images. Now we need to filter the number of points in each image from \(l\) to \(m\). To handle this issue, we spread the initial annotation \(P_{1}\) to all views by performing the above 2D-3D-2D projection on \(D_{1}^{*}\) similarly to get \(\mathcal{P}_{2d}^{{}^{\prime}}=\{P_{ij}\}_{i=1,j=1}^{n,m}\), and find the \(m\) nearest points to \(\mathcal{P}_{2d}^{{}^{\prime}}\) in \(\mathcal{P}_{2d}^{*}\) to construct \(\mathcal{P}_{2d}\). We keep the number of points in each view the same as in the user input to ensure mask quality (more details in supplementary materials). By now, we have all the annotations required for the prediction of SAM [16], and we can obtain masks for all views by calling \(\mathcal{M}=\mathcal{F}_{S}(\mathcal{I},\mathcal{P}_{2d})\).
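To make the 2D-3D-2D propagation concrete, the following NumPy sketch projects the mask-associated 3D points into every view and keeps, for each user click, the nearest projected candidate. It assumes the 3D points have already been retrieved from the COLMAP reconstruction, and all names are illustrative rather than taken from any released code:

```python
import numpy as np

def project_to_view(points_3d, R, t):
    """Project world-space points (l, 3) into one view (Eq (2)).

    R: (3, 3) rotation and t: (3,) translation of the camera extrinsics.
    Returns normalised image-plane coordinates (l, 2).
    """
    p_cam = points_3d @ R.T + t              # world -> camera coordinates
    return p_cam[:, :2] / p_cam[:, 2:3]      # (x/z, y/z) as in Eq (2)

def propagate_prompt(mask_points_3d, click_points_3d, cameras):
    """Spread the single-view annotation to all views (Sec 4.1.1).

    mask_points_3d  : (l, 3) 3D points queried from the annotated mask
    click_points_3d : (m, 3) 3D points matched to the m user clicks
    cameras         : list of (R, t) pairs, one per view
    Returns an (n, m, 2) array of points prompts, one prompt set per view.
    """
    prompts = []
    for R, t in cameras:
        candidates = project_to_view(mask_points_3d, R, t)   # P*_2d for this view
        references = project_to_view(click_points_3d, R, t)  # P'_2d for this view
        # for each projected click, keep the nearest candidate point
        dists = np.linalg.norm(candidates[None, :, :] - references[:, None, :], axis=-1)
        prompts.append(candidates[dists.argmin(axis=1)])
    return np.stack(prompts)
```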
#### 4.1.2 Text Prompt
For the text prompt, we leverage SAM's [16] variant Grounded-SAM [13], which combines SAM with the object detector Grounding DINO [22] that can handle text input. A natural way to deal with text is to ask Grounded-SAM [13] to predict all the views' masks with the same text input. However, we observe a considerable drop in inference speed when comparing Grounding DINO [22] to SAM [16] (see supplementary materials for quantitative evidence). Meanwhile, Grounded-SAM [13] can fail to handle some 'difficult' views due to Grounding DINO's [22] limited detection ability. Therefore, we consider a two-stage strategy where we first use Grounded-SAM [13] to obtain an initial mask for the single annotated view and then sample points from this mask. Finally, we use the points prompt method in Sec 4.1.1 to generate masks for the remaining views. This design ensures high-quality masks while minimizing computational costs.
Consider \(m\) words \(\mathcal{T}=\{T_{j}\}_{j=1}^{m}\) input from the user that describe the unwanted objects. For the input word sequence \(\mathcal{T}\) and images \(\mathcal{I}\), the Grounding DINO [22] model \(\mathcal{F}_{G}\) takes the prompt \(\mathcal{T}\) as labels and finds these labels' corresponding bounding boxes \(\mathcal{B}=\{B_{ij}\}_{i=1,j=1}^{n,m}\) in the images \(\mathcal{I}\) as \(\mathcal{B}=\mathcal{F}_{G}(\mathcal{I},\mathcal{T})\). As SAM [16] accepts two kinds of inputs, points or boxes, we can obtain the mask \(M_{1}\) of the unwanted objects in the user's annotated image \(I_{1}\) simply by forwarding SAM [16] with \(M_{1}=\mathcal{F}_{S}(I_{1},B_{1})\). With the one-shot mask \(M_{1}\), we sample a set of \(q\) points \(\hat{\mathcal{P}}=\{P_{k}=(x_{k},y_{k})\}_{k=1}^{q}\) from this mask to make the problem solvable by the points prompt method (Sec 4.1.1). To implement this, we traverse the points in the mask from left to right and top to bottom and choose the top-left point, the bottom-right point, and the centre point of the mask to construct the points prompt \(\hat{\mathcal{P}}\). The text prompt input has thus been converted into a points prompt, and we let the algorithm used for the points prompt in Sec 4.1.1 generate masks for all views.
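A minimal sketch of the point-sampling rule described above might look as follows; the scan order and the choice of the mask centroid as the third point are our reading of the description, and the names are illustrative:

```python
import numpy as np

def mask_to_points_prompt(mask):
    """Convert a single-view Grounded-SAM mask into a points prompt (Sec 4.1.2).

    mask: (H, W) boolean array. Returns three (x, y) points: the first and the
    last mask pixel in left-to-right, top-to-bottom scan order, and the centroid.
    """
    ys, xs = np.nonzero(mask)
    order = np.lexsort((xs, ys))          # sort by row, then by column
    top_left = (int(xs[order[0]]), int(ys[order[0]]))
    bottom_right = (int(xs[order[-1]]), int(ys[order[-1]]))
    centre = (int(xs.mean()), int(ys.mean()))
    return np.array([top_left, bottom_right, centre])
```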
### Scene Object Removal
Once we get object masks for all views, we can reconstruct a 3D scene without unwanted objects through Neural Radiance Fields by treating 2D inpainting priors as ground truth. Recall Sec 3.1, the network can be optimized by minimizing the colour loss:
\[\mathcal{L}_{c}=\Sigma_{\mathbf{r}\in\mathcal{R}}\|\hat{C}(\mathbf{r})-C( \mathbf{r})\|_{2}^{2}\,, \tag{3}\]
where \(\mathcal{R}\) is the set of rays in each training batch, \(\hat{C}(\mathbf{r})\) are the ground truth and \(C(\mathbf{r})\) are the rendered pixels by network outputs calculated through Eq (1), respectively.
However, relying solely on colour loss is inadequate, as LaMa [38] does not consider the 3D context, leading to inconsistency across different views. To address this issue, we introduce depth constraints [6] into the training of Neural Radiance Fields. Depth values \(D(\mathbf{r})\) can be obtained through volume rendering easily as:
\[D(\mathbf{r})=\int_{t_{n}}^{t_{f}}T(t)\sigma(\mathbf{r}(t))zdt,\text{ where }T(t)=\exp\left(-\int_{t_{n}}^{t}\sigma(\mathbf{r}(s))ds\right)\,. \tag{4}\]
where \(z\) is the distance from the current 3D location to the camera position. Like RGB images, we render depth images for the original scene without deletion and use LaMa [38] to get depth priors. Then, we add depth supervision to training as:
\[\mathcal{L}_{d}=\Sigma_{\mathbf{r}\in\mathcal{R}}\|\hat{D}(\mathbf{r})-D( \mathbf{r})\|_{2}^{2}\,, \tag{5}\]
where \(\hat{D}(\mathbf{r})\) are the depth ground truth. We further discuss the difference between using the whole-depth image as supervision and only querying the depth in the mask area in Sec 5.3.
In addition, we recognize that depth supervision alone only enforces geometric consistency across views, while the appearance may still exhibit inconsistency. To address this, we incorporate perceptual loss [14] to guide the network in learning a plausible colour distribution within the masked region, matching the style of the inpainted colour priors. We focus the perceptual loss [14] specifically on the masked area. This is because colour loss alone is sufficient for the non-masked area, as pixel values do not change after the deletion in this area. It is important to note that the perceptual loss is designed at the image level. In our implementation, we refer to the patch-level implementation from SPIn-NeRF [29], represented by the following equation:
\[\mathcal{L}_{p}=\frac{1}{B}\Sigma_{i=1}^{B}\,\mathrm{LPIPS}\big(I(\mathcal{P}_{i}),\hat{I}(\mathcal{P}_{i})\big)\,, \tag{6}\]
and adjust the patch sampling strategy (see supplementary materials) to fit the variety of data used in our experiments (Sec 5). In Equation (6), we first sample a patch \(\mathcal{P}_{i}\) from the mask and compute the perceptual (LPIPS) distance between the rendered pixels \(I(\mathcal{P}_{i})\) and the ground truth \(\hat{I}(\mathcal{P}_{i})\) within the patch. Batch training with a size of \(B\) can be employed. Finally, the training objective is to minimize the total loss \(\mathcal{L}\) defined as:
\[\mathcal{L}=a*\mathcal{L}_{c}+b*\mathcal{L}_{d}+c*\mathcal{L}_{p}\,, \tag{7}\]
where \(a\), \(b\), and \(c\) are tunable loss weights for the colour, depth, and perceptual loss, respectively.
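A compact PyTorch-style sketch of the combined objective in Eq (7) is given below. It uses the publicly available `lpips` package as a stand-in for the patch-level perceptual term of Eq (6); the loss weights and tensor shapes are placeholders rather than the values used in our experiments:

```python
import torch
import lpips  # perceptual similarity of Zhang et al., standing in for Eq (6)

perceptual = lpips.LPIPS(net='vgg')

def total_loss(pred_rgb, gt_rgb, pred_depth, gt_depth,
               pred_patch, gt_patch, a=1.0, b=0.1, c=0.01):
    """Combined objective of Eq (7).

    pred_rgb, gt_rgb     : (R, 3) rendered and inpainted pixel colours
    pred_depth, gt_depth : (R,)   rendered and inpainted depths
    pred_patch, gt_patch : (B, 3, P, P) patches sampled inside the mask,
                           scaled to [-1, 1] as expected by lpips
    a, b, c              : placeholder loss weights
    """
    l_c = ((pred_rgb - gt_rgb) ** 2).sum()          # colour term, Eq (3)
    l_d = ((pred_depth - gt_depth) ** 2).sum()      # depth term, Eq (5)
    l_p = perceptual(pred_patch, gt_patch).mean()   # patch-level perceptual term, Eq (6)
    return a * l_c + b * l_d + c * l_p
```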
## 5 Experiments
**Datasets** We select 12 scenes from various commonly used 3D reconstruction datasets, including the NeRF [27] LLFF data, IBRNet data [42], and LLFF real-world data [26]. Our scene selection aims to cover a wide range of scene variations and different types of removal operations (see supplementary materials for examples; our method allows removing multiple objects and partial objects like slogans, providing a high degree of flexibility). Since the reconstruction datasets do not provide ground truth for evaluation, we incorporate the SPIn-NeRF [29] dataset, which includes human-annotated object masks and captures of the scenes after object removal. We use all 10 scenes from the SPIn-NeRF [29] dataset to evaluate the quality of multiview segmentation. To evaluate the performance of scene object removal, we select 8 scenes, excluding two duplicate scenes, to ensure a diverse layout of the objects. In total, we conduct experiments on 20 scenes, comprehensively evaluating our OR-NeRF pipeline.
**Metrics** We adopt the evaluation metrics commonly used in segmentation tasks, including pixel-wise accuracy (Acc) and intersection over union (IoU), to assess the performance of our multiview segmentation algorithm. For the scene object removal component we report the peak signal-to-noise ratio (PSNR), a widely used 3D reconstruction metric. Additionally, we include two metrics used by SPIn-NeRF [29]: the learned perceptual image patch similarity (LPIPS) [53] and the Fréchet inception distance (FID) [12]. These metrics compare the similarity between the ground-truth data and the rendering outputs produced by our method.
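For reference, the two segmentation metrics can be computed per view as in the short sketch below (a straightforward implementation assuming binary masks; names are illustrative):

```python
import numpy as np

def mask_metrics(pred, gt):
    """Pixel-wise accuracy and IoU between a predicted and a ground-truth mask.

    pred, gt: (H, W) boolean arrays for one view.
    """
    acc = float((pred == gt).mean())
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    iou = float(inter) / float(union) if union > 0 else 1.0
    return acc, iou
```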
### Experiments Settings
**Multiview Segmentation** We conduct experiments using points and text prompts on our selected scenes and evaluate the results using the metrics stated in Sec 5. Since the implementation details of multiview segmentation were not explicitly provided in the SPIn-NeRF [29] paper, we directly use the metrics reported in their paper. However, it should be noted that the paper does not specify which scenes were used for calculating these metrics. Therefore, we compare the performance of SPIn-NeRF [29] with our scene-averaged results. Subsequently, we use the masks generated from the points prompt for all subsequent experiments.
**Scene Object Removal** We conduct experiments on all 20 scenes with our method and SPIn-NeRF [29]. Both the vanilla NeRF [27] and TensoRF [4] architectures are tested with our method's implementation; TensoRF [4] is a current SOTA model focused on improving rendering quality. We follow the implementation of SPIn-NeRF [29] to reproduce their results. For NeRF [27] and TensoRF [4], we train the original scenes to render depth maps instead of the disparity maps used in SPIn-NeRF [29]. This decision avoids errors that may arise from dividing by zero when calculating disparities.
### Multiview Segmentation
**Quantity** Table 1 compares mask generation between our method and SPIn-NeRF [29]. Our approach outperforms SPIn-NeRF [29] in both accuracy and IoU. SPIn-NeRF's [29] mask generation process involves a complex pipeline that introduces errors at each step and requires significant time and hardware resources. In contrast, our method leverages the simplicity of SAM [16] and involves minimal matrix calculations. Consequently, our multiview segmentation algorithm delivers higher-quality results in less time. Table 2 shows our estimated time for mask generation compared to SPIn-NeRF [29].
**Quality** Note that we have excluded the 'book' scene from the average calculation. This decision was made because we identified inaccuracies in the ground truth labels for this particular scene, as evident from Fig 3. Furthermore, as depicted in Fig 3, our segmentation results exhibit precise coverage of the target objects with intricate details, such as the crossing chair legs in the '12' scene. However, it should be noted that there is a minor flaw in the 'trash' scene, where our masks fail to cover all areas of the trash cans, explaining the low metrics in Table 1. We assert that this issue does
\begin{table}
\begin{tabular}{c c|c c c c c c c c c|c|c} \hline \hline & & 1 & 2 & 3 & 4 & 7 & 9 & 10 & 12 & trash & Mean & SPIn-NeRF[29] \\ \hline \multirow{2}{*}{points} & acc\(\uparrow\) & 99.80 & 99.82 & 99.73 & 99.79 & 99.81 & 99.78 & 99.87 & 99.30 & 99.51 & 99.71 & 98.91 \\ & IoU\(\uparrow\) & 96.77 & 96.47 & 97.48 & 98.50 & 97.43 & 96.29 & 95.47 & 91.73 & 88.68 & 95.42 & 91.66 \\ \hline \multirow{2}{*}{text} & acc\(\uparrow\) & 99.81 & 99.82 & 99.73 & 99.80 & 99.81 & 99.78 & 99.86 & 99.25 & 99.51 & 99.71 & 98.91 \\ & IoU\(\uparrow\) & 96.81 & 96.51 & 97.47 & 98.51 & 97.43 & 96.41 & 95.41 & 91.19 & 88.64 & 95.38 & 91.66 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison of mask generation between our method and SPIn-NeRF [29]. The first row indicates the scene name in the SPIn-NeRF dataset, while ’points’ and ’text’ denote the prompts mode used, respectively.
\begin{table}
\begin{tabular}{c|c c c c|c|c} \hline \hline & One-shot Seg & Video Seg & Semantic NeRF Train & Semantic NeRF Predict & 2D Inpainting & Reconstruct \\ \hline SPIn-NeRF & \textless{}1 second & \textless{}1 minute & 2-5 minutes & 1 minute & \textless{}1 minute & 20-40 minutes \\ \hline Ours & \multicolumn{4}{c|}{1 minute (mask generation in a single step)} & \textless{}1 minute & 20-40 minutes \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison of time consumption between our method and SPIn-NeRF [29]. Statistics for SPIn-NeRF are taken directly from their paper.
not significantly affect the subsequent experiments if refined with our strategy (See details about the reason and solution in supplementary materials).
### Scene Object Removal
**Quantity** Table 3 presents our results for scene object removal. In terms of overall rendering quality, Ours-NeRF exhibits a better FID than SPIn-NeRF [29] but performs worse in terms of PSNR and LPIPS. On the other hand, Ours-TensoRF outperforms SPIn-NeRF in terms of FID and LPIPS scores but has a weakness in PSNR. Analyzing the impact of the loss modules, the additional components for training Neural Radiance Fields do not appear to have a significantly positive effect. Ours-NeRF and Ours-TensoRF exhibit a similar pattern, where depth supervision and perceptual loss increase PSNR but show no positive influence on FID and LPIPS. Interestingly, SPIn-NeRF behaves somewhat differently: removing perceptual loss and depth supervision from the SPIn-NeRF pipeline results in a subtle increase in PSNR compared to the original version. However, the FID and LPIPS scores demonstrate that the add-ons improve SPIn-NeRF performance. Although the numbers are mixed, we adopt Ours-TensoRF with perceptual loss as it performs best overall. While Table 3 does not provide strong evidence for the efficacy of depth supervision and perceptual loss, we will discuss the real significance of these add-ons in the following section.
Figure 4: Comparison of overall rendering quality between SPIn-NeRF (left) and Ours-TensoRF (right). We can see blurry from SPIn-NeRF compared with our clear details.
Figure 3: Mask generation results of our OR-NeRF. The figure shows the masks generated for two scenes: ’book’ (up) and ’12’ (below) from the SPIn-NeRF [29] dataset.
\begin{table}
\begin{tabular}{c|c c c c|c c c|c c c} \hline & \multicolumn{4}{c|}{Ours-NeRF[27]} & \multicolumn{3}{c|}{Ours-TensoRF[4]} & \multicolumn{3}{c}{SPIn-NeRF[29]} \\ & dir & dp & da & lpips & dir & da & lpips & dir & da & lpips \\ \hline PSNR\(\uparrow\) & 14.04 & 14.04 & 14.16 & 14.16 & 13.93 & 14.04 & 14.03 & **14.85** & 14.82 & 14.83 \\ FID\(\downarrow\) & 61.11 & 65.21 & 64.71 & 58.15 & **53.28** & 64.29 & 59.74 & 70.02 & 70.07 & 67.26 \\ LPIPS\(\downarrow\) & 0.6834 & 0.6893 & 0.7022 & 0.6763 & 0.6370 & 0.6494 & **0.6273** & 0.6810 & 0.6752 & 0.6506 \\ \hline \end{tabular}
\end{table}
Table 3: Experiment results on scene object removal. The first row indicates methods name, while abbreviations in the second row indicate loss modules. ’dir’ denotes training Neural Radiance Fields with LaMa [38] priors directly, ‘dp’ denotes partial depth, ‘da’ denotes all depth, and ‘lpips’ denotes the use of perceptual loss. Notably, perceptual loss is always applied with all-depth supervision enabled.
**Quality** We first compare the three methods' overall rendering quality. Ours-NeRF and Ours-TensoRF produce clear outputs, while SPIn-NeRF [29] suffers from blur due to its noisy disparity maps, which provide inaccurate geometry supervision. This can be observed in Fig 4.
Next, we discuss the impact of depth supervision. Although depth supervision is widely used in training, the difference between using the entire depth image as supervision and applying the depth loss only in the masked area has received little attention. Fig 5 indicates that full-depth supervision is necessary and irreplaceable: both the partial-depth and direct-training settings in all three architectures produce inconsistent depth results, restoring the removed objects to varying extents. However, it is worth noting that the depth loss does not make a visible difference in the rendered views, which aligns with the metrics presented in Table 3.
Moving on to the perceptual loss, we conclude from Fig 6 that this loss has a positive effect but falls short of guaranteeing a plausible completion of the masked area. This also explains the modest metrics in Table 3, as all of our results exhibit a significant gap with respect to the ground truth. Finally, part of our editing results are displayed in Fig 7, and more details can be found in the supplementary materials.
## 6 Conclusions and Discussions
This paper presents a novel pipeline OR-NeRF for object removal from 3D scenes, requiring only points or text prompts on a single view. We emphasize the advantages of our method in terms of rendering quality and time efficiency. We also suggest that potential enhancements could incorporate more robust 2D image inpainting techniques, such as diffusion [52; 11; 33; 31; 56; 37], to achieve plausible completions after object removal.
Figure 5: The effect of depth supervision. We can see from the figure that either without depth supervision (left) or training with partial depth (middle) leads to geometry inconsistency. While supervised by all-depth images (right) convergent to a consistent result.
Figure 6: The effect of perceptual loss. Left is Ours-TensoRF trained directly, and right is Ours-TensoRF with perceptual loss. We can see from the figure that this loss has some influence but is still unsatisfactory.
|
2306.09502
|
The Dark Neutral Medium is (Mostly) Molecular Hydrogen
|
We acquired ALMA ground state absorption profiles of HCO+ and other molecules
toward 33 extragalactic continuum sources seen toward the Galactic anticenter,
deriving N(H2) = N(HCO+)/3x10^{-9}. We observed J=1-0 CO emission with the IRAM
30m in directions where HCO+ was newly detected.
HCO+ absorption was detected in 28 of 33 new directions and CO emission along
19 of those 28. The 5 sightlines lacking detectable HCO+ have 3 times lower
mean EBV and N(DNM). Binned in EBV, N(H2) and N(DNM) are strongly correlated
and vary by factors of 50-100 over the observed range EBV~0.05-1 mag, while
N(HI) varies by factors of only 2-3. On average N(DNM) and N(H2) are well
matched, and detecting HCO+ absorption adds little/no H2 in excess of the
previously inferred DNM. There are 5 cases where 2N(H2) < N(DNM)/2 indicates
saturation of the HI emission. For sightlines with \WCO > 1 K-\kms the CO-H2
conversion factor N(H2)/\WCO\ = 2-3x10^{20}\pcc/K-\kms is higher than derived
from studies of resolved clouds in gamma-rays.
Our work sampled primarily atomic gas with a mean H2 fraction ~1/3, but the
DNM is almost entirely molecular. CO fulfills its role as an H2 tracer when its
emission is strong, but large-scale CO surveys are not sensitive to H2 columns
associated with typical values N(DNM) = 2-6x10^{20}\pcc. Lower \XCO\ values
from $\gamma$-ray studies arise in part from different definitions and usage.
Sightlines with \WCO\ \ge 1 K-\kms\ represent 2/3 of the H2 detected in HCO+
and detecting 90% of the H2 would require detecting CO at levels \WCO\~0.2-0.3
K-\kms
For full abstract see the paper
|
Harvey Liszt, Maryvonne Gerin
|
2023-06-15T20:53:21Z
|
http://arxiv.org/abs/2306.09502v1
|
# The Dark Neutral Medium is (Mostly) Molecular Hydrogen
###### Abstract
Context:More gas is sometimes inferred in molecular cloud complexes than is represented in HI or CO emission, and this is called dark neutral medium (DNM).
Aims:Our aim is to extend a study of DNM along 13 directions in the outskirts of Chamaeleon by determining the atomic or molecular character of the DNM along a larger sample of sightlines.
Methods:We acquired ALMA ground rotational state absorption profiles of HCO\({}^{+}\) and other molecules toward 33 compact extragalactic continuum background sources seen toward the Galactic anticenter, deriving \(\rm{N(H_{2})=N(HCO^{+})/3\times 10^{-9}}\) as before. We observed \(\rm{J=1-0}\) CO emission with the IRAM 30m telescope in directions where HCO\({}^{+}\) was newly detected.
Results:HCO\({}^{+}\) absorption was detected in 28 of 33 new directions and CO emission along 19 of those 28. The five sightlines lacking detectable HCO\({}^{+}\) have three times lower \(<\)E\({}_{\rm{B-V}}\)\(>\) and \(<\)N(DNM)\(>\). Binned in E\({}_{\rm{B-V}}\), \(\rm{N(H_{2})}\) and N(DNM) are strongly correlated and vary by factors of 50-100 over the observed range E\({}_{\rm{B-V}}\)\(\approx\) 0.05 - 1 mag, while N(HI) varies by factors of only 2-3. On average N(DNM) and N(H\({}_{2}\)) are well matched, and detecting HCO\({}^{+}\) absorption adds little to no H\({}_{2}\) in excess of the previously inferred DNM. There are five cases where \(\rm{2N(H_{2})<N(DNM)/2}\) indicates saturation of the HI emission profile. For sightlines with \(\rm{W_{CO}\geq 1~{}K\)-km s\({}^{-1}}\), the CO-H\({}_{2}\) conversion factor \(\rm{N(H_{2})/W_{CO}=2-3\times 10^{20}~{}~{}cm^{-2}/(1~{}K\)-km s\({}^{-1}})\) is higher than is derived from studies of resolved clouds in \(\gamma\)-rays.
Conclusions:Our work sampled primarily atomic gas with a mean H\({}_{2}\) fraction \(\approx\) 1/3, but the DNM is almost entirely molecular. CO fulfills its role as an H\({}_{2}\) tracer when its emission is strong, but large-scale CO surveys are not sensitive to H\({}_{2}\) columns associated with typical values N(DNM) \(=2-6\times 10^{20}~{}\rm{cm^{-2}}\). Lower \(X_{\rm{CO}}\) values from \(\gamma\)-ray studies arise in part from different definitions and usage. Sightlines with \(\rm{W_{CO}\geq 1~{}K\)-km s\({}^{-1}}\) represent 2/3 of the H\({}_{2}\) detected in HCO\({}^{+}\) and detecting 90% of the H\({}_{2}\) would require detecting CO at levels \(\rm{W_{CO}\approx 0.2\)-0.3 K-km s\({}^{-1}}\).
## 1 Introduction
Between the atomic and molecular interstellar gases there is a transition that is imperfectly traced in \(\lambda\)21cm HI and/or \(\lambda\)2.6mm CO emission. Gas that is not well represented in one or both tracers is described as dark neutral medium (DNM) (Grenier et al., 2005; Planck Collaboration et al., 2015; Remy et al., 2017, 2018), and the atomic or molecular character of the DNM can only be decided by appealing to other information.
In the outskirts of Chamaeleon, we used millimeter-wave HCO\({}^{+}\) absorption to show that the DNM was almost exclusively molecular, even while the gas as a whole was mostly atomic (Liszt et al., 2018). HCO\({}^{+}\) was detected along all 13 sampled sightlines, of which 12 lacked detectable CO emission at a level of 1.5 K-km s\({}^{-1}\). The amount of inferred H\({}_{2}\) was comparable to that of the DNM (confirming the amount of the detected DNM), and only for three sightlines did it seem likely that saturation of the HI emission profile could account for much of the DNM. Subsequent detections of CO absorption along six directions by Liszt et al. (2019) showed that the lack of CO emission was due to low CO column densities. The linewidth of CO absorption was shown to be smaller than that of its parent molecule HCO\({}^{+}\), illustrating the complex chemical nature of the CO formation process in diffuse gas.
In this work we extend the analysis of the composition of DNM to 33 sightlines in directions toward the Galactic anticenter (Remy et al., 2017, 2018) and consider the larger sample of 46 sightlines. The present study employs a relatively small number of diffuse (\(\rm{A_{V}\leq 1~{}mag}\)) and translucent (\(\rm{A_{V}\leq 3~{}mag}\)) sightlines where background sources are serendipitously available, and is not a revision of the earlier study, whose derived DNM column densities and other results are assumed here. Rather, we use the newly derived H\({}_{2}\) abundances, independent of the presence of CO emission, to consider such topics as the atomic or molecular character of the DNM, the degree to which the \(\lambda\)21cm HI profile represents N(HI), and the influence of the CO-H\({}_{2}\) conversion factor on the DNM determination.
Section 2 summarizes the observational material considered here and Section 3 compares the prior results for DNM with atomic hydrogen measured in the L21 emission and H\({}_{2}\) as sampled by its proxy 89 GHz absorption of HCO\({}^{+}\). The role of CO emission data is explored in Section 4 where the CO-H\({}_{2}\) conversion factor is derived. Section 5 presents a summary, with conclusions and a discussion of later work.
## 2 Observations and data reduction
The methods of work were described in Liszt et al. (2018) and are not repeated in full detail here. From the existing DNM
analyses of Planck Collaboration et al. (2015) and Remy et al. (2017, 2018) we take N(DNM) and the \(\lambda\)21cm HI column densities N(HI) and N(HI)\({}_{\rm cl}\), representing respectively the line profile integrated over all velocities and the column density that is associated with the cloud and/or kinematic features hosting the neutral gas and DNM, as derived by a decomposition that accounts for the compound nature of HI emission profiles from individual clouds.
To the existing analyses we add ALMA observations of absorption from HCO\({}^{+}\) and profiles of \(\lambda\)2.6mm CO emission from the IRAM 30m telescope, as described below.
### Millimeter-wave absorption
The new absorption data discussed here are spectra of HCO\({}^{+}\) in 33 directions along Galactic anticenter sightlines toward ALMA phase calibrators as given in the Tables here and projected on sky maps in Figure 1. As before, the HCO\({}^{+}\) spectra were acquired with a spectral resolution of 0.205 km s\({}^{-1}\) sampled at 0.102 km s\({}^{-1}\) intervals without the so-called spectral averaging that bins data. We also acquired spectra of HCN, C\({}_{2}\)H, HCO, and several isotopologs. Counts of detections are shown in Figure 2, but results for species other than HCO\({}^{+}\) will be discussed in detail in later work. Statistics of the HCO\({}^{+}\) detections are shown in Figure A.1. Spectra in the 28 directions with detectable HCO\({}^{+}\) absorption are shown in Figure B.1.
### IRAM 30m CO emission
We took frequency-switched CO emission profiles at the IRAM 30m telescope in August 2019 toward the 28 anticenter sightlines with detected HCO\({}^{+}\) absorption. The CO spectra are shown in Figure B.1. These CO data were taken as five-point maps toward the target and displaced by 2 HPBW (2x22\({}^{\prime\prime}\)) in the four cardinal directions, using a pointing pattern that was designed to detect HCO\({}^{+}\) and HCN emission in the presence of spectral line absorption in the target direction. For the HCO\({}^{+}\) emission that will be discussed in subsequent work, the HCO\({}^{+}\) emission profile is formed by averaging spectra along the four outlying directions. For CO the present results use the average of all five pointings because contamination by absorption is not measurable given the strength of the emission and/or the continuum. The results are presented on the main beam antenna temperature scale that is native to the 30m telescope.
### Conversion from integrated HCO\({}^{+}\) absorption to N(H\({}_{2}\))
The suitability of HCO\({}^{+}\) as a proxy for H\({}_{2}\) in diffuse molecular gas was explored extensively in Liszt & Gerin (2023) (hereafter Paper 1). As in our earlier work, we use N(H\({}_{2}\)) = N(HCO\({}^{+}\))/3 \(\times\) 10\({}^{-9}\) and N(HCO\({}^{+}\)) = \(1.10\times 10^{12}\)\(\rm{\tau_{HCO^{+}}}\) where \(\rm{\tau_{HCO^{+}}}\) is the integrated HCO\({}^{+}\) optical depth expressed in km s\({}^{-1}\).
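For concreteness, the conversion from an integrated HCO\({}^{+}\) optical depth to the inferred H\({}_{2}\) column can be written as the short sketch below; the numbers are those quoted in this section, and the function name is illustrative:

```python
def h2_column_from_hcop(tau_hcop):
    """N(HCO+) and N(H2) from the integrated HCO+ optical depth (km/s).

    N(HCO+) = 1.10e12 cm^-2 per km/s of integrated optical depth,
    N(H2)   = N(HCO+) / 3e-9 (the assumed relative abundance).
    """
    n_hcop = 1.10e12 * tau_hcop     # cm^-2
    n_h2 = n_hcop / 3.0e-9          # cm^-2
    return n_hcop, n_h2

# an integrated optical depth of 1 km/s corresponds to N(H2) ~ 3.7e20 cm^-2
print(h2_column_from_hcop(1.0))
```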
### Reddening and dust optical depth
The 6\({}^{\prime}\) resolution dust-emission maps scaled to optical reddening E\({}_{\rm B-V}\) by Schlegel et al. (1998) are cited here. Those reddening values can be converted to Planck 353 GHz dust optical depth using the relationship established by Planck Collaboration et al. (2014) between the 353 GHz dust optical depth and the E\({}_{\rm B-V}\) values of Schlegel et al. (1998), E\({}_{\rm B-V}\)/\(\rm{\tau_{353}}=(1.49\pm 0.03)\times 10^{4}\) mag.
### Conventions
Velocities presented with the spectra are taken with respect to the kinematic definition of the Local Standard of Rest. N(H) is the column density of H nuclei detected in neutral atomic and molecular form, N(H) = N(HI)+2N(H\({}_{2}\)), and the molecular hydrogen and DNM fractions 2N(H\({}_{2}\))/N(H) and N(DNM)/N(H) are respectively represented by f\({}_{\rm H_{2}}\) and f\({}_{\rm DNM}\). The integrated absorption of the HCO\({}^{+}\) profile in units of km s\({}^{-1}\) is denoted by \(\rm{\tau_{HCO^{+}}}\) and similarly for other species. The integrated emission of the J=1-0 CO line is denoted by W\({}_{\rm CO}\) with units of K-km s\({}^{-1}\) and the CO-H\({}_{2}\) conversion factor N(H\({}_{2}\))/W\({}_{\rm CO}\) is denoted by \(X_{\rm CO}\). Where reference is made to a typical Galactic or standard CO-H\({}_{2}\) conversion factor, the value N(H\({}_{2}\))/W\({}_{\rm CO}=2\times 10^{20}\) cm\({}^{-2}\)/(1 K-km s\({}^{-1}\)) should be understood, as summarized in Table E.1 of Remy et al. (2017).
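The derived quantities defined above follow directly from the measured columns, as in this illustrative sketch (the example values are round numbers of the order of the sample means, not entries taken from the tables):

```python
def derived_quantities(n_hi, n_h2, n_dnm, w_co):
    """Derived columns and fractions following the conventions of this section.

    n_hi, n_h2, n_dnm in cm^-2; w_co in K-km/s.
    """
    n_h = n_hi + 2.0 * n_h2                              # N(H) = N(HI) + 2N(H2)
    f_h2 = 2.0 * n_h2 / n_h                              # molecular fraction
    f_dnm = n_dnm / n_h                                  # DNM fraction
    x_co = n_h2 / w_co if w_co > 0 else float("nan")     # CO-H2 conversion factor
    return n_h, f_h2, f_dnm, x_co

# illustrative round numbers
print(derived_quantities(n_hi=1.3e21, n_h2=4.0e20, n_dnm=2.0e20, w_co=2.0))
```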
### Overview
Observational results of importance to this work are given in Table 1 and Table 2, and summaries of mean properties of various subsamples are presented in Table 3. Some statistics of the continuum targets and noise in the detections of HCO\({}^{+}\) absorption are discussed in Appendix A and the CO emission and HCO\({}^{+}\) absorption line profiles are shown in Figure B.1 and discussed in Appendix B.
## 3 HI, DNM, and H\({}_{2}\) sampled in HCO\({}^{+}\) absorption
Figure 1 shows the new Galactic anticenter sightlines projected on maps of E\({}_{\rm B-V}\) and N(DNM). The region at \(b<-30^{\circ}\) is described by Remy et al. (2017) as Cetus, and that at \(b>-22^{\circ}\) as Taurus-Main. California lies to the north around \(b=-10^{\circ}\) and Perseus is near (l,b) = 160\({}^{\circ}\)-20\({}^{\circ}\). Taurus-North overlays most of the map region. The individual subregions are not discussed here owing to the sparse sampling.
Counts of sources and detected molecular tracers binned in 0.05 mag intervals of reddening are shown in Figure 2. HCO\({}^{+}\) is the molecular tracer detected most commonly (28 of 33 anticenter sightlines and 41 of 46 overall), followed by C\({}_{2}\)H (26 total) and HCN (21 total). The properties of molecular absorption line tracers other than HCO\({}^{+}\) will be discussed in later work.
The Chamaeleon and Galactic anticenter subsamples have the same mode at E\({}_{\rm B-V}\) = 0.25 - 0.3 mag in Figure 2, but the anticenter subsample has a high-reddening tail at E\({}_{\rm B-V}>0.5\) mag that is absent in Chamaeleon. The Galactic anticenter sample has higher mean extinction and molecular fraction. Overall (see Table 3) the anticenter sources have the following characteristics:
* 33% higher \(<\)E\({}_{\rm B-V}\)\(>\) = 0.36 vs 0.27 mag;
* Same \(<\)N(HI)\(>\) = 1.3 \(\times\) 10\({}^{21}\) cm\({}^{-2}\) and f\({}_{\rm DNM}\) = 0.12;
* 80% higher \(<\)N(HI)\({}_{\rm cl}\)\(>\) = 1.12 vs 0.62 \(\times\)10\({}^{21}\) cm\({}^{-2}\);
* 2 times higher \(<\)2N(H\({}_{2}\))\(>\) = 0.80 vs 0.39 \(\times\)10\({}^{21}\) cm\({}^{-2}\);
* 65% higher f\({}_{\rm H_{2}}\) = 0.38 vs 0.23;
* Higher fraction of detected CO, 9/33 (18/33 in the new work) vs 1/13;
* Higher incidence of nondetection of HCO\({}^{+}\), 5/33 vs 0/13, in some cases due to low flux, see Figure A.1.
The DNM fraction is nearly the same in the two samples (0.12-0.13) and much smaller (0.06) in both subsamples when CO emission was detected at the level of 1 K-km s\({}^{-1}\). The DNM fraction is also noticeably smaller when molecular gas sampled in HCO\({}^{+}\) absorption is absent, and similarly at E\({}_{\rm B-V}<0.2\) mag. The strongest variations in f\({}_{\rm DNM}\) in Table 3 arise in selections
based on the strength of CO emission and are discussed in Section 4.
### Quantitative summary results
Figure 3 shows an overview of trends in means of N(HI), N(DNM), and N(H\({}_{2}\)) using data binned in 0.05 mag intervals of reddening as in Figure 2, plotted horizontally at the mean E\({}_{\rm B-V}\) in every bin. The counts in each bin are shown and the bins at \(\rm E_{B-V}>0.5\) mag are occupied by only one or two sightlines. The association of DNM with molecular gas is unmistakable.
The mean HI column density varies by only a factor of 3 over the wider range E\({}_{\rm B-V}\) = 0.09 - 1 mag. By contrast, the means of N(DNM) and N(H\({}_{2}\)) vary by factors of 50-100. The values of \(\rm<\)N(DNM)\(>\) and \(\rm<\)N(H\({}_{2}\))\(>\) are comparable and increase steadily for E\({}_{\rm B-V}\)\(\la\) 0.7 mag, beyond which \(\rm<\)N(DNM)\(>\) either fails to increase or falls: the CO emission tracer used to derive N(DNM) was used effectively to account for H nuclei in H\({}_{2}\). The cloud-associated mean N(HI)\({}_{\rm cl}\) declines at E\({}_{\rm B-V}\) = 0.7 mag as H\({}_{2}\) sequesters H nuclei. The value of \(\rm<\)N(H\({}_{2}\))\(>\) increases up to the bins at highest E\({}_{\rm B-V}\), where \(\rm<\)N(H\({}_{2}\))\(>\)\(\approx\)\(\rm<\)N(HI)\(>\)\(\approx\)\(2-3<\)N(HI)\({}_{\rm cl}\)\(>\) and whole sightlines are dominated by molecular gas.
The correlation between DNM and H\({}_{2}\) is remarkable considering the vastly different scales on which E\({}_{\rm B-V}\), N(HI), and N(DNM) are measured (6-20\({}^{\prime}\)), as contrasted with \(\rm N(H_{2})\propto\rm\tau_{HCO^{+}}\), which is derived on submilliarcsecond scales from absorption against scatter-broadened compact millimeter-wave ALMA calibrator continuum sources. The same kind of correlation of tracers observed on different angular scales occurs in the correlation of the integrated \(\lambda\)21cm HI optical depth with E\({}_{\rm B-V}\) (Liszt, 2021).
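The binned means shown in Figures 3 and 7 can be reproduced schematically with the following sketch; the bin width is the 0.05 mag used here, while the array names and the handling of empty bins are illustrative choices:

```python
import numpy as np

def binned_means(ebv, values, width=0.05):
    """Mean of `values` in bins of E(B-V), as used for Figures 2, 3, and 7.

    ebv, values: 1D arrays of equal length; width: bin width in mag.
    Returns rows of (mean E(B-V), mean value, count); empty bins are skipped.
    """
    edges = np.arange(0.0, ebv.max() + width, width)
    idx = np.digitize(ebv, edges)
    rows = []
    for i in np.unique(idx):
        sel = idx == i
        rows.append((ebv[sel].mean(), values[sel].mean(), int(sel.sum())))
    return np.array(rows)
```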
When HCO\({}^{+}\) is not detected (5/46 directions) the following characteristics are found compared to sightlines with detected HCO\({}^{+}\):
* 3 times lower \(\rm<\)E\({}_{\rm B-V}\)\(>\) = 0.13 mag;
* 1.3 times higher \(\rm<\)N(H)\(>\)/\(\rm<\)E\({}_{\rm B-V}\)\(>\) = \(8\times 10^{21}\) cm\({}^{-2}\) mag\({}^{-1}\);
* 3.5 times lower \(\rm<\)N(DNM)\(>\) = \(8\times 10^{19}\) cm\({}^{-2}\);
* 2 times lower \(\rm<\)f\({}_{\rm DNM}\)\(>\) = 0.07;
* no DNM in 3/5 cases vs. 15/46 overall.
Instead, using 3\(\sigma\) upper limits for N(H\({}_{2}\)) we find:
Figure 1: Sky maps of the observed sightlines. Left: Positions of the background sources observed here are projected against a map of E\({}_{\rm B-V}\) from the work of Schlegel et al. (1998). The sightlines lacking detected HCO\({}^{+}\) absorption (largely due to weak continuum) are indicated with smaller symbols. Right: Same as left, but shown against a map of N(DNM) from Remy et al. (2017, 2018).
Figure 2: Counts of sightlines and detected tracers binned in 0.05 intervals of reddening, E\({}_{\rm B-V}\). The histogram of all sources is overlaid (dashed red) in the other panels.
* \(>\)5 times lower \(<\)2N(H\({}_{2}\))\(>\)\(\leq\) 1.5 \(\times\) 10\({}^{20}\) cm\({}^{-2}\) and
* \(>\)2.5 times lower \(<\)f\({}_{\rm H_{2}}\)\(>\)\(\leq\) 0.14
The ensemble of sightlines with N(DNM) \(<\) 5 \(\times\) 10\({}^{19}\) cm\({}^{-2}\) does not differ by more than 10-20% from other sightlines in any mean property not involving N(DNM). The variations in N(DNM) and N(H\({}_{2}\)) stand in great contrast with N(HI) that increases only by a factor of 3 across the figure. The rise in N(H\({}_{2}\)) accounts for the failure of N(HI) to increase at high E\({}_{\rm B-V}\), but this could in principle also be explained by increasing saturation of the HI emission. As noted in Planck Collaboration et al. (2015) and further refined by Remy et al. (2018), the global DNM analysis finds the best fit with the optically thin estimate of N(HI), implicitly leaving a bigger overall role for molecular gas than for saturation of the HI profile.
### DNM and HI
Shown in Figure 4 is the variation of \(<\)N(DNM)\(>\) with \(<\)N(HI)\(>\) and \(<\)N(HI)\({}_{\rm cl}\)\(>\). Chamaeleon differs from the Galactic anticenter in having more extraneous atomic gas along the line of sight and smaller values of cloud-associated gas. The cloud-associated HI in Chamaeleon is clustered at the low end of the range, even though the samples are evenly matched in the span of their total HI column density and have the same mean N(HI) in Table 3.
Sightlines lacking DNM are present at all N(HI), but N(DNM) is uniformly small along sightlines at the highest column densities N(HI) \(\gtrsim\) 2.3 \(\times\) 10\({}^{21}\) cm\({}^{-2}\) where CO emission is more commonly observed. The sources with N(DNM) \(>\) 6 \(\times\) 10\({}^{20}\) cm\({}^{-2}\) and high N(DNM)/N(HI) ratios are perhaps the most obvious candidates for hosting significant quantities of DNM in atomic form as the result of saturation of the HI profile, but only two of these actually have small contributions from N(H\({}_{2}\)). Assessing the DNM contribution arising from possible saturation of the HI profile is discussed in Section 3.3 where it is seen that the strongest cases of atomic DNM actually have more modest N(DNM).
### DNM, E\({}_{\rm B-V}\), and N(H\({}_{2}\))
As with HI, sightlines lacking DNM are present over the full range of E\({}_{\rm B-V}\) in Figure 5, top, but they are predominant at smaller N(H\({}_{2}\)). Sightlines with 2N(H\({}_{2}\)) \(\lesssim\) 10\({}^{20}\) cm\({}^{-2}\) lack DNM. Along sightlines with column densities too low to host molecular gas, the atomic gas is well represented by the optically thin estimates of N(HI) used in the DNM analysis.
Care must be taken in interpreting the relationship of DNM and N(H\({}_{2}\)) in Figure 5 because the DNM in part represents material that is missing in the CO emission tracer while some of the H\({}_{2}\) traced by HCO\({}^{+}\) is actually visible in CO emission. To minimize this cross-contamination, Figure 6 shows a plot of N(DNM) vs. N(H\({}_{2}\)) for sightlines lacking a detection of CO, using the more sensitive IRAM data for the Galactic anticenter subsample. Included are all but one (12/13) of the Chamaeleon sightlines and 60% of those toward the anticenter.
Sightlines lacking DNM in the absence of CO emission in Figure 6 are almost entirely confined at 2N(H\({}_{2}\)) \(\lesssim\) 2 \(\times\) 10\({}^{20}\) cm\({}^{-2}\), so the detections of HCO\({}^{+}\) absorption do not imply much additional gas along sightlines where DNM was not found. Missing from Figure 6 are sightlines with 2N(H\({}_{2}\)) \(\gtrsim\) 10\({}^{21}\) cm\({}^{-2}\) corresponding to W\({}_{\rm CO}\)= 2.5 K-km s\({}^{-1}\) for the Galactic CO-H\({}_{2}\) conversion factor.
There are two or three cases in each subsample where N(DNM)/2N(H\({}_{2}\)) \(\gtrsim\) 2 and saturation of the \(\lambda\)21cm emission profile may be important. Most of these sightlines have modest values N(DNM) \(\approx\) 4 - 5 \(\times\) 10\({}^{20}\) cm\({}^{-2}\) and are not the sightlines with high N(DNM) that were singled out for discussion in Section 3.2 when comparing N(DNM) and N(HI).
There are also two or three sightlines at N(H\({}_{2}\)) \(>\) 6 \(\times\) 10\({}^{20}\) cm\({}^{-2}\) where N(DNM)/2N(H\({}_{2}\)) \(\lesssim\) 2. Overall \(<\)2N(H\({}_{2}\))\(>\) = 1.3\(<\)N(DNM)\(>\) for sources lacking CO emission, so H\({}_{2}\) accounts for DNM without adding extra "dark" gas, and the DNM is mostly molecular. As before in Chamaeleon, the DNM is largely molecular, even if the medium overall is predominantly atomic.
## 4 The role of CO emission
The CO emission that was detected in previous studies used to determine N(DNM) along our sightlines was well above 1 K-km s\({}^{-1}\) in every case (Table 2), with \(<\)W\({}_{\rm CO}\)\(>\) = 4.9K-km s\({}^{-1}\) (Table 3). This is very nearly the same mean as for the sample of sightlines with W\({}_{\rm CO}\)\(\geq\) 1K-km s\({}^{-1}\) observed at the IRAM 30m, 5.5 K-km s\({}^{-1}\) (see Table 2). Compared to the overall average, the sightlines with \(<\)W\({}_{\rm CO}\)\(>\) \(\geq\) 1K-km s\({}^{-1}\) have the following characteristics:
* 2 times higher \(<\)E\({}_{\rm B-V}\)\(>\)\(\approx\) 0.7 mag;
* 3-4 times higher \(<\)2N(H\({}_{2}\))\(>\) = \(2-3\times 10^{21}\) cm\({}^{-2}\);
* 2 times higher \(<\)f\({}_{\rm H_{2}}\)\(>\) = 0.6;
* 2 times lower \(<\)f\({}_{\rm DNM}\)\(>\) = 0.06.
The 50% smaller DNM fractions \(<\)f\({}_{\rm DNM}\)\(>\) = 0.06 along samples of sightlines with \(<\)W\({}_{\rm CO}\)\(>\)\(\geq\) 1K-km s\({}^{-1}\) indicate that CO emission is doing a good job of tracing H\({}_{2}\) in gas where the molecular fraction is high (Liszi 2017), CO emission is strong, and the cloud-associated HI fraction declines (Figure 2). However, CO emission in general represents a small portion of the total molecular gas present along the sightlines in this work. For instance, \(<\)2N(H\({}_{2}\))\(>\) = 7 \(\times\) 10\({}^{20}\) cm\({}^{-2}\) along 46 sightlines, while \(<\)W\({}_{\rm CO}\)\(>\)\(\approx\) 5 K-km s\({}^{-1}\) along the 6-10 sightlines where \(<\)W\({}_{\rm CO}\)\(>\)\(\geq\) 1K-km s\({}^{-1}\) (Table 3). For a typical Galactic CO-H\({}_{2}\) conversion factor, the summed molecular gas column represented in HCO\({}^{+}\) is three to four times that inferred from the summed CO emission. Most of the molecular gas in our sample was hidden in the DNM prior to our work and is still only revealed by the HCO\({}^{+}\) absorption, even with more sensitive CO observations. The IRAM 30m detections with \(<\)W\({}_{\rm CO}\)\(>\) below 1K-km s\({}^{-1}\) comprise less than 20% of the total amount of CO emission.
### The CO-H\({}_{2}\) conversion factor
Comparison of the scaled HCO\({}^{+}\) absorption and IRAM 30m emission measurements provides the most direct determination of the actual CO-H\({}_{2}\) conversion factor along the lines of sight studied in this work. Figure 7 shows the trends in \(<\)N(H\({}_{2}\))\(>\), \(<\)W\({}_{\rm CO}\)\(>\), and \(<\)N(H\({}_{2}\))\(>\)/\(<\)W\({}_{\rm CO}\)\(>\) binned in 0.05 mag increments of E\({}_{\rm B-V}\), as in Figure 3. Included are the 28 Galactic anticenter sightlines where HCO\({}^{+}\) was detected, with W\({}_{\rm CO}\) taken at the 3\(\sigma\) upper limit for sightlines where CO emission was not detected with greater significance. Also included is the sightline toward J1733 in Chamaeleon where CO emission was detected. The 3\(\sigma\) upper limits W\({}_{\rm CO}\)\(\leq\) 1.5 K-km s\({}^{-1}\) in Chamaeleon are not useful.
Figure 7 illustrates the behavior of the CO-H\({}_{2}\) conversion factor that is tabulated for different samples in Table 3. The values of \(<\)N(H\({}_{2}\))\(>\) and \(<\)W\({}_{\rm CO}\)\(>\) both increase steadily with E\({}_{\rm B-V}\) in Figure 7, but at different rates so that their ratio declines by a factor \(\approx\) 7 to \(<\)N(H\({}_{2}\))\(>\)/\(<\)W\({}_{\rm CO}\)\(>\) = \(2.5-3\times 10^{20}\)H\({}_{2}\) cm\({}^{-2}\)(K-km s\({}^{-1}\))\({}^{-1}\) for E\({}_{\rm B-V}\)\(\gtrsim\) 0.5 mag. Variations of similar magnitude in
individual diffuse and/or translucent MBM clouds were reported by Magnani et al. (1998) and Cotten & Magnani (2013).
Mean values of N(H\({}_{2}\))/W\({}_{\rm CO}\) are \(2-2.5\times 10^{20}\)H\({}_{2}\) cm\({}^{-2}\)(K-km s\({}^{-1}\))\({}^{-1}\) for the old and new samples with \(<\)W\({}_{\rm CO}\)\(>\)\(\geq\) 1 K-km s\({}^{-1}\), increasing by factors of 2-3 for the samples with IRAM 30m detections below 1 K-km s\({}^{-1}\) and upper limits. See also the inset in Figure 7 on this point.
The values of the CO-H\({}_{2}\) conversion factor derived are comparable to those derived in extant Galactic-scale \(\gamma\)-ray analyses (Table E.1 in Remy et al. 2017) when W\({}_{\rm CO}\)\(\geq\) 1 K-km s\({}^{-1}\), but are uniformly larger than those derived in cloud-level studies like the DNM analysis whose results are summarized in Table 2 of Remy et al. (2017). Those results ranged from \(1-1.6\times 10^{20}\)H\({}_{2}\) cm\({}^{-2}\)(K-km s\({}^{-1}\))\({}^{-1}\) for determinations based on dust and from \(0.44-1.00\times 10^{20}\)H\({}_{2}\) cm\({}^{-2}\) (K-km s\({}^{-1}\))\({}^{-1}\) for determinations based on gamma rays.
There is no contradiction with the larger values found in this work whose method of studying widely separated, semi-randomly placed sightlines is similar to the larger scale studies. The strongly CO-emitting (W\({}_{\rm CO}\)\(>\) 10 K-km s\({}^{-1}\)) regions encountered in the cloud-level studies that have generally small values of \(X_{\rm CO}\) (see Figure 13 in Remy et al. 2018) were not sampled here.
There may also be a difference arising from the operational definition of the conversion factor. The conversion factors in Table 2 in Remy et al. (2017) are those that optimize the fit of the \(\gamma\)-ray or dust emissivity to a total hydrogen column density that is represented schematically as N(H) = N(HI)+N(DNM)+2N(H\({}_{2}\)) = N(HI)+N(DNM)+2\(X_{\rm CO}\)W\({}_{\rm CO}\). After the analysis, there remains a DNM constituent whose molecular fraction is undetermined and not explicitly considered in the definition of the multiplier \(X_{\rm CO}\); we showed that the DNM is largely molecular. By contrast, the present analysis determines N(H\({}_{2}\)) independent of CO emission and defines \(X_{\rm CO}\) = N(H\({}_{2}\))/W\({}_{\rm CO}\). This N(H\({}_{2}\)) includes the molecular component of the DNM that we took pains to consider separately in the discussion of Figure 6.
### Achievable limits and detection thresholds for CO, DNM, and H\({}_{2}\)
Sightlines in our study often had rms CO emission noise \(\Delta\)W\({}_{\rm CO}\)\(\gtrsim\) 1/3 - 1/2 K-km s\({}^{-1}\) in the prior DNM analysis (Table 2). For a typical Galactic conversion 2N(H\({}_{2}\))/W\({}_{\rm CO}\) = 4 \(\times\) 10\({}^{20}\) cm\({}^{-2}\) (K-km s\({}^{-1}\))\({}^{-1}\), the equivalent 3\(\sigma\) threshold detection limits on the hydrogen column are 2N(H\({}_{2}\)) = \(4-6\times 10^{20}\) cm\({}^{-2}\), above the actual values of N(DNM) along most of the sightlines we observed. By contrast, the median 3\(\sigma\) detection threshold from HCO\({}^{+}\) in our work is 2N(H\({}_{2}\)) = 1.1 \(\times\) 10\({}^{20}\) cm\({}^{-2}\) and typical N(DNM) values are N(DNM) \(\gtrsim\) 2 \(\times\) 10\({}^{20}\) cm\({}^{-2}\) (Figure 6).
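The equivalent H\({}_{2}\) detection thresholds quoted above follow from the CO rms noise and an assumed conversion factor, as in this small sketch (the default value of \(X_{\rm CO}\) is the typical Galactic factor defined in Section 2.5; the function name is illustrative):

```python
def h2_threshold_from_co_noise(rms_wco, x_co=2.0e20, nsigma=3.0):
    """Equivalent threshold on 2N(H2) implied by a CO emission rms noise.

    rms_wco in K-km/s; x_co = N(H2)/W_CO in cm^-2 per K-km/s.
    """
    return nsigma * rms_wco * 2.0 * x_co   # cm^-2

# rms of 1/3 - 1/2 K-km/s maps to 2N(H2) thresholds of 4-6 x 10^20 cm^-2
print(h2_threshold_from_co_noise(1.0 / 3.0), h2_threshold_from_co_noise(0.5))
```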
The effectiveness of reducing the detection threshold of CO emission below 1 K-km s\({}^{-1}\) can be assessed by examining the distribution of CO detections and upper limits in the IRAM 30m CO observations. The subsamples of IRAM CO observations
Figure 4: N(DNM) plotted against total sightline N(HI) (bottom) and cloud-associated atomic hydrogen N(H\({}_{\rm J}\))\({}_{d}\) (top).
Figure 3: Trends in mean N(HI), N(H\({}_{2}\)), and N(DNM). The results are binned in 0.05 mag intervals of E\({}_{\rm B-V}\). Left: N(HI) is calculated over the whole line profile. Right: N(HI) = N(HI)\({}_{\rm J,i}\) the HI column density associated with the clouds studied here. The number of sightlines in each bin is shown in both panels.
summarized in Table 3 show 6 detections with W\({}_{\rm CO}\)\(>\) 1 K-km s\({}^{-1}\), 12 detections with W\({}_{\rm CO}\)\(<\) 1 K-km s\({}^{-1}\), and ten upper limits at levels W\({}_{\rm CO}\)\(<\) 0.1 - 0.2 K-km s\({}^{-1}\) (see Table 1 for values of upper limits and Figure 3 for a graphical representation of the data). HCO\({}^{+}\) was detected along all of these sightlines, and the three subsamples respectively represent fractions 0.66, 0.26, and 0.08 of the total amount of H\({}_{2}\).
Thus, a survey with a detection limit of 1 K-km s\({}^{-1}\) would detect approximately two-thirds of the molecular gas along these diffuse and/or translucent lines of sight, and a much increased effort to reduce the detection limit to 0.2 K-km s\({}^{-1}\) might find another one-fourth of the H\({}_{2}\). Comparable reductions in the fraction of undetected H\({}_{2}\) with increasing CO sensitivity down to rms levels \(\Delta\)W\({}_{\rm CO}\)\(\approx\) 0.1 K-km s\({}^{-1}\) were achieved by Donate & Magnani (2017). Missing one-third of the H\({}_{2}\) at the 1 K-km s\({}^{-1}\) threshold is consistent with the dark gas fraction derived by Wolfire et al. (2010) and Gong et al. (2018).
### The wider view
In Section 4.2 we note that two-thirds of the H\({}_{2}\) was found along the sightlines with W\({}_{\rm CO}\)\(>\) 1 K-km s\({}^{-1}\) and that a deeper survey with a detection limit of 0.2 K-km s\({}^{-1}\) would have found another 25% of the H\({}_{2}\). At this point we can ask how the results of this sparse sampling are reflected in the region as a whole. We drew a hull around the observed anticenter sightlines, as shown in Figure
Figure 5: N(DNM) plotted against E\({}_{\rm B-V}\) (top) and 2N(H\({}_{2}\)) (bottom). Larger symbols represent sightlines where W\({}_{\rm CO}\)\(\ga\) 1 K-km s\({}^{-1}\).
Figure 6: N(DNM) plotted against N(H\({}_{2}\)) for sightlines lacking detected CO emission in the analysis of Remy et al. (2018). Shown are loci at which the number of H nuclei in H\({}_{2}\) is 50, 100, and 200% of that in DNM.
Figure 7: Trends in mean N(H\({}_{2}\)) (black squares), W\({}_{\rm CO}\) (pink diamonds), and N(H\({}_{2}\))/W\({}_{\rm CO}\) (blue triangles). The results are binned in 0.05 mag intervals of E\({}_{\rm B-V}\), as in Figure 3. Bins in which all sightlines have only upper limits on W\({}_{\rm CO}\) are indicated by upper and lower limits. The scales for N(H\({}_{2}\)) cm\({}^{-2}\) and N(H\({}_{2}\))/W\({}_{\rm CO}\) ( cm\({}^{-2}\) (K-km s\({}^{-1}\))\({}^{-1}\)) are read on the left vertical axis and that for W\({}_{\rm CO}\) (K-km s\({}^{-1}\)) on the right. The number of sightlines in each bin is shown as in Figure 3 and the count is carried separately for W\({}_{\rm CO}\) at low E\({}_{\rm B-V}\). The light gray dashed line is a power-law fit N(H\({}_{2}\)) = \(10^{21.0973}\)E\({}_{\rm B-V}\)\({}^{1.335}\).
C.1 and derived the pixel-by-pixel statistics for the contained area: the amount of material (taken proportional to E\({}_{\rm B-V}\)) and the amount of H\({}_{2}\). For H\({}_{2}\) we used the N(H\({}_{2}\))-E\({}_{\rm B-V}\) relationship N(H\({}_{2}\)) = \(10^{21.0973}\)E\({}_{\rm B-V}\)\({}^{1.335}\) (footnote 1) and the fact that W\({}_{\rm CO}\)\(\ga 1\) K-km s\({}^{-1}\) at E\({}_{\rm B-V}\)\(\ga 0.4\) mag or N(H\({}_{2}\))\(\ga 4\times 10^{20}\) cm\({}^{-2}\) in Figure 7 (see also Figure 5 of Paper 1).
Footnote 1: \(\rm f_{H_{2}}=2\)N(H\({}_{2}\))/N(H) = 0.4-0.5 at E\({}_{\rm B-V}\) = 2 mag if N(H)/E\({}_{\rm B-V}\) = \(6-8\times 10^{21}\)mag\({}^{-1}\)
Figure 8 shows the probability densities (at left) and cumulative distributions for the derived quantities. Reading values off the cumulative probability distributions at right in Figure 8, conditions with E\({}_{\rm B-V}\)\(\geq\) 0.4 mag, N(H\({}_{2}\)) \(\ga 4\times 10^{20}\) cm\({}^{-2}\) occur over one-fourth of the total area containing one-half of the total projected gas column and two-thirds of the H\({}_{2}\). Apparently, the sampled sightlines were representative of the region as a whole regarding the H\({}_{2}\) distribution.
Sampling 90% of the H\({}_{2}\) would require detecting CO emission at E\({}_{\rm B-V}\)\(\ga 0.2\) mag where N(H\({}_{2}\)) \(\approx\)\(2\times 10^{20}\) cm\({}^{-2}\). The sampling here is too sparse to derive an equivalent value of W\({}_{\rm CO}\) in Figure 7, but the broader sample in Paper 1 suggests W\({}_{\rm CO}\)\(\ga 0.3\) K-km s\({}^{-1}\) would be appropriate. Sightlines with E\({}_{\rm B-V}\)\(<\) 0.32 mag or A\({}_{\rm V}\)\(<\) 1 mag comprise two-thirds of the area, one-third of the mass, and one-fourth of the H\({}_{2}\).
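Schematically, the area, gas, and H\({}_{2}\) fractions quoted above can be computed from the reddening map of the hull region as follows; the power-law coefficients are those fitted in Figure 7, while the pixel array and the threshold value are placeholders:

```python
import numpy as np

def cumulative_fractions(ebv_pixels, threshold=0.4):
    """Fractions of area, gas column, and H2 above an E(B-V) cut.

    ebv_pixels: 1D array of reddening values for the pixels inside the hull.
    Gas is taken proportional to E(B-V); H2 follows the fitted power law.
    """
    n_h2 = 10.0 ** 21.0973 * ebv_pixels ** 1.335
    above = ebv_pixels >= threshold
    f_area = float(above.mean())
    f_gas = float(ebv_pixels[above].sum() / ebv_pixels.sum())
    f_h2 = float(n_h2[above].sum() / n_h2.sum())
    return f_area, f_gas, f_h2
```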
### Comparison with predictions and models
For models, the critical factors in predicting the amount of dark gas are the threshold where W\({}_{\rm CO}\) reaches the 1 K-km s\({}^{-1}\) brightness level and the amount of material above the threshold. In the regime of diffuse molecular gas the integrated brightness of the J=1-0 line is determined almost exclusively by the CO column density, with W\({}_{\rm CO}\) (K-km s\({}^{-1}\)) = N(CO)/\(10^{15}\) cm\({}^{-2}\) for hydrogen number densities n(H) = \(64-500\) cm\({}^{-3}\)(Liszt, 2007, 2017; Hu et al., 2021) and kinetic temperature appropriate to the local thermal pressure. This simple proportionality between column density and brightness persists well beyond the CO column density where the J=1-0 transition becomes optically thick, owing to the strongly subthermal excitation (Goldreich & Kwan, 1974).
Thus, the CO chemistry dominates the CO brightness, and the sky map of CO emission is a map of the chemistry. The observations can be reproduced by an ad hoc CO formation chemistry in which a fixed relative abundance of HCO\({}^{+}\), N(HCO\({}^{+}\))/N(H\({}_{2}\)) = \(3\times 10^{-9}\), thermally recombines to CO if the H\({}_{2}\) formation and shielding of H\({}_{2}\) and CO are treated self-consistently along with the heating and cooling (Liszt, 2017). However, networks of chemical reactions in quiescent gas may fail to form the observed quantity of HCO\({}^{+}\) and produce too little CO. This causes the CO brightness to reach 1 K-km s\({}^{-1}\) at overly large values of E\({}_{\rm B-V}\) and N(H\({}_{2}\)), where shielding of CO by dust and H\({}_{2}\) makes up for the deficit in the CO formation rate (e.g., Gong et al., 2018, 2021, 2022).
Models with a weak CO formation chemistry have an innate tendency to overestimate the amount of CO-dark gas, but the actual amount of CO-dark gas depends on the distribution of material. In practice the DNM fraction seen by Remy et al. (2018) varied from 0 to 0.3 (their Figure 8) and the sightlines sampled here had \(<\)f\({}_{\rm DNM}\)\(>\) = 0.13.
## 5 Summary
We took 89.2 GHz ALMA HCO\({}^{+}\) absorption spectra toward 33 compact millimeter-wave extragalactic continuum sources seen against the Galactic anticenter (Figure 1 and the tables above). We also observed J=1-0 CO emission at the IRAM 30m telescope in the 28 anticenter directions where HCO\({}^{+}\) was detected. Inferring N(H\({}_{2}\)) from N(HCO\({}^{+}\)) using the ratio N(HCO\({}^{+}\))/N(H\({}_{2}\)) = \(3\times 10^{-9}\), we combined these results with those from our earlier study of 13 directions where HCO\({}^{+}\) absorption was detected in the outskirts of Chamaeleon. We compared the inferred N(H\({}_{2}\)) with prior determinations of the column densities of the dark neutral medium, the neutral gas of uncertain (atomic or molecular) character that had been found to be missing in HI and/or CO emission when compared with the submillimeter dust and \(\gamma\)-ray emissivities of large-scale molecular cloud complexes.
Binning the HI, H\({}_{2}\), and DNM column densities in reddening, we showed in Figure 3 that the mean DNM and molecular gas column densities were comparable and varied compatibly by factors of 50-100 over the observed range E\({}_{\rm B-V}\) = 0.09 - 1 mag, while N(HI) varied by only factors of 2-3. The means of N(H\({}_{2}\)) and N(DNM) are small at low mean reddening, and increase in similar fashion up to E\({}_{\rm B-V}\) = 0.5 mag, where molecular gas begins to dominate and CO emission is strong. N(H\({}_{2}\)) continues to increase with higher reddening, but N(DNM) and the column density of cloud-associated HI fall where H\({}_{2}\) dominates (sequestering hydrogen) and CO emission is stronger and more closely representative of N(H\({}_{2}\)).
We made detailed individual sightline-level comparisons of N(DNM) with E\({}_{\rm B-V}\), N(HI), and N(H\({}_{2}\)) in Figures 4-6. Sightlines with appreciable DNM appear at N(HI) (Figure 4); the overall mean DNM fraction \(\rm f_{DNM}\) = N(DNM)/(N(HI)+2N(H\({}_{2}\))) = 0.12 is modest. Sightlines with appreciable DNM are lacking when E\({}_{\rm B-V}\)\(\la\) 0.15 mag (Figure 5) and when 2N(H\({}_{2}\)) \(\la\) 10\({}^{20}\) cm\({}^{-2}\) or 2N(H\({}_{2}\)) \(\ga\) 10\({}^{21}\) cm\({}^{-2}\) (Figure 6). In Figure 6 we compared N(DNM) and N(H\({}_{2}\)) in directions lacking detected CO emission in order to eliminate the case that H\({}_{2}\) was already represented by CO emission in the DNM analysis. This figure showed that there were 2-3 sightlines in each subsample (Chamaeleon and anticenter) or 5-6/46 overall where 2N(H\({}_{2}\)) \(\leq\) N(DNM)/2 and H\({}_{2}\) accounted for the minority of the DNM. There are also a few directions at 2N(H\({}_{2}\)) \(\approx\) 10\({}^{21}\) cm\({}^{-2}\) where 2N(H\({}_{2}\)) \(>\) 2N(DNM). Overall, the amounts of DNM and H\({}_{2}\) are similar, with \(<\)2N(H\({}_{2}\))\(>\) = 1.3 \(<\)N(DNM)\(>\) for the unambiguous cases lacking CO emission. The form of the DNM is overwhelmingly molecular hydrogen.
Directions with \(<\)W\({}_{\rm CO}\)\(>\)\(>\)1 K-km s\({}^{-1}\) have two times smaller mean DNM fractions \(<\)f\({}_{\rm DNM}\)\(>\) = 0.06, while sightlines with \(<\)W\({}_{\rm CO}\)\(>\) \(<\) 1 K-km s\({}^{-1}\) have three times higher f\({}_{\rm DNM}\)\(\ga\) 0.17-0.19 (Table 3). The relatively few sightlines with W\({}_{\rm CO}\)\(\geq\) 1 K-km s\({}^{-1}\) contain two-thirds of the H\({}_{2}\) detected in HCO\({}^{+}\), and detecting 90% of the H\({}_{2}\) would require detecting CO at levels W\({}_{\rm CO}\)\(\approx\) 0.2-0.3 K-km s\({}^{-1}\).
The CO-H\({}_{2}\) conversion factor falls steadily with increasing E\({}_{\rm B-V}\) or W\({}_{\rm CO}\) in Figure 7. Because the H\({}_{2}\) is concentrated in the sightlines with W\({}_{\rm CO}\)\(\geq\) 1 K-km s\({}^{-1}\), the overall mean CO-H\({}_{2}\) conversion factors in our work are \(<\)N(H\({}_{2}\))\(>\)/\(<\)W\({}_{\rm CO}\)\(>\) = 2-3 \(\times 10^{20}\) H\({}_{2}\) cm\({}^{-2}\) (K-km s\({}^{-1}\))\({}^{-1}\) for samples of sightlines with detectable CO emission. These values are comparable to previously determined large-scale Galactic averages, and are substantially higher than the global values determined by the cloud-level analyses quoted here to derive N(DNM). We ascribed these differences in part to the present sampling of widely scattered sightlines (i.e., that we did not do a cloud-level analysis) and perhaps to differences in the operational definition of the conversion factor, as discussed in Section 4.1.
Subsequent papers derived from the observations discussed here will focus on the physics and chemistry of the molecules
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline Source & RA(J2000) & Dec(J2000) & \(l\) & \(b\) & E\({}_{\rm B-V}\)1 & 89 GHz flux & \(\Upsilon_{\rm HCO}\)\({}^{+2}\) & \(\sigma\)\({}^{T}\)\({}_{\rm HCO}\)\({}^{+3}\) & W\({}_{\rm CO}\)4 & \(\sigma\)W\({}_{\rm CO}\)5 \\ & hh.mmss & dd.mmss & degrees & degrees & mag & Jy & km s\({}^{-1}\) & km s\({}^{-1}\) & K-km s\({}^{-1}\) \\ \hline J0203+1134 & 2.03464 & 11.34492 & 149.6826 & -47.4992 & 0.144 & 0.126 & \(\leq\)0.263 & 0.088 & & \\ J0209+1352 & 2.09357 & 13.52045 & 150.1800 & -44.8290 & 0.094 & 0.223 & 0.20 & 0.050 & \(\leq\)0.071 & 0.024 \\ J0211+1051 & 2.11132 & 10.51348 & 152.5781 & -47.3674 & 0.145 & 0.462 & 0.76 & 0.029 & 0.36 & 0.025 \\ J0213+1820 & 2.31305 & 18.20255 & 148.7289 & -40.4041 & 0.130 & 0.093 & \(\leq\)0.345 & 0.115 & & \\ J0231+1322 & 2.31459 & 13.22547 & 157.0917 & -42.7380 & 0.121 & 0.430 & 0.14 & 0.025 & \(\leq\)0.064 & 0.021 \\ J0242+1742 & 2.42243 & 17.42589 & 157.0180 & -37.7033 & 0.077 & 0.227 & \(\leq\)0.151 & 0.050 & & \\ J0252+1718 & 2.52077 & 17.18427 & 159.7420 & -36.7885 & 0.220 & 0.172 & 0.25 & 0.077 & \(\leq\)0.096 & 0.032 \\ J0325+2224 & 3.25368 & 22.24004 & 163.6700 & -28.0213 & 0.213 & 1.162 & 1.01 & 0.017 & 0.94 & 0.051 \\ J0329+3510 & 3.29154 & 35.10060 & 155.9152 & -17.4042 & 0.267 & 0.570 & 0.51 & 0.032 & \(\leq\)0.143 & 0.048 \\ J0329+2756 & 3.29577 & 27.56155 & 160.7030 & -23.0743 & 0.198 & 0.158 & \(\leq\)0.193 & 0.064 & & \\ J0334+0800 & 3.34533 & 8.00144 & 177.2396 & -37.0871 & 0.391 & 0.150 & 0.44 & 0.088 & \(\leq\)0.162 & 0.054 \\ J0336+3218 & 3.36301 & 32.1893 & 158.9998 & -18.7650 & 0.733 & 1.689 & 0.16 & 0.009 & \(\leq\)0.219 & 0.073 \\ J0356+2093 & 3.56085 & 29.03423 & 164.6120 & -18.4927 & 0.212 & 0.139 & 1.50 & 0.090 & 1.62 & 0.042 \\ J0357+2319 & 3.57216 & 23.19538 & 169.0302 & -22.4661 & 0.185 & 0.224 & 0.28 & 0.027 & \(\leq\)0.192 & 0.064 \\ J0400+0550 & 4.00117 & 5.50431 & 184.2710 & -33.7266 & 0.266 & 0.159 & 0.25 & 0.063 & \(\leq\)0.172 & 0.057 \\ J0401+0413 & 4.01199 & 4.13344 & 186.0261 & -34.4947 & 0.341 & 0.405 & 0.49 & 0.021 & 0.19 & 0.048 \\ J0403+2600 & 4.03056 & 26.00015 & 168.0260 & -19.6483 & 0.201 & 0.331 & 0.60 & 0.029 & 0.62 & 0.067 \\ J0406+0637 & 4.06343 & 6.37150 & 184.7075 & -32.0009 & 0.283 & 0.264 & 0.54 & 0.051 & 0.63 & 0.040 \\ J0407+0742 & 4.07291 & 7.42075 & 183.8723 & -31.1558 & 0.265 & 0.387 & 0.53 & 0.031 & 0.19 & 0.042 \\ J0426+0518 & 4.26366 & 5.18199 & 189.3631 & -28.7705 & 0.291 & 0.516 & 0.12 & 0.020 & \(\leq\)0.170 & 0.057 \\ J0426+2327 & 4.26557 & 23.27396 & 173.8881 & -17.4457 & 0.539 & 0.304 & 2.57 & 0.057 & 4.84 & 0.045 \\ J0427+0457 & 4.27476 & 4.57083 & 189.8857 & -28.7306 & 0.335 & 0.414 & 0.62 & 0.024 & 0.39 & 0.045 \\ J0437+2037 & 4.31038 & 20.37343 & 176.8096 & -18.5565 & 0.532 & 0.217 & 1.54 & 0.073 & 0.67 & 0.026 \\ J0431+1731 & 4.31574 & 17.31358 & 179.4942 & -20.3579 & 0.464 & 0.104 & 1.01 & 0.110 & 0.70 & 0.049 \\ J0433+0521 & 4.33111 & 5.21156 & 190.7330 & -27.3967 & 0.298 & 4.911 & 0.35 & 0.003 & \(\leq\)0.109 & 0.036 \\ J0437+2940 & 4.37044 & 29.40138 & 170.5818 & -11.6609 & 0.979 & 0.059 & 5.92 & 1.138 & 10.42 & 0.027 \\ J0438+3004 & 3.48049 & 30.04455 & 170.4116 & -11.2283 & 0.952 & 0.689 & 6.25 & 0.038 & 7.11 & 0.026 \\ J0439+3045 & 4.39178 & 30.45076 & 170.0655 & -10.5913 & 0.867 & 0.195 & 5.05 & 0.082 & 6.69 & 0.027 \\ J0440+1437 & 4.40211 & 14.37570 & 183.2538 & -20.5438 & 0.681 & 0.337 & 1.21 & 0.031 & 0.83 & 0.029 \\ J0445+0715 & 4.45014 & 7.15539 & 190.4535 & -23.8898 & 0.121 & 0.275 & \(\leq\)0.083 & 0.028 & & \\ 
J04494+1121 & 4.49077 & 11.21286 & 187.4274 & -20.7365 & 0.504 & 0.521 & 0.65 & 0.022 & 0.23 & 0.033 \\ J0502+1338 & 5.02332 & 13.381
observed in the course of this work, chiefly HCO\({}^{+}\), C\({}_{2}\)H, and HCN.
###### Acknowledgements.
The National Radio Astronomy Observatory (NRAO) is a facility of the National Science Foundation, operated by Associated Universities, Inc. This work was supported by the French program "Physique et Chimie du Milieu Interstellaire" (PCMI) funded by the Centre National de la Recherche Scientifique (CNRS) and Centre National d'Etudes Spatiales (CNES). This work is based in part on observations carried out under project number 003-19 with the IRAM 30m telescope. IRAM is supported by INSU/CNRS (France), MPG (Germany) and IGN (Spain). This paper makes use of the following ALMA data: ADS/JAO.ALMA#2016.1.00714.S and ADS/JAO.ALMA#2018.1.00115.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), NSC and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. We thank Isabelle Grenier for providing results of the DNM analysis and we thank the anonymous referee for many helpful remarks. HSL is grateful for the hospitality of the ITU-R and the Hotel de la Cigogne in Geneva during the completion of this manuscript.
|
2304.11534
|
Graph Neural Networks for Text Classification: A Survey
|
Text Classification is the most essential and fundamental problem in Natural
Language Processing. While numerous recent text classification models applied
the sequential deep learning technique, graph neural network-based models can
directly deal with complex structured text data and exploit global information.
Many real text classification applications can be naturally cast into a graph,
which captures words, documents, and corpus global features. In this survey, we
bring the coverage of methods up to 2023, including corpus-level and
document-level graph neural networks. We discuss each of these methods in
detail, dealing with the graph construction mechanisms and the graph-based
learning process. As well as the technological survey, we look at issues behind
and future directions addressed in text classification using graph neural
networks. We also cover datasets, evaluation metrics, and experiment design and
present a summary of published performance on the publicly available
benchmarks. Note that we present a comprehensive comparison between different
techniques and identify the pros and cons of various evaluation metrics in this
survey.
|
Kunze Wang, Yihao Ding, Soyeon Caren Han
|
2023-04-23T04:21:50Z
|
http://arxiv.org/abs/2304.11534v3
|
# Graph Neural Networks for Text Classification: A Survey
###### Abstract.
Text Classification is the most essential and fundamental problem in Natural Language Processing. While numerous recent text classification models applied the sequential deep learning technique, graph neural network-based models can directly deal with complex structured text data and exploit global information. Many real text classification applications can be naturally cast into a graph, which captures words, documents, and corpus global features. In this survey, we bring the coverage of methods up to 2023, including corpus-level and document-level graph neural networks. We discuss each of these methods in detail, dealing with the graph construction mechanisms and the graph-based learning process. As well as the technological survey, we look at issues behind and future directions addressed in text classification using graph neural networks. We also cover datasets, evaluation metrics, and experiment design and present a summary of published performance on the publicly available benchmarks. Note that we present a comprehensive comparison between different techniques and identify the pros and cons of various evaluation metrics in this survey.
Graph Neural Networks, Text Classification, Representation Learning
document-level graphs and each of them represents a document. The corpus-level graph can capture the global structural information of the entire corpus, while the document-level graph can capture the word-to-word relationships within a document explicitly. Both ways of applying graph neural networks to text classification achieve good performance.
This paper mainly focuses on GNN-based text classification techniques, datasets, and their performance. The graph construction approaches for both corpus-level and document-level graphs are addressed in detail. Papers on the following aspects will be reviewed:
* GNNs-based text classification approaches. Papers that design GNN-based frameworks to enhance the feature representation or directly apply GNNs to conduct sequence text classification tasks will be summarized, described and discussed. GNNs applied for token-level classification (Natural Language Understanding) tasks, including NER, slot filling, etc, will not be discussed in this work.
* Text classification benchmark datasets and the performance of GNN-based models on them. The text classification datasets and the evaluation metrics commonly used by GNN-based text classification models will be summarized and categorized by task type, together with the model performance on these datasets.
### Related Surveys and Our Contribution
Before 2019, text classification survey papers [2; 35; 116; 129; 46] focused on covering traditional machine learning-based text classification models. Recently, with the rapid development of deep learning techniques, [55; 82; 147; 149] review the various deep learning based approaches. In addition, some papers not only review the SoTA model architectures, but also summarize the overall workflow [10; 39; 42; 49; 83] or specific techniques for text classification, including word embedding [102], feature selection [22; 96; 103], term weighting [3; 91], etc. Meanwhile, some text classification architectures with growing potential have been surveyed, such as CNNs [132] and attention mechanisms [78]. Owing to their powerful ability to represent non-Euclidean relations, GNNs have been used and reviewed in multiple practical fields, e.g., financial applications [117], traffic prediction [72], bio-informatics [142], power systems [60], and recommendation systems [27; 59; 131]. Moreover, [14; 126; 139; 8; 144] comprehensively review the general algorithms and applications of GNNs, while certain surveys focus mainly on specific perspectives, including graph construction [105; 112], graph representation [34], training [128], pooling [65] and more. However, only [55; 82] briefly introduce certain SoTA GNN-based text classification models. A recent short review paper [75] reviews several SoTA models without providing a comprehensive overview in this area. The contribution of this survey includes:
* This is the first survey focused only on graph neural networks for text classification with a comprehensive description and critical discussion on more than twenty GNN text classification models.
* We categorize the existing GNN text classification models into two main categories with multiple sub-categories, and the tree structure of all the models is shown in Figure 1.
* We compare these models in terms of graph construction, node embedding initialization, and graph learning methods. We also compare the performance of these models on the benchmark datasets and discuss the key findings.
* We discuss the existing challenges and some potential future work for GNN text classification models.
### Text Classification Tasks
Text classification involves assigning a pre-defined label to a given text sequence. The process typically involves encoding pre-processed raw text into numerical representations and using classifiers to predict the corresponding categories. Typical sub-tasks include sentiment analysis, topic labelling, news categorization, and hate speech detection. Certain frameworks can be extended to advanced applications such as information retrieval, summarising, question answering, and natural language inference. This paper focuses specifically on GNN-based models used for typical text classification.
* **Sentiment Analysis** is a task that aims to identify the emotional states and subjective opinions expressed in the input text, such as reviews, micro-blogs, etc. This can be achieved through binary or multi-class classification. Effective sentiment analysis can aid in making informed business decisions based on user feedback.
* **Topic Classification** is a supervised deep learning task that aims to automatically understand the text content and classify it into multiple domain-specific categories, typically more than two. The data sources may be gathered from different domains, including Wikipedia pages, newspapers, scientific papers, etc.
* **Junk Information Detection** involves detecting inappropriate social media content. Social media providers commonly use approaches like hate speech, abusive language, advertising or spam detection to remove such content efficiently.
### Text Classification Development
Many traditional machine learning methods and deep learning models are selected as the baselines for comparing with the GNN-based text classifiers. We mainly summarized those baselines into three types:
_Traditional Machine Learning_: In earlier years, traditional methods such as Support Vector Machines (SVM) (Krizhevsky et al., 2014) and Logistic Regression (Krizhevsky et al., 2014) utilized sparse representations like Bag of Words (BoW) and TF-IDF. However, recent advancements (Krizhevsky et al., 2014; Krizhevsky et al., 2014; Krizhevsky et al., 2014) have focused on dense representations, such as Word2vec, GloVe, and Fasttext, to mitigate the limitations of sparse representations. These dense representations are also used as inputs for sophisticated methods, such as Deep Averaging Networks (DAN) (Krizhevsky et al., 2014) and Paragraph Vector (Doc2Vec) (Krizhevsky et al., 2014), to achieve new state-of-the-art results.
_Sequential Models_: RNNs and CNNs have been utilized to capture local-level semantic and syntactic information of consecutive words from input text bodies. The upgraded models, such as LSTM (Hochreiter and Schmidhuber, 1997) and GRU (Hochreiter and Schmidhuber, 1997), have been proposed to address the vanishing or exploding gradient problems caused by vanilla RNN. CNN-based structures have been applied to capture N-gram features by using one or more convolution and pooling layers, such as Dynamic CNN (Krizhevsky et al., 2014) and TextCNN (Krizhevsky et al., 2014). However, these models can only capture local dependencies of consecutive words. To capture longer-term or non-Euclidean relations, improved RNN structures, such as Tree-LSTM (Hochreiter and Schmidhuber, 1997) and MT-LSTM (Krizhevsky et al., 2014), and global semantic information, like TopicRNN (Krizhevsky et al., 2014), have been proposed. Additionally, graph (Krizhevsky et al., 2014) and tree structure (Krizhevsky et al., 2014) enhanced CNNs have been proposed to learn more about global and long-term dependencies.
_Attentions and Transformers_: attention mechanisms (Chen et al., 2016) have been widely adopted to capture long-range dependencies, such as hierarchical attention networks (Chen et al., 2016) and attention-based hybrid models (Krizhevsky et al., 2014). Self-attention-based transformer models have achieved state-of-the-art performance on many text classification benchmarks via pre-training on some tasks to generate strong contextual word representations. However, these models only focus on learning the
relation between input text bodies and ignore the global and corpus level information. Researchers have proposed combining the benefits of attention mechanisms and Graph Neural Networks (GNNs) to learn both the relation between input text bodies and the global and corpus level information, such as VGCN-BERT (Wang et al., 2019) and BERTGCN (Wang et al., 2019).
### Outline
The outline of this survey is as follows:
* Section 1 presents the research questions and provides an overview of applying Graph Neural Networks to text classification tasks, along with the scope and organization of this survey.
* Section 2 provides background information on text classification and graph neural networks and introduces the key concepts of applying GNNs to text classification from a designer's perspective.
* Section 3 and Section 4 discuss previous work on Corpus-level Graph Neural Networks and Document-level Graph Neural Networks, respectively, and provide a comparative analysis of the strengths and weaknesses of these two approaches.
* Section 5 introduces the commonly used datasets and evaluation metrics in GNN for text classification.
* Section 6 reports the performance of various GNN models on a range of benchmark datasets for text classification and discusses the key findings.
* The challenges for the existing methods and some potential future works are discussed in Section 7.
* In Section 8, we present the conclusions of our survey on GNN for text classification and discuss potential directions for future work.
## 2. Backgrounds of GNN
### Definition of Graph
A graph in this paper is represented as \(G=(V,E)\), where \(V\) and \(E\) represent the set of nodes (vertices) and edges of \(G\), respectively. A single node in the node set is represented as \(v_{i}\in V\), and \(e_{ij}=(v_{i},v_{j})\in E\) denotes an edge between nodes \(v_{i}\) and \(v_{j}\). The adjacency matrix of graph \(G\) is represented as \(A\), where \(A\in\mathbb{R}^{n\times n}\) and \(n\) is the number of nodes in graph \(G\). If \(e_{ij}\in E\), \(A_{ij}=1\), otherwise \(A_{ij}=0\). In addition, we use \(\mathbf{X}\) and \(\mathbf{E}\) to represent the node and edge representations in graph \(G\), where \(\mathbf{X}\in\mathbb{R}^{n\times m}\) and \(\mathbf{E}\in\mathbb{R}^{n\times c}\). \(\mathbf{x}_{i}\in\mathbb{R}^{m}\) represents the \(m\)-dimensional vector of node \(v_{i}\) and \(\mathbf{e}_{ij}\in\mathbb{R}^{c}\) represents the \(c\)-dimensional vector of edge \(e_{ij}\) (most of the recent studies set \(c=1\) to represent a weighting scalar). \(\mathbf{A}\) denotes the edge-feature weighted adjacency matrix.
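The notation above maps directly onto a dense adjacency-matrix representation. A minimal sketch (the node count, edge list, and feature dimension are illustrative toy values, not from any dataset discussed in this survey):

```python
import numpy as np

# Toy graph G = (V, E) with n = 4 nodes and an undirected edge list.
n = 4
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]      # e_ij pairs (illustrative)

A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0                   # A_ij = 1 iff e_ij in E

X = np.random.randn(n, 8)                     # node representations X, m = 8
print(A)
```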
### Traditional Graph-based Algorithms
Before GNNs were broadly used for representing irregular relations, traditional graph-based algorithms had been applied to model the non-Euclidean structures in text classification, e.g., Random Walk (107; 146), Graph Matching (101; 104), and Graph Clustering (79), which have been well summarized in (124). There are three common limitations of those traditional graph-based algorithms. Firstly, most of those algorithms mainly focus on capturing graph-level structure information without considering the significance of node and edge features. For example, Random Walk based approaches (107; 146) mainly focus on using the distance or angle between node vectors to calculate transition probabilities while ignoring the information represented by the node vectors themselves. Secondly, since the traditional graph-based algorithms are only suitable for specific tasks, there is no unified learning framework for addressing various practical tasks. For example, [44] proposes a graph clustering method that requires a domain knowledge-based ontology graph. Lastly, the traditional graph-based methods are comparatively time-inefficient; for example, Graph Edit Distance-based graph matching methods have exponential time complexity [104].
### Foundations of GNN
To tackle the limitations of traditional graph-based algorithms and better represent non-Euclidean relations in practical applications, Graph Neural Networks were proposed by [100]. GNNs have a unified graph-based framework and simultaneously model the graph structure, node, and edge representations. This section will provide the general mathematical definitions of Graph Neural Networks. The general forward process of GNN can be summarised as follows:
\[\mathbf{H}^{(l)}=\mathcal{F}(\mathbf{A},\mathbf{H}^{(l-1)}) \tag{1}\]
where \(\mathbf{A}\in\mathbb{R}^{n\times n}\) represents the weighted adjacency matrix and \(\mathbf{H}^{(l)}\in\mathbb{R}^{n\times d}\) is the updated node representation at the \(l\)-th GNN layer, obtained by feeding the \((l-1)\)-th layer node features \(\mathbf{H}^{(l-1)}\in\mathbb{R}^{n\times k}\) (\(k\) is the dimension of the previous layer's node representations) into pre-defined graph filters \(\mathcal{F}\).
Figure 1: Categorizing the graph neural network text classification models.
The most commonly used graph filtering method is defined as follows:
\[\mathbf{H}^{(l)}=\phi(\tilde{\mathbf{A}}\mathbf{H}^{(l-1)}\mathbf{W}) \tag{2}\]
where \(\tilde{\mathbf{A}}=\mathbf{D}^{-\frac{1}{2}}\mathbf{A}\mathbf{D}^{-\frac{1}{2}}\) is the normalized symmetric adjacency matrix. \(\mathbf{A}\in\mathbb{R}^{n\times n}\) is the adjacency matrix of graph \(G\) and \(\mathbf{D}\) is the degree matrix of \(\mathbf{A}\), where \(D_{ii}=\Sigma_{j}A_{ij}\). \(\mathbf{W}\in\mathbb{R}^{k\times d}\) is the weight matrix and \(\phi\) is the activation function. If we stack two GNN layers based on the above filter, we obtain a vanilla Graph Convolutional Network (GCN) (Kunze et al., 2017) framework for text classification:
\[\mathbf{Y}=softmax(\tilde{\mathbf{A}}(ReLU(\tilde{\mathbf{A}}\mathbf{H}\mathbf{W}^{(0)}))\mathbf{W}^{ (1)}) \tag{3}\]
where \(\mathbf{W}^{(0)}\) and \(\mathbf{W}^{(1)}\) represent the weight matrices of the two GCN layers and \(\mathbf{H}\) is the input node feature matrix. The \(ReLU\) function is used for non-linearity and \(softmax\) is used to generate the predicted categories \(\mathbf{Y}\).
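As a concrete illustration of Eq. (3), the following is a minimal NumPy sketch of the two-layer GCN forward pass; the toy adjacency, features, and randomly initialized weights are placeholders rather than a trained model:

```python
import numpy as np

def normalize(A):
    """Symmetric normalization A~ = D^{-1/2} A D^{-1/2} (Eq. 2)."""
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(d ** -0.5)
    return D_inv_sqrt @ A @ D_inv_sqrt

def softmax(x):
    e = np.exp(x - x.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def two_layer_gcn(A, H, W0, W1):
    """Y = softmax(A~ ReLU(A~ H W0) W1)  (Eq. 3)."""
    A_hat = normalize(A)
    hidden = np.maximum(A_hat @ H @ W0, 0.0)
    return softmax(A_hat @ hidden @ W1)

# Toy graph with self-loops (A_ii = 1, as TextGCN later uses in Eq. 7).
A = np.array([[1, 1, 0, 1],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [1, 0, 1, 1]], dtype=float)
H = np.random.randn(4, 8)                                  # 4 nodes, 8-dim features
Y = two_layer_gcn(A, H, np.random.randn(8, 16), np.random.randn(16, 3))
print(Y.shape)                                             # (4, 3): per-node class distribution
```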
### GNN for Text Classification
In this paper, we mainly discuss how GNNs are applied in text classification tasks. Before we present the specific applications in this area, we first introduce the key concepts of applying GNNs to text classification from a designer's view. We suppose that addressing a text classification task requires designing a graph \(G=(V,E)\). The general procedures include _Graph Construction_, _Initial Node Representation_, _Edge Representations_, and _Training Setup_.
#### 2.4.1. Graph Construction
Some applications have explicit graph structures, including constituency or dependency graphs (Kunze et al., 2017), knowledge graphs (Kunze et al., 2017; Kunze et al., 2018), and social networks (Kunze et al., 2017), without the need to construct a graph structure and define the corresponding nodes and edges. However, for text classification, the most common graph structures are implicit, which means we need to define a new graph structure for a specific task, such as designing a word-word or word-document co-occurrence graph. In addition, for text classification tasks, the graph structure can be generally classified into two types:
* _Corpus-level/Document-level_. Corpus-level graphs intend to construct the graph to represent the whole corpus such as (Kunze et al., 2017; Kunze et al., 2018; Kunze et al., 2019), while the document-level graphs focus on representing the non-Euclidean relations existing in a single text body like (Kunze et al., 2017; Kunze et al., 2019; Kunze et al., 2019). Supposing a specific corpus \(\mathcal{C}\) contains a set of documents (text bodies) \(\mathcal{C}=\{D_{1},D_{2},..,D_{j}\}\) and each \(D_{i}\) contains a set of tokens \(D_{i}=\{t_{i_{1}},t_{i_{2}},...,t_{i_{k}}\}\). The vocabulary of \(\mathcal{C}\) can be represented as \(\mathcal{D}=\{t_{1},t_{2},...,t_{l}\}\), where \(l\) is the length of \(\mathcal{D}\). For the most commonly adopted corpus-level graph \(G_{corpus}=(V_{corpus},E_{corpus})\), a node \(v_{i}\) in \(V_{corpus}\) follows \(v_{i}\in\mathcal{C}\cup\mathcal{D}\) and the edge \(e_{ij}\in E_{corpus}\) is one kind of relations between \(v_{i}\) and \(v_{j}\). Regarding the document level graph \(G_{doc_{i}}=(V_{doc_{i}},E_{doc_{i}})\), a node \(v_{i_{j}}\) in \(V_{doc_{i}}\) follows \(v_{i_{j}}\in D_{i}\).
After designing the graph-scale for the specific tasks, specifying the graph types is also important to determine the nodes and their relations. For text classification tasks, the commonly used graph construction ways can be summarized into:
* _Homogeneous/Heterogeneous Graphs_: homogeneous graphs have a single node and edge type while heterogeneous graphs have various node and edge types. For a graph \(G=(V,E)\), we use \(\mathbf{N}^{v}\) and \(\mathbf{N}^{e}\) to represent the number of types of \(V\) and \(E\). If \(\mathbf{N}^{v}=\mathbf{N}^{e}=1\), \(G\) is a homogeneous graph. If \(\mathbf{N}^{v}>1\) or \(\mathbf{N}^{e}>1\), \(G\) is a heterogeneous graph.
* _Static/Dynamic Graphs_: Static graphs use a graph structure constructed from various external or internal information to enhance the initial node representations, such as dependency or constituency graphs [110], co-occurrence between word nodes [143], TF-IDF between word and document nodes [53, 123, 133] and so on. However, compared with static graphs, the initial representations or graph topology of dynamic graphs change during training without certain domain knowledge and human effort. The feature representations or graph structure can be jointly learned and optimised together with downstream tasks. For example, [120] proposed a novel topic-aware GNN text classification model with dynamically updated edges between topic nodes and other nodes (e.g., document and word nodes). Piao et al. [95] also designed a dynamic edge based graph to update the contextual dependencies between nodes. Additionally, [16] propose a dynamic GNN model to jointly update the edge and node representations simultaneously. We provide more details about the above mentioned models in Section 3 and Section 4.
Another widely used pair of graph categories is _directed_ versus _undirected_ graphs, depending on whether the edges are bi-directional or not. For text classification, most of the GNN designs follow the undirected way. In addition, those graph type pairs are not mutually exclusive, which means those types can be combined.
#### 2.4.2. Initial Node Representation.
Based on the pre-defined graph structure and specified graph type, selecting the appropriate initial node representations is the key procedure to ensure the proposed graph structure can effectively learn node representations. According to the node entity type, the existing node representation approaches for text classification can be generally summarised into:
* _Word-level Representation_: non-contextual word embedding methods such as GloVe [93], Word2vec [81], and FastText [13] are widely adopted by many GNN-based text classification frameworks to numerically represent the node features. However, those embedding methods are restricted to capturing only syntactic similarity and fail to represent the complex semantic relationships between words; moreover, they cannot capture the meaning of out-of-vocabulary (OOV) words, and their representations are fixed. Therefore, some recent studies select ELMo [94], BERT [23], or GPT [97] to obtain contextual word-level node representations. Notably, even though one-hot encoding is the simplest word representation method, many GNN-based text classifiers use one-hot encoding and achieve state-of-the-art performance. A few frameworks use randomly initialised vectors to represent the word-level node features.
* _Document-level Representation_: similar to other NLP applications, document-level representations are normally acquired by aggregating the word-level representations via some deep learning framework. For example, some researchers extract the last hidden state of an LSTM or use the [CLS] token representation from BERT to numerically represent the input text body. Furthermore, using TF-IDF based document vectors is also a commonly adopted document-level node representation.
Most GNN based text classification frameworks will compare the performance between different node representation methods to conduct quantitative analysis, as well as provide reasonable justifications for demonstrating the effectiveness of the selected initial node representation based on defined graph structure.
#### 2.4.3. Edge Features.
Well-defined edge features can effectively improve the graph representation learning efficiency and performance to exploit more explicit and implicit relations between nodes. Based on the predefined graph types, the edge feature types can be divided into _structural features_ and _non-structural features_. The structural edge features are acquired from explicit relations between nodes such as dependency or constituency relation between words, word-word adjacency
relations, etc. Those relations between nodes are explicitly defined and are also widely employed in other NLP applications. However, the more commonly used edge features are non-structural features, which exist implicitly between the nodes and are applied to specific graph-based frameworks. The typical non-structural edge features were first defined by (Wang et al., 2017) for GNN-based text classification tasks, including:
* **PMI** measures the co-occurrence between two words in a sliding window \(W\) and is calculated as: \[\text{PMI}(i,j) =log\frac{p(i,j)}{p(i)p(j)}\] (4) \[p(i,j) =\frac{\#W(i,j)}{\#W}\] (5) \[p(i) =\frac{\#W(i)}{\#W}\] (6) where \(\#W\) is the number of windows in total, and \(\#W(i)\), \(\#W(i,j)\) shows the number of windows containing word \(i\) and both word \(i\) and \(j\) respectively.
* **TF-IDF** is the broadly used weight of the edges between document-level nodes and word-level nodes.
Except for those two widely used implicit edge features, some specific edge weighting methods are proposed to meet the demands of particular graph structures for exploiting more information of input text bodies.
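To make the PMI edge weights of Eqs. (4)-(6) concrete, the following is a minimal sketch of computing them over fixed-size sliding windows; the toy corpus and the window size are illustrative, and only positive PMI values are kept, as TextGCN does:

```python
import math
from collections import Counter
from itertools import combinations

def pmi_edges(docs, window=3):
    """PMI(i, j) = log[ #W(i,j) * #W / (#W(i) * #W(j)) ] over sliding windows."""
    single, pair, n_windows = Counter(), Counter(), 0
    for tokens in docs:
        for start in range(max(1, len(tokens) - window + 1)):
            win = set(tokens[start:start + window])
            n_windows += 1
            single.update(win)
            pair.update(frozenset(p) for p in combinations(sorted(win), 2))
    edges = {}
    for p, n_ij in pair.items():
        i, j = tuple(p)
        pmi = math.log(n_ij * n_windows / (single[i] * single[j]))
        if pmi > 0:                         # keep only positive PMI edges
            edges[(i, j)] = pmi
    return edges

docs = [["graph", "neural", "network", "text"], ["text", "graph", "classification"]]
print(pmi_edges(docs))
```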
#### 2.4.4. Training Setup.
After specifying the graph structure and types, the graph representation learning tasks and training settings also need to be determined to decide how to optimise the designed GNNs. Generally, the graph representation learning tasks can be categorised into three levels including _Node-level_, _Graph-level_ and _Edge-level_. Node-level and Graph-level tasks involve node or graph classification, clustering, regression, etc, while Edge-level tasks include link prediction or edge classification for predicting the relation existence between two nodes or the corresponding edge categories.
Similar to other deep learning model training settings, GNNs also can be divided into _supervised_, _semi-supervised_ and _unsupervised training settings_. Supervised training provides labelled training data, while unsupervised training utilises unlabeled data to train the GNNs. However, compared with supervised or unsupervised learning, semi-supervised learning methods are broadly used by GNNs designed for text classification applications which could be classified into two types:
* _Inductive Learning_ adjusts the weights of proposed GNNs based on a labelled training set for learning the overall statistics to induce the general trained model for following processing. The unlabeled set can be fed into the trained GNNs to compute the expected outputs.
* _Transductive Learning_ intends to exploit labelled and unlabeled sets simultaneously for leveraging the relations between different samples to improve the overall performance.
## 3. Corpus-Level GNN for Text Classification
We define a corpus-level Graph Neural Network as "constructing a graph to represent the whole corpus", thus, only one or several graphs will be built for the given corpus. We categorize Corpus-level GNN into four subcategories based on the types of nodes shown in the graph.
### Document and Word Nodes as a Graph
Most corpus-level graphs include word nodes and document nodes and there are word-document edges and word-word edges. By applying \(K\)(normally \(K\)=2 or 3) layer GNN, word nodes will serve as a bridge to propagate the information from one document node to another.
#### 3.1.1. PMI and TF-IDF as graph edges: TextGCN, SGC, S\({}^{2}\)GC, NMGC, TG-Transformer, BertGCN.
**TextGCN[133]** Yao et al. [133] build a corpus-level graph with training document nodes, test document nodes and word nodes. Before constructing the graph, a common preprocessing method [47] is applied, and words appearing fewer than 5 times or in the NLTK [11] stopwords list are removed. The edge value between a document node and a word node is TF-IDF and that between word nodes is PMI. The adjacency matrix of this graph shows as follows.
\[A_{ij}=\begin{cases}\text{PMI}(i,j)&i,j\text{ are words},\text{PMI}(i,j)>0\\ \text{TF-IDF}_{i,j}&i\text{ is document},j\text{ is word}\\ 1&i=j\\ 0&\text{otherwise}\end{cases} \tag{7}\]
A two-layer GCN is applied to the graph, and the dimension of the second layer output equals the number of classes in the dataset. Formally, the forward propagation of TextGCN shows as:
\[Z=\text{softmax}(\tilde{A}(\text{ReLU}(\tilde{A}X\mathbf{W}^{(0)}))\mathbf{W}^{(1)}) \tag{8}\]
where \(\tilde{A}\) is the normalized adjacency matrix of \(A\) and \(X\) is the one-hot embedding. \(\mathbf{W}^{(0)}\) and \(\mathbf{W}^{(1)}\) are learnable parameters of the model. The representation on training documents is used to calculate the loss and that on test documents is used for prediction.
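A minimal sketch of assembling the TextGCN adjacency of Eq. (7) from precomputed edge weights; the `tfidf` and `pmi` dictionaries here hold toy values and would in practice come from the corpus preprocessing described above (e.g., a helper such as `pmi_edges` from Section 2.4.3):

```python
import numpy as np

def build_textgcn_adjacency(n_docs, vocab, tfidf, pmi):
    """Nodes 0..n_docs-1 are documents, the remaining nodes are words (Eq. 7)."""
    n = n_docs + len(vocab)
    word_id = {w: n_docs + k for k, w in enumerate(vocab)}
    A = np.eye(n)                                        # A_ii = 1
    for (doc, word), w in tfidf.items():                 # document-word edges
        A[doc, word_id[word]] = A[word_id[word], doc] = w
    for (wi, wj), w in pmi.items():                      # word-word edges (PMI > 0)
        A[word_id[wi], word_id[wj]] = A[word_id[wj], word_id[wi]] = w
    return A

vocab = ["graph", "neural", "text"]
tfidf = {(0, "graph"): 0.4, (0, "text"): 0.2, (1, "neural"): 0.5}    # toy weights
pmi = {("graph", "text"): 0.7}                                       # toy weights
print(build_textgcn_adjacency(n_docs=2, vocab=vocab, tfidf=tfidf, pmi=pmi))
```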
\begin{table}
\begin{tabular}{l l} \hline \hline
**Notations** & **Descriptions** \\ \hline \(G\) & A graph. \\ \hline \(V\) & The set of nodes in a graph. \\ \hline \(E\) & The set of edges in a graph. \\ \hline \(e_{ij}\) & An edge between node \(i\) and node \(j\). \\ \hline \(N_{i}\) & The neighbors of a node \(i\). \\ \hline \(\mathbf{A}\) & The graph adjacency matrix. \\ \hline \(\tilde{\mathbf{A}}\) & The normalized matrix \(\mathbf{A}\). \\ \hline \(\tilde{\mathbf{A}}^{k},k\in Z\) & The \(k^{th}\) power of \(\tilde{\mathbf{A}}\). \\ \hline \([\mathbf{A}||B]\) & The concatenation of \(A\) and \(B\). \\ \hline \(D\) & The degree matrix of \(\mathbf{A}\). \(D_{ii}=\Sigma_{j=1}^{n}A_{ij}\). \\ \hline \(W^{(l)}\) & The weight matrix of layer \(l\). \\ \hline \(H\in R^{n\times d}\) & The feature matrix of a graph. \\ \hline \(H^{(l)}\in R^{n\times d}\) & The feature matrix of a graph at layer \(l\). \\ \hline \(\tilde{\mathbf{h}}_{i}\in R^{n}\) & The feature vector of the node \(i\) \\ \hline \(\tilde{\mathbf{h}}_{i}^{(l)}\in R^{n}\) & The feature vector of the node \(i\) at layer \(l\). \\ \hline \(Z\in R^{n\times d}\) & The output feature matrix of a graph. \\ \hline \(z_{i}\in R^{n}\) & The output feature vector of the node \(i\) \\ \hline \hline \end{tabular}
\end{table}
Table 1. Commonly used notations in Graph Neural Networks
TextGCN is the first work that treats a text classification task as a node classification problem by constructing a corpus-level graph and has inspired many following works.
Based on TextGCN, several works follow the same graph construction method and node initialization but apply different graph propagation models.
**SGC[123]** To make GCN efficient, SGC (Simple Graph Convolution) removes the nonlinear activation function in GCN layers, therefore, the K-layer propagation of SGC shows as:
\[Z=\text{softmax}(\tilde{\mathbf{A}}...(\tilde{\mathbf{A}}(\tilde{\mathbf{A}}\mathbf{X}\mathbf{W}^{ (0)})\mathbf{W}^{(1)})...\mathbf{W}^{(K)}) \tag{9}\]
which can be reparameterized into
\[Z=\text{softmax}(\tilde{\mathbf{A}}^{K}\mathbf{X}\mathbf{W}) \tag{10}\]
and \(K\) is 2 when applied to text classification tasks. With a smaller number of parameters and only one feedforward layer, SGC saves computation time and resources while improving performance.
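A minimal sketch of the reparameterized SGC propagation in Eq. (10); because \(\tilde{\mathbf{A}}^{K}\mathbf{X}\) contains no learnable parameters, it can be precomputed once, leaving only a single linear layer to train (the inputs below are random placeholders):

```python
import numpy as np

def sgc_logits(A_hat, X, W, K=2):
    """softmax(A~^K X W)  (Eq. 10); A~^K X can be cached before training."""
    prop = np.linalg.matrix_power(A_hat, K) @ X
    logits = prop @ W
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

A_hat = np.full((3, 3), 1 / 3)                 # toy normalized adjacency
Z = sgc_logits(A_hat, np.random.randn(3, 5), np.random.randn(5, 2))
print(Z)
```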
**S\({}^{2}\)GC[148]** To solve the oversmoothing issues in GCN, Zhu and Koniusz [148] propose Simple Spectral Graph Convolution(S\({}^{2}\)GC) which includes self-loops using Markov Diffusion Kernel. The output of S\({}^{2}\)GC is calculated as:
\[Z=\text{softmax}(\frac{1}{K}\Sigma_{k=0}^{K}\tilde{\mathbf{A}}^{k}\mathbf{X}\mathbf{W}) \tag{11}\]
and can be generalized into:
\[Z=\text{softmax}(\frac{1}{K}\Sigma_{k=0}^{K}((1-\alpha)\tilde{\mathbf{A}}^{k}\mathbf{ X}+\alpha\mathbf{X})\mathbf{W}) \tag{12}\]
Similarly, \(K=2\) on text classification tasks and \(\alpha\) denotes the trade-off between self-information of the node and consecutive neighbourhood information. S\({}^{2}\)GC can also be viewed as introducing skip connections into GCN.
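A minimal sketch of the S\({}^{2}\)GC propagation in Eq. (12), averaging the smoothed features over hops \(k=0,\ldots,K\) with the self-information trade-off \(\alpha\) (toy inputs, random weights):

```python
import numpy as np

def s2gc_logits(A_hat, X, W, K=2, alpha=0.05):
    """softmax( (1/K) * sum_{k=0..K} [(1 - alpha) A~^k X + alpha X] W )  (Eq. 12)."""
    acc, A_k = np.zeros_like(X), np.eye(A_hat.shape[0])
    for _ in range(K + 1):
        acc += (1 - alpha) * A_k @ X + alpha * X
        A_k = A_k @ A_hat                      # next power of A~
    logits = (acc / K) @ W
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

A_hat = np.full((3, 3), 1 / 3)
print(s2gc_logits(A_hat, np.random.randn(3, 5), np.random.randn(5, 2)))
```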
**NMGC[53]** Rather than using the sum over each GCN hop as in S\({}^{2}\)GC, NMGC applies min pooling using the Multi-hop neighbour Information Fusion (MIF) operator to address oversmoothing problems. A MIF function is defined as:
\[\text{MIF}(K)=\min(\tilde{\mathbf{A}}\mathbf{X}\mathbf{W},\tilde{\mathbf{A}}^{2}\mathbf{X}\mathbf{W},...,\tilde{\mathbf{A}}^{K}\mathbf{X}\mathbf{W}) \tag{13}\]
NMGC-K firstly applies a MIF(\(K\)) layer then a GCN layer and K is 2 or 3. For example, when \(K=3\), the output is:
\[Z=\text{softmax}(\tilde{\mathbf{A}}(\text{ReLU min}(\tilde{\mathbf{A}}\mathbf{X}\mathbf{W}^{ (0)},\tilde{\mathbf{A}}^{2}\mathbf{X}\mathbf{W}^{(0)},\tilde{\mathbf{A}}^{3}\mathbf{X}\mathbf{W}^{(0) }))\mathbf{W}^{(1)}) \tag{14}\]
NMGC can also be treated as a skip-connection in Graph Neural Networks which makes the shallow layer of GNN contribute to the final representation directly.
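A minimal sketch of the MIF operator in Eq. (13) and its use in NMGC-\(K\) as in Eq. (14); the element-wise minimum over multi-hop propagations is followed by ReLU and a final GCN layer (random weights, toy inputs):

```python
import numpy as np

def mif(A_hat, X, W, K=3):
    """MIF(K) = element-wise min over A~^k X W for k = 1..K  (Eq. 13)."""
    hops, A_k = [], np.eye(A_hat.shape[0])
    for _ in range(K):
        A_k = A_k @ A_hat
        hops.append(A_k @ X @ W)
    return np.minimum.reduce(hops)

def nmgc_logits(A_hat, X, W0, W1, K=3):
    """softmax( A~ ReLU(MIF(K)) W1 )  (Eq. 14)."""
    hidden = np.maximum(mif(A_hat, X, W0, K), 0.0)
    logits = A_hat @ hidden @ W1
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

A_hat = np.full((4, 4), 0.25)
print(nmgc_logits(A_hat, np.random.randn(4, 6), np.random.randn(6, 8), np.random.randn(8, 3)))
```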
**TG-Transformer[137]** TextGCN treats the document nodes and word nodes as the same type of node during propagation; to introduce heterogeneity into the TextGCN graph, TG-Transformer (Text Graph Transformer) adopts two sets of weights for document nodes and word nodes respectively. To cope with a large corpus graph, subgraphs are sampled from the TextGCN graph using the PageRank algorithm[88]. The input embedding is the sum of three types of embeddings: pretrained GloVe embedding, node type embedding, and Weisfeiler-Lehman structural encoding[85]. During propagation, self-attention[114] with graph residual[138] is applied.
**BertGCN[63]** To combine BERT[45] and TextGCN, BertGCN enhances TextGCN by replacing the document node initialization with the BERT [CLS] output of each epoch and replacing the word input vector with zeros. BertGCN trains BERT and TextGCN jointly by interpolating the output of TextGCN and BERT:
\[Z=\lambda Z_{GCN}+(1-\lambda)Z_{BERT} \tag{15}\]
where \(\lambda\) is the trade-off factor. To optimize memory during training, a memory bank is used to track the document inputs and a smaller learning rate is set for the BERT module to maintain the consistency of the memory bank. BertGCN shows that with the help of TextGCN, BERT can achieve better performance.
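The interpolation in Eq. (15) is a simple convex combination of the two branches' class distributions. A minimal sketch (the two input distributions and \(\lambda\) are illustrative):

```python
import numpy as np

def interpolate(z_gcn, z_bert, lam=0.7):
    """Z = lambda * Z_GCN + (1 - lambda) * Z_BERT  (Eq. 15)."""
    return lam * z_gcn + (1 - lam) * z_bert

z = interpolate(np.array([[0.9, 0.1]]), np.array([[0.6, 0.4]]))
print(z)    # [[0.81 0.19]]
```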
#### 3.1.2. Multi-Graphs/Multi-Dimensional Edges: TensorGCN, ME-GCN.
**TensorGCN**[68] Instead of constructing a single corpus-level graph, TensorGCN builds three independent graphs: Semantic-based graph, Syntactic-based graph, and Sequential-based graph to incorporate semantic, syntactic and sequential information respectively and combines them into a tensor graph.
Three graphs share the same set of TF-IDF values for the word-document edge but different values for word-word edges. Semantic-based graph extracts the semantic features from a trained Long short-term memory(LSTM)[36] model and connects the words sharing high similarity. Syntactic-based graph uses Stanford CoreNLP parser[76] and constructs edges between words when they have a larger probability of having dependency relation. For Sequential-based graph, PMI value is applied as TextGCN does.
The propagation includes intra-graph propagation and inter-graph propagation. The model first applies the GCN layer on three graphs separately as intra-graph propagation. Then the same nodes on three graphs are treated as a virtual graph and another GCN layer is applied as inter-graph propagation.
**ME-GCN**[118] To fully utilize the corpus information and analyze rich relational information of the graph, ME-GCN (Multi-dimensional Edge-Embedded GCN) builds a graph with multi-dimensional word-word, word-document and document-document edges. Word2vec and Doc2vec embedding is firstly trained on the given corpus and the similarity of each dimension of trained embedding is used to construct the multi-dimensional edges. The trained embedding also serves as the input embedding of the graph nodes. During propagation, GCN is firstly applied on each dimension and representations on different dimensions are either concatenated or fed into a pooling method to get the final representations of each layer.
#### 3.1.3. Making TextGCN Inductive: HeteGCN, InducT-GCN, T-VGAE.
**HeteGCN**[98] HeteGCN (Heterogeneous GCN) optimizes the TextGCN by decomposing the TextGCN undirected graph into several directed subgraphs. Several subgraphs from TextGCN graph are combined sequentially as different layers: feature graph (word-word graph), feature-document graph (word-document graph), and document-feature graph (document-word graph). Different combinations were tested and the best model is shown as:
\[Z=\text{softmax}(\mathbf{A}_{w-d}(\text{ReLU}(\mathbf{A}_{w-w}\mathbf{X}_{w}\mathbf{W}^{(0)} ))\mathbf{W}^{(1)}) \tag{16}\]
where \(\mathbf{A}_{w-w}\) and \(\mathbf{A}_{w-d}\) show the adjacency matrix for the word-word subgraph and word-document subgraph. Since the input of HeteGCN is the word node embeddings without using document nodes, it can also work in an inductive way while the previous corpus-level graph text classification models are all transductive models.
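A minimal sketch of the best-performing HeteGCN layer ordering in Eq. (16): word features are propagated first through the word-word subgraph and then mapped to documents through the word-document subgraph (the adjacency shapes and random weights below are illustrative assumptions):

```python
import numpy as np

def hetegcn_forward(A_wd, A_ww, X_w, W0, W1):
    """softmax( A_{w-d} ReLU(A_{w-w} X_w W0) W1 )  (Eq. 16)."""
    hidden = np.maximum(A_ww @ X_w @ W0, 0.0)   # word -> word layer
    logits = A_wd @ hidden @ W1                 # word -> document layer
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

n_words, n_docs = 5, 2
A_ww = np.eye(n_words)                          # toy word-word adjacency
A_wd = np.random.rand(n_docs, n_words)          # toy document-by-word adjacency
Z = hetegcn_forward(A_wd, A_ww, np.random.randn(n_words, 4),
                    np.random.randn(4, 8), np.random.randn(8, 3))
print(Z.shape)                                  # (2, 3): per-document class distribution
```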
**InducT-GCN**[119] InducT-GCN (InducTive Text GCN) aims to extend the transductive TextGCN into an inductive model. Instead of using the whole corpus for building the graph, InducT-GCN builds a training corpus graph and makes the input embedding of the document as the TF-IDF vectors, which aligns with the one-hot word embeddings. The weights are learned following TextGCN but InducT-GCN builds virtual subgraphs for prediction on new test documents.
**T-VGAE**[127] T-VGAE (Topic Variational Graph Auto-Encoder) applies a Variational Graph Auto-Encoder on the latent topic of each document to make the model inductive. A vocabulary graph \(A_{v}\), which connects the words using PMI values, is constructed while each document is represented using its TF-IDF vector. All the document vectors are stacked into a matrix which can also be treated as a bipartite graph \(A_{d}\). Two graph auto-encoder models are applied on \(A_{v}\) and \(A_{d}\) respectively. The overall workflow shows as:
\[Z_{v}=\text{Encoder}_{GCN}(\mathbf{A}_{v},\mathbf{X}_{v}) \tag{17}\] \[Z_{d}=\text{Encoder}_{UDMP}(\mathbf{A}_{d},Z_{v})\] (18) \[\mathbf{A}_{v}^{*}=\text{Decoder}(Z_{v})\] (19) \[\mathbf{A}_{d}^{*}=\text{Decoder}(\mathbf{Z}_{d},\mathbf{Z}_{v}) \tag{20}\]
where \(\mathbf{X}_{v}\) is an identity matrix. The \(\text{Encoder}_{GCN}\) and the decoders are applied following \(VGAE\)(Kunze et al., 2017) while \(\text{Encoder}_{UDMP}\) is a unidirectional message passing variant of \(\text{Encoder}_{GCN}\). The training objective is minimising the reconstruction error, and \(Z_{d}\) is used for the classification task.
### Document Nodes as a Graph
To show the global structure of the corpus directly, some models only adopt document nodes in the non-heterogeneous graph.
**knn-GCN(Chen et al., 2017)** knn-GCN constructs a k-nearest-neighbours graph by connecting the documents with their \(K\) nearest neighbours using Euclidean distances of the embedding of each document. The embedding is generated in an unsupervised way: either using the mean of pretrained GloVe word vectors or applying LDA(Kunze et al., 2017). Both GCN and Attention-based GNN(Kunze et al., 2017) are used as the graph model.
**TextGTL(Wang et al., 2017)** Similar to TensorGCN, TextGTL (Text-oriented Graph-based Transductive Learning) constructs three different document graphs: Semantics Text Graph, Syntax Text Graph, and Context Text Graph, while all the graphs are non-heterogeneous. Semantics Text Graph uses Generalized Canonical Correlation Analysis (Chen et al., 2017) and trains a classifier to determine the edge values between two document nodes. Syntax Text Graph uses the Stanford CoreNLP dependency parser (Sutskever et al., 2017) to construct units and also trains a classifier. Context Text Graph defines the edge values by summing up the PMI values of the overlapping words in two documents. Two GCN layers are applied and the output of each graph is mixed as the output of this layer and the input for the next layer for all three graphs:
\[\mathbf{H}^{(1)}=\sigma(\mathbf{A}\mathbf{H}^{(0)}\mathbf{W}^{(0)}) \tag{21}\] \[\mathbf{H}^{(2)}=\sigma(\mathbf{A}[\mathbf{H}^{(1)}_{sem}||\mathbf{H}^{(1)}_{syn}||\mathbf{H}^{(1)}_{seq}]\mathbf{W}^{(1)})\] (22) \[Z=\text{Pooling}_{mean}(\mathbf{H}^{(2)}_{sem},\mathbf{H}^{(2)}_{syn},\mathbf{H}^{(2)}_{seq}) \tag{23}\]
where \(H^{(0)}\) is the TF-IDF vector of the documents. Data augmentation with super nodes is also applied in TextGTL to strengthen the information in graph models.
### Word Nodes as a Graph
By neglecting the document nodes in the graph, a graph with only word nodes shows good performance in deriving the graph-based embedding and is used for downstream tasks. Since no document nodes are included, this method can be easily adapted as an inductive learning model.
**Vgcn-Bert(Sutskever et al., 2017)** VGCN-BERT enhances the input embedding of BERT by concatenating it with the graph embedding. It first constructs a vocabulary graph and uses PMI as the edge value. A variant of the GCN layer called
VGCN(Vocabulary GCN) is applied to derive the graph word embedding:
\[\mathbf{X}_{Graph}=\text{ReLU}(\mathbf{X}_{BERT}\mathbf{A}\mathbf{W}^{(0)})\mathbf{W}^{(1)} \tag{24}\]
where BERT embedding is used as the input. The graph word embeddings are concatenated with BERT embedding and fed into the BERT as extra information.
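A minimal sketch of the VGCN step in Eq. (24) followed by the concatenation with the BERT embeddings; the matrix shapes below are toy assumptions chosen only so the products conform, and in the actual model the concatenated representation is fed back into BERT:

```python
import numpy as np

def vgcn_graph_embedding(X_bert, A, W0, W1):
    """X_graph = ReLU(X_BERT A W0) W1  (Eq. 24)."""
    return np.maximum(X_bert @ A @ W0, 0.0) @ W1

X_bert = np.random.randn(16, 30)                 # toy: 16 tokens, 30-dim input
A = np.random.rand(30, 30)                       # toy PMI vocabulary graph
X_graph = vgcn_graph_embedding(X_bert, A,
                               np.random.randn(30, 30), np.random.randn(30, 8))
X_in = np.concatenate([X_bert, X_graph], axis=-1)   # extra graph information for BERT
print(X_in.shape)                                    # (16, 38)
```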
### Extra Topic Nodes in the Graph
Topic information of each document can also provide extra information in corpus-level graph neural networks. Several models also include topic nodes in the graph.
#### 3.4.1. Single Layer Topic nodes: HGAT, STGCN
**HGAT**[(64)] HGAT (Heterogeneous GAT) applies LDA[(12)] to extract topic information for each document, and the top \(P\) topics with the largest probabilities are connected with the document. Instead of using the words directly, to utilize external knowledge HGAT applies the entity linking tool TAGME1 to identify the entities in the document and connects them. The semantic similarity between entities, computed using pretrained Word2vec with a threshold, is used to define the connectedness between entity nodes. Since the graph is a heterogeneous graph, a HIN (heterogeneous information network) model is implemented which propagates solely on each sub-graph depending on the type of node. An HGAT model is applied by considering type-level attention and node-level attention. For a given node, the type-level attention learns the weights of different types of neighbouring nodes while node-level attention captures the importance of different neighbouring nodes when ignoring the type. By using the dual attention mechanism, HGAT can capture the information of type and node at the same time.
Footnote 1: [https://sobigdata.dscience.org/group/tagme/](https://sobigdata.dscience.org/group/tagme/)
**STGCN**[(130)] In terms of short text classification, STGCN (Short-Text GCN) applies BTM to get topic information to avoid the data sparsity problem from LDA. The graph is constructed following TextGCN while extra topic nodes are included. The edge values of word-topic and document-topic are from BTM and a classical two-layer GCN is applied. The word embeddings learned from STGCN are concatenated with BERT embeddings and a bi-LSTM model is applied for final prediction.
#### 3.4.2. Multi-layer Topic Nodes: DHTG
**DHTG**[(120)] To capture different levels of information, DHTG (Dynamic Hierarchical Topic Graph) introduces hierarchical topic-level nodes in the graph from fine-grain to coarse. Poisson gamma belief network (PGBN)[(145)] is used as a probabilistic deep topic model. The first-layer topics are from the combination of words, while deeper layers are generated by previous layers' topics with the weights of PGBN, and the weights serve as the edge values of each layer of topics. For the topics on the same layer, the cosine similarity is chosen as the edge value. A two-layer GCN is applied and the model is learned jointly with PGBN, which makes the edge of the topics dynamic.
### Critical Analysis
Compared with sequential models like CNN and LSTM, corpus-level GNNs are able to capture the global corpus structure, with word nodes acting as bridges between document nodes, and show great performance without using external resources like pretrained embeddings or pretrained models. However, the improvement in performance is marginal when pretrained embeddings are included. Another issue is that most corpus-level GNNs are transductive, which is not applicable in many real-world settings. Meanwhile, constructing the whole corpus into a single graph requires large memory space, especially when the dataset is large.
A detailed comparison of corpus-level GNN is displayed in Table 2.
## 4. Document-level GNN for Text Classification
By constructing the graph based on each document, a graph classification model can be used as a text classification model. Since each document is represented by one graph and new graphs can be built for test documents, the model can easily work in an inductive way.
### Local Word Consecutive Graph
The simplest way to convert a document into a graph with words as nodes is by connecting the consecutive words within a sliding window.
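As a concrete illustration of this construction, the short Python sketch below builds a word graph for a single document by linking every pair of words that co-occur within a sliding window; the function name and the count-based edge weights are our own simplification rather than any specific model's implementation.

```python
from collections import defaultdict

def build_word_graph(tokens, window_size=3):
    """Sketch: build a document-level graph by linking words that co-occur
    inside a sliding window (edge weights = co-occurrence counts)."""
    nodes = sorted(set(tokens))
    index = {w: i for i, w in enumerate(nodes)}
    edges = defaultdict(int)
    for i, w in enumerate(tokens):
        for j in range(i + 1, min(i + window_size, len(tokens))):
            u, v = index[w], index[tokens[j]]
            if u != v:
                edges[(min(u, v), max(u, v))] += 1
    return nodes, dict(edges)

nodes, edges = build_word_graph("the cat sat on the mat".split(), window_size=3)
print(nodes)
print(edges)
```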
#### 4.1.1. Simple consecutive graph models: Text-Level-GNN, MPAD, TextING.
**Text-Level-GNN[(37)]** Text-Level-GNN applies a small sliding window and constructs the graph with a small number of nodes and edges in each graph, which saves memory and computation time. The edge value is trainable and shared across the graphs when connecting the same two words, which also brings global information.
Unlike corpus-level graph models, Text-Level-GNN applies a message passing mechanism (MPM)[(30)] instead of GCN for graph learning. For each node, the neighbour information is aggregated using max-pooling with trainable edge values as the AGGREGATE function and then the weighted sum is used as the COMBINE function. To get the representation of each graph, sum-pooling and an MLP classifier are applied as the READOUT function. The propagation shows as:
\[\mathbf{h}_{i}^{(l+1)}=(1-\alpha)(\max_{n\in\mathcal{N}_{i}}e_{ni}\mathbf{h}_{n}^{(l)})+\alpha\mathbf{h}_{i}^{(l)} \tag{25}\] \[\mathbf{z}=\text{softmax}(\mathbf{W}\Sigma_{i}\mathbf{h}_{i}+\mathbf{b}) \tag{26}\]
where \(\mathbf{h}_{i}^{(l)}\) is the \(i\)th word node representation at layer \(l\), and \(e_{ni}\) is the edge weight from node \(n\) to node \(i\). A two-layer MPM is applied, and the input of each graph is pretrained GloVe vectors.
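A minimal numpy sketch of Eqs. (25)-(26) is given below, with max-pooling over edge-weighted neighbour messages and a sum-pooling softmax readout; the edge weights are plain numbers here, whereas in Text-Level-GNN they are trainable and shared across graphs.

```python
import numpy as np

def text_level_gnn_layer(h, neighbors, edge_w, alpha=0.3):
    """One message-passing step of Eq. (25): max-pool edge-weighted neighbour
    features, then mix with the node's own state.

    h         : (n, d) node features
    neighbors : list of index lists, neighbors[i] = neighbours of node i
    edge_w    : dict {(n, i): weight}, fixed numbers in this sketch
    """
    new_h = h.copy()
    for i, nbrs in enumerate(neighbors):
        if nbrs:
            msgs = np.stack([edge_w[(n, i)] * h[n] for n in nbrs])
            new_h[i] = (1 - alpha) * msgs.max(axis=0) + alpha * h[i]
    return new_h

def readout(h, W, b):
    """Eq. (26): sum-pool the node states, then apply a softmax classifier."""
    logits = W @ h.sum(axis=0) + b
    e = np.exp(logits - logits.max())
    return e / e.sum()

# toy usage on a 4-node graph with 3 output classes
rng = np.random.default_rng(1)
h = rng.random((4, 5))
neighbors = [[1], [0, 2], [1, 3], [2]]
edge_w = {(n, i): 1.0 for i, nbrs in enumerate(neighbors) for n in nbrs}
h = text_level_gnn_layer(h, neighbors, edge_w)
print(readout(h, rng.random((3, 5)), np.zeros(3)))
```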
**MPAD[(86)]** MPAD (Message Passing Attention Networks) connects words within a sliding window of size 2 but also includes an additional master node connecting all nodes in the graph. The edge only shows the connectedness of each pair of word nodes and is fixed. A variant of Gated Graph Neural Networks is applied where the AGGREGATE function is the weighted sum and the COMBINE function is GRU[(18)]. Self-attention is applied in the READOUT function.
To learn the high-level information, the master node is directly concatenated with the READOUT output, working as a skip connection mechanism. To get the final representation, each layer's READOUT results are concatenated to capture multi-granularity information. Pretrained Word2vec is used as the initialization of word nodes input.
**TextING[(143)]** To simplify MPAD, TextING ignores the master node in the document-level graphs, which makes the graph sparser. Compared with Text-Level-GNN, TextING retains fixed edges. Similar AGGREGATE and COMBINE functions are applied under the concept of Gated Graph Neural Networks (GGNN)[(58)], with the weighted sum and GRU. However, for the READOUT function, soft attention is used and both max-pooling and mean-pooling are applied to make sure that "every word plays a role in the text and the keywords should contribute more explicitly".
#### 4.1.2. Advanced graph models: MLGNN, TextSSL, DADGNN.
**MLGNN[61]** MLGNN (Multi-level GNN) builds the same graph as TextING but introduces three levels of MPM: bottom-level, middle-level and top-level. In the bottom-level MPM, the same method as in Text-Level-GNN is applied with pretrained Word2vec as the input embedding, but the edges are non-trainable. In the middle level, a larger window size is adopted and Graph Attention Networks (GAT)[115] are applied to learn information from distant word nodes. In the top-level MPM, all word nodes are connected and multi-head self-attention[114] is applied. By applying three different levels of MPM, MLGNN learns multi-granularity information well.
**DADGNN[70]** DADGNN (Deep Attention Diffusion GNN) constructs the same graph as TextING but uses attention diffusion to overcome the oversmoothing issue. Pretrained word embedding is used as the input of each node and an MLP layer is applied. Then, the graph attention matrix is calculated based on the attention to the hidden states of each node. The diffusion matrix is calculated as
\[T=\Sigma_{n=0}^{\infty}\epsilon_{n}\mathbf{A}^{n} \tag{27}\]
where \(A\) is the graph attention matrix and \(\epsilon\) is the learnable coefficients. \(A^{n}\) plays a role of connecting \(n\)-hop neighbours and Liu et al. [70] uses \(n\in[4,7]\) in practice. A multi-head diffusion matrix is applied for layer propagation.
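A truncated version of the diffusion in Eq. (27) can be sketched as follows; the hop count and the geometric coefficients used in the toy example are illustrative, whereas in DADGNN the coefficients \(\epsilon_{n}\) are learnable.

```python
import numpy as np

def attention_diffusion(attn, eps, hops=7):
    """Truncated Eq. (27): T = sum_n eps_n A^n, where A is the graph attention
    matrix and eps is a list of (learnable) coefficients."""
    T = np.zeros_like(attn)
    power = np.eye(attn.shape[0])
    for n in range(hops + 1):
        T += eps[n] * power          # add the n-hop term
        power = power @ attn         # advance to A^(n+1)
    return T

A = np.array([[0.0, 1.0], [1.0, 0.0]]) * 0.5     # toy attention matrix
eps = [0.5 ** n for n in range(8)]                # e.g. geometric decay
print(attention_diffusion(A, eps, hops=7))
```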
**TextSSL[95]** To solve the word ambiguity problem and show the word synonymity and dynamic contextual dependency, TextSSL (Sparse Structure Learning) learns the graph using intra-sentence neighbours and inter-sentence neighbours simultaneously. The local syntactic neighbour is defined as the consecutive words and trainable edges across graphs are also included by using Gumbel-softmax. By applying sparse structure learning, TextSSL manages to select edges with dynamic contextual dependencies.
### Global Word Co-occurrence Graph
Similar to the TextGCN graph, document-level graphs can also use PMI as the word-word edge values.
#### 4.2.1. Only global word co-occurrence: DAGNN
**DAGNN[125]** To address the long-distance dependency, hierarchical information and cross-domain learning challenges in domain-adversarial text classification tasks, Wu et al. [125] propose DAGNN (Domain-Adversarial Graph Neural Network). Each document is represented by a graph with content words as nodes and PMI values as edge values, which can capture long-distance dependency information. Pretrained FastText is chosen as the input word embeddings to handle the out-of-vocabulary issue and a GCN model with skip connection is used to address the oversmoothing problem. The propagation is formulated as:
\[\mathbf{H}^{(l+1)}=(1-\alpha)\mathbf{\tilde{A}}\mathbf{H}^{(l)}+\alpha\mathbf{H}^{(0)} \tag{28}\]
To learn the hierarchical information of documents, DiffPool[136] is applied to assign each document into a set of clusters. Finally, adversarial training is used to minimize the loss on source tasks and maximize the differentiation between source and target tasks.
#### 4.2.2. Combine with Extra Edges: ReGNN, GFN
**ReGNN[56]** To capture both global and local information, ReGNN (Recursive Graphical Neural Network) uses PMI together with consecutive words as the word edges. The graph propagation function is the same as in GGNN, while additive attention[7] is applied in aggregation. Pretrained GloVe is the input embedding of each word node.
**GFN[20]** GFN (Graph Fusion Network) builds four types of graphs using the word co-occurrence statistics, PMI, the similarity of pretrained embedding and Euclidean distance of pretrained embedding. Although four corpus-level
graphs are built, the graph learning actually happens on subgraphs of each document, making the method a document-level GNN. For each subgraph, each type of graph is learned separately using the graph convolutional method and then a fusion method of concatenation is used. After an MLP layer, average pooling is applied to get the document representation.
### Other word graphs
Some other ways of connecting words in a document have been explored.
**HyperGAT[(25)]** Ding et al. [(25)] proposes HyperGAT (Hypergraph Attention Networks) which builds hypergraphs for each document to capture high-level interaction between words. Two types of hyperedges are included: sequential hyperedges connecting all words in a sentence and semantic hyperedges connecting top-K words after getting the topic of each word using LDA. Like traditional hypergraph propagations, HyperGAT follows the same two steps of updating but with an attention mechanism to highlight the key information: Node-level attention is applied to learn hyperedges representations and edge-level attention is used for updating node representations.
**IGCN[(110)]** Contextual dependency helps in understanding a document and the graph neural network is no exception. IGCN constructs the graph with the dependency graph to show the connectedness of each pair of words in a document. Then, the word representation learned from Bi-LSTM using POS embedding and word embedding is used to calculate the similarity between each pair of nodes. Attention is used for the output to find the important relevant semantic features.
**GTNT[(80)]** Words with higher TF-IDF values should connect to more word nodes. With this in mind, GTNT (Graph Transformer Networks based Text representation) uses the sorted TF-IDF values to determine the degree of each node and applies the Havel-Hakimi algorithm[(32)] to determine the edges between word nodes. A variant of GAT is applied during model learning. Although GAT's attention score is mutual for two nodes, GTNT uses relevant importance to adjust the attention score from one node to another. Pretrained Word2vec is applied as the input of each node.
### Critical Analysis
Most document-level GNNs connect consecutive words as edges in the graph and apply a graph neural network model, which makes them similar to CNN where the receptive field enlarges when graph models go deeper. Also, the major differences among document-level GNNs are the details of graph models, e.g. different pooling methods, and different attention calculations, which diminishes the impact of the contribution of these works. Compared with corpus-level GNN, document-level GNN adopts more complex graph models and also suffers from the out-of-memory issue when the number of words in a document is large.
A detailed comparison of document-level GNN is displayed in Table 2.
## 5. Datasets and Metrics
### Datasets
There are many popular text classification benchmark datasets, while this paper mainly focuses on the datasets used by GNN-based text classification applications. Based on the purpose of applications, we divided the commonly adopted datasets into three types including _Topic Classification_, _Sentiment Analysis_ and _Other_. Most of these text classification datasets contain a single target label of each text body. The key information of each dataset is listed in Table 3.
#### 5.1.1. Topic Classification.
Topic classification models aim to classify input text bodies from diverse sources into predefined categories. News categorization is a typical topic classification task to obtain key information from news and classify them into corresponding topics. The input text bodies normally are paragraphs or whole documents especially for news categorization, while there are still some short text classification datasets from certain domains such as micro-blogs, bibliography, etc. Some typical datasets are listed:
* _Ohsumed_[(40)] is acquired from the MEDLINE database and further processed by [(133)] via selecting certain documents (abstracts) and filtering out the documents belonging to multiple categories. Those documents are classified into 23 cardiovascular diseases. The statistics of [(133)] processed Ohsumed dataset is represented in Table 3, which is directly employed by other related works.
* _R8 / R52_ are two subsets of the Reuters 21578 dataset 2 which contain 8 and 52 news topics from Reuters financial news services, respectively. Footnote 2: For the original Reuters 21578 dataset, please refer to this link [http://www.daviddlewis.com/resources/testcollections/reuters21578](http://www.daviddlewis.com/resources/testcollections/reuters21578)
* _20NG_ is another widely used news categorization dataset that contains 20 newsgroups. It was originally collected by [(50)], but the procedures are not explicitly described.
| **Graph** | **Model** | **External Resource** | **Edge Construction** | **Node Initialization** | **Learning** |
|---|---|---|---|---|---|
| Corpus-level | TextGCN [(133)] | N/A | pmi, tf-idf | one-hot | transductive |
| Corpus-level | SGC [(123)] | N/A | pmi, tf-idf | one-hot | transductive |
| Corpus-level | S2GC [(148)] | N/A | pmi, tf-idf | one-hot | transductive |
| Corpus-level | NMGC [(53)] | N/A | pmi, tf-idf | one-hot | transductive |
| Corpus-level | TG-transformer [(137)] | GloVe | pmi, tf-idf | GloVe | transductive |
| Corpus-level | BertGCN [(63)] | BERT | pmi, tf-idf | doc: 0, word: BERT emb | transductive |
| Corpus-level | TensorGCN [(68)] | GloVe, CoreNLP | emb sim, dep graph, pmi, tf-idf | one-hot | transductive |
| Corpus-level | MI-GCN [(118)] | N/A | emb sim, tf-idf | trained Word2vec/doc2vec | transductive |
| Corpus-level | HetGCN [(158)] | N/A | pmi, tf-idf | one-hot | inductive |
| Corpus-level | InducT-GCN [(119)] | N/A | pmi, tf-idf | one-hot, tf-idf vectors | inductive |
| Corpus-level | T-VGAE [(127)] | N/A | pmi | one-hot | inductive |
| Corpus-level | VGCN-BERT [(73)] | BERT | pmi | BERT emb | transductive |
| Corpus-level | knn-GCN [(5)] | GloVe | emb sim | GloVe | transductive |
| Corpus-level | TextGTL [(54)] | CoreNLP | dep graph, pmi | tf-idf vectors | transductive |
| Corpus-level | HGAT [(64)] | TAGME, Word2vec | LDA, entity link, emb sim | tf-idf, LDA, Word2vec | transductive |
| Corpus-level | STGCN [(134)] | BERT | pmi, tf-idf, BTM | BERT emb | transductive |
| Corpus-level | DHTG [(120)] | N/A | PGBN, pmi, tf-idf | one-hot | transductive |
| Doc-level | Text-Level-GNN [(37)] | GloVe | consecutive words | GloVe | inductive |
| Doc-level | MPAD [(86)] | Word2vec | consecutive words | Word2vec | inductive |
| Doc-level | TextING [(143)] | GloVe | consecutive words | GloVe | inductive |
| Doc-level | MLGNN [(61)] | Word2vec | consecutive words | Word2vec | inductive |
| Doc-level | DADGNN [(70)] | Word2vec/GloVe | consecutive words | Word2vec/GloVe | inductive |
| Doc-level | TextSSL [(95)] | GloVe | consecutive words | GloVe | inductive |
| Doc-level | DAGNN [(125)] | GloVe | pmi | GloVe | inductive |
| Doc-level | ReGNN [(56)] | GloVe | consecutive words, pmi | GloVe | inductive |
| Doc-level | GFN [(20)] | GloVe | pmi, emb sim | GloVe | inductive |
| Doc-level | HyperGAT [(25)] | N/A | LDA, consecutive words | one-hot | inductive |
| Doc-level | IGCN [(110)] | spaCy | dep graph | LSTM emb | inductive |
| Doc-level | GTNT [(80)] | Word2vec/GloVe | sorted tf-idf value | Word2vec/GloVe | inductive |

Table 2. Detailed comparison of the models: whether external resources are used, how the edges and node inputs are constructed, and whether the learning is transductive or inductive. GloVe and Word2vec are pretrained if not specified. "emb sim" is short for "embedding similarity"; "dep graph" is short for "dependency graph".
* _AG News_[141] is a large-scale news categorization dataset compared with other commonly used datasets which are constructed by selecting the top-4 largest categories from the AG corpus. Each news topic contains 30,000 samples for training and 1900 samples for testing.
* _Database systems and Logic Programming (DBLP)_ is a topic classification dataset to classify computer science paper titles into six various topics [80]. Different from paragraph- or document-based topic classification datasets, DBLP aims to categorise scientific paper titles into corresponding categories; the average input sentence length is much lower than in the other datasets.
* _Dbpedia_[52] is a large-scale multilingual knowledge base that contains 14 non-overlapping categories. Each category contains 40000 samples for training and 5000 samples for testing.
* _WebKB_[19] is a long corpus web page topic classification dataset.
* _TREC_[57] is a question topic classification dataset to categorise one question sentence into 6 question categories.
#### 5.1.2. Sentiment Analysis.
The purpose of sentiment analysis is to analyse and mine the opinion of textual content, which can be treated as a binary or multi-class classification problem. The sources of existing sentiment analysis tasks include movie reviews, product reviews or user comments, social media posts, etc. Most sentiment analysis datasets aim to predict people's opinions from one or two input sentences, with the average length of each input text body around 25 tokens.
* _Movie Review (MR)_[(89)] is a binary sentiment classification dataset for movie review which contains positive and negative data equally distributed. Each review only contains one sentence.
* _Stanford Sentiment Treebank (SST)_[(106)] is an upgraded version of MR which contains two subsets SST-1 and SST-2. SST-1 provides five fine-grained labels while SST-2 is a binary sentiment classification dataset.
* _Internet Movie DataBase (IMDB)_[(74)] is also an equally distributed binary classification dataset for sentiment analysis. Different from other short text classification datasets, the average number of words in each review is around 221.
* _Yelp 2014_[(109)] is a large scale binary category based sentiment analysis dataset for longer user reviews collected from Yelp.com.
Certain binary sentiment classification benchmark datasets are also used by GNN-based text classifiers. Most of them are gathered from shorter user reviews or comments (normally one or two sentences) from different websites including Amazon Alexa Reviews (_AAR_), Twitter US Airline (_TUA_), Youtube comments (_SenTube-A_ and _SenTube-T_) [(113)].
#### 5.1.3. Other Datasets
There are some datasets targeting other tasks including hate detection, grammaticality checking, etc. For example, _ArangoHate_[(4)] is a hate detection dataset, a sub-task of intent detection, which contains 2920 hateful documents and 4086 normal documents by resampling the merged datasets from [(21)] and [(121)]. In addition, [(26)] proposes another large scale hate language detection dataset, namely _FountaHate_, to classify tweets into four categories including 53,851, 14,030, 27,150, and 4,965 samples of normal, spam, hateful and abusive, respectively. Since there is no officially provided training and testing splitting ratio for the above datasets, the numbers presented in Table 3 follow the ratios (train/development/test is 85:5:10) defined by [(73)].
#### 5.1.4. Dataset Summary
Since an obvious limitation of corpus-level GNN models is their high memory consumption [(37; 25; 137)], datasets with a smaller number of documents and smaller vocabulary sizes such as Ohsumed, R8/R52, 20NG or MR are widely used to ensure that corpus-level graphs can be feasibly built and evaluated. For document-level GNN based models, larger datasets like AG-News can be adopted without considering the memory consumption problem. From Table 3, we find that most of the related works mainly focus on GNNs applied to topic classification and sentiment analysis, which means the role of GNNs in other text classification tasks such as spam detection, intent detection and abstractive question answering needs to be further exploited. Another observed trend is that short text classification has gained less attention compared with long document classification tasks. In this case, GNNs for short text classification may be a promising direction to explore.
### Evaluation Methods
#### 5.2.1. Performance Metrics
In terms of evaluating and comparing the performance of proposed models with other baselines, accuracy and F1 are most commonly used metrics to conduct overall performance analysis, ablation studies and breakdown analysis. We use \(TP\), \(FP\), \(TN\) and \(FN\) to represent the number of true positive, false positive, true negative and false negative samples. \(N\) is the total number of samples.
* _Accuracy_ and _Error Rate_: are basic evaluation metrics adopted by many GNN-based text classifiers such as [(54; 67; 120; 133; 137)]. Most of the related papers run all baselines and their models 10 times or 5 times to show
the mean \(\pm\) standard deviation of accuracy for reporting more convincing results. It can be defined as: \[Accuracy=\frac{(TP+TN)}{N},\] (29) \[ErrorRate=1-Accuracy=\frac{(FP+FN)}{N}.\] (30)
* _Precision_, _Recall_ and _F1_: are metrics for measuring performance, especially on imbalanced datasets. Precision is used to measure the relevancy of the results, while recall is utilised to measure how many truly relevant results are acquired. F1 is obtained by calculating the harmonic mean of Precision and Recall. These three measurements can be defined as: \[Precision=\frac{TP}{(TP+FP)},\] (31) \[Recall=\frac{TP}{(TP+FN)},\] (32) \[F1=\frac{2\times Precision\times Recall}{(Precision+Recall)},\] (33) Few papers only utilise recall or precision to evaluate the performance [80]. However, precision and recall are more commonly used together with F1 or Accuracy to evaluate and analyse the performance from different perspectives, e.g. [56, 64, 73, 127]. In addition, based on different application scenarios, different F1 averaging methods are adopted by those papers to measure the overall F1 score of multi-class (number of classes is \(C\)) classification tasks, including:
* _Macro-F1_ applies the same weights to all categories to get overall \(F1_{macro}\) by taking the arithmetic mean. \[F1_{macro}=\frac{1}{C}\Sigma_{i=1}^{C}F1_{i},\] (34)
* _Micro-F1_ is calculated by considering the overall \(P_{micro}\) and \(R_{micro}\). It can be defined as: \[F1_{micro}=\frac{2\times P_{micro}\times R_{micro}}{(P_{micro}+R_{micro})}\] (35) where: \[P_{micro}=\frac{\Sigma_{i\in C}TP_{i}}{\Sigma_{i\in C}(TP_{i}+FP_{i})},R_{micro}=\frac{\Sigma_{i\in C}TP_{i}}{\Sigma_{i\in C}(TP_{i}+FN_{i})},\] (36)
* _Weighted-F1_ is the weighted mean of the F1 of each category, where the weight \(W_{i}\) is related to the number of occurrences of the corresponding \(i\)th class. It can be defined as: \[F1_{weighted}=\Sigma_{i=1}^{C}F1_{i}\times W_{i},\] (37) A minimal computation sketch of these per-class and averaged scores is given below.
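The sketch below computes the per-class F1 together with the macro-, micro- and weighted-averaged scores of Eqs. (31)-(37) directly from label lists; libraries such as scikit-learn provide equivalent functions, and the toy labels are illustrative only.

```python
from collections import Counter

def f1_scores(y_true, y_pred):
    """Per-class F1 plus macro-, micro- and weighted-F1 from raw label lists."""
    classes = sorted(set(y_true) | set(y_pred))
    tp, fp, fn = Counter(), Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1
            fn[t] += 1
    per_class = {}
    for c in classes:
        prec = tp[c] / (tp[c] + fp[c]) if tp[c] + fp[c] else 0.0
        rec = tp[c] / (tp[c] + fn[c]) if tp[c] + fn[c] else 0.0
        per_class[c] = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    macro = sum(per_class.values()) / len(classes)
    p_micro = sum(tp.values()) / (sum(tp.values()) + sum(fp.values()))
    r_micro = sum(tp.values()) / (sum(tp.values()) + sum(fn.values()))
    micro = 2 * p_micro * r_micro / (p_micro + r_micro)
    support = Counter(y_true)
    weighted = sum(per_class[c] * support[c] / len(y_true) for c in classes)
    return per_class, macro, micro, weighted

print(f1_scores(["a", "a", "b", "c"], ["a", "b", "b", "b"]))
```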
#### 5.2.2. Other Evaluation Aspects.
Since two limitations of GNN-based models are time and memory consumption, many related studies also report and compare the GPU or CPU memory consumption and the training time efficiency of the proposed models, in addition to the commonly used performance comparison, to demonstrate their practicality in real-world applications. Furthermore, based on the novelties of various models, specific evaluation methods are conducted to demonstrate the proposed contributions.
* _Memory Consumption_: [25, 37, 70] list the memory consumption of different models for comprehensively evaluating the proposed models in computational efficiency aspect.
* _Time Measurement_: [90; 98] compare the training time of their proposed models and baselines on different benchmarks. Given the doubts about the efficiency of applying GNNs to text classification, this is an effective way to demonstrate that the models can balance performance and time efficiency well.
* _Parameter Sensitivity_ analysis is commonly conducted in GNN studies to investigate the effect of different hyperparameters, e.g. varying sliding window sizes and embedding dimensions, with the model sensitivity typically presented via line charts, as in [70; 25; 64].
* _Number of Labelled Documents_ is a widely adopted evaluation method for GNN-based text classification models [25; 54; 80; 120; 133; 64], which mainly analyses the performance trend when using different proportions of training data to test whether the proposed model can work well with limited labelled training data.
* _Vocabulary Size_ is similar to the number of labelled documents but it investigates the effects of using different sizes of vocabulary during the GNN training stage adopted by [120].
#### 5.2.3. Metrics Summary.
For general text classification tasks, Accuracy, Precision, Recall and various F1 scores are the commonly used evaluation metrics for comparing with other baselines. However, for GNN-based models, reporting the model performance alone cannot effectively represent the multiple aspects of the proposed models. Therefore, many papers conduct additional analyses to evaluate the GNN-based classifiers from multiple views, including time and memory consumption, model sensitivity and dataset quantity.
## 6. Performance
While different GNN text classification models may be evaluated on different datasets, there are some datasets that are commonly used across many of these models, including **20NG**, **R8**, **R52**, **Ohsumed** and **MR**. The accuracy of various models assessed on these five datasets is presented in Table 4. Some of the results are reported as ten-run average accuracy with standard deviation, while some only report the average accuracy. Several conclusions can be drawn:
| **Type** | **Method** | **External Resource** | **20NG** | **R8** | **R52** | **Ohsumed** | **MR** |
|---|---|---|---|---|---|---|---|
| Corpus-level | TextGCN [133] | N/A | 86.34 ± 0.09 | 97.07 ± 0.10 | 93.56 ± 0.18 | 68.36 ± 0.56 | 76.74 ± 0.20 |
| Corpus-level | SGC [123] | N/A | 88.5 ± 0.1 | 97.2 ± 0.1 | 94.0 ± 0.2 | 68.5 ± 0.3 | 75.9 ± 0.3 |
| Corpus-level | S2GC [148] | N/A | 88.6 ± 0.1 | 97.4 ± 0.1 | 94.5 ± 0.2 | 68.5 ± 0.1 | 76.7 ± 0.0 |
| Corpus-level | TG-transformer [137] | GloVe | - | 98.1 ± 0.1 | 95.2 ± 0.2 | 70.4 ± 0.4 | - |
| Corpus-level | DHTG [120] | N/A | 87.13 ± 0.07 | 97.33 ± 0.06 | 93.93 ± 0.10 | 68.80 ± 0.33 | 77.21 ± 0.11 |
| Corpus-level | TensorGCN [68] | GloVe, CoreNLP | 87.74 ± 0.05 | 98.04 ± 0.08 | 95.05 ± 0.11 | 70.11 ± 0.24 | 77.91 ± 0.07 |
| Corpus-level | STGCN [134] | BERT | - | 98.5 | - | - | 82.5 |
| Corpus-level | NMGC [53] | N/A | 86.61 ± 0.06 | 97.31 ± 0.09 | 94.35 ± 0.06 | 69.21 ± 0.17 | 76.21 ± 0.25 |
| Corpus-level | BertGCN [63] | BERT | 89.3 | 98.1 | 96.6 | 72.8 | 86 |
| Corpus-level | RobertaGCN [63] | RoBERTa | 89.5 | 98.2 | 96.1 | 72.8 | 89.7 |
| Corpus-level | T-VGAE [127] | N/A | 88.08 ± 0.06 | 97.68 ± 0.14 | 95.05 ± 0.10 | 70.02 ± 0.14 | 78.03 ± 0.11 |
| Doc-level | ReGNN [56] | GloVe | - | 97.93 ± 0.31 | 95.17 ± 0.17 | 67.93 ± 0.33 | 78.71 ± 0.56 |
| Doc-level | Text-Level-GNN [37] | GloVe | 84.16 ± 0.25\* | 97.8 ± 0.2 | 94.6 ± 0.3 | 69.4 ± 0.6 | 75.47 ± 0.06\* |
| Doc-level | TextING [143] | GloVe | - | 98.13 ± 0.12 | 95.68 ± 0.35 | 70.84 ± 0.52 | 80.19 ± 0.31 |
| Doc-level | HyperGAT [25] | N/A | 86.62 ± 0.16 | 97.97 ± 0.23 | 94.98 ± 0.27 | 69.90 ± 0.34 | 78.32 ± 0.27 |
| Doc-level | TextSSL [95] | GloVe | 85.26 ± 0.28 | 97.81 ± 0.14 | 95.48 ± 0.26 | 70.59 ± 0.38 | 79.74 ± 0.19 |

Table 4. Performance table. "-" indicates unavailability. \* refers to replication from HyperGAT [25].
* Models that use external resources usually achieve better performance than those that do not, especially models with BERT and RoBERTa(Rosen et al., 2019; Wang et al., 2020).
* Under the same setting, such as using GloVe as the external resource, Corpus-level GNN models (e.g. TG-Transformer(Wang et al., 2020), TensorGCN(Wang et al., 2020)) typically outperform Document-level GNN models (e.g. TextING(Wang et al., 2020), TextSSL(Wang et al., 2020)). This is because Corpus-level GNN models can work in a transductive way and make use of the test input, whereas Document-level GNN models can only use the training data.
* The advantage of Corpus-level GNN models over Document-level GNN models only applies to topic classification datasets and not to sentiment analysis datasets such as **MR**. This is because sentiment analysis involves analyzing the order of words in a text, which is something that most Corpus-level GNN models cannot do.
## 7. Challenges and Future Work
### Model Performance
With the development of pre-trained models(Wang et al., 2020; Wang et al., 2020) and prompt learning methods(Wang et al., 2020; Wang et al., 2020), these approaches achieve great performance on text classification. Applying GNNs to text classification without this pre-training style cannot reach such good performance. For both corpus-level and document-level GNN text classification models, researching how to combine GNN models with these pretrained models to further improve performance can be future work. Meanwhile, more advanced graph models can be explored, e.g. more heterogeneous graph models on word and document graphs, to improve the model performance.
### Graph Construction
Most GNN text classification methods use single, static-valued edges to construct graphs based on document statistics. This approach applies to both corpus-level and document-level GNNs. However, to better explore the complex relationship between words and documents, more dynamic edges and hyperedges can be utilized. Dynamic edges in GNNs can be learned from various sources, such as the graph structure, document semantic information, or other models, and hyperedges can be built for a more expressive representation of the complex relationships between nodes in the graph.
### Application
While corpus-level GNN text classification models have demonstrated good performance without using external resources, these models are mostly transductive. To apply them in real-world settings, an inductive learning approach should be explored. Although some inductive corpus-level GNNs have been introduced, the large amount of space required to construct the graph and the inconvenience of incremental training still present barriers to deployment. Improving the scalability of online training and testing for inductive corpus-level GNNs represents a promising area for future work.
## 8. Conclusion
This survey article introduces how Graph Neural Networks have been applied to text classification in two different ways: corpus-level GNN and document-level GNN, with a detailed structural figure. Details of these models have been introduced and discussed, along with the datasets commonly used by these methods. Compared with traditional machine learning and sequential deep learning models, graph neural networks can explore the relationship between words and documents in the global structure (corpus-level GNN) or the local document (document-level GNN), giving a good
performance. A detailed performance comparison is applied to investigate the influence of external resources, model learning methods, and types of different datasets. Furthermore, we propose the challenges for GNN text classification models and potential future work.
|
2303.02289
|
Central BH mass of tidal disruption event candidate SDSS J0159 through
long-term optical variabilities
|
In this manuscript, central BH mass is determined in the tidal disruption
event (TDE) candidate SDSS J0159, through the nine years long variabilities, in
order to check whether the virial BH mass is consistent with the mass estimated
by another independent methods. First, host galaxy spectroscopic features are
described by 350 simple stellar templates, to confirm the total stellar mass
about $7\times10^{10}{\rm M_\odot}$ in SDSS J0159, indicating the virial BH
mass about two magnitudes larger than the BH mass estimated by the total
stellar mass. Second, based on an efficient method and fitting procedure,
through theoretical TDE model applied to describe the SDSS $ugriz$-band light
curves of SDSS J0159, central BH mass can be determined as
$M_{BH}\sim4.5_{-1.1}^{+1.3}\times10^6{\rm M_\odot}$, well consistent with the
M-sigma relation expected BH mass and the total stellar mass expected BH mass.
Third, the theoretical TDE model with parameter of central BH mass limited to
be higher than $10^8{\rm M_\odot}$ can not lead to reasonable descriptions to
the light curves of SDSS J0159, indicating central BH mass higher than
$10^8{\rm M_\odot}$ is not preferred in SDSS J0159. Therefore, the TDE model
determined central BH mass of SDSS J0159 are about two magnitudes lower than
the virial BH mass, to support central BLRs including accreting debris
contributions from central TDE, and provide interesting clues to reconfirm that
outliers in the space of virial BH mass versus stellar velocity dispersion
should be better candidates of TDE.
|
XueGuang Zhang
|
2023-03-04T01:40:04Z
|
http://arxiv.org/abs/2303.02289v1
|
Central BH mass of tidal disruption event candidate SDSS J0159 through long-term optical variabilities
###### Abstract
In this manuscript, central BH mass is determined in the tidal disruption event (TDE) candidate SDSS J0159, through the nine years long variabilities, in order to check whether the virial BH mass is consistent with the mass estimated by another independent methods. First, host galaxy spectroscopic features are described by 350 simple stellar templates, to confirm the total stellar mass about \(7\times 10^{10}\)M\({}_{\odot}\) in SDSS J0159, indicating the virial BH mass about two magnitudes larger than the BH mass estimated by the total stellar mass. Second, based on an efficient method and fitting procedure, through theoretical TDE model applied to describe the SDSS \(ugriz\)-band light curves of SDSS J0159, central BH mass can be determined as \(M_{BH}\sim 4.5^{+1.3}_{-1.1}\times 10^{6}\)M\({}_{\odot}\), well consistent with the M-sigma relation expected BH mass and the total stellar mass expected BH mass. Third, the theoretical TDE model with parameter of central BH mass limited to be higher than \(10^{8}\)M\({}_{\odot}\) can not lead to reasonable descriptions to the light curves of SDSS J0159, indicating central BH mass higher than \(10^{8}\)M\({}_{\odot}\) is not preferred in SDSS J0159. Therefore, the TDE model determined central BH mass of SDSS J0159 are about two magnitudes lower than the virial BH mass, to support central BLRs including accreting debris contributions from central TDE, and provide interesting clues to reconfirm that outliers in the space of virial BH mass versus stellar velocity dispersion should be better candidates of TDE.
active galactic nuclei - emission line galaxies - supermassive black holes - tidal disruption - transient sources
## 1 Introduction
SDSS J0159 (=SDSS J015957.64+003310.5) at redshift \(z=0.312\) (corresponding luminosity distance about 1625Mpc) is an interesting object in the literature. LaMassa et al. (2015) have reported SDSS J0159 as the first changing-look quasar that transitioned from a Type 1 quasar (both apparent broad H\(\alpha\) and H\(\beta\) in the optical spectrum) to a Type 1.9 AGN (Active Galactic Nucleus) (only weak broad H\(\alpha\) in the optical spectrum) between 2000 and 2010, and reported the dimming of the AGN continuum as the intrinsic physical reason to explain the changing-look properties. Meanwhile, Merloni et al. (2015) have reported SDSS J0159 as a candidate Tidal Disruption Event (TDE), due to the very rapid rise and a decay trend of the long-term optical variabilities well described by \(t^{-5/3}\) as expected from the theoretical TDE model, indicating that TDEs can be well treated as one explanation for changing-look AGNs, as discussed in Yang et al. (2018); Zhang (2021).
A star can be tidally disrupted by the gravitational tidal force of a central massive black hole (BH) when it passes close to the central BH, at a distance larger than the event horizon of the BH but smaller than the expected tidal disruption radius \(R_{T}=R_{\star}\times(\frac{M_{\rm BH}}{M_{\star}})^{1/3}\), where \(R_{\star}\), \(M_{\star}\) and \(M_{\rm BH}\) represent the radius and mass of the tidally disrupted star and the central BH mass, respectively. The fallback materials from the tidally disrupted star can then be accreted by the central massive BH, leading to a flare-up followed by a decline. This is the basic picture of a TDE.
The well-known pioneer work on TDE can be found in Rees (1988). Since then, as an excellent beacon for indicating massive black holes, both theoretical simulations and observational results on TDEs have been widely studied and reported in the literature. More detailed and improved simulations on TDEs can be found in Evans & Kochanek (1989); Magorrian & Tremaine (1999); Bogdanovic et al. (2004); Lodato et al. (2009);
MacLeod et al. (2012); Guillochon and Ramirez-Ruiz (2015); Lodato et al. (2015); Bonnerot et al. (2017); Darbha et al. (2018); Coughlin and Nixon (2019); Curd and Narayan (2019); Golightly et al. (2019), etc. More recent detailed reviews on TDEs can be found in Stone et al. (2018). Based on the theoretical TDE model, the public codes _TDEFIT_ provided by Guillochon et al. (2014) ([https://github.com/guillochon/tdefit](https://github.com/guillochon/tdefit)) and _MOSFIT_ provided by Guillochon et al. (2018) ([https://github.com/guillochon/mosfit](https://github.com/guillochon/mosfit)) have been widely applied to describe, through hydrodynamical simulations, both the detailed structure evolution of the falling-back stellar debris and the evolution of the expected time-dependent long-term variability. Therefore, TDE models with sufficient model parameters can be applied to describe the observed long-term variability from TDEs.
Meanwhile, there are so-far around one hundred observational results on TDE candidates reported in the literature based on TDE expected variability properties in different multi-wavelength bands, such as the reported TDE candidates in Komossa et al. (2004); Cenko et al. (2012); Gezari et al. (2012); Holoien et al. (2014, 2016); Lin et al. (2017); Tadhunter et al. (2017); Mattila et al. (2018); Wang et al. (2018); Holoien et al. (2020); Liu et al. (2020); Neustadt et al. (2020); Goodwin et al. (2022), etc.. More recently, van Velzen et al. (2021) have reported seventeen TDE candidates from the First Half of ZTF (Zwicky Transient Facility) Survey observations along with Swift UV and X-ray follow-up observations. Sazonov et al. (2021) have reported thirteen TDE candidates from the SRG all-sky survey observations and then confirmed by follow-up optical observations. More recent review on observational properties of reported TDEs can be found in Gezari (2021).
Among the reported TDE candidates, SDSS J0159 is an interesting object, because its central virial BH mass, reported as \(\sim 10^{8}\mathrm{M}_{\odot}\) in LaMassa et al. (2015); Merloni et al. (2015) through the virialization assumptions (Peterson et al., 2004; Greene and Ho, 2005; Vestergaard and Peterson, 2006; Kelly and Bechtold, 2007; Rafiee and Hall, 2011; Shen et al., 2011; Mejia-Restrepo et al., 2022) applied to BLRs (broad emission line regions) clouds, is the largest among the BH masses reported for TDE candidates in Wevers et al. (2017); Mockler et al. (2019); Zhou et al. (2021); Wong et al. (2022). However, variations of accretion flows have apparent effects on the dynamical structures of BLRs if the BLRs clouds are tightly related to TDEs, such as the detailed abnormal variabilities of broad emission lines in the TDE candidate ASASSN-14li in Holoien et al. (2016): strong emissions leading to wider line widths of broad H\(\alpha\), which contradict the results expected from the virialization assumptions applied to BLRs clouds. More recently, Zhang et al. (2019) have measured a stellar velocity dispersion of about \(81\pm 27\mathrm{km/s}\) in SDSS J0159 through the absorption features around 4000Å, reported that the M-sigma relation (Ferrarese and Merritt, 2000; Gebhardt et al., 2000; Kormendy and Ho, 2013; Bennert et al., 2015) determined central BH mass is about two magnitudes smaller than the virial BH mass in SDSS J0159, and provided an interesting clue to detect TDE candidates through outliers in the space of virial BH masses versus stellar velocity dispersions.
Moreover, Charlton et al. (2019) have shown that the total stellar mass of SDSS J0159 is about \(4.7\times 10^{10}\mathrm{M}_{\odot}\), indicating a central BH mass of about \(6\times 10^{6}\mathrm{M}_{\odot}\) (the value accepted in Wong et al. (2022)) in SDSS J0159 through the correlation between central BH mass and total stellar mass as discussed in Haring and Rix (2004); Sani et al. (2011); Kormendy and Ho (2013); Reines and Volonteri (2015), roughly consistent with the BH mass estimated by the M-sigma relation reported in our previous paper Zhang et al. (2019). Therefore, besides the virial BH mass through the virialization assumption applied to BLRs clouds and the BH masses through the M-sigma relation and through the total stellar mass, a BH mass of SDSS J0159 determined through another independent method will be very interesting and meaningful.
More recently, Guillochon et al. (2014); Mockler et al. (2019); Ryu et al. (2020); Zhou et al. (2021) have shown that the long-term TDE model expected time-dependent variability properties can be well applied to estimate central BH masses of TDE candidates. However, SDSS J0159 is not discussed in Mockler et al. (2019); Ryu et al. (2020); Zhou et al. (2021), probably due to lack of information on the peak intensities of the light curves of SDSS J0159 and/or other unknown reasons. Therefore, in this manuscript, the central BH mass of SDSS J0159 is estimated by applying the theoretical TDE model to describe the long-term optical variabilities. It is very interesting to check whether the reported virial BH mass or the M-sigma relation determined BH mass is consistent with the TDE model determined BH mass. The manuscript is organized as follows. In Section 2, we present spectroscopic features of SDSS J0159, in order to re-confirm the low stellar velocity dispersion and the low total stellar mass. Section 3 shows our methods with a TDE model applied to describe the observed multi-band light curves. Then, the main results and necessary discussions are shown in Section 4. Section 5 shows simulating results to check whether the long-term variabilities are tightly related to a central TDE in SDSS J0159. Section 6 gives our main summary and final conclusions. In the manuscript, the cosmological parameters \(H_{0}=70\mathrm{km\cdot s^{-1}Mpc^{-1}}\), \(\Omega_{\Lambda}=0.7\) and \(\Omega_{m}=0.3\) have been adopted.
## 2 Spectroscopic Results of SDSS J0159
In Zhang et al. (2019), the stellar velocity dispersion about \(81\pm 27\mathrm{km/s}\) of SDSS J0159 has been measured through absorption features around 4000A in the
SDSS spectrum with plate-mjd-fiberid=3609-55201-0524, which includes apparent host galaxy contributions, based on the 39 simple stellar population templates from Bruzual & Charlot (2003). In this section, the SSP (Simple Stellar Population) method discussed in Bruzual & Charlot (1993); Kauffmann et al. (2003b); Cappellari & Emsellem (2004); Cid Fernandes et al. (2005, 2013); Zhang (2014); Lopez Fernandez et al. (2016); Cappellari (2017); Werle (2019) is re-applied, with a much larger set of stellar templates, mainly to determine the total stellar mass in SDSS J0159, in order to check whether the total stellar mass is low, which will provide further clues on the central BH mass. In this section, the spectrum with plate-mjd-fiberid=3609-55201-0524 is mainly considered and is collected from the eBOSS (Extended Baryon Oscillation Spectroscopic Survey) (Ahumada et al., 2020; Abdurro'uf et al., 2022), due to the following main reason: the eBOSS spectrum, with apparent stellar absorption features of SDSS J0159 in the Type 1.9 AGN state, was observed at late times of the TDE-expected flare, with few effects of the TDE on the spectroscopic features.
Due to the weak but apparent broad H\(\alpha\) in SDSS J0159 observed in mjd=55201, contributions from central AGN continuum emissions should be considered. Here, 350 SSP templates \(S_{lib}\) are collected from the MILES (Medium resolution INT Library of Empirical Spectra) stellar library (Falcon-Barroso et al., 2011; Knowles et al., 2021) with 50 stellar ages from 0.06Gyrs to 17.78Gyrs and with 7 metallicities from -2.32 to 0.22, and the simple SSP method is re-applied to determine the host galaxy stellar lights and the central AGN continuum emissions, through the following model function
\[S_{g}\ =\ A\ \otimes\ S_{lib,\sigma,V_{s}}\ +\ \alpha\times\lambda^{\beta} \tag{1}\]
with \(S_{g}\) as the SDSS spectrum of SDSS J0159, \(S_{lib,\sigma,V_{s}}\) as the broadened and shifted stellar templates with broadening velocity \(\sigma\) and shifting velocity \(V_{s}\), \(\alpha\times\lambda^{\beta}\) as the AGN continuum emissions. Then, through the Levenberg-Marquardt least-squares minimization technique (the known MPFIT package), with emission lines being masked out, the weights \(A\) and the power law continuum emissions can be well determined. Fig. 1 shows the best descriptions and corresponding residuals to the SDSS spectrum with rest wavelength range from 3650A to 7700A by the model function above, with \(\chi^{2}\sim 0.93\) (summed squared residuals divided by degree of freedom) and with residuals calculated by SDSS spectrum minus the best descriptions.
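For illustration, a heavily simplified numpy sketch of the decomposition in Equation (1) is shown below: it fits the masked spectrum as a linear combination of stellar templates plus a power-law continuum by scanning a grid of \(\beta\) values and solving a linear least-squares problem for each. The broadening and shifting of the templates, the emission-line masking and the MPFIT machinery used in the actual analysis are omitted, and all function names and toy inputs are ours.

```python
import numpy as np

def fit_host_plus_powerlaw(lam, spec, templates, betas=np.linspace(-3, 1, 41)):
    """Sketch of Eq. (1): spectrum ~ sum_k A_k * template_k + alpha * lam**beta.

    lam       : (n,) wavelengths
    spec      : (n,) observed (emission-line-masked) spectrum
    templates : (m, n) stellar templates sampled on lam
    """
    best = None
    for beta in betas:
        design = np.vstack([templates, lam ** beta]).T        # (n, m+1)
        coef, *_ = np.linalg.lstsq(design, spec, rcond=None)
        chi2 = np.sum((design @ coef - spec) ** 2)
        if best is None or chi2 < best[0]:
            best = (chi2, beta, coef)
    chi2, beta, coef = best
    return coef[:-1], coef[-1], beta, chi2                    # weights A, alpha, beta

# toy usage with two fake "templates" and a known power-law continuum
lam = np.linspace(3650.0, 7700.0, 500)
templates = np.vstack([np.ones_like(lam), (lam / 5000.0) ** -0.5])
spec = 2.0 * templates[0] + 0.5 * templates[1] + 0.1 * (lam / 5000.0) ** -1.5
A, alpha, beta, chi2 = fit_host_plus_powerlaw(lam, spec, templates)
print(A, alpha, beta, chi2)
```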
Based on the best descriptions, total stellar mass is determined to be about
\[M_{tot}\ \sim\ \sum\ A\times 4\pi\ Dis^{2}/Unit/L_{\odot}\sim 7.11\times 10^{10}\mathrm{M}_{\odot} \tag{2}\]
with \(L_{\odot}=3.826\times 10^{33}\mathrm{erg/s}\) as the solar luminosity, and \(Dis=1623Mpc\) as the distance between SDSS J0159 and the earth, and \(Unit=10^{-17}\mathrm{erg/s/cm^{2}/\AA}\) as the emission intensity unit of the SDSS spectrum. The calculated total stellar mass in SDSS J0159 is roughly consistent with the reported \(4.7^{+8.0}_{-3.9}\times 10^{10}\mathrm{M}_{\odot}\) in Charlton et al. (2019) and the reported \(2.3^{+0.7}_{-0.1}\times 10^{10}\mathrm{M}_{\odot}\) in Graur et al. (2018), indicating lower central BH mass in SDSS J0159.
Therefore, the low total stellar mass in SDSS J0159 can be well confirmed, and it leads to an expected central BH mass lower than the virial BH mass in SDSS J0159. Then, besides the low total stellar mass in SDSS J0159, it is interesting to check whether the TDE model also leads to a central BH mass different from the virial BH mass in SDSS J0159.
## 3 Main Method and Fitting Procedure to Describe the Light Curves
Similar to what we have recently done in Zhang (2022) to describe the X-ray variabilities in the TDE candidate _Swift_ J2058.4+0516 with a relativistic jet, the following four steps are applied to describe the long-term optical \(ugriz\)-band variabilities of SDSS J0159. Similar procedures have also
Figure 1: The SDSS spectrum (solid dark green line) of SDSS J0159 and the best descriptions (solid red line) to the spectrum with the emission lines being masked out by the SSP method with applications of 350 stellar templates. In top region, as shown legend in top-left corner, solid purple line shows the SSP method determined stellar lights, and solid blue line shows the determined power law AGN continuum emissions, vertical red lines from left to right mark the following emission features masked out, including [O ii]\(\lambda 3727\AA\), H\(\alpha\), H\(\gamma\), [Ne iii]\(\lambda 3869\AA\), He ii\(\lambda 3891\AA\), Calcium K line, [Ne iii]\(\lambda 3968\AA\), Calcium H line, [S ii]\(\lambda 4070\AA\), H\(\delta\), H\(\gamma\), [O iii]\(\lambda 4364\AA\), He i\(\lambda 5877\AA\) and [O i]\(\lambda 6300,6363\AA\), respectively, and the area filled by red lines around 5000Å shows the region masked out including the emission features of probable He ii, broad and narrow H\(\beta\) and [O iii] doublet, and the area filled by red lines around 6550Å shows the region masked out including the emission features of broad and narrow H\(\alpha\), [N ii] and [S ii] doublets. Bottom region shows the residuals calculated by SDSS spectrum minus sum of the stellar lights and the power law continuum emissions.
been applied in Zhang (2022b) to discuss TDE expected long-term variabilities in the high redshift quasar SDSS J014124+010306 and in Zhang (2022c) to discuss TDE expected long-term variabilities of broad H\(\alpha\) line luminosity in the known changing-look AGN NGC 1097.
First, standard templates of viscous-delayed accretion rates in TDEs are created. Based on the discussed \(dM/dE\) provided in Guillochon et al. (2014, 2018); Mockler et al. (2019) (the fundamental elements in the public codes TDEFIT and MOSFIT), templates of the fallback material rate \(\dot{M}_{fbt}=dM/dE~{}\times~{}dE/dt\) can be calculated with \(dE/dt\sim\frac{(2~{}\pi~{}G~{}M_{\rm BH})^{2/3}}{3~{}t^{5/3}}\), for the standard case with central BH mass \(M_{\rm BH}=10^{6}\)M\({}_{\odot}\) and a disrupted main-sequence star of \(M_{*}=1\)M\({}_{\odot}\), and with a grid of the listed impact parameters \(\beta_{t}\) in Guillochon & Ramirez-Ruiz (2013). Considering the viscous delay effects as discussed in Guillochon & Ramirez-Ruiz (2013); Mockler et al. (2019), parameterized by the viscous timescale \(T_{vis}\), templates of viscous-delayed accretion rates \(\dot{M}_{at}\) can be determined by
\[\dot{M}_{at}~{}=~{}\frac{exp(-t/T_{vis})}{T_{vis}}\int_{0}^{t}exp(t^{\prime}/T_ {vis})\dot{M}_{fbt}dt^{\prime} \tag{3}\]
. Here, a grid of 31 log(\(T_{vis,~{}t}\)/years) values ranging from -3 to 0 is applied to create templates \(\dot{M}_{at}\) for each impact parameter. The final templates of \(\dot{M}_{at}\) include 736 (640) time-dependent viscous-delayed accretion rates for 31 different \(T_{vis}\) of each of the 23 (20) impact parameters for the main-sequence star with polytropic index \(\gamma\) of 4/3 (5/3).
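As an illustration of Equation (3), the following numpy sketch convolves a toy fallback-rate curve with the exponential viscous-delay kernel; the time grid, the toy fallback curve and the function names are illustrative only.

```python
import numpy as np

def trapezoid(y, x):
    """Simple trapezoidal integration helper."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def viscous_delay(t, mdot_fb, t_vis):
    """Sketch of Eq. (3): exponentially smooth the fallback rate with
    viscous timescale t_vis (same units as t)."""
    mdot_a = np.zeros_like(mdot_fb)
    for k in range(1, len(t)):
        kernel = np.exp((t[:k + 1] - t[k]) / t_vis)   # exp(-(t - t')/t_vis)
        mdot_a[k] = trapezoid(kernel * mdot_fb[:k + 1], t[:k + 1]) / t_vis
    return mdot_a

# toy fallback curve: sharp rise followed by the classical t^(-5/3) decay
t = np.linspace(0.01, 3.0, 300)                       # years
mdot_fb = np.where(t < 0.1, t / 0.1, (t / 0.1) ** (-5.0 / 3.0))
print(viscous_delay(t, mdot_fb, t_vis=0.1)[:5])
```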
Second, for common TDE cases with model parameters of \(\beta\) and \(T_{vis}\) different from the list values in \(\beta_{t}\) and in \(T_{vis,~{}t}\), the actual viscous-delayed accretion rates \(\dot{M}_{a}\) are created by the following two linear interpolations. Assuming that \(\beta_{1}\), \(\beta_{2}\) in the \(\beta_{t}\) are the two values nearer to the input \(\beta\), and that \(T_{vis1}\), \(T_{vis2}\) in the \(T_{vis,~{}t}\) are the two values nearer to the input \(T_{vis}\), the first linear interpolation is applied to find the viscous-delayed accretion rates with input \(T_{vis}\) but with \(\beta=\beta_{1}\) and \(\beta=\beta_{2}\) by
\[\begin{split}\dot{M}_{a}(T_{vis},~{}\beta_{1})=\dot{M}_{at}(T_{vis1 },~{}\beta_{1})+\\ \frac{T_{vis}-T_{vis1}}{T_{vis2}-T_{vis1}}(\dot{M}_{at}(T_{vis2},~{ }\beta_{1})-\dot{M}_{at}(T_{vis1},\beta_{1}))\\ \dot{M}_{a}(T_{vis},~{}\beta_{2})=\dot{M}_{at}(T_{vis1},~{}\beta_{2 })+\\ \frac{T_{vis}-T_{vis1}}{T_{vis2}-T_{vis1}}(\dot{M}_{at}(T_{vis2},~{ }\beta_{2})-\dot{M}_{at}(T_{vis1},~{}\beta_{2}))\end{split} \tag{4}\]
. Then, the second linear interpolation is applied to find the viscous-delayed accretion rates with input \(T_{vis}\) and with input \(\beta\) by
\[\begin{split}\dot{M}_{a}(T_{vis},~{}\beta)=\dot{M}_{a}(T_{vis},~{ }\beta_{1})+\\ \frac{\beta-\beta_{1}}{\beta_{2}-\beta_{1}}(\dot{M}_{a}(T_{vis},~{ }\beta_{2})-\dot{M}_{a}(T_{vis},~{}\beta_{1}))\end{split} \tag{5}\]
. Applications of the linear interpolations can save about one tenth of the running time for the fitting procedure to describe the observed long-term light curves, comparing with the running time for the fitting procedure considering the integral equation (3) to determine the \(\dot{M}_{a}(T_{vis},~{}\beta)\).
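The two linear interpolations of Equations (4) and (5) can be sketched as below; the template grid is represented here as a small dictionary of toy arrays, which stands in for the pre-computed \(\dot{M}_{at}\) templates.

```python
import numpy as np

def interp_template(t_vis, beta, grid, t_vis1, t_vis2, beta1, beta2):
    """Sketch of Eqs. (4)-(5): interpolate the viscous-delayed accretion-rate
    templates, first in T_vis, then in beta.

    grid : dict keyed by (t_vis_grid_value, beta_grid_value) -> rate array
    """
    def lin(x, x1, x2, y1, y2):
        return y1 + (x - x1) / (x2 - x1) * (y2 - y1)

    m_b1 = lin(t_vis, t_vis1, t_vis2, grid[(t_vis1, beta1)], grid[(t_vis2, beta1)])
    m_b2 = lin(t_vis, t_vis1, t_vis2, grid[(t_vis1, beta2)], grid[(t_vis2, beta2)])
    return lin(beta, beta1, beta2, m_b1, m_b2)

# toy 2x2 grid of neighbouring templates (each a short rate array)
grid = {(0.1, 0.9): np.array([1.0, 2.0]), (0.2, 0.9): np.array([2.0, 3.0]),
        (0.1, 1.1): np.array([1.5, 2.5]), (0.2, 1.1): np.array([2.5, 3.5])}
print(interp_template(0.15, 1.0, grid, 0.1, 0.2, 0.9, 1.1))   # -> [1.75 2.75]
```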
Third, for a common TDE case with \(M_{\rm BH}\) and \(M_{*}\) different from \(10^{6}\)M\({}_{\odot}\) and \(1\)M\({}_{\odot}\), the actual viscous-delayed accretion rates \(\dot{M}\) and the corresponding time information in observer frame are created by the following scaling relations as shown in Guillochon et al. (2014); Mockler et al. (2019),
\[\begin{split}\dot{M}=M_{\rm BH,~{}6}^{-0.5}~{}\times~{}M_{\star} ^{2}~{}\times~{}R_{\star}^{-1.5}~{}\times~{}\dot{M}_{a}(T_{vis},~{}\beta)\\ t_{m}=(1+z)\times M_{\rm BH,~{}6}^{0.5}~{}\times~{}M_{\star}^{-1} \times R_{\star}^{1.5}~{}\times~{}t_{a}(T_{vis},~{}\beta)\end{split} \tag{6}\]
, where \(M_{\rm BH,~{}6}\), \(M_{\star}\), \(R_{\star}\) and \(z\) represent central BH mass in unit of \(10^{6}\)M\({}_{\odot}\), stellar mass and radius in unit of M\({}_{\odot}\) and R\({}_{\odot}\), and redshift of host galaxy of a TDE, respectively. And the mass-radius relation well discussed in Tout et al. (1996) is accepted for main-sequence stars.
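A direct transcription of the scaling relations in Equation (6) is sketched below; the toy parameter values are illustrative, and the mass-radius relation of Tout et al. (1996) is not reproduced here (the stellar radius is passed in directly).

```python
import numpy as np

def scale_template(mdot_a, t_a, m_bh6, m_star, r_star, z):
    """Sketch of Eq. (6): rescale the standard template (M_BH = 1e6 Msun,
    M_star = 1 Msun, R_star = 1 Rsun) to arbitrary parameters.

    mdot_a, t_a : template accretion rates and (rest-frame) times
    m_bh6       : BH mass in units of 1e6 Msun
    m_star, r_star : stellar mass [Msun] and radius [Rsun]
    """
    mdot = m_bh6 ** -0.5 * m_star ** 2 * r_star ** -1.5 * np.asarray(mdot_a)
    t_obs = (1.0 + z) * m_bh6 ** 0.5 / m_star * r_star ** 1.5 * np.asarray(t_a)
    return mdot, t_obs

# toy usage with SDSS J0159-like parameters (values illustrative only)
mdot, t_obs = scale_template([1.0, 0.5], [0.1, 0.5],
                             m_bh6=4.5, m_star=1.0, r_star=1.0, z=0.312)
print(mdot, t_obs)
```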
Fourth, based on the calculated time-dependent templates of accretion rate \(\dot{M}(t)\), the time dependent emission spectrum \(f_{A}(t)\) can be calculated through the modeled simple black-body photosphere model discussed in Guillochon & Ramirez-Ruiz (2013); Mockler et al. (2019)
\[\begin{split} f_{A}(t)~{}=~{}\frac{2\pi~{}hc^{2}}{\lambda^{5}}\frac{1}{exp(\frac{hc}{\lambda~{}k~{}T_{p}(t)})-1}(\frac{R_{p}(t)}{Dis})^{2}\\ T_{p}(t)~{}=~{}(\frac{\eta~{}\dot{M}(t)c^{2}}{4\pi\sigma_{SB}R_{p}^{2}(t)})^{1/4}\\ R_{p}(t)~{}=~{}R_{0}~{}a_{p}~{}(\frac{\eta~{}\dot{M}(t)c^{2}}{3\times 10^{38}M_{BH}})^{l_{p}}\\ a_{p}~{}=~{}(GM_{BH}\times\frac{t_{p}}{\pi})^{1/3}\end{split} \tag{7}\]
with \(Dis\) as the distance to the earth calculated by redshift, \(k\) as the Boltzmann constant, \(T_{p}(t)\) and \(R_{p}(t)\) as the time-dependent effective temperature and radius of the photosphere, respectively, \(\eta\) as the energy transfer efficiency smaller than 0.4, \(\sigma_{SB}\) as the Stefan-Boltzmann constant, \(t_{p}\) as the time information of the peak accretion rate. Then, based on the calculated time dependent \(f_{A}(t)\) in observer frame and the wavelength ranges \(\lambda_{u,~{}g,~{}r,~{}i,~{}z}\) of SDSS \(ugriz\) filters, the SDSS \(ugriz\)-band time dependent luminosities as discussed in Merloni et al. (2015) can be calculated by
\[L_{u,~{}g,~{}r,~{}i,~{}z}(t)~{}=~{}\int_{\lambda_{u,~{}g,~{}r,~{}i,~{}z}}f_{A}(t) d\lambda~{}\times~{}4~{}\pi~{}\times~{}Dis^{2} \tag{8}\]
with \(Dis\) as the distance to the earth calculated by redshift. Here, not magnitudes but luminosities are calculated, mainly because the collected light curves of SDSS J0159 are the time dependent \(ugriz\)-band luminosities.
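To illustrate Equations (7) and (8) for a single epoch, the following sketch evaluates the black-body photosphere spectrum and integrates it over a band wavelength range; for brevity the photosphere radius is passed in directly instead of being computed from \(R_{0}\), \(a_{p}\) and \(l_{p}\), and all numerical inputs are toy values in CGS units.

```python
import numpy as np

# CGS constants
H = 6.626e-27; C = 2.998e10; KB = 1.381e-16; SIGMA_SB = 5.670e-5

def trapezoid(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def band_luminosity(mdot, eta, r_phot, lam_band):
    """Sketch of Eqs. (7)-(8): a black-body photosphere of radius r_phot [cm]
    heated by the accretion luminosity eta*mdot*c^2, integrated over the
    band wavelength range lam_band [cm]; returns the band luminosity [erg/s]."""
    l_acc = eta * mdot * C ** 2
    t_p = (l_acc / (4.0 * np.pi * SIGMA_SB * r_phot ** 2)) ** 0.25
    lam = np.linspace(lam_band[0], lam_band[1], 200)
    # surface flux density of the photosphere, pi * B_lambda(T_p)
    f_lam = (2.0 * np.pi * H * C ** 2 / lam ** 5 /
             (np.exp(H * C / (lam * KB * t_p)) - 1.0))
    # band luminosity = surface flux integrated over lambda, times 4*pi*R_p^2
    return trapezoid(f_lam, lam) * 4.0 * np.pi * r_phot ** 2

# toy numbers: mdot ~ 1e24 g/s, eta = 0.1, R_p ~ 1e15 cm,
# and an SDSS g-band-like range of roughly 4000-5500 Angstrom
print("%.3e erg/s" % band_luminosity(1e24, 0.1, 1e15, (4000e-8, 5500e-8)))
```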
Finally, the theoretical TDE model expected time dependent optical light curves \(L_{(u,~{}g,~{}r,~{}i,~{}z)}(t)\) can be described
by the following seven model parameters: the central BH mass \(\log(M_{\rm BH}/10^{6}M_{\odot})\), the stellar mass \(\log(M_{\star})\) (with the corresponding stellar radius \(R_{\star}\) calculated by the mass-radius relation), the energy transfer efficiency \(\log(\eta)\), the impact parameter \(\log(\beta)\), the viscous timescale \(\log(T_{vis})\), and the parameters \(l_{p}\) and \(R_{0}\) related to the black-body photosphere model. Meanwhile, the redshift 0.312 of SDSS J0159 is accepted. Then, through the well-known maximum likelihood method combined with the Markov Chain Monte Carlo (MCMC) technique (Foreman-Mackey et al., 2013), the \(ugriz\)-band light curves of SDSS J0159 can be well described, with the accepted prior uniform distributions and starting values of the model parameters listed in Table 1. Meanwhile, the available BH masses and stellar masses are the ones leading the determined tidal disruption radius \(R_{\rm TDE}\),
\[\frac{R_{\rm TDE}}{R_{\rm s}}=5.06\times(M_{\star})^{-1/3}(\frac{M_{\rm BH,\ 6}}{10})^{-2/3}\times R_{\star}>1 \tag{9}\]
, is larger than the event horizon of the central BH (\(R_{\rm s}=2GM_{\rm BH}/c^{2}\)).
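A minimal sketch of how such a prior, including the \(R_{\rm TDE}>R_{\rm s}\) requirement of equation (9), might be encoded for an emcee-style sampler is given below; the parameter ordering and the simple power-law mass-radius placeholder are assumptions, not the Tout et al. (1996) relation actually used. Such a log-prior could then be combined with a band-luminosity likelihood inside emcee's EnsembleSampler.

```python
import numpy as np

# prior bounds from Table 1 (gamma = 4/3 case); the parameter ordering is an assumption
BOUNDS = [(-3, 3), (-2, 1.7), (-0.22, 0.6), (-3, 0), (-3, -0.4), (-3, 3), (-3, 0.6)]

def log_prior(theta):
    """Uniform priors plus the R_TDE > R_s requirement of Eq. (9);
    the power-law mass-radius relation below is only a placeholder."""
    if not all(lo < value < hi for value, (lo, hi) in zip(theta, BOUNDS)):
        return -np.inf
    logM6, logMstar = theta[0], theta[1]
    M6, Mstar = 10.0 ** logM6, 10.0 ** logMstar
    Rstar = Mstar ** 0.8                               # placeholder, not Tout et al. (1996)
    r_ratio = 5.06 * Mstar ** (-1.0 / 3.0) * (M6 / 10.0) ** (-2.0 / 3.0) * Rstar
    return 0.0 if r_ratio > 1.0 else -np.inf
```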
## 4 Main Results and Necessary Discussions
The observed light curves (the time-dependent luminosities) in the SDSS \(ugriz\) bands for SDSS J0159 are shown in Fig. 2, with the data points directly collected from Table 2 in Merloni et al. (2015) after subtraction of the host galaxy contributions. This is the main reason why the time-dependent luminosities rather than the apparent magnitudes are collected: the host galaxy contributions have been cleanly removed from the time-dependent luminosities. Then, based on the method and fitting procedure discussed in the section above, the observed \(ugriz\)-band light curves of SDSS J0159 are simultaneously described by the theoretical TDE model with seven model parameters. Fig. 2 shows the best fitting results and the corresponding confidence bands, after considering the uncertainties of the model parameters of the theoretical TDE model.
It is clear that, apart from the \(z\)-band light curve being statistically lower than the values expected by the theoretical TDE model, the \(ugri\)-band light curves can be well described by the theoretical TDE model. As discussed in Guillochon & Ramirez-Ruiz (2013); Mockler et al. (2019), the simple black-body photosphere model can be well applied to describe optical band variabilities; however, considering the probable additional contributions of NIR emission from dust clouds, the simple black-body photosphere model is not preferred to describe the \(z\)-band variabilities. Certainly, a theoretical TDE model expected \(z\)-band light curve simply strengthened by a factor of 2 (dot-dashed lines in the last panel of Fig. 2) can be well applied to describe the observed \(z\)-band light curve. Furthermore, as discussed in Lu et al. (2016); Jiang et al. (2017, 2021), there should be apparent IR contributions to the \(z\)-band light curve of SDSS J0159 at \(z=0.312\), considering that radiation photons from the central TDE are absorbed by dust grains and then re-radiated in the infrared band, similar to the discussions in Guillochon & Ramirez-Ruiz (2013); Mockler et al. (2019). In other words, if the additional contributions were removed, the \(z\)-band light curve would obey the same TDE expected variability properties as those in the \(ugri\) bands.
Fig. 3 shows the MCMC technique determined two dimensional posterior distributions in contours of the seven model parameters. The accepted values of the model parameters are listed in Table 1. The central BH masses \(M_{BH}(10^{6}{\rm M}_{\odot})\) in SDSS J0159 are about \(\sim 5.7^{+0.8}_{-0.7}\) and \(\sim 3.3^{+0.5}_{-0.4}\) by applications of TDE model with \(\gamma=4/3\) and with \(\gamma=5/3\), respectively. Moreover, comparing with the reported parameters for the TDE candidates in Mockler et al. (2019), the reported values of the model parameters are common values among the TDE candidates.
Before proceeding further, it is necessary to consider whether a central BH with mass around \(10^{8}{\rm M}_{\odot}\) (the virial BH mass) can also lead to acceptable descriptions of the \(ugri\)-band light curves of SDSS J0159. Therefore, besides the best fitting results determined with the seven model parameters as free parameters with prior uniform distributions, the TDE model and the fitting procedure are run again, but with the central BH mass having a prior uniform distribution limited from \(1\times 10^{8}{\rm M}_{\odot}\) to \(5\times 10^{8}{\rm M}_{\odot}\) and the other model parameters having the same prior uniform distributions as applied above. Then, the TDE model determined descriptions are shown as dotted purple lines and dotted pink lines in the first four panels of Fig. 2, which are worse at explaining the rise-to-peak trend. However, if the first data point shown in Fig. 2 (MJD-50200=880) was not related to the expected central TDE (i.e., the starting time of the expected central TDE was later than MJD-50200=880), the determined descriptions with the accepted virial BH mass could also be well accepted. However, after considering the following two points, the data point at MJD-50200=880 is considered to be related to the central transient event. First, as listed in Table 2 in Merloni et al. (2015), the luminosity at MJD-50200=880 is at least about 2 times stronger than the data points at late times of the event with MJD-50200 later than 4000, indicating that the first data point at MJD-50200=880 being related to the central event is preferred in SDSS J0159. Second, the CSS (Drake et al., 2009) V-band photometric light curve at the quiescent state of SDSS J0159 is collected and shown in the bottom right panel of Fig. 2. The CSS V-band light curve has a standard deviation of about 0.12mag, leading to a corresponding luminosity variability of about 11.2%, quite smaller than the luminosity ratio
Figure 2: The first five panels show the theoretical TDE model determined best fitting results to the SDSS \(ugriz\)-band light curves (solid circles plus error bars in red) of SDSS J0159. In the panels for the \(ugri\)-band light curves, as shown by the legend in the top right corner, the solid blue line and dashed blue lines show the best fitting results and the corresponding confidence bands calculated from the uncertainties of the model parameters determined by applications of the TDE model with \(\gamma=4/3\), the solid green line and dashed green lines show the best fitting results and the corresponding confidence bands determined by applications of the TDE model with \(\gamma=5/3\), and the dotted line in purple and dotted line in pink show the TDE model determined descriptions of the light curve with the prior distribution of central BH mass larger than \(10^{8}\)M\({}_{\odot}\) in the TDE model with \(\gamma=4/3\) and \(\gamma=5/3\), respectively. In the bottom left panel for the \(z\)-band light curve, as shown by the legend in the top right corner, the solid blue line and solid green line show the results determined by applications of the TDE model with \(\gamma=4/3\) and with \(\gamma=5/3\), respectively, and the dot-dashed blue line and dot-dashed green line show the corresponding model results strengthened by a factor of 2, which can be applied to well describe the \(z\)-band light curve. The bottom right panel shows the CSS V-band light curve, with horizontal solid and dashed lines showing the mean value and corresponding 1RMS scatter.
Figure 3: MCMC technique determined two dimensional projections of the posterior distributions of the seven model parameters. In each left panel, contour represents the results for the model parameters determined by TDE model with \(\gamma=4/3\). In each right panel, contour represents the results for the model parameters determined by TDE model with \(\gamma=5/3\). In each panel, solid circle plus error bars in red show the final accepted values and the corresponding uncertainties of the model parameters.
of about 2 between the first data point at MJD-50200=880 and the data points at late times. Therefore, the model with a central BH mass larger than \(10^{8}\)M\({}_{\odot}\) is not preferred in SDSS J0159.
Based on the results shown in Fig. 2 and the posterior distributions of the model parameters shown in Fig. 3, two interesting points can be found. First, the observed light curves can be well described by the theoretical TDE model, strongly indicating a TDE around the central BH of SDSS J0159 as suggested in Merloni et al. (2015). Second, different model parameters can lead to the observed light curves being well described by TDE models; however, the different model parameters lead to much different properties of the features around the peak. In other words, once high quality light curves with apparent features around the peak intensities are observed, a unique TDE model can be determined. Fortunately, due to the similar BH masses in the TDE models with different polytropic indices, it is not necessary to further determine which TDE model, the one with polytropic index \(\gamma=4/3\) or the one with \(\gamma=5/3\), is preferred in SDSS J0159, because only the BH mass properties are mainly considered in the manuscript.
Moreover, based on the TDE model determined parameters (logarithmic values) listed in Table 1, the stellar mass of the tidally disrupted main-sequence star is about \(M_{\star}\sim 1.97\)M\({}_{\odot}\) (\(M_{\star}\sim 1.93\)M\({}_{\odot}\)) for \(\gamma=4/3\) (\(\gamma=5/3\)), which is outside the transition range between 0.3M\({}_{\odot}\) and 1M\({}_{\odot}\) for stars as discussed in Mockler et al. (2019). Therefore, we do not consider hybrid fallback functions that smoothly blend between the 4/3 and 5/3 polytropes, as suggested in Mockler et al. (2019). Meanwhile, as shown by the best-fitting results to the \(ugri\)-band light curves of SDSS J0159 in Fig. 2, applications of the polytropic index \(\gamma=4/3\) and \(\gamma=5/3\) individually can lead to well accepted descriptions of the observed light curves, re-indicating that it is not necessary to describe the long-term variabilities of SDSS J0159 with hybrid fallback functions that smoothly blend between the 4/3 and 5/3 polytropes.
Based on the theoretical TDE model determined BH mass \(M_{BH}\sim 4.5^{+1.3}_{-1.1}\times 10^{6}\)M\({}_{\odot}\) (the mean value of the two BH masses determined by the theoretical TDE models with different polytropic indices), we can check the dependence of BH mass on stellar velocity dispersion in SDSS J0159 in Fig. 4. Compared with the reported M-sigma relations for quiescent galaxies in McConnell & Ma (2013); Kormendy & Ho (2013); Savorgnan & Graham (2015) and for AGN in Ho & Kim (2014); Woo et al. (2015), the TDE model determined BH mass is well consistent with the M-sigma relation expected value in SDSS J0159. Moreover, considering the reported BH masses of TDE candidates in Zhou et al. (2021), shown as solid cyan circles in Fig. 4, the TDE model determined central BH mass is preferred in SDSS J0159, rather than the virial BH mass about two orders of magnitude higher than the expected value. Furthermore, Fig. 5 shows the correlation between the TDE model determined BH masses \(M_{BH}\) and the TDE model determined energy transfer efficiencies \(\eta\) of the TDE candidates in Mockler et al. (2019); Zhou et al. (2021). It is clear that SDSS J0159, as a radio quiet object, is common among the other TDE candidates in the space of \(M_{BH}\) versus \(\eta\), apart from the TDE candidate _swift_ J2058.4+0516 with a relativistic jet related to the central TDE (Cenko et al., 2012; Zhang, 2022). Therefore, the BH mass of about \(10^{6}\)M\({}_{\odot}\) in SDSS J0159 is reasonable enough.
The TDE model determined central BH mass, two orders of magnitude smaller than the virial BH mass in SDSS J0159, provides robust evidence to support that the BLRs in SDSS J0159 include strong contributions of accreting debris from the central TDE, leading to the dynamical properties of disk-like BLRs as discussed in Zhang (2021). The results in the manuscript reconfirm that outliers in the space of virial BH masses versus stellar velocity dispersions could be better candidates for TDEs.
Furthermore, simple discussions are given on the variability properties of broad H\(\alpha\), to support that the virialization assumption is not appropriate for estimating the central virial BH mass in SDSS J0159. Under the widely accepted virialization assumption for broad line AGN, as well discussed in Peterson et al. (2004); Greene & Ho (2005); Vestergaard & Peterson (2006); Kelly & Bechtold (2007); Rafiee & Hall (2011); Shen et al. (2011); Mejia-Restrepo et al. (2022):
\[M_{\rm BH}\ \propto\ V^{2}\ \times\ R_{\rm BLR}\ \propto\ V^{2}\ \times\ L^{0.5} \tag{10}\]
where \(V\), \(R_{\rm BLR}\) and \(L\) mean broad line width, distance of BLRs (broad emission line regions) to central BH and broad line
Table 1: Parameters of TDE models for SDSS J0159

| parameter | prior | p0 | value\({}^{a}\) | value\({}^{b}\) |
| --- | --- | --- | --- | --- |
| log(\(M_{\rm BH,\ 6}\)) | [-3, 3] | 0. | \(0.756\pm 0.055\) | \(0.524\pm 0.058\) |
| log(\(M_{\star}/M_{\odot}\)) | [-2, 1.7] | 0. | \(0.294\pm 0.057\) | \(0.285\pm 0.055\) |
| log(\(\beta\)) (4/3) | [-0.22, 0.6] | 0. | \(-0.019\pm 0.017\) | \(\cdots\) |
| log(\(\beta\)) (5/3) | [-0.3, 0.4] | 0. | \(\cdots\) | \(0.287\pm 0.042\) |
| log(\(T_{vis}\)) | [-3, 0] | -1. | \(-0.109\pm 0.032\) | \(-0.074\pm 0.005\) |
| log(\(\eta\)) | [-3, -0.4] | -1. | \(-0.845\pm 0.085\) | \(-1.986\pm 0.058\) |
| log(\(R_{0}\)) | [-3, 3] | 1. | \(-0.187\pm 0.076\) | \(-0.109\pm 0.075\) |
| log(\(l_{p}\)) | [-3, 0.6] | 0. | \(-0.202\pm 0.039\) | \(-0.101\pm 0.034\) |

Notes: The first column shows the applied model parameters. The second column shows the limitations of the prior uniform distribution of each model parameter. The third column ("p0") lists the starting value of each parameter. The fourth column (marked \({}^{a}\)) lists the values of the model parameters for the TDE model with \(\gamma=4/3\). The fifth column (marked \({}^{b}\)) lists the values for the TDE model with \(\gamma=5/3\).
luminosity (or continuum luminosity), after accepting the well known empirical R-L relation (\(R_{\rm BLR}~{}\propto~{}L^{0.5}\)) (Kaspi et al., 2005; Bentz et al., 2013). Then, for the broad H\(\alpha\) in the multi-epoch spectra of SDSS J0159, we will have
\[V_{1}^{4}~{}\times~{}L_{1}~{}=~{}V_{2}^{4}~{}\times~{}L_{2} \tag{11}\]
, where subscripts 1 and 2 denote parameters from two different epochs. For SDSS J0159, as discussed in LaMassa et al. (2015) and Merloni et al. (2015), the widths (full width at half maximum) of broad H\(\alpha\) are about (3408\(\pm\)110) km/s and (6167\(\pm\)280) km/s in 2000 and 2010, respectively, and the luminosities of broad H\(\alpha\) are about (329 \(\pm\) 11) \(\times\) 10\({}^{40}\) erg/s and (143 \(\pm\) 7) \(\times\) 10\({}^{40}\) erg/s in 2000 and 2010, respectively. Thus, the \(V^{4}~{}\times~{}L\) in 2000 and 2010 are
\[\begin{split}(\frac{V_{2000}}{1000\;{\rm km/s}})^{4}~{}\times~{} \frac{L_{2000}}{10^{42}\;{\rm erg/s}}~{}\sim~{}444_{68}^{77}\\ (\frac{V_{2010}}{1000\;{\rm km/s}})^{4}~{}\times~{}\frac{L_{2010}} {10^{42}\;{\rm erg/s}}~{}\sim~{}2068_{435}^{523}\end{split} \tag{12}\]
. The \(V^{4}~{}\times~{}L\) in 2010 is considerably larger than that in 2000. Furthermore, if considering serious obscuration of the broad Balmer lines in 2010 (no broad H\(\beta\) detected in 2010), the intrinsic
Figure 4: On the correlation between stellar velocity dispersions and BH masses. Solid five-point-star in purple shows the BH mass of SDSS J0159 determined by theoretical TDE model and the stellar velocity dispersion measured in the manuscript, solid five-point-star in red shows the virial BH mass of SDSS J0159 as discussed in LaMassa et al. (2015); Merloni et al. (2015); Zhang et al. (2019) and the velocity dispersion measured in Zhang et al. (2019). Dot-dashed lines in different colors listed in the legend in top left corner represent the M-sigma relations reported in the literature through the quiescent galaxies (QGs) and/or the RM (Reverberation Mapped) AGNs with classical/pseudobulges and/or the TDEs. And, solid circles in red, in blue and in cyan show the values for the 89 quiescent galaxies from Savorgnan & Graham (2015), the 29 RM AGNs from Woo et al. (2015) and the 12 TDEs from Zhou et al. (2021), respectively.
Figure 5: On the dependence of energy transfer efficiency \(\eta\) on central BH mass of the reported TDE candidates in Mockler et al. (2019); Zhou et al. (2021). Symbols in blue represent the results collected from Table 2 in Mockler et al. (2019), and symbols in dark green represent the results collected from Table 3 in Zhou et al. (2021). The solid circle plus error bars in purple show the results for the _swift_ J2058.4+0516 determined in Zhang (2022). The two solid circles in red show the determined results by model with \(\gamma=4/3\) and \(\gamma=5/3\) in the manuscript, respectively.
luminosity of broad H\(\alpha\) in 2010 should be even larger, leading to an even larger \(V^{4}\ \times\ L\) in 2010. These results indicate that some non-virial components are actually included in the broad line variabilities, supporting our results in the manuscript to some extent.
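A quick numerical check of equation (12) from the quoted line widths and luminosities:

```python
# quick check of Eq. (12): V^4 x L for broad H-alpha in the two epochs
for year, v_kms, L_1e42 in [(2000, 3408, 3.29), (2010, 6167, 1.43)]:
    value = (v_kms / 1000.0) ** 4 * L_1e42
    print(year, round(value))      # ~444 in 2000, ~2068 in 2010
```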
Before the end of the section, three additional points are noted. First and foremost, as shown in Fig. 2, there is a smaller flare around MJD-50200\(\sim\)2700 in SDSS J0159, which can be simply accepted as a re-brightened peak. Among the reported TDE candidates, ASASSN-15lh reported in Leloudas et al. (2016) is the known TDE candidate with a re-brightened peak in its long-term light curves. So far, the physical origin of re-brightened peaks in TDE candidates is still unclear. As discussed in Leloudas et al. (2016), circularization could be efficiently applied to explain a re-brightened peak in TDE expected light curves. Meanwhile, as discussed in Mandel & Levin (2015), a re-brightened peak could be expected when a binary star is tidally disrupted by a central BH. And as discussed in Coughlin & Armitage (2018), a re-brightened peak could be expected when a star is tidally disrupted by a central binary black hole system with an extreme mass ratio. Unfortunately, there is not enough information to determine which proposed mechanism is preferred to explain the re-brightened peak around MJD-50200\(\sim\)2700 in SDSS J0159. Further efforts in the near future are necessary to discuss the properties of the re-brightened peak in SDSS J0159, and there are no further discussions on it in the manuscript.
Besides, as shown and discussed for the reported optical TDE candidates in Mockler et al. (2019); Gezari (2021); van Velzen et al. (2021), time durations of about hundreds of days in the rest frame can be found at 10% of the peak of the TDE expected light curves, quite shorter than the corresponding time duration of about 1900 days at 10% of the peak of the light curves of SDSS J0159 in the rest frame (with redshift z=0.312 accepted). Actually, the longer time duration in SDSS J0159 can be naturally explained by the scaling relations in equation (6) with different BH mass and stellar mass. As an example, among the reported optical TDE candidates shown in Mockler et al. (2019), PTF09ge has a time duration of about \(T_{P}\sim 160\) days at 10% (2.5 magnitudes weaker than the peak magnitude) of the peak of its light curves in the rest frame. PTF09ge is selected mainly due to its clearer and smoother optical light curves with larger variability amplitudes shown in Fig. 1 in Mockler et al. (2019); selecting another TDE candidate would lead to similar results. Considering PTF09ge with the MOSFIT determined BH mass of about \(M_{BH,P}\sim 3.6\times 10^{6}\)M\({}_{\odot}\), stellar mass of about \(M_{\star,\ P}\sim 0.1\)M\({}_{\odot}\) (corresponding stellar radius of about \(R_{\star,\ P}\sim 0.12\)R\({}_{\odot}\)), \(\beta_{P}\sim 1.1\) and \(\gamma_{P}=5/3\) listed in Mockler et al. (2019), the expected time duration \(T_{S}\) of the TDE expected light curve in SDSS J0159 can be estimated as
\[T_{S}\ \sim\ T_{P}\ (\frac{M_{BH,S}}{M_{BH,\ P}})^{0.5}\ (\frac{M_{\star,\ S}}{M_{ \star,\ P}})^{-1}\ (\frac{R_{\star,\ S}}{R_{\star,\ P}})^{1.5}\ S_{\beta} \tag{13}\]
with \(S_{\beta}\) as the parameter accounting for the effects of the different \(\beta\) applied in PTF09ge (\(\beta_{P}\sim 1.1\)) and in SDSS J0159 (\(\beta_{S}\sim 1.94\)). Then, based on the determined parameters listed in Table 1 for SDSS J0159 with \(\gamma_{S}=5/3\), the central BH mass and stellar parameters are about \(M_{BH,S}\sim 3.4\times 10^{6}\)M\({}_{\odot}\) and \(M_{\star,\ S}\sim 1.93\)M\({}_{\odot}\) (corresponding stellar radius about \(R_{\star,\ S}\sim 1.51\)R\({}_{\odot}\)). Meanwhile, based on the light curves of the created standard templates of time-dependent viscous-delayed accretion rates with \(\gamma=5/3\), \(M_{BH}=10^{6}\)M\({}_{\odot}\) and \(M_{\star}=1\)M\({}_{\odot}\), the time duration at 10% of the peak of the light curve of the viscous-delayed accretion rate with \(\beta=1.94\) and \(\log(T_{vis})\sim-0.074\) (the values for SDSS J0159) is about 1280 days, about 4.6 times longer than the corresponding time duration of about 280 days with \(\beta=1.1\) and \(\log(T_{vis})\sim-1.4\) (the values for PTF09ge), leading to \(S_{\beta}\sim 4.6\). Then, based on the parameters for PTF09ge and for SDSS J0159, we have
\[T_{S}\ \sim\ 160\ (\frac{3.4}{3.6})^{0.5}\ (\frac{0.1}{1.93})\ (\frac{1.51}{0.12})^{1.5}\ 4.6 \sim\ 1700\text{days} \tag{14}\]
very similar to the time duration of 1900 days at 10% of the peak of the TDE expected flare in SDSS J0159. The results strongly indicate that the longer time duration of the TDE expected flare in SDSS J0159 is reasonable.
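A quick numerical check of equation (14):

```python
# quick check of Eq. (14): scaling PTF09ge's duration to the SDSS J0159 parameters
T_P, S_beta = 160.0, 4.6
T_S = T_P * (3.4 / 3.6) ** 0.5 * (0.1 / 1.93) * (1.51 / 0.12) ** 1.5 * S_beta
print(round(T_S))                  # ~1650 days, consistent with the ~1700 days quoted
```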
Last but not least, as discussed in Merloni et al. (2015), there are expected AGN activities based on narrow emission line flux ratios (see their Fig. 3). Therefore, the central TDE in SDSS J0159 is a central TDE in an AGN host galaxy, quite different from the vast majority of the reported optical TDE candidates in quiescent galaxies. Actually, there are so far only a handful of TDE candidates reported in AGN host galaxies. Blanchard et al. (2017) have reported a TDE candidate in a narrow line Seyfert 1 galaxy whose light curves can be roughly described by the theoretical TDE model. Yan & Xie (2018) have shown the TDE expected variability pattern in the low-luminosity AGN NGC 7213. Liu et al. (2020) have reported a TDE candidate in the AGN SDSS J0227. Frederick et al. (2021) have reported TDE candidates in narrow line Seyfert 1 galaxies. More recently, Zhang et al. (2022) have shown the TDE expected variability patterns in a narrow line Seyfert 1 galaxy. Meanwhile, Chan et al. (2019, 2020) have modeled TDE variabilities in AGN with a pre-existing accretion disc, and shown that an about 20-day-long plateau could be expected around central BHs with masses around \(10^{6-7}\)M\({}_{\odot}\). Therefore, considering the variability properties of the detected optical TDE candidates in AGN host galaxies reported in the literature, TDE expected variability patterns can also be expected in SDSS J0159. Even
considering the 20-day-long plateau feature from the simulation results in Chan et al. (2020), the plateau has few effects on the long-term variabilities of SDSS J0159, because the time duration of about 1900 days of the light curves of SDSS J0159 is much longer than 20 days. In short, the expected central TDE in SDSS J0159 is in an AGN host galaxy, but TDE described variability patterns can still be well expected in SDSS J0159.
## 5 Are the long-term variabilities of SDSS J0159 related to central AGN activities?
The results and discussions above are mainly based on the fundamental assumption that the long-term variabilities are related to a central TDE in SDSS J0159. Therefore, it is necessary and interesting to check whether AGN activities could instead describe the long-term variabilities in SDSS J0159. Here, in order to compare the variability properties of SDSS J0159 with those of the other normal quasars in the literature MacLeod et al. (2010), the photometric light curve rather than the luminosity light curve is mainly discussed in this section.
The well-known Continuous AutoRegressive (CAR) process and/or the improved Damped Random Walk (DRW) process can be applied to describe intrinsic AGN activities, as well discussed in Kelly, Bechtold & Siemiginowska (2009); Kozlowski et al. (2010); MacLeod et al. (2010); Zu et al. (2013, 2016); Zhang & Feng (2017); Moreno et al. (2019); Sheng, Ross & Nicholl (2022). Here, based on the DRW process, the collected photometric \(g\)-band light curve from the Stripe82 database (Bramich et al., 2008; Thanjavur et al., 2021) can be described by the public code JAVELIN (Just Another Vehicle for Estimating Lags In Nuclei) (Kozlowski et al., 2010; Zu et al., 2013), with the two process parameters of intrinsic characteristic variability amplitude and timescale, \(\sigma\) and \(\tau\). The best descriptions are shown in the left panel of Fig. 6, and the corresponding MCMC technique determined two dimensional posterior distributions of \(\sigma\) and \(\tau\) are shown in the right panel of Fig. 6, with the determined \(\log(\tau/days)\sim 2.92\pm 0.21\) and \(\log(\sigma/(mag/days^{0.5}))\sim-0.55\pm 0.10\), leading \(SF_{\infty}/mag\sim\sigma/(mag/days^{0.5})\times\sqrt{\tau/days}\) to be about \(\log(SF_{\infty}/mag)\sim 0.85\) in SDSS J0159. Collecting the photometric _uri_-band light curves leads to similar results. Meanwhile, as discussed in MacLeod et al. (2010) for the SDSS normal quasars in the Stripe82 database, normal SDSS quasars have mean values of \(\log(\tau/days)\sim 2.4\) and \(\log(SF_{\infty}/mag)\sim-0.7\). Therefore, in the space of \(\log(\tau)\) versus \(\log(SF_{\infty})\), SDSS J0159 is an outlier among the quasars, due to its \(\log(SF_{\infty}/mag)\) being about 1.5 dex larger than that of the normal quasars. In other words, although SDSS J0159 has light curves with longer time durations, it has unique variability properties, quite different from the normal SDSS quasars.
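A quick arithmetic check, under the assumption \(SF_{\infty}=\sigma\sqrt{\tau}\) as stated above:

```python
# check of log(SF_inf) = log(sigma) + 0.5*log(tau) from the quoted JAVELIN posteriors
log_tau, log_sigma = 2.92, -0.55
print(round(log_sigma + 0.5 * log_tau, 2))   # ~0.91, consistent within the quoted
                                             # uncertainties with the ~0.85 in the text
```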
Moreover, besides the discussions in this section showing the unique variability properties of SDSS J0159, two additional points can be made. First, the first data point at MJD-50200=880 is apparently 0.3 magnitudes brighter than the data points with MJD-50200 later than 4000, which can be applied as another piece of evidence to support that the first data point at MJD-50200=880 in Fig. 2 is not a data point similar to the other data points at late times, but a data point related to the central TDE, under the assumption of a TDE explaining the variabilities of SDSS J0159, similar to the results discussed in Section 4. Second, although SDSS J0159 has unique variability properties quite different from normal quasars, contributions of central AGN activities to the light curves cannot be totally ruled out, such as the weak AGN activities shown in the eBOSS spectrum of SDSS J0159. Therefore, it is necessary to discuss the effects of central AGN activities on our final results. Besides the fitting procedure discussed in Section 3, five additional model parameters are added to describe the AGN contributions to the light curves, i.e., equation (8) in Section 3 is re-written as
\[\begin{split} L_{u,~{}g,~{}r,~{}i,~{}z}(t)&=\int_{ \lambda_{u,~{}g,~{}r,~{}i,~{}z}}f_{\lambda}(t)d\lambda\ \times 4~{}\pi\ \times~{}Dis^{2}\\ &\quad+~{}L0_{u,~{}g,~{}r,~{}i,~{}z}\end{split} \tag{15}\]
with \(L0_{u,~{}g,~{}r,~{}i,~{}z}>0\) as the AGN contributions. Then, the theoretical TDE model with 12 model parameters is applied to re-describe the \(ugriz\)-band luminosity light curves shown in Fig. 2 through the Levenberg-Marquardt least-squares minimization technique, leading to the parameters \(L0_{u,~{}g,~{}r,~{}i,~{}z}\sim 0\) and similar BH masses as discussed in Section 4, because the data points at late times have luminosities around zero as shown in Table 2 in Merloni et al. (2015). If the parameters \(L0_{u,~{}g,~{}r,~{}i,~{}z}\) were not constant values but time-dependent, the corresponding fitting results to the \(ugr\)-band luminosity light curves would lead to stronger luminosities at data points earlier than MJD-50200=880, quite brighter than the data points from POSSII shown in Fig. 1 in Merloni et al. (2015). Therefore, even if there are weak AGN activities, they have few effects on our final results through applications of the theoretical TDE model in the manuscript.
## 6 Main summary and conclusions
Finally, we give our main summary and conclusions as follows.
* Host galaxy spectroscopic features are measured by the simple SSP method applied with 350 stellar templates, to confirm the low total stellar mass of SDSS J0159, through the whole spectroscopic absorption features within rest wavelength range from 3650A to 7700A.
* Theoretical TDE model can be well applied to describe the \(ugriz\)-band variabilities of SDSS J0159, leading the central BH mass to be about \(M_{BH}\sim 4.5^{+1.3}_{-1.1}\times 10^{6}\)M\({}_{\odot}\), two orders of magnitude smaller than the virial BH mass in SDSS J0159.
* Through CAR/DRW process applied to describe long-term light curves of SDSS J0159, comparing variability properties between SDSS J0159 and the normal SDSS quasars in Stripe82 database, SDSS J0159 is an outlier in the space of the process parameter of intrinsic variability timescale \(\tau\) versus intrinsic variability amplitude \(\sigma\), indicating SDSS J0159 has unique variability properties, quite different from the normal quasars.
* Theoretical TDE model with the model parameter of central BH mass limited to be higher than \(10^{8}\)M\({}_{\odot}\) cannot lead to reasonable descriptions of the SDSS \(ugriz\)-band variabilities of SDSS J0159, indicating that a central BH mass higher than \(10^{8}\)M\({}_{\odot}\) is not preferred in SDSS J0159.
* The TDE model determined central BH mass is well consistent with the M-sigma relation expected value through the measured stellar velocity dispersion in SDSS J0159.
* The outliers in the space of virial BH masses versus stellar velocity dispersions could be better candidates of TDEs.
## Acknowledgements
Zhang gratefully acknowledges the anonymous referee for giving us constructive comments and suggestions to greatly improve our paper. Zhang gratefully thanks the research funding support from GuangXi University and the grant support from NSFC-12173020. This manuscript has made use of the data from the SDSS projects. The SDSS-III web site is [http://www.sdss3.org/](http://www.sdss3.org/). SDSS-III is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS-III Collaboration. The paper has made use of the public code of TDEFIT ([https://github.com/guillochon/tdefit](https://github.com/guillochon/tdefit)) and MOSFIT ([https://github.com/guillochon/mosfit](https://github.com/guillochon/mosfit)), and use of the MPFIT package ([http://cow.physics.wisc.edu/~craigm/idl/idl.html](http://cow.physics.wisc.edu/~craigm/idl/idl.html)) written by Craig B. Markwardt, and use of the python emcee package ([https://pypi.org/project/emcee/](https://pypi.org/project/emcee/)). The paper has made use of the template spectra for a set of SSP models from the MILES [https://miles.iac.es](https://miles.iac.es).
|
2304.06048
|
RELS-DQN: A Robust and Efficient Local Search Framework for
Combinatorial Optimization
|
Combinatorial optimization (CO) aims to efficiently find the best solution to
NP-hard problems ranging from statistical physics to social media marketing. A
wide range of CO applications can benefit from local search methods because
they allow reversible action over greedy policies. Deep Q-learning (DQN) using
message-passing neural networks (MPNN) has shown promise in replicating the
local search behavior and obtaining comparable results to the local search
algorithms. However, the over-smoothing and the information loss during the
iterations of message passing limit its robustness across applications, and the
large message vectors result in memory inefficiency. Our paper introduces
RELS-DQN, a lightweight DQN framework that exhibits the local search behavior
while providing practical scalability. Using the RELS-DQN model trained on one
application, it can generalize to various applications by providing solution
values higher than or equal to both the local search algorithms and the
existing DQN models while remaining efficient in runtime and memory.
|
Yuanhang Shao, Tonmoy Dey, Nikola Vuckovic, Luke Van Popering, Alan Kuhnle
|
2023-04-11T18:01:49Z
|
http://arxiv.org/abs/2304.06048v1
|
# RELS-DQN: A Robust and Efficient Local Search Framework for Combinatorial Optimization
###### Abstract
Combinatorial optimization (CO) aims to efficiently find the best solution to NP-hard problems ranging from statistical physics to social media marketing. A wide range of CO applications can benefit from local search methods because they allow reversible action over greedy policies. Deep Q-learning (DQN) using message-passing neural networks (MPNN) has shown promise in replicating the local search behavior and obtaining comparable results to the local search algorithms. However, the over-smoothing and the information loss during the iterations of message passing limit its robustness across applications, and the large message vectors result in memory inefficiency. Our paper introduces RELS-DQN, a lightweight DQN framework that exhibits the local search behavior while providing practical scalability. Using the RELS-DQN model trained on one application, it can generalize to various applications by providing solution values higher than or equal to both the local search algorithms and the existing DQN models while remaining efficient in runtime and memory.
## I Introduction
Combinatorial optimization is a broad and challenging field with real-world applications ranging from traffic routing to recommendation engines. As these problems are often NP-hard [1, 2, 3, 4, 5], an efficient algorithm to find the best solution in all instances with feasible resources is unlikely to exist. Therefore, researchers have turned to designing heuristics [5, 6, 7, 8, 9] in addition to approximation algorithms [10, 11, 12, 13, 14, 15, 16] and enumeration [17, 18]. Among many well-known algorithms, the standard greedy algorithm (Greedy) [19] provides the optimal \((1-1/e)\)-approximation ratio for monotone submodular instances, but this theoretical guarantee does not hold for non-submodular functions [20]. The limitation of Greedy has led to the development of greedy local search techniques that provide a feasible solution for various applications. These techniques usually allow deletion and exchange operations after the maximal singleton is chosen greedily, and they have shown promising solution values [9, 10], but they require an additional \(O(n)\) queries to the objective function \(f\). Furthermore, heuristics are usually fast and effective in practice even though they have no theoretical guarantees and require trial and error over specific problems. In this paper, we focus on heuristically solving the cardinality-constrained maximization problem, which can be described as follows: Given a ground set \(\mathcal{N}\), a cardinality parameter \(k\), and a set of feasible solutions \(\mathcal{S}\subseteq 2^{\mathcal{N}}\), the goal is to return the solution \(V\in\mathcal{S}\) that maximizes the objective function \(f:2^{\mathcal{N}}\rightarrow\mathbb{R}\) subject to \(|V|\leq k\).
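For reference, a minimal sketch of the standard greedy baseline under a cardinality constraint (treating \(f\) as an inexpensive set-function oracle); this is not the proposed method, only the baseline it is compared against.

```python
def greedy(f, ground_set, k):
    """Standard greedy baseline for cardinality-constrained maximization (a sketch)."""
    V = set()
    for _ in range(k):
        candidates = [e for e in ground_set if e not in V]
        if not candidates:
            break
        best = max(candidates, key=lambda e: f(V | {e}) - f(V))
        if f(V | {best}) - f(V) <= 0:   # stop when no element adds positive gain
            break
        V.add(best)
    return V
```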
Recently, combinatorial optimization has seen a great deal of interest in the development of efficient heuristic algorithms [5, 6, 7, 8, 9]. Deep Q-learning (DQN) is trained on a wide range of combinatorial problem instances and obtains a feasible solution for various applications. Even though [6, 7, 8, 9, 21] provide high-quality solutions, only a select few can effectively perform the exploration behavior on the solution space, similar to local search [5, 9, 22]. These DQN-based models [5, 9, 22] primarily utilize graph neural networks (GNN), such as the graph convolution network (GCN) [22], the message-passing neural network (MPNN) [5, 7], and the graph attention network (GAN) [8]. However, more recent works have proven that GNNs inherently suffer from over-smoothing and information squashing [23, 24, 25, 26]. These issues can be more intense in unsupervised tasks due to the lack of supervision information [27]. In addition, the large message vectors restrict scalability because of memory overhead. In light of the local search algorithms' performance and the limitations of GNNs, we study the performance of a lightweight model directly using node features on cardinality-constrained maximization problems, and a natural question arises: _Is it possible to design a lightweight DQN model that can explore solution space like local search (LS) does and serve as a general-purpose algorithm for the combinatorial problem yet remain efficient in terms of runtime and memory consumption?_
_Contributions_
* To combine local search and heuristic exploration, we propose a general-purpose local search algorithm that heuristically explores the solution space, avoiding additional traversals for better solutions.
* We introduce a DQN framework, RELS-DQN (Robust, Efficient Local-Search), to mimic the behavior of the aforementioned algorithm
through reversible actions in a reinforcement learning (RL) environment. The agent uses a lightweight feed-forward network to heuristically explore the solution space and provide a feasible solution efficiently, based on weighting a reduced number of the state representation parameters in [5]. Combined with a novel reward shaping that considers both the objective value and the constraint, this assists the agent in learning to explore within the constraint limitation. As a result, the model generalizes to a diverse set of applications without additional application-specific training and reduces memory usage.
* Our approach is validated by conducting an empirical comparison between RELS-DQN, the MPNN-based local search model ECO-DQN [5], and the greedy local search algorithm (Greedy-LS) [10]. The comparison is carried out across four diverse combinatorial problems. Our results show that RELS-DQN performs as well as or better than ECO-DQN in terms of solution value and offers more efficient memory usage and runtime. Furthermore, our model offers an order-of-magnitude improvement in speed compared to Greedy-LS with a marginal difference in solution value.
## II Related work
The advancements in AI techniques introduced new heuristic approaches based on deep learning [28, 29, 32, 6, 8, 24, 30, 31, 33], including approaches that allow element
replacement, which is an inevitable step in the local search. The authors of [30] formulate a scheduling problem with sequential selection as a Markov Decision Process (MDP) and use RL to learn a parameterized policy with local search actions, employing a GNN-based encoder. The trained RL agent can learn the local search through acceptance, neighborhood selection, and perturbations, yielding additional iterations like local search algorithms. Another work related to our approach is [9], which uses reinforcement learning with MPNN to adopt an expensive swapping search of size \(O(n^{2})\). Overall, even though the results of [5, 7, 9, 22, 30] show a promising prospect of finding better solutions than greedy, the local search and MPNN methods still face challenges in solving combinatorial optimization problems. The proposed model RELS-DQN explores whether an RL agent can learn a heuristic exploration strategy without relying on message passing, and whether it can generalize to various cardinality-constrained combinatorial optimization problems.
## III Preliminaries
**RL Encoding:** The combinatorial optimization problem is defined in the standard RL framework as follows:
* States \(S\): is a vector of the environment representation which usually includes graph embeddings [5, 7]. In our model, the state is a sequence of elements on graph \(G\), and its embedding representation is a set of element-specific parameters \(\{x_{e_{1}},x_{e_{2}},\ldots,x_{e_{n}}\}\) (Section IV-A). It is easy to see that this representation of the state can be used across different applications as each parameter is weighted the same by the pre-trained network model.
* Actions \(a\): is an element \(e\) from \(\mathcal{N}\), which adds or removes this element from the solution set by flipping the incumbent state \(V_{\{e\}}\) (Section IV-A, the value is flipped to 1 if \(e\notin V\) and to 0 if \(e\in V\), where \(V\) is the partial solution of current state \(S\)), and updates other parameters in \(x_{e}\).
* Rewards \(R(S,a)\): is defined as the difference in the objective function after updating the state [7], which is \(R(S,a)=f(S^{\prime})-f(S)\), where \(S^{\prime}\) is the state after current state \(S\) taking action \(a\). In this case, the cumulative reward \(R\) at the terminal state is the objective function value. In our model, the reward is shaped to incorporate constraint in Section IV-C.
* Transition \((S,a,R,S^{\prime})\): uses a tuple to represent the process of moving from \(S\) to \(S^{\prime}\) given action \(a\).
**Double Q-learning:** Double Deep Q-network (Double DQN) [34] is an off-policy reinforcement learning algorithm that reduces the overestimation of DQN to improve the accuracy of value estimation. A target network with the same architecture as the original DQN is periodically updated to obtain the Q-value. The Q-value function of the target network is defined as follows:
\[Q^{target}\equiv R(S,a)+\gamma Q(S^{\prime},\operatorname*{arg\,max}_{a}Q(S^{ \prime},a;\theta),\theta^{target}), \tag{1}\]
where \(\theta\) is the online weights updated by \(S\), \(\theta^{target}\) is the target network weights updated by \(S^{\prime}\), and \(\gamma\in[0,1)\) is the discount factor.
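A minimal PyTorch-style sketch of the target in equation (1) is shown below; the tensor shapes and the terminal-state mask are assumptions not stated in the equation.

```python
import torch

def double_dqn_target(reward, q_online_next, q_target_next, done, gamma=0.99):
    """Sketch of the Double-DQN target in Eq. (1): the online network selects the
    next action, the target network evaluates it (shapes: [batch] and [batch, |N|])."""
    best_action = q_online_next.argmax(dim=1, keepdim=True)       # argmax_a Q(S', a; theta)
    q_eval = q_target_next.gather(1, best_action).squeeze(1)      # Q(S', a*; theta_target)
    return reward + gamma * (1.0 - done.float()) * q_eval         # no bootstrap at terminal states
```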
## IV Rels-Dqn
To start, we modify the local search algorithm [20] to allow it to explore unvisited nodes after a few iterations, instead of issuing extra queries after each selection of the maximum singleton. In Algorithm 1, we provide two parameters to balance the weight of exploration and greedy selection. For each node, we set up an element-wise initialized timer \(\mathcal{T}\) (line 4) to track whether the node has been visited yet. The \(2k\) iterations imply two stages of selection. In the first \(k\) iterations, the marginal gain of the objective function \(\Delta_{e}\) is
Fig. 1: Overview of RELS-DQN training: 1) We generate a random ER graph with fixed size and initialize the environment for each training episode, where \(t\) labels the iteration of timesteps; 2) Each element’s parameters (denoted \(x_{e}=\{V_{\{e\}},\Delta_{e},\mathcal{T}_{e},V_{k_{e}},\mathcal{L}^{*}\}\), Section IV-A) are updated and passed to the feed-forward network to calculate the Q-value, where the numbers below each layer are input/output vector dimension, and \(|\mathcal{N}|\) is the ground set size; 3) Based on the \(\epsilon\)-greedy action rate of training, the agent’s action \(A_{t}\) will either be randomly chosen (exploration) or through a greedy-policy (exploitation, \(A_{t}\leftarrow\operatorname*{arg\,max}_{i}\{Q_{i}\}\)); 4) Using action \(A_{t}\), get the reward \(R_{t}\) and the new state \(S_{t+1}\) to update the environment and the network (if training frequency is reached); 5) Every \(2k\) timesteps, we generate a new graph and repeat Steps (1) - (4) until the total timesteps \(t=\) 1 million.
selected greedily, as \(\mathcal{T}_{e}\) remains the same for \(e\notin V\). As for the second \(k\) iterations, the unvisited nodes have a larger weight on \(\mathcal{T}_{e}\) to encourage exploring new nodes instead of following the marginal gain. However, it is infeasible to find the best weight parameters that apply to various applications.
Next, we propose RELS-DQN inspired by [5]. Our model mimics the behavior of the general-purpose local search algorithm defined in Algorithm 1, and balances the greedy policy and exploration through a network model instead of weight parameters. RELS-DQN modified the environment observations in [5]. We provide element-wise defined parameters (Section IV-A) as the representation of the environment state. Our model uses a lightweight feed-forward network (Section IV-B) that provides the flexibility to learn an application. This network can apply to different applications while remaining efficient in terms of memory and runtime. As for the reward shaping (Section IV-C), we use the cardinality constraint \(k\) as part of reward shaping to enable exploration of the local optimal solution within the constraint limitation. Figure 1 illustrates the training framework of RELS-DQN, where it explores small randomly generated synthetic graphs to learn how to obtain solutions through local search behavior.
```
1: Input: evaluation oracle \(f:2^{\mathcal{N}}\rightarrow\mathbb{R}\), constraint \(k\), marginal gain influence weight \(\lambda_{\Delta}\), element age influence weight \(\lambda_{t}\)
2: Let \(V\leftarrow\emptyset\), \(V^{*}\leftarrow\emptyset\), \(t=0\)
3: \(f(e|V)=f(V\cup\{e\})-f(V)\) if \(e\notin V\), otherwise \(f(V\setminus\{e\})-f(V)\)
4: \(\mathcal{T}\leftarrow\{t_{e}=0\) ; \(e\in\mathcal{N}\}\)
5: for \(t=1\) to \(2k\) do
6:   \(\mathcal{T}\leftarrow\{t_{e}+1\) ; \(e\in\mathcal{N}\}\)
7:   \(\Delta\leftarrow\{f(e|V)\) : \(e\in\mathcal{N}\}\)
8:   \(x\leftarrow\arg\max_{e}\{\lambda_{\Delta}\cdot\Delta_{e}+\lambda_{t}\cdot\mathcal{T}_{e}\) ; \(e\in\mathcal{N}\}\)
9:   if \(\Delta_{x}>0\) or \(|V|\leq k\) then
10:    if \(x\notin V\) then
11:      \(V\gets V\cup\{x\}\)
12:    else
13:      \(V\gets V\setminus\{x\}\)
14:    end if
15:    \(\mathcal{T}_{x}\gets 0\)
16:    \(V^{*}\leftarrow\arg\max\{f(V^{*}),f(V)\}\)
17:  else
18:    \(x\leftarrow\arg\max_{e}\{\lambda_{\Delta}\cdot\Delta_{e}+\lambda_{t}\cdot\mathcal{T}_{e}\) ; \(e\in V\}\)
19:    \(V\gets V\setminus\{x\}\)
20:  end if
21: end for
22: return \(V^{*}\)
```
**Algorithm 1** ReversibleLocalSearch\((f,\mathcal{N},k,\lambda_{\Delta},\lambda_{t})\)
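A plain-Python sketch of Algorithm 1, assuming \(f\) is a cheap set-function oracle; the names below are illustrative.

```python
def reversible_local_search(f, ground_set, k, lam_delta, lam_t):
    """Sketch of Algorithm 1 (ReversibleLocalSearch); f(V) is the set-function oracle."""
    V, V_best = set(), set()
    age = {e: 0 for e in ground_set}                      # element-wise timer T (line 4)

    def gain(e):                                          # f(e|V) as defined on line 3
        return f(V | {e}) - f(V) if e not in V else f(V - {e}) - f(V)

    for _ in range(2 * k):
        for e in ground_set:
            age[e] += 1
        delta = {e: gain(e) for e in ground_set}
        x = max(ground_set, key=lambda e: lam_delta * delta[e] + lam_t * age[e])
        if delta[x] > 0 or len(V) <= k:
            V = V | {x} if x not in V else V - {x}        # reversible flip of element x
            age[x] = 0
            if f(V) > f(V_best):
                V_best = set(V)
        else:                                             # over budget and no positive gain
            x = max(V, key=lambda e: lam_delta * delta[e] + lam_t * age[e])
            V = V - {x}
    return V_best
```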
### _Element Parameters_
In this section, a set of parameters are defined at the element level to describe the current state of the environment. These parameters provide the feature representation to the RL model for balancing exploration, exploiting the greedy policy, and taking the constraint limitation into account. Existing models such as S2V-DQN [7] and ECO-DQN [5] use one and seven element-specific parameters respectively for solution selection. While these are performant models, their lack of constraint parameters hinders their ability to handle constrained problems. In addition, S2V-DQN only uses one element parameter, which restricts its local search capability, while the seven parameters of ECO-DQN lead to redundancy due to its global observations. Therefore, we list the parameters used in our model as follows (a short sketch of assembling them is given after the list):
* **Incumbent State**\(V_{\{e\}}\) is a binary bit to indicate if the element is selected or not in current state \(S\), such that it is 1 if \(e\in V\); otherwise 0. This parameter of the entire set is denoted as \(V_{S}\), such that the partial solution \(V\) is denoted as \(V=\{x\in V_{S}|x=1\}\).
* **Marginal Gain**\(\Delta_{e}\) is the gain/loss of adding/removing element \(e\in\mathcal{N}\) from the partial solution \(V\). This parameter encourages the model to maximize the gain regardless of whether the action is removing or adding an element.
* **Inactive Age**\(\mathcal{T}_{e}\) provides the timestep that element \(e\in\mathcal{N}\) has not been selected as the action \(a\). It is reset to 0 every time \(a\leftarrow\{e\}\), which discourages the model from selecting the same action every iteration and aid in extensive exploration of the solution space.
* **Feasible candidate**\(V_{k_{e}}\) provides the permissible actions of an element. For the entire set, its candidate state is represented as \(V_{k}\). This parameter discourages the model from generating a solution set of size larger than \(k\) and encourages removing actions when the size exceeds the constraint limitation. For this parameter, all elements are set to 1 if \(|V|\leq k\); otherwise, only the elements in the partial solution set are 1, and the rest of the elements are 0.
* **Distance to local optimum**\(\mathcal{L}^{*}\) is the distance of current \(f(S)\) to the best observed \(f(S^{*})\). The \(S^{*}\) is the state of the best solution that is updated when \(f(S)>f(S^{*})\) at the end of each timestep. As the actions reducing solution value are possible, we need this parameter to track the best solution that has been encountered during the episode.
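A minimal sketch of assembling these five observations into the per-element feature matrix fed to the network; the helper and argument names are illustrative.

```python
import numpy as np

def build_state_features(ground_set, V, gains, age, k, gap_to_best):
    """Assemble the five per-element observations of Section IV-A into an
    |N| x 5 feature matrix (a sketch; argument names are illustrative)."""
    all_feasible = len(V) <= k
    rows = []
    for e in ground_set:
        in_solution = 1.0 if e in V else 0.0                     # incumbent state
        feasible = 1.0 if (all_feasible or e in V) else 0.0      # feasible candidate
        rows.append([in_solution, gains[e], age[e], feasible, gap_to_best])
    return np.asarray(rows, dtype=np.float32)
```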
### _Network Architecture_
The DQN network architecture determines how the Q-value for the action policy is calculated. Existing literature has primarily focused on GCN [21, 35, 36] and MPNN [5, 7] to determine the best action based on the updated node embedding from the neighborhood aggregation. While the GNN aggregates information from local graph neighborhoods and the embeddings encode the multi-hop neighbors, it also smooths the features that are well-designed to intuitively reflect the environment state. In particular, the parameters defined as the environment state in Section IV-A are additional information that is not attached to nodes and edges and should avoid any operation that can cause information loss, such as neighborhood normalization [37]. Therefore, these properties of GNNs adversely impact the robustness and ability to solve various combinatorial
optimization problems. Additionally, the memory overhead of MPNN and GCN network architecture can limit their scalability to large data instances. RELS-DQN addresses the robustness and scalability concerns of the MPNN-based models by utilizing two fully connected feed-forward layers with ReLu activation functions represented as follows:
\[\mu_{e}=\text{ReLu}(\theta_{2}\ \text{ReLu}(\theta_{1}x_{e})), \tag{2}\]
where \(x_{e}=\{V_{\{e\}},\Delta_{e},\mathcal{T}_{e},V_{k_{e}},\mathcal{L}^{*}\}\in \mathbb{R}^{5}\) is the feature vector of a single element \(e\in\mathcal{N}\), and \(\mu\) is its output vector from the hidden layers. \(\theta_{1}\in\mathbb{R}^{m\times 5},\theta_{2}\in\mathbb{R}^{m\times m}\) are the weight of the corresponding layers, and \(m\) is the number of neurons in the hidden layers.
The feed-forward layers directly capture information about the environment representations and use its weight matrix and ReLu activation function to produce the hidden features. Then, the readout layer produces the Q-value from each element's hidden feature \(\mu_{e}\) and the pooled embedding over the ground set as follows:
\[Q_{e}=\theta_{4}[\text{ReLu}(\theta_{3}\frac{1}{|\mathcal{N}|}\sum_{u\in \mathcal{N}}\mu_{u}),\mu_{e}], \tag{3}\]
where \(\theta_{3}\in\mathbb{R}^{m\times m}\), \(\theta_{4}\in\mathbb{R}^{2m}\), \([\cdot,\cdot]\) is the concatenation operator, and \(|\mathcal{N}|\) represents the number of elements in ground set. The embedding of the ground set \(\mu_{u},u\in\mathcal{N}\) uses the shared \(\mu_{e}\), which is later normalized with \(|\mathcal{N}|\) and fed into a linear pooling layer with ReLu activation function. The concatenation of the \(\mu_{e}\) and the pooled embedding over the ground set, ReLu(\(\theta_{3}\frac{1}{|\mathcal{N}|}\sum_{u\in\mathcal{N}}\mu_{u}\)), is used to extract the Q-value so that the model selects the action \(a\leftarrow\arg\max_{i}\{Q_{i}:1\leq i\leq|\mathcal{N}|\}\).
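A minimal PyTorch sketch of equations (2)-(3); the hidden width \(m=64\) and the bias terms of nn.Linear are assumptions not specified by the equations.

```python
import torch
import torch.nn as nn

class RELSQNet(nn.Module):
    """Sketch of Eqs. (2)-(3): two feed-forward layers plus a pooled readout."""
    def __init__(self, in_dim=5, m=64):
        super().__init__()
        self.theta1 = nn.Linear(in_dim, m)
        self.theta2 = nn.Linear(m, m)
        self.theta3 = nn.Linear(m, m)       # pooling layer over the ground set
        self.theta4 = nn.Linear(2 * m, 1)   # readout to a scalar Q-value per element

    def forward(self, x):                   # x: [|N|, 5] element features
        mu = torch.relu(self.theta2(torch.relu(self.theta1(x))))     # Eq. (2)
        pooled = torch.relu(self.theta3(mu.mean(dim=0)))             # mean over the ground set
        pooled = pooled.expand(mu.size(0), -1)
        q = self.theta4(torch.cat([pooled, mu], dim=1)).squeeze(-1)  # Eq. (3)
        return q                            # greedy action: q.argmax()
```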
### _Reward reshaping_
The reward reshaping of RELS-DQN primarily consists of the state reward \(F(V_{S})\) and the constraint reward \(D(V_{S})\) in order to find the best solution within constraints by exploration. Given the state reward \(F(V_{S})=max(f(S)-f(S^{*}),0)\) and the constraint reward \(D(V_{S})=k-|V|\), the non-negative reward for solving constrained instances is formally given as follows:
\[R(S,a)=\begin{cases}max\{\frac{F(V_{S})\times D(V_{S})}{|\mathcal{N}|},0\}&a \notin V,|V|\geq k\\ \quad max\{\frac{F(V_{S})}{|\mathcal{N}|},0\}&\text{, otherwise}\end{cases} \tag{4}\]
where \(V_{S}\) is the incumbent state at state \(S\); \(S^{*}\) is the best observed state up to current state and \(k\) is the cardinality constraint.
The state reward \(F\) provides a positive reward when an action improves the objective value over the best-observed state \(S^{*}\) and does not receive a penalty when the action reduces the solution value. On the other hand, the constraint reward \(D\) works as follows: if the size of the solution is within the constraint limitation \(|V|<k\), the agent receives the state reward; if \(|V|\geq k\), the agent is encouraged to remove an element \(e\) from the partial solution \(V\) by adding a penalty to the adding action. Additionally, the exploration is considered by removing all penalties and not rewarding the agent unless the solution is improved. Therefore, reward shaping considers two scenarios: if the agent takes any actions within the constraint limit or the removing actions at \(|V|\geq k\), the environment provides the state reward; if the agent takes the adding action at \(|V|\geq k\), the environment changes any positive reward into zero by providing the constraint penalty \(D<0\). In this case, the environment only provides a positive reward when a better solution is found within the constraint, and the non-negative reward allows the agent to explore actions that can reduce the solution value.
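A minimal sketch of equation (4) as a reward function; the argument names are illustrative.

```python
def shaped_reward(f_new, f_best, n_elements, action_adds, solution_size, k):
    """Sketch of Eq. (4): positive reward only when the best-observed value improves,
    zeroed out for adding actions once the constraint is violated."""
    state_reward = max(f_new - f_best, 0.0)                # F(V_S)
    if action_adds and solution_size >= k:
        constraint_reward = k - solution_size              # D(V_S) <= 0
        return max(state_reward * constraint_reward / n_elements, 0.0)
    return max(state_reward / n_elements, 0.0)
```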
## V Empirical Evaluation
In this section, we analyze the effectiveness and efficiency of RELS-DQN as a general purpose algorithm. We provide the source code link1 that includes experimental scripts and datasets. With our empirical evaluations, we address the following questions:
Footnote 1: Code Repository: [https://drive.google.com/file/d/1cQaJ1zdlQMY2C-NF6dnWgps2jVkt/view](https://drive.google.com/file/d/1cQaJ1zdlQMY2C-NF6dnWgps2jVkt/view)
**Question 1:** Is a shallow network architecture better at solving CO problems than deep neural networks?
**Question 2:** Can the RELS-DQN model solve a diverse set of problems without application-specific training?
**Question 3:** How efficient is RELS-DQN compared to the MPNN-based ECO-DQN and Greedy-LS?
### _Experiment setup_
**Applications:** Our experiment set consists of a set of four diverse combinatorial optimization problems: Maximum Cut (MaxCut) [5], Directed Vertex Cover with Costs (MaxCov) [38], Movie Recommendation (MovRec) [13], and Influence-and-Exploit Marketing (InfExp) [39]. Details of all applications and their objective functions are defined as follows:
**Maximum Cut Problem** (MaxCut) is defined as follows. For a given graph \(G(\mathcal{N},E,w)\), where \(\mathcal{N}\) represents the set of vertices, \(E\) denotes the set of edges, and \(w:E\rightarrow\mathbb{R}^{*}\) represents the weight of edges with \(w\in\{0,\pm 1\}\). Find a subset of nodes \(V\subseteq\mathcal{N}\) such that the objective function is maximized: [5]
\[f(V):=max\sum_{u\in V,v\in\overline{V}}w(u,v), \tag{5}\]
where \(\overline{V}=\mathcal{N}\setminus V\) represents the complementary set. In addition, the constrained MaxCut problem projects to \(|V|\leq k\), in which \(0<k\leq|\mathcal{N}|\) is a constraint.
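For reference, a minimal sketch of evaluating the cut value in Eq. (5) is shown below; it assumes, purely for illustration, that the graph is stored as a dense symmetric weight matrix.

```python
import numpy as np

def maxcut_value(W, V):
    """Cut value of Eq. (5) for weight matrix W and node subset V (a sketch)."""
    n = W.shape[0]
    in_V = np.zeros(n, dtype=bool)
    in_V[list(V)] = True
    return W[np.ix_(in_V, ~in_V)].sum()  # sum of edge weights crossing the cut

# the constrained variant additionally requires len(V) <= k
```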
**Directed Vertex Cover with Costs** (MaxCov) is defined as follows. For a given directed graph \(G(\mathcal{N},E,w)\), let \(w:E\rightarrow\mathbb{R}\). For a subset of nodes \(V\subseteq\mathcal{N}\), the weighted directed vertex cover function is \(g(V)=\sum_{u\in N(V)\cup V}w_{u}\), where \(N(V)\) represents the set of nodes pointed by the subset \(V\). [38] assumes that there is a nonnegative cost \(c_{n}\) for each \(n\in\mathcal{N}\). Therefore, the objective function is defined as [38]:
\[g(V)-c(V):=\sum_{u\in N(V)\cup V}w_{u}-\sum_{u\in V}c_{u}, \tag{6}\]
where the cost associated with each node is set to \(c(n)=1+max\{d(n)-q,0\}\) for a fixed cost factor \(q=6\) and the out-degree \(d(n)\) of \(n\in\mathcal{N}\).
**Movie Recommendation** (MovRec) produces a subset from a list of movies for a user based on other users with similar ratings. We use the objective function in [13] defined as follows:
\[f(V):=\sum_{i\in\mathcal{N}}\sum_{j\in V}s(i,j)-\lambda\sum_{i\in V}\sum_{j\in V }s(i,j) \tag{7}\]
where \(s(i,j)\) is the inner product of the rating vectors of movies \(i\) and \(j\), and \(\lambda=5\) is the diversity coefficient; a larger \(\lambda\) penalizes similarity more [39].
**Influence-and-Exploit Marketing** (InfExp) considers how the buyers of a social network can influence others to buy goods for revenue maximization. For a given set of buyers \(\mathcal{N}\), each buyer \(i\in\mathcal{N}\) is associated with a non-negative concave function \(f_{i}(x)=a_{i}\sqrt{x}\), where \(a_{i}\) is a number drawn from a Pareto Type II distribution with \(\lambda=1,\alpha=2\)[39]. For a given subset \(V\subseteq\mathcal{N}\setminus\{i\}\) of buyer \(i\), the objective function is defined as follows [39]:
\[f(V):=\sum_{i\in\mathcal{N}\setminus V}f_{i}(\sum_{j\in V\cup\{i\}}w_{ij}) \tag{8}\]
where the edge weights of the graph are generated uniformly at random from \(U(0,1)\).
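A similar sketch for the InfExp objective in Eq. (8) follows; the orientation of the weight-matrix indexing is an assumption made only for illustration.

```python
import numpy as np

def infexp_value(W, a, V):
    """Revenue of Eq. (8): buyers outside V respond concavely to influence from V."""
    V = set(V)
    total = 0.0
    for i in range(W.shape[0]):
        if i in V:
            continue
        x = sum(W[i, j] for j in V | {i})  # influence received by buyer i
        total += a[i] * np.sqrt(x)         # f_i(x) = a_i * sqrt(x)
    return total
```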
**Benchmarks:** We compare RELS-DQN to the MPNN heuristic framework ECO-DQN [5], the standard greedy algorithm (Greedy), the vanilla local search algorithm (Greedy-Rev), and the local search algorithm [10] (Greedy-LS). ECO-DQN serves as a benchmark after adapting the feasible candidate \(V_{k}\) in the environment observation and the constrained reward function. Greedy-Rev is a greedy algorithm that allows reversible actions to reach a better value until no further improvement is possible. Greedy-LS, meanwhile, iterates within the cardinality constraint \(k\), checking after every new addition whether existing nodes meet the criteria for removal or swapping. Additionally, we use a sparse MPNN representation implemented with Torch-Geometric to evaluate the memory improvement.
**Training:** The RELS-DQN model is trained on MaxCut only, while ECO-DQN models are trained on the specific applications. The ECO-DQN model is modified and adapted to the constrained problem by applying the feasible candidate parameter \(V_{k}\) (Section IV-A) and reward shaping (Section IV-C). We train all models using synthetic graphs (\(n\)=40, \(k\)=30) with \(2k\) timesteps per episode, to make sure each graph is explored entirely and the element parameters are learned sufficiently. The synthetic training data is generated in the following ways:
* **MaxCut:** Erdos-Renyi graphs (ER) [40] with probability \(p=0.15\) and edge weight \(w_{ij}\in\{0,\pm 1\}\).
* **MaxCov:** ER graphs with \(p=0.15\) and \(w_{ij}\in\{0,+1\}\).
* **MovRec:** ER graphs with \(p=0.15\) and \(w_{ij}\in(0,1)\) generated uniformly randomly.
* **InfExp:** Barabasi-Albert graph (BA) [41] with \(m=4\) and \(w_{ij}\in(0,1)\) generated uniformly randomly.
**Testing:** RELS-DQN model uses the pre-trained model of MaxCut to validate all applications and datasets. Conversely, the ECO-DQN model uses the application-specific pre-trained model for each problem. We evaluate the performance using synthetic ER graph (\(n\)=200, \(p\)=0.03), BA graph (\(n\)=200, \(m\)=4) and large datasets listed in Table II. In addition, all applications are validated on a hundred graphs except the InfExp is validated on ten graphs.
**Hardware environment:** We run all experiments on a server with a TITAN RTX GPU (24GB memory) and 80 Intel(R) Xeon(R) Gold 5218R CPU cores.
### _Architecture Evaluation_
In this section, we describe the learning characteristics of the DQN models on the MaxCut problem. Figure 2(a) compares the training performance of RELS-DQN by increasing the number of hidden layers in Equation 2 up to 8 layers. As shown, RELS-DQN-2 and RELS-DQN-4 converge to the highest solution value, and RELS-DQN-2 exhibits the ability to converge earlier. RELS-DQN-6 cannot converge to a high value, while RELS-DQN-8 shows the inability to learn the MaxCut problem. These results demonstrate the adverse effect of using a complex neural network architecture to solve combinatorial problems. Based on our evaluation, we choose RELS-DQN-2 as our model for all evaluations as it demonstrates the best learning ability. In Figure 2(b), we observe that both RELS-DQN-2 and ECO-DQN converge to the best solution with ours converging earlier. S2V-DQN cannot achieve higher solution values due to its lack of local search.
### _Generalization Evaluation_
In this section, we test the ability of RELS-DQN to provide feasible solutions to various combinatorial problems without application-specific training. To evaluate its generalization, we compare the MaxCut-trained RELS-DQN (ER graph, \(n=\)40) to the application-specific trained ECO-DQN (ER graph, \(n=40\)), Greedy, Greedy-LS, and Greedy-Rev across four applications. We evaluate the models on synthetic and real-world datasets for each application. As part of the synthetic dataset experiments (Figure 3(a)-(h)), all models are run on a hundred randomly generated synthetic graphs, with the mean solution value plotted.
**RELS-DQN Vs. Greedy** The empirical comparison of RELS-DQN to the Greedy, the Greedy-Rev, and the Greedy-LS demonstrates our model's ability to provide solutions with values higher than or equal to these general-purpose greedy-based algorithms. As illustrated in Figure 3, due to its lack of local-search capability, Greedy is surpassed by Greedy-Rev and Greedy-LS in terms of solution value, with Greedy-LS consistently providing similar or outperforming Greedy-Rev across all applications. For
\begin{table}
\begin{tabular}{l l l l}
\hline \hline
Application & Dataset & \(|\mathcal{N}|\) & \(|E|\) \\
\hline
MaxCut & ego-Facebook (fb4k) & 4039 & 88234 \\
MaxCut & musae-FB (fb22k) & 22470 & 171002 \\
MaxCov & email-Eu-core (eu1k) & 1005 & 25571 \\
MovRec & MovieLens (mov1.7k) & 1682 & 983206 \\
InfExp & Barabási-Albert (ba300) & 300 & 1184 \\
\hline \hline
\end{tabular}
\end{table} TABLE II: Dataset and graph size.
the MaxCut and MaxCov problem on the ER datasets (Figure 3(a), 3(b)), RELS-DQN outperforms Greedy-LS by 2% and 3% respectively on average, with indiscernible results for the other applications. Additionally, RELS-DQN provides 5% and 1.5% improvement in solution value over Greedy-LS for MaxCut and MaxCov on BA datasets, with near-identical results on other applications and large datasets. RELS-DQN's improvement over Greedy-LS can be attributed to its enforced exploration (Alg. 1, Line 18 - 19), which allows the model to search for better solutions even when local-search criteria (Alg. 1, Line 9 - 16) indicate no possibility of adding or removing elements. Across all instances, RELS-DQN outperforms Greedy, Greedy-Rev, and Greedy-LS by 5%, 2%, and 1%, respectively. It is evident from the comparison that RELS-DQN's local-search capability can provide feasible solutions in various applications without requiring application-specific training, proving its utility as a general-purpose algorithm. Individual application results are provided in Table I.
**RELS-DQN Vs. ECO-DQN** Across all datasets, ECO-DQN and RELS-DQN provide near-identical solution values for the MaxCut problem. As shown in Figure 3(b)-(d), for MaxCov, MovRec, and InfExp on ER graphs, RELS-DQN consistently outperforms ECO-DQN with a 5% average improvement in solution value across these instances. Extending the experiment set to the BA datasets (Figure 3(f)-(h)), we observe that RELS-DQN provides an average improvement of 28% across the MaxCov, MovRec and InfExp application, with this improvement increasing to 114% for real-world datasets (Figure 3(j)-(l)). In addition, Table I indicates that the difference between ECO-DQN and RELS-DQN is minimal for ER datasets as the models are trained on ER graphs for each application, and the performance gap increases with changes in graph type and size. The comparison validates the generalization consistency of RELS-DQN to maintain or outperform application-specific ECO-DQN models' performance across a wide range of combinatorial applications and datasets.
### _Efficiency Evaluation_
**Runtime Evaluation:** We compare the runtime performance of RELS-DQN to ECO-DQN and the best local-search baseline Greedy-LS for the real-world datasets. Figure 4(b)-(d) illustrates that RELS-DQN provides a 264, 64, and 36 times speedup over Greedy-LS for MaxCov, MovRec and InfExp respectively. Additionally, for the MaxCut problem, Greedy-LS does not complete a single constraint instance within the 24 hours timeout. As shown in Figure 4(a), 4(c) and 4(d), RELS-DQN is 40%, 7% and 8% faster than ECO-DQN for the MaxCut, MovRec and InfExp problem. Across all applications, RELS-DQN provides a 10% runtime improvement over ECO-DQN on average. The results exemplify RELS-DQN's ability to maintain or outperform the Greedy-LS in terms of solution value while providing an order-of-magnitude speedup in completion time.
**Memory evaluation:** We observe the GPU memory usage using the \(pynvml\) package. For ECO-DQN, we monitor two variants, ECO-DQN, which uses an adjacency matrix for its graph representation, and ECO-DQN-sparse, a sparse matrix implementation for MPNN. Table III presents the memory usage of these models with "-" representing scenarios when an out-of-memory crash occurred. Evaluations are performed on two real-world datasets, fb4k (\(n=\)4,039 ) and fb22k (\(n=\)22,470) with batch sizes 1 (Postfix "-batch1") and 5. ECO-DQN with dense representation (adjacency matrix) runs out of GPU memory on 3 out of 4 instances, with the only instance it completes being fb4k for batch size = 1. While ECO-DQN-sparse drastically improves the memory overhead of ECO-DQN, RELS-DQN on average across all scenarios provides a 30% improvement over ECO-DQN-sparse.
## VI Conclusion
In this paper, we have introduced RELS-DQN, a novel DQN model that demonstrates the ability to perform the local search across a diverse set of combinatorial problems without application-specific training. Unlike existing models, utilizing a compressed feature space and a simpler network architecture allows our model to behave like a general-purpose algorithm for combinatorial problems. Our claims are exemplified through an extensive evaluation of a diverse set of applications that illustrates RELS-DQN's ability to provide feasible solutions on all four instances of constrained
Fig. 2: The learning curves of the RL models for MaxCut on ER-40 graph. RELS-DQN and ECO-DQN converge to a higher solution value than S2V-DQN while RELS-DQN reaches convergence earlier than ECO-DQN.
\begin{table}
\begin{tabular}{l l l l l}
\hline \hline
Network type & fb4k-batch1 & fb4k & fb22k-batch1 & fb22k \\
\hline
RELS-DQN & **1459** & **1725** & **3335** & **11197** \\
ECO-DQN & 10483 & - & - & - \\
ECO-DQN-sparse & 1571 & 2489 & 5249 & 20663 \\
\hline \hline
\end{tabular}
\end{table} TABLE III: GPU usage (MB).
CO problems, with significant speedup over ECO-DQN and Greedy-LS. In addition, the reduction in memory footprint compared to MPNN models encourages RELS-DQN's scalability to real-world combinatorial problems. The combined improvement in both generalization and efficiency suggests that the environment states contain valuable information that should not be used as messages in a GNN. Future studies should consider feature-based and structural information separately and analyze their individual effects before exploring their combined impact. On the other hand, our framework also has several limitations. CO problems that require an ordered sequence, such as TSP, are not feasible, because our framework focuses on selecting a subset in an arbitrary order that maximizes the objective function. Another limitation is that the environment representation consists of element-wise parameters and requires marginal gain calculation, which is computationally expensive; efficiency can be further improved by reducing the number of queries in the marginal gain calculation.
Fig. 4: The runtime results of RELS-DQN, ECO-DQN and the best local search baseline Greedy-LS on the large datasets (see Table II) of four applications. The \(timeout\) for each application (\(k=\{25,50,100,200\}\)) is set to **24 hours**.
Fig. 3: Illustrates the testing results of RELS-DQN, ECO-DQN, Greedy, Greedy-Rev and Greedy-LS on four applications. RELS-DQN uses the pre-trained model of MaxCut (ER, \(n=\)40) for all applications, while ECO-DQN uses application-specific model for each application. The \(timeout\) for each application is set to **24 hours**.
|
2310.04918
|
SWAP: Sparse Entropic Wasserstein Regression for Robust Network Pruning
|
This study addresses the challenge of inaccurate gradients in computing the
empirical Fisher Information Matrix during neural network pruning. We introduce
SWAP, a formulation of Entropic Wasserstein regression (EWR) for pruning,
capitalizing on the geometric properties of the optimal transport problem. The
``swap'' of the commonly used linear regression with the EWR in optimization is
analytically demonstrated to offer noise mitigation effects by incorporating
neighborhood interpolation across data points with only marginal additional
computational cost. The unique strength of SWAP is its intrinsic ability to
balance noise reduction and covariance information preservation effectively.
Extensive experiments performed on various networks and datasets show
comparable performance of SWAP with state-of-the-art (SoTA) network pruning
algorithms. Our proposed method outperforms the SoTA when the network size or
the target sparsity is large, the gain is even larger with the existence of
noisy gradients, possibly from noisy data, analog memory, or adversarial
attacks. Notably, our proposed method achieves a gain of 6% improvement in
accuracy and 8% improvement in testing loss for MobileNetV1 with less than
one-fourth of the network parameters remaining.
|
Lei You, Hei Victor Cheng
|
2023-10-07T21:15:32Z
|
http://arxiv.org/abs/2310.04918v4
|
# Robust Network Pruning With Sparse Entropic Wasserstein Regression
###### Abstract
This study tackles the issue of neural network pruning that inaccurate gradients exist when computing the empirical Fisher Information Matrix (FIM). We introduce an entropic Wasserstein regression (EWR) formulation, capitalizing on the geometric attributes of the optimal transport (OT) problem. This is analytically showcased to excel in noise mitigation by adopting neighborhood interpolation across data points. The unique strength of the Wasserstein distance is its intrinsic ability to strike a balance between noise reduction and covariance information preservation. Extensive experiments performed on various networks show comparable performance of the proposed method with state-of-the-art (SoTA) network pruning algorithms. Our proposed method outperforms the SoTA when the network size or the target sparsity is large, the gain is even larger with the existence of noisy gradients, possibly from noisy data, analog memory, or adversarial attacks. Notably, our proposed method achieves a gain of 6% improvement in accuracy and 8% improvement in testing loss for MobileNetV1 with less than one-fourth of the network parameters remaining.
## 1 Introduction
The advent of deep learning has revolutionized various domains of artificial intelligence, with neural networks showing remarkable performance across an array of applications. Nonetheless, the increase in model complexity has led to escalating computational demands and substantial memory requirements. This poses significant challenges for deploying these models in resource-constrained environments such as mobile or internet of things (IoT) devices. Therefore, the concept of neural network pruning emerges as a critical solution. It aims to optimize the network by removing less important parameters, which reduces computational overhead while maintaining the performance of the original model.
In the realm of state-of-the-art deep learning, the models often exhibit substantial size and complexity, with up to trillions of parameters, as exemplified by models such as GPT-4. The immense computational demand, energy inefficiency, and the challenges with model interpretability associated with these models highlight the need for innovative and efficient optimization techniques. These techniques should ideally minimize the model size while improving their robustness and interpretability. Considering the limitations of previous work, especially those arising from the influence of noisy data and noisy gradients, the paper proposes a promising pathway for robust pruning.
Below, we inspect the network pruning problem from an optimization perspective, with a concise introduction of the most relevant existing works. A sketch of our approach is given following a brief discussion of the motivations.
**Related Work on Pruning as Optimization.** Denote by \(\bar{\mathbf{w}}\in\mathbb{R}^{p}\) a trained model and \(\mathcal{L}(\mathbf{w})\) the loss function given arbitrary model \(\mathbf{w}\). The loss function can be locally approximated around \(\bar{\mathbf{w}}\)
with Taylor Expansion as shown in (1).
\[\mathcal{L}(\mathbf{w})=\mathcal{L}(\bar{\mathbf{w}})+\nabla\mathcal{L}(\bar{ \mathbf{w}})^{\top}(\mathbf{w}-\bar{\mathbf{w}})+\frac{1}{2}(\mathbf{w}-\bar{ \mathbf{w}})^{\top}\nabla^{2}\mathcal{L}(\bar{\mathbf{w}})(\mathbf{w}-\bar{ \mathbf{w}})+O(\left\|\mathbf{w}-\bar{\mathbf{w}}\right\|^{3}) \tag{1}\]
Consider a neural network with a loss function \(\mathcal{L}(\mathbf{w})=\frac{1}{N}\sum_{i=1}^{N}\boldsymbol{\ell}_{i}(\mathbf{w})\), where \(\boldsymbol{\ell}_{i}(\mathbf{w})\in\mathbb{R}\) is the loss incurred at data point \(i\) (\(i=1,\ldots,N\)). The goal of network pruning is to find a \(\mathbf{w}\) with at most \(k\) (\(k<p\)) non-zero elements while keeping the performance of the newly obtained model \(\mathbf{w}\) as close as possible to that of the original one \(\bar{\mathbf{w}}\). Mathematically, we want to find some \(\mathbf{w}\in\mathbb{R}^{p}\) that satisfies both \(\mathcal{L}(\mathbf{w})\approx\mathcal{L}(\bar{\mathbf{w}})\) and \(\left\|\mathbf{w}\right\|_{0}\leq k\), with \(k<p\).
This line of research can be dated back to (LeCun et al., 1989), where the approximation in equation (1) is adopted. Under the assumption that gradient \(\nabla\mathcal{L}(\bar{\mathbf{w}})\approx 0\) when the network is trained, the network weights are pruned one-by-one in decreasing order based on the value of \((\mathbf{w}-\bar{\mathbf{w}})^{\top}\mathbf{H}(\mathbf{w}-\bar{\mathbf{w}})\). In their approach, the \(\mathbf{H}\) is approximated as a diagonal matrix; this is later extended in (Hassibi and Stork, 1992) to include the whole Hessian matrix and the authors also proposed using the Fisher information matrix (FIM) as an approximation to the Hessian. Later, (Singh and Alistarh, 2020) proposed to reduce the computation complexity by using block diagonal Hessian, and FIM is approximated using a small subset of the training data.
These approaches all use equation (1) to prune the network in a one-by-one manner, namely the weight with the least importance is set to zero according to the different approximations of equation (1). In this way, the potential interactions of pruning multiple weights are ignored. To explore this, the network pruning problem is formulated as a mixed integer quadratic programming (MIQP) in (Yu et al., 2022). Namely, an objective function
\[f(\mathbf{w})=(\mathbf{w}-\bar{\mathbf{w}})^{\top}\mathbf{H}(\mathbf{w}-\bar{\mathbf{w}})+\lambda\left\|\mathbf{w}-\bar{\mathbf{w}}\right\|^{2},\quad\lambda\geq 0 \tag{2}\]
is minimized, and Hessian is approximated as \(\mathbf{H}\approx\nabla^{2}\mathcal{L}(\bar{\mathbf{w}})\), subject to the sparsity constraint \(\left\|\mathbf{w}\right\|_{0}\leq k\), where \(\lambda\) is a regularization parameter. Although this approach shows significant improvements over the previous work, it suffers from scalability issues as it requires the full Hessian matrix.
**Sparse Linear Regression (LR) Formulation**. To reduce the computational complexity, the Hessian matrix can be approximated by the empirical FIM, using \(n\) samples as in (Chen et al., 2022; Benbaki et al., 2023). Denote \(\mathbf{G}=[\nabla\boldsymbol{\ell}_{1},\ldots,\nabla\boldsymbol{\ell}_{n}]^{\top}\in\mathbb{R}^{n\times p}\), where \(\nabla\boldsymbol{\ell}_{i}=\nabla\boldsymbol{\ell}_{i}(\bar{\mathbf{w}})\). For simplicity, \(\nabla\boldsymbol{\ell}_{i}\) is used throughout this paper to denote the derivative of data point \(i\)'s loss at \(\bar{\mathbf{w}}\), unless specified otherwise. The Hessian is approximated through the expression \(\mathbf{H}\approx(1/n)\sum_{i=1}^{n}\nabla\boldsymbol{\ell}_{i}\nabla\boldsymbol{\ell}_{i}^{\top}=(1/n)\mathbf{G}^{\top}\mathbf{G}\), which is the so-called empirical FIM. Denoting \(x_{i}=\nabla\boldsymbol{\ell}_{i}^{\top}\mathbf{w}\) and \(y_{i}=\nabla\boldsymbol{\ell}_{i}^{\top}\bar{\mathbf{w}}\), (2) is reformulated as the sparse LR problem shown in (3) below.
\[\min_{\mathbf{w}}\bar{Q}(\mathbf{w})=\sum_{i=1}^{n}\left\|x_{i}(\mathbf{w})-y _{i}\right\|^{2}+n\lambda\left\|\mathbf{w}-\bar{\mathbf{w}}\right\|^{2},\text{ s.t. }\|\mathbf{w}\|_{0}\leq k \tag{3}\]
This formulation has a computational advantage, as empirical FIM needs not to be computed explicitly. It is shown that the formulation scales _to large neural network pruning_(Chen et al., 2022).
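For concreteness, a minimal sketch of evaluating the objective in Eq. (3) is given below; the function name is ours, and the point is that only matrix-vector products with \(\mathbf{G}\) are needed, so the empirical FIM never has to be formed explicitly.

```python
import numpy as np

def lr_objective(G, w, w_bar, lam):
    """Sparse-LR pruning objective of Eq. (3), a sketch.

    G     -- (n, p) matrix of per-sample gradients at w_bar
    w     -- candidate pruned weights
    w_bar -- dense pre-trained weights
    lam   -- ridge regularization strength
    """
    n = G.shape[0]
    residual = G @ (w - w_bar)  # x_i(w) - y_i = grad_i^T (w - w_bar), all i at once
    return residual @ residual + n * lam * np.sum((w - w_bar) ** 2)
```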
**Motivation of Combating Against Noise**. In practice, it is not always easy to obtain the correct gradients for pruning large neural networks. There can be noise contained in the data samples, and the gradients can also be corrupted due to various reasons, e.g., distributed or federated learning (Turan et al., 2022), or adversarial attacks such as data poisoning (Steinhardt et al., 2017).
As pointed out by (Mahsereci et al., 2017; Siems et al., 2021), conditioning on the underlying true gradient \(\nabla\mathcal{L}(\bar{\mathbf{w}})=0\), there are mini-batch gradients that are no longer informative, as they can be fully explained by sample noise and vanishing gradients. These gradients do not contribute to the covariance information of the empirical FIM but instead act as outliers in the Hessian approximation.
In the scenarios of federated learning (FL), gradients computed by different clients are skewed and consequently, local models move away from globally optimal models (Huang et al., 2022), imposing challenges for constructing informative FIM. Besides, noise can be added to the gradient for privacy concerns in communications (Li et al., 2020). Additionally, the clients usually have inevitable noisy samples and labels, making models suffer from a significant performance drop (Tuor et al., 2021). Finally, over-the-air communications itself suffer from unavoidable noises (Ang et al., 2020; Yang et al., 2020). These lead to concerns for network pruning with noisy gradients.
The analog memory recently gained attention for deep learning model deployment (Garg et al., 2022). When neural network parameters and data are stored in these analog devices, they are susceptible to device-related noise, affecting the performance of network compression (Isik et al., 2023).
**Approach Sketch**. We revisit the MIQP network pruning optimization from a perspective of entropic Wasserstein regression (EWR), which leverages Wasserstein distance to model the dissimilarity between two distributions. In our context, it measures the dissimilarity of distributions relevant to model parameters and gradient magnitudes before and after pruning. Namely, \(\nabla\mathbf{\ell}\) is a \(p\) dimensional distribution, capturing geometric properties of the loss at \(\bar{\mathbf{w}}\) before pruning. Both \(\bar{\mathbf{w}}\) and \(\mathbf{w}\) perform projections for \(\nabla\mathbf{\ell}\) to a 1-D distribution respectively as \(\nabla\mathbf{\ell}^{\top}\bar{\mathbf{w}}\) and \(\nabla\mathbf{\ell}^{\top}\mathbf{w}\). Computing the distance between \(\nabla\mathbf{\ell}^{\top}\bar{\mathbf{w}}\) and \(\nabla\mathbf{\ell}^{\top}\mathbf{w}\) falls into the framework of sliced probability divergence (Nadjahi et al., 2020). Under this framework, pruning optimization essentially fine-tunes \(\mathbf{w}\) and selectively reserves its elements such that the divergence is minimized subject to the sparsity constraint.
Our approach's effectiveness in combating noisy gradients is established both analytically and numerically. We demonstrate that _pruning through the Wasserstein regression implicitly enacts gradient averaging using Neighborhood Interpolation_. This entails a nuanced balance between capturing gradient covariance and diminishing gradient noise. Notably, the sparse LR formulation is merely a specific instance of ours. Yet, our proposed algorithm doesn't demand a markedly higher computational expense. This modest additional effort bestows upon us enhanced robustness.
## 2 Problem Setup and Formulation
We first introduce the optimal transport (OT) problem in Kantorovich formulation with entropic regularization, which measures the divergence between two distributions, defined in (4) below. The Wasserstein regression formulation as a generalization of the LR formulation is then proposed.
**The Kantorovich Problem**. Denote \(\mathcal{P}_{2}\) the set of probability measures with finite second moments. Let \(\mu\), \(\nu\in\mathcal{P}_{2}\) and let \(\Pi(\mu,\nu)\) denote the set of probability measures in \(\mathcal{P}_{2}\) with marginal distributions equal to \(\mu\) and \(\nu\). The 2-Wasserstein distance is defined as
\[W_{2}^{2}(\mu,\nu)=\inf_{\pi\in\Pi(\mu,\nu)}\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}\|x-y\|^{2}\,\mathrm{d}\pi(x,y)+\varepsilon\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}\log\left(\frac{\mathrm{d}\pi}{\mathrm{d}\mu\,\mathrm{d}\nu}\right)\mathrm{d}\pi. \tag{4}\]
This is also referred to as the entropic OT problem, where the first term is the transportation cost between the two measures and the second term is the entropic regularization with multiplier \(\varepsilon\).
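Entropic OT problems of this form are commonly solved with Sinkhorn-style scaling iterations. The sketch below illustrates that idea for discrete marginals; it is not the solver used in this paper, which is deferred to Appendix A.3.

```python
import numpy as np

def sinkhorn_plan(C, a, b, eps, n_iter=200):
    """Entropic OT plan for a cost matrix C_ij = ||x_i - y_j||^2 (a sketch).

    a, b -- marginal weights of the two empirical distributions (each sums to 1)
    eps  -- entropic regularization strength epsilon
    """
    K = np.exp(-C / eps)                # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)               # alternately rescale to match the two marginals
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]  # transport plan Pi = diag(u) K diag(v)
```

A larger eps spreads each row of the plan over more columns, which is exactly the neighborhood-size effect discussed in Section 3.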
**Sparse EWR Formulation**. The pruning problem formulation is defined in (5) below.
\[\min_{\mathbf{w}} Q(\mathbf{w}) =W_{2}^{2}(x(\mathbf{w}),y)+\lambda\left\|\mathbf{w}-\bar{\mathbf{ w}}\right\|^{2}\] (5a) s.t. \[\left\|\mathbf{w}\right\|_{0} \leq k \tag{5b}\]
The term \(W_{2}^{2}(x(\mathbf{w}),y)\) is a Wasserstein distance between the two one-dimensional distributions \(x\) and \(y\) (or a sliced Wasserstein distance for \(\nabla\ell\) with two one-dimensional projections). The optimization is to alter \(\mathbf{w}\) such that the distance between the two distributions is minimized.
Let \(x\) and \(y\) follow the empirical distributions \(\{x_{i}\}_{i=1}^{n}\) and \(\{y_{i}\}_{i=1}^{n}\). Denote by \(a_{i}\) and \(b_{i}\) the mass of the data points \(x_{i}\) and \(y_{i}\), respectively. We use \(\mathbf{\Pi}\) to refer to a matrix representing the transportation probability between \(x\) and \(y\), and \(\Pi\) the set of all such matrices, i.e. \(\Pi=\{\mathbf{\Pi}\,|\,\sum_{i=1}^{n}\pi_{ij}=b_{j}\ \forall j\text{ and }\sum_{j=1}^{n}\pi_{ij}=a_{i}\ \forall i\}\). Then the formulation (5) reads as follows.
\[\min_{\mathbf{w}}Q(\mathbf{w}) =\inf_{\mathbf{\Pi}\in\Pi}\left\{\sum_{i=1}^{n}\sum_{j=1}^{n}\| x_{i}(\mathbf{w})-y_{i}\|^{2}\,\pi_{ij}+\varepsilon\sum_{i=1}^{n}\sum_{j=1}^{n} \log\left(\frac{\pi_{ij}}{a_{i}b_{j}}\right)\pi_{ij}\right\}+\lambda\left\| \mathbf{w}-\bar{\mathbf{w}}\right\|^{2}\] (6a) s.t. \[\left\|\mathbf{w}\right\|_{0} \leq k \tag{6b}\]
**LR as a Special Case**. Let \(\varepsilon=0\). Once we set \(\mathbf{\Pi}\) to be a diagonal matrix with constant value \(1/n\), i.e. \(\operatorname{diag}(1/n)\), the mass transportation happens only between data point pairs \((x_{i},y_{i})\) for
\(i=1,\ldots,n\). Therefore we have
\[Q_{\mathbf{\Pi}=\operatorname{diag}(1/n)}(\mathbf{w})=\frac{1}{n}\sum_{i=1}^{n} \left\|x_{i}(\mathbf{w})-y_{i}\right\|^{2}+\lambda\left\|\mathbf{w}-\bar{ \mathbf{w}}\right\|^{2}=\frac{1}{n}\bar{Q}(\mathbf{w}), \tag{7}\]
i.e. the formulation (6) in this case degrades to the LR formulation in (3).
## 3 Theoretical Aspects
This section reveals some favorable theoretical properties of the sparse EWR formulation for network pruning. We start with Proposition 1 below (proof in Appendix A.1), which states a geometric property of OT with squared Euclidean distance cost. Additionally, we demonstrate the Neighborhood Interpolation mechanism that happens implicitly in solving the EWR. Moreover, we show that such a mechanism strikes a balance between capturing gradient covariance and reducing gradient noise, with a brief discussion on the advantage of using entropic regularization in terms of sample complexity.
**Proposition 1** (Convex Hull Distance Equality).: _Consider a set \(S\) and its convex hull \(\text{Conv}(S)\) in a Euclidean space, and an arbitrary point \(x\) in the space. For any probability measure \(\hat{\nu}\) on \(S\), we can find a point \(y^{\prime}\) in \(\text{Conv}(S)\) as \(y^{\prime}=\int y\,\mathrm{d}\nu(y)\) such that \(\|x-y^{\prime}\|^{2}=\int\|x-y\|^{2}\,\mathrm{d}\hat{\nu}(y)\), where \(\nu\) is a measure on \(\text{Conv}(S)\)._
Neighborhood Interpolation.In formulation (5), let \(W_{x}\) be the first term of \(W_{2}^{2}\) in (5) for an arbitrary given \(x\), i.e., \(W_{x}=\int\left\|x(\mathbf{w})-y\right\|^{2}\,\mathrm{d}\pi(\cdot|x)(y)\), where \(\pi(\cdot|x)(y)\) is a conditional measure given \(x\). Now, divide the Euclidean space \(\mathbb{R}^{d}\) by sub-spaces \(S_{1}^{y}\), \(S_{2}^{y}\),..., \(S_{n}^{y}\) for \(y\). For any conditional measure \(\pi(\cdot|x)(y)\) defined on any \(S_{i}^{y}\) (\(i=1,\ldots,n\)), there exists a measure \(\nu(y)\) defined on \(\text{Conv}(S_{i}^{y})\) such that the weighted distance from \(S_{i}^{y}\) to \(x\) equals the distance from \(x\) to a point \(y^{\prime}\) in \(\text{Conv}(S_{i}^{y})\). Hence
\[W_{x} =\int_{S_{1}^{y}\cup S_{2}^{y}\cup\cdots\cup S_{n}^{y}}\left\|x-y\right\|^{2}\,\mathrm{d}\pi(\cdot|x)(y)\] \[=\frac{1}{n}\sum_{i=1}^{n}\left\|x-y_{i}^{\prime}\right\|^{2}\,\,\text{s.t.}\,\,y_{i}^{\prime}=\int_{\text{Conv}(S_{i}^{y})}y\,\,\mathrm{d}\nu(y),\;\nu\in\mathbb{V}_{i}(x),\;i=1,\ldots,n.\]
where \(\mathbb{V}_{i}(x)\) is the set of measures \(\nu\) that make the equality holds and \(\mathbb{V}_{i}(x)\neq\varnothing\) by Proposition 1.
Similarly, we define \(W_{y}\) for any given \(y\) and a set \(S_{m}^{x}\),
\[W_{y} =\int_{S_{1}^{x}\cup S_{2}^{x}\cup\cdots\cup S_{n}^{x}}\left\|x- y\right\|^{2}\,\mathrm{d}\pi(\cdot|y)(x)\] \[=\frac{1}{n}\sum_{i=1}^{n}\left\|x_{i}^{\prime}-y\right\|^{2}\, \,\text{s.t.}\,\,x_{i}^{\prime}=\int_{\text{Conv}(S_{i}^{x})}x\,\,\mathrm{d} \mu(x),\;\mu\in\mathbb{U}_{i}(y),\;i=1,\ldots n.\]
where \(\mu\) is a measure defined on \(\text{Conv}(S_{i}^{x})\) and \(\mathbb{U}_{i}(y)\neq\varnothing\).
_We demonstrate the concept of "Neighborhood Interpolation" through an empirical distribution example. Define \(S\) as a subset of \(y\) such that for every element \(y_{i}\in S\), \(\pi_{x,i}>0\). Without loss of generality, we can denote \(S=\{y_{1},y_{2},y_{3},y_{4},y_{5}\}\), and denote by \(\text{Conv}(S)\) the convex hull of \(S\). \(W_{x}\) computes a weighted summation of distances between \(x\) and the points \(y_{1},\ldots,y_{5}\). The weights \(\pi_{x,1},\ldots,\pi_{x,5}\) are decided by OT. A significant \(\pi_{x,i}\) typically implies that \(y_{i}\) is in proximity to \(x\), indicating a neighborhood relation. By Proposition 1, this weighted distance is analogous to the distance between \(x\) and \(y^{\prime}\), where \(y^{\prime}\) is derived from \(\text{Conv}(S)\)._
Revisit the EWR formulation.The integral of either \(W_{x}\) or \(W_{y}\) respectively on \(x\) or \(y\) gives the first term of \(W_{2}^{2}\). One can then reformulate (5) as (8) below.
\[\min_{\mathbf{w}:\left\|\mathbf{w}\right\|_{0}\leq k}Q(\mathbf{w})=\frac{1}{2} \inf_{\pi}\left\{\int W_{x}(\mathbf{w})\,\mathrm{d}\mu(x)+\int W_{y}(\mathbf{ w})\,\mathrm{d}\nu(y)\right\}+\lambda\left\|\mathbf{w}-\bar{\mathbf{w}}\right\|^{2}. \tag{8}\]
Interpretation: The objective function calculates the Euclidean distance between a point \(x\) and \(n\) distinct points. These \(n\) points originate from \(n\) convex hulls, each shaped by different \(n\) subspaces within \(y\). Similarly, the function measures the distance between each point \(y\) and \(m\) unique points derived from \(m\) convex hulls, each formed by distinct \(m\) subspaces within \(x\).
We claim that the EWR formulation is more resilient to noisy gradient than its counterpart, the LR formulation given by (3). To understand this claim better, let us reimagine the problem using empirical distributions, as indicated by (6). In this context, we use \(x_{i}\) and \(y_{i}\) as substitutes for \(S_{i}^{x}\) and \(S_{i}^{y}\). Moreover, the integration in both \(W_{x}\) and \(W_{y}\) is replaced with summations, offering a more insightful version of our initial EWR formulation, shown as (9).
\[\min_{\mathbf{w}:\|\mathbf{w}\|_{0}\leq k}Q(\mathbf{w})=\inf_{\mathbf{\Pi}} \left\{Q_{\mathbf{\Pi}}(\mathbf{w})+\varepsilon\sum_{i=1}^{n}\sum_{j=1}^{n} \log\left(\frac{\pi_{ij}}{a_{i}b_{j}}\right)\pi_{ij}\right\} \tag{9}\]
where the notation \(Q_{\mathbf{\Pi}}(\mathbf{w})\) in (10) denotes the part of the objective function given fixed \(\mathbf{\Pi}\):
\[Q_{\mathbf{\Pi}}(\mathbf{w}) =\sum_{i=1}^{n}\sum_{j=1}^{n}\|x_{i}(\mathbf{w})-y_{j}\|^{2}\,\pi_{ij}+\lambda\,\|\mathbf{w}-\bar{\mathbf{w}}\|^{2} \tag{10a}\] \[=\frac{1}{2}\underbrace{\sum_{i=1}^{n}\|x_{i}(\mathbf{w})-y_{i}^{\prime}\|^{2}}_{K_{\mathbf{\Pi}}^{(1)}}+\frac{1}{2}\underbrace{\sum_{i=1}^{n}\|x_{i}^{\prime}(\mathbf{w})-y_{i}\|^{2}}_{K_{\mathbf{\Pi}}^{(2)}}+\lambda\,\|\mathbf{w}-\bar{\mathbf{w}}\|^{2} \tag{10b}\]
In \(Q_{\mathbf{\Pi}}(\mathbf{w})\), for each index \(i\), points \(x_{i}^{\prime}\) and \(y_{i}^{\prime}\) are chosen from the convex hulls formed by points in \(x\) and \(y\), as per the guidelines of Proposition 1. Now, contrasting this with the LR model in (3), the objective \(Q(\mathbf{w})\) aims for regression directly over the data points whereas every point from one empirical set is matched for Euclidean distance computation to a point derived from a convex combination of the other.
The infimum in (9) seeks the OT plan, \(\mathbf{\Pi}\), that aligns the empirical distributions \(x\) and \(y\) closely. In practical terms, for each data point \(x_{i}\), only a subset of \(\{y_{i}\}_{i=1}^{n}\) will transport a substantial mass, rather than the entire set. This behavior of \(\mathbf{\Pi}\) effectively defines \(n\) "neighborhoods" for each data point \(x_{i}\) within the empirical distribution of \(y\). Here, a "neighborhood" refers to a group of data points in \(y\) that are proximate to a specific \(x_{i}\) in the Euclidean sense.
**Neighborhood Size Control**. A critical aspect of this formulation is the entropic regularization term, which is used to modulate the size of these neighborhoods. Specifically, increasing the value of \(\varepsilon\) amplifies the impact of the entropy term. This change broadens the neighborhoods, drawing more data points into the fold of the associated convex hulls. An illustrative extreme case is when \(\varepsilon=0\). Here, the OT does one-to-one matching, implying that each data point \(y_{i}\) primarily forms the convex hull independently. On the contrary, when \(\varepsilon\rightarrow\infty\), all data points are equally weighted by \(\mathbf{\Pi}\) and hence involved in forming the convex hull as a neighborhood.
**Capturing Covariance With Gradient Noise Reduction**. For an arbitrary \(\mathbf{w}\), the EWR formulation essentially strikes a balance between gradient noise reduction and covariance capturing. We show the analysis for \(K_{\mathbf{\Pi}}^{(1)}\) in (10), and \(K_{\mathbf{\Pi}}^{(2)}(\mathbf{w})\) follows similarly. Note that \(y_{i}^{\prime}=\sum_{j=1}^{n}\nu_{j}^{(i)}y_{j}=\sum_{j=1}^{n}\nu_{j}^{(i)}\nabla\boldsymbol{\ell}_{j}^{\top}\bar{\mathbf{w}}\), where \(\boldsymbol{\nu}^{(i)}\) are convex combination coefficients by Proposition 1. Denote \(\nabla\boldsymbol{\ell}_{i}^{\prime\top}=\sum_{j=1}^{n}\nu_{j}^{(i)}\nabla\boldsymbol{\ell}_{j}^{\top}\), and \(\mathbf{G}^{\prime}=[\nabla\boldsymbol{\ell}_{1}^{\prime},\ldots,\nabla\boldsymbol{\ell}_{n}^{\prime}]^{\top}\). The term \(K_{\mathbf{\Pi}}^{(1)}\) expands as follows.
\[K_{\mathbf{\Pi}}^{(1)} =\sum_{i=1}^{n}(\nabla\boldsymbol{\ell}_{i}^{\top}\mathbf{w}-\nabla\boldsymbol{\ell}_{i}^{\prime\top}\bar{\mathbf{w}})^{\top}(\nabla\boldsymbol{\ell}_{i}^{\top}\mathbf{w}-\nabla\boldsymbol{\ell}_{i}^{\prime\top}\bar{\mathbf{w}})\] \[=\sum_{i=1}^{n}(\mathbf{w}^{\top}\nabla\boldsymbol{\ell}_{i}\nabla\boldsymbol{\ell}_{i}^{\top}\mathbf{w}-\mathbf{w}^{\top}\nabla\boldsymbol{\ell}_{i}\nabla\boldsymbol{\ell}_{i}^{\prime\top}\bar{\mathbf{w}}-\bar{\mathbf{w}}^{\top}\nabla\boldsymbol{\ell}_{i}^{\prime}\nabla\boldsymbol{\ell}_{i}^{\top}\mathbf{w}+\bar{\mathbf{w}}^{\top}\nabla\boldsymbol{\ell}_{i}^{\prime}\nabla\boldsymbol{\ell}_{i}^{\prime\top}\bar{\mathbf{w}}) \tag{11}\]
Examining \(K_{\mathbf{\Pi}}^{(1)}\) from (11), we see that it effectively replaces half of \(\nabla\boldsymbol{\ell}_{i}\) with \(\nabla\boldsymbol{\ell}_{i}^{\prime}\), a version obtained through weighted gradient averaging. Now let's compare the covariance between \(\nabla\boldsymbol{\ell}_{i}\) and
\(\nabla\mathbf{\ell}^{\prime}_{i}\). Assume that \(\nabla\mathbf{\ell}_{i}\) (\(1\leq i\leq n\)) are i.i.d. with the same covariance matrix \(\Sigma\); then \(\mathbf{G}^{\prime}\) carries equal or less noise than \(\mathbf{G}\). To show this, denote the covariance matrix of each \(\nabla\mathbf{\ell}^{\prime}_{i}\) by
\[\Sigma^{\prime}_{i}=\text{Cov}\bigg{[}\sum_{j=1}^{n}\nu^{(i)}_{j}\nabla\mathbf{ \ell}_{j}\bigg{]}=\sum_{j=1}^{n}[\nu^{(i)}_{j}]^{2}\Sigma.\]
The total variance of each gradient in \(\mathbf{G}^{\prime}\) (i.e., the trace of \(\Sigma^{\prime}_{i}\)) is then
\[\text{trace}(\Sigma^{\prime}_{i})=\text{trace}\bigg{(}\sum_{j=1}^{n}\big{[} \nu^{(i)}_{j}\big{]}^{2}\Sigma\bigg{)}=\sum_{j=1}^{n}\big{[}\nu^{(i)}_{j} \big{]}^{2}\text{trace}(\Sigma)\leq\text{trace}(\Sigma).\]
The last inequality follows from the fact that \(\sum_{j=1}^{n}[\nu^{(i)}_{j}]^{2}\leq 1\), which is a consequence of the Cauchy-Schwarz inequality given that the coefficients \(\nu^{(i)}_{j}\) form a convex combination.
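A quick numerical check of this shrinkage factor, using randomly drawn convex-combination weights purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
nu = rng.dirichlet(np.ones(n), size=n)     # rows are convex-combination coefficients nu^(i)
shrink = (nu ** 2).sum(axis=1)             # trace(Sigma'_i) / trace(Sigma) for each i
print(shrink.max() <= 1.0, shrink.mean())  # True, and well below 1 when the weights are spread out
```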
Originally, the covariance information of all data points is embedded in \(\nabla\mathbf{\ell}_{i}\nabla\mathbf{\ell}^{\top}_{i}\) for \(i=1,2,\ldots,n\). An alternative representation is \(\nabla\mathbf{\ell}^{\prime}_{i}\nabla\mathbf{\ell}^{\prime\top}_{i}\), which prioritizes noise reduction, but sacrifices some covariance information. Both \(\nabla\mathbf{\ell}^{\prime}_{i}\nabla\mathbf{\ell}^{\top}_{i}\) and \(\nabla\mathbf{\ell}_{i}\nabla\mathbf{\ell}^{\prime\top}_{i}\) highlight a trade-off. Notably, both the original covariance \(\nabla\mathbf{\ell}_{i}\nabla\mathbf{\ell}^{\top}_{i}\) and its noise-reduced counterpart \(\nabla\mathbf{\ell}^{\prime}_{i}\nabla\mathbf{\ell}^{\prime\top}_{i}\) are retained in (11).
**Difference From Averaging Prior to Optimization:** Next, we show that such gradient averaging differs from the averaging operation conducted prior to optimization. Let \(\mathbf{G}^{\prime}=[\nabla\mathbf{\ell}^{\prime}_{1},\nabla\mathbf{\ell}^{\prime}_{2 },\ldots,\nabla\mathbf{\ell}^{\prime}_{n}]^{\top}\) such that \(\mathbf{G}^{\prime}\) represents the row-wise convex combination of \(\mathbf{G}\). Approximating the Hessian of the MIQP (2), two scenarios emerge: using \(\mathbf{G}\) that not performing averaging (case 1) and \(\mathbf{G}^{\prime}\) that performs averaging before optimization (case 2).
**Case 1** is indeed the original LR formulation (3). Denote by \(K\) its term that is corresponding to \(K^{(1)}_{\mathbf{\Pi}}\), we have
\[K =(\mathbf{w}-\bar{\mathbf{w}})^{\top}\mathbf{G}^{\top}\mathbf{G} (\mathbf{w}-\bar{\mathbf{w}})\] \[=\sum_{i=1}^{n}(\mathbf{w}^{\top}\nabla\mathbf{\ell}_{i}\nabla\mathbf{ \ell}^{\top}_{i}\mathbf{w}-\mathbf{w}^{\top}\nabla\mathbf{\ell}_{i}\nabla\mathbf{ \ell}^{\top}_{i}\bar{\mathbf{w}}-\bar{\mathbf{w}}^{\top}\nabla\mathbf{\ell}_{i} \nabla\mathbf{\ell}^{\top}_{i}\mathbf{w}+\bar{\mathbf{w}}^{\top}\nabla\mathbf{\ell}_{i }\nabla\mathbf{\ell}^{\top}_{i}\bar{\mathbf{w}}) \tag{12}\]
**Case 2** uses the less-noisy row-wise convex combination matrix \(\mathbf{G}^{\prime}\) instead of \(\mathbf{G}\). Yet, the original covariance \(\nabla\mathbf{\ell}_{i}\nabla\mathbf{\ell}^{\top}_{i}\) is lost: Denote by \(K^{\prime}\) the corresponding term, and we have
\[K^{\prime}=\sum_{i=1}^{n}(\mathbf{w}^{\top}\nabla\mathbf{\ell}^{\prime}_{i}\nabla \mathbf{\ell}^{\prime\top}_{i}\mathbf{w}-\mathbf{w}^{\top}\nabla\mathbf{\ell}^{\prime }_{i}\nabla\mathbf{\ell}^{\prime\top}_{i}\bar{\mathbf{w}}-\bar{\mathbf{w}}^{\top} \nabla\mathbf{\ell}^{\prime}_{i}\nabla\mathbf{\ell}^{\prime\top}_{i}\mathbf{w}+\bar{ \mathbf{w}}^{\top}\nabla\mathbf{\ell}^{\prime}_{i}\nabla\mathbf{\ell}^{\prime\top}_{i }\bar{\mathbf{w}}) \tag{13}\]
Inspecting the expressions, it can be observed that \(K^{(1)}_{\mathbf{\Pi}}\) (also \(K^{(2)}_{\mathbf{\Pi}}\)) strikes a balance between \(K\) and \(K^{\prime}\). There are two notable extreme cases for \(\mathbf{\Pi}\):
1. \(\mathbf{\Pi}=\mathrm{diag}(1/n)\). This corresponds to the LR formulation, as detailed in Section 2. A smaller value of \(\varepsilon\) steers the optimization in this direction.
2. \(\mathbf{\Pi}=(1/n^{2})\mathbf{1}\cdot\mathbf{1}^{\top}\). This arises when \(\varepsilon\to\infty\), meaning the entropy term holds sway in the OT optimization. Here, mutual information is minimized to ensure an even contribution from data points in the convex combination. Both \(x^{\prime}_{i}\) and \(y^{\prime}_{i}\) are the arithmetic means of their respective sets, and all \(\nabla\mathbf{\ell}^{\prime}_{i}\) are equivalent to the averaged gradient over the \(n\) points. Importantly, the original covariance remains intact even in this edge case.
As \(n\) grows indefinitely, the empirical OT formulation from (6) approaches its continuous counterpart given by (5). Intuitively, a large dataset of high-quality training samples makes the empirical Fisher a close approximation to the true Fisher. In such situations, \(\varepsilon\) is set to zero. Brenier's theorem (Peyre, 2019) then implies that the OT plan becomes a monotone map for costs given by squared Euclidean distances. This means \(\mathbf{\Pi}\) tends towards \(\mathrm{diag}(1/n)\). Consequently, the Wasserstein distance formulation reduces to the Euclidean distance formulation, delivering optimal performance with ample data.
An advantage of employing the EWR formulation is its inherent capability of gradient averaging. This approach negates the need to manually determine the convex combination coefficients or resort to density estimation to pinpoint the nearest gradient neighbors for averaging. Importantly, this seamless trade-off has an advantage over using Euclidean distance with gradient averaging performed prior to optimization. The reason is that the original covariance information will inevitably be lost in the formulation (13), irrespective of the chosen averaging method.
**Sample Complexity** of \(W_{2}^{2}(\mu,\nu)\) is narrowed to \(O(1/\sqrt{n})\) from \(O(1/n^{\frac{1}{4}})\) by the entropic regularization term. Please see Appendix A.2 for details.
## 4 Algorithm Design
**Algorithmic Framework**. The algorithm addresses the network pruning problem defined in (5). Drawing inspiration from (Chen et al., 2022), the algorithm incrementally adjusts the sparsity of the weights vector \(\mathbf{w}\) by using a descending sequence of non-zero elements \(k_{0},\ldots,k_{T}\). During each sparsity level, the weights \(\mathbf{w}\) and the transportation plan \(\mathbf{\Pi}\) (can be obtained with efficient algorithms; see Appendix A.3) are refined iteratively.
```
Input: Pre-pruning weights \(\bar{\mathbf{w}}\), sparsity \(k\), regularization parameters \(\lambda\), \(\varepsilon\), maximum rounds \(T\), batches \(\mathcal{B}_{1},\ldots,\mathcal{B}_{T}\), optimization step size \(\tau>0\)
Output: Post-pruning weights \(\mathbf{w}\), satisfying \(\|\mathbf{w}\|_{0}\leq k\)
1: Set \(k_{0},k_{1},\ldots,k_{T}\) as a descending sequence, with \(k_{0}<p\) and \(k_{T}=k\).
2: \(\mathbf{w}^{(0)}\leftarrow\bar{\mathbf{w}}\)
3: for \(t\gets 0,1,\ldots,T\) do
4:   Compute \(\mathbf{G}=[\nabla\boldsymbol{\ell}_{1}(\bar{\mathbf{w}}),\ldots,\nabla\boldsymbol{\ell}_{n}(\bar{\mathbf{w}})]^{\top}\) with batch \(\mathcal{B}_{t}\)
5:   \(\mathbf{x},\mathbf{y}\leftarrow\mathbf{G}\mathbf{w}^{(t)},\mathbf{G}\bar{\mathbf{w}}\)
6:   Compute the pairwise Euclidean distance matrix \(\mathbf{C}\) between \(\mathbf{x}\) and \(\mathbf{y}\)
7:   Compute the OT plan \(\mathbf{\Pi}^{(t)}\) (see Appendix A.3)
8:   \(\nabla Q\leftarrow\mathbf{G}^{\top}(\mathbf{\Pi}(\mathbf{G}\mathbf{w}^{(t)}-\mathbf{G}\bar{\mathbf{w}}))+\lambda(\mathbf{w}^{(t)}-\bar{\mathbf{w}})\)
9:   \(\mathbf{w}^{(t+\frac{1}{2})}\leftarrow\mathbf{w}^{(t)}-\tau\nabla Q\)
10:  \(\mathbf{w}^{(t+1)}\leftarrow\) keep the \(k_{t}\) components of \(\mathbf{w}^{(t+\frac{1}{2})}\) with the largest absolute values; set the others to zero
11:  \(\bar{\mathbf{w}}\leftarrow\mathbf{w}^{(t+1)}\)
12: end for
```
**Algorithm 1** Entropic Wasserstein Regression Pruning
**Weights Optimization**. The weights \(\mathbf{w}\) are optimized using the stochastic gradient descent (SGD) paired with the iterative hard threshold (IHT) algorithm. We use \(\nabla Q\) to represent the derivative of \(Q(\mathbf{w})\) for brevity, with its comprehensive derivation in Appendix A.4. The expression is
\[\nabla Q=\mathbf{G}^{\top}(\mathbf{\Pi}(\mathbf{G}\mathbf{w}-\mathbf{G}\bar{ \mathbf{w}}))+\lambda(\mathbf{w}-\bar{\mathbf{w}}). \tag{14}\]
Following the weight updates driven by SGD (as seen in line 9 of Algorithm 1), the IHT method is applied. Here, the \(k_{t}\) components of \(\mathbf{w}\) with the largest magnitudes are retained, while the remaining are set to zero, ensuring adherence to the sparsity criteria.
A vital component of the optimization process is the choice of the stepsize \(\tau\) (referenced in line 9). Although a straightforward approach might be to set \(\tau=\frac{1}{L}\) (where \(L\) denotes the Lipschitz constant of \(Q\)), better performance can be achieved when the stepsize is optimized using the methodology proposed in (Chen et al., 2022, Algorithm 2). For the quadratic function \(Q\), the Lipschitz constant \(L\) is given by \(L=n\lambda+\left\lVert G\right\rVert_{op}^{2}\), where \(\left\lVert\cdot\right\rVert_{op}\) indicates the foremost singular value.
Line 10 in Algorithm 1 employs the IHT method that is commonly used in sparse learning, which together with line 9, forms a projected gradient descent algorithm. It finds a sparse representation of the updated gradient in line 9. Intuitively, IHT keeps the dominant model weights and essentially preserves the most impactful aspects of the to-be-trimmed model. Although there exist alternative strategies for refining the IHT solution--including active set updates, coordinate descent, and the Woodbury formula's back-solving--a discussion on these falls outside the scope of this paper. For in-depth exploration, especially with respect to the specialized case described in (7), one can consult (Bhatia et al., 2015; Chen and Mazumder, 2020; Hazimeh and Mazumder, 2020; Benbaki et al., 2023).
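Putting the pieces together, one iteration of Algorithm 1 can be sketched as below. It reuses the hypothetical `sinkhorn_plan` helper sketched in Section 2, fixes uniform marginals, and omits the stepsize search, so it illustrates the update rather than reproducing the authors' implementation.

```python
import numpy as np

def ewr_prune_step(G, w, w_bar, k_t, lam, eps, tau):
    """One SGD + iterative-hard-thresholding step of Algorithm 1 (a sketch)."""
    n = G.shape[0]
    x, y = G @ w, G @ w_bar
    C = (x[:, None] - y[None, :]) ** 2                    # pairwise squared distances
    Pi = sinkhorn_plan(C, np.full(n, 1 / n), np.full(n, 1 / n), eps)
    grad = G.T @ (Pi @ (x - y)) + lam * (w - w_bar)       # Eq. (14)
    w_half = w - tau * grad                               # gradient step
    keep = np.argsort(np.abs(w_half))[-k_t:]              # IHT: keep the k_t largest magnitudes
    w_new = np.zeros_like(w_half)
    w_new[keep] = w_half[keep]
    return w_new
```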
## 5 Numerical Results
Our method is compared with several existing SoTA methods including MP (magnitude pruning (Mozer and Smolensky, 1989)), WF (WoodFisher (Singh and Alistarh, 2020)), CBS (Combinatorial Brain Surgeon (Yu et al., 2022)), and LR (i.e. the sparse LR formulation adopted by (Chen et al., 2022)). We refer to our proposed method as EWR (i.e. sparse entropic Wasserstein regression). Note that LR is a special instance of EWR, with \(\mathbf{\Pi}=\mathrm{diag}(1/n)\). All the methods are benchmarked on pre-trained neural networks: MLPNet (30K parameters) trained on MNIST (LeCun et al., 1998), ResNet20 (200K parameters) and ResNet50 (25M parameters) (He et al., 2016) trained on CIFAR10 (Krizhevsky et al., 2009), and MobileNetV1 (Howard et al., 2017) (4.2M parameters) trained on ImageNet (Deng et al., 2009). The experimental setup for reproducibility1 is detailed in Appendix A.5. We provide more experimental results in Appendix A.6-A.9.
Footnote 1: The code is available at [https://anonymous.4open.science/r/Entropic-Wasserstein-Pruning-F222](https://anonymous.4open.science/r/Entropic-Wasserstein-Pruning-F222)
**Model Accuracy Performance Benchmarking**. Table 1 compares different networks across various sparsity levels, utilizing different methods. MLPNet's performance on MNIST is consistent
\begin{table}
\begin{tabular}{c|c|c c c c c} \hline \hline Network & Sparsity & MP & WF & CBS & LR (\(\text{EWR}_{\mathbf{\Pi}=\mathbf{\Pi}/n}\)) & EWR (proposed) \\ \hline \multirow{8}{*}{MLPNet on MNIST (93.97\%)} & 0.5 & 93.93 & 94.02 & 93.96 & **95.26** (\(\pm\)0.03) & **95.24** (\(\pm\)0.03) \\ & 0.6 & 93.78 & 93.82 & 93.96 & **95.13** (\(\pm\)0.02) & **95.13** (\(\pm\)0.01) \\ & 0.7 & 93.62 & 93.77 & 93.98 & **94.93** (\(\pm\)0.03) & **95.05** (\(\pm\)0.04) \\ & 0.8 & 92.89 & 93.57 & 93.90 & **94.82** (\(\pm\)0.04) & **94.84** (\(\pm\)0.03) \\ & 0.9 & 90.30 & 91.69 & 93.14 & **94.32** (\(\pm\)0.05) & **94.30** (\(\pm\)0.05) \\ & 0.95 & 83.64 & 85.54 & 88.92 & **92.82** (\(\pm\)0.06) & **92.86** (\(\pm\)0.05) \\ & 0.95\({}^{+\sigma}\) & - & - & - & 90.11 (\(\pm\)0.08) & **90.50** (\(\pm\)0.07) \\ & 0.98 & 32.25 & 38.26 & 55.45 & 84.43 (\(\pm\)0.10) & **85.71** (\(\pm\)0.09) \\ & 0.98\({}^{+\sigma}\) & - & - & - & 82.12 (\(\pm\)0.11) & **83.69** (\(\pm\)0.10) \\ \hline \multirow{8}{*}{ResNet20 on CIFAR10 (91.36\%)} & 0.5 & 88.44 & 90.23 & 90.58 & **92.06** (\(\pm\)0.04) & **92.04** (\(\pm\)0.03) \\ & 0.6 & 85.24 & 87.96 & 88.88 & **91.98** (\(\pm\)0.09) & **91.98** (\(\pm\)0.09) \\ & 0.7 & 78.79 & 81.05 & 81.84 & 91.09 (\(\pm\)0.10) & **91.89** (\(\pm\)0.10) \\ & 0.8 & 54.01 & 62.63 & 51.28 & 89.00 (\(\pm\)0.12) & **90.15** (\(\pm\)0.09) \\ & 0.9 & 11.79 & 11.49 & 13.68 & 87.63 (\(\pm\)0.11) & **88.82** (\(\pm\)0.10) \\ & 0.95 & - & - & - & 80.25 (\(\pm\)0.17) & **81.33** (\(\pm\)0.15) \\ & 0.95\({}^{+\sigma}\) & - & - & - & 77.37 (\(\pm\)0.18) & **79.05** (\(\pm\)0.16) \\ & 0.98 & - & - & - & 68.15 (\(\pm\)0.27) & **69.21** (\(\pm\)0.24) \\ & 0.98\({}^{+\sigma}\) & - & - & - & 65.04 (\(\pm\)0.27) & **68.01** (\(\pm\)0.25) \\ \hline \multirow{8}{*}{ResNet50 on CIFAR10 (92.78\%)} & 0.95 & - & - & - & 83.75 (\(\pm\)0.14) & **84.96** (\(\pm\)0.15) \\ & 0.95\({}^{+\sigma}\) & - & - & - & 82.34 (\(\pm\)0.16) & **84.92** (\(\pm\)0.17) \\ & 0.98 & - & - & - & 81.04 (\(\pm\)0.14) & **82.85** (\(\pm\)0.20) \\ & 0.98\({}^{+\sigma}\) & - & - & - & 80.11 (\(\pm\)0.23) & **82.94** (\(\pm\)0.22) \\ \hline \multirow{8}{*}{MobileNetV1} & 0.3 & 71.60 & **71.88** & **71.87** & 71.14 (\(\pm\)0.08) & **71.87** (\(\pm\)0.05) \\ & 0.4 & 69.16 & 71.15 & **71.45** & 71.12 (\(\pm\)0.10) & **71.44** (\(\pm\)0.07) \\ \cline{1-1} & 0.5 & 62.61 & 68.91 & 70.21 & 70.12 (\(\pm\)0.13) & **71.12** (\(\pm\)0.18) \\ \cline{1-1} & 0.6 & 41.94 & 60.90 & 66.37 & 70.05 (\(\pm\)0.22) & **70.92** (\(\pm\)0.18) \\ \cline{1-1} & 0.7 & 6.78 & 29.36 & 55.11 & 68.15 (\(\pm\)0.17) & **69.26** (\(\pm\)0.13) \\ \cline{1-1} & 0.8 & 0.11 & 0.24 & 16.38 & 65.72 (\(\pm\)0.19) & **66.82** (\(\pm\)0.14) \\ \cline{1-1} & 0.8\({}^{+\sigma}\) & - & - & - & 60.29 (\(\pm\)0.18) & **63.62** (\(\pm\)0.15) \\ \cline{1-1} & 0.9 & - & - & - & 47.65 (\(\pm\)0.15) & **49.43** (\(\pm\)0.13) \\ \cline{1-1} & 0.9\({}^{+\sigma}\) & - & - & - & 44.55 (\(\pm\)0.16) & **47.98** (\(\pm\)0.16) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Model Pruning Accuracy Benchmarking. Five runs are taken for LR and EWR, with the mean and 95% confidence interval (in the brackets) reported. The data of MP, WF, and CBS are copied from (Yu et al., 2022). The superscript “\(+\sigma\)” indicates that 20% of data is with noise. Bold texts imply the best performance, with \(0.1\) percentage as tolerance in difference. The sparsity column is the target sparsity.
across different sparsity levels, with both LR and the proposed EWR method showing superior performance. The advantages of EWR over the others are reflected by the three more challenging tasks ResNet20 and ResNet50 on CIFAR10 and MobileNetV1 on ImageNet, especially in the presence of noisy gradients. In summary, the proposed EWR method consistently outperforms or matches other methods. The LR method performs well at lower sparsity levels but is surpassed otherwise.
**Robustness with Noisy Gradient**. From Section 3, EWR differs from LR in terms of gradient noise reduction achieved by solving the OT problem to obtain a group of non-trivial data pair weighting coefficients. Hence, LR that has the transportation plan \(\mathbf{\Pi}\) fixed to \(\mathrm{diag}(1/n)\) naturally serves as a baseline for evaluating the effectiveness of such optimization in terms of robustness against noise. In two noisy scenarios, 10%, and 25%, we evaluate loss at noise levels of \(\sigma\) and \(2\sigma\) across varying sparsity. Tables 2 and 3 contrast the loss difference between LR and EWR. EWR consistently outperforms LR in both ResNet20 and MobileNetV1, most notably in noisy conditions and at higher sparsity. The peak performance difference is 8.13% favoring EWR on MobileNetV1 at 0.75 sparsity with 25% noise. Hence, EWR outperforms LR.
\begin{table}
\begin{tabular}{c|c c c c c c} \hline \hline \multirow{2}{*}{Sparsity} & \multicolumn{5}{c}{10\% Noisy Data} \\ \cline{2-7} & \multicolumn{2}{c}{Noise = \(\sigma\)} & \multicolumn{2}{c}{Noise = \(2\sigma\)} \\ \cline{2-7} & LR & EWR & Diff & LR & EWR & Diff \\ \hline
0.95 & 2.83 (\(\pm\)0.02) & **2.75** (\(\pm\)0.02) & **2.87**\% & 2.86 (\(\pm\)0.02) & **2.74** (\(\pm\)0.01) & **4.35**\% \\
0.84 & 1.58 (\(\pm\)0.01) & **1.54** (\(\pm\)0.01) & **2.73**\% & 1.62 (\(\pm\)0.03) & **1.54** (\(\pm\)0.02) & **5.15**\% \\
0.74 & 0.66 (\(\pm\)0.00) & **0.64** (\(\pm\)0.00) & **3.03**\% & 0.66 (\(\pm\)0.00) & **0.65** (\(\pm\)0.00) & **1.32**\% \\
0.63 & 0.35 (\(\pm\)0.00) & 0.35 (\(\pm\)0.00) & 0.00\% & 0.35 (\(\pm\)0.00) & 0.35 (\(\pm\)0.00) & 0.85\% \\ \hline \multirow{2}{*}{Sparsity} & \multicolumn{5}{c}{25\% Noisy Data} \\ \cline{2-7} & \multicolumn{2}{c}{Noise = \(\sigma\)} & \multicolumn{2}{c}{Noise = \(2\sigma\)} \\ \cline{2-7} & LR & EWR & Diff & LR & EWR & Diff \\ \hline
0.95 & 2.87 (\(\pm\)0.02) & **2.77** (\(\pm\)0.01) & **3.50**\% & 2.89 (\(\pm\)0.02) & **2.79** (\(\pm\)0.02) & **3.82**\% \\
0.84 & 1.72 (\(\pm\)0.02) & **1.65** (\(\pm\)0.02) & **4.07**\% & 1.76 (\(\pm\)0.02) & **1.69** (\(\pm\)0.02) & **4.05**\% \\
0.74 & 0.67 (\(\pm\)0.01) & 0.67 (\(\pm\)0.00) & 0.49\% & 0.68 (\(\pm\)0.00) & **0.67** (\(\pm\)0.00) & **1.55**\% \\
0.63 & 0.36 (\(\pm\)0.00) & 0.36 (\(\pm\)0.00) & 0.00\% & 0.35 (\(\pm\)0.00) & 0.35 (\(\pm\)0.00) & 0.00\% \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison of testing loss values (no fine-tuning) for ResNet20. The result is averaged over 25 runs. The 90% confidence interval is reported. The target sparsity is set to be 0.95.
\begin{table}
\begin{tabular}{c|c c c c c c} \hline \hline \multirow{2}{*}{Sparsity} & \multicolumn{5}{c}{10\% Noisy Data} \\ \cline{2-7} & \multicolumn{2}{c}{Noise = \(\sigma\)} & \multicolumn{2}{c}{Noise = \(2\sigma\)} \\ \cline{2-7} & LR & EWR & Diff & LR & EWR & Diff \\ \hline
0.75 & 4.54 (\(\pm\)0.06) & **4.34** (\(\pm\)0.06) & **4.41**\% & 4.62 (\(\pm\)0.07) & **4.40** (\(\pm\)0.06) & **5.02**\% \\
0.63 & 2.53 (\(\pm\)0.04) & **2.46** (\(\pm\)0.03) & **2.61**\% & 2.56 (\(\pm\)0.04) & **2.48** (\(\pm\)0.04) & **3.23**\% \\
0.53 & 1.64 (\(\pm\)0.02) & **1.56** (\(\pm\)0.02) & **5.16**\% & 1.66 (\(\pm\)0.03) & **1.56** (\(\pm\)0.02) & **6.01**\% \\
0.42 & 1.30 (\(\pm\)0.00) & 1.30 (\(\pm\)0.00) & 0.00\% & 1.30 (\(\pm\)0.00) & 1.30 (\(\pm\)0.00) & 0.00\% \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparison of testing loss values (no fine-tuning) for MobileNetV1. The result is averaged from 10 runs. The 90% confidence interval is reported. The target sparsity is set to be 0.75.
## 6 Conclusions and Future Impact
The paper offers a novel formulation based on EWR, which strikes a balance between covariance information preservation and noise reduction. The work suggests promising avenues for applications in large-scale model compression, though it may require further empirical validation and exploration of practical implementations.
|
2301.04081
|
Two Different Weak Modulations in ab-type RR Lyrae Variable V838 Cyg,
and Potential Influence of Metal Abundance on Blazhko Modulation
|
Noting the weakest modulation and relatively high metal abundance of the
ab-type RR Lyrae star V838 Cyg, we collected the photometric data of this star
from several sky surveys to carry out an in-depth analysis. The O-C diagram
shows that the pulsation period of V838 Cyg increases linearly over a long
timescale. In a reanalysis of the high-precision Kepler data, we confirmed the
modulation with a period of 59.45\pm0.07 days found by Benko et al., (2014),
and also found an additional weak modulation with a longer period (840\pm21
days). After a series of analyses, we incline to the view that the mechanisms
causing the two modulations are different: the former is more similar to the
typical Blazhko effect, while the mechanism leading to the latter may be an
extrinsic factor. We also collected and compared the modulation and physical
parameters of other Blazhko RR Lyrae stars from several works in the
literature, and find that there is a potential negative correlation between the
modulation amplitude (or upper limit of amplitude) and the metal abundance. We
infer that the relatively high metal abundance will promote convection in the
outer stellar atmosphere, and then inhibit those factors (turbulence, shock
wave, etc.) that may cause Blazhko modulation. Future observations and research
work can be carried out with reference to this viewpoint. We also introduce the
moire effects that appear in the Kepler long-cadence light curves and their
possible interference in the previous analyses.
|
L. -J. Li, S. -B. Qian, X. -D. Shi, L. -Y. Zhu
|
2023-01-10T17:13:14Z
|
http://arxiv.org/abs/2301.04081v2
|
Two Different Weak Modulations in ab-type RR Lyrae Variable V838 Cyg, and Potential Influence of Metal Abundance on Blazhko Modulation
###### Abstract
Noting the weakest modulation and relatively high metal abundance of the ab-type RR Lyrae star V838 Cyg, we collected the photometric data of this star from several sky surveys to carry out an in-depth analysis. The \(O-C\) diagram shows that the pulsation period of V838 Cyg increases linearly over a long timescale. In a reanalysis of the high-precision \(Kepler\) data, we confirmed the modulation with the period of \(59.45\pm 0.04\) days found earlier, and also found an additional weak modulation with a longer period (\(840\pm 21\) days). After a series of analyses, we incline to the view that the mechanisms causing the two modulations are different: the former is more similar to the typical Blazhko effect, while the mechanism leading to the latter may be an extrinsic factor. We also collected and compared the modulation and physical parameters of other Blazhko RR Lyrae stars from several works in the literature, and find that there is a potential negative correlation between the modulation amplitude (or upper limit of amplitude) and the metal abundance. We infer that a relatively high metal abundance will promote convection in the outer stellar atmosphere, and then inhibit those factors (turbulence, shock waves, etc.) that may cause Blazhko modulation. Future observations and research work can be carried out with reference to this viewpoint.
keywords: techniques: photometric - stars: fundamental parameters - stars: variables : RR Lyrae - stars: individual : V838 Cyg - space vehicles:\(Kepler\)
## 1 Introduction
RR Lyrae stars are short-period pulsating variables in the horizontal-branch (HB) evolutionary stage, generally with pulsation periods of 0.2 - 1.2 days and pulsation amplitudes of 0.2 - 1 mag in the \(V\) band. Because of these pulsation characteristics, they are easy to identify in observations. In fact, RR Lyrae stars were discovered more than 100 years ago, and have a long history of research and systematic achievements (Smith, 2004). The recently released Gaia DR3 catalogue provides up to 270 thousand RR Lyrae stars located in the Milky Way and nearby galaxies (Gaia Collaboration, 2022). Due to their wide distribution and large numbers, RR Lyrae stars are often used as probes to study the chemical, evolutionary and dynamical properties of old low-mass stars in the Milky Way (Smith, 2004).
However, in the field of RR Lyrae stars, one topic has always been an important focus of observation and research: the Blazhko effect (Blazko, 1907) and the dominant physical mechanism behind it. Most RR Lyrae stars are single-mode pulsating variables: ab-type RR Lyrae stars (RRab stars) dominated by the fundamental mode or c-type RR Lyrae stars (RRc stars) dominated by the first overtone mode. In observations, the light curves of some RR Lyrae stars show good stability, but a considerable number of stars show phenomena similar to the amplitude and frequency modulations in radio technology (Benko et al., 2011), which are specifically manifested as periodic or quasi-periodic changes of pulsation parameters over time with periods from a few days to hundreds of days (Kolenberg, 2008). With the acquisition of ground-based and space-based photometric data in recent years (especially the \(Kepler\) mission, Borucki et al., 2010;
Koch et al., 2010), some discoveries have been made from the perspective of observation, including the conclusion that about 40% - 50% of the Galactic RRab stars exhibit the Blazhko effect (Jurcsik et al., 2009; Nemec et al., 2013; Prudil & Skarka, 2017). Of course, there are also studies suggesting that more than 90% of RR Lyrae stars have additional components in the frequency spectrum of the light curves (Kovacs, 2018, 2021). This difference in the incidence rate should be caused by different definitions of the Blazhko effect. As a macroscopic phenomenon, stellar pulsation is affected by a variety of factors throughout the process. When the observation accuracy is high enough, all RR Lyrae stars should be found to have additional oscillation and modulation components. Our viewpoint is to limit the Blazhko effect to those modulations that exhibit periodicity or quasi-periodicity. Otherwise, arbitrarily expanding the definition will cause the research to lose its pertinence.
Several theoretical models have been proposed to explain the Blazhko effect, but no consensus has been reached so far (Bryant, 2015). Perhaps it is valuable to find more clues and summarize more patterns from observations (Le Borgne et al., 2012; Skarka et al., 2020). Some works have been done on the statistical analysis of pulsation parameters and Blazhko modulation parameters. Jurcsik et al. (2005b) studied the relationship between the modulation amplitude and pulsation frequency of Blazhko stars, and found that both the modulation amplitude and the pulsation amplitude increased with the decrease of the pulsation period. Benko et al. (2014) and Benko & Szabo (2015a) found a positive correlation between the modulation period and modulation amplitude of the Blazhko RR Lyrae stars.
Metal abundance is one of the most important parameters in stellar evolution. In the study of HB stars in globular clusters, metal abundance is the first factor affecting HB morphology (the "first parameter", Sandage & Wallerstein, 1960). In the field of RR Lyrae stars, metal abundance is also very important. It is associated with absolute magnitude: metal-poor RR Lyrae stars are brighter and more massive (Bono et al., 2007; Sandage, 2010). Metal abundance is also related to pulsation characteristics (i.e., \(P_{\rm pul}\)-\(\phi_{31}\)-[Fe/H] relation, see Mullen et al., 2021, 2022, and references therein). There are also some works accumulated on whether there is a relationship between metal abundance and the Blazhko effect. Szeidl (1976) found that the proportion of Blazhko RRab stars increased with \(\Delta\)S, and pointed out that the Blazhko stars generally have rather low metal abundance. Smith (2004) compared the distribution of metal abundance in Blazhko RRab stars with the distribution of bright RRab stars, and concluded that "No trend in prevalence of the Blazhko effect as a function of metallicity is evident in this comparison". Jurcsik et al. (2005a) studied the Blazhko RRab stars in the Galactic bulge and the large Magellanic cloud, and found that the photometric metal abundance distributions of non-Blazhko and Blazhko variables are similar, and the difference in metal abundance cannot explain the modulation incidence rate that occurs in the two systems. Based on the studies on RR Lyrae stars in \(Kepler\) field, Benko & Szabo (2015b) have noticed that there seems to be some relationship between the metal abundance and the amplitude of modulation, but did not make further research. Skarka et al. (2020) found that the possible range of modulation amplitudes decreases with increasing pulsation periods and deduced that the Blazhko effect was suppressed in more metal-poor RR Lyrae stars. It can be seen that from the perspective of observation, researchers do not have a unified view and understanding of the correlation between metal abundance and Blazhko modulation.
V838 Cyg had been examined at the Maria Mitchell Observatory over a span of about 50 years (Sands, 1978). A total of 70 times of light maximum were available in the GEOS database1. Based on these data and \(Kepler\) data, Nemec et al. (2011) obtained the long-term period change rate of V838 Cyg: 0.05(4) d Myr\({}^{-1}\). At first, V838 Cyg was considered a non-Blazhko RRab star (Benko et al., 2010; Nemec et al., 2011), but Nemec et al. (2013) found that this variable star exhibited amplitude and frequency modulations with the lowest range, and identified it as a Blazhko star. In addition, they further pointed out that there is a variation in the modulation period, from 47, 54 to 64 days. This weak amplitude modulation with a period of 59.5 d was confirmed by an independent method (Benko et al., 2014). In addition, the metal abundance of V838 Cyg is the largest among all Blazhko RRab stars in the \(Kepler\) field ([M/H] = -1.01, Nemec et al., 2013). Noting these characteristics, we use the photometric data collected from different sky surveys to conduct a targeted analysis of V838 Cyg, and try to find some relevant indications. In Section 2, we present the photometric data from different sky surveys, including ground-based and space-based data. We describe the detailed analysis and discussion in Sections 3 and 4, respectively. In Section 5, we present our summary. We notice that the light curves of V838 Cyg show a Moire pattern, which may influence the analysis, so the possible impact of this effect is discussed in the Appendix.
Footnote 1: [http://rr-lyr.irap.omp.eu/dhrr/](http://rr-lyr.irap.omp.eu/dhrr/)
## 2 Sky Surveys
We collect the photometric data of V838 Cyg from several sky survey projects, including the ground data provided by SWASP and ASAS-SN, and the space data by \(Kepler\) and TESS. The former is mainly used to study the long-term change of pulsation period, while the \(Kepler\) data with high accuracy is used to study the weak modulation of V838 Cyg (Nemec et al., 2013; Benko et al., 2014). In addition, the GEOS RR Lyr database provides 70 times of light maximum for V838 Cyg (Le Borgne et al., 2007), and we use those published ones for \(O-C\) analysis. We introduce these data from different sources in the following subsections.
### Swasp
SWASP2 (SuperWASP, Super Wide Angle Search for Planets) is one of the early ground-based sky survey projects searching for exoplanets. It consists of two robotic observatories, located on the island of La Palma and in South Africa, respectively, allowing the observations to cover both the northern and southern hemispheres of the sky. The observing equipment consists of 8 wide-angle cameras, and the effective aperture of each lens
is about 20 cm (see Butters et al. 2010 for more details). The project was carried out from 2004 to 2008, used the transiting method to find exoplanets, and was one of the important exploration projects in the related fields before the \(Kepler\) space mission (Smith & WASP Consortium, 2014). In addition, research on variable stars and other fields has also benefited from SWASP (e.g. Norton et al., 2011; Smalley et al., 2011; Skarka, 2014; Bernhard et al., 2015; Liska et al., 2016; Rigon et al., 2017). For V838 Cyg, SWASP provides more than 7600 photometric data points from different cameras. Based on the accuracy of the data, we use the data observed by cameras 141 and 144 for analysis.
It is worth mentioning that in the pre-analysis, we found a low-frequency peak in the Fourier spectrum of the light curves, which indicates the existence of a modulation component. Figure 1 shows the light curves observed by camera 141, folded on the modulation period of 28.62 d. This value is close to the period of the lunar revolution (27.3 d). We found that the observations at phase 1 (see the red box in Figure 1; the observation times are HJD 2454607.6, 2454635.5 and 2454664.5) exactly correspond to the full moon. Therefore, we preliminarily believe that the modulation phenomenon is caused by the influence of moonlight. The data obtained from camera 144 show a similar phenomenon, but the modulation period is about half of the lunar revolution period (14.68 d).
### Asas-Sn
As its name suggests, the science goal of ASAS-SN3 (All-Sky Automated Survey for Supernovae) is to search for bright supernovae and transients (Shappee et al., 2014). This project now has 24 telescopes distributed around the globe, surveying the entire visible sky down to about 18th magnitude. It provides 384 photometric data points in the \(V\) band for V838 Cyg (Jayasinghe et al., 2018). We use these data to study the long-term change of the pulsation period (see Subsection 3.2).
Footnote 3: [https://www.astronomy.ohio-state.edu/asassn](https://www.astronomy.ohio-state.edu/asassn)
### \(Kepler\) and TESS
The goal of the \(Kepler\) mission is to discover exoplanets by transit method (Borucki et al., 2010), and it opened up a new era in the related field. The number of exoplanets discovered by \(Kepler\) project has reached thousands4. This mission also greatly promoted the research in the field of RR Lyrae stars (Plachy & Szabo, 2021). Based on \(Kepler\) data, V838 Cyg has been studied by Benko et al. (2010), Nemec et al. (2011, 2013), and Benko et al. (2014). In the present paper, we use the rectified data (Quarter 1-16) provided by Benko et al. (2014) and the Quarter 17 LC data taken from MAST (Barbara A. Mikulski Archive for Space Telescopes5) for analysis.
Footnote 4: [http://exoplanet.eu/](http://exoplanet.eu/)
Footnote 5: [https://mast.stsci.edu/portal/Mashup/Clients/Mast/](https://mast.stsci.edu/portal/Mashup/Clients/Mast/)
Similar to \(Kepler\), the goal of TESS (Transiting Exoplanet Survey Satellite) is to search exoplanets near the solar system by using the transit method (Ricker et al., 2015). However, due to the small aperture size of 10 cm, its resolution is very low, only 21 arcseconds/pixel6. Those faint targets, such as V838 Cyg, are easily affected by nearby starlight. We download the \(5\times 5\) pixel image cutouts of 5 sectors (Brasseur et al., 20197), and obtain the light curves of V838 Cyg in flux by using the following equation:
Footnote 6: [https://teess.mit.edu/science/](https://teess.mit.edu/science/)
Footnote 7: [https://mast.stsci.edu/teescut/](https://mast.stsci.edu/teescut/)
\[\Delta flux(t)=\frac{1}{9}\sum\limits_{inner=1}^{9}flux\_inner(t)-\frac{1}{16}\sum\limits_{outer=1}^{16}flux\_outer(t), \tag{1}\]
where \(flux\_inner\) represents the flux in each of the central \(3\times 3\) pixels, and \(flux\_outer\) represents the flux in each of the 16 pixels around the central 9 pixels. It should be mentioned that the flux amplitude of the light curve is not the true amplitude. We use these data to determine the times of light maximum and minimum for \(O-C\) analysis.
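A minimal sketch of Equation (1), assuming a stack of \(5\times 5\) pixel cutout frames is already loaded as a NumPy array (the TESScut download itself is not shown), could look as follows.

```python
import numpy as np

def delta_flux(cutouts):
    """Eq. (1): mean flux of the central 3x3 pixels minus the mean flux of the
    16 surrounding border pixels, for each frame of a (n_frames, 5, 5) stack."""
    cutouts = np.asarray(cutouts, dtype=float)
    inner = cutouts[:, 1:4, 1:4].reshape(len(cutouts), -1)   # 9 central pixels
    total = cutouts.reshape(len(cutouts), -1).sum(axis=1)
    outer = total - inner.sum(axis=1)                        # sum over 16 border pixels
    return inner.mean(axis=1) - outer / 16.0

# toy example: 3 frames of a flat background plus a varying central source
frames = np.ones((3, 5, 5)) * 100.0
frames[:, 2, 2] += np.array([50.0, 60.0, 55.0])
print(delta_flux(frames))   # relative light curve in flux units
```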
## 3 Analyses
The analysis methods we adopt are mainly the \(O-C\) method, Fourier spectrum method and corresponding research methods for modulation (Le Borgne et al., 2012; Shibahashi & Kurtz, 2012), while the objects of study are those \(O-C\) values, pulsation parameters and Fourier coefficients, etc. We introduce the method and process of obtaining these parameters and further analyses in the following subsections.
### data processing and pre-analysis
In the \(O-C\) analysis of pulsating variable stars, most scholars choose the light maximum as the reference phase, and calculate the times of the corresponding phase. The common method is to select the observation data near the light maximum phase, fit the data with a polynomial, and then obtain the times of light maximum by using the calculated first derivatives (Sterken, 2005). This method is applied to the SWASP data: we use a 5th-order polynomial as the fitting formula and determine 25 times of light maximum (see Table 1). However, the time resolutions of the photometric data from the other sky surveys are lower, so this method is no longer suitable. Therefore, we use a method similar to the short-time Fourier transform to obtain the times of light maximum and minimum. We first divide the light curves into many small segments, and then use the following equation to fit each segment:
\[m(t)=A_{0}+\sum\limits_{k=1}^{n}A_{k}\sin[\frac{2\pi kt}{P_{\rm pul}}+\phi_{k}], \tag{2}\]
where \(t\) is the time; \(A_{0}\) is the mean magnitude; \(A_{k}\) is the amplitude of the \(k\) component, and \(\phi_{k}\) is the phase of the \(k\) component in sin term. Then, according to the fitting results, we can obtain the times of light extreme, pulsation parameters (the full amplitude, \(A_{1}\), and magnitude at light maximum \(Mag_{\rm{max}}\)), and Fourier coefficients (including the amplitude ratios, \(R_{ij}\)=\(A_{i}/A_{j}\), and the phase differences,
\(\phi_{ji}=i\phi_{j}-j\phi_{i}\)). In the present paper, we provide the values of \(R_{21}\), \(R_{31}\), \(\phi_{21}\), and \(\phi_{31}\) for each segment. Because of the different characteristics of the sky surveys, the adopted \(k\) values differ among the specific analyses. For ASAS-SN data, \(k=5\), and for TESS data, \(k=10\). For \(Kepler\) data, after many attempts, we set \(k=10\) when determining the times of light extremes to avoid the potential impact of the Moire effect, and set \(k=15\) when obtaining the Fourier parameters and coefficients (the pulsation amplitude is systematically overestimated when \(k=10\)). This series of analysis methods has been successfully applied (Li & Qian, 2014; Li et al., 2018, 2021, 2022).
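Once the pulsation period is fixed, the fit of Equation (2) reduces to linear least squares. The snippet below is only an illustrative sketch of this step (not the pipeline actually used in the paper), assuming a single segment of times and magnitudes and a known pulsation period.

```python
import numpy as np

def fit_fourier(t, mag, period, order=5):
    """Least-squares fit of Eq. (2): m(t) = A0 + sum_k A_k sin(2*pi*k*t/P + phi_k)."""
    omega = 2.0 * np.pi / period
    cols = [np.ones_like(t)]
    for k in range(1, order + 1):
        cols += [np.sin(k * omega * t), np.cos(k * omega * t)]
    coef, *_ = np.linalg.lstsq(np.column_stack(cols), mag, rcond=None)
    A0, rest = coef[0], coef[1:].reshape(order, 2)
    A = np.hypot(rest[:, 0], rest[:, 1])          # amplitudes A_k
    phi = np.arctan2(rest[:, 1], rest[:, 0])      # phases phi_k (sine convention)
    return {"A0": A0, "A1": A[0],
            "R21": A[1] / A[0], "R31": A[2] / A[0],
            "phi21": (phi[1] - 2 * phi[0]) % (2 * np.pi),
            "phi31": (phi[2] - 3 * phi[0]) % (2 * np.pi)}

# toy segment: synthetic RRab-like light curve sampled over ~3 pulsation cycles
t = np.linspace(0.0, 1.5, 400)
mag = (10.0 + 0.4 * np.sin(2 * np.pi * t / 0.48 + 1.0)
       + 0.15 * np.sin(4 * np.pi * t / 0.48 + 2.5))
print(fit_fourier(t, mag, period=0.48))
```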
Table 1 lists the times of light maximum of V838 Cyg obtained from the sky surveys. The time standard given by \(Kepler\) and TESS is BJD_TDB. To unify, we convert all the times to HJD_UTC (Eastman et al., 2010). These data are used to analyze the long-term period change of V838 Cyg (see Subsection 3.2). To study the weak modulation in V838 Cyg, we also obtained the pulsation parameters and Fourier coefficients from \(Kepler\) data, including the times of light maximum and minimum, the full pulsation amplitude, magnitude at light maximum, \(A_{1}\), \(R_{21}\), \(R_{31}\), \(\phi_{21}\), and \(\phi_{31}\). These data are provided as supplementary data (see Subsection 3.3).
### long-term period changes
Based on the data provided by the GEOS database and \(Kepler\), Nemec et al. (2011) proposed that the pulsation period of V838 Cyg is constant or increases slowly, where the rate of change in the latter case is \(0.05\pm 0.04\) d Myr\({}^{-1}\). Based on the times of light maximum in Table 1 and those provided in the literature (Meinunger, 1970; Sands, 1978; Hubscher et al., 2009, 2010; Hubscher, 2011, and four visual timings provided by Vandenbroe J. and Ferrand S.), we obtained a new \(O-C\) diagram (see Figure 2) by using the following linear ephemeris,
\[HJD_{\rm max}=2454964.5731+0.^{\rm d}4802799\cdot E. \tag{3}\]
In Figure 2, the \(O-C\) diagram shows a clear parabolic shape, which means that the pulsation period increases linearly. Under the two different conditions of whether or not to consider the early \(O-C\) points (Epoch \(<\) -20000), we fitted the \(O-C\) diagram with a quadratic polynomial, and obtained the corresponding rates of period change: 0.050(2) d Myr\({}^{-1}\) (dotted green line) and 0.211(7) d Myr\({}^{-1}\) (dotted blue line). The former result is consistent with that of Nemec et al. (2011), while the latter result, obtained without the early data, is four times larger. It can be seen that although the accuracy of the early data is low, they are still very important in the study of long-term period changes. We also plotted the \(O-C\) residuals after removing the parabola component for the two situations (bottom panels in Figure 2). It can be seen from the distribution of residuals on the vertical axis that the accuracy of the \(Kepler\) data is the highest. Therefore, we focus on the \(Kepler\) data to study the weak modulation components.
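As a rough illustration of this \(O-C\) procedure (with made-up timings rather than the values of Table 1), the linear ephemeris of Equation (3) and a quadratic fit give the period-change rate as follows.

```python
import numpy as np

T0, P = 2454964.5731, 0.4802799            # linear ephemeris of Eq. (3)

def o_minus_c(hjd_max):
    """O-C residuals (days) and integer cycle numbers for observed maxima."""
    hjd_max = np.asarray(hjd_max, dtype=float)
    epoch = np.round((hjd_max - T0) / P)
    return epoch, hjd_max - (T0 + P * epoch)

# placeholder maxima drawn from a slowly lengthening period (assumed rate)
true_rate = 1e-10                          # dP/dE in days per cycle
epochs = np.arange(-20000, 5000, 500)
hjd = T0 + P * epochs + 0.5 * true_rate * epochs**2
E, oc = o_minus_c(hjd)

# quadratic fit: O-C = a*E^2 + b*E + c, with dP/dE = 2a
a, b, c = np.polyfit(E, oc, 2)
rate_d_per_Myr = 2 * a / P * 365.25e6      # convert dP/dE to d/Myr
print(f"period change rate ~ {rate_d_per_Myr:.3f} d/Myr")
```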
### the modulations in \(Kepler\) data
The weak modulation with a period of tens of days in the \(Kepler\) light curves of V838 Cyg was discovered by Nemec et al. (2013) and Benko et al. (2014). In this subsection, we use the pulsation parameters and Fourier coefficients to carry out a more detailed analysis. We regard these time series data as signals, process them with the Fourier transform using the software Period04 (Lenz & Breger, 2005), and obtain the corresponding spectral information. The panels in the left column of Figure 3 show the parameters and coefficients plotted as time series, in which \(O-C_{1,\rm max}\) and \(O-C_{1,\rm min}\) denote the \(O-C\) residuals after the parabolic component has been removed (the long-term changes may interfere with the analysis of periodic signals). The red solid lines refer to the change of the long-period modulation, while the green solid lines refer to the modulation component with a short period. In the panels of \(\phi_{21}\) and \(\phi_{31}\), the data also show a linear increase (black solid lines). Therefore, before the Fourier transforms, we remove the linear components (3.065\(\times\)10\({}^{-6}\) and 5.568\(\times\)10\({}^{-6}\) rad d\({}^{-1}\)).
The panels in the right column are the corresponding Fourier amplitude spectra in low-frequency range. It can be seen that all the spectra show significant peaks at the frequency 0.01682 d\({}^{-1}\), corresponding to the modulation previously discovered. However, in the upper five spectra, it is obvious that there is also a peak with lower frequency (the mean value is about 0.00119 d\({}^{-1}\), and the corresponding period is about 840 days), but there is no corresponding component in the amplitude spectra of \(R_{21}\), \(R_{31}\), \(\phi_{21}\), and \(\phi_{31}\). This suggests that the low-frequency component seems to be some simpler modulation. The red and green vertical dotted lines indicate the frequencies of the two modulation components, respectively. In the upper-right two panels, the cyan line represents the amplitude spectra when \(k=15\) (see the introduction in Subsection 3.1). It can be seen that there are several peaks around the short-period modulation, and their frequencies are 0.01577, 0.01684, 0.01848 and 0.02105 d\({}^{-1}\) (the corresponding periods are 63, 59, 54, and 47 days), respectively. This means that, when \(k=15\), the analysis results are affected by the Moire effect (see the appendix for details).
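The paper uses Period04 for this step; a rough Python equivalent, shown here only as an illustration on a synthetic stand-in for one of the per-segment parameter series, is a Lomb-Scargle periodogram over the low-frequency range.

```python
import numpy as np
from astropy.timeseries import LombScargle

# toy stand-in for a per-segment parameter series (e.g. A_1 vs. time, in days)
t = np.arange(0.0, 1400.0, 2.0)                       # roughly Kepler-like baseline
y = (0.02 * np.sin(2 * np.pi * t / 59.45)             # short-period Blazhko term
     + 0.01 * np.sin(2 * np.pi * t / 840.0)           # long-period term
     + 0.002 * np.random.default_rng(1).normal(size=t.size))

freq = np.linspace(5e-4, 0.03, 5000)                  # d^-1, low-frequency range
power = LombScargle(t, y).power(freq)

i1 = int(np.argmax(power))                            # strongest peak
mask = np.abs(freq - freq[i1]) > 0.002                # exclude its neighbourhood
i2 = int(np.argmax(np.where(mask, power, 0.0)))       # next strongest peak
for i in (i1, i2):
    print(f"peak near {freq[i]:.5f} d^-1  (period ~ {1/freq[i]:.0f} d)")
```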
To further study the modulations, we also plot their parameter phase diagrams in Figure 4 and 5. Figure 4 shows the phase diagrams of short-period modulation. It can be seen that the phase difference between \(O-C_{1,\rm max}\) and \(O-C_{1,\rm min}\) is about \(\pi/2\), while the changes between \(O-C_{1,\rm max}\) and \(A_{1}\) (and \(Mag_{\rm max}\)) are basically in phase. In addition, we also found that the \(R_{21}\) is in phase with \(R_{31}\), while \(\phi_{21}\) and \(\phi_{31}\) are just opposite. For the long-period modulation (see Figure 5), the phase of \(O-C_{1,\rm max}\) and \(O-C_{1,\rm min}\) are the same, and the changes of the other three parameters related to the amplitude, full amplitude, \(A_{1}\), and magnitude at light maximum, are basically in phase. Table 2 lists the corresponding parameter results.
To demonstrate the varieties of phenomenology, Le Borgne et al. (2012) used the \(O-C\) and magnitude at light maximum folded with modulation period to plot the closed curves that describe the shape of the Blazhko effect. Using their method, we also plot the \(O-C_{1,\rm max}\) versus \(Mag_{\rm max}\) diagram of the two modulations of V838 Cyg in Figure 6. The black points refer to the binned observation points, while the green and red solid lines are the corresponding fitting results. For the short-period modulation, the change is clockwise, and the two parameters are roughly in phase, which is consistent with the situation in Fig. 9 of Le Borgne et al. (2012). The change of long-period modulation is counterclockwise,
and the correlation between \(O-C_{1,\max}\) and \(Mag_{\max}\) is not obvious, showing that it is indeed different from the general Blazhko modulation.
## 4 Discussions
The light curves of V838 Cyg show two different weak modulations. The short-period component causes changes in the pulsation period, amplitude and Fourier coefficients, while the long-period component only causes changes in period and amplitude. The former exhibits typical Blazhko modulation characteristics and seems to be intrinsic (due to changes in the stellar atmosphere, e.g., multi-mode resonance, Szabo et al., 2010; Buchler & Kollath, 2011; Kollath et al., 2011; Bryant, 2015; the variation of turbulent convection strength due to an oscillatory magnetic field, Stothers, 2006, 2010; or shock waves, Gillet, 2013), while the latter is extrinsic (such as due to the light travel time effect, LiTE, Irwin, 1952).
### The long-period modulation
Li & Qian (2014) found similar changes in the \(O-C\) diagrams of the other two RR Lyrae stars FN Lyr and V894 Cyg in the \(Kepler\) field. According to the same change trends of the \(O-C\) diagrams obtained from the times of light maximum and minimum, they supposed the mechanism of the changes is the LiTE caused by the presence of a companion star. We assume that this effect also exists in V838 Cyg, use an eccentric orbit model to fit the \(O-C\) diagrams (Li & Qian, 2014), and obtain the partial orbit parameters (see Table 3). Figure 7 plots the corresponding \(O-C\) diagrams, in which the same change trends can be observed.
In this case, assuming the mass of the pulsating primary star is 0.6 M\({}_{\sun}\), the calculated lower mass limit of the companion is about 0.0126 M\({}_{\sun}\), and it should be a massive planet. But one problem that needs to be considered is whether the presence of a planet causes amplitude variation and change in \(Mag_{\max}\). The general viewpoint is that the companion with long orbital periods will only cause modulation of the pulsation frequency (Shibahashi & Kurtz, 2012). Eclipsing may occur, but it is difficult to be recognized under the interference of pulsation. However, in the field of binary stars and cataclysmic variable stars, there is a reflection effect between companions (Wilson, 1990), and perhaps the companion can influence the pulsation amplitude and \(Mag_{\max}\) of the pulsating host star through this effect.
Shibahashi & Kurtz (2012) developed a new technology which can deduce the orbital parameters of the binary system from the Fourier transform spectra of the light curves that are affected by the LiTE, and this method has also been successfully applied (Murphy et al., 2013). Shibahashi & Kurtz (2012) pointed out that for most cases (\(\alpha\ll 1\), where \(\alpha\) is the amplitude ratio of the sidelobes to the central peak; for the definition of \(\alpha\), see Equation [21] in their paper), the LiTE will cause sidelobes near the pulsation frequencies, and show the following important characteristics: "the amplitude ratio of the sidelobes to the central peak is the same for all pulsation frequencies, and for low eccentricity systems the phases of the sidelobes are in quadrature with that of the central peak at the time of zero radial velocity". We use their method to verify the spectrum of V838 Cyg. Using software Period04, we performed Fourier analysis on rectified \(Kepler\) data (Benko et al., 2014). After pre-whitening by the pulsation peaks, the amplitude spectrum near the pulsation frequency \(f_{0}\) and harmonic frequencies are presented in Figure 8, in which two groups of peak components, corresponding to short and long-period modulations, can be observed. Figure 9 shows the \(\alpha/\nu_{\rm osc}\) of the two modulations obtained by different \(k\) values. It can be seen that the values of \(\alpha/\nu_{\rm osc}\) for long-period modulation are basically equal, and conform to the characteristics described by Shibahashi & Kurtz (2012). Table 4 lists the corresponding values of \(\alpha/\nu_{\rm osc}\).
However, there is a problem, that is, the frequency difference between the central peak and the sidelobes representing the long-period component is 0.000895 d\({}^{-1}\), which is not consistent with our analysis results. We find that the reason for this result is that the long-term period increase of V838 Cyg affects the spectral analysis (this is also the reason why we used \(O-C_{1}\) values in Figure 3). On the one hand, the non-periodic term makes the period of the long-period modulation longer; on the other hand, it also increases the absolute value of \(\alpha/\nu_{\rm osc}\). That is, the \(\alpha/\nu_{\rm osc}\) values (red solid points) in figure 9 include two components: the non-periodic term and the periodic modulation, which contribute about 75% and 25%, respectively. Since the former can be regarded as the frequency modulation, their \(\alpha/\nu_{\rm osc}\) are also equal, so we can expect that the \(\alpha/\nu_{\rm osc}\) values of the modulation with the period of 840 days are also equal. In addition, the Fourier spectra are affected by the short timebase of \(Kepler\) data and the asymmetric variations (i.e. the unnegligible orbital eccentricity, see Figure 7).
Based on the above analyses and discussions, the long-period modulation in V838 Cyg shows different characteristics from the general Blazhko modulation. We initially propose that the corresponding mechanism is the LiTE, or at least an external factor.
### Metal abundance and Blazhko modulation
One of our purposes is to explore the relationship between metal abundance and Blazhko modulation. Noting that V838 Cyg is the RRab star with the highest metal abundance and the weakest modulation among all Blazhko RRab stars in the \(Kepler\) field (Nemec et al., 2013), we define a similar parameter with reference to the amplitude modulation factor (which characterizes the modulation depth) in radio technology:
\[M_{\rm a}=\frac{A_{\rm full,max}-A_{\rm full,min}}{A_{\rm full,max}+A_{\rm full,min}}, \tag{4}\]
where \(A_{\rm full,max}\), and \(A_{\rm full,min}\) are the maximum and minimum values of the full amplitude of the light curve, respectively. These values and the metal abundances are given in Tables 2 and 9 of Nemec et al. (2013). In the left panel of Figure 10 the metal abundances are plotted against the \(M_{\rm a}\). It can be seen that there is a significant negative correlation between the two parameters, that is, the higher the metal abundance, the weaker the Blazhko modulation. We also notice that all those RRab stars in the \(Kepler\) field with higher metal abundance than V838 Cyg are non-Blazhko stars, and there are also some non-Blazhko stars that show low metal abundances. These phenomena seem to indicate that metal
abundance does not necessarily determine whether the modulation occurs, but when it does occur, the metal abundance has an important influence on the intensity of the modulation: rich metals will 'suppress' the modulation.
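A minimal sketch of Equation (4) and of the correlation test behind Figure 10 is given below; the numbers are placeholders, not the actual values from Tables 2 and 9 of Nemec et al. (2013).

```python
import numpy as np
from scipy.stats import spearmanr

def modulation_factor(a_full_max, a_full_min):
    """Eq. (4): amplitude-modulation depth M_a."""
    return (a_full_max - a_full_min) / (a_full_max + a_full_min)

# placeholder numbers only; the real values are in Tables 2 and 9 of Nemec et al. (2013)
feh = np.array([-1.01, -1.35, -1.55, -1.80, -2.10])
ma = modulation_factor(np.array([0.78, 0.85, 0.92, 0.95, 1.05]),
                       np.array([0.77, 0.75, 0.70, 0.62, 0.60]))
rho, p = spearmanr(feh, ma)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```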
The sample size of \(Kepler\) RR Lyrae stars is not large, and more information is collected in other literature. Le Borgne et al. (2012) provided the magnitude variation at the light maximum of some Blazhko stars (see their Table 1). We define a similar parameter:
\[M_{a}^{{}^{\prime}}=\frac{\Delta Mag_{\rm max}}{A_{\rm full,mean}}. \tag{5}\]
The right panel of Figure 10 plots the metal abundance against \(M_{a}^{{}^{\prime}}\), where the metal abundances of the corresponding stars are given by Layden (1994). A similar trend can be seen in the diagram. However, unlike the left panel, there are also some stars with weak modulations located in the range of poor metal abundance. But in the metal-rich range, the modulations are the weakest. This illustrates the 'suppression' effect: strong modulation is suppressed at high metallicity.
The following question is how the metal abundance affects the Blazhko modulation. We are not familiar with the relevant theories and models, and only give a superficial viewpoint: in stellar evolution models, the metal abundance is generally considered to mainly affect the opacity; a richer metal content increases the opacity, the radiative temperature gradient increases accordingly, and the convection in the atmosphere (its intensity and boundary position?) is affected. In this case, the turbulence, magnetic field activity and shock waves which may cause the Blazhko modulation are suppressed. Based on observations, Chadid et al. (2014) proposed that the Blazhko effect originates from a dynamical interaction between a multishock structure and an outflowing wind in a coronal structure; Chadid et al. (2017) then showed that the main shock intensity of metal-poor RR Lyrae stars is larger than that of metal-rich stars. According to their viewpoints, there is the following logical relationship: the metal abundance in the stellar atmosphere is negatively correlated with the shock intensity, thus suppressing the modulation caused by the shock waves.
The metal abundance is related to several other stellar intrinsic parameters (such as luminosity, mass, and effective temperature) and pulsation parameters. Whether some of these parameters are more closely related to the modulation needs to be considered. We also examined the relationships between \(M_{a}\) (\(M_{a}^{{}^{\prime}}\)) and other pulsation parameters (\(P_{\rm pul}\), \(P_{\rm Bl}\), and other stellar atmospheric parameters), but did not find any correlation as significant as that shown in Figure 10. However, such correlations cannot be ruled out, because some physical quantities are also affected by evolutionary effects. Li et al. (2022) studied the four RRc stars in the \(Kepler\) field, and found that the change intensities of the \(O-C\) diagrams are related to the macroturbulent velocities. In fact, the metal abundances of those four stars are also negatively correlated with the macroturbulent velocity.
Low metal abundance RR Lyrae stars have greater luminosities and higher masses (Bono et al., 2007; Sandage, 2010; Nemec et al., 2013). If the correlations in Figure 10 are established, they indicate that strong modulation tends to occur in those RR Lyrae stars with greater masses, higher luminosities, and longer pulsation periods. However, this conflicts with the proposition of Skarka et al. (2020). In their study, the pulsation period is used as the abscissa value. This parameter is not only related to the intrinsic parameters of stars, but also to the evolutionary stage. A long pulsation period does not necessarily mean that the metal abundance of RR Lyrae stars must be low, and a short period does not necessarily mean that the metal abundance is high (see Figure 7 in Fabrizio et al., 2021). In addition, in the Bailey diagram, the pulsation amplitude decreases with the pulsation period, which may statistically cause the modulation amplitude to decrease accordingly. The samples studied by Skarka et al. (2020) are those RRab stars in the Galactic bulge, most of which have pulsation periods of less than 0.7 days. The median photometric metallicity of the whole sample is about -1 dex, which is systematically higher than that of the Galactic field RR Lyrae stars ([Fe/H] = -1.59, Fabrizio et al., 2019). The selection effect cannot, therefore, be ruled out.
In any case, the potential relationship between metal abundance and Blazhko modulation needs to be further verified from observational and theoretical perspectives. Recently, based on the spectroscopic data of different sky survey projects, the metal abundances of thousands of RR Lyrae stars have been obtained (Fabrizio et al., 2019; Liu et al., 2020); the TESS space telescope is capable of providing enough photometric data to study the pulsations and modulations that occur in bright RR Lyrae stars. The combination of the two data sources will certainly lead to more comprehensive research results. In terms of theoretical simulations, many models devoted to explaining the Blazhko effect do not consider the influence of different metal abundances, and perhaps new work can be considered from this perspective in the future.
## 5 Summary
Among the 16 Blazhko RRab stars studied by Nemec et al. (2013), V838 Cyg shows the weakest modulation and the highest metal abundance. After noting these characteristics, we made an in-depth study of V838 Cyg using data from several sky surveys, and made a comparative study of the modulations and physical parameters of some Blazhko RRab stars. Finally, we obtained the following results.
1. The \(O-C\) diagram shows that the pulsation period of V838 Cyg increases over a long timescale. The specific rate of period change depends on whether the early data are used: if they are, the rate is 0.050 d Myr\({}^{-1}\); it is 0.211 d Myr\({}^{-1}\) when they are not considered in the analysis. Further accumulation of observations is needed to obtain a more exact value.
2. Based on the reanalysis of the \(Kepler\) data, we confirmed the modulation component with a period of 59.45 days discovered in the earlier literature, and also found an additional weak modulation with a longer period (about 840 days). The former shows the characteristics of the typical Blazhko effect, while the long-period modulation is relatively simpler, showing only changes in the pulsation period and amplitude. We suspect that the corresponding mechanism is likely to be external (e.g. the LiTE?).
3. Based on the modulation and stellar physical parameters provided by Nemec et al. (2013), we found that there
is a significant negative correlation between the metal abundance and modulation amplitude of Blazhko RRab stars. We also collected the data from other samples (Layden, 1994; Le Borgne et al., 2012), and found that the metal abundances of the objects with the strongest modulations are poor, and the modulation factor also shows a downward trend with the increase of metal abundance. This shows that the metal abundance plays an important role in Blazhko modulation (i.e. the inhibitory effect). This finding may provide some useful support for theoretical models to explain the Blazhko effect.
When studying weak modulations on scales from tens of days to hundreds of days, the ultraprecise and uninterrupted photometric data provided by space telescopes play an irreplaceable role. Although the accuracy and resolution of the data provided by the TESS space telescope are lower than those of the \(Kepler\) space telescope, it can still provide a sufficient number of RR Lyrae variable samples for analysis. Combined with the spectroscopic metal abundance data, it is believed that the relationship between Blazhko modulation and metal abundance can be more fully demonstrated.
## 6 Acknowledgments
This work is supported by the National Natural Science Foundation of China (No. 11933008); the Natural Science Foundation of Yunnan Province (No. 202201AT070187); the National Natural Science Foundation of China (No. 12103084); and the International Cooperation Projects of the National Key R&D Program (No. 2022YFE0127300). This paper makes use of data from the DR1 of the WASP data (Butters et al., 2010) as provided by the WASP consortium, and computational resources supplied by the project "e-Infrastruktura CZ" (e-INFRA CZ LM2018140) supported by the Ministry of Education, Youth and Sports of the Czech Republic.
## 7 Data availability statements
The data used in this article are from five sky survey projects, and the original data or images can be obtained through the following links query:
[http://rr-lyr.irap.omp.eu/dbrr/](http://rr-lyr.irap.omp.eu/dbrr/) (GEOS database)
[https://www.superwasp.org/](https://www.superwasp.org/) (SWASP)
[http://www.astronomy.ohio-state.edu/assassn](http://www.astronomy.ohio-state.edu/assassn)
(ASAS-SN)
[https://mast.stsci.edu/portal/Mashup/Clients/](https://mast.stsci.edu/portal/Mashup/Clients/)
Mast/Portal.html (MAST)
[https://mast.stsci.edu/tesscut/](https://mast.stsci.edu/tesscut/)(TESS)
The data generated by our analyses are available in the article and in its online supplementary material.
|
2304.04887
|
Central limit theorems for martingales-II: convergence in the weak dual
topology
|
A convergence theorem for martingales with c\`adl\`ag trajectories (right
continuous with left limits everywhere) is obtained in the sense of the weak
dual topology on Hilbert space, under conditions that are much weaker than
those required for any of the usual Skorohod topologies. Examples are provided
to show that these conditions are also very easy to check and yield useful
asymptotic results, especially when the limit is a mixture of stochastic
processes with discontinuities.
|
Bruno N. Remillard, Jean Vaillancourt
|
2023-04-10T22:21:04Z
|
http://arxiv.org/abs/2304.04887v2
|
# Central limit theorems for martingales-II: convergence in the weak dual topology
###### Abstract.
A convergence theorem for martingales with cadlag trajectories (right continuous with left limits everywhere) is obtained in the sense of the weak dual topology on Hilbert space, under conditions that are much weaker than those required for any of the usual Skorohod topologies. Examples are provided to show that these conditions are also very easy to check and yield useful asymptotic results, especially when the limit is a mixture of stochastic processes with discontinuities.
Key words and phrases:Brownian motion, stochastic processes, weak convergence, martingales, mixtures 2020 Mathematics Subject Classification: Primary 60G44, Secondary 60F17 Partial funding in support of this work was provided by the Natural Sciences and Engineering Research Council of Canada and the Fonds quebecois de la recherche sur la nature et les technologies. We would like to thank Bouchra R. Nasri for suggesting Proposition A.2.
space in question, namely that associated with locally square-integrable trajectories. Since weak convergence of probability measures is at the core of this paper, and in order to avoid the potential confusion brought upon by the overuse of the term "weak", we will from now on refer to this weaker topology as the \(\mathcal{L}_{w}^{2}\)-topology (and all related terms accordingly).
The paper is organized as follows. Section 2 presents the \(\mathcal{L}_{w}^{2}\)-topology. The corresponding CLT for sequences of \(D\)-valued square integrable martingales is in Section 3, along with the required preliminaries about martingales. Section 4 provides instances where even \(\mathcal{M}_{1}\)-convergence (let alone \(\mathcal{J}_{1}\)- or \(\mathcal{C}\)-convergence) fails for interesting sequences of \(D\)-valued processes, while \(\mathcal{L}_{w}^{2}\)-convergence does occur. Examples of applications are worked out in Section 4. More precisely, Section 4.1 deals with stock price approximation and random sums, and it is also shown in Remark 4.1 that many results about the limiting behaviour of renewal counting processes in the literature are erroneous, showing the difficulty to work with \(\mathcal{J}_{1}\) or \(\mathcal{M}_{1}\) convergence when the limit is not continuous. Next, in Section 4.2, we study the occupation time for planar Brownian motion; in particular, we complete the proof of Theorem 3.1 of Kasahara and Kotani (1979) and we correct their Theorem 3.2. Ancillary results as well as most definitions and notations are presented in Appendix A, while any unexplained terminology can be found in either Ethier and Kurtz (1986) or Jacod and Shiryaev (2003). The main proofs are collected in Appendix B.
## 2. Locally square integrable processes and \(\mathcal{L}_{w}^{2}\)-convergence
Denote by \(\mathcal{L}_{loc}^{2}\) the space of real-valued Lebesgue measurable functions on \([0,\infty)\) which are locally square integrable with respect to Lebesgue measure, in the sense that their restriction to \([0,T]\) belongs to Hilbert space \(\mathcal{L}^{2}[0,T]\) for every \(T>0\). The (squared) \(\mathcal{L}^{2}[0,T]\)-norm is denoted by \(|x|_{T}^{2}:=\int_{0}^{T}x^{2}(t)dt\).
A (deterministic) sequence \(x_{n}\in\mathcal{L}_{loc}^{2}\) is said to \(\mathcal{L}_{w}^{2}\)-converge to \(x\in\mathcal{L}_{loc}^{2}\) if and only if the sequence of scalar products \(\int_{0}^{T}x_{n}(t)f(t)dt\to\int_{0}^{T}x(t)f(t)dt\) converges as \(n\to\infty\), for each \(f\in\mathcal{L}^{2}[0,T]\) and each \(T>0\). There is a metric on \(\mathcal{L}_{loc}^{2}\) which is compatible with the \(\mathcal{L}_{w}^{2}\)-topology, turning it into a complete separable metric (hereafter Polish) space -- see Appendix A.
**Remark 2.1**.: Note that \(D\subset\mathcal{L}^{2}_{loc}\) since cadlag functions are bounded on bounded time sets. Furthermore \(\mathcal{M}_{1}\)-convergence of a (deterministic) sequence \(x_{n}\in D\) to a limit \(x\in D\) implies \(\sup_{n}\sup_{t\in[0,T]}|x_{n}(t)-x(t)|<\infty\) for every \(T>0\), since \(\{x_{n}\}\) is a \(\mathcal{M}_{1}\)-relatively compact set and hence uniformly bounded on finite time intervals. By the dominated convergence theorem, convergence of \(x_{n}\) to the same limit \(x\) ensues in the \(\mathcal{L}^{2}[0,T]\)-norm topology for every \(T>0\) and hence so does \(\mathcal{L}^{2}_{w}\)-convergence; indeed, a (deterministic) sequence \(x_{n}\in\mathcal{L}^{2}[0,T]\) converges to \(x\in\mathcal{L}^{2}[0,T]\) in the (strong) norm topology, if and only if both \(x_{n}\)\(\mathcal{L}^{2}_{w}\)-converges to \(x\) and \(|x_{n}|_{T}\) converges to \(|x|_{T}\). Thus \(x\mapsto|x|_{T}\) is not \(\mathcal{L}^{2}_{w}\)-continuous even though \(T\mapsto|x|_{T}\) is continuous.
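As a purely numerical illustration of this last point (this snippet is ours, not part of the paper), the classical example \(x_{n}(t)=\sin(nt)\) \(\mathcal{L}^{2}_{w}\)-converges to \(0\) on \([0,T]\) while \(|x_{n}|_{T}^{2}\) stays near \(T/2\):

```python
import numpy as np

T = 1.0
N = 200000
t = np.linspace(0.0, T, N, endpoint=False)
dt = T / N
f = np.exp(-t)                                   # a fixed test function in L^2[0, T]

for n in (1, 10, 100, 1000):
    xn = np.sin(n * t)
    inner = np.sum(xn * f) * dt                  # <x_n, f>_{L^2[0,T]}  -> 0
    norm2 = np.sum(xn ** 2) * dt                 # |x_n|_T^2            -> T/2
    print(f"n={n:5d}  <x_n,f> = {inner:+.5f}   |x_n|_T^2 = {norm2:.5f}")
```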
We next turn to weak convergence of probability measures on the Borel subsets of \(\mathcal{L}^{2}_{loc}\) for the \(\mathcal{L}^{2}_{w}\)-topology. All \(\mathcal{L}^{2}_{loc}\)-valued \(\mathbb{F}\)-processes are built on the same filtered probability space \((\Omega,\mathcal{F},\mathbb{F},P)\) and adapted to a filtration \(\mathbb{F}=(\mathcal{F}_{t})_{t\geq 0}\) satisfying the usual conditions (notably, right continuity of the filtration and completeness). Since \(\mathcal{L}^{2}_{loc}\) is a Polish space, families of probability measures are relatively compact if and only if they are tight, by Prohorov's Theorem. Here is a quick test for \(\mathcal{L}^{2}_{w}\)-tightness.
**Lemma 2.1**.: _A sequence of \(\mathcal{L}^{2}_{loc}\)-valued \(\mathbb{F}\)-processes \(X_{n}\) is \(\mathcal{L}^{2}_{w}\)-tight if and only if the sequence of random variables \(\left\{|X_{n}|_{T};n\geq 1\right\}\) is tight, for each \(T>0\)._
Proof.: Balls in \(\mathcal{L}^{2}[0,T]\) are relatively compact in the \(\mathcal{L}^{2}_{w}\)-topology and because it is a Polish space, tightness is equivalent to relative compactness.
Weak convergence of a sequence of \(\mathcal{L}^{2}_{loc}\)-valued processes \(X_{n}\) to an \(\mathcal{L}^{2}_{loc}\)-valued process \(X\) is denoted by \(X_{n}\stackrel{{\mathcal{L}^{2}_{w}}}{{\rightsquigarrow}}X\). For convenience, write \(\mathcal{I}(f)(t)=\int_{0}^{t}f(s)ds\) for any \(t\geq 0\). This convergence is characterized by the following result proven in Appendix B.1.
**Theorem 2.2**.: \(X_{n}\stackrel{{\mathcal{L}^{2}_{w}}}{{\rightsquigarrow}}X\) _holds if and only if_
1. \(\left\{|X_{n}|_{T};n\geq 1\right\}\) _is tight, for each_ \(T>0\)_;_
2. _There is an_ \(\mathcal{L}^{2}_{loc}\)_-valued process_ \(X\) _such that_ \(\mathcal{I}(X_{n})\stackrel{{ f.d.d.}}{{\rightsquigarrow}} \mathcal{I}(X)\)_._
The Cramer-Wold device yields the equivalence of statement \((ii)\) of Theorem 2.2 to \(\mathcal{I}(X_{n}h)\stackrel{{ f.d.d.}}{{\rightsquigarrow}} \mathcal{I}(Xh)\), for any \(h\in\mathcal{L}^{2}_{loc}\). Note that \(X_{n}\stackrel{{\mathcal{L}^{2}_{w}}}{{\rightsquigarrow}}X\) does not necessarily imply \(X_{n}\stackrel{{ f.d.d.}}{{\rightsquigarrow}}X\); in fact it may even happen that \(X_{n}(t)\stackrel{{ Law}}{{\rightsquigarrow}}X(t)\) does not hold
for any \(t\in[0,T]\), even though the integrals \(\mathcal{I}(X_{n})(t)\) converge and their limits are almost surely differentiable everywhere. Moreover, if \(X_{n}\in D\), \(X_{n}(t)\) is arbitrarily close to \(\dfrac{\mathcal{I}(X_{n})(t+\epsilon)-\mathcal{I}(X_{n})(t)}{\epsilon}\).
**Remark 2.2**.: Extension to processes with \(\mathbb{R}^{d}\)-valued trajectories, of \(\mathcal{L}^{2}_{loc}\)-valued processes, \(\mathcal{L}^{2}_{w}\)-convergence, and so on -- through writing \(|X|_{t}:=\sum_{i=1}^{d}|X_{i}|_{t}\), \(X(s)f(s):=\sum_{i=1}^{d}X_{i}(s)f_{i}(s)\), etc., when \(X=(X_{1},\ldots,X_{d})\) and \(f=(f_{1},\ldots,f_{d})\) -- leads to immediate confirmation of Remark 2.1, Lemma 2.1 and Theorem 2.2 for processes with trajectories in \(\mathbb{R}^{d}\) as well. We need this clarification for the next statement.
**Lemma 2.3**.: _If \(\mathcal{L}^{2}_{loc}\)-valued processes \(X_{n}\), \(Y_{n}\), \(X\) and \(Y\) are such that \(X_{n}\stackrel{{\mathcal{L}^{2}_{w}}}{{\rightsquigarrow}}X\), \(Y_{n}\stackrel{{\mathcal{L}^{2}_{w}}}{{\rightsquigarrow}}Y\) and \((\mathcal{I}(X_{n}),\mathcal{I}(Y_{n}))\stackrel{{ f.d.d.}}{{ \rightsquigarrow}}(\mathcal{I}(X),\mathcal{I}(Y))\), then \((X_{n},Y_{n})\stackrel{{\mathcal{L}^{2}_{w}}}{{\rightsquigarrow}}( X,Y)\)._
**Remark 2.3**.: Lemma 2.3 also extends at once to processes with \(\mathbb{R}^{d}\)-valued trajectories and to \(d\)-tuples of processes instead of just pairs. We proceed with \(d=1\) for the rest of this section, since several of the statements involve the \(\mathcal{M}_{1}\)-topology, which is not compatible with the linear structure on \(D\) and therefore cannot be treated coordinatewise, save in some special cases -- see Whitt (2002).
**Remark 2.4**.: In the light of Proposition A.4, a sequence of \(D\)-valued processes \(X_{n}\) satisfying both \(X_{n}\stackrel{{ f.d.d.}}{{\rightsquigarrow}}X\) and \(|X_{n}|\stackrel{{ f.d.d.}}{{\rightsquigarrow}}|X|\) for some \(X\in D\), must also satisfy \(|X_{n}|\stackrel{{\mathcal{C}}}{{\rightsquigarrow}}|X|\) since the processes \(|X_{n}|_{t},|X|_{t}\) are nondecreasing in \(t\geq 0\); hence the sequence of continuous processes \(\big{\{}|X_{n}|;n\geq 1\big{\}}\) is \(\mathcal{C}\)-tight. By Remark 2.1, \(D\)-valued processes \(X\), \(X_{n}\) for which there holds \(X_{n}\stackrel{{\mathcal{M}_{1}}}{{\rightsquigarrow}}X\) automatically verify both \(X_{n}\stackrel{{\mathcal{L}^{2}_{w}}}{{\rightsquigarrow}}X\) and \(|X_{n}|\stackrel{{ f.d.d.}}{{\rightsquigarrow}}|X|\) (as well as \(X_{n}\stackrel{{ f.d.d.}}{{\rightsquigarrow}}X\), by definition of \(\mathcal{M}_{1}\)-convergence); thus, \(|X_{n}|\stackrel{{\mathcal{C}}}{{\rightsquigarrow}}|X|\) follows. As a result, so does \(|X_{n}-X|\stackrel{{\mathcal{C}}}{{\rightsquigarrow}}0\).
Summarizing, one has the following result.
**Proposition 2.4**.: _Suppose \(X\), \(X_{n}\) are \(D\)-valued processes satisfying \(X_{n}\stackrel{{\mathcal{M}_{1}}}{{\rightsquigarrow}}X\). Then \(X_{n}\stackrel{{\mathcal{L}^{2}_{w}}}{{\rightsquigarrow}}X\), \(|X_{n}|\stackrel{{\mathcal{C}}}{{\rightsquigarrow}}|X|\) and \(|X_{n}-X|\stackrel{{\mathcal{C}}}{{\rightsquigarrow}}0\)._
One of the main reasons for choosing the \(\mathcal{L}^{2}_{w}\)-topology is the following theorem stating sufficient conditions for the \(\mathcal{L}^{2}_{w}\)-continuity of the composition mapping for sequences of \(D\)-valued processes. The proof of the theorem is relegated to Appendix B.
Note that even in the \(\mathcal{M}_{1}\)-topology, the composition mapping is not continuous in general, even when \(X\) is a Brownian motion; see, e.g., Proposition A.2.
**Theorem 2.5**.: _Suppose \(X\), \(Y\), \(X_{n}\), \(Y_{n}\) are \(D\)-valued processes such that \(X_{n}\overset{\mathcal{C}}{\rightsquigarrow}X\) and \((X_{n},Y_{n})\overset{f.d.d.}{\rightsquigarrow}(X,Y)\) hold, with \(Y_{n}\), \(Y\) non-negative, non-decreasing such that \(Y_{n}(t)\to\infty\) as \(t\to\infty\) almost surely and either of the following hold: a) \(Y_{n}\overset{\mathcal{M}_{1}}{\rightsquigarrow}Y\) with \(Y_{n}^{-1}(0)=0\) for every \(n\); or b) \(Y_{n}^{-1}\overset{\mathcal{M}_{1}}{\rightsquigarrow}Y^{-1}\) with \(Y_{n}(0)=0\) for every \(n\). Then \(X_{n}\circ Y_{n}\overset{\mathcal{L}^{2}_{w}}{\rightsquigarrow}X\circ Y\)._
## 3. \(D\)-valued martingales and a CLT in the \(\mathcal{L}_{w}^{2}\)-topology
Suppose that \(M_{n}\) is a sequence of \(D\)-valued square integrable \(\mathbb{F}\)-martingales started at \(M_{n}(0)=0\). By square integrable martingale we mean those satisfying \(E\{M^{2}(t)\}<\infty\) for every \(t\geq 0\). Note that because of possible discontinuities of trajectories, the quadratic variation \([M_{n}]\) can be distinct from its predictable compensator \(A_{n}:=\langle M_{n}\rangle\). The largest jump is denoted by \(J_{T}(M_{n}):=\sup_{s\in[0,T]}|M_{n}(s)-M_{n}(s-)|\). Some useful assumptions are formulated next.
**Hypothesis 3.1**.: All of the following hold:
* \(A_{n}(t)\to\infty\) as \(t\to\infty\) almost surely, for each fixed \(n\geq 1\);
* There is a \(D\)-valued process \(A\) such that (i) \(A_{n}\overset{f.d.d.}{\rightsquigarrow}A\); (ii) For all \(t\geq 0\), \(\lim_{n\to\infty}E\left\{A_{n}(t)\right\}=E\left\{A(t)\right\}<\infty\); (iii) \(A(t)\to\infty\) as \(t\to\infty\) almost surely.
Writing the inverse process for \(A_{n}\) as \(\tau_{n}(s)=\inf\{t\geq 0;A_{n}(t)>s\}\), one defines the rescaled \(\mathbb{F}_{\tau_{n}}\)-martingale \(W_{n}=M_{n}\circ\tau_{n}\), with compensator \(\langle W_{n}\rangle:=A_{n}\circ\tau_{n}\).
**Hypothesis 3.2**.: There holds \(\lim_{n\to\infty}E\langle W_{n}\rangle_{t}=t\) for all \(t\geq 0\).
**Remark 3.1**.: By the right-continuity of \(A_{n}\), \(A_{n}\circ\tau_{n}(t)\geq t\) holds; therefore, \(\lim_{n\to\infty}E\langle W_{n}\rangle_{t}=t\) is equivalent to \(\langle W_{n}\rangle_{t}\overset{L^{1}}{\rightsquigarrow}t\). Hypothesis 3.2 yields \(\langle W_{n}\rangle_{t}\overset{Law}{\rightsquigarrow}t\) for any \(t\geq 0\); by Proposition A.4, \(\langle W_{n}\rangle\) is \(\mathcal{C}\)-tight.
An inhomogeneous Levy process \(A\) is an \(\mathbb{F}\)-adapted \(D\)-valued process with independent increments (with respect to \(\mathbb{F}\)), without fixed times of discontinuity and such that \(A_{0}=0\in\mathbb{R}\). We assume also here that \(A\) has deterministic characteristics
(in the sense of the Levy-Khintchine formula), so all Levy processes here are also semimartingales -- see Jacod and Shiryaev (2003, Theorem II.4.15). The inhomogeneity (in time) means that stationarity of the increments, usually required of Levy processes, is lifted. This choice reflects the existence of weak limits exhibiting independent increments without homogeneity in time or space, in some applications. All processes in the present section are built on \(D\), ensuring continuity in probability of the trajectories, another usual requirement.
**Hypothesis 3.3**.: There is a \(D\)-valued process \(A\) started at \(A(0)=0\), such that
* \(\tau(s)=\inf\{t\geq 0;A(t)>s\}\) is an inhomogeneous Levy process;
* Either of the following conditions hold: (i) \(A_{n}\overset{f.d.d.}{\rightsquigarrow}A\) and for every \(n\), \(\tau_{n}(0)=0\); (ii) \(\tau_{n}\overset{f.d.d.}{\rightsquigarrow}\tau\) and for every \(n\), \(A_{n}(0)=0\).
**Remark 3.2**.: By Proposition A.3 and Remark A.2, Hypothesis 3.3.b(i) implies both \(A_{n}\overset{\mathcal{M}_{1}}{\rightsquigarrow}A\) and the \(\mathcal{M}_{1}\)-continuity of mapping \(A_{n}\mapsto\tau_{n}\), so \(\tau_{n}\overset{\mathcal{M}_{1}}{\rightsquigarrow}\tau\) as well. The reverse argument holds under Hypothesis 3.3.b(ii); therefore, Hypothesis 3.3 automatically implies both \(\tau_{n}\overset{\mathcal{M}_{1}}{\rightsquigarrow}\tau\) and \(A_{n}\overset{\mathcal{M}_{1}}{\rightsquigarrow}A\). Invoking Remark A.2 again confirms \((A_{n},\tau_{n})\overset{\mathcal{M}_{1}}{\rightsquigarrow}(A,\tau)\) as well, since the mapping \(A\mapsto(A,\tau)\) is also \(\mathcal{M}_{1}\)-continuous, under condition (i); and similarly for mapping \(\tau\mapsto(A,\tau)\), under condition (ii).
Our main result is the following CLT, proven in Appendix B.3.
**Theorem 3.1**.: _Assume that both Hypotheses 3.1 and 3.2 hold. Assume that either: a) Hypothesis 3.3 holds; or b) \(A\) is an inhomogeneous Levy process and \(\tau_{n}(0)=0\), for every \(n\). Then there holds \(W_{n}\overset{\mathcal{C}}{\rightsquigarrow}W\) and \((M_{n},A_{n},W_{n})\overset{\mathcal{L}_{w}^{2}}{\rightsquigarrow}(M,A,W)\), with \(M=W\circ A\), where \(W\) is a Brownian motion independent of the process \(A\)._
## 4. Examples of application
### Price approximation and random sums
Following Chavez-Casillas et al. (2019), Swishchuk et al. (2019), Guo et al. (2020) who studied limit order books, the price structure on markets can be written as \(X_{t}=\sum_{k=1}^{N_{t}}V_{0}(\xi_{k})\), where \(\xi=(\xi_{k})_{k\geq 1}\) is a finite Markov chain independent of a counting process \(N\). Here, \(N_{t}\) can be seen as the number of orders executed on a stock up to time \(t\). If \(\mu\) is the mean of \(V_{0}(\xi)\) under the stationary measure \(\pi\), then \(X_{t}=\sum_{k=1}^{N_{t}}V(\xi_{k})+\mu N_{t}\), where \(V(x)=\)
\(V_{0}(x)-\mu\). Next, according to (Durrett, 1996, Theorem 7.7.2 and Example 7.7.2), one can find \(f\) bounded so that \(Tf-f=Lf=-V\) and \(V(\xi_{k})=Y_{k}+Z_{k}-Z_{k+1}\), with \(Z_{k}=Tf(\xi_{k-1})\) and \(Y_{k}=f(\xi_{k})-Tf(\xi_{k-1})\) is an \(L^{2}\) ergodic martingale difference. As a result, \(\sum_{k=1}^{n}V(\xi_{k})=\sum_{k=1}^{n}Y_{k}+Z_{1}-Z_{n+1}\). Then, if \(\sigma^{2}=E(Y_{k}^{2})\), it follows that \(\mathcal{W}_{n}(t)=\frac{1}{\sigma\sqrt{n}}\sum_{k=1}^{\lfloor nt\rfloor}Y_{k}\) converges in \(\mathcal{C}\) to a Brownian motion \(\mathcal{W}\). Next, assume that \(A_{n}(t)=N_{a_{n}t}/n\) converges in \(\mathcal{M}_{1}\) to \(A(t)\). Then, \(M_{n}(t)=\frac{1}{n^{1/2}}\sum_{k=1}^{N_{a_{n}t}}Y_{k}=\sigma\mathcal{W}_{n}\circ A_{n}(t)\) converges to \(\sigma\mathcal{W}\circ A_{t}\) f.d.d., but the convergence is not necessarily with respect to \(\mathcal{M}_{1}\). It then follows that \(\frac{X_{a_{n}t}-\mu nA_{n}(t)}{n^{1/2}}=\sigma\mathcal{W}_{n}\circ A_{n}(t)+o_{P}(1)\). This can be written as
\[X_{a_{n}t}=n^{1/2}\sigma\mathcal{W}_{n}\circ A_{n}(t)+n\mu A_{n}(t)+o_{P}(n^{1 /2}). \tag{4.1}\]
Processes of the form \(Y=\sigma W\circ A+\mu A\), where \(A\) is a non-negative stochastic process independent of the Brownian motion \(W\) have been proposed by Ane and Geman (2000) to model the behaviour of stock returns. A particular case is the famous Variance Gamma model (Madan et al., 1998), where \(A\) is a Gamma process. It follows that these processes can appear as limiting cases of (4.1). If in addition, \(A\) is deterministic, continuous, and \(\mathbb{A}_{n}(t)=\sqrt{n}\ \left\{A_{n}(t)-A(t)\right\}=\frac{N_{a_{n}t}- nA(t)}{n^{1/2}}\stackrel{{ \mathcal{J}_{1}}}{{\rightarrow}}\mathbb{A}(t)\), then
\[n^{1/2}\left\{\frac{X_{a_{n}t}}{n}-\mu A(t)\right\}=\sigma\mathcal{W}_{n} \circ A_{n}(t)+\mu\mathbb{A}_{n}(t)+o_{P}(1).\]
In most cases, \(\mathbb{A}\) will be a Brownian motion (up to a constant), independent of \(\mathcal{W}\). In what follows, we assume that \(N_{t}\) is a renewal counting process defined by \(\{N_{t}\geq k\}=\{S_{k}\leq t\}\), where \(S_{k}=\sum_{j=1}^{k}\tau_{j}\), and the \(\tau_{k}\)s are iid positive random variables. The asymptotic behaviour of \(N\) is determined by the asymptotic behaviour of \(S\).
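As a purely illustrative aside (not part of the argument), the renewal counting process just introduced is easy to simulate; the sketch below assumes exponential waiting times with mean \(c_{1}\) merely as a concrete example, and checks the elementary renewal behaviour \(N_{t}/t\to 1/c_{1}\) underlying the first case treated next.

```python
import numpy as np

# Illustration only: renewal counting process N_t with iid waiting times tau_k,
# taken here to be exponential with mean c1 (an assumption made for the example).
rng = np.random.default_rng(0)
c1, t_max, n_paths = 2.0, 5_000.0, 200

def count_renewals(t, c1, rng):
    """Return N_t = #{k >= 1 : S_k <= t}, where S_k = tau_1 + ... + tau_k."""
    n, s = 0, 0.0
    while True:
        s += rng.exponential(c1)
        if s > t:
            return n
        n += 1

ratios = np.array([count_renewals(t_max, c1, rng) / t_max for _ in range(n_paths)])
print("average of N_t/t:", ratios.mean(), "  expected 1/c1 =", 1.0 / c1)
```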
#### 4.1.1. First case: \(S_{n}\) has finite variance
Suppose \(\tau_{k}\) has mean \(c_{1}\) and variance \(\sigma_{1}^{2}\). Then, taking \(a_{n}=n\), one gets that \(A(t)=\frac{t}{c_{1}}\), \(\mathbb{A}=-\frac{\sigma_{1}}{c_{1}^{3/2}}\mathbb{W}\), where \(\mathbb{W}\) is a Brownian motion independent of \(\mathcal{W}\). It then follows from Remillard and Vaillancourt (2023, Theorem 2.1) (stated as Theorem A.6 here) that \(n^{1/2}\left\{\frac{X_{a_{n}t}}{n}-t\frac{\mu}{c_{1}}\right\}\) converges in \(\mathcal{C}\) to \(\tilde{\sigma}\tilde{\mathcal{W}}(t)\), where \(\tilde{\mathcal{W}}\) is a Brownian motion and \(\tilde{\sigma}=\left(\frac{\sigma^{2}}{c_{1}}+\frac{\mu^{2}\sigma_{1}^{2}}{c _{1}^{3}}\right)^{1/2}\). This corrects
formula (16) in Chavez-Casillas et al. (2019), where the factor \(\sigma_{1}^{2}\) is missing. This is also Theorem 7.4.1 in Whitt (2002).
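For the reader's convenience, the value of \(\tilde{\sigma}\) is just a variance computation: the limiting Brownian motions \(\mathcal{W}\) and \(\mathbb{W}\) are independent, and \(\mathbb{A}=-\frac{\sigma_{1}}{c_{1}^{3/2}}\mathbb{W}\), so that for each fixed \(t\),

\[\operatorname{Var}\left\{\sigma\mathcal{W}(t/c_{1})+\mu\,\mathbb{A}(t)\right\}=\sigma^{2}\,\frac{t}{c_{1}}+\mu^{2}\,\frac{\sigma_{1}^{2}}{c_{1}^{3}}\,t=\tilde{\sigma}^{2}t.\]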
#### 4.1.2. Second case: \(S_{n}\) is in the domain of attraction of a stable process of order \(\alpha<1\)
Setting \(a_{n}=n^{1/\alpha}\), assume that \(\mathcal{S}_{n}(t)=S_{\lfloor nt\rfloor}/a_{n}\) converges in \(\mathcal{J}_{1}\) to \(\mathcal{S}(t)\). It follows that \(A_{n}(t)=\dfrac{N_{n^{1/\alpha}t}}{n}\) converges in \(\mathcal{M}_{1}\) to \(A(t)=\mathcal{S}^{-1}(t)\), since \(\mathcal{S}_{n}^{-1}(t)=\dfrac{1+N_{n^{1/\alpha}t}}{n}\). Then, \(M_{n}(t)=\dfrac{1}{\sigma n^{1/2}}\sum_{k=1}^{N_{a_{n}t}}V(\xi_{k})=\mathcal{W}_{n}\left(N_{a_{n}t}/n\right)\) converges to \(\mathcal{W}\circ A_{t}\) f.d.d., but the convergence is not with respect to \(\mathcal{M}_{1}\), according to Proposition A.2. However, it follows from Theorem 2.5 that \(\mathcal{W}_{n}\circ A_{n}\overset{\mathcal{L}_{w}^{2}}{\rightsquigarrow}\mathcal{W}\circ A\). Note that it also follows that \(\dfrac{X_{a_{n}t}}{n}\overset{\mathcal{M}_{1}}{\rightsquigarrow}\mu A(t)\).
**Remark 4.1**.: There have been many incorrect statements about convergence of \(A_{n}\) in the literature. For example, Corollary 3.4 of Meerschaert and Scheffler (2004) about the \(\mathcal{J}_{1}\)-convergence of \(A_{n}\) is not true. The convergence is in \(\mathcal{M}_{1}\), not \(\mathcal{J}_{1}\). The authors quoted an erroneous result in Theorem 3 of Bingham (1971), where the error in the proof of the latter result comes from the incorrect inequality in Equation (9) in Bingham (1971). As stated in that article, while it is true that \(\frac{1}{2}\omega_{\mathbb{J}}\left(x,\frac{T}{2k},T\right)\leq D(x,k,T)\), where \(D(x,k,T)=\max\limits_{1\leq r\leq k}|x(rT/k)-x((r-1)T/k)|\), it is not true that \(D(x,k,T)\leq\omega_{\mathbb{J}}\left(x,\frac{T}{k},T\right)\) as claimed, even if \(x\) is monotone. If \(X\in D\), then \(\limsup_{k\to\infty}P(D(X,k,T)>\epsilon)>0\) unless \(P(X\in C)=1\). In fact, for any \(x\in D\), \(\limsup_{k\to\infty}D(x,k,T)=\sup\limits_{0<t\leq T}|\Delta x(t)|\), and if \(x\) is nondecreasing, then
\[\omega_{\mathbb{J}}\left(x,\delta,T\right)\leq\sup\limits_{\delta\leq t\leq T- \delta}\min\{x(t)-x(t-\delta),x(t+\delta)-x(t)\}.\]
Next, Theorem 4.2 in Meerschaert and Scheffler (2004), also quoted in Scalas and Viles (2012, 2014), is also incorrect. The authors concluded that since the jumps of \(\mathcal{W}\) and \(\mathcal{S}\) are not simultaneous by independence, it follows that \(A\) is strictly increasing at points on discontinuity of \(\mathcal{W}\), then \(\mathcal{W}_{n}\circ A_{n}\overset{\mathcal{J}_{1}}{\rightsquigarrow} \mathcal{W}\circ A\). However, this is an incorrect use of Whitt (2002, Theorem 13.2.4). One cannot even conclude that \(\mathcal{W}_{n}\circ A_{n}\overset{\mathcal{M}_{1}}{\rightsquigarrow} \mathcal{W}\circ A\).
#### 4.1.3. Third case: \(S_{n}\) is in the domain of attraction of a stable process of order \(\alpha=1\)
In this case, if \(\lim\limits_{x\to\infty}xP(\tau_{1}>x)=c_{1}\in(0,\infty)\), then \(S_{\lfloor nt\rfloor}/(n\log n)\to c_{1}t\) and \(N_{nt}/\left(n/\log n\right)\to t/c_{1}\), using \(E\left(e^{-s\tau_{1}}\right)=1+c_{1}s\log s+o(s\log s)\), as \(s\to 0\), from the proof of (Chavez-Casillas et al., 2019, Proposition A.1). In particular, if \(a_{n}=n\log n\), then \(N_{a_{n}t}/n\) converges in probability to \(\frac{t}{c_{1}}\). If in addition \(E\left(e^{-s\tau_{1}}\right)=1+c_{1}s\log s+c_{2}s+o(s)\), as \(s\to 0\), then \(S_{n}\) is in the domain of attraction of a stable law of order \(1\), since \(\frac{S_{\lfloor nt\rfloor}}{n}-c_{1}t\log n\rightsquigarrow c_{1}\mathcal{S}_{t}-c_{2}t\) in \(\mathcal{J}_{1}\), where \(\mathcal{S}\) is a stable process of index \(1\). This clarifies conditions under which the last case in (Chavez-Casillas et al., 2019, Section 4.2.2) holds true. As a result, as \(n\to\infty\), \(E\left\{e^{-s\mathcal{S}_{n}(t)}\right\}\to e^{t(c_{1}s\log s+c_{2}s)}\), so \(E\left\{e^{is\mathcal{S}(t)}\right\}=e^{-ic_{2}st+tc_{1}\psi(s)}\), where \(\psi(s)=-|s|\left\{\frac{\pi}{2}+i\mathrm{sgn}(s)\log|s|\right\}\). See Formula (3.19) in Feller (1971). Hence, \(\log n\left\{\frac{X_{a_{n}t}}{n}-\mu t/c_{1}\right\}\overset{\mathcal{M}_{1}}{\rightsquigarrow}-\mu\left(\mathcal{S}_{t}-t\frac{c_{2}}{c_{1}}\right)\), since \(\log n\left\{A_{n}(t)-t/c_{1}\right\}\overset{\mathcal{M}_{1}}{\rightsquigarrow}t\frac{c_{2}}{c_{1}}-\mathcal{S}_{t}\).
#### 4.1.4. Fourth case: \(S_{n}\) is in the domain of attraction of a stable process of order \(\alpha\in(1,2)\)
Finally, if \(1<\alpha<2\), then we have the following result (Whitt, 2002, Theorem 7.4.3): setting \(a_{n}=n\), one gets
\[n^{1-1/\alpha}\left(\frac{X_{nt}}{n}-t\frac{\mu}{\mu_{1}}\right)=n^{-\left( \frac{1}{\alpha}-\frac{1}{2}\right)}\mathcal{W}_{n}\circ A_{n}(t)+\mu n^{1-1/ \alpha}\left(A_{n}(t)-\frac{t}{\mu_{1}}\right)\overset{\mathcal{M}_{1}}{ \rightsquigarrow}-\mu\frac{\mathcal{S}_{t}}{\mu_{1}^{1+1/\alpha}},\]
since \(\mathcal{S}_{n}(t)=n^{-1/\alpha}\left(S_{\lfloor nt\rfloor}-nt\mu_{1}\right) \overset{\mathcal{J}_{1}}{\rightsquigarrow}\mathcal{S}_{t}\), where \(\mu_{1}=E(\tau_{1})\). Note that \(\mathcal{S}_{t}\) has a stable distribution normalized so that \(\lim\limits_{x\to\infty}x^{\alpha}P(\tau_{1}>x)=\lim\limits_{x\to\infty}x^{ \alpha}P(\mathcal{S}_{1}>x)=c_{1}\).
### Occupation times for planar Brownian motion
The main results of this section are: a complete proof of Theorem 3.1 in Kasahara and Kotani (1979) and a correction to their Theorem 3.2.
Let \(B\) be a planar Brownian motion and let \(V:\mathbb{R}^{2}\mapsto\mathbb{R}\) be continuous with compact support and set \(\bar{V}=\int_{\mathbb{R}^{2}}V(x)dx\). Then \(F(x)=\pi^{-1}\int_{\mathbb{R}^{2}}V(y)\log|y-x|dy\) is bounded and \(M_{t}=F(B_{t})-\int_{0}^{t}V(B_{s})ds\) is a martingale with \([M]_{t}=\int_{0}^{t}|\nabla F(B_{s})|^{2}ds\) by Ito's formula, since \(\frac{1}{2}\Delta F=V\) holds (Lieb and Loss, 2001, Theorems 6.21 and 10.2). So, under rescaling, the martingale \(M\) and the occupation time \(\int_{0}^{t}V(B_{s})ds\) have the same asymptotic behavior. Hypothesis 3.1.a holds, provided \(V\) is not identically null, since \(\nabla F\) is continuous and also not everywhere null, entailing \([M]_{\infty}=\infty\). Since \(M\) is continuous, Hypothesis 3.2 is also satisfied for all rescalings. Let \(m(t)\) denote a numerical function that goes to \(\infty\) with \(t\), and define
the continuous martingale \(\tilde{M}_{n}(t)=\{\log m(n)\}^{-1/2}\int_{0}^{m(nt)}\nabla F(B_{s})\cdot dB_{s}\) with quadratic variation \(\tilde{A}_{n}(t)=[\tilde{M}_{n}]_{t}=\{\log m(n)\}^{-1}\int_{0}^{m(nt)}|\nabla F (B_{s})|^{2}ds\). The common formulations involve rescalings \(m(t)=t\) and \(m(t)=te^{2t}\) (ensuring that \(\tilde{A}_{n}(0)=0\)).
#### 4.2.1. First case: \(m(t)=t\)
Here, removing the superscript on \(\tilde{A}_{n}\) in order to avoid confusion for the reader between the treatment of the two rescaled sequences, \(A_{n}(t)=\{\log n\}^{-1}\int_{0}^{nt}|\nabla F(B_{s})|^{2}ds\) converges f.d.d. to \(A\), where \(A(0)=0\) and \(A(t)=A(1)\stackrel{{ Law}}{{=}}\frac{\mathcal{E}}{2\pi}\int_{ \mathbb{R}^{2}}|\nabla F(x)|^{2}dx=\frac{\mathcal{E}}{2\pi}c_{V}\) for every \(t>0\), where \(\mathcal{E}\) is an Exponential random variable with mean \(1\), and where
\[c_{V}:=-\frac{2}{\pi}\int\int_{\mathbb{R}^{4}}V(x)V(y)\log|x-y|dxdy=\int_{ \mathbb{R}^{2}}|\nabla F(x)|^{2}dx. \tag{4.2}\]
The proof of (4.2) is given in B.4. The proof of \(A_{n}\stackrel{{ f.d.d.}}{{\leadsto}}A\) follows from the fact that for any \(0<s\leq t\), \(A_{n}(t)-A_{n}(s)\stackrel{{ Law}}{{\leadsto}}0\), since \(0\leq EA_{n}(t)-EA_{n}(s)\to 0\), using Darling and Kac (1957). As a result, \(A\) is clearly not right-continuous at \(0\) and therefore does not belong to \(D\). It also fails to verify another requirement of Theorem 3.1, namely \(A(t)\to\infty\) as \(t\to\infty\) in Hypothesis 3.1.
**Remark 4.2**.: Proving the convergence in distribution of \(A_{n}(t)\) was achieved in stages. Kallianpur and Robbins (1953) claimed that \([M]_{t}/\log t\) converges in law to an Exponential distribution. In fact they never proved this result. They refer to Robbins (1953), where a result on sums of independent random variables on the plane is claimed but not proved. Kallianpur and Robbins (1953) actually appeared before Robbins (1953). It seems that the result claimed by Robbins (1953) was in fact proven in Darling and Kac (1957), while Erdos and Taylor (1960) proved a special case involving the number of visits at \(0\) for symmetric random walks. In fact, Darling and Kac (1957), when \(V\) is a non-negative function, showed that for any given \(t>0\), not only does \(A_{n}(t)\) converge in law, but all moments of \(A_{n}(t)\) also converge to those of the limiting Exponential distribution. The general case appeared in Yamazaki (1992) and Le Gall (1992, Lemma II.6 and ensuing remarks), after some preliminary work by others, notably Erdos and Taylor (1960) and Kasahara and Kotani (1979).
#### 4.2.2. Second case: \(m(t)=te^{2t}\)
This scaling was proposed by Kasahara and Kotani (1979). In fact, Kasahara and Kotani (1979) showed that, if \(G\) is a continuous positive function with \(\bar{G}=\int_{\mathbb{R}^{2}}|x|^{\alpha}G(x)dx<\infty\), for some \(\alpha>0\), then \(\frac{1}{n}\int_{0}^{m(nt)}G(B_{s})ds\) is equivalent in law to \(H_{n}(t)=\frac{1}{n}\int_{0}^{S^{-1}\{m(nt)\}}f(\beta_{s},\theta_{s})ds\), where \(\beta\) and \(\theta\) are two independent Brownian motions, \(S_{t}=\int_{0}^{t}e^{2\beta_{u}}du\), and \(f\) is chosen so that \(\bar{f}=\int_{-\infty}^{\infty}\int_{0}^{2\pi}f(x,u)dxdu=\bar{G}\). Then, setting \(T_{n}(t)=\frac{1}{n}m^{-1}\left\{S\left(n^{2}t\right)\right\}\), one gets \(H_{n}\circ T_{n}(t)=\frac{1}{n}\int_{0}^{n^{2}t}f(\beta_{s},\theta_{s})ds\). As a result, they proved that
\[\left(H_{n}\circ T_{n}(t),\beta(n^{2}t)/n,T_{n}(t)\right)\overset{\mathcal{C} }{\rightsquigarrow}\left(2\bar{G}\ell,b,\mathbb{M}\right), \tag{4.3}\]
where \(b\) is a Brownian motion, \(\mathbb{M}_{t}=\sup\limits_{s\leq t}b(s)\), and \(\ell_{t}\) is the local time at \(0\) of \(b\). It then follows that \(T_{n}^{-1}\overset{\mathcal{M}_{1}}{\rightsquigarrow}\mathbb{M}^{-1}\), which is a Levy process. In their Theorem 3.1, Kasahara and Kotani (1979) stated that \(H_{n}\overset{\mathcal{M}_{1}}{\rightsquigarrow}H=2\bar{G}\ell\circ\mathbb{M}^{-1}\) without proving it, stopping at (4.3). To complete their proof, one may apply Proposition A.2 with \(x_{n}=H_{n}\circ T_{n}\), \(y_{n}=T_{n}^{-1}\), \(x=2\bar{G}\ell\), and \(y=\mathbb{M}^{-1}\) to conclude that \(H_{n}\overset{\mathcal{M}_{1}}{\rightsquigarrow}H=x\circ y=2\bar{G}\ell\circ\mathbb{M}^{-1}\), since \(\ell\) is monotone.
**Remark 4.3**.: It does not seem that such a result on the convergence of composition was available to Kasahara and Kotani (1979), since its first version appeared in (Yamazaki, 1992, Proposition 3.1).
Next, in their Theorem 3.2, Kasahara and Kotani (1979) stated, without proof, that if \(\bar{V}=0\), then \(\tilde{M}_{n}(t)=\frac{1}{\sqrt{n}}\int_{0}^{m(nt)}V(B_{s})ds\) converges in \(\mathcal{M}_{1}\) to \(c_{V}\beta_{2}\circ H\), where \(c_{V}=\frac{\bar{f}}{2\pi}\), \(H=2\bar{G}\ell\circ\mathbb{M}^{-1}\) and \(\beta_{2}\) is a Brownian motion independent of \(b\) and \(H\). To try to prove this result, note that \(\frac{1}{\sqrt{n}}\int_{0}^{m(nt)}V(B_{s})ds\) is equivalent in law to \(M_{n}(t)=\frac{1}{\sqrt{n}}\int_{0}^{S^{-1}\{m(nt)\}}f(\beta_{s},\theta_{s})ds\), where \(\bar{f}=\bar{V}=0\). The latter is almost a martingale due to the decomposition described previously. Applying Theorem A.6, one gets that, for some \(\gamma>0\),
\[\left(M_{n}\circ T_{n}(t),H_{n}\circ T_{n}(t),\beta(n^{2}t)/n,T_{n}(t)\right) \overset{\mathcal{C}}{\rightsquigarrow}\left(\gamma\beta_{2}\circ\ell,\bar{G} \ell,b,\mathbb{M}\right),\]
where \(\beta_{2}\) is a Brownian motion independent of \(\ell\). Because of Proposition A.2, \(M_{n}=M_{n}\circ T_{n}\circ T_{n}^{-1}\) does not \(\mathcal{M}_{1}\)-converge to \(\gamma\beta_{2}\circ\ell\circ\mathbb{M}^{-1}\) in \(D\), since \(\beta_{2}\) is not monotone with probability one on the sets \([\ell\circ\mathbb{M}^{-1}(t-),\ell\circ\mathbb{M}^{-1}(t)]\). Therefore,
Theorem 3.2 in Kasahara and Kotani (1979) is incorrect. However, since the points of discontinuity of \(\mathbb{M}^{-1}\) are at most countable and not fixed, it follows from Whitt (2002, Theorem 12.4.1) that \(M_{n}\overset{f.d.d.}{\rightsquigarrow}M_{\infty}=\gamma\beta_{2}\circ\ell\circ\mathbb{M}^{-1}\). This result was also proven in Csaki et al. (2004), using strong approximations. Next, the f.d.d. convergence implies that \(\int_{0}^{t}M_{n}(s)ds\overset{f.d.d.}{\rightsquigarrow}\int_{0}^{t}M_{\infty}(s)ds\), and hence this completes the proof of \(M_{n}\overset{\mathcal{L}_{w}^{2}}{\rightsquigarrow}M_{\infty}\), in this second case where \(m(t)=te^{2t}\), by an argument similar to the one used to prove Theorem 2.5. In fact, for any choice of finite partition \(0=t_{0}<t_{1}<t_{2}<\cdots<t_{m}=t\), \(m\geq 1\) and \(n=\infty,1,2,\ldots\), put
\[Z_{n,m}(t):=\sum_{k=1}^{m}M_{n}(t_{k})\left\{(t_{k}\wedge t)-(t_{k-1}\wedge t) \right\}.\]
As \(n\to\infty\), \(Z_{n,m}(t)\overset{Law}{\rightsquigarrow}Z_{\infty,m}(t)\), while, as \(m\to\infty\),
\[E\left\{(Z_{n,m}(t)-\int_{0}^{t}M_{n}(s)ds)^{2}\right\}\\ \leq\sum_{k=1}^{m}\left\{(t_{k}\wedge t)-(t_{k-1}\wedge t)\right\} ^{2}\left\{E[M_{n}]_{t_{k}\wedge t}-E[M_{n}]_{t_{k-1}\wedge t}\right\}\\ \leq\sup_{k}(t_{k}-t_{k-1})^{2}\sup_{n}E[M_{n}]_{t}\to 0,\]
through the selection of any sequence of partitions \(\{t_{k,m};k\leq m\}\) with vanishing mesh, i.e., such that \(\sup_{k}(t_{k,m}-t_{k-1,m})\to 0\) as \(m\to\infty\).
## Appendix A Miscellanea
Let \(D=D[0,\infty)\) be the space of \(\mathbb{B}\)-valued cadlag trajectories (right continuous with left limits everywhere), for some separable Banach space \(\mathbb{B}\), i.e., \(\mathbb{B}\) is a complete separable normed linear space with norm \(\|\cdot\|\). \(\mathbb{B}\) will always be specified implicitly by the context at hand (usually \(d\)-dimensional Euclidean space \(\mathbb{R}^{d}\)) so the subscript \(\mathbb{B}\) is omitted. All processes considered in this section have their trajectories in \(D\) and are adapted to a filtration \(\mathbb{F}=(\mathcal{F}_{t})_{t\geq 0}\) on a probability space \((\Omega,\mathcal{F},P)\) satisfying the usual conditions (notably, right continuity of the filtration and completeness). Trajectories in \(D\) are usually noted \(x(t)\) but occasionally \(x_{t}\).
On \(\mathbb{B}^{3}\), set \(\mathbb{C}(x_{1},x_{2},x_{3})=\|x_{3}-x_{1}\|\), \(\mathbb{J}(x_{1},x_{2},x_{3})=\|x_{2}-x_{1}\|\wedge\|x_{2}-x_{3}\|\), where \(k\wedge\ell=\min(k,\ell)\) and let \(\mathbb{M}(x_{1},x_{2},x_{3})\) be the minimum distance between
\(x_{2}\) and the Banach space segment \([x_{1},x_{3}]:=\{\lambda x_{1}+(1-\lambda)x_{3}\in\mathbb{B};\lambda\in[0,1]\}\). Then \(\mathbb{M}(x_{1},x_{2},x_{3})=0\) if \(x_{2}\in[x_{1},x_{3}]\) and otherwise \(\mathbb{M}(x_{1},x_{2},x_{3})\leq\mathbb{J}(x_{1},x_{2},x_{3})\).
For \(\mathbb{H}=\mathbb{M}\), \(\mathbb{H}=\mathbb{J}\) or \(\mathbb{H}=\mathbb{C}\), \(T>0\), and \(x\in D\), set
\[\omega_{\mathbb{H}}(x,\delta,T)=\sup_{0\leq t_{1}<t_{2}<t_{3}\leq T,\ t_{3}-t_ {1}<\delta}\mathbb{H}\{x(t_{1}),x(t_{2}),x(t_{3})\}.\]
Using the terminology in Skorohod (1956, Theorem 3.2.2), for each \(\mathbb{H}=\mathbb{M}\), \(\mathbb{H}=\mathbb{J}\) or \(\mathbb{H}=\mathbb{C}\), a sequence of \(D\)-valued processes \(X_{n}\) is \(\mathcal{H}_{1}\)-tight (respectively \(\mathcal{M}_{1}\)-tight, \(\mathcal{J}_{1}\)-tight or simply \(\mathcal{C}\)-tight) if and only if
1. for every \(t\) in an everywhere dense subset of \([0,\infty)\) that includes the origin, the marginal distributions of \(X_{n}(t)\) are tight,
2. for any \(\epsilon>0\), and \(T>0\), (A.1) \[\lim_{\delta\to 0}\limsup_{n\to\infty}P\{\omega_{\mathbb{H}}(X_{n},\delta,T)> \epsilon\}=0.\]
A sequence of processes \(X_{n}\) converges weakly under the \(\mathcal{H}_{1}\)-topology on \(D\) to \(X\), denoted \(X_{n}\stackrel{{\mathcal{H}_{1}}}{{\rightsquigarrow}}X\) when \(\mathbb{H}=\mathbb{J}\) or \(\mathbb{H}=\mathbb{M}\), if and only if it is \(\mathcal{H}_{1}\)-tight and the finite dimensional distributions of \(X_{n}\) converge to those of \(X\), denoted \(X_{n}\stackrel{{ f.d.d.}}{{\rightsquigarrow}}X\), over a dense set of times in \([0,\infty)\) containing \(0\). Similarly write \(X_{n}\stackrel{{\mathcal{C}}}{{\rightsquigarrow}}X\) for weak convergence under the \(\mathcal{C}\)-topology on \(D\), that of uniform convergence over compact time sets, defined by taking \(\mathbb{H}=\mathbb{C}\). While we are on the subject of notation, equality in law or equidistribution is denoted by \(\stackrel{{Law}}{{=}}\), convergence in the \(p^{th}\) mean by \(\stackrel{{ L^{p}}}{{\rightsquigarrow}}\), in probability by \(\stackrel{{ Pr}}{{\rightsquigarrow}}\), in law by \(\stackrel{{ Law}}{{\rightsquigarrow}}\) and almost sure convergence by \(\stackrel{{ a.s.}}{{\rightsquigarrow}}\).
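As an illustration added here (it plays no role in the proofs), the three moduli defined above can be evaluated by brute force for a real-valued path sampled on a finite grid; the path, grid and parameters below are arbitrary choices made for the example.

```python
import numpy as np

def m_dist(x1, x2, x3):
    """M(x1, x2, x3): distance from x2 to the segment [x1, x3] (real-valued case)."""
    lo, hi = min(x1, x3), max(x1, x3)
    return 0.0 if lo <= x2 <= hi else min(abs(x2 - x1), abs(x2 - x3))

def moduli(times, values, delta, T):
    """Brute-force omega_C, omega_J, omega_M over triples t1 < t2 < t3 <= T with t3 - t1 < delta."""
    w_c = w_j = w_m = 0.0
    idx = [i for i, t in enumerate(times) if t <= T]
    for a in idx:
        for b in idx:
            for c in idx:
                t1, t2, t3 = times[a], times[b], times[c]
                if t1 < t2 < t3 and t3 - t1 < delta:
                    x1, x2, x3 = values[a], values[b], values[c]
                    w_c = max(w_c, abs(x3 - x1))
                    w_j = max(w_j, min(abs(x2 - x1), abs(x2 - x3)))
                    w_m = max(w_m, m_dist(x1, x2, x3))
    return w_c, w_j, w_m

# A non-decreasing path with one jump: omega_C is close to the jump size,
# while omega_J stays of order delta and omega_M vanishes.
times = np.linspace(0.0, 1.0, 101)
values = np.where(times < 0.5, times, times + 1.0)
print(moduli(times, values, delta=0.1, T=1.0))
```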
Complete coverage of the \(\mathcal{J}_{1}\)-topology can be found in Ethier and Kurtz (1986) and Jacod and Shiryaev (2003), where \(\mathbb{B}\) is actually allowed to be a Polish space; while Whitt (2002) is our main reference on the \(\mathcal{M}_{1}\)-topology. Neither topology turns \(D\) into a topological vector space, that is to say, neither is compatible with the linear structure inherited on \(D\) from the Banach space \(\mathbb{B}\), which would have made the sum a continuous operator. Consequently, when considering vectors of processes valued in a product \(\prod_{i=1}^{d}\mathbb{B}_{i}\) of Banach spaces, neither topology offers the equivalence between \(\mathcal{H}_{1}\)-convergence of sequences of probability measures on \(D\) and coordinatewise \(\mathcal{H}_{1}\)-convergence, an equivalence which holds for \(\mathcal{C}\)-convergence. Similarly \(\mathcal{H}_{1}\)-tightness cannot be treated coordinatewise; it does however ensue from the \(\mathcal{H}_{1}\)-tightness of each coordinate plus that of each pairwise sum of coordinates --
\(d^{2}\) verifications are thus required. This fact precludes immediate extensions of some results from the real line to higher dimensional spaces, but there are exceptions, as will be seen next.
**Remark A.1**.: All topological statements in this paper involving multiple coordinate functions or processes with \(\mathbb{B}\)-valued cadlag trajectories, are stated and proved for the space \(D=D([0,\infty):\mathbb{B})\), with Banach space product \(\mathbb{B}:=\prod_{i=1}^{d}\mathbb{B}_{i}\) equipped with the norm \(|\cdot|_{\mathbb{B}}:=\vee_{i=1}^{d}|\cdot|_{\mathbb{B}_{i}}\). Any conclusion thus stated remains valid for the weaker product topology for the space \(\prod_{i=1}^{d}D([0,\infty):\mathbb{B}_{i})\) without further ado. This is the case whether the topology involved is \(\mathcal{C}\), \(\mathcal{L}_{w}^{2}\), \(\mathcal{M}_{1}\) or \(\mathcal{J}_{1}\). On those rare occasions when the latter product topology is used, it is done explicitly by displaying the product space, such as in Proposition A.2.
The following inequalities link the moduli associated with \(\mathcal{H}_{1}\)-topologies, for both \(\mathbb{H}=\mathbb{J}\) and \(\mathbb{H}=\mathbb{M}\), with that of the \(\mathcal{C}\)-topology, thus providing a first situation when coordinatewise arguments do work.
**Lemma A.1**.: _For any pair of functions \(X,Y\in D\) there holds, for every choice of \(\delta>0\) and \(T>0\), \(\omega_{\mathbb{H}}(X+Y,\delta,T)\leq\omega_{\mathbb{H}}(X,\delta,T)+\omega_{\mathbb{C}}(Y,\delta,T)\). For sequences of \(D\)-valued processes, if \(X_{n}\) is \(\mathcal{H}_{1}\)-tight and \(Y_{n}\) is \(\mathcal{C}\)-tight, then \(X_{n}+Y_{n}\) and \((X_{n},Y_{n})\) are also \(\mathcal{H}_{1}\)-tight. Similarly if \(X_{n}\stackrel{\mathcal{H}_{1}}{\rightsquigarrow}X\), \(Y_{n}\stackrel{\mathcal{C}}{\rightsquigarrow}Y\), and \((X_{n},Y_{n})\stackrel{f.d.d.}{\rightsquigarrow}(X,Y)\), then \(X_{n}+Y_{n}\stackrel{\mathcal{H}_{1}}{\rightsquigarrow}X+Y\) and \((X_{n},Y_{n})\stackrel{\mathcal{H}_{1}}{\rightsquigarrow}(X,Y)\)._
Proof.: Given two triplets \((x_{1},x_{2},x_{3}),(y_{1},y_{2},y_{3})\in\mathbb{B}^{3}\), there is a \(\lambda\in[0,1]\) achieving the minimum for \(\mathbb{M}(x_{1},x_{2},x_{3})=\|x_{2}-\{\lambda x_{1}+(1-\lambda)x_{3}\}\|\) and therefore
\[\mathbb{M}(x_{1}+y_{1},x_{2}+y_{2},x_{3}+y_{3})\] \[\leq \|x_{2}+y_{2}-\{\lambda(x_{1}+y_{1})+(1-\lambda)(x_{3}+y_{3})\}\|\] \[\leq \|x_{2}-\{\lambda x_{1}+(1-\lambda)x_{3}\}\|+\|y_{2}-\{\lambda y_{1}+(1-\lambda)y_{3}\}\|\] \[= \mathbb{M}(x_{1},x_{2},x_{3})+\|y_{2}-\{\lambda y_{1}+(1-\lambda)y_{3}\}\|\] \[\leq \mathbb{M}(x_{1},x_{2},x_{3})+\|y_{2}-y_{1}\|\vee\|y_{2}-y_{3}\|,\]
this last inequality using convexity of the norm and \(k\vee\ell=\max(k,\ell)\). Restricting to \(\lambda\in\{0,1\}\) instead of \([0,1]\) yields the same inequalities with \(\mathbb{J}\) in place of \(\mathbb{M}\).
The following result provides necessary and sufficient conditions under which the composition mapping \(\circ\) is \(\mathcal{M}_{1}\)-continuous, when \(\mathbb{B}=\mathbb{R}^{k}\). Let \(D_{\uparrow}\subset D\) be the subspace of those non-decreasing non-negative \(y\) such that \(y(t)\to\infty\) as \(t\to\infty\).
**Proposition A.2**.: _Suppose that \((x_{n},y_{n})\stackrel{{\mathcal{M}_{1}}}{{\to}}(x,y)\) in \(D\left([0,\infty):\mathbb{R}^{k}\right)\times D_{\uparrow}\), where \(x\) is continuous. Then \(x_{n}\circ y_{n}\stackrel{{\mathcal{M}_{1}}}{{\to}}x\circ y\) in \(D\left([0,\infty):\mathbb{R}^{k}\right)\) if and only if \(x\) is monotone on \([y(t-),y(t)]\), for any \(t\in\operatorname{Disc}(y)\)._
Proof.: Sufficiency follows from Whitt (2002, Theorem 13.2.4) so we only prove necessity. Suppose \(y_{n}\to y\) in \(\mathcal{M}_{1}\), \(y_{n}\) continuous and strictly increasing. Assume that \(t\in\operatorname{Disc}(y)\), and let \(t_{1,m}\) be a sequence increasing to \(t\) with \(t_{1,m}\not\in\operatorname{Disc}(y)\). Further let \(t_{3,m}\) be a sequence decreasing to \(t\) with \(t_{3,m}\not\in\operatorname{Disc}(y)\). Finally, for a given \(\lambda\in(0,1)\), let \(t_{n,2,m}\) be the unique point such that \(y_{n}(t_{n,2,m})=\lambda y_{n}(t_{1,m})+(1-\lambda)y_{n}(t_{3,m})\). For a given \(\delta>0\), \(t-\delta<t_{1,m}<t_{3,m}<t+\delta\) if \(m\) is large enough. By (Whitt, 2002, Theorem 12.5.1 (v)), since \(t_{1,m},t_{3,m}\) are continuity points of \(y\), it follows that \(\lim\limits_{\delta_{1}\to 0}\limsup\limits_{n\to\infty}\omega_{\mathbb{C}}(y_{n}-y,t_{j,m},\delta_{1})=0\), \(j\in\{1,3\}\); in particular, \(y_{n}(t_{j,m})\to s_{j,m}=y(t_{j,m})\) as \(n\to\infty\). Thus, \(y_{n}(t_{n,2,m})\to s_{2,m}=\lambda y(t_{1,m})+(1-\lambda)y(t_{3,m})\). If \(x_{n}\to x\) in \(\mathcal{M}_{1}\) with \(x\) continuous, it follows from (Whitt, 2002, Theorem 12.5.1 (v)) that \(x_{n}\circ y_{n}(t_{n,j,m})\to x(s_{j,m})\), \(j\in\{1,2,3\}\). Therefore,
\[\limsup\limits_{n\to\infty}\omega_{\mathbb{M}}(x_{n}\circ y_{n},\delta,T)\geq \mathbb{M}\left\{x(s_{1,m}),x(s_{2,m}),x(s_{3,m})\right\}\]
if \(m\) is large enough. By letting \(m\to\infty\), one gets \(s_{1,m}\to s_{1}=y(t-)\), \(s_{3,m}\to s_{3}=y(t)\) and \(s_{2,m}\to\lambda y(t-)+(1-\lambda)y(t)\). As a result,
\[\lim\limits_{\delta\to 0}\limsup\limits_{n\to\infty}\omega_{ \mathbb{M}}(x_{n}\circ y_{n},\delta,T)\\ \geq\sup\limits_{t\in\operatorname{Disc}(y),t<T}\sup\limits_{y(t -)\leq s\leq y(t)}\mathbb{M}\left[x\{y(t-)\},x(s),x\{y(t)\}\right]>0,\]
unless \(x\) is monotone on each interval \([y(t-),y(t)]\).
From here on we focus exclusively on real-valued functions and processes, so \(\mathbb{B}=\mathbb{R}\). Some observations for sequences of \(D\)-valued non-decreasing processes come next. For any non-decreasing non-negative function \(A\in D\), denote its inverse by \(\tau(s)=\inf\{t\geq 0;A(t)>s\}\). Let \(D_{\uparrow}^{0}\) be the subspace of those trajectories in \(D_{\uparrow}\subset D\) where \(\tau(0)=0\) as well.
**Proposition A.3**.: _The inverse map \(A\mapsto\tau\) is a well defined bijective mapping from \(D_{\uparrow}\) into itself, such that there holds \(A\circ\tau\circ A=A\) provided \(A\in D_{\uparrow}\) is either continuous everywhere or strictly increasing everywhere. Both \(A\mapsto\tau\) and the reverse map \(\tau\mapsto A\) are \(\mathcal{M}_{1}\)-continuous when restricted to \(D_{\uparrow}^{0}\); this is not necessarily the case without this additional restriction, not even on \(D_{\uparrow}\)._
Proof.: The first statement combines Whitt (2002, Corollary 13.6.1) together with Whitt (2002, Lemma 13.6.5); the second one is Whitt (2002, Theorem 13.6.3) essentially.
**Remark A.2**.: A sequence of \(D\)-valued non-decreasing processes verifies \(A_{n}\stackrel{\mathcal{M}_{1}}{\leadsto}A\) if and only if \(A_{n}\stackrel{f.d.d.}{\leadsto}A\), since they all satisfy \(\omega_{\mathbb{M}}(A_{n},\delta,T)=0\) for any \(\delta>0\) and \(T>0\). Similarly, the \(A_{n}\) are \(\mathcal{M}_{1}\)-tight if and only if their finite dimensional distributions are tight; since each \(A_{n}\) is non-decreasing, this last condition is equivalent to the tightness of \((A_{n}(T))_{n\geq 1}\) for every \(T>0\), provided the \(A_{n}\) are non-negative (or at least uniformly bounded below). Furthermore, the sum \(A_{n}+A_{n}^{\prime}\) of two \(\mathcal{M}_{1}\)-tight sequences of non-negative, non-decreasing processes is also \(\mathcal{M}_{1}\)-tight, hence so are \((A_{n},0)\), \((0,A_{n}^{\prime})\) and finally \((A_{n},A_{n}^{\prime})\) as well. If in addition there holds \(A_{n}\stackrel{\mathcal{M}_{1}}{\leadsto}A\) and \(A_{n}^{\prime}\stackrel{\mathcal{M}_{1}}{\leadsto}A^{\prime}\), then there ensues \(A_{n}+A_{n}^{\prime}\stackrel{\mathcal{M}_{1}}{\leadsto}A+A^{\prime}\) and \((A_{n},A_{n}^{\prime})\stackrel{\mathcal{M}_{1}}{\leadsto}(A,A^{\prime})\).
More can be said when the prospective limit is continuous. The following result follows from Remillard and Vaillancourt (2023).
**Proposition A.4**.: _Let \(D\)-valued non-decreasing processes \(A_{n}\) and some continuous process \(A\) be such that \(A_{n}\stackrel{{ f.d.d.}}{{\leadsto}}A\). Then \(A_{n}\) is \(\mathcal{C}\)-tight and \(A_{n}\stackrel{{\mathcal{C}}}{{\leadsto}}A\)._
The following technical result ensues. Once again \(M_{n}\) is a sequence of \(D\)-valued square integrable \(\mathbb{F}\)-martingales started at \(M_{n}(0)=0\) with quadratic variation \([M_{n}]\) and predictable compensator \(\langle M_{n}\rangle\).
**Proposition A.5**.: \(M_{n}\stackrel{{\mathcal{C}}}{{\leadsto}}0\)_, \([M_{n}]\stackrel{{\mathcal{C}}}{{\leadsto}}0\) and \(\langle M_{n}\rangle\stackrel{{\mathcal{C}}}{{\leadsto}}0\) are equivalent._
Proof.: The operator on \(D\) defined by mapping \(x\in D\) to \(t\mapsto\sup_{0\leq s\leq t}x(s)\), is a continuous operator in the \(\mathcal{C}\)-topology. The three statements \(M_{n}\stackrel{{\mathcal{C}}}{{\leadsto}}0\), \(M_{n}^{2}\stackrel{{\mathcal{C}}}{{\leadsto}}0\) and \(\sup_{0\leq s\leq\{\cdot\}}M_{n}^{2}(s)\stackrel{{\mathcal{C}}}{{ \leadsto}}0\) are therefore equivalent. By Proposition A.4, it suffices to prove the proposition with each of \(M_{n}\stackrel{{\mathcal{C}}}{{\leadsto}}0\), \([M_{n}]\stackrel{{\mathcal{C}}}{{\leadsto}}0\) and \(\langle M_{n}\rangle\stackrel{{\mathcal{C}}}{{\leadsto}}0\), respectively
replaced by \(\sup_{0\leq s\leq t}M_{n}^{2}(s)\overset{Law}{\rightsquigarrow}0\), \([M_{n}]_{t}\overset{Law}{\rightsquigarrow}0\) and \(\langle M_{n}\rangle_{t}\overset{Law}{\rightsquigarrow}0\), for all \(t\geq 0\) in all three cases. Doob's inequality yields
\[E\{[M_{n}]_{t}\}\leq E\left\{\sup_{0\leq s\leq t}M_{n}^{2}(s)\right\}\leq 4E\{[M_ {n}]_{t}\}.\]
Applying Lemma A.8 twice, interchanging the roles of \(X\) and \(Y\), yields the first equivalence for \(M_{n}\) and \([M_{n}]\). Equality \(E\{\langle M_{n}\rangle_{t}\}=E\{[M_{n}]_{t}\}\) and another double application of Lemma A.8 yields the last one, this time with \(\langle M_{n}\rangle\) and \([M_{n}]\).
Next is a miscellanea of useful results. First comes a CLT in the \(\mathcal{C}\)-topology, taken from Remillard and Vaillancourt (2023). We only state it for \(d=1\) here.
**Theorem A.6**.: _Assume that Hypothesis 3.1 holds with \(A\) continuous everywhere; that \(J_{t}(M_{n})\overset{Law}{\rightsquigarrow}0\) for any \(t>0\); that there exists an \(\mathbb{F}\)-adapted sequence of \(D\)-valued square integrable martingales \(B_{n}\) started at \(B_{n}(0)=0\) so that_
1. \((B_{n},A_{n})\overset{\mathcal{C}}{\rightsquigarrow}(B,A)\) _holds, where_ \(B\) _is a Brownian motion with respect to its natural filtration_ \(\mathbb{F}_{B}=\{\mathcal{F}_{B,t}:\;t\geq 0\}\) _and_ \(A\) _is_ \(\mathbb{F}_{B}\)_-measurable;_
2. \(\langle M_{n},B_{n}\rangle_{t}\overset{Law}{\rightsquigarrow}0\)_, for any_ \(t\geq 0\)_._
_Then \((M_{n},A_{n},B_{n})\overset{\mathcal{C}}{\rightsquigarrow}(M,A,B)\) holds, where \(M\) is a continuous square integrable martingale with respect to the (enlarged) filtration \((\mathcal{F}_{t})_{t\geq 0}\), with predictable quadratic variation process \(A\). Moreover, \(M=W\circ A\) holds, with \(W\) a standard Brownian motion which is independent of \(B\) and \(A\)._
Next, we have the following observations about Levy processes.
The characteristic function of an inhomogeneous Levy process \(A\) is given by
\[E\left[e^{i\theta(A_{t}-A_{s})}\Big{|}\,\mathcal{F}_{s}\right]=e^{\Psi_{t}( \theta)-\Psi_{s}(\theta)},\quad 0\leq s<t,\]
where \(A\) is \(\mathbb{F}\)-adapted with \(\mathbb{F}=(\mathcal{F}_{t})_{t\geq 0}\),
\[\Psi_{t}(\theta)=i\theta B_{t}-\frac{\theta^{2}C_{t}}{2}+\left\{\prod_{0<r\leq t}e^{-i\theta\Delta B_{r}}\right\}\int_{0}^{t}\int\left(e^{i\theta x}-1-i\theta x\mathbb{I}_{\{|x|\leq 1\}}\right)\nu(du,dx),\]
\(\Delta B_{r}=B_{r}-B_{r-}\), and \(\nu\) is a Levy measure. For details see Jacod and Shiryaev (2003, Theorem II.4.15, Theorem II.5.2 and Corollaries II.5.11,12,13). The characteristics of \(A\) are defined to be the deterministic functions \((B,C,\nu)\).
**Lemma A.7**.: _Suppose \(A\) is an \(\mathbb{F}\)-adapted inhomogeneous Levy process with finite variation on any finite interval \([0,T]\) and no Brownian component, i.e., \(C_{t}=0\) for all \(t\geq 0\). Then \(A\) is independent of any \(\mathbb{F}\)-Brownian motion \(W\)._
Proof.: If \(W\) is an \(\mathbb{F}\)-Brownian motion, then the martingale \(N_{1}(t)=e^{i\theta_{1}W_{t}+\frac{\theta_{1}^{2}t}{2}}\) is continuous and square integrable on any finite interval \([0,T]\) and the martingale \(N_{2}(t)=e^{i\theta_{2}A_{t}-\Psi_{t}(\theta_{2})}\) has finite variation \(V_{T}\) on any finite interval \([0,T]\), and \(V_{T}\) is square integrable. Then, from Applebaum (2004), one obtains that for any \(t\geq 0\), with \(\Delta N(u)=N(u)-N(u-)\),
\[E[N_{1}(t)N_{2}(t)]=1+\sum_{0<u\leq t}E[\Delta N_{1}(u)\Delta N_{2}(u)]=1.\]
Hence \(E\left[e^{i\theta_{1}W(t)+i\theta_{2}A(t)}\right]=e^{-\frac{\theta_{1}^{2}t}{2 }}e^{\Psi_{t}(\theta_{2})}\), proving that \(A(t)\) and \(W(t)\) are independent. Using this and the independence of the increments of \(A\) and \(W\), one may conclude that \(A\) and \(W\) are independent.
The proof of the following is in Jacod and Shiryaev (2003, Lemma I.3.30).
**Lemma A.8** (Lenglart's inequality).: _Let \(X\) be an \(\mathbb{F}\)-adapted \(D\)-valued process. Suppose that \(Y\) is optional, non-decreasing, and that, for any bounded stopping time \(\tau\), \(E|X(\tau)|\leq E\{Y(\tau)\}\). Then for any stopping time \(\tau\) and all \(\varepsilon,\eta>0\),_
* _if_ \(Y\) _is predictable,_ (A.2) \[P(\sup_{s\leq\tau}|X(s)|\geq\varepsilon)\leq\frac{\eta}{\varepsilon}+P(Y(\tau )\geq\eta).\]
* _if_ \(Y\) _is adapted,_ (A.3)
We end with a few comments about \(\mathcal{L}^{2}_{loc}\) which are used in the proofs herein.
A (deterministic) sequence \(x_{n}\in\mathcal{L}^{2}_{loc}\) is said to \(\mathcal{L}^{2}_{w}\)-converge to \(x\in\mathcal{L}^{2}_{loc}\) if and only if the sequence of scalar products \(\int_{0}^{T}x_{n}(t)f(t)dt\to\int_{0}^{T}x(t)f(t)dt\) converges as \(n\to\infty\), for each \(f\in\mathcal{L}^{2}_{loc}\) and each \(T>0\). This holds if and only if both the following statements hold for every \(T>0\): the sequence \(\left\{|x_{n}|_{T};n\geq 1\right\}\) is bounded and the requirement for convergence of the above scalar products is achieved for a norm dense subset of \(\mathcal{L}^{2}[0,T]\). In fact, given any complete orthonormal set \(\{f_{k}\}\)
for \(\mathcal{L}^{2}[0,1]\), the countable family \(\{f_{k}(\ell^{-1}\cdot);k,\ell=1,2,\ldots\}\), with \(f_{k}(\ell^{-1}t):=0\) when \(t>\ell\), will suffice, since it has a norm dense linear span in each vector space \(\mathcal{L}^{2}[0,T]\). Alternatively, given any complete orthonormal set \(\{g_{k}\}\) for Hilbert space \(\mathcal{L}^{2}[0,\infty)\) or \(\mathcal{L}^{2}(-\infty,\infty)\), the same holds with the countable family \(\{g_{k}\}\) -- for instance, in \(\mathcal{L}^{2}(-\infty,\infty)\), take the Hermite functions \(\{h_{k};k\geq 0\}\), defined by \(h_{k}(t):=(\pi^{1/2}2^{k}k!)^{-1/2}(-1)^{k}e^{t^{2}/2}D_{t}^{k}(e^{-t^{2}})\), where \(D_{t}^{k}\) is the \(k^{\mbox{th}}\) derivative with respect to \(t\), which also satisfy the uniform bound \(\sup_{k,t}|h_{k}(t)|=h_{0}(0)=\pi^{-1/4}\), due to Szasz (1951). The metric \(d\) on \(\mathcal{L}^{2}_{loc}\), defined by
\[d(x,y):=\sum_{k=0}^{\infty}\sum_{\ell=1}^{\infty}2^{-(k+\ell+1)}\left(1\wedge \left|\int_{0}^{\ell}\{x(t)-y(t)\}\,h_{k}(t)dt\right|\right),\]
is compatible with the \(\mathcal{L}^{2}_{w}\)-topology on all norm bounded subsets \(B\) of either \(\mathcal{L}^{2}[0,\infty)\) (\(\sup_{x\in B}|x|_{\infty}<\infty\)) or \(\mathcal{L}^{2}[0,T]\) (\(\sup_{x\in B}|x|_{T}<\infty\)), turning all \(\mathcal{L}^{2}_{w}\)-closed and norm bounded subsets of either \(\mathcal{L}^{2}[0,\infty)\) or \(\mathcal{L}^{2}[0,T]\) into complete separable metric (hereafter Polish) spaces, when equipped with this metric. In fact, they are compact Polish spaces since all balls in either \(\mathcal{L}^{2}[0,\infty)\) or \(\mathcal{L}^{2}[0,T]\) are relatively compact in the \(\mathcal{L}^{2}_{w}\)-topology -- for a proof see Lieb and Loss (2001).
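Purely as an illustration of the metric \(d\) just displayed, the following sketch approximates it numerically by truncating both sums and using quadrature; all truncation levels and the two sample paths are arbitrary choices made for the example.

```python
import numpy as np
from math import factorial, pi
from scipy.integrate import trapezoid
from scipy.special import eval_hermite

def hermite_function(k, t):
    """h_k(t) = (pi^{1/2} 2^k k!)^{-1/2} H_k(t) exp(-t^2/2), H_k the physicists' Hermite polynomial."""
    norm = (np.sqrt(pi) * 2.0**k * factorial(k)) ** -0.5
    return norm * eval_hermite(k, t) * np.exp(-t**2 / 2.0)

def d_metric(x, y, k_max=10, l_max=10, n_grid=2000):
    """Truncated approximation of d(x, y) = sum_{k,l} 2^{-(k+l+1)} (1 ^ |int_0^l {x-y} h_k dt|)."""
    total = 0.0
    for l in range(1, l_max + 1):
        t = np.linspace(0.0, float(l), n_grid)
        diff = x(t) - y(t)
        for k in range(0, k_max + 1):
            inner = trapezoid(diff * hermite_function(k, t), t)
            total += 2.0 ** -(k + l + 1) * min(1.0, abs(inner))
    return total

# Two square integrable paths on [0, infinity), chosen arbitrarily for the example.
x = lambda t: np.exp(-t)
y = lambda t: np.exp(-t) + 0.1 * np.sin(t) * np.exp(-t / 2.0)
print(d_metric(x, y))
```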
## Appendix B Proofs
### Proof of Theorem 2.2
The projections \((\pi_{\ell_{1},k_{1}},\pi_{\ell_{2},k_{2}},\ldots,\pi_{\ell_{m},k_{m}}): \mathcal{L}^{2}_{loc}\to\mathbb{R}^{m}\), given by setting \(\pi_{\ell,k}(f):=\int_{0}^{\ell}f(t)h_{k}(t)dt\) for all \(k\geq 0\) and \(\ell,m\geq 1\), generate the Borel \(\sigma\)-algebra for metric \(d\), and they characterize probability measures on \(\mathcal{L}^{2}_{loc}\), just as in the proof of Kolmogorov's existence theorem for probability measures on \(\mathbb{R}^{\infty}\). In the light of Lemma 2.1, it suffices to prove that \((i)\) and \((ii)\) together imply, jointly for all \((k_{1},k_{2},\ldots,k_{m})\in\{0,1,2,\ldots\}^{m}\) and all \(m\geq 1\),
\[\int_{0}^{\cdot}X_{n}(s)\{h_{k_{1}},h_{k_{2}},\ldots,h_{k_{m}}\}(s)ds\stackrel{{ f.d.d.}}{{\rightsquigarrow}}\int_{0}^{\cdot}X(s)\{h_{k_{1}},h_{k_{2}},\ldots,h_{k_{m}}\}(s)ds\]
as \(n\to\infty\). The Cramer-Wold device, applied to linear combinations of finitely many indicators of finite time intervals, plus a density argument gives sufficiency. Necessity follows the same argument applied to \(\{h_{k}\}\) to approximate \(1\in\mathcal{L}^{2}_{loc}\).
### Proof of Theorem 2.5
Setting \(|X_{n}|_{\infty,t}=\sup_{0\leq s\leq t}|X_{n}(s)|\), \(|X_{n}\circ Y_{n}|_{T}\leq\sqrt{T}|X_{n}|_{\infty,Y_{n}(T)}\) follows. Since \(Y_{n}(T)\) is tight for every \(T>0\), so is \(|X_{n}\circ Y_{n}|_{T}\). Next, under alternatives a) or b), \(V_{n}=Y_{n}^{-1}\stackrel{{\mathcal{M}_{1}}}{{\rightsquigarrow}}V=Y ^{-1}\), by Proposition A.3.
According to Theorem 2.2, to complete the proof it suffices to prove the convergence of the finite dimensional distributions of the sequence of processes \(\int_{0}^{t}X_{n}\circ Y_{n}(s)ds=Z_{n}\circ Y_{n}(t)\) to the process \(\int_{0}^{t}X\circ Y(s)ds=Z\circ Y(t)\), where \(Z_{n}(t)=\int_{0}^{t}X_{n}(s)dV_{n}(s)\) and \(Z(t)=\int_{0}^{t}X(s)dV(s)\). Because \(X_{n}\)\(\mathcal{C}\)-converges, for \(M>0\), there is a finite partition \(0=t_{0}<t_{1}<t_{2}\cdots<t_{m}=M\) of \([0,M]\) such that \(\delta_{n,m}=|X_{n}-X_{n,m}|_{\infty,M}\) and \(\delta_{\infty,m}=|X-X_{\infty,m}|_{\infty,M}\) are as small as one wants with large probability, where \(X_{n,m}(t)=\sum_{k=1}^{m}X_{n}(t_{k})\mathbb{I}_{t_{k-1}\leq t<t_{k}}\) and \(X_{\infty,m}(t)=\sum_{k=1}^{m}X(t_{k})\mathbb{I}_{t_{k-1}\leq t<t_{k}}\). Explicitly, for every \(M>0\) and \(\epsilon>0\), there are integers \(m=m(M,\epsilon)\geq 1\) and \(N=N(M,m,\epsilon)\geq 1\) such that \(P(\sup_{n\geq N}\delta_{n,m}\geq\epsilon)\leq 3P(\delta_{\infty,m}\geq \epsilon)<\epsilon\). Similarly \(|Z_{n}-Z_{n,m}|_{\infty,M}\leq\sqrt{V_{n}(M)}\delta_{n,m}\) and \(|Z-Z_{\infty,m}|_{\infty,M}\leq\sqrt{V(M)}\delta_{\infty,m}\) hold, where
\[Z_{n,m}(t)=\int_{0}^{t}X_{n,m}(s)dV_{n}(s)=\sum_{k=1}^{m}X_{n}(t_{k})\left\{V_ {n}(t_{k}\wedge t)-V_{n}(t_{k-1}\wedge t)\right\},\]
and \(Z_{\infty,m}(t)=\int_{0}^{t}X_{\infty,m}(s)dV(s)\).
By Proposition A.3 and Remark A.2, \((X_{n},Y_{n},V_{n})\stackrel{{ f.d.d.}}{{\rightsquigarrow}}(X,Y,V)\) holds, hence so does \(Z_{n,m}\stackrel{{ f.d.d.}}{{\rightsquigarrow}}Z_{\infty,m}\) for each fixed \(m\geq 1\). Since \(|Z_{n}-Z_{n,m}|_{\infty,M}\stackrel{{ Pr}}{{\rightsquigarrow}}0\) and \(|Z-Z_{\infty,m}|_{\infty,M}\stackrel{{ Pr}}{{\rightsquigarrow}}0\) hold for each fixed \(n\geq 1\) and \(M>0\), plus \(V_{n}(M)\) is tight for every \(M>0\), \(Z_{n,m}\stackrel{{ f.d.d.}}{{\rightsquigarrow}}Z_{\infty,m}\) for each fixed \(m\geq 1\) yields \(Z_{n}\stackrel{{ f.d.d.}}{{\rightsquigarrow}}Z\). All that is left to show is that \(Z_{n}\circ Y_{n}\stackrel{{ f.d.d.}}{{\rightsquigarrow}}Z\circ Y\). Since \(Y_{n}(T)\) is tight, choosing \(\epsilon>0\), one can find \(M=M_{\epsilon}\) so that for all \(n\in\mathbb{N}\), \(P(|Y_{n}(T)|>M)\leq\epsilon\) and \(P(|Y(T)|>M)\leq\epsilon\). Suppose in addition that \(0=t_{0}<t_{1}<\cdots<t_{m}=M\) are points of continuity of the distribution of \(Y(T)\). Then, on \(\{Y_{n}(T)\leq M\}\),
\[Z_{n,m}\circ Y_{n}(T) = \sum_{j=1}^{m}\mathbb{I}\{t_{j-1}<Y_{n}(T)\leq t_{j}\}X_{n}(t_{ j})\{V_{n}\circ Y_{n}(T)-V_{n}(t_{j-1})\}\] \[\quad+\sum_{j=1}^{m}\mathbb{I}\{t_{j-1}<Y_{n}(T)\leq t_{j}\} \left[\sum_{k=1}^{j-1}X_{n}(t_{k})\{V_{n}(t_{k})-V_{n}(t_{k-1})\}\right]\] \[= \sum_{j=1}^{m}\mathbb{I}\{t_{j-1}<Y_{n}(T)\leq t_{j}\}X_{n}(t_{ j})\{V_{n}\circ Y_{n}(T)-V_{n}(t_{j-1})\}\] \[\quad+\sum_{k=1}^{m-1}X_{n}(t_{k})\{V_{n}(t_{k})-V_{n}(t_{k-1}) \}\mathbb{I}\{t_{k}<Y_{n}(T)\leq M\}.\]
Since \((X_{n},Y_{n},V_{n})\stackrel{{ f.d.d.}}{{\rightsquigarrow}}(X,Y,V)\), it follows that \(Z_{n,m}\circ Y_{n}(T)\) converges in law to \(Z_{\infty,m}\circ Y(T)\). The rest of the proof follows similarly by considering common points
of continuity of the marginal distributions of \(Y(T_{1}),Y(T_{2}),\ldots,Y(T_{i})\) for any choice of \(0=T_{0}<T_{1}<\cdots<T_{i}=T\), first yielding \(Z_{n,m}\circ Y_{n}\stackrel{{ f.d.d.}}{{\rightsquigarrow}}Z_{\infty,m}\circ Y\) for each fixed \(m\geq 1\) and then \(Z_{n}\circ Y_{n}\stackrel{{ f.d.d.}}{{\rightsquigarrow}}Z\circ Y\).
### Proof of Theorem 3.1
Hypothesis 3.2 implies that \(\langle W_{n}\rangle\) is \(\mathcal{C}\)-tight; applying Theorem A.6 to the martingale \(W_{n}=M_{n}\circ\tau_{n}\) yields \((W_{n},\langle W_{n}\rangle)\stackrel{\mathcal{C}}{\rightsquigarrow}(W,\langle W\rangle)\), with \(W\) a standard Brownian motion since \(\langle W\rangle_{t}=t\), a deterministic trajectory. Hypothesis 3.1 implies \(A_{n}\stackrel{\mathcal{M}_{1}}{\rightsquigarrow}A\) (in the light of Remark A.2), which yields \(M_{n}\stackrel{\mathcal{L}_{w}^{2}}{\rightsquigarrow}M\), by Theorem 2.5. The independence of \(W\) and \(A\) stems from Lemma A.7 in the case where \(A\) is an inhomogeneous Levy process. In the case where Hypothesis 3.3 is valid instead, Remark 3.2 yields \((A_{n},\tau_{n})\stackrel{\mathcal{M}_{1}}{\rightsquigarrow}(A,\tau)\). Since the pair \((\tau,W)\) is \(\mathbb{F}_{\tau}\)-adapted and \(\tau\) is a non-decreasing inhomogeneous Levy process, it has no Brownian component; therefore, it is independent of \(W\) by Lemma A.7. Since \(A(0)=0\) guarantees the continuity of the reverse mapping \(\tau\mapsto A\), \(A\) is also independent of \(W\) and so is the pair \((A,\tau)\), as the image of the Borel measurable mapping \(\tau\mapsto(A,\tau)\). The independence of \(W\) and \(A\) implies \((A_{n},W_{n})\stackrel{f.d.d.}{\rightsquigarrow}(A,W)\), from which there follows \((W_{n},A_{n},\tau_{n})\stackrel{f.d.d.}{\rightsquigarrow}(W,A,\tau)\) under either of the alternate conditions in the statement. Going back to the end of the proof of Theorem 2.5, which yielded \(\int_{0}^{\cdot}M_{n}(s)ds\stackrel{f.d.d.}{\rightsquigarrow}\int_{0}^{\cdot}M(s)ds\), note that common points of continuity of the marginal distributions of \(A(T_{1}),A(T_{2}),\ldots,A(T_{i})\) also allow for the discretization of the integrals \(\mathcal{I}(A_{n})\) and \(\mathcal{I}(W_{n})\) along the same pattern as for \(\mathcal{I}(M_{n})\), since \(D\)-valued \(A_{n}\) and \(W_{n}\) have bounded trajectories on compact time sets. One gets, through the same reasoning as for \(\mathcal{I}(M_{n})\), \((\mathcal{I}(M_{n}),\mathcal{I}(A_{n}),\mathcal{I}(W_{n}))\stackrel{f.d.d.}{\rightsquigarrow}(\mathcal{I}(M),\mathcal{I}(A),\mathcal{I}(W))\). By Lemma 2.3 and Remark 2.3, \((M_{n},A_{n},W_{n})\stackrel{\mathcal{L}^{2}_{w}}{\rightsquigarrow}(M,A,W)\) ensues.
### Proof of (4.2)
We only do the case where \(V(x)=v(|x|)\) is radial as an illustration. Then \(F(x)=\pi^{-1}\int_{\mathbb{R}^{2}}V(y)\log|y-x|dy\) is also radial, \(\int_{0}^{\infty}rv(r)dr=0\) since \(\bar{V}=0\), and \(F(x)=\Phi(|x|)=\dfrac{1}{2\pi}\int_{0}^{\infty}\int_{0}^{2\pi}rv(r)\log\left(|x|^{2}+r^{2}-2r|x|\cos\theta\right)d\theta dr\). Setting \(\gamma=2ra/(r^{2}+a^{2})=2\sin\alpha\cos\alpha=\sin(2\alpha)\), then successively
\[\dfrac{1}{2\pi}\int_{0}^{2\pi}\log\left(1-\gamma\cos\theta\right)d\theta=\log \left(\dfrac{1+\sqrt{1-\gamma^{2}}}{2}\right)=\log\left\{\dfrac{\max\left(r^{2},a^{2}\right)}{r^{2}+a^{2}}\right\},\]
\[\Phi(a) = \int_{0}^{\infty}rv(r)\log\left(a^{2}+r^{2}\right)dr+\frac{1}{2\pi} \int_{0}^{\infty}\int_{0}^{2\pi}rv(r)\log\left(1-\gamma\cos\theta\right)d \theta dr\] \[= \int_{0}^{\infty}rv(r)\log\left(a^{2}+r^{2}\right)dr+\int_{0}^{ \infty}rv(r)\log\left(\frac{\max(r^{2},a^{2})}{r^{2}+a^{2}}\right)dr\] \[= 2\int_{0}^{\infty}rv(r)\log\max\left(r,a\right)dr.\]
As a result, \(F(x)=2\int_{0}^{\infty}rv(r)\log\max\left(r,|x|\right)dr=2\int_{|x|}^{\infty}rv (r)\log\left(\frac{r}{|x|}\right)dr\) and \(|\nabla F(x)|^{2}=\left\{\frac{2}{|x|}\int_{|x|}^{\infty}rv(r)dr\right\}^{2}\), so that
\[\int|\nabla F(x)|^{2}dx = 8\pi\int_{0}^{\infty}\frac{1}{r}\left\{\int_{r}^{\infty}sv(s) ds\right\}^{2}dr\] \[= -8\pi\int_{0}^{\infty}\int_{0}^{\infty}rav(r)v(a)\log\max(r,a)drda\] \[= -2\int_{0}^{\infty}\int_{0}^{\infty}\int_{0}^{2\pi}rav(r)v(a) \log\left(r^{2}+a^{2}-2ra\cos\theta\right)d\theta drda\]
which is precisely \(c_{V}\).
|
2305.02107
|
Locosim: an Open-Source Cross-Platform Robotics Framework
|
The architecture of a robotics software framework tremendously influences the
effort and time it takes for end users to test new concepts in a simulation
environment and to control real hardware. Many years of activity in the field
allowed us to sort out crucial requirements for a framework tailored for
robotics: modularity and extensibility, source code reusability, feature
richness, and user-friendliness. We implemented these requirements and
collected best practices in Locosim, a cross-platform framework for simulation
and real hardware. In this paper, we describe the architecture of Locosim and
illustrate some use cases that show its potential.
|
Michele Focchi, Francesco Roscia, Claudio Semini
|
2023-05-03T13:33:12Z
|
http://arxiv.org/abs/2305.02107v2
|
# Locosim: an Open-Source Cross-Platform Robotics Framework
###### Abstract
The architecture of a robotics software framework tremendously influences the effort and time it takes for end users to test new concepts in a simulation environment and to control real hardware. Many years of activity in the field allowed us to sort out crucial requirements for a framework tailored for robotics: modularity and extensibility, source code reusability, feature richness, and user-friendliness. We implemented these requirements and collected best practices in Locosim, a cross-platform framework for simulation and real hardware. In this paper, we describe the architecture of Locosim and illustrate some use cases that show its potential.
Keywords: Computer Architecture for Robotics; Software Tools for Robot Programming; Software-Hardware Integration for Robot Systems
## 1 Introduction
Writing software for robotic platforms can be arduous, time-consuming, and error-prone. In recent years, the number of research groups working in robotics has grown exponentially, each group having platforms with peculiar characteristics. The choice of morphology, actuation systems, and sensing technology is virtually unlimited, and code reuse is fundamental to getting new robots up and running in the shortest possible time. In addition, researchers routinely want to test new ideas in simulation without wasting time on coding, for instance by using high-level languages for rapid code prototyping. To pursue these goals, several robotics frameworks have been designed in the past years for teaching or for controlling specific platforms, e.g., OpenRAVE [1], Drake [2] and SL [3].
To avoid roboticists _reinventing the wheel_ whenever they buy or build a new robot, we present our framework Locosim1. Locosim is designed with the
primary goal of being platform-independent, dramatically simplifying the task of interfacing a robot with planners and controllers. Locosim consists of a ROS control node [4] (the _low-level_ controller), written in C++, that interfaces a custom Python ROS node (the _high-level_ planner/controller) with either a Gazebo simulator [5] or the real hardware. The planner relies on Pinocchio [6] for computing the robot's kinematics and dynamics and closes the control loop at a user-defined frequency.
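To make this division of labour concrete, here is a deliberately minimal sketch of what a high-level Python node of this kind can look like. It is not Locosim's actual interface: the topic name, message type, URDF path and the gravity-compensation control law are assumptions made only for illustration (the real low-level loop described above lives in the C++ ROS control node).

```python
#!/usr/bin/env python
# Minimal, hypothetical high-level node: Pinocchio for dynamics, ROS for communication.
# Topic name, message layout and URDF path are illustrative assumptions, not Locosim's API.
import numpy as np
import rospy
import pinocchio as pin
from std_msgs.msg import Float64MultiArray

URDF_PATH = "/path/to/myrobot.urdf"        # hypothetical robot description file

rospy.init_node("high_level_controller")
pub = rospy.Publisher("/command", Float64MultiArray, queue_size=1)

model = pin.buildModelFromUrdf(URDF_PATH)  # kinematic/dynamic model built from the URDF
data = model.createData()

q_des = pin.neutral(model)                 # desired joint configuration (neutral pose)
rate = rospy.Rate(500)                     # user-defined loop frequency in Hz

while not rospy.is_shutdown():
    # Gravity-compensation torques from the recursive Newton-Euler algorithm.
    tau = pin.rnea(model, data, q_des, np.zeros(model.nv), np.zeros(model.nv))
    pub.publish(Float64MultiArray(data=tau.tolist()))
    rate.sleep()
```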
### Advantages of Locosim
The benefits of the proposed framework are multiple.
* Locosim is platform-independent. It supports a list of robots with different morphology (e.g., quadrupeds, arms, hybrid structures, see Fig. 1) and provides features for fast designing and adding new robots.
* Locosim implements functions needed for all robots. Once the robot description is provided, no effort is spent on libraries for kinematics, dynamics, logging, plotting, or visualization. These valuable tools ease the synthesis of a new planner/controller.
* Locosim is easy to learn. The end user invests little time in training and gets an open-source framework with all the benefits of Python and ROS.
* Locosim is modular. Because it heavily uses the inheritance paradigm, classes of increasing complexity can provide different features depending on the nature of the specific robotic application. For instance, the controller for fixed-base robotic arms with a grasping tool can be reused for a four-legged robot with flywheels since it is built upon the same base class.
* Locosim is extensible. Our framework is modifiable and the end-user can add any supplementary functionality.
* Locosim is easy to install. It can be used either inside a virtual machine, a docker container, or natively by manually installing dependencies.
### Outline
The remainder of this paper is organized as follows. In Section 2 we highlight the critical requirements of a cross-platform robotics framework. In Section 3
Figure 1: Examples of robots already included in Locosim (from left to right, top to bottom): Aliengo [7], Go1 [8], HyQ [9], Starbot, CLIO [10], UR5 [11] with Gripper, Solo with Flywheels [12], Tractor (images are not in scale).
we detail structure and features of Locosim. In Section 4 we discuss use-case examples of our framework, either with the real robot or with its simulated digital twin. Eventually, we condense the results and present future works in Section 5.
## 2 Key aspects of a robotics framework
In the most general sense, a _robotics framework_ is a software architecture of programs and data that adhere to well-defined rules for operating robots. _End-users_ are people who will ultimately use the robotics framework and potentially bring some modifications. The robotics framework is the center of a triangle having at its vertices the end-user, the robotic platform and the simulation environment (see Fig. 2). The simulation must replicate the behaviour of the robot with a sufficient degree of accuracy. The end-user must be able to test new features and ideas in the simulation environment before running them on the robot platform. A robotics framework should provide the link among these three. In this context, we identify a list of crucial requirements that a robotics framework must possess: generality, modularity, reusability, extensibility, rapid prototyping vs. performances, feature-rich, and end-users development.
#### 2.0.1 Generality.
It is essential to release software free from unnatural restrictions and limitations. An end-user may require to design software for controlling a single robot, for multi-robot cooperation, for swarm coordination, or for reinforcement learning, which requires an abundant number of robots in the training phase. It should be possible to model any kinematic structure (floating base/fixed base robot, kinematic loops, etc.).
#### 2.0.2 Modularity.
A robotics framework should provide separate building blocks of functionalities according to the specific needs of the robot or the application.
Figure 2: End-users, robotic platform and simulation environment make a triad only if an effective robotics framework can join them.
These building blocks must be replaceable and separated by clear interfaces. The end-user may want to only visualize a specific robot in a particular joint configuration, move the joints interactively, or plan motions and understand the effects of external forces. Each of these functions must be equipped with tools for debugging and testing, e.g., unit tests. Replacing each module with improved or updated versions with additional functionalities should be easy.
#### 2.0.3 Reusability.
Pieces of source code of a program should be reused by reassembling them in more complex programs to provide a desired set of functionalities, with minor or no modifications. From this perspective, parametrization is a simple but effective method, e.g., the end-user should be able to use the same framework with different robots by changing only a single parameter. In the same way, _digital twins_ [13] can be realized by varying the status of a flag that selects between the real hardware and the simulation environment. This avoids writing different code for the simulator and the real robot.
#### 2.0.4 Extensibility.
A robotics framework must be designed with the foresight to support the addition and evolution of hardware and software that may not exist at implementation time. This property can be achieved by providing a general set of application programming interfaces (APIs). Concepts typical of Object-Oriented Programming, such as inheritance and polymorphism, play a crucial role in extensibility.
#### 2.0.5 Rapid Prototyping vs. Performances.
A framework should allow for fast code prototyping of researchers' ideas. More specialized controllers/planners are built from simpler ones in the form of recursive inheritance (_matryoshka principle_). In this way end-users have unit tests at different levels of complexity. With fast code prototyping, end-users can quickly write software without syntax and logic errors. However, they do not have any assurance about the performance: such code is just good enough for the case of study in a simulation environment. Stringent requirements appear when executing codes on real robots, e.g., short computation time and limited memory usage. Thus, the framework must expose functionalities that can deal with performance.
#### 2.0.6 Feature-rich.
Most of the end-users need a sequence of functionalities when working with robots. These include but are not limited to the computation of kinematics and dynamics, logging, plotting, and visualization. A robotics framework should provide them, and they must be easily accessible.
#### 2.0.7 End-users Development.
Besides its implementation details, a robotics framework should provide methods, techniques and tools that allow end-users to create, modify, or extend the software [14] in an easy way. It should run on widely used Operating Systems and employ renowned high-level programming
languages that facilitate software integration. Clear documentation for installation and usage must be provided, and modules should have self-explanatory names and attributes.
## 3 Locosim Description
Locosim was born as a didactic framework to simulate both fixed- and floating-base robots. It quickly evolved into a framework for researchers who want to program newly purchased robots in a short time. Locosim runs on machines with Ubuntu as the Operating System and employs ROS as middleware. Within Locosim, end-users can write robot controllers/planners in Python.
### Architecture
Locosim consists of four components: Robot Descriptions, Robot Hardware Interfaces, ROS Impedance Controller, and Robot Control. We illustrate each component in the following4.
Footnote 4: Some of the functions in Locosim components, which are quite established for the robotics community, are named after [3].
#### 3.1.1 Robot Descriptions.
The Robot Descriptions component contains dedicated packages for the characterization of each robot. For instance, the package that contains files to describe the robot myrobot, a generic mobile robot platform, is named myrobot_description. With the main focus on fast prototyping and human readability, the robot description is written in Xacro (an XML-based scripting language), which avoids code replication through macros, conditional statements and parameters. It can import descriptions of some parts of the robot from the urdfs sub-folder, or meshes describing the geometry of rigid bodies from the meshes sub-folder. At run time, the Xacro file is converted into URDF, allowing the end-user to change some parameters. The gazebo.urdf.xacro launches the ros_control package and the gazebo_ros_p3d plugin, which publishes the pose of the robot trunk in the topic /ground_truth (needed only for floating-base robots). The robot description directory must contain the files upload.launch and rviz.launch. The former processes the Xacro file, generating the URDF, and loads it into the parameter server; the latter allows the end-user to visualize the robot and interact with it by providing the conf.rviz file. This is the configuration for the ROS visualizer RViz, which can be different for every robot. Additionally, the file ros_impedance_controller.yaml must be provided for each robot: it contains the sequence of joint names, the joint PID gains and the home configuration. The Python script go0 is used during simulation startup to drive the robot to the home configuration.
#### 3.1.2 Robot Hardware Interfaces.
This folder contains drivers for the real hardware platforms supported by Locosim. They implement the interface that bridges the communication between the controller and the real robot, abstracting the specificity of each robot and exposing the same interface (e.g., EffortInterface). For instance, the UR5 robot through its driver provides three possible ROS hardware interfaces: Effort, Position and Velocity, hiding the details of the respective underlying low-level controllers.
#### 3.1.3 ROS Impedance Controller.
ROS Impedance Controller is a ROS package written in C++ that implements the _low-level_ joint controller, in the form of a PID with feedforward effort (force or torque). The /joint_state_publisher publishes the actual position, velocity and effort for each robot joint. By default, the loop frequency is set to 1 kHz. This and other parameters can be regulated in the launch file called ros_impedance_controller.launch. Robots with specific needs can be dealt with by specifying a custom launch file. This is the case of the CLIO climbing robot, which requires the model of the mountain to which it is attached. The robot_name parameter is used to load the correct robot description. If the real_robot flag is set to true, the robot hardware interface is initialized; otherwise, the Gazebo simulation starts, first running the go0 script from the robot description. In any case, RViz will be opened with the robot's specific configuration file conf.rviz. The end-user can manually change the location where the robot is spawned in RViz with the spawn parameters. Physics parameters for the simulator are stored in the sub-folder worlds.
Figure 3: Schematic representation of a typical use-case of Locosim. The end-user wants to simulate the UR5 robot arm. An instance of Ur5Generic, which is a derived class of BaseControllerFixed, sends the command to the robot through ros_impedance_controller and it receives back the actual state. Ur5Generic implements features to manage the real robot and the gripper, perform a homing procedure at startup and a class for inverse kinematics.
\begin{table}
\begin{tabular}{l l l} \hline Name & Meaning & Class \\ \hline \hline
q, q\_des & actual / desired joint positions & BCF, BC \\
qd, qd\_des & actual / desired joint velocities & BCF, BC \\
tau, tau\_ffwd & actual / feed-forward joint torques & BCF, BC \\
x\_ee & position of the end-effector expressed in base frame & BCF \\
contactForceW & contact force at the end-effector & BCF \\
contactMomentW & contact moment at the end-effector & BCF \\
basePoseW & base position and orientation in Euler angles & BC \\
baseTwistW & base linear and angular velocity & BC \\
b\_R\_w & orientation of the base link & BC \\
contactsW & position of the contacts & BC \\
grForcesW & ground reaction forces on contacts & BC \\ \hline
loadModelAndPublishers() & creates the object robot (Pinocchio wrapper), loads publishers for visual features (ros\_pub) and joint commands, and declares subscribers to /ground\_truth and /joint\_states & BCF, BC \\
startFramework() & launches ros\_impedance\_controller & BCF, BC \\
send\_des\_jstate() & publishes the /command topic with set-points for joint positions, velocities and feed-forward torques & BCF, BC \\
startupProcedure() & initializes PID gains & BCF, BC \\
initVars() & initializes class attributes & BCF, BC \\
logData() & fills in the variables x\_log with the content of x for plotting purposes; it needs to be called at every loop & BCF, BC \\
receive\_jstate() & callback associated to the subscriber /joint\_states; fills in actual joint positions, velocities and torques & BCF, BC \\
receive\_pose() & callback associated to the subscriber /ground\_truth; fills in the actual base pose and twist, and publishes the fixed transform between world and base & BCF, BC \\
updateKinematics() & from q, qd, basePoseW, baseTwistW, computes position and velocity of the robot’s center of mass, the contacts position, the associated Jacobians, the ground reaction forces, the centroidal inertia, the mass matrix and the non-linear effects & BCF, BC \\ \hline
\end{tabular}
\end{table}
Table 1: Main attributes (top) and methods (bottom) of BaseControllerFixed (BCF) and BaseController (BC).
#### 3.1.4 Robot Control.
From the end-user perspective, this is the most crucial component. It comprises classes and methods for computing the robot's kinematics and dynamics, logging and plotting time series variables, and real-time visualization in RViz. Within this component, _high-level_ planning/control strategies are implemented. The code of this component is entirely written in Python and has a few dependencies: above all, NumPy [15] and Pinocchio [6]. The former offers tools for manipulating multidimensional arrays; the latter implements functions for the robot's kinematics and dynamics. Pinocchio can be an essential instrument for researchers because of its efficient computations. Nevertheless, it can be time-consuming and cumbersome to understand for newcomers. To facilitate its use, we developed a custom_robot_wrapper for building a robot object and computing the robot mass, center of mass, Jacobians, centroidal inertia, and so on with easy-to-understand interfaces.
For the end-user, the starting point for building up a robot planner is the class BaseControllerFixed, suitable for fixed-base robots, and the derived class BaseController, which handles floating-base ones. In Table 1, we report a list of the main attributes and methods of BaseController and of its parent BaseControllerFixed. For having a more complex and specific controller, the end-user can create its own class, inheriting from one of the previous two and adding additional functionalities. E.g., QuadrupedController inherits from BaseController, and it is specific for quadruped robots. Ur5Generic adds to BaseControllerFixed the features of controlling the real robot, a gripper and a camera attached (or not) to the robotic arm UR5 (see Fig. 3). The controller class is initialized with the string robot_name, e.g., we write BaseController(myrobot) for the controller of myrobot. The end-user must pay particular attention to this string because it creates the link with the robot description and the robot hardware interface if needed. The robot_name is used for accessing the dictionary robot_param too. Among the other parameters of the dictionary, the flag real_robot permits using the same code for both the real (if set to true) and simulated (false) robot, resulting in the digital twin concept. The BaseControllerFixed class contains a ControllerManager to seamlessly swap between the control modes and controller types if the real hardware supports more than one. For instance, UR5 has two control modes (point, trajectory) and two controller types (position, torque), whereas Go1 supports a single low-level torque controller. Additionally, the GripperManager class manages the gripper in different ways for the simulation (i.e., adding additional finger joints) and for the real robot (on-off opening/closing service call to the UR5 driver as specified by the manufacturer), hiding this complexity to the end-user. The method startFramework() permits to launch the simulation or the driver, executing ros_impedance_controller.launch. It takes as input the list additional_args to propagate supplementary arguments, dealing with robot and task specificity. In the components folder there are additional classes for inverse kinematics, whole-body control, leg odometry, IMU utility, filters and more. Finally, the folder lab_exercises contains an ample list of scripts, with didactic exercises of incremental complexity, to learn Locosim and the main robotics concepts.
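To give a feel for how these pieces fit together, below is a minimal sketch of an end-user script built only on the attributes and methods listed in Table 1. The import path and the argument order of send_des_jstate() are assumptions of this illustration rather than a verbatim Locosim API; loop-rate handling and shutdown are omitted.

```python
import numpy as np
# Import path is an assumption of this sketch; adapt it to your Locosim checkout.
from base_controllers.base_controller_fixed import BaseControllerFixed

if __name__ == "__main__":
    p = BaseControllerFixed("myrobot")  # string must match the myrobot_description package
    p.startFramework()                  # launches ros_impedance_controller (sim or real robot)
    p.startupProcedure()                # initializes the PID gains
    q_home = np.copy(p.q)               # actual joint positions, filled by receive_jstate()
    dt = 0.001                          # assuming the default 1 kHz loop
    for step in range(5000):
        t = step * dt
        # Illustrative reference: small sinusoidal motion around the home configuration.
        q_des = q_home + 0.2 * np.sin(2 * np.pi * 0.25 * t) * np.ones_like(q_home)
        qd_des = np.zeros_like(q_home)
        tau_ffwd = np.zeros_like(q_home)
        # send_des_jstate() publishes the /command topic; the argument order is assumed here.
        p.send_des_jstate(q_des, qd_des, tau_ffwd)
        p.logData()                     # store time series for later plotting
```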
### Analysis of Design Choices
To fulfill the requirements stated in Section 2, we made a number of choices. We want to focus on why we selected ROS as middleware, Python as (preferred) programming language, and Pinocchio as computational tool for the robot's kinematics and dynamics.
#### 3.2.1 Why ROS.
The ROS community is spread worldwide. Over the last decade, the community has produced hundreds of open-source tools and packages: from device drivers and interface packages for various sensors and actuators to tools for debugging, visualization, and off-the-shelf algorithms for multiple purposes. With the notions of nodes, publishers, and subscribers, ROS quickly solves the arduous problem of resource handling between many processes. ROS is reliable: the system can still work if one node crashes. On the other hand, learning ROS is not an effortless task for newcomers. Moreover, modeling robots with URDF can take a lot of time, as can setting up simulations [16]. Locosim relieves end-users from these complications by adopting a _common skeleton_ infrastructure for the robot description and for the high-level planner/controller.
#### 3.2.2 Why Python.
Among the general-purpose programming languages, Python is one of the most used. It is object-oriented and relatively straightforward to learn and use compared to other languages. The availability of open-source libraries and packages is virtually endless. These reasons make Python perfect for fast prototyping software. Being an interpreted language, Python may suffer in terms of computation time and memory allocation, colliding with the real-time requirements of the real hardware. In these cases, when testing the code in simulation, the end-user may consider using profiling tools to look for the most demanding parts of the code. Before executing the software on the real robot, these parts can be optimized within Python or translated into C++ code, providing Python bindings. For the same performance reasons, the most critical part of the framework, the low-level controller, is directly implemented in C++.
#### 3.2.3 Why Pinocchio.
Pinocchio [6] is one of the most efficient tools for computing the kinematics and dynamics of poly-articulated bodies. Unlike other libraries, in which a meta-program generates source code for a single specific robot model given as input (as in the case of RobCoGen [17]), Pinocchio is a dynamic library that loads any robot model at runtime. This characteristic makes it suitable for a cross-platform framework. It implements state-of-the-art rigid-body algorithms based on a revisited version of Roy Featherstone's algorithms [18]. Pinocchio is open source, mostly written in C++ with Python bindings, and several projects rely on it. Pinocchio also provides derivatives of the kinematics and dynamics, which are essential in gradient-based optimization.
## 4 Use Cases
We want to emphasize the valuable features of Locosim with practical use cases2.
Footnote 2: A video showing the above-mentioned and other use cases can be found here:
[https://youtu.be/ZwV1LEqK-LU](https://youtu.be/ZwV1LEqK-LU)
### Visualize a Robot: kinematics check
As a first use case, we illustrate the procedure to visualize the quadruped robot Aliengo (see Fig. 1) in RViz and how to manually interact with its joints. This is a debugging tool which is crucial during the design process of a new robot, because it allows testing the kinematics without added complexity. In the Robot Descriptions package, we create a folder named aliengo_description. In the robots folder we add the XML file describing the robot's kinematic and dynamic structure. We make use of the flexibility of the open-source Xacro language to simplify the writing process: we include files that describe a leg, the transmission model, and the meshes for each of the bodies. We create the launch folder containing the files upload.launch and rviz.launch. Launching rviz.launch from the command line, the end-user can visualize the robot in RViz and manually move the joints by dragging the sliders of the joint_state_publisher_gui. The conf.rviz file helps the end-user to restore previous sessions in RViz. With this simple use case we can effectively understand the importance of the key aspects formalized so far. Being extensible, Locosim allows for the introduction of any new robot, reusing parts of code already present.
Figure 4: Execution of the pick-and-place task with the anthropomorphic arm UR5. The end-user can drive the real hardware (setting real_robot to true) or perform a simulation (real_robot to false).
### Simulation and Real Robot
As a second example, we present a pick-and-place task with the anthropomorphic arm UR5 (6 degrees of freedom, see Fig. 1). The pipeline of planning and control starts with the launch file ros_impedance_controller.launch. This is common for all the robots: it loads the robot_description parameter by calling upload.launch, and it starts the Gazebo simulator or the robot driver according to the status of the real_robot flag. Additionally, it loads the controller parameters (e.g., PID gains), which are located in the robot_description package of the robot. In the simulation case, the launch file spawns the robot at the desired location. In the real robot case, the robot driver is running (in a ROS node) on an onboard computer while the planner runs on an external computer. In both cases, the RViz GUI shows the robot configuration. Another node running on the same computer reads from a fixed-frame ZED2 camera and publishes a point cloud message of the environment on a ROS topic. We extract the coordinates of the plastic bricks that are present in the workspace. With Ur5Generic, we plan trajectories for the end-effector position and orientation to grasp and relocate the bricks. We set up an inverse kinematics problem to find a joint reference trajectory. It is published on the /ur5/command topic, together with feed-forward torques for gravity compensation. The low-level ros_impedance_controller provides feedback for tracking the joint references, based on the actual state in /joint_state. On the real robot there is no torque control and only position set-points are provided to the ur5/joint_group_pos_controller/command topic, as requested by the robot driver provided by the manufacturer. All this is dealt with by the controller_manager class, transparent to the end-users. The power of Locosim lies in the fact that it is possible to use the same robot control code both to simulate the task and to execute it on the real robot, as reported in Fig. 4.
## 5 Conclusions
Locosim is a platform-independent framework for working with robots, either in a simulation environment or with real hardware. Integrating features for the computation of robots' kinematics and dynamics, logging, plotting, and visualization, Locosim is a considerable help for roboticists who need a starting point for rapid code prototyping. If needed, performance can be ensured by implementing the critical parts of the software in C++ and providing Python bindings. We demonstrated the usefulness and versatility of our framework with use cases. Future work includes the following objectives: Locosim will be able to handle multiple platforms simultaneously, to be used in the fields of swarm robotics and collaborative robotics. We want to provide support for ROS2, since ROS lacks relevant qualifications such as real-time guarantees, safety, certification, and security. Additionally, other simulators such as CoppeliaSim or PyBullet will be added to Locosim.
|
2306.14235
|
An algorithm for bilevel optimization with traffic equilibrium
constraints: convergence rate analysis
|
Bilevel optimization with traffic equilibrium constraints plays an important
role in transportation planning and management problems such as traffic
control, transport network design, and congestion pricing. In this paper, we
consider a double-loop gradient-based algorithm to solve such bilevel problems
and provide a non-asymptotic convergence guarantee of
$\mathcal{O}(K^{-1})+\mathcal{O}(\lambda^D)$ where $K$, $D$ are respectively
the number of upper- and lower-level iterations, and $0<\lambda<1$ is a
constant. Compared to existing literature, which either provides asymptotic
convergence or makes strong assumptions and requires a complex design of step
sizes, we establish convergence for choice of simple constant step sizes and
considering fewer assumptions. The analysis techniques in this paper use
concepts from the field of robust control and can potentially serve as a
guiding framework for analyzing more general bilevel optimization algorithms.
|
Akshit Goyal, Andrew Lamperski
|
2023-06-25T12:52:53Z
|
http://arxiv.org/abs/2306.14235v1
|
An algorithm for bilevel optimization with traffic equilibrium constraints: convergence rate analysis
###### Abstract
Bilevel optimization with traffic equilibrium constraints plays an important role in transportation planning and management problems such as traffic control, transport network design, and congestion pricing. In this paper, we consider a double-loop gradient-based algorithm to solve such bilevel problems and provide a non-asymptotic convergence guarantee of \(\mathcal{O}(K^{-1})+\mathcal{O}(\lambda^{D})\) where \(K\), \(D\) are respectively the number of upper- and lower-level iterations, and \(0<\lambda<1\) is a constant. Compared to existing literature, which either provides asymptotic convergence or makes strong assumptions and requires a complex design of step sizes, we establish convergence for choice of simple constant step sizes and considering fewer assumptions. The analysis techniques in this paper use concepts from the field of robust control and can potentially serve as a guiding framework for analyzing more general bilevel optimization algorithms.
## 1 Introduction
Bilevel optimization with traffic equilibrium constraints (BOTEC), also called the Stackelberg congestion game by Li et al. [10], is an important problem widely studied in the transportation literature. It has applications in many areas of transportation planning and management, including area traffic control (Chiou [2, 4]), transport network design (Suwansirikul et al. [17], Meng et al. [12], Chiou [3], Josefsson and Patriksson [7]), and congestion pricing (Verhoef [19]). At the core of this problem is the concept of traffic equilibria, also called Wardrop equilibria, which refers to the Nash equilibrium of a routing or congestion game with a finite number of populations. In this game, each population (i.e., group of travelers) with travel demand between a unique pair of origin-destination nodes in a network competes to minimize their travel time. The equilibrium is then defined as the set of network flows that satisfy the travel demand and minimize the travel time of each population, such that there is no incentive for any population to change their route to improve their travel time. This equilibrium condition is often used as a constraint in bilevel optimization problems, where the upper-level problem seeks to optimize some system-level objective, subject to the condition that the network is in a traffic equilibrium state. The equilibrium solution in fact can be found by solving an equivalent optimization problem since the routing game is a potential game as argued by Beckmann et al. [1], Rosenthal [15] in which all populations together aim to minimize a potential function with each flow vector constrained to a simplex. As a result, in this paper, we consider BOTEC problems of the form (1) where an extra regularization term is added to the lower-level objective. This regularization makes the analysis more tractable.
\[\min_{y\in\mathcal{C}}\,F(y) = f\,(h^{*}(y),y) \tag{1a}\] \[\text{s.t. }h^{*}(y) \in\operatorname*{arg\,min}_{h=(h_{i},i\in[\mathbb{N}])}\,g(h,y)+ \eta\,\,\overline{\psi}(h)\quad\text{s.t. }h_{i}\geq 0,\,\,\mathbf{1}^{\top}h_{i}=1\,\, \forall i\in[\mathbb{N}] \tag{1b}\]
where \(f(h,y)\) and \(g(h,y)\) are respectively the upper- and lower-level objective functions, \(y\) is the upper-level decision variable, \(h=(h_{i},i\in[\mathbb{N}])\) is the lower-decision variable (where \(h_{i}\) corresponds
to the decision of \(i^{\text{th}}\) population), \(h^{*}(y)\) is the optimal lower-level solution for a given \(y\) and the regularizer \(\overline{\psi}(h)\) is a strongly convex function composed of negative Shannon entropies. Note that \(\mathbf{1}\) is the vector of ones.
BOTEC is a non-convex problem and various gradient-based solution approaches have been proposed for different transportation applications in the literature. In the context of the transport network design problem, Suwansirikul et al. [17] proposed a heuristic for solving BOTEC, and Meng et al. [12] apply an augmented Lagrangian method with known local convergence. Extending these works, Chiou [3] conducts extensive numerical experiments to demonstrate the computational efficacy of four different variants of gradient-based methods. All four approaches rely on obtaining the approximate gradient of \(F(y)\) using the sensitivity analysis of lower-level equilibrium network flows w.r.t. the upper-level decision, as discussed by Tobin and Friesz [18], Patriksson [13], and Josefsson and Patriksson [7]. This approximate gradient is used in a gradient projection algorithm to update the upper-level decision, for which Chiou [2] and Chiou [3] argue convergence to a KKT point. However, these works typically focus on asymptotic convergence without providing convergence rates.
Only recently, Liu et al. [11] provide the convergence rate of a single-loop algorithm for solving BOTEC by extending the work of Hong et al. [5] to a lower-level problem with simplex constraints. Furthermore, the recent work of Li et al. [9] constructs upper and lower bounding problems for bilevel programs with general equilibrium constraints, and bounds the rate of convergence to the optimal objective. Compared to standard assumptions in the bilevel literature, Liu et al. [11] and Li et al. [9] additionally assume \(F(y)\) to be strongly convex. Further, Liu et al. [11] assume their gradient estimator (which uses implicit differentiation of lower-level dynamics) to be Lipschitz continuous. Since \(F(y)\) depends on the optimal solution of the lower-level problem (1b), it is unclear if the convexity of \(F(y)\) can be checked. Contrary to this, our work only makes smoothness and boundedness assumptions on \(f\) and \(g\), where \(f\) can be non-convex and \(g\) is convex, but makes no assumption on \(F(y)\). Moreover, the bound on the gradient estimation error of \(F(y)\) is derived explicitly (as given in Lemma 10) without assuming Lipschitz continuity.
### Our contributions
Overall, this paper makes the following contributions.
1. Prove \(\mathcal{O}(K^{-1})+\mathcal{O}(\lambda^{D})\) convergence of the proposed algorithm for solving BOTEC to a stationary point where \(K\), \(D\) are respectively the number of upper- and lower-level iterations, \(0\!<\!\lambda\!<\!1\) is a constant.
2. As compared to the analysis in Liu et al. [11], which requires a complex design of step sizes, we prove convergence with constant upper-level and lower-level step sizes satisfying appropriate conditions. Also, only an empirical study by Li et al. [10] has been done on the convergence of the routing game as well as BOTEC under a fixed step size, whereas our work conducts a theoretical analysis.
3. Introduce analysis techniques which use concepts from the theory of robust control (Skelton et al. [16]). This provides a novel framework to analyze algorithms for a more general class of bilevel optimization problems in the future.
### Outline
The paper is outlined as follows. In Section 2, we first introduce the problem in more detail, followed by assumptions in Section 2.1 and relevant notation in Section 2.2. Section 3 highlights the main results of the paper. In particular, Section 3.2 describes the proposed algorithm to solve problem (1) and Section 3.3 specifies its convergence rate guarantee for both the unconstrained and constrained cases. Section 4 describes some of the key applications of BOTEC, such as transport network design (Section 4.2) and traffic control (Section 4.3), which rely on solving the traffic equilibrium problem explained in Section 4.1. In Section 4.4 we conduct numerical experiments on the benchmark Sioux Falls network and report computational results. Lastly, Section 5 details the proof of the convergence rate results in Section 3.3.
## 2 Problem setup
Consider problem (1) with entropic regularization \(\overline{\psi}_{\mathsf{H}}(\cdot)\) as follows with \(\eta>0\)
\[\min_{y\in\mathcal{C}}\,F(y) = f\left(h^{*}(y),y\right) \tag{2a}\] \[\mathrm{s.t.}\,\,h^{*}(y) \in \operatorname*{arg\,min}_{h=(h_{i},i\in[\mathbb{N}])\in\mathcal{H }}g^{\eta}(h,y)=g(h,y)+\eta\,\,\overline{\psi}_{\mathsf{H}}(h) \tag{2b}\]
where \(\mathcal{C}\subseteq\mathbb{R}^{\mathsf{d}_{u}}\) is a closed convex set, \(\mathcal{H}=\mathcal{H}_{1}\times\cdots\times\mathcal{H}_{\mathsf{N}}\), \(\mathcal{H}_{i}=\left\{h_{i}\in\mathbb{R}^{\mathsf{d}_{\ell}^{i}}|h_{i}\geq 0,\ \mathbf{1}^{\top}h_{i}=1\right\}\) for each \(i=1,\ldots,\mathsf{N}\), and the regularizer \(\overline{\psi}_{\mathsf{H}}(h)=\sum_{i=1}^{\mathsf{N}}\psi_{\mathsf{H}}(h_{i})\) with \(\psi_{\mathsf{H}}(h_{i})=\sum_{j=1}^{\mathsf{d}_{\ell}^{i}}\left(h_{i,j}\ln(h_{i,j})-h_{i,j}\right)\) as the negative Shannon entropy. For any given \(y\), the function \(g(h,y)\) is convex in \(h\) and therefore the lower-level objective \(g^{\eta}(h,y)\) is \(\eta\)-strongly convex in \(h\) (see Appendix B-(2b)). As a result, \(h^{*}(y)\) is unique for any given \(y\).
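For completeness, here is a quick check of the strong-convexity claim above (the detailed argument is in the paper's Appendix B), under the additional assumption that \(g(\cdot,y)\) is twice differentiable. Since \(\psi_{\mathsf{H}}(h_{i})=\sum_{j}\left(h_{i,j}\ln(h_{i,j})-h_{i,j}\right)\),
\[\nabla^{2}\psi_{\mathsf{H}}(h_{i})=\mathrm{diag}\big(1/h_{i,1},\ldots,1/h_{i,\mathsf{d}_{\ell}^{i}}\big)\succeq I\quad\text{since }0<h_{i,j}\leq 1\text{ on }\mathrm{int}(\mathcal{H}_{i}),\]
so \(\nabla_{h}^{2}g^{\eta}(h,y)=\nabla_{h}^{2}g(h,y)+\eta\,\nabla_{h}^{2}\overline{\psi}_{\mathsf{H}}(h)\succeq\eta I\), which is exactly the \(\eta\)-strong convexity that makes \(h^{*}(y)\) unique.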
### Assumptions
For vectors, \(\|\cdot\|\) means the Euclidean norm and in case of matrices, \(\|\cdot\|\) refers to the spectral norm.
**Assumption 1**: _Denote \(\boldsymbol{z}=(h,y)\). For the upper-level problem, we assume_
1. _The gradient \(\nabla_{\boldsymbol{z}}f(\boldsymbol{z})\) is \(L_{f}\)-Lipschitz:_ \[\|\nabla_{\boldsymbol{z}}f(\boldsymbol{z})-\nabla_{\boldsymbol{z}}f(\boldsymbol{z}^{\prime})\|\leq L_{f}\|\boldsymbol{z}-\boldsymbol{z}^{\prime}\|\ \forall\boldsymbol{z},\boldsymbol{z}^{\prime}\in\mathcal{H}\times\mathcal{C}\]
2. _The norm of the gradient is bounded, i.e., \(\|\nabla_{\boldsymbol{z}}f(\boldsymbol{z})\|\leq\Omega_{f}\ \forall\boldsymbol{z}\in\mathcal{H}\times\mathcal{C}\)._
**Assumption 2**: _For the lower-level problem, we assume for any given \(y\):_
1. _(L1) The gradient \(\nabla_{h}g(h,y)\) is \(L_{g}\)-Lipschitz w.r.t. \(h\):_ \[\|\nabla_{h}g(h,y)-\nabla_{h}g(h^{\prime},y)\|\leq L_{g}\|h-h^{\prime}\|\ \forall h,h^{\prime}\in\mathcal{H}\]
2. _(L2) The norm of the gradient is bounded, i.e., \(\|\nabla_{h}g(h,y)\|\leq\Omega_{g}\ \forall h\in\mathcal{H}\)._
3. _(L3) The derivatives \(\nabla_{y}\nabla_{h}g(h,y)\) and \(\nabla_{h}^{2}g(h,y)\) are \(\tau_{g}^{h}\)- and \(\rho_{g}^{h}\)-Lipschitz w.r.t. \(h\), respectively:_ \[\|\nabla_{y}\nabla_{h}g(h,y)-\nabla_{y}\nabla_{h}g(h^{\prime},y)\|\leq\tau_{g}^{h}\|h-h^{\prime}\|\ \forall h,h^{\prime}\in\mathcal{H}\] \[\|\nabla_{h}^{2}g(h,y)-\nabla_{h}^{2}g(h^{\prime},y)\|\leq\rho_{g}^{h}\|h-h^{\prime}\|\ \forall h,h^{\prime}\in\mathcal{H}\]
4. _(L4) The spectral norm of the Hessian matrix is bounded, i.e., \(\|\nabla_{h}^{2}g(h,y)\|\leq L_{g}\ \forall h\in\mathcal{H}\)._
5. _(L5) The spectral norm of the cross Hessian matrix is bounded, i.e., \(\|\nabla_{y}\nabla_{h}g(h,y)\|\leq L_{g}\ \forall h\in\mathcal{H}\)._
6. _(L6) The derivatives \(\nabla_{y}\nabla_{h}g(h,y)\) and \(\nabla_{h}^{2}g(h,y)\) are \(\tau_{g}^{y}\)- and \(\rho_{g}^{y}\)-Lipschitz w.r.t. \(y\), respectively:_ \[\|\nabla_{y}\nabla_{h}g(h,y)-\nabla_{y}\nabla_{h}g(h,y^{\prime})\|\leq\tau_{g}^{y}\|y-y^{\prime}\|\ \forall y,y^{\prime}\in\mathcal{C}\] \[\|\nabla_{h}^{2}g(h,y)-\nabla_{h}^{2}g(h,y^{\prime})\|\leq\rho_{g}^{y}\|y-y^{\prime}\|\ \forall y,y^{\prime}\in\mathcal{C}\]
### Notation
In this paper, the notation \([d]\) refers to the set \(\{1,\ldots,d\}\) for \(d>0\). For a vector \(a\in\mathbb{R}^{d}\), its square \(a^{2}=(a_{1}^{2},\ldots,a_{d}^{2})^{\top}\), square root \(\sqrt{a}=(\sqrt{a_{1}},\ldots,\sqrt{a_{d}})^{\top}\) and diagonalization \(\mathrm{diag}(a)=\begin{bmatrix}a_{1}&\ldots&0\\ 0&\ddots&0\\ 0&0&a_{d}\end{bmatrix}\). For vectors \(a,b\in\mathbb{R}^{d}\), their multiplication \(a\circ b=(a_{1}b_{1},\ldots,a_{d}b_{d})^{\top}\) and division \(a/b=(a_{1}/b_{1},\ldots,a_{d}/b_{d})\). For matrix \(A\in\mathbb{R}^{d\times d}\), \(\mathrm{eig}(A)\) is the vector of eigenvalues of \(A\). The notation \(\mathrm{int}(S)\) means the interior of a set \(S\).
In algorithm, the upper-level iteration is indexed using \(k\) and the lower-level using \(t\). For a fixed upper level iterate \(y^{k}\), the lower-level iterate is denoted by \(h^{k,t}\) and the optimal lower-level solution \(h^{*}(y^{k})\) by \(h^{k,*}\). The approximate Jacobian matrix \(\partial h^{k,t}/\partial y^{k}\) is denoted by \(\mathsf{R}_{k,t}\) and the true Jacobian matrix \(\partial h^{k,*}/\partial y^{k}\) by \(\mathsf{R}_{k,*}\).
## 3 Main results
### Gradient estimation
The ideal algorithm is gradient descent with exact gradient \(\nabla F(y)\) (at \(k^{\text{th}}\) upper-level iteration) given as
\[\nabla F(y^{k})=\nabla_{y}f\big{(}h^{k,*},y^{k}\big{)}+\mathsf{R}_{k,*}^{\top} \nabla_{h}f\big{(}h^{k,*},y^{k}\big{)} \tag{3}\]
In practice, it is generally difficult to find \(h^{k,*}\), \(\mathsf{R}_{k,*}^{\top}\). The solution is to iteratively solve the lower-level problem for a fixed upper-level decision and use the resulting lower-level iterates to estimate the exact gradient (3). In this paper, we consider the following gradient estimator
\[\widehat{\nabla}F(y^{k})=\nabla_{y}f\big{(}h^{k,D},y^{k}\big{)}+\mathsf{R}_{k, D}^{\top}\nabla_{h}f\big{(}h^{k,D},y^{k}\big{)} \tag{4}\]
where \(D\) is the fixed number of lower-level iterations for each \(k\).
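As a concrete illustration of how (4) is assembled once the inner loop has returned \(h^{k,D}\) and \(\mathsf{R}_{k,D}\), a minimal NumPy sketch is given below; passing the partial gradients of \(f\) as callables is an interface chosen for this illustration, not something prescribed by the paper.

```python
import numpy as np

def hypergradient(grad_y_f, grad_h_f, h_kD, y_k, R_kD):
    """Estimator (4): grad_y f(h^{k,D}, y^k) + R_{k,D}^T grad_h f(h^{k,D}, y^k).

    grad_y_f, grad_h_f : callables returning the partial gradients of f
    h_kD               : last lower-level iterate, shape (d_l,)
    y_k                : current upper-level iterate, shape (d_u,)
    R_kD               : approximate Jacobian of h w.r.t. y, shape (d_l, d_u)
    """
    return grad_y_f(h_kD, y_k) + R_kD.T @ grad_h_f(h_kD, y_k)
```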
### Algorithm
In the algorithm considered in this paper, a projected gradient descent (PGD) step updates the upper-level decision and a projected mirror descent (PMD) step w.r.t. Kullback-Leibler (KL) divergence updates the lower-level (also discussed in Krichene et al. [8]) since the following projection step (5) on simplex has a closed form (6) in Algorithm 1.
\[h^{k,t+1}:=\arg\min_{h\in\mathcal{H}}\,g^{\eta}(h^{k,t},y^{k})+\big{\langle} \nabla g^{\eta}(h^{k,t},y^{k}),h-h^{k,t}\big{\rangle}+\frac{1}{\alpha_{k}} \overline{\mathsf{D}}_{\text{KL}}(h,h^{k,t}) \tag{5}\]
where KL divergence \(\overline{\mathsf{D}}_{\text{KL}}(h,h^{\prime})\) is specified in Appendix B. The Jacobian matrix \(\mathsf{R}_{k,t}\) is obtained by iteratively differentiating through the PMD dynamics. The overall algorithm is outlined as below.
```
1:Input: \(K\), \(D\), \(\eta\), step sizes: \((\beta_{k},\ \alpha_{k})_{k=0}^{K-1}\).
2:Initialize: \(y^{0}\in\mathcal{C}\), \(h_{i}^{0}\in\widetilde{\mathcal{H}}_{i}\)\(i=1,\ldots,\mathsf{N}\).
3:for\(k=0,\ldots,K-1\)do
4: If \(k>0\) set \(h^{k,0}=h^{k-1,D}\), otherwise \(h^{k,0}=h^{0}\). Set \(\mathsf{R}_{k,0}=\partial h^{k,0}/\partial y^{k}=0\).
5:for\(t=0,\ldots,D-1\)do \[h_{i}^{k,t+1} =\frac{h_{i}^{k,t}\circ\exp\left(-\alpha_{k}\nabla_{h_{i}}g^{ \eta}(h^{k,t},y^{k})\right)}{{h_{i}^{k,t}}^{\top}\exp\left(-\alpha_{k}\nabla_ {h_{i}}g^{\eta}(h^{k,t},y^{k})\right)}\quad\forall i\in[\mathsf{N}]\] (6) \[\mathsf{R}_{k,t+1} =\Phi(\mathsf{R}_{k,t},h^{k,t},h^{k,t+1},y^{k})\] (7)
6:endfor
7: Obtain estimation of gradient \(\widehat{\nabla}F(y^{k})\) using (4), \[y^{k+1}=\text{Proj}_{\mathcal{C}}\big{(}y^{k}-\beta_{k}\widehat{\nabla}F(y^{k} )\big{)}\] (8)
8:endfor
```
**Algorithm 1** PGD with Iterative Differentiation of PMD on simplex
Consider \(\widetilde{\mathcal{H}}=\widetilde{\mathcal{H}}_{1}\times\cdots\times \widetilde{\mathcal{H}}_{\mathsf{N}}\subset\text{int}(\mathcal{H})\) derived in Lemma 14 (under Assumption (12)) where for each \(i=1,\ldots,\mathsf{N}\)
\[\widetilde{\mathcal{H}}_{i}=\big{\{}h_{i}\in\mathbb{R}^{d_{\ell}^{i}}:h_{i} \geq\nu^{\min}\cdot\mathbf{1}_{d_{\ell}^{i}},\ \mathbf{1}^{\top}h_{i}=1\big{\}}\subset\text{int}(\mathcal{H}_{i}),\]
\(0<\nu^{\min}:=e^{-\frac{2\Omega_{g}}{\eta}}/\mathsf{d}_{\ell}^{\max}<1\). Note in Algorithm 1, the initialization \(h^{0}=(h_{i}^{0},i\in[\mathsf{N}])\in\widetilde{\mathcal{H}}\). Consequently, from Lemma 14 it holds that \(h^{k,t}\in\widetilde{\mathcal{H}}\ \forall k,t\) and \(h^{k,*}\in\widetilde{\mathcal{H}}\ \forall k\).
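For reference, the closed-form update (6) is a multiplicative-weights step followed by renormalization on each population's simplex; the following is a minimal NumPy sketch of one such step (the gradient blocks of \(g^{\eta}\) are assumed to be precomputed, and the Jacobian propagation (7) is omitted).

```python
import numpy as np

def pmd_step(h_blocks, grad_blocks, alpha):
    """One entropic mirror-descent step, i.e., update (6), applied blockwise on the simplex.

    h_blocks    : list of per-population flow vectors h_i (nonnegative, each summing to 1)
    grad_blocks : list of gradient blocks grad_{h_i} g^eta(h, y), one per population
    alpha       : lower-level step size
    """
    h_next = []
    for h_i, g_i in zip(h_blocks, grad_blocks):
        w = h_i * np.exp(-alpha * g_i)   # multiplicative-weights reweighting
        h_next.append(w / np.sum(w))     # renormalize onto the simplex H_i
    return h_next
```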
### Convergence rates
**Theorem 1**: _Consider \(\mathcal{C}=\mathbb{R}^{d_{u}}\). There exist constants \(\overline{\alpha}\), \(\underline{\beta}\), \(\overline{\beta}\) such that if \(\alpha_{k}=\alpha\), \(\beta_{k}=\beta\) satisfying \(0<\alpha\leq\overline{\alpha}\) and \(\underline{\beta}\leq\beta\leq\overline{\beta}\), then the output of Algorithm 1 satisfies the following for \(D\geq D^{0}:=D^{0}(\eta,\alpha)\)_
\[\frac{1}{K}\sum_{k=0}^{K-1}\|\nabla F(y^{k})\|^{2}\leq\mathcal{O}\left(\frac{1 }{K}\right)+\mathcal{O}(\lambda^{D}) \tag{9}\]
_where \(0<\lambda:=\lambda(\eta,\alpha)<1\) is a constant._
**Theorem 2**: _Consider closed convex set \(\mathcal{C}\subset\mathbb{R}^{d_{u}}\). There exist constants \(\overline{\alpha}\), \(\underline{\beta}\), \(\overline{\beta}\) such that if \(\alpha_{k}=\alpha\), \(\beta_{k}=\beta\) satisfying \(0<\alpha\leq\overline{\alpha}\) and \(\underline{\beta}\leq\beta\leq\overline{\beta}\), then the output of Algorithm 1 satisfies the following for \(D\geq D^{0}:=D^{0}(\eta,\alpha)\)_
\[\frac{1}{K}\sum_{k=1}^{K}\operatorname{dist}^{2}\left(-\nabla F(y^{k}),N_{ \mathcal{C}}(y^{k})\right)\leq\mathcal{O}\left(\frac{1}{K}\right)+\mathcal{O} (\lambda^{D}) \tag{10}\]
_where \(0<\lambda:=\lambda(\eta,\alpha)<1\) is a constant, \(N_{\mathcal{C}}(y)=\left\{p\in\mathbb{R}^{d_{u}}|\langle p,\ y^{\prime}-y \rangle\leq 0\ \forall y^{\prime}\in\mathcal{C}\right\}\) is the normal cone to the set \(\mathcal{C}\) at point \(y\) and \(\operatorname{dist}\left(-\nabla F(y),N_{\mathcal{C}}(y)\right)\) is the distance of \(-\nabla F(y)\) to \(N_{\mathcal{C}}(y)\)._
## 4 Example Applications
### Traffic Equilibrium/Routing Game (Lower-level problem)
Consider a road network with \(\mathsf{A}\) number of links and \(\mathsf{N}\) number of populations where each population is characterized by a unique origin-destination (OD) pair. For each population or OD pair \(i\in[\mathsf{N}]\), let \(\mathsf{d}_{\ell}^{i}\) be the number of different path (or route) choices for travel and \(\xi_{i}\) be the fixed travel demand. Define \(\delta^{ai}(\xi_{i})\) as the vector of link-paths weighted incidence between link \(a\) and OD pair \(i\). Its \(j^{\text{th}}\) entry is
\[\delta_{j}^{ai}(\xi_{i})=\begin{cases}\xi_{i}&\text{if path $j$ uses link $a$}\\ 0&\text{o.w.}\end{cases}\]
where \(j\in[\mathsf{d}_{\ell}^{i}]\). For each link \(a\in[\mathsf{A}]\), the overall vector of link-paths weighted incidence is given by \(\delta^{a}(\xi)=[\delta^{a1}(\xi_{1});\ldots;\delta^{a\mathsf{N}}(\xi_{ \mathsf{N}})]_{\mathsf{d}_{\ell}\times 1}\) where \(\mathsf{d}_{\ell}=\sum_{i\in[\mathsf{N}]}\mathsf{d}_{\ell}^{i}\). Let \(h_{i}=(h_{i,j},\ j\in[\mathsf{d}_{\ell}^{i}])_{\mathsf{d}_{\ell}^{i}\times 1}\) denote the vector of path flows for population \(i\) where \(h_{i,j}\) is the fraction of population \(i\) traveling on path \(j\in[\mathsf{d}_{\ell}^{i}]\). The overall vector of path flows is \(h=(h_{i},\ i\in[\mathsf{N}])_{\mathsf{d}_{\ell}\times 1}\). Say \(y\) is some fixed upper-level decision such as capacity expansion of road networks or traffic signal setting. For a given \(y\), if \(c_{a}(x_{a},y)\) is the travel time on link \(a\) where \(x_{a}\) is the amount of flow through \(a\), then the lower level objective function \(g(\cdot)\) which is the potential function of the routing game is given by
\[g(h,y)=\sum_{a\in[\mathsf{A}]}\int_{0}^{x_{a}}c_{a}(u,y)du \tag{11}\]
where \(x_{a}=\delta^{a}(\xi)^{\top}h=\sum_{i\in[\mathsf{N}]}\sum_{j\in[\mathsf{d}_{ \ell}^{i}]}\delta_{j}^{ai}(\xi_{i})\cdot h_{i,j}\) is the total flow through link \(a\) and \(h_{i}\in\mathcal{H}_{i}=\left\{h_{i}\in\mathbb{R}^{d_{\ell}^{i}}:h_{i}\geq 0,\ \mathbf{1}^{ \top}h_{i}=1\right\}\).
Define \(\Delta(\xi)=\left[\delta^{1}(\xi),\ \ldots,\ \delta^{\mathsf{A}}(\xi)\right]_{ \mathsf{d}_{\ell}\times\mathsf{A}}\) as the full links-paths weighted incidence matrix. Note that \(\nabla_{h}g(h,y)=\Delta(\xi)\left[c_{1}(x_{1},y);\ \ldots;\ c_{\mathsf{A}}(x_{ \mathsf{A}},y)\right]_{\mathsf{A}\times 1}\), \(\nabla_{h}^{2}g(h,y)=\Delta(\xi)\mathrm{diag}\left(\frac{\partial c_{1}(x_{1},y)} {\partial x_{1}},\ldots,\frac{\partial c_{\mathsf{A}}(x_{\mathsf{A}},y)}{ \partial x_{\mathsf{A}}}\right)\Delta(\xi)^{\top}\) where \(x=\Delta(\xi)^{\top}h\). Then \(\nabla_{h}^{2}g(h,y)\succeq 0\) if \(c_{a}(x_{a},y)\) is monotonically increasing in \(x_{a}\) i.e \(\partial c_{a}(x_{a},y)/\partial x_{a}\geq 0\ \forall a\). This monotonicity condition implies convexity of \(g\). Therefore, if \(c_{a}(\cdot)\) is differentiable w.r.t. \(x_{a}\) and \(y\) then convexity of \(g\) as well as assumptions (L1)-(L6) can be checked.
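To make these quantities concrete, a small NumPy sketch of how the link flows, the potential (11) and its gradient \(\nabla_{h}g(h,y)=\Delta(\xi)\,c(x,y)\) could be evaluated is shown below. The BPR-style link cost is only one admissible choice of \(c_{a}\) (it matches the form used later in Section 4.4), and treating \(y\) as a per-link vector is an assumption of this illustration.

```python
import numpy as np

def link_costs(x, A, B, K, y):
    # Illustrative BPR-style link cost c_a(x_a, y) = A_a + B_a * (x_a / (K_a + y_a))^4.
    return A + B * (x / (K + y)) ** 4

def potential_and_gradient(h, Delta, A, B, K, y):
    """Potential (11) and its gradient grad_h g(h, y) = Delta(xi) c(x, y), with x = Delta(xi)^T h.

    h     : stacked path-flow vector of length d_l
    Delta : weighted link-path incidence matrix Delta(xi), shape (d_l, number of links)
    """
    x = Delta.T @ h                                        # total link flows
    g = np.sum(A * x + B * x ** 5 / (5 * (K + y) ** 4))    # closed-form integral of each c_a
    grad = Delta @ link_costs(x, A, B, K, y)
    return g, grad
```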
### Application-1: Transportation Network Design
Consider a system planner who wants to make a decision on expanding the capacity of roads in a transportation network that minimizes the total time of travel plus the construction cost. But the travel time in the network actually depends on the equilibrium traffic pattern induced by capacity decisions. More formally, let \(y_{a}\) be the decision on capacity expansion of link \(a\) upper bounded by \(u_{a}\), \(\widetilde{c}_{a}(\cdot)\) be the cost of travel on link \(a\), \(G_{a}(\cdot)\) be the investment cost for construction on link \(a\) and \(\theta\) is the conversion factor from investment cost to travel times. With \(y=(y_{a},a\in[\mathsf{A}])\), the objective function and constraints of the upper-level (system planner) problem are
\[\begin{array}{l}F(y)=f(h^{*}(y),y)=\sum_{a\in[\mathsf{A}]}\big{(}\widetilde{c }_{a}(x_{a}^{*}(y),y_{a})\cdot x_{a}^{*}(y)+\theta G_{a}(y_{a})\big{)}\\ \mathcal{C}=\big{\{}y|\ 0\leq y_{a}\leq u_{a}\ \forall a\in[\mathsf{A}]\big{\}} \end{array} \tag{12}\]
where \(x^{*}(y)=\Delta(\xi)^{\top}h^{*}(y)\) is the vector of link flows and \(h^{*}(y)\) is the equilibrium path flow of the routing game described in Section 4.1. The assumptions (11), (12) are checkable for well-behaved functions \(\widetilde{c}_{a}(\cdot)\), \(G_{a}(\cdot)\) such as differentiable once w.r.t. \(x_{a}\) and \(y\).
### Application-2: Traffic Signal Control
Consider the problem of optimization of traffic signal timings for area traffic control in order to minimize the total rate of delay and number of stops in a traffic network where both of the performance indices depend on the resulting traffic pattern from the setting of traffic signals. More formally, the decision needs to be made on the start and duration of green signal groups at various junctions in a road network. The detailed description of parameters and decision variables are given below.
Parameters
\(\lambda_{\min}\): minimum green
\(\tau_{jlm}\): clearance time between the green for signal group \(j\) and \(l\) at junction \(m\)
\(\Omega_{m}(j,l)\): 0 if the start of green for signal group \(j\) proceeds that of \(l\) at junction \(m\), 1 otherwise
\(D_{a}(\cdot)\): rate of delay on link \(a\)
\(S_{a}(\cdot)\): number of stops per unit time on link \(a\)
Decision variables
\(\varsigma\): reciprocal of common cycle time
\(\theta=[\theta_{jm}]\): vector of start of green, where \(\theta_{jm}\) is the start of green for signal group \(j\) at junction \(m\)
\(\phi=[\phi_{jm}]\): vector of duration of green where \(\phi_{jm}\) is the duration of green for group \(j\) at junction \(m\)
Let \(y:=(\varsigma,\theta,\phi)\). The constraint set and objective function (which is the weighted combination of rate of delay and number of stops with weights \(W_{aD}\) and \(W_{aS}\) respectively on link \(a\)) of the upper-level problem are given below.
\[\begin{array}{l}F(y)=f(h^{*}(y),y)=\sum_{a\in[\mathsf{A}]}W_{aD}D_{a}(x^{*} (y),y)+W_{aS}S_{a}(x^{*}(y),y)\\ \mathcal{C}=\big{\{}(\varsigma,\theta,\phi)|\ \varsigma_{\min}\leq \varsigma\leq\varsigma_{\max},\ \lambda_{\min}\varsigma\leq\phi_{jm}\leq 1\ \forall j,m,\\ \theta_{jm}+\phi_{jm}+\tau_{jlm}\varsigma\leq\theta_{lm}+\Omega_{m}(j,l)\ j \neq l,\forall j,l,m\big{\}}\end{array} \tag{13}\]
where \(x^{*}(y)=\Delta(\xi)^{\top}h^{*}(y)\) is the vector of link flows and \(h^{*}(y)\) is the equilibrium path flow of the routing game described in Section 4.1. Again the assumptions (11), (12) are checkable for functions \(D_{a}(\cdot)\), \(S_{a}(\cdot)\) differentiable once w.r.t. \(x\) and \(y\).
### Numerical Experiments
We test Algorithm 1 on the Sioux Falls network, which is a standard benchmark in the traffic equilibrium literature. It has 24 nodes, 76 arcs, and 552 OD pairs. For the computations in this section, we consider the application of transport network design (discussed in Section 4.2).
All the problem data has been taken from the paper by Suwansirikul et al. [17]. The travel time function is as given below
\[c_{a}(x_{a},y)=A_{a}+B_{a}\left(\frac{x_{a}}{K_{a}+y_{a}}\right)^{4}\]
where \(A_{a}\) is the free-flow travel time, \(K_{a}\) is the existing capacity of link \(a\) and \(B_{a}>0\) is a parameter. All these values are specified in Suwansirikul et al. [17] for each link \(a\). Also, the travel demand matrix between OD pairs is given. In the upper-level objective, the travel cost function \(\widetilde{c}_{a}(x_{a},y_{a})\) on link \(a\) is set to be the same as \(c_{a}(x_{a},y)\), the investment cost function is \(G_{a}(y_{a})=d_{a}\cdot y_{a}^{2}\), and the conversion factor is \(\theta=0.001\). Again, the values of \(d_{a}\) for each link \(a\) are given in Suwansirikul et al. [17]. Out of 76 arcs, Suwansirikul et al. [17] select 10 arcs (16, 17, 19, 20, 25, 26, 29, 39, 48, 74) for link capacity expansion upper bounded by 25. Specifically,
\[u_{a}=\begin{cases}25&\text{if }a\in\{16,17,19,20,25,26,29,39,48,74\}\\ 0&\text{o.w.}\end{cases}\]
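In the same spirit as the sketch at the end of Section 4.1, the upper-level pieces used in this experiment — the objective (12) with \(\widetilde{c}_{a}=c_{a}\) and \(G_{a}(y_{a})=d_{a}y_{a}^{2}\), and the projection onto the box \(0\leq y_{a}\leq u_{a}\) needed by the PGD step (8) — could be coded as follows; the function names and full-length parameter vectors are assumptions of this illustration.

```python
import numpy as np

def network_design_objective(h_star, y, Delta, A, B, K, d, theta=0.001):
    """F(y) = sum_a c_a(x_a*, y_a) * x_a* + theta * d_a * y_a^2, cf. (12) with c~_a = c_a."""
    x_star = Delta.T @ h_star                    # equilibrium link flows
    costs = A + B * (x_star / (K + y)) ** 4      # BPR cost with expanded capacity K_a + y_a
    return costs @ x_star + theta * np.sum(d * y ** 2)

def project_capacity(y, u):
    # Euclidean projection onto C = {y : 0 <= y_a <= u_a}, as used by the PGD step (8).
    return np.clip(y, 0.0, u)
```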
For each OD pair, we extract 5 shortest paths based on the free-flow travel time \(A_{a}\) and use them along with the travel demand data to construct the matrix \(\Delta(\xi)\) described in Section 4.1. We initialize the upper-level iterate \(y^{0}=\mathbf{0}\) and the lower-level iterate \(h_{i}^{0}=\left[\frac{1}{5}\;\frac{1}{5}\;\frac{1}{5}\;\frac{1}{5}\;\frac{1}{5}\right]\) for each OD pair \(i\). We run Algorithm 1 for different values of fixed lower-level iterations, \(D\in\{40,60,80,100,120\}\). Other algorithm parameters: \(\eta=0.01\), lower-level step size \(\alpha=0.50\), upper-level step size \(\beta=0.25\), and number of upper-level iterations \(K=100\). All computations run on a Windows 10 machine with 16GB RAM and a 1.80GHz Intel Core i7 CPU. We plot the upper-level objective \(f(h^{k,D},y^{k})\) versus the upper-level iteration \(k\) in Figure 1 and summarize the computational time results in Table 1. In order to illustrate the significance of \(D^{0}\) in Theorems 1 and 2, we also plot the convergence of the lower-level iterates \(h^{k,t}\), \(\mathsf{R}_{k,t}\) for \(k=0\) in Figure 2.
The following observations can be made:
1. From Figure 1, observe that for a fixed \(k\) the upper-level objective improves with increasing \(D\). This improvement comes at the cost of an increased runtime per upper-level iteration, as reported in Table 1. The runtime grows by roughly 20 seconds for every increment of 20 in \(D\).
2. We can also observe the law of diminishing returns for successive increments in \(D\) as it results in smaller improvement in the upper-level objective compared to the computational effort needed.
3. From Figure 2, observe that the error in \(h^{k,t}\) converges monotonically to zero, whereas the error in \(\mathsf{R}_{k,t}\) is not necessarily monotonic: it first increases to a peak and then decays to zero. This behavior is empirical evidence for our convergence rate results in Theorems 1 and 2, in the sense that a threshold of \(D^{0}\) lower-level iterations needs to be performed in order to ensure convergence.
\begin{table}
\begin{tabular}{|c|c|} \hline
\(D\) & CPU Time per \(k\) (in seconds) \\ \hline
40 & 30.83 \\
60 & 50.00 \\
80 & 68.11 \\
100 & 87.48 \\
120 & 112.00 \\ \hline
\end{tabular}
\end{table}
Table 1: Average time per upper-level iteration.
## 5 Proofs
In this section, we provide the proof of theorems stated in Section 3.3. In Section 5.1, we state lemmas corresponding to lower-level problem which will be used in later proofs. In Section 5.2, we provide lemmas that prove smoothness of \(F(y)\). In Section 5.3, we derive the bound on gradient estimation error, and in Section 5.4 we use this bound to prove Theorems 1 and 2.
### Lower-level problem
Denote the error in lower-level solution and error in its Jacobian matrix, respectively as
\[\varepsilon_{h}^{k,t}=\overline{\mathsf{D}}_{\mathsf{KL}}(h^{k,*},h^{k,t}), \qquad\varepsilon_{\mathsf{R}}^{k,t}=\|\mathsf{R}_{k,t}-\mathsf{R}_{k,*}\|^{2}.\]
The initialization error for \(k>0\), \(\varepsilon_{h}^{k,0}=\overline{\mathsf{D}}_{\mathsf{KL}}\big{(}h^{*}(y^{k}), h^{k,0}\big{)}:=\overline{\mathsf{D}}_{\mathsf{KL}}\big{(}h^{*}(y^{k}),h^{k-1,D} \big{)}\) and for \(k=0\), \(\varepsilon_{h}^{0,0}=\overline{\mathsf{D}}_{\mathsf{KL}}\big{(}h^{0,*},h^{0,0 }\big{)}:=\overline{\mathsf{D}}_{\mathsf{KL}}\big{(}h^{*}(y^{0}),h^{0}\big{)}\).
#### 5.1.1 Lipschitz continuity and boundedness results
**Lemma 1**:
1. _(Globally bounding_ \(\varepsilon_{h}^{k,0}\)_) For each_ \(k\)_, the initial error in the lower-level solution is bounded, i.e._ \(\varepsilon_{h}^{k,0}\leq\overline{\mathsf{D}}_{\mathsf{KL}}^{\max}=\mathsf{N}\ln(1/\nu^{\min})+\nu^{\min}\sum_{i=1}^{\mathsf{N}}\mathsf{d}_{\ell}^{i}\ln(1/\mathsf{d}_{\ell}^{i})\)_._
Proof of Lemma 1: See Appendix G.
**Lemma 2**:
1. \(\nabla_{h}\overline{\psi}_{\mathsf{H}}(h)\) _is_ \(1/\nu^{\min}\)_-Lipschitz continuous_ \(\forall h\in\widetilde{\mathcal{H}}\)_._
2. _For a fixed_ \(y\)_,_ \(\nabla_{h}g^{n}(h,y)\) _is_ \(L_{g}^{n}\)_-Lipschitz continuous_ \(\forall h\in\widetilde{\mathcal{H}}\) _where_ \(L_{g}^{\eta}=L_{g}+\eta/\nu^{\min}\)_._
Proof of Lemma 2: See Appendix C.
#### 5.1.2 Convergence rate of lower-level iterates
**Lemma 3**:
1. _Let the sequence_ \(\{h^{k,t}\}\) _be generated by Algorithm 1. For a fixed_ \(k\) _and_ \(0<\alpha_{k}\leq\widetilde{\alpha}\)_,_ \[\frac{1}{2}\|h^{k,*}-h^{k,t}\|^{2}\leq\varepsilon_{h}^{k,t}\leq\varepsilon_{h }^{k,0}\ (1-\eta\alpha_{k})^{t}\] (14)
_where \(\widetilde{\alpha}=1/(L_{g}+\eta/\nu^{\min})<1/\eta\)._
Proof of Lemma 3: See Appendix D.
Define \(\overline{\alpha}=\min\big{\{}\widetilde{\alpha},1/(2L_{g}^{2}/\eta+2\eta)\big{\}}\).
Figure 2: Convergence behavior of lower-level (\(k=0\) and \(D=100\)).
**Lemma 4**: _Let the sequence \(\{\mathsf{R}_{k,t}\}\) be generated by Algorithm 1. For a fixed \(k\) and \(0<\alpha_{k}\leq\overline{\alpha}\),_
\[\varepsilon_{\mathsf{R}}^{k,t}\leq(\lambda_{k})^{t}\,\left(\Gamma_{1k}+\Gamma_{ 2k}\ \varepsilon_{h}^{k,0}\right),\quad t\geq T_{k}^{0} \tag{15}\]
_where \(0<\lambda_{k}<1\) and constants \(\lambda_{k}\), \(\Gamma_{1k}\), \(\Gamma_{2k}\), \(T_{k}^{0}\) depend on \(\eta\) & \(\alpha_{k}\)._
See Appendix D.2.2.
**Remark 1**: All results mentioned hereafter hold for \(\alpha_{k}\) satisfying \(0<\alpha_{k}\leq\overline{\alpha}\).
### Smoothness of \(F(y)\)
The smoothness of \(F(y)\) follows from key Lemmas 5 & 6.
**Lemma 5**: _Denote \(\mathsf{R}_{*}(y)=\partial h^{*}(y)/\partial y\)._
1. \(\exists\ \mathsf{C}_{0}>0\) _such that for any upper level decision_ \(y\)_,_ \(\left\|\mathsf{R}_{*}(y)\right\|\leq\mathsf{C}_{0}\)_._
2. _Consequently,_ \(h^{*}(y)\) _is_ \(\mathsf{C}_{0}\)_-Lipschitz continuous w.r.t._ \(y\)_._
**Lemma 6**: _The Jacobian matrix \(\mathsf{R}_{*}(y)\) is \(L_{\mathsf{R}_{*}}\)-Lipschitz continuous w.r.t. \(y\)._
**Lemma 7**: \(\nabla F(y)\) _is \(L_{F}\)-Lipschitz continuous w.r.t. \(y\) where \(L_{F}=L_{\mathsf{R}_{*}}\Omega_{f}+L_{f}(1+\mathsf{C}_{0})^{2}\)._
The proof of Lemmas 5-7 can be found in Appendix F.
### Gradient estimation error
The goal of this section is to derive Lemma 10 which gives an expression for gradient estimation error that is easy to work with. Specifically, we bound the gradient estimation error in terms of quantities that decay exponentially w.r.t. \(D\) (no. of lower-level iterations) plus the sum of deviations of negative gradient from the normal cone.
**Lemma 8**: _Let the sequence \(\{y^{k}\}\) be generated by Algorithm 1. For \(D\geq T_{k}^{0}\),_
\[\left\|\widehat{\nabla}F(y^{k})-\nabla F(y^{k})\right\|^{2}\leq\mathsf{C}_{1k }^{D}+\mathsf{C}_{2k}^{D}\ \varepsilon_{h}^{k,0}\]
_where \(\mathsf{C}_{1k}^{D}=\mathcal{O}\left((\lambda_{k})^{D}\right)\), \(\mathsf{C}_{2k}^{D}=\mathcal{O}\left((1-\eta\alpha_{k})^{D}\right)+\mathcal{O }((\lambda_{k})^{D})\), and \(\lambda_{k}\), \(T_{k}^{0}\) as given in Lemma 4._
_Proof of Lemma 8_. Recall \(\nabla F(y^{k})\ =\ \nabla_{y}f\big{(}h^{k,*},y^{k}\big{)}\ +\ \mathsf{R}_{k,*}^{\top}\nabla_{h}f\big{(}h^{k,*},y^{k}\big{)}\quad\text{ and}\quad\widehat{\nabla}F(y^{k})\ =\ \nabla_{y}f\big{(}h^{k,D},y^{k}\big{)}+\mathsf{R}_{k,D}^{\top}\nabla_{h}f \big{(}h^{k,D},y^{k}\big{)}\). Therefore, we have
\[\widehat{\nabla}F(y^{k})-\nabla F(y^{k})=\nabla_{y}f\big{(}h^{k,D},y^{k} \big{)}-\nabla_{y}f\big{(}h^{k,*},y^{k}\big{)} +\left(\mathsf{R}_{k,D}-\mathsf{R}_{k,*}\right)^{\top}\nabla_{h}f \big{(}h^{k,D},y^{k}\big{)}\] \[+\mathsf{R}_{k,*}^{\top}\big{[}\nabla_{h}f\big{(}h^{k,D},y^{k} \big{)}-\nabla_{h}f\big{(}h^{k,*},y^{k}\big{)}\big{]}\]
which implies
\[\left\|\widehat{\nabla}F(y^{k})-\nabla F(y^{k})\right\|^{2}\leq 3L_{f}^{2}\ \left\|h^{k,D}-h^{k,*}\right\|^{2}+3\Omega_{f}^{2}\ \left\|\mathsf{R}_{k,D}-\mathsf{R}_{k,*}\right\|^{2}+3L_{f}^{2}\mathsf{C}_{0}^ {2}\ \left\|h^{k,D}-h^{k,*}\right\|^{2}\]
Using \(\|h^{k,D}-h^{k,*}\|^{2}\leq 2\overline{\mathsf{D}}_{\mathsf{KL}}\left(h^{k,*},h^{k, D}\right)=2\varepsilon_{h}^{k,D}\) gives
\[\left\|\widehat{\nabla}F(y^{k})-\nabla F(y^{k})\right\|^{2} \leq 6L_{f}^{2}\left(1+\mathsf{C}_{0}^{2}\right)\varepsilon_{h}^{k,D}+3\Omega_{f}^{2}\ \varepsilon_{\mathsf{R}}^{k,D}\] \[\leq 6L_{f}^{2}\left(1+\mathsf{C}_{0}^{2}\right)\ \left(1-\eta\alpha_{k}\right)^{D}\ \varepsilon_{h}^{k,0}+3\Omega_{f}^{2}\ (\lambda_{k})^{D}\ \left(\Gamma_{1k}+\Gamma_{2k}\ \varepsilon_{h}^{k,0}\right)\] (from Lemma 3 and Lemma 4)
Taking \(\mathsf{C}_{1k}^{D}=3\Omega_{f}^{2}(\lambda_{k})^{D}\Gamma_{1k}\) and \(\mathsf{C}_{2k}^{D}=6L_{f}^{2}\left(1+\mathsf{C}_{0}^{2}\right)\left(1-\eta \alpha_{k}\right)^{D}+3\Omega_{f}^{2}(\lambda_{k})^{D}\Gamma_{2k}\) completes the proof.
**Remark 2**: For \(\alpha_{k}=\alpha\), it can be shown that \(\lambda_{k}=\lambda\), \(T_{k}^{0}=T^{0}\), \(\mathsf{C}_{1k}^{D}=\mathsf{C}_{1}^{D}:=\mathcal{O}\left(\lambda^{D}\right)\), \(\mathsf{C}_{2k}^{D}=\mathsf{C}_{2}^{D}:=\mathcal{O}\left((1-\eta\alpha)^{D} \right)+\mathcal{O}(\lambda^{D})\).
**Lemma 9**: _(Convergence of \(h^{k,0}\) w.r.t. \(k\)) Assume \(\alpha_{k}=\alpha\) satisfying \(0<\alpha\leq\overline{\alpha}\) and \(\beta_{k}\) satisfying \(0<\beta_{k}\leq\frac{\nu^{\min}\gamma}{3\sqrt{2}\mathsf{C}_{0}}\). Define \(\gamma=1-(1-\eta\alpha)^{D}\) and \(D^{0}=\{D^{0}\geq T^{0}|\forall D\geq D^{0}:\gamma\geq 4/9,\ 0<\mathsf{C}_{2}^{D}\leq 1/2\}\). For \(D\geq D^{0}\),_
\[\varepsilon_{h}^{k,0}\leq\Big{(}1-\frac{\gamma}{4}\Big{)}^{k}\ \ \varepsilon_{h}^{0,0}+\mathsf{C}_{1}^{D}\left(1-\frac{ \gamma}{4}\right)^{k}+\gamma/4\sum_{\ell=0}^{k-1}\Big{(}1-\frac{\gamma}{4} \Big{)}^{k-1-\ell}\|\nabla F(y^{\ell})+z^{\ell+1}\|^{2}\]
_where \(z^{k+1}\in N_{\mathcal{C}}(y^{k+1})\) (\(N_{\mathcal{C}}(y)\) is the normal cone to the set \(\mathcal{C}\) at point \(y\))._
Proof of Lemma 9.: See Appendix A.
**Lemma 10**: _Assume \(0<\alpha_{k}=\alpha\leq\overline{\alpha}\) and \(0<\beta_{k}\leq\frac{\nu^{\min}\gamma}{3\sqrt{2}\mathsf{C}_{0}}\). Let the sequence \(\{y^{k}\}\) be generated by Algorithm 1. For \(D\geq D^{0}\),_
\[\begin{array}{l}\big{\|}\widehat{\nabla}F(y^{k})-\nabla F(y^{k})\big{\|}^{ 2}\\ \leq\mathsf{C}_{1}^{D}\big{(}1+\mathsf{C}_{2}^{D}\big{)}+\mathsf{C}_{2}^{D}\ \left(1-\frac{\gamma}{4}\right)^{k}\ \varepsilon_{h}^{0,0}+\mathsf{C}_{2}^{D}\ \gamma/4\ \sum_{\ell=0}^{k-1}\Big{(}1-\frac{\gamma}{4}\Big{)}^{k-1-\ell}\|\nabla F(y^{ \ell})+z^{\ell+1}\|^{2}\end{array} \tag{16}\]
_where \(z^{k+1}\in N_{\mathcal{C}}(y^{k+1})\) and \(\gamma\), \(D^{0}\) as defined in Lemma 9._
Proof of Lemma 10.: Substitute the bound on \(\varepsilon_{h}^{k,0}\) from Lemma 9 to upper bound the gradient estimation error in Lemma 8 where \(\mathsf{C}_{1k}^{D}=\mathsf{C}_{1}^{D}\), \(\mathsf{C}_{2k}^{D}=\mathsf{C}_{2}^{D}\) when \(\alpha_{k}=\alpha\) (Remark 2). The proof is complete by using \(\big{(}1-\frac{\gamma}{4}\big{)}^{k}\leq 1\) for the term in multiplication with \(\mathsf{C}_{1}^{D}\mathsf{C}_{2}^{D}\).
**Remark 3**: All results mentioned hereafter hold for \(\alpha_{k}=\alpha\) satisfying \(0<\alpha\leq\overline{\alpha}\).
### Proof of Theorems 1 and 2
This section provides the proof of main theorems which follow from key Lemma 11. In Lemma 11, we bound the average deviation of negative gradient from normal cone utilizing the expression for gradient estimation error from Lemma 10.
**Lemma 11**: _Fix \(\beta_{k}:=\beta\) satisfying \(0<\beta\leq\overline{\beta}=\min\Big{\{}\frac{\nu^{\min}\gamma}{3\sqrt{2}\mathsf{C}_{0}},\frac{7}{48L_{F}}\Big{\}}\). For \(D\geq D^{0}\),_
\[\frac{1}{K}\sum_{k=0}^{K-1}\|\nabla F(y^{k})+z^{k+1}\|^{2}\leq\frac{4(F(y^{0})- \inf_{y}F(y))+(\beta+2L_{F}\beta^{2})\sum_{k=0}^{K-1}\Big{\{}3\mathsf{C}_{1}^ {D}+\big{(}1-\frac{\gamma}{4}\big{)}^{k}\varepsilon_{h}^{0,0}\Big{\}}}{(\beta- 6L_{F}\beta^{2})\,K} \tag{17}\]
_where \(z^{k+1}\in N_{\mathcal{C}}(y^{k+1})\) and \(\gamma\), \(\mathsf{C}_{1}^{D}\), \(D^{0}\) as defined in Lemma 10._
Proof of Lemma 11.: From Lemma 7, we know that \(F(\cdot)\) is smooth. Therefore, for any \(y\), \(y^{\prime}\), the following holds
\[F(y)\ \leq F(y^{\prime})+\langle\nabla F(y^{\prime}),y-y^{\prime}\rangle+L_{F}/2 \|y-y^{\prime}\|^{2}\]
Take \(y:=y^{k+1}\) and \(y^{\prime}:=y^{k}\) to get
\[F(y^{k+1})\leq F(y^{k})+\big{\langle}\nabla F(y^{k}),y^{k+1}-y^{k}\big{\rangle}+ L_{F}/2\ \|y^{k+1}-y^{k}\|^{2}\]
Since \(z^{k+1}\in N_{\mathcal{C}}(y^{k+1})\), we have \(\langle z^{k+1},y^{k}-y^{k+1}\rangle\leq 0\). Adding \(\langle z^{k+1},y^{k+1}-y^{k}\rangle\geq 0\) to the right-hand side gives
\[\begin{array}{l}F(y^{k+1})\\ \leq F(y^{k})+\big{\langle}\nabla F(y^{k})+z^{k+1},y^{k+1}-y^{k}\big{\rangle}+L _{F}/2\ \|y^{k+1}-y^{k}\|^{2}\\ =F(y^{k})-\beta_{k}\big{\langle}\nabla F(y^{k})+z^{k+1},\widehat{\nabla}F(y^{k} )+z^{k+1}\big{\rangle}+L_{F}/2\ \beta_{k}^{2}\|\widehat{\nabla}F(y^{k})+z^{k+1}\|^{2}\\ \leq F(y^{k})-\beta_{k}\|\nabla F(y^{k})+z^{k+1}\|^{2}-\beta_{k}\big{\langle} \nabla F(y^{k})+z^{k+1},\widehat{\nabla}F(y^{k})-\nabla F(y^{k})\big{\rangle}\\ \qquad\qquad\qquad\qquad+L_{F}\beta_{k}^{2}\|\widehat{\nabla}F(y^{k})-\nabla F(y ^{k})\|^{2}+L_{F}\beta_{k}^{2}\|\nabla F(y^{k})+z^{k+1}\|^{2}\end{array}\]
using \(-a^{\top}b\leq\|a\|\cdot\|b\|\leq(\|a\|^{2}+\|b\|^{2})/2\)
\[F(y^{k+1})\] \[\leq F(y^{k})-\left(\frac{\beta_{k}}{2}-L_{F}\beta_{k}^{2}\right)\|\nabla F(y^{k})+z^{k+1}\|^{2}+\left(\frac{\beta_{k}}{2}+L_{F}\beta_{k}^{2}\right)\|\widehat{\nabla}F(y^{k})-\nabla F(y^{k})\|^{2}\] \[\leq F(y^{k})-\left(\frac{\beta_{k}}{2}-L_{F}\beta_{k}^{2}\right)\|\nabla F(y^{k})+z^{k+1}\|^{2}\] \[\quad+\left(\frac{\beta_{k}}{2}+L_{F}\beta_{k}^{2}\right)\left\{\mathsf{C}_{1}^{D}\big(1+\mathsf{C}_{2}^{D}\big)+\mathsf{C}_{2}^{D}\left(1-\frac{\gamma}{4}\right)^{k}\varepsilon_{h}^{0,0}+\mathsf{C}_{2}^{D}\,\frac{\gamma}{4}\sum_{\ell=0}^{k-1}\left(1-\frac{\gamma}{4}\right)^{k-1-\ell}\|\nabla F(y^{\ell})+z^{\ell+1}\|^{2}\right\}\] (using Lemma 10)
Summing from \(k=0\) to \(K-1\)
\[\sum_{k=0}^{K-1}\left(\frac{\beta_{k}}{2}-L_{F}\beta_{k}^{2}\right)\|\nabla F(y^{k})+z^{k+1}\|^{2}\] \[\leq F(y^{0})-F(y^{K})+\sum_{k=0}^{K-1}\left(\frac{\beta_{k}}{2}+L_{F}\beta_{k}^{2}\right)\left\{\mathsf{C}_{1}^{D}\big(1+\mathsf{C}_{2}^{D}\big)+\mathsf{C}_{2}^{D}\left(1-\frac{\gamma}{4}\right)^{k}\varepsilon_{h}^{0,0}+\mathsf{C}_{2}^{D}\,\frac{\gamma}{4}\sum_{\ell=0}^{k-1}\left(1-\frac{\gamma}{4}\right)^{k-1-\ell}\|\nabla F(y^{\ell})+z^{\ell+1}\|^{2}\right\}\] \[\leq F(y^{0})-\inf_{y}F(y)+\sum_{k=0}^{K-1}\left(\frac{\beta_{k}}{2}+L_{F}\beta_{k}^{2}\right)\left\{\mathsf{C}_{1}^{D}\big(1+\mathsf{C}_{2}^{D}\big)+\mathsf{C}_{2}^{D}\left(1-\frac{\gamma}{4}\right)^{k}\varepsilon_{h}^{0,0}+\mathsf{C}_{2}^{D}\|\nabla F(y^{k})+z^{k+1}\|^{2}\right\}\] \[\left(\text{using }\beta_{k}=\beta\text{ and }\sum_{k=0}^{K-1}\sum_{\ell=0}^{k-1}\Big(1-\frac{\gamma}{4}\Big)^{k-1-\ell}\|\nabla F(y^{\ell})+z^{\ell+1}\|^{2}\leq\frac{1}{\gamma/4}\sum_{k=0}^{K-1}\|\nabla F(y^{k})+z^{k+1}\|^{2}\right)\] Rearranging, and using \(0<\mathsf{C}_{2}^{D}\leq 1/2\) for \(D\geq D^{0}\) to bound \(\mathsf{C}_{1}^{D}(1+\mathsf{C}_{2}^{D})\leq\frac{3}{2}\mathsf{C}_{1}^{D}\), \(\mathsf{C}_{2}^{D}\left(1-\frac{\gamma}{4}\right)^{k}\varepsilon_{h}^{0,0}\leq\frac{1}{2}\left(1-\frac{\gamma}{4}\right)^{k}\varepsilon_{h}^{0,0}\) and \(\left(\frac{\beta}{2}-L_{F}\beta^{2}\right)-\left(\frac{\beta}{2}+L_{F}\beta^{2}\right)\mathsf{C}_{2}^{D}\geq\frac{1}{4}\left(\beta-6L_{F}\beta^{2}\right)\), we obtain \[\left(\beta-6L_{F}\beta^{2}\right)\sum_{k=0}^{K-1}\|\nabla F(y^{k})+z^{k+1}\|^{2}\leq 4\big(F(y^{0})-\inf_{y}F(y)\big)+\left(\beta+2L_{F}\beta^{2}\right)\sum_{k=0}^{K-1}\Big\{3\mathsf{C}_{1}^{D}+\Big(1-\frac{\gamma}{4}\Big)^{k}\varepsilon_{h}^{0,0}\Big\} \tag{18}\] Dividing both sides by \((\beta-6L_{F}\beta^{2})K>0\) gives (17), which completes the proof.
Proof of Theorem 1.: For \(\beta\in\left(0,\frac{1}{6L_{F}}\right)\), the expression \(\beta-6L_{F}\beta^{2}\) is a concave function of \(\beta\). Note that there exists \(0<b<4\) such that \(\left[\frac{4-b}{48L_{F}},\overline{\beta}\right]\subseteq\left[\frac{4-b}{48L_{F}},\frac{4+b}{48L_{F}}\right]\) and for \(\beta\in\left[\frac{4-b}{48L_{F}},\frac{4+b}{48L_{F}}\right]\), we have
\[\beta-6L_{F}\beta^{2}\geq\frac{16-b^{2}}{384L_{F}},\quad\beta+2L_{F}\beta^{2} \leq\frac{(4+b)(28+b)}{1152L_{F}} \tag{19}\]
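Indeed, \(\beta\mapsto\beta-6L_{F}\beta^{2}\) is concave, so its minimum over the interval \(\left[\frac{4-b}{48L_{F}},\frac{4+b}{48L_{F}}\right]\) is attained at an endpoint, where
\[\frac{4\pm b}{48L_{F}}-6L_{F}\Big(\frac{4\pm b}{48L_{F}}\Big)^{2}=\frac{(4\pm b)\big(48-6(4\pm b)\big)}{48^{2}L_{F}}=\frac{(4\pm b)(4\mp b)}{384L_{F}}=\frac{16-b^{2}}{384L_{F}}\;,\]
while \(\beta\mapsto\beta+2L_{F}\beta^{2}\) is increasing on \(\mathbb{R}_{>0}\), so its maximum over the interval is attained at \(\beta=\frac{4+b}{48L_{F}}\), where
\[\frac{4+b}{48L_{F}}+2L_{F}\Big(\frac{4+b}{48L_{F}}\Big)^{2}=\frac{(4+b)\big(48+2(4+b)\big)}{48^{2}L_{F}}=\frac{(4+b)(28+b)}{1152L_{F}}\;.\]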
Substitute above bounds and the bound on \(\epsilon_{h}^{0,0}\) (from Lemma 1) in (18),
\[\frac{1}{K}\sum_{k=0}^{K-1}\|\nabla F(y^{k})\|^{2}\] \[\leq\frac{1536L_{F}(F(y^{0})-\inf_{y}F(y))+(4+b)(28+b)\overline{ \mathsf{D}}_{\mathsf{Kl}}^{\max}\frac{1-(1-\gamma/4)^{K}}{3\gamma/4}}{(16-b^ {2})\cdot K}+\frac{(4+b)(28+b)}{16-b^{2}}\mathsf{C}_{1}^{D}\]
The proof is complete by taking \(\underline{\beta}=\frac{4-b}{48L_{F}}\) and noting \(\mathsf{C}_{1}^{D}=\mathcal{O}(\lambda^{D})\). \(\square\)
_Proof of Theorem 2._
\[\|\nabla F(y^{k+1})+z^{k+1}\|^{2}\] \[=\|\nabla F(y^{k+1})-\nabla F(y^{k})+\nabla F(y^{k})+z^{k+1}\|^{2}\] \[\leq 2\|\nabla F(y^{k})+z^{k+1}\|^{2}+2\|\nabla F(y^{k+1})-\nabla F (y^{k})\|^{2}\] \[\leq 2\|\nabla F(y^{k})+z^{k+1}\|^{2}+2\ \ L_{F}^{2}\|y^{k+1}-y^{k}\|^{2}\] \[=2\|\nabla F(y^{k})+z^{k+1}\|^{2}+2\ \beta^{2}L_{F}^{2}\| \widehat{\nabla}F(y^{k})+z^{k+1}\|^{2}\] \[\leq(4\ \beta^{2}L_{F}^{2}+2)\|\nabla F(y^{k})+z^{k+1}\|^{2}+4\ \beta^{2}L_{F}^{2}\| \widehat{\nabla}F(y^{k})-\nabla F(y^{k})\|^{2}\] \[\leq(4\ \beta^{2}L_{F}^{2}+2)\|\nabla F(y^{k})+z^{k+1}\|^{2}\] \[\quad+4\ \beta^{2}L_{F}^{2}\Big{\{}\mathsf{C}_{1}^{D}\big{(}1+ \mathsf{C}_{2}^{D}\big{)}+\mathsf{C}_{2}^{D}\ \left(1-\frac{\gamma}{4}\right)^{k}\ \epsilon_{h}^{0,0}+\mathsf{C}_{2}^{D}\ \gamma/4\ \sum_{\ell=0}^{k-1}\left(1-\frac{\gamma}{4}\right)^{k-1-\ell}\|\nabla F(y^{ \ell})+z^{\ell+1}\|^{2}\Big{\}}\] (using Lemma 10)
Summing from \(k=0\) to \(K-1\)
\[\sum_{k=0}^{K-1}\|\nabla F(y^{k+1})+z^{k+1}\|^{2}\] \[\leq\left(4\beta^{2}L_{F}^{2}(1+\mathsf{C}_{2}^{D})+2\right)\sum_ {k=0}^{K-1}\|\nabla F(y^{k})+z^{k+1}\|^{2}+4\ \beta^{2}L_{F}^{2}\sum_{k=0}^{K-1}\Big{\{}\mathsf{C}_{1}^{D}\big{(}1+ \mathsf{C}_{2}^{D}\big{)}+\mathsf{C}_{2}^{D}\ \left(1-\frac{\gamma}{4}\right)^{k} \epsilon_{h}^{0,0}\Big{\}}\] \[\qquad\qquad\qquad\qquad\left(\text{using }\sum_{k=1}^{K-1}\sum_{\ell=0}^{ k-1}\left(1-\frac{\gamma}{4}\right)^{k-1-\ell}\|\nabla F(y^{\ell})+z^{\ell+1}\|^{2} \leq\frac{1}{\gamma/4}\sum_{k=0}^{K-1}\|\nabla F(y^{k})+z^{k+1}\|^{2}\right)\] \[\leq\left(6\beta^{2}L_{F}^{2}+2\right)\sum_{k=0}^{K-1}\|\nabla F (y^{k})+z^{k+1}\|^{2}+\beta^{2}L_{F}^{2}\sum_{k=0}^{K-1}\left\{6\mathsf{C}_{1} ^{D}+2\left(1-\frac{\gamma}{4}\right)^{k}\epsilon_{h}^{0,0}\right\}\] \[\qquad\qquad\qquad\qquad\left(\text{using }0<C_{2}^{D}\leq 1/2\text{ for }D\geq D ^{0}\text{ from the definition of }D^{0}\right)\]
Dividing by \(K\) both sides and using Lemma 11 to bound \(\frac{1}{K}\sum_{k=0}^{K-1}\|\nabla F(y^{k})+z^{k+1}\|^{2}\) gives
\[\frac{1}{K}\sum_{k=0}^{K-1}\|\nabla F(y^{k+1})+z^{k+1}\|^{2}\] \[\leq\frac{(6\beta^{2}L_{F}^{2}+2)}{(\beta-6L_{F}\beta^{2})}\frac{4 (F(y^{0})-\inf_{y}F(y))+(\beta+2L_{F}\beta^{2})\sum_{k=0}^{K-1}\left\{3 \mathsf{C}_{1}^{D}+\left(1-\frac{\gamma}{4}\right)^{k}\epsilon_{h}^{0,0}\right\} }{K}\] \[\quad+\beta^{2}L_{F}^{2}\frac{\sum_{k=0}^{K-1}\left\{6\mathsf{C}_{1 }^{D}+2\left(1-\frac{\gamma}{4}\right)^{k}\epsilon_{h}^{0,0}\right\}}{K}\]
Similar to proof of Theorem 1, \(\exists\ 0<b<4\) such that \(\left[\frac{4-b}{48L_{F}},\overline{\beta}\right]\subseteq\left[\frac{4-b}{48L_{F }},\frac{4+b}{48L_{F}}\right]\) and for \(\beta\in\left[\frac{4-b}{48L_{F}},\frac{4+b}{48L_{F}}\right]\), we have
\[\beta-6L_{F}\beta^{2}\geq\frac{16-b^{2}}{384L_{F}},\quad\beta+2L_{F}\beta^{2} \leq\frac{(4+b)(28+b)}{1152L_{F}},\quad 6\beta^{2}L_{F}^{2}+2\leq\frac{(4+b)^{2}}{384}+2\]
Substitute above bounds and the bound on \(\varepsilon_{h}^{0,0}\) (from Lemma 1) in (18),
\[\frac{1}{K}\sum_{k=0}^{K-1}\|\nabla F(y^{k+1})+z^{k+1}\|^{2}\] \[\leq\frac{(4+b)^{2}+768}{16-b^{2}}\frac{4L_{F}(F(y^{0})-\inf_{y}F (y))+\frac{(4+b)(28+b)}{1152}\sum_{k=0}^{K-1}\left\{3\mathsf{C}_{1}^{D}+\left( 1-\frac{\gamma}{4}\right)^{k}\overline{\mathsf{D}}_{\mathsf{KL}}^{\max} \right\}}{K}\] \[\quad+\frac{(4+b)^{2}}{48^{2}}\frac{\sum_{k=0}^{K-1}\left\{6 \mathsf{C}_{1}^{D}+2\left(1-\frac{\gamma}{4}\right)^{k}\overline{\mathsf{D}}_ {\mathsf{KL}}^{\max}\right\}}{K}\]
Simplifying and rearranging gives
\[\frac{1}{K}\sum_{k=0}^{K-1}\|\nabla F(y^{k+1})+z^{k+1}\|^{2}\] \[\leq\frac{4\widetilde{b}L_{F}(F(y^{0})-\inf_{y}F(y))}{K}+\frac{( 4+b)\big{(}(4+b)+\widetilde{b}(28+b)\big{)}}{1152}\left\{\overline{\mathsf{D }}_{\mathsf{KL}}^{\max}\frac{1-(1-\gamma/4)^{K}}{\gamma/4\cdot K}+3\mathsf{C}_ {1}^{D}\right\}\]
where \(\widetilde{b}=\frac{(4+b)^{2}+768}{16-b^{2}}\). The proof is complete by taking \(\underline{\beta}=\frac{4-b}{48L_{F}}\) and noting \(\mathsf{C}_{1}^{D}=\mathcal{O}(\lambda^{D})\).
|
2307.03460
|
On the convergence of dynamic implementations of Hamiltonian Monte Carlo
and No U-Turn Samplers
|
There is substantial empirical evidence about the success of dynamic
implementations of Hamiltonian Monte Carlo (HMC), such as the No U-Turn Sampler
(NUTS), in many challenging inference problems but theoretical results about
their behavior are scarce. The aim of this paper is to fill this gap. More
precisely, we consider a general class of MCMC algorithms we call dynamic HMC.
We show that this general framework encompasses NUTS as a particular case,
implying the invariance of the target distribution as a by-product. Second, we
establish conditions under which NUTS is irreducible and aperiodic and as a
corollary ergodic. Under conditions similar to the ones existing for HMC, we
also show that NUTS is geometrically ergodic. Finally, we improve existing
convergence results for HMC showing that this method is ergodic without any
boundedness condition on the stepsize and the number of leapfrog steps, in the
case where the target is a perturbation of a Gaussian distribution.
|
Alain Durmus, Samuel Gruffaz, Miika Kailas, Eero Saksman, Matti Vihola
|
2023-07-07T08:44:33Z
|
http://arxiv.org/abs/2307.03460v1
|
# On the convergence of dynamic implementations of Hamiltonian Monte Carlo and No U-Turn Samplers
###### Abstract
There is substantial empirical evidence about the success of dynamic implementations of Hamiltonian Monte Carlo (HMC), such as the No U-Turn Sampler (NUTS), in many challenging inference problems but theoretical results about their behavior are scarce. The aim of this paper is to fill this gap. More precisely, we consider a general class of MCMC algorithms we call dynamic HMC. We show that this general framework encompasses NUTS as a particular case, implying the invariance of the target distribution as a by-product. Second, we establish conditions under which NUTS is irreducible and aperiodic and as a corollary ergodic. Under conditions similar to the ones existing for HMC, we also show that NUTS is geometrically ergodic. Finally, we improve existing convergence results for HMC showing that this method is ergodic without any boundedness condition on the stepsize and the number of leapfrog steps, in the case where the target is a perturbation of a Gaussian distribution.
## 1 Introduction
In this paper we consider No U-Turn Samplers (NUTS), a class of dynamic implementations of the Hamiltonian Monte Carlo algorithm (HMC). HMC is a Metropolis-Hastings algorithm designed to sample a target probability density \(\pi\) on \(\mathbb{R}^{d}\). The method has a long history, beginning in computational physics in 1987 [13] before quickly gaining popularity in the statistics community with the early paper [27]; see, for example, [23, chapter 9], [28] and [18]. It is now the main inference engine of popular probabilistic programming languages such as Stan [11],
PyMC3 [32] and Turing [17]. The HMC algorithm aims to remove the random-walk behavior that plagues most MCMC algorithms: the proposals - obtained by integrating a system of Hamiltonian equations using the leapfrog integrator - are far away from the starting position while still having a high probability of being accepted.
During the previous decade the challenge to avoid a drop in performance was the correct tuning of the parameters of the leapfrog integrator: the stepsize \(h>0\) and the number of leapfrog steps \(T\in\mathbb{N}^{*}\). Indeed, the length of the time interval \(hT\) along which the Hamiltonian equations are integrated [5] rules the sampler's exploration/exploitation trade-off since it changes the distance between the current state and the proposal. One option is to fix \(T\) and to estimate \(h\) with an adaptive mechanism, see [2] for a review. Then, \(T\) may be selected by Cross Validation or by using expert knowledge, depending on the context.
As a major breakthrough, the first NUTS algorithm using slice sampling was introduced in [20] as a variant of HMC which selects \(T\) automatically by design and which finds \(h\) using an adaptive mechanism called dual-averaging [29]. The algorithm implemented in the Stan library [11] has been further developed and improved, in particular by replacing the slicing procedure with a multinomial mechanism [3]. The main idea is to integrate the Hamiltonian equations until the No-U-Turn criterion is met, i.e., the moment when the trajectory turns back toward the region it came from, with the objective of maximizing the distance between the ending point and the starting point. Then, a position is sampled on the resulting trajectory. Moreover, this sampling is designed to leave \(\pi\) invariant and to encourage the selection of points in areas of high density (relative to \(\pi\)) far from the starting point.
More generally, various dynamic and adaptive implementations of HMC that select the integration time according to the context have been proposed, to cite a few: randomized integration time HMC [7], which allows unbounded integration times; ChEES-HMC, which allows parallel computations on GPUs; and the recently suggested Apogee-to-Apogee Path Sampler, which provides a robust parametrization of HMC [35]. There is substantial empirical evidence about their success in many challenging inference problems [36, 34, 9, 37, 19, 38], but precise theoretical results about their behaviour are scarce compared to the original HMC [16, 33, 10, 4, 7, 6].
The goal of the present paper is to derive primary theoretical guarantees for NUTS. To the best of our knowledge, this constitutes a first step in their analysis. More precisely, our contributions are as follows. First, we present a general framework for dynamic HMC algorithms and prove a condition on their reversibility and invariance. The condition and its proof are transparent, yet general enough to encompass NUTS [20, 3]. Second, and the primary contribution of this work, we prove the irreducibility of the current Stan implementation of NUTS. Classical results depending on the regularity of the transition kernel [25, 31] and recent results for basic HMC [16, 33] do not apply directly. In particular, establishing classical regularity conditions for the transition kernel using a nonzero probability of a one-step transition is ruled out by the construction of NUTS, necessitating the use of global information about HMC trajectories. Our irreducibility results apply without any restrictions on the step size or maximum number of steps to the cases where \(-\log\pi\) is real analytic with vanishing Hessian at infinity, and, with extra conditions on the step size, under less stringent regularity assumptions.1 The conditions that we consider highlight the regularity of the No-U-Turn stopping rule which is at the heart of the dynamic trajectory selection of NUTS. Third, we establish geometric ergodicity of NUTS under conditions similar to the ones considered in [16] for HMC, without any additional smoothness assumptions on the potential \(-\log\pi\). Finally, our considerations of HMC trajectories allow us to weaken the conditions on the stepsize proposed in [16] to establish the ergodicity of basic HMC. More specifically, we remove any boundedness
condition on the stepsize by assuming that \(\pi\) is a perturbation of a Gaussian.
**Outline.** The paper is organized as follows. In Section 2, we present a class of MCMC methods we call Dynamic HMC which encompasses NUTS and HMC as particular cases. In addition, we provide conditions ensuring that the target distribution is invariant for the resulting Markov kernel. In Section 3, we verify that these conditions are met for the NUTS implementation in Stan as an illustrative and comprehensive example. Conditions under which the NUTS implementation in Stan is ergodic and \(\mathcal{V}\)-uniformly geometrically ergodic are presented in Sections 4 and 5 respectively. Finally, some properties related to the irreducibility of the HMC algorithm, which are of independent interest, are stated in Section 6.
All results are more oriented toward a qualitative understanding than a quantitative analysis since the constants are only sometimes tractable. Nevertheless, this work can be a starting point for more quantitative analysis.
### Notation
We denote by \(\mathcal{P}(\mathsf{X})\) the power set of a set \(\mathsf{X}\), \([k:l]=\{k,\ldots,l\}\in\mathcal{P}(\mathbb{Z})\) and \([l]=[1:l]\) with \(k,l\in\mathbb{N}\), and \(\mathbb{R}_{\geq 0}\) and \(\mathbb{R}_{>0}\) the sets of non-negative and positive real numbers respectively. The set \(\mathbb{R}^{d}\) is endowed with the Euclidean scalar product \(\langle\cdot,\cdot\rangle\), the corresponding norm \(|\cdot|\) and Borel \(\sigma\)-field \(\mathcal{B}(\mathbb{R}^{d})\). Denote by \(\mathbb{F}(\mathbb{R}^{d})\) the set of Borel measurable functions on \(\mathbb{R}^{d}\) and for \(f\in\mathbb{F}(\mathbb{R}^{d})\), \(\|f\|_{\infty}=\sup_{x\in\mathbb{R}^{d}}|f(x)|\). The Lebesgue measure is denoted by \(\operatorname{Leb}\). For \(\mu\) a probability measure on \((\mathbb{R}^{d},\mathcal{B}(\mathbb{R}^{d}))\) and \(f\in\mathbb{F}(\mathbb{R}^{d})\) a \(\mu\)-integrable function, denote by \(\mu(f)\) the integral of \(f\) with respect to \(\mu\). Let \(\mathcal{V}:\mathbb{R}^{d}\to[1,\infty)\) be a measurable function. For \(f\in\mathbb{F}(\mathbb{R}^{d})\), the \(\mathcal{V}\)-norm of \(f\) is given by \(\|f\|_{\mathcal{V}}=\|f/\mathcal{V}\|_{\infty}\). For two probability measures \(\mu\) and \(\nu\) on \((\mathbb{R}^{d},\mathcal{B}(\mathbb{R}^{d}))\), the \(\mathcal{V}\)-total variation distance of \(\mu\) and \(\nu\) is defined as
\[\|\mu-\nu\|_{\mathcal{V}}=\sup_{f\in\mathbb{F}(\mathbb{R}^{d}),\|f\|_{ \mathcal{V}}\leq 1}\left|\int_{\mathbb{R}^{d}}f(x)\mathrm{d}\mu(x)-\int_{ \mathbb{R}^{d}}f(x)\mathrm{d}\nu(x)\right|\;.\]
If \(\mathcal{V}\equiv 1\), then \(\|\cdot\|_{\mathcal{V}}\) is the total variation denoted by \(\|\cdot\|_{\mathrm{TV}}\). For any \(x\in\mathbb{R}^{d}\) and \(M>0\) we denote by \(\operatorname{B}(x,M)\) the Euclidean ball centered at \(x\) with radius \(M\). Denote by \(\operatorname{I}_{n}\) the identity matrix. Let \(k\geq 1\). Denote by \((\mathbb{R}^{d})^{\otimes k}\) the \(k^{\text{th}}\) tensor power of \(\mathbb{R}^{d}\), for any \(x\in\mathbb{R}^{d},y\in\mathbb{R}^{\ell}\), \(x\otimes y\in(\mathbb{R}^{d})^{\otimes 2}\) the tensor product of \(x\) and \(y\), and \(x^{\otimes k}\in(\mathbb{R}^{d})^{\otimes k}\) the \(k^{\text{th}}\) tensor power of \(x\). For any \(x_{1},\ldots,x_{k}\in\mathbb{R}^{d}\), set \(\|x_{1}\otimes\cdots\otimes x_{k}\|=\sup_{i\in\{1,\ldots,k\}}|x_{i}|\). We let \(\mathcal{L}((\mathbb{R}^{d})^{\otimes k},\mathbb{R}^{\ell})\) stand for the set of linear maps from \((\mathbb{R}^{d})^{\otimes k}\) to \(\mathbb{R}^{\ell}\) and for \(\operatorname{L}\in\mathcal{L}((\mathbb{R}^{d})^{\otimes k},\mathbb{R}^{\ell})\), we denote by \(\|\mathrm{L}\|\) the operator norm of \(\operatorname{L}\). Let \(f:\mathbb{R}^{d}\to\mathbb{R}^{d}\) be a Lipschitz function, namely there exists \(C\geq 0\) such that for any \(x,y\in\mathbb{R}^{d},|f(x)-f(y)|\leq C|x-y|\). Then we denote by \(\|f\|_{\operatorname{Lip}}=\sup\big\{|f(x)-f(y)|/|x-y|\mid x,y\in\mathbb{R}^{d},x\neq y\big\}\). Let \(k\geq 0\) and \(\mathsf{U}\) be an open subset of \(\mathbb{R}^{d}\). Denote by \(\operatorname{C}^{k}(\mathsf{U},\mathbb{R}^{d})\) the set of all \(k\) times continuously differentiable functions from \(\mathsf{U}\) to \(\mathbb{R}^{d}\). Let \(\Phi\in\operatorname{C}^{k}(\mathsf{U},\mathbb{R}^{d})\). Write \(\mathrm{d}^{k}\Phi:\mathsf{U}\to\mathcal{L}((\mathbb{R}^{d})^{\otimes k},\mathbb{R}^{\ell})\) for the \(k^{\text{th}}\) differential of \(\Phi\in\operatorname{C}^{k}(\mathbb{R}^{d},\mathbb{R}^{\ell})\). For \(x\in\mathbb{R}^{d}\), denote by \(\mathrm{d}^{k}\Phi(x)\) the \(k\)-th differential of \(\Phi\) at \(x\). For smooth enough functions \(f:\mathbb{R}^{d}\to\mathbb{R}\), denote by \(\nabla f\) and \(\nabla^{2}f\) the gradient and the Hessian of \(f\) respectively. Let \(\mathsf{A}\subset\mathbb{R}^{d}\). We write \(\overline{\mathsf{A}},\mathsf{A}^{\circ}\) and \(\partial\mathsf{A}\) for the closure, the interior and the boundary of \(\mathsf{A}\), respectively. For any \(n_{1},n_{2}\in\mathbb{N},n_{1}>n_{2}\), we take the convention that \(\sum_{k=n_{1}}^{n_{2}}=0\). We denote, for any nonempty sets \(\mathsf{A},\mathsf{C}\subset(\mathbb{R}^{d})^{2}\), \(\operatorname{dist}((q,p),\mathsf{A})=\inf_{(q^{\prime},p^{\prime})\in\mathsf{A}}\operatorname{dist}((q,p),(q^{\prime},p^{\prime}))\) and \(\operatorname{dist}(\mathsf{C},\mathsf{A})=\inf_{(q,p)\in\mathsf{C}}\operatorname{dist}((q,p),\mathsf{A})\). The space of real matrices with \(d\in\mathbb{N}^{*}\) rows and \(c\in\mathbb{N}^{*}\) columns is identified with \(\mathbb{R}^{d\times c}\) and the space of square symmetric matrices is denoted by \(\mathbb{S}_{d}(\mathbb{R})=\{\mathbf{A}\in\mathbb{R}^{d\times d}\,:\,\mathbf{A}^{\top}=\mathbf{A}\}\).
Dynamic HMC algorithms
The aim of this section is to present a general framework for discussing HMC and its many variants. We introduce a general HMC scheme that includes the basic HMC algorithm [13, 27], its randomized version [7] and the dynamic No U-Turn Sampler [20] as special cases. Despite its generality, our scheme admits a simple sufficient condition on its constituents for the invariance of the target distribution \(\pi\), unifying and simplifying the existing case-by-case analysis of invariance of HMC-type algorithms. For ease of presentation we start by introducing HMC and the related concepts and objects that are necessary for our analysis. However, an exhaustive introduction is out of the scope of this work and we refer to [8, 3] for more detail and motivation.
### Hamiltonian Monte Carlo
We assume that the target distribution \(\pi\) admits a positive density (still denoted by \(\pi\)) with respect to the Lebesgue measure of the form \(\pi(x)\propto\exp(-U(x))\) with a twice differentiable potential function \(U:\mathbb{R}^{d}\to\mathbb{R}\). Given a positive definite mass matrix \(\mathbf{M}\in\mathbb{S}_{d}(\mathbb{R})\) we define the extended target distribution \(\tilde{\pi}=\pi\otimes\mathrm{N}(0,\mathbf{M})\), with density with respect to the Lebesgue measure (still denoted by \(\tilde{\pi}\)), \(\tilde{\pi}(q,p)\propto\exp(-H(q,p))\) with the Hamiltonian function \(H\) given by
\[(q,p)\mapsto H(q,p)=U(q)+\tfrac{1}{2}p^{\top}\mathbf{M}^{-1}p\;.\]
We assume that the potential \(U\) satisfies the following.2
Footnote 2: In all following sections we will, without loss of generality, for notational simplicity take \(\mathbf{M}=\mathrm{I}_{d}\). Namely, the general case reduces to the identity mass matrix case via the linear change of variables \(q^{\prime}=\mathbf{M}^{1/2}q,p^{\prime}=\mathbf{M}^{-1/2}p\).
**H 1**.: \(U\) _is continuously twice differentiable on \(\mathbb{R}^{d}\) and the map \(q\mapsto\mathbf{M}^{-1}\nabla U(q)\) is \(\mathsf{L}_{1}\)-Lipschitz: for any \(q,q^{\prime}\in\mathbb{R}^{d}\),_
\[|\mathbf{M}^{-1}\nabla U(q)-\mathbf{M}^{-1}\nabla U(q^{\prime})|\leq\mathsf{ L}_{1}|q-q^{\prime}|\;.\]
The HMC algorithm and its extensions rely on the Hamiltonian dynamics associated with \(U\), defined by Hamilton's equations
\[\frac{\mathrm{d}q_{t}}{\mathrm{d}t}=\frac{\partial H}{\partial p}(q_{t},p_{t} )=\mathbf{M}^{-1}p_{t}\;,\quad\frac{\mathrm{d}p_{t}}{\mathrm{d}t}=-\frac{ \partial H}{\partial q}(q_{t},p_{t})=-\nabla U(q_{t})\;. \tag{1}\]
Under \(\mathbf{H}1\) any initial condition \((q_{0},p_{0})\) gives rise to a unique solution to (1) and moreover it is well-known (see e.g., [8]) that the Hamiltonian dynamics preserves the extended target distribution \(\tilde{\pi}\) in the sense that \(\tilde{\pi}(q_{t},p_{t})=\tilde{\pi}(q_{0},p_{0})\) for any initial condition \((q_{0},p_{0})\) and the associated solution \((q_{t},p_{t})_{t\geq 0}\). Since Hamiltonian dynamics also preserves the volume of the phase space \((\mathbb{R}^{d})^{2}\), it follows that if the initial condition is drawn randomly as \((q_{0},p_{0})\sim\tilde{\pi}\) then also \((q_{t},p_{t})\sim\tilde{\pi}\) for all \(t\geq 0\). Given a fixed time horizon \(t_{f}>0\) and a sequence of i.i.d. \(\mathrm{N}(0,\mathbf{M})\) random variables \((G_{k})_{k\in\mathbb{N}}\), the _ideal_ HMC algorithm consists in defining a Markov chain \((Q^{\mathrm{ideal}}_{k},P^{\mathrm{ideal}}_{k})_{k\in\mathbb{N}}\) such that for \(k=1,2,\dots\), \((Q^{\mathrm{ideal}}_{k},P^{\mathrm{ideal}}_{k})\) is the solution of (1) at \(t_{f}\) starting from \((Q^{\mathrm{ideal}}_{k-1},G_{k-1})\). The marginal chain \((Q^{\mathrm{ideal}}_{k})_{k\in\mathbb{N}}\) then targets the desired probability distribution \(\pi\).
Simulating the ideal Markov chain \((Q^{\mathrm{ideal}}_{k},P^{\mathrm{ideal}}_{k})\) exactly is computationally infeasible since in all but few special cases Hamilton's equations (1) need to be solved numerically. The most common implementation of HMC uses the leapfrog integrator associated with (1). Given a time step \(h>0\) and a current point \((q_{0},p_{0})\in(\mathbb{R}^{d})^{2}\), one leapfrog step is defined as
\[(q_{1},p_{1})=\Phi^{(1)}_{h}(q_{0},p_{0})\;,\quad\Phi^{(1)}_{h}=\Psi^{(1)}_{h/2 }\circ\Psi^{(2)}_{h}\circ\Psi^{(1)}_{h/2}\;, \tag{2}\]
where for each \(t\in\mathbb{R}_{\geq 0}\), the momentum and position update maps \(\Psi_{t}^{(1)},\Psi_{t}^{(2)}:(\mathbb{R}^{d})^{2}\to(\mathbb{R}^{d})^{2}\) are given by
\[\Psi_{t}^{(1)}(q,p)=(q,p-t\nabla U(q))\;,\quad\Psi_{t}^{(2)}(q,p)=(q+t\mathbf{ M}^{-1}p,p)\]
for any \((q,p)\in(\mathbb{R}^{d})^{2}\). Note that \(\Phi_{h}^{\circ(1)}\) is a volume-preserving bijection \((\mathbb{R}^{d})^{2}\to(\mathbb{R}^{d})^{2}\), which implies that likewise its inverse \(\Phi_{h}^{\circ(-1)}\) and more generally any iterate \(\Phi_{h}^{\circ(\ell)}\), \(\ell\in\mathbb{Z}\), is also a volume-preserving bijection \((\mathbb{R}^{d})^{2}\to(\mathbb{R}^{d})^{2}\).
In contrast to ideal Hamiltonian dynamics the leapfrog integrator \(\Phi_{h}^{\circ(\ell)}\), for any \(\ell\in\mathbb{N}^{*}\), does not preserve the extended target \(\tilde{\pi}\), and to ensure that HMC is invariant for \(\tilde{\pi}\) a Metropolis accept-reject step has to be added as follows. Given a number of leapfrog steps \(T\in\mathbb{N}^{*}\) and a sequence of i.i.d. \(\mathrm{N}(0,\mathbf{M})\) random variables \((G_{k})_{k\in\mathbb{N}}\), the HMC algorithm consists in defining a Markov chain \((Q_{k},P_{k})_{k\in\mathbb{N}}\) such that for \(k=1,2,\dots\), (1) a proposal \((\tilde{Q}_{k},\tilde{P}_{k})=\Phi_{h}^{\circ(T)}(Q_{k-1},G_{k})\) is first generated, (2) which is accepted, i.e., we set \((Q_{k},P_{k})=(\tilde{Q}_{k},\tilde{P}_{k})\), with probability \(1\wedge\exp[H(Q_{k-1},G_{k})-H(\tilde{Q}_{k},\tilde{P}_{k})]\) and rejected, i.e., set \((Q_{k},P_{k})=(Q_{k-1},P_{k-1})\), otherwise. Again, the marginal chain \((Q_{k})_{k\in\mathbb{N}}\) correctly targets \(\pi\).
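For concreteness, here is a minimal sketch of one such HMC transition in Python/NumPy. It is an illustration under the identity mass matrix, not the implementation analyzed in this paper; `U` and `grad_U` are user-supplied potential and gradient.

```
import numpy as np

def leapfrog(q, p, grad_U, h):
    # One leapfrog step Phi_h: half momentum kick, full position drift, half momentum kick
    # (identity mass matrix).
    p = p - 0.5 * h * grad_U(q)
    q = q + h * p
    p = p - 0.5 * h * grad_U(q)
    return q, p

def hmc_step(q, U, grad_U, h, T, rng):
    # One HMC transition: refresh the momentum, integrate T leapfrog steps and
    # accept or reject with the Metropolis ratio for H(q, p) = U(q) + |p|^2 / 2.
    p0 = rng.standard_normal(q.shape)
    q_prop, p_prop = q, p0
    for _ in range(T):
        q_prop, p_prop = leapfrog(q_prop, p_prop, grad_U, h)
    log_accept = (U(q) + 0.5 * p0 @ p0) - (U(q_prop) + 0.5 * p_prop @ p_prop)
    if rng.uniform() < np.exp(min(0.0, log_accept)):
        return q_prop
    return q

# Example: a standard Gaussian target, U(q) = |q|^2 / 2.
rng = np.random.default_rng(0)
q = np.zeros(2)
for _ in range(100):
    q = hmc_step(q, lambda x: 0.5 * x @ x, lambda x: x, h=0.2, T=10, rng=rng)
```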
The choice of the number of leapfrog steps \(T\) is crucial for the efficiency of the algorithm. If the integration time \(Th\) is small, the algorithm reduces to a random walk and diffusive exploration due to the resampling of the momentum at every iteration. On the other hand if \(Th\) is large, the approximate Hamiltonian trajectories will loop back to previously explored neighborhoods and the increased computation time yields little benefit. To fully realize the algorithm's potential of making long moves in the state space while maintaining computational efficiency, it is essential to strike the right balance between these extremes. These observations highlight the critical role of \(T\) in the expected performance of the algorithm. In particular, as observed for example in [3, Section 4.3], even for simple models it turns out that the optimal integration time at iteration \(k\) for ideal HMC depends on the current point of the algorithm. Adjusting and choosing the integration time \(Th\) dynamically based on the current state is the main achievement of the NUTS algorithm [20] that has proven to be remarkably robust and efficient across a wide range of statistical applications. We defer the detailed presentation of NUTS to Section 3 and continue here by presenting a generalization that, hopefully, makes the details easier to understand.
### General framework for dynamic HMC algorithms
For the scheme in Definition 1 we need the following concepts and notation. Let \(h>0\), \(K_{\mathrm{m}}\in\mathbb{N}\) and let an _orbit selection kernel_
\[\mathrm{P}_{h}=\{\mathrm{P}_{h}(\cdot\mid q_{0},p_{0}):(q_{0},p_{0})\in( \mathbb{R}^{d})^{2}\}\]
be a family of probability distributions on \(\mathcal{P}([-2^{K_{\mathrm{m}}}:2^{K_{\mathrm{m}}}])\). In the dynamic HMC scheme below, an orbit selection kernel defines the probabilities \(\mathrm{P}_{h}(\mathsf{J}\mid q_{0},p_{0})\) of considering samples from the orbit \(\mathcal{O}_{\mathsf{J}}(q_{0},p_{0})=\{\Phi_{h}^{\circ(j)}(q_{0},p_{0}):j\in\mathsf{J}\}\), where the size of the index set \(\mathsf{J}\subset[-2^{K_{\mathrm{m}}}:2^{K_{\mathrm{m}}}]\subset\mathbb{Z}\) is bounded above by a constant.3
Footnote 3: While the restriction to a constant upper bound on the lengths of orbits is somewhat artificial from the theoretical point of view, and technically excludes algorithms such as Randomized Hamiltonian Monte Carlo [7] where the number of steps is taken to be exponentially distributed from the class of dynamic HMC algorithms as defined here, all practical algorithms have some effective limitation on the number of leapfrog steps taken during a single iteration. In particular, the NUTS algorithm explicitly incorporates the bound \(2^{K_{\mathrm{m}}}\) for the number of leapfrog steps so we take the convenient opportunity here to introduce the notation.
Some of our results include an assumption that bounds the total allowed integration time \(h2^{K_{\mathrm{m}}}\). In the cases where such a bound is assumed, we usually write \(K_{\mathrm{m}}=K_{\mathrm{m}}(h)\) to emphasize the interdependence of the admissible values of \(h\) and \(K_{\mathrm{m}}\). Let an _index selection kernel_
\[\mathrm{Q}_{h}=\{\mathrm{Q}_{h}(\cdot\mid\mathsf{J},q_{0},p_{0}):\mathsf{J} \subset[-2^{K_{\mathrm{m}}}:2^{K_{\mathrm{m}}}],(q_{0},p_{0})\in(\mathbb{R}^{d })^{2}\}\]
be a family of probability distributions on the index sets \(\mathsf{J}\subset[-2^{K_{m}}:2^{K_{m}}]\), indexed by the orbit index sets \(\mathsf{J}\subset[-2^{K_{m}}:2^{K_{m}}]\) selected by \(\mathrm{P}_{h}\) and the associated initial points \((q_{0},p_{0})\in(\mathbb{R}^{d})^{2}\) in the phase space. An index selection kernel defines the probability \(\mathrm{Q}_{h}(j\mid\mathsf{J},q_{0},p_{0})\) of choosing the leapfrog iterate \(\Phi_{h}^{\circ(j)}(q_{0},p_{0})\in\mathcal{O}_{\mathsf{J}}(q_{0},p_{0})\) as the next state of the Markov chain when the current state is \((q_{0},p_{0})\) and the orbit has been selected.
**Definition 1**.: _We define the dynamic HMC scheme associated to an orbit selection kernel \(\mathrm{P}_{h}\) and index selection kernel \(\mathrm{Q}_{h}\) as the Markov chain \((Q_{k})_{k\in\mathbb{N}}\) defined by the following steps that define \(Q_{k+1}\) given \(Q_{k}\):_
1. _Sample_ \(P_{k+1}\) _with distribution_ \(\mathrm{N}(0,\mathbf{M})\)_._
2. _Sample_ \(\mathsf{I}_{k+1}\) _with distribution_ \(\mathrm{P}_{h}(\cdot\mid Q_{k},P_{k+1})\)_._
3. _Sample_ \(J_{k+1}\) _with distribution_ \(\mathrm{Q}_{h}(\cdot\mid\mathsf{I}_{k+1},Q_{k},P_{k+1})\)_._
4. _Set_ \(Q_{k+1}=\mathrm{proj}_{1}\{\Phi_{h}^{\circ(J_{k+1})}(Q_{k},P_{k+1})\}\)_, where_ \(\mathrm{proj}_{1}:(\mathbb{R}^{d})^{2}\to\mathbb{R}^{d}\) _is the projection onto the first_ \(d\) _coordinates, i.e., from the phase space to the position coordinates._
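To make the scheme concrete, the following Python sketch spells out one transition; `sample_orbit`, `sample_index` and `leapfrog_iterate` are hypothetical placeholders standing for \(\mathrm{P}_{h}\), \(\mathrm{Q}_{h}\) and \(\Phi_{h}^{\circ(j)}\) respectively.

```
def dynamic_hmc_step(q, h, sample_orbit, sample_index, leapfrog_iterate, rng):
    # rng is a numpy.random.Generator.
    # (i)   refresh the momentum P_{k+1} ~ N(0, I_d)
    p = rng.standard_normal(q.shape)
    # (ii)  draw an index set J ~ P_h(. | q, p) describing the orbit to consider
    J = sample_orbit(q, p, rng)
    # (iii) draw an index j ~ Q_h(. | J, q, p) among the elements of J
    j = sample_index(J, q, p, rng)
    # (iv)  the new position is the position coordinate of the j-th leapfrog iterate of (q, p)
    q_next, _ = leapfrog_iterate(q, p, j, h)
    return q_next
```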
To specialize the general scheme to a specific algorithm, the orbit selection kernel \(\mathrm{P}_{h}\) and index selection kernel \(\mathrm{Q}_{h}\) should be chosen so that the procedure leaves the desired target distribution \(\pi\) invariant. For example, the basic HMC algorithm with number of steps \(T\in\mathbb{N}\) may be realized as the dynamic HMC scheme with the orbit selection defined deterministically by
\[\mathrm{P}_{h}^{\mathrm{HMC}}(\{0,T\}\mid q_{0},p_{0})=1\]
and the index selection probabilities via the Metropolis acceptance rate as
\[\mathrm{Q}_{h}^{\mathrm{HMC}}(\cdot\mid\{0,T\},q_{0},p_{0}) = \left(1\wedge\frac{\tilde{\pi}(\Phi_{h}^{\circ(T)}(q_{0},p_{0})) }{\tilde{\pi}(q_{0},p_{0})}\right)\delta_{T}(\cdot)+\left(1-1\wedge\frac{ \tilde{\pi}(\Phi_{h}^{\circ(T)}(q_{0},p_{0}))}{\tilde{\pi}(q_{0},p_{0})} \right)\delta_{0}(\cdot)\;.\]
More generally, Proposition 2 below gives a simple sufficient condition for the dynamic HMC algorithm associated to a particular choice of \(\mathrm{P}_{h}\) and \(\mathrm{Q}_{h}\) to be invariant with respect to \(\pi\).
Before stating the invariance result we make a few further comments on the general dynamic HMC scheme. The transition kernel of the dynamic HMC algorithm associated to \(\mathrm{P}_{h}\) and \(\mathrm{Q}_{h}\) has the form
\[\mathrm{K}_{h}(q_{0},\mathsf{A})=\int\mathrm{d}p_{0}\,\rho_{ \mathbf{M}}(p_{0})\tilde{\mathrm{K}}_{h}((q_{0},p_{0}),\mathsf{A})\;,\qquad \text{where} \tag{3}\] \[\tilde{\mathrm{K}}_{h}((q_{0},p_{0}),\mathsf{A})=\sum_{\mathsf{J} \subset\mathbb{Z}}\sum_{j\in\mathsf{J}}\mathrm{P}_{h}(\mathsf{J}\mid q_{0},p_{ 0})\mathrm{Q}_{h}(j\mid\mathsf{J},q_{0},p_{0})\delta_{\mathrm{proj}_{1}(\Phi_ {h}^{\circ(j)}(q_{0},p_{0}))}(\mathsf{A})\]
for \(q_{0}\in\mathbb{R}^{d}\) and \(\mathsf{A}\in\mathcal{B}(\mathbb{R}^{d})\), and where we use the natural convention that \(\mathrm{P}_{h}(\mathsf{J}\mid q_{0},p_{0})=0\) for \(\mathsf{J}\not\subset[-2^{K_{m}}:2^{K_{m}}]\) and where \(\rho_{\mathbf{M}}(\cdot)\) denotes the density of \(\mathrm{N}(0,\mathbf{M})\) on \(\mathbb{R}^{d}\). We refer to \(\tilde{\mathrm{K}}_{h}\) as an extended deterministic dynamic HMC kernel. Note that because of the dependence of \(\mathrm{Q}_{h}\) and \(\mathrm{P}_{h}\) on the momentum \(p_{0}\), the kernel \(\mathrm{K}_{h}\) cannot generally be expressed as a position-dependent mixture of basic HMC kernels, i.e., in the form \(\mathrm{K}_{h}^{\prime}(q_{0},\mathsf{A})=\sum_{j\in\mathbb{Z}}\varpi_{j}(q_{0})\int\mathrm{d}p_{0}\ \rho_{\mathbf{M}}(p_{0})\delta_{\mathrm{proj}_{1}(\Phi_{h}^{\circ(j)}(q_{0},p_{0}))}(\mathsf{A})\) where \(\{\varpi_{j}(q_{0})\}_{j\in\mathbb{Z}}\) is a sequence of non-negative weights which sum to \(1\) for all \(q_{0}\). Thus we emphasize that the dynamic HMC scheme presented here is significantly more general than mixtures of HMC kernels with different numbers of leapfrog steps. In particular the scheme is general enough to encompass e.g., NUTS [20], the Apogee-to-Apogee Path Sampler [35] and other algorithms where the orbit selection is defined dynamically via a stopping time.
The following proposition enables us to choose appropriate distributions \(\mathrm{P}_{h}\) and \(\mathrm{Q}_{h}\) so that \(\pi\) is invariant for \(\mathrm{K}_{h}\). The proposition and its proof unifies and generalizes existing invariance proofs of HMC algorithms in the literature, particularly that in [3, Appendix A].
**Proposition 2**.: _If the orbit selection kernel \(\mathrm{P}_{h}\) and index selection kernel \(\mathrm{Q}_{h}\) satisfy_
\[\tilde{\pi}(q_{0},p_{0})\mathrm{P}_{h}(\mathsf{J}\mid q_{0},p_{0})\] \[\qquad=\sum_{j\in\mathbb{Z}}\mathbb{1}_{\,\mathsf{J}}(0)\tilde{ \pi}(\Phi_{h}^{\circ(-j)}(q_{0},p_{0}))\mathrm{P}_{h}(\mathsf{J}+j\mid\Phi_{h} ^{\circ(-j)}(q_{0},p_{0}))\mathrm{Q}_{h}(j\mid\mathsf{J}+j,\Phi_{h}^{\circ(-j) }(q_{0},p_{0}))\;, \tag{4}\]
_for all \(q_{0},p_{0}\in\mathbb{R}^{d}\) and \(\mathsf{J}\subset\mathbb{Z}\), the transition kernel \(\mathrm{K}_{h}\) defined in (3) leaves the target measure \(\pi\) invariant._
Proof.: The proof is a straightforward computation, presented in Appendix A.1.
The orbit selection probabilities \(\mathrm{P}_{h}(\mathsf{J}\mid q_{0},p_{0})\) and \(\mathrm{P}_{h}(\mathsf{J}+j\mid\Phi_{h}^{\circ(-j)}(q_{0},p_{0}))\) in (4) refer to the same orbit in phase space, as
\[\mathcal{O}_{\mathsf{J}+j}(\Phi_{h}^{\circ(-j)}(q_{0},p_{0})) =\{\Phi_{h}^{\circ(\ell)}(\Phi_{h}^{\circ(-j)}(q_{0},p_{0}))\mid \ell\in\mathsf{J}+j\} \tag{5}\] \[=\{\Phi_{h}^{\circ(\ell)}(q_{0},p_{0})\mid\ell\in\mathsf{J}\}= \mathcal{O}_{\mathsf{J}}(q_{0},p_{0})\;,\]
and the index selection probability \(\mathrm{Q}_{h}(j\mid\mathsf{J}+j,\Phi_{h}^{\circ(-j)}(q_{0},p_{0}))\) in (4) is the probability of choosing \(\Phi_{h}^{\circ(j)}(\Phi_{h}^{\circ(-j)}(q_{0},p_{0}))=(q_{0},p_{0})\) from the orbit started at \(\Phi_{h}^{\circ(-j)}(q_{0},p_{0})\). Thus the meaning of the condition (4) is that for the invariance of the dynamic HMC kernel to hold it is sufficient that _on every fixed phase space trajectory separately4_ the index selection kernel \(\mathrm{Q}_{h}\) leaves invariant the finitely supported measure on \(\mathsf{J}\) defined by the induced weights \(\tilde{\pi}(\Phi_{h}^{\circ(-j)}(q_{0},p_{0}))\mathrm{P}_{h}(\mathsf{J}+j \mid\Phi_{h}^{\circ(-j)}(q_{0},p_{0}))\). In particular, for a given \(\mathrm{P}_{h}\), the choice of \(\mathrm{Q}_{h}\) to guarantee the invariance of \(\pi\) for \(\mathrm{K}_{h}\) according to Proposition 2 reduces to a problem of designing invariant Markov kernels on _finite_ state spaces.
Footnote 4: To be precise, by phase space trajectory we mean an orbit \(\mathcal{O}_{\mathsf{J}}(q_{0},p_{0})\) together with the indexing information. The concept could be formally defined as an equivalence class of pairs of index sets and initial points \((\mathsf{J},(q_{0},p_{0}))\) with pairs considered equivalent, \((\mathsf{J}^{\prime},(q^{\prime},p^{\prime}))\equiv(\mathsf{J},(q,p))\), if \((q^{\prime},p^{\prime})=\Phi_{h}^{\circ(-j)}(q,p)\) and \(\mathsf{J}^{\prime}=\mathsf{J}+j\) for some \(j\in\mathsf{J}\). We will, however, not explicitly work with this formulation.
As far as we are aware, in all dynamic HMC algorithms currently in use the orbit selection kernel \(\mathrm{P}_{h}\) is symmetric in the sense of the following corollary, which gives the invariance condition (4) a particularly simple form.
**Corollary 3**.: _Suppose the orbit selection kernel \(\mathrm{P}_{h}\) satisfies the symmetry condition_
\[\mathrm{P}_{h}(\mathsf{J}+j\mid\Phi_{h}^{\circ(-j)}(q_{0},p_{0}))=\mathrm{P}_{ h}(\mathsf{J}\mid q_{0},p_{0}) \tag{6}\]
_for all \((q_{0},p_{0})\in(\mathbb{R}^{d})^{2}\), \(\mathsf{J}\subset\mathbb{Z}\) and \(-j\in\mathsf{J}\). Then, the invariance condition (4) is equivalent to_
\[\tilde{\pi}(q_{0},p_{0})=\sum_{j\in\mathbb{Z}}\mathbb{1}_{\mathsf{J}}(0)\tilde{ \pi}(\Phi_{h}^{\circ(-j)}(q_{0},p_{0}))\mathrm{Q}_{h}(j\mid\mathsf{J}+j,\Phi_{h }^{\circ(-j)}(q_{0},p_{0}))\;.\]
Proof.: Immediate from (4) and (6).
In Section 3 we present the NUTS algorithm as a particular instance of the dynamic HMC scheme and show that it defines a Markov kernel which admits \(\tilde{\pi}\) as invariant probability measure using Proposition 2.
### Ergodicity of dynamic HMC
The main focus of this work is to study the ergodic properties of dynamic HMC algorithms, in particular the NUTS algorithm of Hoffman and Gelman [20] and its more recent developments [3]. As background we review some existing results in the literature, though a comprehensive survey is out of the scope of this work. We focus here on results that contextualize our main results for NUTS, Theorems 8 and 16, in terms of earlier contributions towards similar theoretical guarantees for HMC-type algorithms and also the fundamental limits of such algorithms.
We say that a Markov kernel \(\mathrm{K}\) on \(\mathbb{R}^{d}\) with the invariant measure \(\pi\) is \(\pi\)-ergodic if for \(\pi\)-almost every \(x\in\mathbb{R}^{d}\),
\[\lim_{k\to+\infty}\|\mathrm{K}^{k}(x,\cdot)-\pi\|_{\mathrm{TV}}=0\;. \tag{7}\]
In addition, recall that, for a measurable function \(\mathpzc{V}:\mathbb{R}^{d}\to[1,\infty)\), a Markov kernel \(\mathrm{K}\) on \(\mathbb{R}^{d}\) with the invariant measure \(\pi\) is said to be \(\mathpzc{V}\)-uniformly ergodic if there exist \(C>0\) and \(\rho\in(0,1)\) for which for any \(x\in\mathbb{R}^{d}\) and \(k\in\mathbb{N}\),
\[\|\mathrm{K}^{k}(x,\cdot)-\pi\|_{\mathpzc{V}}\leq C\rho^{k}\mathpzc{V}(x)\;. \tag{8}\]
We first discuss the fundamental limitation on the range of potentials for which ergodicity and \(\mathpzc{V}\)-uniform ergodicity of HMC algorithms may hold. It is apparent from the definition (2) of the leapfrog integrator that if \(|\nabla U(q_{0})|\) is large compared to \(|q_{0}|\) and \(h|p_{0}|\), the discretized dynamics is unable to accurately track the continuous dynamics described in (1). The loss of stability of the leapfrog integrator is well-known in the case of rapidly growing potentials, particularly in the tails, as discussed in more detail in [8, 24]. For the usual quadratic kinetic energy as in (1), the limit for stability is at quadratic potentials. However, since the definitions (7) and (8) of \(\pi\)-ergodicity and \(\mathpzc{V}\)-uniform ergodicity guarantee convergence from (\(\pi\)-almost) any starting point \(x\in\mathbb{R}^{d}\), the instability of the leapfrog integrator in the tails indicates that ergodicity may not be expected to hold for potentials exhibiting growth faster than quadratic. We emphasize that this limitation concerning the tails of the target distribution reflects the well-known instability of the leapfrog integrator, which has practical implications when fine-tuning HMC algorithms.
The Metropolis-adjusted Langevin algorithm (MALA) can be seen as a variant of the basic HMC algorithm with a single leapfrog step. Establishing ergodicity for MALA is relatively straightforward under mild regularity conditions on the potential function \(U\), as discussed in [31]. Specifically, the MALA kernel \(\mathrm{K}_{h}^{\mathrm{MALA}}=\mathrm{K}_{h,1}^{\mathrm{HMC}}\) corresponds to a Metropolis-Hastings kernel with a position-dependent Gaussian proposal. In this case, it is evident that any open set can be reached from any point with a positive probability in a single iteration. However, achieving \(\mathpzc{V}\)-uniform ergodicity requires additional conditions on \(U\), especially with regards to its growth at infinity. Specifically, it requires that the potential exhibits at most quadratic growth as explained in more detail in [31] and other references. The proof strategies for establishing \(\mathpzc{V}\)-uniform ergodicity are relatively more involved [31, 15].
Livingstone et al. [33] investigate HMC kernels of the form
\[\mathrm{K}_{h}^{\mathrm{HMC}}=\sum_{j=1}^{J}\varpi_{j}\mathrm{K}_{h,j}^{ \mathrm{HMC}}, \tag{9}\]
In this form, \(\mathrm{K}_{h}^{\mathrm{HMC}}\) is a sum of HMC kernels, each with a different number of leapfrog steps. The weights \((\varpi_{j})_{j}\) are nonnegative and sum up to \(1\), with \(\varpi_{1}\) being positive. Additionally, an upper bound \(J\in\mathbb{N}\) is specified for the non-zero weights. Therefore, irreducibility and, as a result, ergodicity of HMC established in [33, Section 5.1] are direct consequences of the ones of MALA since \(\varpi_{1}>0\). Durmus et al. [16] relax the restriction \(\varpi_{1}>0\) for achieving ergodicity of HMC kernels in the form
of equation (9). Notably, their results cover cases with a deterministic number of leapfrog steps \(T\geq 2\). Our main results, as presented in Theorem 8 and Theorem 16, establish both ergodicity and \(\mathscr{V}\)-uniform ergodicity for the NUTS algorithm described in detail in Section 3. These results have broad applicability for potentials that exhibit growth slower than quadratic at infinity, with additional assumptions in the case of quadratic growth.
A major challenge in establishing our results arises from the fact that while some NUTS variants include a MALA component in their transition kernel, the specific variant we deal with, which is used in recent versions of Stan, does not. As a result, the conventional irreducibility arguments based on generic approaches, as seen in [31, 33], are insufficient. Instead, we rely on more sophisticated and specialized arguments. Further, adapting the approaches used in [31, 33] for MALA or in [16] for HMC is challenging, if not impossible, in the case of NUTS. Thus, we employ a new proof strategy to establish the ergodicity of NUTS.
## 3 NUTS and its invariance
The original NUTS (No-U-Turn Sampler) algorithm developed by Hoffman and Gelman [20] has undergone further development, and the current variant implemented in recent versions of Stan [11] and other probabilistic programming frameworks (such as PyMC3 [32] and Turing.jl [17]) has some differences from the original algorithm.
In this section, we provide a precise description of the algorithm that we analyze along with a comprehensive and detailed proof of its invariance. We have made efforts to align our algorithm with the current version of Stan (2.32), but since a complete description of Stan's algorithm is not readily available outside of the program code, there may be some differences, and certain minor differences are intentional. In summary, our algorithm implements the original NUTS stopping rule for orbit selection [20] while excluding the energy check, and what is called biased progressive sampling in [3] for index selection.
We will now proceed with the detailed presentation of our algorithm, outlining its key components and steps.
### The NUTS algorithm
Implementation of one iteration of the NUTS algorithm is given as pseudo-code in Algorithm 1. Given an initial position and momentum \(q_{0},p_{0}\in\mathbb{R}^{d}\) and sequences \((V_{k})_{k=0}^{K_{m}}\), \((\bar{U}_{k})_{k=0}^{K_{m}}\) and \((\tilde{U}_{k})_{k=0}^{K_{m}}\) of i.i.d. random variables with distribution \(\mathrm{Ber}(1/2)\), \(\mathrm{Unif}([0,1])\) and \(\mathrm{Unif}([0,1])\), Algorithm 1 gives as output \((q_{j_{f}},I_{f},j_{f})\) satisfying \(q_{j_{f}}=\mathrm{proj}_{1}\{\Phi_{h}^{\circ(j_{f})}(q_{0},p_{0})\}\) by definition and \(I_{f},j_{f}\) are measurable transformations of \((V_{k},\bar{U}_{k},\tilde{U}_{k})_{k=0}^{K_{m}}\) and \((q_{0},p_{0})\) which satisfy \(j_{f}\in I_{f}\). As a result, Algorithm 1 defines an extended dynamic HMC kernel and its related dynamic HMC kernel in the sense of (3) that we denote by \(\tilde{\mathrm{K}}_{h}^{\mathrm{U}}\) and \(\mathrm{K}_{h}^{\mathrm{U}}\), respectively. The orbit selection kernel associated to \(\mathrm{K}_{h}^{\mathrm{U}}\), i.e., the distribution of \(I_{f}\) given \((q_{0},p_{0})\), will be denoted by \(\mathrm{p}_{h}\) and the index selection kernel associated to \(\mathrm{K}_{h}^{\mathrm{U}}\), i.e., the distribution of \(j_{f}\) given \(I_{f}\) and \((q_{0},p_{0})\), will be denoted by \(\mathrm{q}_{h}\).
We briefly describe the construction of the random variables \(I_{f},j_{f}\). The random interval \(I_{f}\subset\mathbb{Z}\) is defined recursively through the sequence of random intervals \((I_{k})_{k=0}^{K_{m}}\) starting from \(I_{0}=\{0\}\). The interval \(I_{k+1}\) is the union of \(I_{k}\) and an interval \(I_{k}^{\mathrm{new}}\) with \(|I_{k}|=|I_{k}^{\mathrm{new}}|\), which is to the left of \(I_{k}\) if \(V_{k}=0\) and to the right of \(I_{k}\) otherwise; see Figure 1. Given the intervals \((I_{k})_{k=0}^{K_{m}}\), the final interval is defined as \(I_{f}=I_{K_{f}}\) where \(K_{f}+1\) is a stopping time that indicates that a U-turn has occurred in the trajectory associated to \(I_{k}\) as described in the sub-routine Algorithm 2. Note that since the intervals \((I_{k})_{k=0}^{K_{m}}\) have lengths \(2^{k}\) they may be naturally indexed by an increasing sequence of complete binary trees of depths \(k\).
chosen so that the transition \(0\to j_{f}\) leaves the induced target measure on \(I_{f}\) (i.e., the measure on \(I_{f}\) with weights \(\tilde{\pi}(\Phi_{h}^{\circ(j)}(q_{0},p_{0}))\) for \(j\in I_{f}\)) invariant and so that \(j_{f}\) is as far as possible from \(0\) on \(I_{f}\) in terms of the binary tree induced by the construction; see Figure 2. We refer to [3] for more intuition and an alternative presentation of this construction. In addition, more detail on these random variables is given in Section 3.2 and Section 3.3 where explicit expressions for \(\mathrm{p}_{h}\) and \(\mathrm{q}_{h}\) are given and where we define the notation required for our theoretical analysis.
Our pseudo-code implementation for the simulation of \(\tilde{\mathrm{K}}_{h}^{\mathrm{U}}((q_{0},p_{0}),\cdot)\) in Algorithm 1 is different from the implementation used in Stan, which relies instead on a recursive construction of the full binary trees which appears in the indexing of the intervals \((I_{k})_{k=0}^{K_{m}}\). We stress that the distinction is solely computational in that the recursive implementation is considerably more memory-efficient; there is no difference in terms of the Markov transitions defined by the different implementations. We consider the details of the fully recursive implementation in Appendix G.
```
Input: initial position and momentum \((q_{0},p_{0})\in(\mathbb{R}^{d})^{2}\);
       leapfrog parameters \(h>0\);
       sequences of i.i.d. random variables \((V_{k})_{k=0}^{K_{m}}\), \((\bar{U}_{k})_{k=0}^{K_{m}}\) and \((\tilde{U}_{k})_{k=0}^{K_{m}}\) with distribution \(\mathrm{Ber}(1/2)\), \(\mathrm{Unif}([0,1])\) and \(\mathrm{Unif}([0,1])\) respectively
Initialize: \(I_{0}\leftarrow\{0\}\), \(I_{0}^{\prime}\leftarrow\{0\}\), \(k\leftarrow 0\), \(j_{0}\leftarrow 0\), \(\mathrm{NoUTurns}\leftarrow\mathrm{True}\)
while \(\mathrm{NoUTurns}\) and \(k<K_{\mathrm{m}}\) do
    if \(V_{k}>0\) then
        \(I_{k}^{\mathrm{new}}\leftarrow\bigcup_{l=1}^{2^{k}}\{\max I_{k}^{\prime}+l\}\)
        \(I_{k+1}^{\prime}\leftarrow I_{k}^{\prime}\cup I_{k}^{\mathrm{new}}\)
        for \(\ell=\max I_{k}^{\prime}+1\,:\,\max I_{k}^{\prime}+2^{k}\) do
            \(q_{\ell},p_{\ell}\leftarrow\Phi_{h}^{\circ(1)}(q_{\ell-1},p_{\ell-1})\)
        endfor
    else
        \(I_{k}^{\mathrm{new}}\leftarrow\bigcup_{l=1}^{2^{k}}\{\min I_{k}^{\prime}-l\}\)
        \(I_{k+1}^{\prime}\leftarrow I_{k}^{\prime}\cup I_{k}^{\mathrm{new}}\)
        for \(\ell=\min I_{k}^{\prime}-1\,:\,\min I_{k}^{\prime}-2^{k}\) do
            \(q_{\ell},p_{\ell}\leftarrow\Phi_{h}^{\circ(-1)}(q_{\ell+1},p_{\ell+1})\)
        endfor
    endif
    \(j_{k+1}\leftarrow j_{k}\)
    \(\mathrm{NoUTurns}\leftarrow\mathrm{NoUTurns}(I_{k+1}^{\prime},\mathcal{O}_{I_{k+1}^{\prime}}(q_{0},p_{0}))\)
    if \(\mathrm{NoUTurns}\) then
        \(I_{k+1}\leftarrow I_{k+1}^{\prime}\)
        \((j_{k+1},i_{k+1}^{\prime})\leftarrow\mathrm{Update}(j_{k},I_{k}^{\mathrm{new}},I_{k},q_{0},p_{0},\bar{U}_{k},\tilde{U}_{k})\)
    endif
    \(k\leftarrow k+1\)
endwhile
\(I_{f}\leftarrow I_{k}\), \(j_{f}\leftarrow j_{k}\)
return \((q_{j_{f}},I_{f},j_{f})\)
```
**Algorithm 1** Doubling dynamic orbits with the No U-Turn stopping rule.
```
Subroutine \(\mathrm{NoUTurns}(\mathsf{J},\mathcal{O}_{\mathsf{J}}(q_{0},p_{0}))\)
Input: set \(\mathsf{J}\subset\mathbb{Z}\) of \(2^{k}\) consecutive integers for some \(k>0\), orbit \(\mathcal{O}_{\mathsf{J}}(q_{0},p_{0})\) indexed by \(\mathsf{J}\)
Initialize: \(K\leftarrow\log_{2}|\mathsf{J}|\), \(j_{0}\leftarrow\min\mathsf{J}\), \(\mathrm{UTurnFound}\leftarrow\mathrm{False}\)
for \(k=1,\ldots,K-1\) do
    for \(h=1,\ldots,2^{K-k}\) do
        \(\ell\leftarrow j_{0}+(h-1)2^{k}\)
        if \(p_{\ell+2^{k}-1}^{\top}(q_{\ell+2^{k}-1}-q_{\ell})<0\) or \(p_{\ell}^{\top}(q_{\ell+2^{k}-1}-q_{\ell})<0\) then
            \(\mathrm{UTurnFound}\leftarrow\mathrm{True}\)
        endif
    endfor
endfor
if \(\mathrm{UTurnFound}\) then return False else return True endif
```
**Algorithm 2** U-turn checking.
```
Subroutine Update(\(j_{k},I_{k}^{\text{new}},I_{k},q_{0},p_{0},\bar{U}_{k},\tilde{U}_{k}\))
Input: \(j_{k}\) the current index, index sets \(I_{k}^{\text{new}},I_{k}\), initial states \(q_{0},p_{0}\), random variables \(\bar{U}_{k},\tilde{U}_{k}\)
\(\bar{\pi}_{i}\leftarrow\bar{\pi}(\Phi_{h}^{(i)}(q_{0},p_{0}))\,/\sum_{m\in I_{k}^{\text{new}}}\bar{\pi}(\Phi_{h}^{(m)}(q_{0},p_{0}))\), for \(i\in I_{k}\cup I_{k}^{\text{new}}\)
\(i_{k+1}^{\prime}\sim\text{Multinomial}((\bar{\pi}_{i})_{i\in I_{k}^{\text{new}}},I_{k}^{\text{new}},\tilde{U}_{k})\)
\(\bar{V}_{k}\leftarrow\mathbbm{1}\left\{\bar{U}_{k}\leq[1\wedge\sum_{m\in I_{k}^{\text{new}}}\bar{\pi}_{m}/\sum_{m\in I_{k}}\bar{\pi}_{m}]\right\}\)
\(j_{k+1}\leftarrow j_{k}\)
if \(\bar{V}_{k}=1\) then
    \(j_{k+1}\leftarrow i_{k+1}^{\prime}\)
endif
return \((j_{k+1},i_{k+1}^{\prime})\)
```
**Algorithm 3** Elementary step of the recursive sampling on the index set
In the rest of this section we give explicit expressions for the orbit and index selection kernels \(\text{p}_{h}\) and \(\text{q}_{h}\) and use Proposition 2 to prove, in full detail, that \(\pi\) is invariant for the NUTS transition kernel \(\text{K}_{h}^{\text{U}}\). An entirely different, but also fully mathematically detailed, proof has been given in [1] for the original slice variant of NUTS. We rely on the more general Proposition 2 and show that the orbit and index selection kernels \(\text{p}_{h}\) and \(\text{q}_{h}\) satisfy the assumption and conclusion of Corollary 3; the details consist in the more or less straightforward but laborious matter of unpacking the complexity inherent in Algorithm 1 and Algorithm 3. An alternative, considerably less detailed, presentation of the ideas is given in [3].
**Theorem 4**.: _Assume \(\boldsymbol{H}1\). The target distribution \(\pi\) is invariant for \(\text{K}_{h}^{\text{U}}\)._
Proof.: The result is a consequence of Proposition 2 in Section 2.2, Proposition 5 in Section 3.2 and Proposition 6 in Section 3.3.
Figure 1: Scheme of the construction of the index set \(I_{f}\) in Algorithm 1 based on [20, Figure 1]
### The orbit selection kernel \(\mathrm{p}_{h}\)
We let \((q_{0},p_{0})\in(\mathbb{R}^{d})^{2}\) be fixed throughout this section and specify here the orbit selection kernel \(\mathrm{p}_{h}\) for which \(\mathrm{p}_{h}(\cdot\mid q_{0},p_{0})\) is the distribution of \(I_{f}\), the index set returned by Algorithm 1 starting from \(q_{0},p_{0}\). Let \((V_{k})_{k=0}^{K_{m}-1}\in\{0,1\}^{K_{m}}\) be the sequence of i.i.d. random variables used in Algorithm 1. The sequence \((I_{k})_{k=0}^{K_{m}}\) of subintervals of \(\mathbb{Z}\) is constructed recursively via \((V_{k})_{k=0}^{K_{m}-1}\) by setting \(I_{0}=\{0\}\), and if \(V_{k}=1\) the \(2^{k}\) consecutive integers to the right of \(I_{k}\) are added to define \(I_{k+1}\), and if \(V_{k}=0\) the \(2^{k}\) consecutive integers to the left of \(I_{k}\) are added to define \(I_{k+1}\).
We first aim for an explicit expression for \(I_{k}\) in terms of the sequence \((V_{k})_{k=0}^{K_{m}-1}\). To this end, we introduce some additional notation. We denote binary sequences \((v_{k})_{k=0}^{K-1}\in\{0,1\}^{K}\), for \(K\in\mathbb{N}^{*}\), by \(v_{K-1}\ldots v_{0}\). We identify \(\{0,1\}^{K}\) with \(B_{K}=[0:2^{K}-1]\) via the bijection \(v_{K-1}\ldots v_{0}\to\sum_{k=0}^{K-1}v_{k}2^{k}\), i.e., any element \(v\in B_{K}\) is identified with its unique length \(K\) binary representation \(v_{K-1}\ldots v_{0}\in\{0,1\}^{K}\). We define the concatenation of \(a\in B_{K}\) and \(b\in B_{K^{\prime}}\) by \(ab=a_{K-1}\ldots a_{0}b_{K^{\prime}-1}\ldots b_{0}\) and denote the truncation of \(a\) to its \(n\) last bits by \(a|_{n}=a_{n-1}\ldots a_{0}\). Finally, for \(v\in B_{K}\) we denote
\[B_{K}(v)=\{-T_{-}^{(K)}(v),\ldots,T_{+}^{(K)}(v)\}=B_{K}-(2^{K}-1-v)\,\quad \text{where}\]
\[T_{-}^{(K)}(v)=\sum_{k=0}^{K-1}(1-v_{k})2^{k}\quad\text{and}\quad T_{+}^{(K)}( v)=\sum_{k=0}^{K-1}v_{k}2^{k}=v. \tag{10}\]
Equipped with these notations, by setting \(V=\sum_{i=0}^{K_{m}-1}V_{i}2^{i}\) we have \(I_{K}=B_{K}(V|_{K})\). The construction of \(B_{K}(V|_{K})\) is depicted in Figure 1.
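As a sanity check of this identity, the following small Python sketch (function names are ours) computes \(B_{K}(V|_{K})\) both from the closed form (10) and by the doubling construction of Algorithm 1:

```
def interval(bits):
    # bits[k] = V_k, the direction drawn at doubling step k (1 = extend right, 0 = extend left).
    K = len(bits)
    t_plus = sum(b << k for k, b in enumerate(bits))   # T_+^(K)(v) = v
    t_minus = (1 << K) - 1 - t_plus                    # T_-^(K)(v) = 2^K - 1 - v
    return list(range(-t_minus, t_plus + 1))           # B_K(v)

def interval_by_doubling(bits):
    # The same index set, built by successive doublings as in Algorithm 1.
    I = [0]
    for k, b in enumerate(bits):
        if b == 1:
            I = I + [max(I) + l for l in range(1, 2 ** k + 1)]
        else:
            I = [min(I) - l for l in range(2 ** k, 0, -1)] + I
    return I

assert interval([1, 0]) == interval_by_doubling([1, 0]) == [-2, -1, 0, 1]
```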
We define \(K_{f}\) as the step at which the algorithm, starting from \(q_{0}\) and \(p_{0}\), stops due to a U-turn. Moreover, let \(I_{f}(V)\) be the random index set returned by the algorithm: \(I_{f}(V)=B_{K_{f}}(V|_{K_{f}})\). In the following, we show that \(K_{f}=(S_{f}-1)\wedge K_{\mathrm{m}}\) where \(S_{f}\) is a stopping time for the filtration generated by the i.i.d. sequence of Bernoulli random variables \((V_{k})_{k=0}^{K_{m}-1}\). By establishing this relationship, we will be able to express \(I_{f}\) as a function of \((V_{k})_{k=0}^{K_{m}-1}\) and specify \(\mathrm{p}_{h}\). We now focus on the precise definition of \(S_{f}\) based on Algorithm 2. We say that a U-turn occurs between indices \(k\) and \(k^{\prime}\) belonging to \([-2^{K_{m}}+1:2^{K_{m}}-1]\) if, denoting \((q_{k},p_{k})=\Phi_{h}^{\circ(k)}(q_{0},p_{0})\), at least one of the two following inequalities holds:
\[p_{k^{\prime}}^{\top}(q_{k^{\prime}}-q_{k})<0\quad\text{or}\quad p_{k}^{\top} (q_{k^{\prime}}-q_{k})<0\;.\]
We need to specify the set of pairs of indices in \(I_{k}\) that are considered in Algorithm 2. Define
\[\mathscr{U}_{k,l,+}^{(K)} =\{v\in B_{K}:\ p_{i_{+}(K,k,l,v)}^{\top}(q_{i_{+}(K,k,l,v)}-q_{i_ {-}(K,k,l,v)})<0\}\;,\] \[\mathscr{U}_{k,l,-}^{(K)} =\{v\in B_{K}:\ p_{i_{-}(K,k,l,v)}^{\top}(q_{i_{+}(K,k,l,v)}-q_{i_ {-}(K,k,l,v)})<0\}\;,\]
where
\[i_{-}(K,k,l,v)=-T_{-}^{(K)}(v)+(l-1)2^{k},\qquad i_{+}(K,k,l,v)=-T_{-}^{(K)}(v) +l2^{k}-1\]
for \(v\in B_{K}\), \(K\in[K_{\mathrm{m}}]\), \(k\in[K-1]\) and \(l\in[2^{K-k}]\), and further
\[\mathscr{U}_{k}^{(K)}(q_{0},p_{0})=\bigcup_{l=1}^{2^{K-k}}\big{(}\mathscr{U}_{ k,l,+}^{(K)}\cup\mathscr{U}_{k,l,-}^{(K)}\big{)}\quad\text{and}\quad\mathscr{U} ^{(K)}(q_{0},p_{0})=\bigcup_{k=1}^{K-1}\mathscr{U}_{k}^{(K)}(q_{0},p_{0})\;, \tag{11}\]
with the convention \(\cup_{k=1}^{0}=\emptyset\). Then, the event \(\{V|_{K}\in\mathscr{U}^{(K)}\}\) corresponds to the event that a U-turn occurs at the \(K\)-th stage of the algorithm between two indices in \(I_{K}\); more precisely between
\(-T_{-}^{(K)}(V|_{K})+(l-1)2^{k}\) and \(-T_{-}^{(K)}(V|_{K})+l2^{k}-1\) for some \(k\in[K-1]\) and \(l\in[2^{K-k}]\). It is worth pointing out that, by construction, the event \(\{V|_{K}\in\mathscr{U}^{(K)}\}\) does not consider _all_ the pairs of points in \(\mathcal{O}_{B_{K}(V|_{K})}(q_{0},p_{0})\). For instance, Algorithm 2 does not verify if there is a U-turn between the pair of indices \(1\) and \(2\) or, more generally, between pairs of indices with different parity. The reason for this is primarily computational and allows for a significant reduction in the amount of memory used by the algorithm.\({}^{6}\)
Footnote 6: We remark here that, in one of our minor intentional differences from Stan 2.32, the stopping rule implemented in Stan checks slightly more pairs of indices for U-turns. Namely, the sets \(\mathscr{U}^{(K)}_{k,h,+}\) and \(\mathscr{U}^{(K)}_{k,h,-}\) are augmented with additional checks given by
\[\mathscr{U}^{(K)}_{k,h,++}=\{p_{-T_{-}^{(K)}(v|_{K})+h2^{k}}^{\top}(q_{-T_{-} ^{(K)}(v|_{K})+h2^{k}}-q_{-T_{-}^{(K)}(v|_{K})+(h-1)2^{k}})<0\}\]
and various symmetrizations in order to plug some gaps that are left in the checks in \(\mathscr{U}^{(K)}_{k,h,+}\) and \(\mathscr{U}^{(K)}_{k,h,-}\). The notation being heavy already, we do not write out these additional checks. While the additional checks can make a significant difference for the computational performance of the algorithm, our results and methods are independent of these details.
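To fix ideas, the following sketch (ours) enumerates the endpoint pairs \((i_{-}(K,k,l,v),i_{+}(K,k,l,v))\) examined through the sets in (11); the additional Stan checks mentioned in the footnote are not included. It makes explicit, for instance, that the pair \((1,2)\) is indeed never examined.

```python
def checked_pairs(K, v_bits):
    """Endpoint pairs (i_-, i_+) of the sub-orbits examined by the checks in (11),
    for the index set B_K(v|_K); v_bits = (v_0, ..., v_{K-1})."""
    T_minus = sum((1 - b) * 2**k for k, b in enumerate(v_bits))
    pairs = []
    for k in range(1, K):                      # k in [K-1]
        for l in range(1, 2 ** (K - k) + 1):   # l in [2^(K-k)]
            pairs.append((-T_minus + (l - 1) * 2**k, -T_minus + l * 2**k - 1))
    return pairs

# K = 3 and v|_3 = 111, i.e. the orbit grows only to the right and I_3 = {0, ..., 7}:
print(checked_pairs(3, [1, 1, 1]))
# [(0, 1), (2, 3), (4, 5), (6, 7), (0, 3), (4, 7)] -- the pair (1, 2) is never examined
```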
We are ready to define \(S_{f}\). Defining \(S:\cup_{K=1}^{K_{\mathrm{m}}}B_{K}\times(\mathbb{R}^{d})^{2}\to[K_{\mathrm{m}}]\) by
\[S(v,q_{0},p_{0})=\inf\{k\in[K]\,:\,v|_{k}\in\mathscr{U}^{(k)}(q_{0},p_{0})\}\;,\quad v\in B_{K}\;, \tag{12}\]
with \(\inf\emptyset=\infty\), we set \(S_{f}=S(V,q_{0},p_{0})\). Finally, \(\mathrm{p}_{h}(\cdot\mid q_{0},p_{0})\) is defined as the distribution of the random variable \(I_{f}(V)\):
\[I_{f}(V)=B_{K_{f}}(V|_{K_{f}})\;,\;\text{with}\;K_{f}=(S_{f}-1)\wedge K_{ \mathrm{m}}\;. \tag{13}\]
By construction, \(S_{f}\) is a stopping time with respect to the filtration generated by the sequence \((V_{k})_{k=0}^{K_{\mathrm{m}}-1}\) and for any \(v\in B_{K^{\prime}}\), \(K^{\prime}\geq K\), it holds that
\[\text{if}\;S(v,q_{0},p_{0})=K\;,\;\;\;\;\;S(v,q_{0},p_{0})=S(v|_{K},q_{0},p_{0 })\;\;. \tag{14}\]
With a slight abuse of notation we drop the dependence on \(q_{0},p_{0}\) in \(S\) defined in (12) and simply denote \(S(v,q_{0},p_{0})\) by \(S(v)\) as long as \(q_{0}\), \(p_{0}\) are considered fixed.
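The whole construction of \(\mathrm{p}_{h}\) can be prototyped in a few lines; the sketch below (ours, with a one-dimensional standard Gaussian target and arbitrary illustrative values of \(h\) and \(K_{\mathrm{m}}\)) draws the bits \(V\), evaluates the stopping time of (12) through the U-turn sets of (11), and returns the selected index set \(I_{f}=B_{K_{f}}(V|_{K_{f}})\) of (13).

```python
import numpy as np

rng = np.random.default_rng(0)
grad_U = lambda q: q          # standard Gaussian potential U(q) = q^2 / 2 (illustrative)
h, K_max = 0.3, 4             # hypothetical step size and maximal number of doublings

def leapfrog(q, p, step):
    p = p - 0.5 * step * grad_U(q)
    q = q + step * p
    p = p - 0.5 * step * grad_U(q)
    return q, p

def iterates(q0, p0):
    """Cache (q_j, p_j) = Phi_h^{(j)}(q0, p0) for |j| <= 2^K_max - 1."""
    st, n = {0: (q0, p0)}, 2 ** K_max
    q, p = q0, p0
    for j in range(1, n):
        q, p = leapfrog(q, p, h)
        st[j] = (q, p)
    q, p = q0, p0
    for j in range(1, n):
        q, p = leapfrog(q, p, -h)       # the leapfrog step with -h is the inverse map
        st[-j] = (q, p)
    return st

def uturn(st, i, j):
    """U-turn criterion between indices i < j."""
    (qi, pi), (qj, pj) = st[i], st[j]
    return pj * (qj - qi) < 0 or pi * (qj - qi) < 0

def selected_index_set(q0, p0):
    """Draw V, evaluate the stopping rule (11)-(12), return I_f = B_{K_f}(V|_{K_f})."""
    st = iterates(q0, p0)
    bits = rng.integers(0, 2, K_max)    # V_0, ..., V_{K_max - 1}
    K_f = K_max
    for K in range(1, K_max + 1):
        T_minus = sum((1 - b) * 2**k for k, b in enumerate(bits[:K]))
        if any(uturn(st, -T_minus + (l - 1) * 2**k, -T_minus + l * 2**k - 1)
               for k in range(1, K) for l in range(1, 2 ** (K - k) + 1)):
            K_f = K - 1                 # S_f = K, hence K_f = S_f - 1
            break
    T_minus = sum((1 - b) * 2**k for k, b in enumerate(bits[:K_f]))
    T_plus = sum(b * 2**k for k, b in enumerate(bits[:K_f]))
    return list(range(-T_minus, T_plus + 1))

print(selected_index_set(1.0, 0.5))
```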
The following result gives an expression for the orbit selection probabilities \(\mathrm{p}_{h}\) that will be used in verifying that the symmetry condition (6) in Corollary 3 is satisfied by NUTS.
**Lemma 1**.: _Assume **H**1. For any \(K\in[K_{\mathrm{m}}]\) and \(a\in B_{K}\),_
\[\mathrm{p}_{h}(B_{K}(a)\mid q_{0},p_{0})=\begin{cases}\sum_{b\in\{0,1\}}2^{-K -1}\mathbb{1}\{S(ba)=K+1\},&K<K_{\mathrm{m}}\\ 2^{-K_{\mathrm{m}}}\mathbb{1}\{S(a)>K_{\mathrm{m}}\},&K=K_{\mathrm{m}}\;.\end{cases} \tag{15}\]
_If \(\mathrm{J}\subset\mathbb{Z}\) is not of the form \(\mathrm{J}=B_{K}(a)\) for some \(K\in[K_{\mathrm{m}}]\) and \(a\in B_{K}\), \(\mathrm{p}_{h}(\mathrm{J}|q_{0},p_{0})=0\). Finally, \((q_{0}^{\prime},p_{0}^{\prime})\in(\mathbb{R}^{d})^{2}\mapsto\mathrm{p}_{h}(B_{K}(a)\mid q_{0}^{\prime},p_{0}^{\prime})\) is measurable._
Proof.: For \(K<K_{\mathrm{m}}\) and \(a\in B_{K}\) we have
\[\mathrm{p}_{h}(B_{K}(a)\mid q_{0},p_{0})=\sum_{b\in B_{K_{\mathrm{m }}}:b|_{K}=a|_{K}}\mathbb{P}(V_{0}=b_{0},\ldots,V_{K_{\mathrm{m}}-1}=b_{K_{ \mathrm{m}}-1})\mathbb{1}\{S(b)=K+1\}\] \[=\sum_{b\in B_{K_{\mathrm{m}}}:b|_{K}=a|_{K}}2^{-K_{\mathrm{m}}} \mathbb{1}\{S(b)=K+1\}=\sum_{b^{\prime}\in B_{K_{\mathrm{m}}-K}}2^{-K_{ \mathrm{m}}}\mathbb{1}\{S(b^{\prime}a|_{K})=K+1\}\] \[=\sum_{b^{\prime}\in B_{K_{\mathrm{m}}-K-1}}\sum_{b^{\prime\prime} \in\{0,1\}}2^{-K_{\mathrm{m}}}\mathbb{1}\{S(b^{\prime}b^{\prime\prime}a|_{K})=K +1\}\;,\]
which completes the proof of (15) using (14). The case \(K=K_{\mathrm{m}}\) is clear and this completes the proof of the first part of the statement.
Finally, the measurability of \((q,p)\in(\mathbb{R}^{d})^{2}\mapsto\mathrm{p}_{h}(B_{K}(a)\mid q,p)\) can be deduced from the measurability of \((q,p)\in(\mathbb{R}^{d})^{2}\mapsto S(a,q,p)=\inf\{k\in[K]\,:\,a|_{k}\in\mathscr{U}^{(k)}(q,p)\}\), which in turn is implied by the measurability of
\[(q,p)\in(\mathbb{R}^{d})^{2}\mapsto\mathbb{1}_{\mathscr{U}^{(k)}(q,p)}(a|_{k})=\mathbb{1}_{\mathscr{U}^{(k)}(a|_{k})}(q,p)\]
for \(k\in[K]\), where \(\mathscr{U}^{(k)}(v)=\{(q,p)\in(\mathbb{R}^{d})^{2}:v\in\mathscr{U}^{(k)}(q,p)\}\) are open sets. Namely, they are pre-images of open sets under continuous functions, as the maps \((q,p)\in(\mathbb{R}^{d})^{2}\mapsto\Phi_{h}^{\circ(j)}(q,p)\) for \(j\in[-2^{K}+1:2^{K}-1]\) are continuous by \(\mathbf{H}1\).
We may then deduce
**Proposition 5**.: _Assume **H**1. The orbit selection kernel \(\mathrm{p}_{h}\) satisfies the condition (6)._
Proof.: Let \(\mathsf{J}\subset\mathbb{Z}\) with \(\mathrm{p}_{h}(\mathsf{J}\mid q_{0},p_{0})>0\) and let \(-j\in\mathsf{J}\). Then, by Lemma 1, \(\mathsf{J}=B_{K}(a)\) for \(K\in[K_{\mathrm{m}}]\) and \(a\in B_{K}\). Let \(c=c_{K-1}\ldots c_{0}\in\{0,1\}^{K}\) denote the unique binary sequence for which \(B_{K}(c)=\mathsf{J}+j\). Then \(\mathcal{O}_{\mathsf{J}}(q_{0},p_{0})=\mathcal{O}_{\mathsf{J}+j}(\Phi_{h}^{ \circ(-j)}(q_{0},p_{0}))=\mathcal{O}_{B_{K}(c)}(\Phi_{h}^{\circ(-j)}(q_{0},p_{ 0}))\) by (5) and as a result of the construction \(S(b^{\prime}c,\Phi_{h}^{\circ(-j)}(q_{0},p_{0}))=S(b^{\prime}a,q_{0},p_{0})\) for any \(b^{\prime}\) in \(\{0,1\}\). It follows from Lemma 1 and \(\mathsf{J}+j=B_{K}(c)\) that
\[\mathrm{p}_{h}(\mathsf{J}+j\mid\Phi_{h}^{\circ(-j)}(q_{0},p_{0})) =2^{-K-1}\sum_{b^{\prime}\in\{0,1\}}\mathbb{1}\{S(b^{\prime}c, \Phi_{h}^{\circ(-j)}(q_{0},p_{0}))=K+1\}\] \[=2^{-K-1}\sum_{b^{\prime}\in\{0,1\}}\mathbb{1}\{S(b^{\prime}a,q_{ 0},p_{0})=K+1\}=\mathrm{p}_{h}(\mathsf{J}\mid q_{0},p_{0})\]
for \(K<K_{\mathrm{m}}\). For \(K=K_{\mathrm{m}}\), applying Lemma 1 again yields
\[\mathrm{p}_{h}(\mathsf{J}+j\mid\Phi_{h}^{\circ(-j)}(q_{0},p_{0}))=2^{-K_{m}}= \mathrm{p}_{h}(\mathsf{J}\mid q_{0},p_{0})\;.\]
### The index selection kernel \(\mathrm{q}_{h}\)
We still consider \((q_{0},p_{0})\in(\mathbb{R}^{d})^{2}\) to be fixed, and in addition we consider an index set \(\mathsf{l}\subset\mathbb{Z}\) satisfying \(\mathrm{p}_{h}(\mathsf{l}\mid q_{0},p_{0})>0\) to also be fixed throughout this section. By Corollary 3 and Proposition 5, in order to keep the target distribution \(\pi\) invariant by \(\mathrm{K}_{h}^{\mathsf{l}}\) it is sufficient to show that the finitely supported distribution \(\bar{\pi}(\cdot\mid\mathsf{l},q_{0},p_{0})\) defined as
\[\bar{\pi}(j\mid\mathsf{l},q_{0},p_{0})=\frac{\bar{\pi}(\Phi_{h}^{\circ(j)}(q_{ 0},p_{0}))}{\sum_{j^{\prime}\in\mathsf{l}}\bar{\pi}(\Phi_{h}^{\circ(j^{\prime}) }(q_{0},p_{0}))}\;,\qquad j\in\mathsf{l}\;, \tag{16}\]
is invariant for the transition kernel \(\bar{\mathrm{q}}_{h}(\cdot,\cdot\mid\mathsf{l},q_{0},p_{0})\) defined as
\[\bar{\mathrm{q}}_{h}(j,a\mid\mathsf{l},q_{0},p_{0})=\mathrm{q}_{h}(a-j\mid \mathsf{l}-j,\Phi_{h}^{\circ(j)}(q_{0},p_{0}))\;,\qquad a,j\in\mathbb{Z}\;. \tag{17}\]
A crucial step in showing this property is an explicit expression for the index selection kernel \(\mathrm{q}_{h}\) defined by Algorithm 1.
We remark that a simple way to ensure that \(\bar{\pi}(\cdot\mid\mathsf{l},q_{0},p_{0})\) is invariant for \(\bar{\mathrm{q}}_{h}(\cdot,\cdot\mid\mathsf{l},q_{0},p_{0})\) would be to replace the index selection kernel \(\mathrm{q}_{h}\), which was defined as the distribution of \(j_{f}\) from Algorithm 1, with sampling independently from \(\bar{\pi}(\cdot\mid\mathsf{l},q_{0},p_{0})\). This, in fact, was how index
selection was implemented in certain versions of Stan. However, this choice would not encourage the selection of distant states and thus can be expected to be less efficient.
As implemented by Algorithms 1 and 3, the index selection kernel \(\mathrm{q}_{h}\) can be expressed recursively as follows. Let \(v=(v_{k})_{k=0}^{K_{\mathrm{m}}-1}\in B_{K_{\mathrm{m}}}\) and denote, for any \(K\in[K_{\mathrm{m}}]\), \(\mathsf{l}_{v,K}^{\mathrm{old}}=B_{K}(v|_{K})\) and \(\mathsf{l}_{v,K}^{\mathrm{new}}=\mathsf{l}_{v,K+1}^{\mathrm{old}}\setminus \mathsf{l}_{v,K}^{\mathrm{old}}\). For \(K<S(v,q_{0},p_{0})\) and \(j\in\mathsf{l}_{v,K}^{\mathrm{old}}\), the index selection kernel \(\mathrm{q}_{h}\) satisfies
\[\mathrm{q}_{h}(j\mid\mathsf{l}_{v,K}^{\mathrm{old}},q_{0},p_{0})=(1-R_{v|_{K- 1}})\mathrm{q}_{h}(j\mid\mathsf{l}_{v,K-1}^{\mathrm{old}},q_{0},p_{0})+R_{v|_{ K-1}}\bar{\pi}(j\mid\mathsf{l}_{v,K-1}^{\mathrm{new}},q_{0},p_{0}), \tag{18}\]
where \(R_{v|_{K}}=1\wedge[\bar{\pi}(\mathsf{l}_{v,K}^{\mathrm{new}})/\bar{\pi}( \mathsf{l}_{v,K}^{\mathrm{old}})]\) with the shorthand notation
\[\bar{\pi}(\mathsf{J})=\bar{\pi}_{q_{0},p_{0}}(\mathsf{J})=\sum_{j\in\mathsf{J }}\bar{\pi}(\Phi_{h}^{\circ(j)}(q_{0},p_{0}))\;,\qquad\mathsf{J}\subset\mathbb{ Z}\;,\]
and where \(\mathrm{q}_{h}(j\mid\{0\},q_{0},p_{0})=\mathbb{1}_{0}(j)\). This mechanism is depicted in Figure 2. When we expand the recursion starting from \(B_{K}(v|_{K})=\mathsf{l}_{v,K}^{\mathrm{old}}\), we obtain the following formula:
\[\mathrm{q}_{h}(j\mid B_{K}(v|_{K}),q_{0},p_{0})=\sum_{k=0}^{K-1}\left(\prod_{ \ell=k+1}^{K-1}(1-R_{v|_{\ell}})\right)R_{v|_{k}}\bar{\pi}(j\mid\mathsf{l}_{v, k}^{\mathrm{new}},q_{0},p_{0})+\mathbb{1}_{0}(j)\prod_{\ell=0}^{K-1}(1-R_{v|_{ \ell}})\;.\]
Note that since \(\mathsf{l}_{v,k}^{\mathrm{new}}\) and \(\mathsf{l}_{v,k^{\prime}}^{\mathrm{new}}\) are disjoint for \(k,k^{\prime}\in[K_{\mathrm{m}}]\), \(k\neq k^{\prime}\), exactly one of the terms in the expression above is nonzero for \(j\in B_{K}(v|_{K})\). This is the desired explicit expression for \(\mathrm{q}_{h}(\cdot\mid\mathsf{l},q_{0},p_{0})\), though the notation is quite cumbersome for analyzing its properties.
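The recursion (18) corresponds to the following progressive sampling mechanism, sketched here (by us) with arbitrary positive weights standing in for \(\bar{\pi}(\Phi_{h}^{\circ(j)}(q_{0},p_{0}))\): at each doubling, the current index is either kept, or replaced, with probability \(R_{v|_{K}}\), by a draw from the newly added sub-orbit. The printed empirical frequencies can be compared with the expanded formula above.

```python
import numpy as np
rng = np.random.default_rng(1)

def index_sets(bits):
    """I^old_{v,K} = B_K(v|_K) for K = 0, ..., len(bits)."""
    sets = [[0]]
    for k, b in enumerate(bits):
        old = sets[-1]
        new = (list(range(old[-1] + 1, old[-1] + 2**k + 1)) if b == 1
               else list(range(old[0] - 2**k, old[0])))
        sets.append(old + new if b == 1 else new + old)
    return sets

def sample_index(bits, weight):
    """One draw from q_h(. | B_K(v|_K)) via the progressive rule behind (18)."""
    sets = index_sets(bits)
    j = 0
    for K in range(len(bits)):
        old, full = sets[K], sets[K + 1]
        new = [i for i in full if i not in old]
        w_old, w_new = sum(weight[i] for i in old), sum(weight[i] for i in new)
        if rng.random() < min(1.0, w_new / w_old):   # move to the newly added sub-orbit
            probs = np.array([weight[i] for i in new]) / w_new
            j = rng.choice(new, p=probs)
    return j

# Arbitrary positive weights standing in for pi_bar(Phi^(j)(q0, p0))
K = 3
weight = {j: rng.uniform(0.1, 1.0) for j in range(-2**K + 1, 2**K)}
bits = [1, 0, 1]
draws = [sample_index(bits, weight) for _ in range(20000)]
support = index_sets(bits)[-1]
print({j: round(draws.count(j) / len(draws), 3) for j in support})
```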
In order to obtain a more manageable form of the explicit expression for \(\mathrm{q}_{h}(\cdot\mid\mathsf{l},q_{0},p_{0})\) we perform the following reduction to the case where \(\mathsf{l}\) is replaced by \(B_{K}\) for some \(K\in[K_{\mathrm{m}}]\). For the rest of this section, we let \(K\in[K_{\mathrm{m}}]\) and \(v\in B_{K_{\mathrm{m}}}\) be fixed such that \(\mathsf{l}=B_{K}(v|_{K})=B_{K}-(2^{K}-1-v|_{K})\). In addition, consider \(\iota\), the unique increasing bijection from \(\mathsf{l}\) to \(B_{K}\), i.e.,
\[\iota(c)=c+(2^{K}-1-v|_{K})\quad\text{for any }c\in\mathsf{l}\;\;. \tag{19}\]
**Lemma 2**.: _Assume **H**1. For any \(a,b\in B_{K}\) we have_
\[\bar{\mathrm{q}}_{h}(a,b\mid B_{K},\Phi_{h}^{\circ(-(2^{K}-1-v|_{K}))}(q_{0},p _{0}))=\bar{\mathrm{q}}_{h}(\iota^{-1}(a),\iota^{-1}(b)\mid\mathsf{l},q_{0},p _{0})\;.\]
Proof.: See Appendix B.1.
For \(a,b\in B_{K}\), denote by
\[\hat{\mathrm{q}}_{h,K}(a,b)=\bar{\mathrm{q}}_{h}(a,b\mid B_{K},\Phi_{h}^{\circ( -(2^{K}-1-v|_{K}))}(q_{0},p_{0}))\;, \tag{20}\]
\[\hat{\pi}_{K}(a)=\bar{\pi}(a\mid B_{K},\Phi_{h}^{\circ(-(2^{K}-1-v|_{K}))}(q_{ 0},p_{0}))\;. \tag{21}\]
Then, Lemma 2 and definitions (16)-(17) imply that for any \(a,b\in\mathsf{l}\),
\[\hat{\mathrm{q}}_{h,K}(\iota(a),\iota(b))=\bar{\mathrm{q}}_{h}(a,b\mid\mathsf{ l},q_{0},p_{0})\text{ and }\hat{\pi}_{K}(\iota(a))=\bar{\pi}(a\mid\mathsf{l},q_{0},p_{0})\;. \tag{22}\]
As a result and as already stated, we can restrict to the case \(\mathsf{l}=B_{K}\).
Algorithm 1 defines recursively a binary tree as in Figure 1. Indeed, each Bernoulli random variable \(V_{k}\) increases the depth of the tree by adding a new branch to the left (\(V_{k}=0\)) or to the right (\(V_{k}=1\)). Then, to select the index \(j_{f}\in I_{f}=\mathsf{l}\), this binary tree is explored backward (see
Figure 2), i.e., starting from the root corresponding to the last bit \(V_{K-1}\). Based on this observation we introduce the following notation. For any \(n\in[K]\), \(u\in B_{n}\) define
\[\check{\pi}_{n}(u)=\bar{\pi}_{\Phi_{h}^{\circ(-(2^{K}-1-v|_{K}))}(q_{0},p_{0})}(\{a\in B_{K}:\ a|^{n}=u\})\;, \tag{23}\]
where we recall that \(a|^{n}=a_{K-1}\ldots a_{K-n}\) is the truncation to the \(n\) last bits. The quantity \(\check{\pi}_{n}(u)\) is the sum of the weights of states associated to indices \(a\) such that \(a|^{n}=u\). Writing \(1^{c}=0\) and \(0^{c}=1\), for any \(t\in[K]\) and \(a\in B_{K}\) we define
\[\Pi(a,t)=\prod_{i=0}^{t-1}\left(1-\left(1\wedge\frac{\check{\pi}_{i+1}(a|^{i}a _{K-i-1}^{c})}{\check{\pi}_{i+1}(a|^{i}a_{K-i-1})}\right)\right)\;,\;\Pi(a,0)=1\;, \tag{24}\]
where we denote \(a|^{0}b=b\) for any \(a\in B_{K}\) and \(b\in\{0,1\}\). We are ready to state the explicit expression for \(\hat{\mathrm{q}}_{h}\) and thus for \(\bar{\mathrm{q}}_{h}\) and \(\mathrm{q}_{h}\).
**Lemma 3**.: _Assume **H**1. For \(a,b\in B_{K}\),_
\[\hat{\mathrm{q}}_{h,K}(a,b)=\begin{cases}\Pi(a,K)&\text{if }n=K\\ \Pi(a,n)\Big{(}1\wedge\frac{\check{\pi}_{n+1}(a|^{n}a_{K-n-1}^{c})}{\check{\pi }_{n+1}(a|^{n}a_{K-n-1})}\Big{)}\frac{\check{\pi}_{K}(b)}{\check{\pi}_{n+1}(a |^{n}a_{K-n-1}^{c})}&\text{otherwise}\;,\end{cases}\]
_where \(n=\max\big{(}\{i\in[K]:\ a|^{i}=b|^{i}\}\cup\{0\}\big{)}\). In addition, it holds that_
\[\mathrm{q}_{h}(a\mid\mathsf{l},q_{0},p_{0})=\hat{\mathrm{q}}_{h,K}(\iota(0), \iota(a))\;,\]
_where \(\iota\) is defined in (19). Moreover, \(\tilde{q}_{0},\tilde{p}_{0}\to\mathrm{q}_{h}(a\mid\mathsf{l},\tilde{q}_{0}, \tilde{p}_{0})\) is continuous._
Proof.: The explicit expression follows from (18); see Appendix B.2.
We remark that if \(a|^{n}\neq b|^{n}\) for some for \(a,b\in B_{K}\) and \(n\in[K]\), then
\[a|^{l}\neq b|^{l}\;, \tag{25}\]
for any \(l\in[n:K]\), so that for any \(l\in[0:n]\) we have \(a|^{l}=b|^{l}\) when \(n=\max[\{i\in[K]:\ a|^{i}=b|^{i}\}\cup\{0\}]\). With the explicit expression of Lemma 3, the reversibility of \(\hat{\mathrm{q}}_{h,K}\) follows easily.
**Proposition 6**.: _The transition kernel \(\hat{\mathrm{q}}_{h,K}\) is reversible for \(\hat{\pi}_{K}\) which implies that the transition kernel \(\bar{\mathrm{q}}_{h}(\cdot,\,\cdot\mid\mathsf{l},q_{0},p_{0})\) leaves \(\bar{\pi}(.\mid\mathsf{l},q_{0},p_{0})\) invariant._
Proof.: Let \(a,b\in B_{K}\) and \(n\) be defined as in Lemma 3. When \(b\neq a\) (the case \(b=a\) is trivial), we have
\[\hat{\pi}_{K}(a)\hat{\mathrm{q}}_{h,K}(a,b) =\Pi(a,n)\Big{(}1\wedge\frac{\check{\pi}_{n+1}(a|^{n}a_{K-n-1}^{ c})}{\check{\pi}_{n+1}(a|^{n}a_{K-n-1})}\Big{)}\frac{\hat{\pi}_{K}(b)\hat{\pi}_{K} (a)}{\check{\pi}_{n+1}(a|^{n}a_{K-n-1}^{c})}\] \[=\Pi(a,n)\Big{(}\frac{1}{\check{\pi}_{n+1}(a|^{n}a_{K-n-1}^{c})} \wedge\frac{1}{\check{\pi}_{n+1}(a|^{n}a_{K-n-1})}\Big{)}\hat{\pi}_{K}(b)\hat{ \pi}_{K}(a)\] \[=\hat{\pi}_{K}(b)\hat{\mathrm{q}}_{h,K}(b,a)\]
since \(a|^{n}=b|^{n}\). From (22)-(16)-(17) and Corollary 3, the implications are clear.
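Both the explicit expression of Lemma 3 and the reversibility established in Proposition 6 are easy to check numerically. The following sketch (ours) uses random positive weights in place of \(\hat{\pi}_{K}\), evaluates \(\check{\pi}_{n}\), \(\Pi(a,t)\) and \(\hat{\mathrm{q}}_{h,K}\) as in (23)-(24) and Lemma 3, and verifies that \(\hat{\mathrm{q}}_{h,K}\) is a Markov kernel satisfying detailed balance with respect to \(\hat{\pi}_{K}\).

```python
import itertools, random
random.seed(0)

K = 4
# Positive weights standing in for pi_hat_K on B_K; elements of B_K are identified with
# bit tuples (a_{K-1}, ..., a_0), so that a|^n corresponds to the first n entries a[:n].
w = {a: random.uniform(0.1, 1.0) for a in itertools.product((0, 1), repeat=K)}
Z = sum(w.values())
pi_hat = {a: w[a] / Z for a in w}

def pi_check(u):
    """pi_check_n(u) of (23): total mass of indices whose n leading bits equal u."""
    return sum(pi_hat[a] for a in pi_hat if a[:len(u)] == u)

def Pi(a, t):
    """Pi(a, t) of (24)."""
    out = 1.0
    for i in range(t):
        r = pi_check(a[:i] + (1 - a[i],)) / pi_check(a[:i + 1])
        out *= 1.0 - min(1.0, r)
    return out

def q_hat(a, b):
    """Explicit expression of Lemma 3 for the index selection kernel."""
    n = max(i for i in range(K + 1) if a[:i] == b[:i])
    if n == K:
        return Pi(a, K)
    r = pi_check(a[:n] + (1 - a[n],)) / pi_check(a[:n + 1])
    return Pi(a, n) * min(1.0, r) * pi_hat[b] / pi_check(a[:n] + (1 - a[n],))

for a in pi_hat:
    assert abs(sum(q_hat(a, b) for b in pi_hat) - 1.0) < 1e-12      # stochastic
    for b in pi_hat:                                                # reversible (Prop. 6)
        assert abs(pi_hat[a] * q_hat(a, b) - pi_hat[b] * q_hat(b, a)) < 1e-12
print("detailed balance of q_hat_{h,K} with respect to pi_hat_K verified")
```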
In Section 4, we are interested in the irreducibility of \(\mathrm{K}_{h}^{\mathsf{U}}\). To this end, we rely on the irreducibility of \(\bar{\mathrm{q}}_{h}(\cdot\mid\mathsf{l},q_{0},p_{0})\), which is a consequence of the following result. In words, while \(\hat{\mathrm{q}}_{h,K}(a,b)\) can be equal to \(0\) for most elements \(b\in\mathsf{l}\) (i.e., when \(b\) is in an old set with small weight; see Figure 2), it can be shown that there exists \(j_{0}\in\{1,2\}\) such that \(\hat{\mathrm{q}}_{h,K}^{j_{0}}(a,b)>0\).
**Proposition 7**.: _Assume \(\mathbf{H}\)1._
1. _For any_ \(a,b\) _in_ \(B_{K}\)_, there exists_ \(j_{0}\) _in_ \(\{1,2\}\) _such that_ \(\hat{\mathrm{q}}_{h,K}^{j_{0}}(a,b)>0\)_. In particular, it follows that_ \(\bar{\mathrm{q}}_{h}(\cdot\mid\mathsf{l},q_{0},p_{0})\) _is irreducible._
2. _In addition, suppose that for any_ \(n\in[K]\) _and_ \(a,b\in B_{K}\) _such that_ \(a|^{n}\neq b|^{n}\)_, we have_ \(\check{\pi}_{n}(a)\neq\check{\pi}_{n}(b)\)_. Then, for any_ \(a,b\in B_{K}\) _and for any_ \(j\in\mathbb{N}\)_, we have_ \(\hat{\mathrm{q}}_{h,K}^{2+j}(a,b)>0\)_. In particular, it follows that_ \(\bar{\mathrm{q}}_{h}(\cdot\mid\mathsf{l},q_{0},p_{0})\) _is irreducible and aperiodic._
Proof.: The proof is postponed to Appendix C.
## 4 Ergodicity
The purpose of this section is to establish ergodicity of the NUTS sampler defined in the previous section. To this end, we consider three explicitly verifiable conditions under which the ergodicity can be shown: one concerns the step size \(h>0\) of the integrator (\(\mathbf{H}2(h,K_{\mathrm{m}})\)), one restricts to real analytic potentials \(U\) whose Hessian vanishes at infinity (\(\mathbf{H}3\)), and the last one assumes \(\pi\) is a Gaussian distribution (\(\mathbf{H}4\)).
Define for \(s\geq 0\), \(\mathcal{V}_{1}(s)=1+s/2+s^{2}/4\).
**H2**\((h,K_{\mathrm{m}})\).: _The step size \(h>0\) and \(K_{\mathrm{m}}\) satisfy the following inequality:_
\[[(1+h\mathsf{L}_{1}^{1/2}\mathcal{V}_{1}(h\mathsf{L}_{1}^{1/2}))^{2^{K_{ \mathrm{m}}}}-1]<1/4\;.\]
\(\mathbf{H}2(h,K_{\mathrm{m}})\) is nearly the same condition as the one considered in [16, Eq (10), p.10] to prove HMC ergodicity. We can relax this boundedness condition on the maximal integration time \(h2^{K_{\mathrm{m}}}\) by assuming an additional regularity condition on \(U\). To this end, we recall that a function \(f:\mathbb{R}^{d}\to\mathbb{R}\) is said to be real analytic if it can be locally expanded as a power series, i.e., for every \(x_{0}\in\mathbb{R}^{d}\) there exist a neighborhood \(\mathsf{V}\) and a sequence \((P_{n})_{n=0}^{\infty}\) of \(n\)-homogeneous polynomials\({}^{7}\) such that
for any \(x\in\mathsf{V}\), \(f(x)=\sum_{n=0}^{\infty}P_{n}(x-x_{0})\).
**H3**.: _The potential \(U:\mathbb{R}^{d}\mapsto\mathbb{R}\) is real analytic and in addition \(\lim_{|q|\to\infty}\|\nabla^{2}U(q)\|=0\)._
While **H3** excludes quadratic potentials, we also consider this case separately.
**H4**.: _The potential \(U\) is a quadratic form, i.e., \(\pi\) is a non-degenerate Gaussian distribution._
Before stating our first results, we introduce some definitions relative to Markov chain theory which are at the basis of our statements. A kernel \(\mathrm{K}\) is said to be irreducible if it admits an accessible small set [12, Definition 9.2.1]. A set \(\mathsf{E}\in\mathcal{B}(\mathbb{R}^{d})\) is _accessible_ for the transition kernel \(\mathrm{K}\) if for any \(q\in\mathbb{R}^{d}\) we have \(\sum_{n=0}^{\infty}\mathrm{K}^{n}(q,\mathsf{E})>0\). A set \(\mathsf{C}\subset\mathbb{R}^{d}\) is _\(n\)-small_ for \(\mathrm{K}\) with \(n\in\mathbb{N}^{*}\) if there exist \(\varepsilon>0\) and a probability measure \(\mu\) on \(\mathbb{R}^{d}\) such that \(\mathrm{K}^{n}(q,\mathsf{A})\geq\varepsilon\mu(\mathsf{A})\) for any \(q\in\mathsf{C}\) and any measurable set \(\mathsf{A}\subset\mathbb{R}^{d}\). Let \((X_{n})_{n\geq 0}\) be the canonical chain associated with \(\mathrm{K}\) defined on the canonical space \(((\mathbb{R}^{d})^{\mathbb{N}},\mathcal{B}(\mathbb{R}^{d})^{\otimes\mathbb{N}})\). Defining, for any measurable set \(\mathsf{A}\subset\mathbb{R}^{d}\), the number of visits \(N_{\mathsf{A}}=\sum_{i=0}^{\infty}\mathbb{1}_{\mathsf{A}}(X_{i})\), the kernel \(\mathrm{K}\) is said to be Harris recurrent if it is irreducible and \(\mathbb{P}_{q}(N_{\mathsf{A}}=\infty)=1\) for any \(q\in\mathbb{R}^{d}\) and any accessible set \(\mathsf{A}\), and positive if it admits an invariant probability measure.
**H5** (\((h,K_{\rm m})\)).:
1. _For any_ \(q\in\mathbb{R}^{d}\)_, the following set is dense,_ \[\mathsf{F}_{q,\neq 0}=\{p\in\mathbb{R}^{d}:F_{q}^{T_{1},T_{2}}(p)\neq 0,\,(T_{1},T_{2})\in[-2^{K_{\rm m}}+1:2^{K_{\rm m}}-1]^{2},\,T_{1}\neq T_{2}\}\;.\]
2. _For any_ \(q_{0}\in\mathbb{R}^{d}\)_, there exist_ \(p_{0}\in\mathbb{R}^{d}\)_,_ \(r_{H}>0\) _such that for any_ \(T\in[-2^{K_{\rm m}}+1:2^{K_{\rm m}}-1]\) _with_ \(T\neq 0\)_,_ \[\psi_{q_{0}}^{(T)}|_{\mathrm{B}(p_{0},r_{H})}:p\in\mathrm{B}(p_{0},r_{H}) \mapsto\mathrm{proj}_{1}\,\Phi_{h}^{\circ(T)}(q_{0},p)\] _is a local homeomorphism._
In contrast to the easily verifiable \(\mathbf{H}2(h,K_{\rm m})\), \(\mathbf{H}3\) or \(\mathbf{H}4\), the condition \(\mathbf{H}5(h,K_{\rm m})\)-(i) is technical but less stringent, since it focuses on the pathological cases related to the stopping time that cause the main technical difficulties in the proof of irreducibility. This assumption allows us to present the different steps of the proof of Theorem 8 clearly and precisely.
**Theorem 9**.: _Assume \(\mathbf{H}1\), \(\mathbf{H}5(h,K_{\rm m})\), for \(h>0\) and \(K_{\rm m}\in\mathbb{N}_{>0}\). Then, the conclusions (i) and (ii) of Theorem 8 hold._
Theorem 9 is a consequence of the general result [26, Theorem 14.0.1] and our results Theorem 11 and Theorem 13 below. At this stage, Theorem 8 follows once we show that each of the sets of assumptions (\(\mathbf{H}1\), \(\mathbf{H}2(h,K_{\rm m})\)), (\(\mathbf{H}1\), \(\mathbf{H}3\)) and \(\mathbf{H}4\) is strictly stronger than \(\mathbf{H}5(h,K_{\rm m})\):
**Proposition 10**.: _Let \(K_{\rm m}\in\mathbb{N}_{>0}\)._
1. _Assume_ \(\mathbf{H}1\) _and_ \(\mathbf{H}2(h,K_{\rm m})\) _or_ \(\mathbf{H}3\)_. Then the NUTS transition kernel_ \(\mathrm{K}_{h}^{\mathsf{U}}\) _satisfies hypothesis_ \(\mathbf{H}5(h,K_{\rm m})\) _for any_ \(h>0\)_._
2. _Assume_ \(\mathbf{H}4\)_. Then, there exists a countable set_ \(\mathsf{H}_{0}\subset\mathbb{R}_{\geq 0}\) _such that for any_ \(h\in\mathbb{R}_{>0}\setminus\mathsf{H}_{0}\) _the NUTS transition kernel_ \(\mathrm{K}_{h}^{\mathsf{U}}\) _satisfies_ \(\mathbf{H}5(h,K_{\rm m})\)_._
The main technical challenge in proving accessibility for the NUTS transition kernel arises from the dependence of index selection probabilities on the entire trajectory, which in turn relies on the global geometry of the potential energy function \(U\). However, the following result overcomes this challenge and establishes accessibility from every point in either one or two steps.
**Theorem 11**.: _Assume \(\mathbf{H}1\) and \(\mathbf{H}5(h,K_{\rm m})\)-(i), for \(h>0\) and \(K_{\rm m}\in\mathbb{N}_{>0}\). For the NUTS transition kernel \(\mathrm{K}_{h}^{\mathsf{U}}\), every open set \(\mathsf{E}\subset\mathbb{R}^{d}\) is accessible for all \(q\) in \(\mathbb{R}^{d}\). Moreover, for every open set \(\mathsf{E}\subset\mathbb{R}^{d}\) and for any \(q_{0}\in\mathbb{R}^{d}\), there exist \(\mathsf{W}(q_{0})\) a neighborhood of \(q_{0}\), \(m_{\mathsf{W}}(q_{0})>0\) and \(j(q_{0})\in\{1,2\}\) such that for any \(q\in\mathsf{W}(q_{0})\):_
\[(\mathrm{K}_{h}^{\mathsf{U}})^{j(q_{0})}(q,\mathsf{E})\geq m_{\mathsf{W}}(q_{ 0})>0\;.\]
We also need to show that the transition kernel admits small sets.
**Theorem 12**.: _Assume \(\mathbf{H}1\) and \(\mathbf{H}5(h,K_{\rm m})\)-(ii), for \(h>0\) and \(K_{\rm m}\in\mathbb{N}_{>0}\). For every \(q\in\mathbb{R}^{d}\) there exists an \(r>0\) for which \(\mathrm{B}(q,r)\) is \(1\)-small for the NUTS transition kernel \(\mathrm{K}_{h}^{\mathsf{U}}\)._
Finally, as a byproduct of the proofs of Theorem 11 and Theorem 12:
**Theorem 13**.: _Assume \(\mathbf{H}1\) and \(\mathbf{H}5(h,K_{\rm m})\), for \(h>0\) and \(K_{\rm m}\in\mathbb{N}_{>0}\). All compact sets are 3-small for the NUTS transition kernel \(\mathrm{K}_{h}^{\mathsf{U}}\). Consequently, the NUTS transition kernel \(\mathrm{K}_{h}^{\mathsf{U}}\) is aperiodic._
## 5 Geometric ergodicity
In this section, we give conditions on the potential \(U\) which imply that the NUTS kernel converges geometrically to its invariant distribution. Let \(\mathcal{V}:\mathbb{R}^{d}\to[1,+\infty)\) be a measurable function and \(\mathrm{K}\) be a Markov kernel on \((\mathbb{R}^{d},\mathcal{B}(\mathbb{R}^{d}))\). Recall that the definition of \(\mathcal{V}\)-uniform geometric ergodicity is given in (8). By [26, Theorem 16.0.1], if \(\mathrm{K}\) is aperiodic, irreducible and satisfies a Foster-Lyapunov drift condition, i.e., there exist a small set \(\mathsf{C}\in\mathcal{B}(\mathbb{R}^{d})\) for \(\mathrm{K}\), \(\lambda\in[0,1)\) and \(b<+\infty\) such that
\[\mathrm{K}\mathcal{V}\leq\lambda\,\mathcal{V}+b\mathbb{1}_{\,\mathsf{C}}\;, \tag{27}\]
then \(\mathrm{K}\) is \(\mathcal{V}\)-uniformly geometrically ergodic. If a function \(\mathcal{V}:\mathbb{R}^{d}\to[1,+\infty)\) satisfies (27), then \(\mathcal{V}\) is said to be a Foster-Lyapunov function for \(\mathrm{K}\).
Define for \(a>0\) and \(q\in\mathbb{R}^{d}\), the function
\[\mathcal{V}_{a}(q)=\exp(a|q|)\;.\]
In what follows we show that, for any \(a>0\), \(\mathcal{V}_{a}\) is a Foster-Lyapunov function for the NUTS kernel under the same assumptions on the potential \(U\) considered for HMC in [16]. Let \(m\in(1,2]\).
**H6**: \((m)\)_._
1. _There exists_ \(\mathsf{M}_{1}\geq 0\) _such that for any_ \(q\in\mathbb{R}^{d}\)_,_ \[|\nabla U(q)|\leq\mathsf{M}_{1}(1+|q|^{m-1})\;.\]
2. _There exist_ \(A_{1}>0\) _and_ \(A_{2}\in\mathbb{R}\) _such that for any_ \(q\in\mathbb{R}^{d}\)_,_ \[\big{(}\nabla U(q)\big{)}^{\top}q\geq A_{1}|q|^{m}-\mathsf{A}_{2}\;.\]
3. \(U\in\mathrm{C}^{3}(\mathbb{R}^{d})\) _and there exists_ \(A_{3}>0\) _such that for any_ \(q\in\mathbb{R}^{d}\) _and k=2,3:_ \[|\operatorname{d}^{k}U(q)|\leq A_{3}(1+|q|)^{m-k}\;.\]
4. _There exist_ \(A_{4}>0\) _and_ \(R_{U}\in\mathbb{R}_{\geq 0}\) _such that for any_ \(q\in\mathbb{R}^{d}\)_,_\(|q|\geq R_{U}\)_,_ \[\left(\nabla U(q)\right)^{\top}\operatorname{d}^{2}U(q)\nabla U(q)\geq A_{4}|q |^{3m-4}\;.\]
We remark that **H**6(m), originating from [16], concerns the geometry of the tail of the target distribution \(\pi\). Conditions **H**6(\(m\))-(ii) and **H**6(\(m\))-(i) induce a restoring force in the tails of \(\pi\) and will imply the stability of the proposal kernel. This will be more transparent after Lemma 5 below. Conditions **H**6(\(m\))-(iv) and **H**6(\(m\))-(iii) are both strengthenings of **H**6(\(m\))-(ii) and **H**6(\(m\))-(i), respectively, and are needed in order to guarantee that proposals which move away from the center are rejected with probability approaching one in the tails of \(\pi\). These last conditions are pretty mild: a smooth perturbation of a Gaussian target satisfies **H**6(\(m\)). More generally, they are satisfied by \(m\)-homogeneous quasi-convex functions and by perturbations of such functions (see [16, Proposition 6]). Recall that a function \(U_{0}\) is \(m\)-homogeneous quasi-convex outside a ball of radius \(R_{1}\) if the following conditions are satisfied:
* For all \(t\geq 1\) and \(q\in\mathbb{R}^{d},|q|\geq R_{1},U_{0}(tq)=t^{m}U_{0}(q)\).
* For all \(q\in\mathbb{R}^{d},|q|\geq R_{1}\), the level sets \(\{x:U_{0}(x)\leq U_{0}(q)\}\) are convex.
In the case \(m=2\) we propose the following milder alternative to **H**6(2)-(iii),(iv):
**H7**.: _There exists a twice continuously differentiable \(\tilde{U}:\mathbb{R}^{d}\to\mathbb{R}\) and a positive definite matrix \(\mathbf{\Sigma}\) such that \(U(q)=q^{\top}\mathbf{\Sigma}q/2+\tilde{U}(q)\), and there exist \(A_{5}\geq 0\) and \(\varrho\in[1,2)\) such that for any \(q,x\in\mathbb{R}^{d}\)_
\[|\tilde{U}(q)|\leq A_{5}\left(1+|q|^{\varrho}\right),\quad|\nabla \tilde{U}(q)|\leq A_{5}\left(1+|q|^{\varrho-1}\right)\;,\] \[|\nabla\tilde{U}(q)-\nabla\tilde{U}(x)|\leq A_{5}|q-x|\;.\]
**Remark 14**.: _It is straightforward to check that under **H7**, the conditions **H1** and **H6**(2)-(i),(ii) hold._
The following Lemma gives the main ingredients to establish the drift condition on the kernel \(\mathrm{K}^{\mathsf{U}}_{h}\).
**Lemma 4**.: _Assume either **H1** and **H6**(m)-(i) for some \(m\in(1,2]\), or **H7**. Let \(\gamma\in((m-1)/2,m-1)\) and denote \(\mathrm{B}(q_{0})=\{p\in\mathbb{R}^{d}:|p|\leq|q_{0}|^{\gamma}\}\) for any \(q_{0}\in\mathbb{R}^{d}\). Let \(h>0\) and suppose that there exists \(R_{0}>0\) such that for any \(q_{0}\in\mathbb{R}^{d}\) with \(|q_{0}|\geq R_{0}\) and \(p_{0}\in\mathrm{B}(q_{0})\) we have for any \(j\in[-2^{K_{\mathrm{m}}},2^{K_{\mathrm{m}}}]\setminus\{0\}\)_
\[|\operatorname{proj}_{1}\Phi^{\circ(j)}_{h}(q_{0},p_{0})|-|q_{0}|\leq-1 \tag{28}\]
_and for \(j\in\{-1,1\}\)_
\[H(\Phi^{\circ(j)}_{h}(q_{0},p_{0}))-H(q_{0},p_{0})\leq 0\;. \tag{29}\]
_Then, there exist \(\lambda\in(0,1)\) and \(b,R^{\prime}>0\) such that_
\[\mathrm{K}^{\mathsf{U}}_{h}\mathcal{V}_{a}\leq\lambda\mathcal{V}_{a}+b\mathbb{1 }_{\bar{B}(0,R^{\prime})}\;.\]
Proof.: The proof is postponed to Appendix E.1.
Based on the previous lemma, we shall analyze the dynamics when the norm of the position \(|q_{0}|\) is large enough and when the norm of the momentum \(|p_{0}|\) is smaller than \(|q_{0}|^{\gamma}\) with \(\gamma\in((m-1)/2,m-1)\). In that case, we aim to establish that the positions on the orbit \(\mathcal{O}_{[-2^{K_{\mathrm{m}}}+1:2^{K_{\mathrm{m}}}-1]\setminus\{0\}}(q_{0},p_{0})=\{\Phi^{\circ(j)}_{h}(q_{0},p_{0}):j\in[-2^{K_{\mathrm{m}}}+1:2^{K_{\mathrm{m}}}-1]\setminus\{0\}\}\) lie in the ball \(\mathrm{B}(0,|q_{0}|-1)\), and one of these points is always accepted by the index selection rule due to (29). This is done in Lemma 5 and Proposition 15 below, respectively.
**Proposition 15**.: _Assume either **H1** and **H6**(m) for some \(m\in(1,2]\), or **H7**. Let \(\gamma\in(0,m-1)\)._
* _If_ \(m\in(1,2)\) _and_ \(h>0\)_, there exists_ \(R_{H}>0\) _such that for any_ \((q_{0},p_{0})\in(\mathbb{R}^{d})^{2}\) _with_ \(|q_{0}|\geq R_{H}\) _and_ \(|p_{0}|\leq|q_{0}|^{\gamma}\) _we have_ \(H(\Phi^{\circ(j)}_{h}(q_{0},p_{0}))-H(q_{0},p_{0})\leq 0\) _for_ \(j\in\{-1,1\}\)_._
* _If_ \(m=2\)_, there exists_ \(\bar{S}>0\) _such that for any_ \(h\in(0,\bar{S}]\)_, there exists_ \(R_{H}>0\) _such that for any_ \((q_{0},p_{0})\in(\mathbb{R}^{d})^{2}\) _with_ \(|q_{0}|\geq R_{H}\) _and_ \(|p_{0}|\leq|q_{0}|^{\gamma}\) _we have_ \(H(\Phi^{\circ(j)}_{h}(q_{0},p_{0}))-H(q_{0},p_{0})\leq 0\) _for_ \(j\in\{-1,1\}\)_._
Proof.: The proof is postponed to Appendix E.2.
**Lemma 5**.: _Assume either **H1**, **H6**(m)-(i),(ii) for some \(m\in(1,2]\) or **H7** and let \(T\in\mathbb{N}^{*}\)._
* _If_ \(m<2\)_, let_ \(\gamma\in\big{(}\max\big{(}2(m-1)-1,(m-1)/2\big{)},m-1\big{)}\)_. Then, for any_ \(h>0\)_, there exists_ \(R_{0}>0\) _such that for any_ \((q_{0},p_{0})\in(\mathbb{R}^{d})^{2}\) _with_ \(|q_{0}|\geq R_{0}\) _and_ \(|p_{0}|\leq|q_{0}|^{\gamma}\)_, for any_ \(j\in[-T:T]\) _with_ \(j\neq 0\) _we have_ \[|\operatorname{proj}_{1}\Phi^{\circ(j)}_{h}(q_{0},p_{0})|-|q_{0}|\leq-1\;.\]
2. _If_ \(m=2\)_, let_ \(\gamma=2/3\)_. Denote_ \[\mathcal{V}_{2}(s)=\mathsf{M}_{1}/\mathsf{L}_{1}^{\frac{1}{2}}+\mathsf{M}_{1}s/2+\mathsf{L}_{1}^{\frac{1}{2}}\mathsf{M}_{1}s^{2}/4\;,\] _where_ \(\mathsf{M}_{1}\) _is well defined even under_ \(\mathbf{H}7\) _by Remark 14. Let_ \(\bar{S}>0\) _be such that_ \(\Theta(s)<A_{1}\) _for any_ \(s\in(0,\bar{S}]\)_, with_ \[\Theta(s)=2\mathsf{L}_{1}^{\frac{1}{2}}\mathcal{V}_{2}(s)\big{(}\exp\big{(}\mathsf{L}_{1}^{\frac{1}{2}}s\mathcal{V}_{1}(\mathsf{L}_{1}^{\frac{1}{2}}s)\big{)}-1\big{)}+6s^{2}\big{[}\mathsf{M}_{1}^{2}+\mathsf{L}_{1}\mathcal{V}_{2}^{2}(s)\big{(}\exp\big{(}\mathsf{L}_{1}^{\frac{1}{2}}s\mathcal{V}_{1}(\mathsf{L}_{1}^{\frac{1}{2}}s)\big{)}-1\big{)}^{2}\big{]}\;,\] _where_ \(\mathcal{V}_{1}\) _is defined in_ \(\mathbf{H}2(h,K_{\mathrm{m}})\)_. Then for any_ \(h\in(0,\bar{S}/T]\)_, there exists_ \(R_{0}>0\) _such that for any_ \((q_{0},p_{0})\in(\mathbb{R}^{d})^{2}\) _with_ \(|q_{0}|\geq R_{0}\) _and_ \(|p_{0}|\leq|q_{0}|^{\gamma}\)_, for any_ \(j\in[-T:T]\) _with_ \(j\neq 0\) _we have_
\[|\operatorname{proj}_{1}\Phi_{h}^{\circ(j)}(q_{0},p_{0})|-|q_{0}|\leq-1\;.\]
Proof.: The proof is postponed to Appendix E.3.
The geometric ergodicity of the NUTS sampler follows.
**Theorem 16**.: _Assume \(\textbf{H}5(h,K_{\mathrm{m}})\), for \(h>0\) and \(K_{\mathrm{m}}\in\mathbb{N}_{>0}\). Assume either **H**1, **H**6(\(m\)) for some \(m\in(1,2]\), or **H**7._
1. _Case_ \(m<2\)_: for_ \(a>0\)_, the No U-turn sampler kernel_ \(\mathrm{K}_{h}^{\mathsf{U}}\) _is_ \(\mathcal{V}_{a}\)_-uniformly geometrically ergodic._
2. _Case_ \(m=2\)_: there exists_ \(\bar{S}>0\) _such that for any_ \(a>0\) _and_ \(h>0\) _such that_ \(h2^{K_{\mathrm{m}}}\leq\bar{S}\) _and_ \(\textbf{H}2(h,K_{\mathrm{m}})\)_, the No U-turn sampler kernel_ \(\mathrm{K}_{h}^{\mathsf{U}}\) _is_ \(\mathcal{V}_{a}\)_-uniformly geometrically ergodic._
Proof.: The proof is postponed to Appendix E.4.
We remark that only the condition \(\textbf{H}6(m)\)-(i) is imposed in Lemma 4, compared to Lemma 5 where \(\textbf{H}6(m)\)-(ii) is also needed. Conditions \(\textbf{H}6(m)\)-(iv), (iii) are used for Proposition 15. Regarding the conditions on the potential, the bottleneck of the proof is Proposition 15, which relies on [16, Proposition 7] and the symmetry of the Hamiltonian in the momentum variable, i.e., \(H(\cdot,p)=H(\cdot,-p)\) for any \(p\in\mathbb{R}^{d}\). The most restrictive assumption on the stepsize appears in Lemma 5 for the case \(m=2\). Compared with the geometric ergodicity of the HMC sampler in the case \(m=2\) [16, Theorem 9], instead of having \(hT\leq\bar{S}\) where \(T\) is the number of leapfrog steps, we have \(h2^{K_{\mathrm{m}}}\leq\bar{S}\) where \(2^{K_{\mathrm{m}}}\) is the maximum number of leapfrog steps for the NUTS sampler.
## 6 General properties on Hamiltonian Monte Carlo
In this section, we extend and improve some results presented in [16]. Our aim is to establish the convergence of the HMC kernel under milder conditions on the stepsize.
Let \(T\in\mathbb{N}^{*}\) be the number of leapfrog steps and \(h>0\) be the stepsize. The HMC kernel is defined, for any \(q_{0}\in\mathbb{R}^{d}\), \(\mathsf{A}\in\mathcal{B}(\mathbb{R}^{d})\), by
\[\mathrm{K}_{h,T}^{\mathsf{H}}(q_{0},\mathsf{A})=\int\rho(p_{0})\left[\alpha_{h,T}(q_{0},p_{0})\delta_{\operatorname{proj}_{1}\Phi_{h}^{\circ(T)}(q_{0},p_{0})}(\mathsf{A})+(1-\alpha_{h,T}(q_{0},p_{0}))\delta_{q_{0}}(\mathsf{A})\right]\mathrm{d}p_{0}\;, \tag{30}\]
where for any \(q_{0},p_{0}\in(\mathbb{R}^{d})^{2}\), the acceptance ratio is
\[\alpha_{h,T}(q_{0},p_{0})=1\wedge\exp\left\{H(q_{0},p_{0})-H(\Phi_{h}^{\circ(T )}(q_{0},p_{0}))\right\}\;.\]
We consider in this section the following assumption on the potential \(U\).
**H8**.: _There exist \(\tilde{U}:\mathbb{R}^{d}\to\mathbb{R}\), twice continuously differentiable and a real positive definite matrix \(\mathbf{\Sigma}\) such that \(U(q)=q^{\top}\mathbf{\Sigma}q/2+\tilde{U}(q)\). In addition, there exist \(A_{5}\geq 0\) and \(\varrho\in[1,2)\) such that for any \(q,x\in\mathbb{R}^{d}\)_
\[|\tilde{U}(q)|\leq A_{5}\left(1+|q|^{\varrho}\right),\quad|\nabla \tilde{U}(q)|\leq A_{5}\left(1+|q|^{\varrho-1}\right)\;,\] \[|\nabla\tilde{U}(q)-\nabla\tilde{U}(x)|\leq A_{5}|q-x|\;.\]
The main result of this section is the following.
**Theorem 17**.: _Assume **H1** and let \(h>0\) and \(T\in\mathbb{N}^{*}\). Suppose in addition **H8** or_
\[\mathsf{L}_{1}h^{2}<2(1-\cos(\pi/T))\;. \tag{31}\]
_Then, there exists a countable set \(\mathsf{H}_{0}\subset\mathbb{R}_{\geq 0}\), defined in Lemma 7 under **H8** and equal to \(\emptyset\) otherwise, such that if \(h\not\in\mathsf{H}_{0}\) we have:_
1. _the HMC kernel_ \(\mathrm{K}^{\mathsf{H}}_{h,T}\) _is irreducible, aperiodic, the Lebesgue measure is an irreducibility measure and any compact set of_ \(\mathbb{R}^{d}\) _is 1-small._
2. \(\mathrm{K}^{\mathsf{H}}_{h,T}\) _is positive recurrent with invariant probability_ \(\pi\) _and for_ \(\pi\)_-almost every_ \(q\in\mathbb{R}^{d}\)_,_ \[\lim_{n\to+\infty}\|\delta_{q}(\mathrm{K}^{\mathsf{H}}_{h,T})^{n}-\pi\|_{\mathrm{ TV}}=0\;.\]
Proof.: The proof is postponed to Appendix F.3.
Let us compare our result with [16, Theorem 1].
First, in the case \(\limsup_{|x|\to+\infty}[|\nabla U(x)|/|x|^{2}]\neq 0\), [16, Theorem 1] only shows that HMC is ergodic if \(h\) and \(T\) satisfy
\[[(1+h\mathsf{L}_{1}^{\frac{1}{2}}\mathcal{V}(h\mathsf{L}_{1}^{\frac{1}{2}}))^{T}-1]<1\;, \tag{32}\]
where \(\mathcal{V}(s)=1+s/2+s^{2}/4\) for \(s\in\mathbb{R}_{\geq 0}\). We show in Appendix D.4 that this condition is strictly stronger than (31). Finally, under **H8**, we obtain ergodicity for HMC for any given number of leapfrog steps \(T\) and for Leb-almost every choice of stepsize \(h\) and in particular, for Leb-almost every choice of integration time \(hT\) (for a fixed \(T\)). This result is in accordance with the ergodicity properties of the ideal HMC algorithm (i.e., the exact Hamiltonian dynamics (1) for a fixed integration time \(\mathrm{t}_{\mathrm{int}}>0\) instead of the leapfrog scheme \(\Phi^{(T)}_{h}\) in (30)) in the case where \(\pi\) is a Gaussian distribution. Indeed, in that case, explicit expression of the Hamiltonian dynamics [8, Proposition 3.1] shows that there exists a countable set \(\tilde{\mathsf{H}}^{0}\) included in \(\mathbb{R}_{\geq 0}\) such that if the integration time \(\mathrm{t}_{\mathrm{int}}\in\tilde{\mathsf{H}}^{0}\), the resulting ideal HMC algorithm is periodic, whereas if \(\mathrm{t}_{\mathrm{int}}\not\in\tilde{\mathsf{H}}^{0}\), the algorithm is ergodic (and even geometrically ergodic).
To show Theorem 17, we extend part of the results obtained in [16]. First, the proof of the ergodicity of HMC in [16] uses that the map \(p_{0}\mapsto\mathrm{proj}_{1}\,\Phi^{\circ(T)}_{h}(q_{0},p_{0})\) is a bi-Lipschitz homeomorphism for any \(q_{0}\in\mathbb{R}^{d}\), by assuming **H1** and (32). We show that in fact this is still true under (31).
**Theorem 18**.: _Assume **H1** and let \(h>0\) and \(T\in\mathbb{N}^{*}\) satisfying (31). Let \(q_{0}\in\mathbb{R}^{d}\). Then, for any \(\tilde{q}\), there exists a unique pair \((p,\tilde{p})\in(\mathbb{R}^{d})^{2}\) such that_
\[\Phi^{\circ(T)}_{h}\big{(}q_{0},p\big{)}=(\tilde{q},\tilde{p})\;.\]
_Therefore, \(\psi_{q}:p\in\mathbb{R}^{d}\to\mathrm{proj}_{1}\,\Phi^{\circ(T)}_{h}(q,p)\) is a one-to-one continuous map. Finally, its inverse \(\psi^{\leftarrow}_{q}\) is Lipschitz._
Proof.: The proof is postponed to Appendix F.1.
**Remark 19**.: _For \(T=1\) the corresponding statement trivially holds without the condition (31)._
The condition (31) is sharp in the sense of the following counterexample. Consider the standard Gaussian target \(U(q)=|q|^{2}/2\) which satisfies \(\mathbf{H}1\) with \(\mathrm{L}_{1}=1\). Given a number of leapfrog steps \(T\) and choosing the stepsize \(h^{2}=2(1-\cos(\pi/T))\), explicit calculations show that for any \((q_{0},p_{0})\in(\mathbb{R}^{d})^{2}\), it holds that \(\Phi_{h}^{\circ(T)}(q_{0},p_{0})=(-q_{0},-p_{0})\). Therefore, the conclusion of Theorem 18 cannot hold in this situation.
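This identity is easy to confirm numerically; the sketch below (ours) iterates the leapfrog map \(T\) times on a standard Gaussian target with the critical step size \(h^{2}=2(1-\cos(\pi/T))\) and checks that the result is \((-q_{0},-p_{0})\).

```python
import numpy as np

def leapfrog(q, p, h):
    # One leapfrog step for U(q) = |q|^2 / 2, so grad U(q) = q
    p = p - 0.5 * h * q
    q = q + h * p
    p = p - 0.5 * h * q
    return q, p

rng = np.random.default_rng(0)
d = 5
for T in (2, 3, 5, 8, 13):
    h = np.sqrt(2.0 * (1.0 - np.cos(np.pi / T)))   # boundary case of condition (31), L_1 = 1
    q0, p0 = rng.normal(size=d), rng.normal(size=d)
    q, p = q0, p0
    for _ in range(T):
        q, p = leapfrog(q, p, h)
    assert np.allclose(q, -q0) and np.allclose(p, -p0)
print("Phi_h^{(T)}(q0, p0) = (-q0, -p0) at the critical step size, as claimed")
```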
However, for a given number of leapfrog steps \(T\), still in the Gaussian case, we can show that \(\tilde{q}\mapsto\bar{\Psi}_{h}^{(T)}(q,\tilde{q})\) is a \(\mathrm{C}^{1}\)-diffeomorphism provided \(h\) does not belong to a countable subset \(\mathsf{H}_{0}\subset\mathbb{R}\), as illustrated by the following result. Indeed, when \(\pi\) is Gaussian, the leapfrog iterates can be expressed explicitly, polynomially in \(h\) and linearly in \((q_{0},p_{0})\in(\mathbb{R}^{d})^{2}\), so their analysis can be simplified. Finally, note that we state here further properties of the leapfrog iterates that are used to prove the convergence of the NUTS kernel when \(\pi\) is Gaussian.
**Lemma 7**.: _If there exists a real positive definite matrix \(\Sigma\) such that for any \(q\in\mathbb{R}^{d}\)\(U(q)=q^{\top}\Sigma q/2\), then there exists a countable set \(\mathsf{H}_{0}\subset\mathbb{R}_{\geq 0}\) such that for any \(h\in\mathbb{R}_{>0}\setminus\mathsf{H}_{0}\), \((T,T_{1},T_{2})\in\mathbb{Z}^{3}\) with \(T_{2}\neq T_{1},\,T\neq 0\) and \(q_{0}\in\mathbb{R}^{d}\), the functions \(\psi_{q_{0}}^{(T)},\nabla F_{q_{0}}^{(T_{1},T_{2})}\) are linear one-to-one maps, where_
\[F_{q_{0}}^{(T_{1},T_{2})}:\,p_{0}\in\mathbb{R}^{d}\mapsto p_{T_{1}}^{\top}(q_{T_{2}}-q_{T_{1}}),\quad\psi_{q_{0}}^{(T)}:\,p_{0}\in\mathbb{R}^{d}\mapsto q_{T}\;,\]
_denoting for \(i\in\mathbb{Z}\)\(q_{i}=\operatorname{proj}_{1}\Phi_{h}^{\circ(i)}(q_{0},p_{0})\) and \(p_{i}=\operatorname{proj}_{2}\Phi_{h}^{\circ(i)}(q_{0},p_{0})\)._
Proof.: The proof is postponed to Appendix F.2.
Based on Lemma 7, the proof of Theorem 17 under \(\mathbf{H}8\) then follows from a homotopy argument.
|
2303.11593
|
Difficulty in chirality recognition for Transformer architectures
learning chemical structures from string
|
Recent years have seen rapid development of descriptor generation based on
representation learning of extremely diverse molecules, especially those that
apply natural language processing (NLP) models to SMILES, a literal
representation of molecular structure. However, little research has been done
on how these models understand chemical structure. To address this black box,
we investigated the relationship between the learning progress of SMILES and
chemical structure using a representative NLP model, the Transformer. We show
that while the Transformer learns partial structures of molecules quickly, it
requires extended training to understand overall structures. Consistently, the
accuracy of molecular property predictions using descriptors generated from
models at different learning steps was similar from the beginning to the end of
training. Furthermore, we found that the Transformer requires particularly long
training to learn chirality and sometimes stagnates with low performance due to
misunderstanding of enantiomers. These findings are expected to deepen the
understanding of NLP models in chemistry.
|
Yasuhiro Yoshikai, Tadahaya Mizuno, Shumpei Nemoto, Hiroyuki Kusuhara
|
2023-03-21T04:47:45Z
|
http://arxiv.org/abs/2303.11593v4
|
# Difficulty in learning chirality
###### Abstract
Recent years have seen development of descriptor generation based on representation learning of extremely diverse molecules, especially those that apply natural language processing (NLP) models to SMILES, a literal representation of molecular structure. However, little research has been done on how these models understand chemical structure. To address this, we investigated the relationship between the learning progress of SMILES and chemical structure using a representative NLP model, the Transformer. The results suggest that while the Transformer learns partial structures of molecules quickly, it requires extended training to understand overall structures. Consistently, the accuracy of molecular property predictions using descriptors generated from models at different learning steps was similar from the beginning to the end of training. Furthermore, we found that the Transformer requires particularly long training to learn chirality and sometimes stagnates with low translation accuracy due to misunderstanding of enantiomers. These findings are expected to deepen understanding of NLP models in chemistry.
1 Laboratory of Molecular Pharmacokinetics, Graduate School of Pharmaceutical Sciences, The University of Tokyo, 7-3-1 Hongo, Bunkyo, Tokyo, Japan
2 Laboratory of Molecular Pharmacokinetics, Graduate School of Pharmaceutical Sciences, The University of Tokyo, 7-3-1 Hongo, Bunkyo, Tokyo, Japan
3Author to whom correspondence should be addressed.
\({}^{\dagger}\) These authors contributed equally.
## 1 Introduction
Recent advancements in machine learning have influenced various tasks in chemistry such as molecular property prediction, energy calculation, and structure generation[1; 2; 3; 4; 5; 6]. To utilize machine learning methods in chemistry, we first need to make computers recognize chemical structures. One of the most popular approaches is to use chemical language models, which are natural language processing (NLP) models fed with strings representing chemical structures such as simplified molecular input line entry specification (SMILES)[7]. In 2016, Gomez-Bombarelli et al. applied a chemical language model using a neural network for descriptor generation and created a trend [8; 9; 10]. In this approach, a neural NLP model such as a recurrent neural network (RNN) learns an extremely wide variety of SMILES from public databases [11; 12; 13], converts the string into a low-dimensional vector, decodes it back to the original SMILES, and then the intermediate vector is drawn out as a descriptor. The obtained descriptor is superior to the conventional fingerprints, such as MACCS keys [14] and ECFP [15], in continuous and thus highly expressive natures, and the original structures can be restored from the descriptor using the decoder [16].
On the other hand, the presented approach has the disadvantage that it obscures the process of descriptor generation and that the meaning of each value in the descriptor is hard to interpret. It has scarcely been studied how chemical language models understand the structures of extremely diverse molecules and connect chemical structures to descriptors. To address this black box, we attempted to clarify what kinds of features of molecules are easily incorporated into the descriptor and what kinds are not, by comparing the performance of the model and its descriptor at various steps of training, focusing on the most prominent NLP model, the Transformer, which is widely utilized for descriptor generation and other chemical language tasks these days [17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33]. This knowledge is, we suppose, crucial for optimizing the
model architecture and proper setting of training tasks and data in chemical language models. To be specific, we trained a Transformer to translate SMILES strings and then compared perfect agreement and similarity of molecular fingerprints between prediction and target at different training steps. We also conducted 6 molecular property prediction tasks with descriptors generated by models at different steps and studied what kinds of tasks are easily solved. We further found that the translation accuracy of the Transformer sometimes stagnates at a low level for a while and then suddenly surges. To clarify the cause of this, we compared translation accuracy for each character of SMILES. Finally, we searched for methods to prevent stagnation and stabilize learning.
## 2 Related Work
Chemical language models can be broadly classified into 3 categories based on their applications: structure generation (e.g., de novo drug design and retrosynthesis), end-to-end property prediction, and descriptor generation [17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32]. The difference between the former 2 and descriptor generation from a machine learning perspective is the need for prior information. The former ones are supervised methods, whereas descriptor generation is unsupervised in general. In this regard, chemical language models for descriptor generation, which is the main topic of this study, are models that purely learn chemical structures.
Research on chemical language models with neural networks started with the study of Gomez-Bombarelli et al. [8], which applied a variational autoencoder (VAE) [34] structure to SMILES strings. They represented the distribution of SMILES strings of diverse molecules with a VAE, using GRUs as the encoder and decoder. Winter et al. [10] generated molecular descriptors with a GRU model trained on a translation task between 2 different string representations of molecules. While the studies by Gomez-Bombarelli et al. and Winter et al. adopted RNN models for chemical language models, the Transformer [33] has recently become the most prominent model in NLP. The key feature of the Transformer is its attention structure, in which the model processes each word while attending globally to all words in the sentence (self-attention) or to other information (cross-attention), and various derivative methods such as bidirectional encoder representations from transformers (BERT) have been devised based on this structure [35; 36; 37; 38]. Honda et al. [39] trained a Transformer to translate between 2 different types of SMILES and conducted property prediction with descriptors pooled from the intermediate memory of the model. Fabian et al. [40], based on BERT, pretrained the model not only with the NLP tasks adopted in the original BERT model but also with domain-specific tasks such as fixed molecular descriptor prediction, and reported that these additional training tasks improved model performance on downstream cheminformatics tasks. Irwin et al. [17] pretrained a Transformer by predicting masked SMILES and translating between different types of SMILES, and fine-tuned the model to predict molecular properties. Of course, the Transformer architecture is also widely utilized for other objectives of chemical language models, such as molecular generation [18; 19; 20; 21; 22; 23] and retrosynthesis prediction [24; 25; 26; 27].
Several studies have modified the Transformer structure to make it more suitable for recognizing chemical structures. One potent strategy is to combine the Transformer, or its key component, the self-attention mechanism, with 2D graph-based architectures, because the 2D graph representation makes chemical structures highly visible [28; 29; 30; 32; 41; 42; 43]. However, these studies focused on recognizing chemical structures in end-to-end tasks such as molecular property prediction, which depend on the task. To the best of our knowledge, no studies have focused on how chemical language models for descriptor generation, which purely learn a wide variety of chemical structures, recognize chemical structures.
## 3 Methods
### 3.1 Training a Transformer
#### 3.1.1 Dataset
To pretrain a Transformer model, molecules were sampled from the ZINC-15 dataset [11], which contains approximately 1 billion molecules. Instead of random sampling generally employed, we used stratified sampling in terms of SMILES length to reduce the bias of molecular weights in the training set. Briefly, we classified all molecules in ZINC-15 according to lengths of SMILES strings, sampled 460,000 molecules from each length, and prepared a dataset containing 30 million molecules. Regarding lengths that contain less than 460,000 molecules, all molecules were sampled. Note that although we believe that our sampling strategy enables fair training of molecular structures with regard to molecular weights, random sampling is employed in general. Thus, we conducted the key experiments with the dataset
prepared by random sampling to confirm generality and confirmed similar results. The details about the experiments are written in **Supplementary Note 1**.
From the sampled molecules, we omitted those with atoms other than H, B, C, N, O, F, P, S, Cl, Br, or I and molecules that have more than 50 or fewer than 3 heavy atoms. The remaining molecules were then stripped of small fragments, such as salts, and canonical SMILES and randomized SMILES were generated for them. SMILES is a one-dimensional string representation of the molecular graph, and because any atom can be the starting point, multiple SMILES representations correspond to a single molecule. To make the representation unique, there is a rule for selecting the initial atom, and the SMILES representation identified based on this rule is called canonical SMILES, whereas the others are referred to here as random SMILES. Translation between randomized and canonical SMILES is used as a training task in [17], [39]. When generating randomized SMILES, we renumbered all atoms in a molecule and then generated the SMILES [17], [44]. These processes were all conducted using the RDKit library (ver. 2022.03.02) [45].
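As an illustration, this preprocessing can be sketched with RDKit as follows; this is our own minimal reconstruction, not the authors' code, and the exact filtering and sampling utilities they used are not specified in the paper.

```python
import random
from rdkit import Chem

ALLOWED = {"H", "B", "C", "N", "O", "F", "P", "S", "Cl", "Br", "I"}

def preprocess(smiles):
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return None
    if any(a.GetSymbol() not in ALLOWED for a in mol.GetAtoms()):
        return None
    if not 3 <= mol.GetNumHeavyAtoms() <= 50:
        return None
    # keep only the largest fragment (strip salts / small fragments)
    mol = max(Chem.GetMolFrags(mol, asMols=True), key=lambda m: m.GetNumHeavyAtoms())
    canonical = Chem.MolToSmiles(mol)                       # canonical SMILES
    perm = list(range(mol.GetNumAtoms()))
    random.shuffle(perm)                                    # renumber atoms, then write
    randomized = Chem.MolToSmiles(Chem.RenumberAtoms(mol, perm), canonical=False)
    return randomized, canonical

print(preprocess("CC(=O)Oc1ccccc1C(=O)O"))
```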
#### 3.1.2 Model architecture
We implemented models with the PyTorch[46] framework (ver. 1.8, except for the pre-LN structure in 3.5). The model dimension was 512, the dimension of the feed-forward layer was 2,048, and the number of layers in the encoder and decoder was 6. We used ReLU activation, and the dropout ratio was 0.1 in both the encoder and the decoder. These parameters and model architecture were determined according to the original Transformer in [33].
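A minimal PyTorch sketch of this configuration is shown below. The number of attention heads is not stated in the paper; `nhead=8`, the value of the original Transformer [33], is our assumption, and the token embedding, positional encoding, and output projection layers are omitted here.

```python
import torch.nn as nn

# Hyperparameters stated in the paper; nhead = 8 is an assumption of ours
model = nn.Transformer(
    d_model=512,
    nhead=8,
    num_encoder_layers=6,
    num_decoder_layers=6,
    dim_feedforward=2048,
    dropout=0.1,
    activation="relu",
)
```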
#### 3.1.3 Learning procedure
Randomized SMILES strings of molecules in the training dataset were tokenized and then fed into the encoder of the Transformer. Tokenization was conducted with the vocabulary shown in **Table 1**. Positional encoding was added to the embedded representation of the SMILES tokens. The input of the decoder was the canonical SMILES string of the same molecule, and the model was trained to predict the same canonical SMILES shifted by one token, with attention to subsequent tokens being masked. Hence, the model was forced to predict each token of the SMILES based on its preceding tokens (teacher forcing [47]). We calculated the cross-entropy loss for each token except the padding token, and the mean loss over all tokens was used as the loss function.
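The following sketch (ours) illustrates one teacher-forced loss computation consistent with this description; `embed`, `pos_enc`, and `out_proj` are assumed helper modules (token embedding, positional encoding, and projection to the vocabulary), and `PAD` is a hypothetical padding-token id.

```python
import torch
import torch.nn as nn

PAD = 0  # hypothetical id of the padding token in the vocabulary of Table 1

def teacher_forced_loss(model, embed, pos_enc, out_proj, src, tgt):
    """src, tgt: LongTensors of shape (seq_len, batch) holding token ids.
    The decoder sees tgt[:-1] and is trained to predict tgt[1:]."""
    tgt_in, tgt_out = tgt[:-1], tgt[1:]
    tgt_mask = model.generate_square_subsequent_mask(tgt_in.size(0))  # mask future tokens
    out = model(pos_enc(embed(src)), pos_enc(embed(tgt_in)),
                tgt_mask=tgt_mask,
                src_key_padding_mask=(src == PAD).T,
                tgt_key_padding_mask=(tgt_in == PAD).T)
    logits = out_proj(out)                                            # (seq, batch, vocab)
    return nn.functional.cross_entropy(logits.reshape(-1, logits.size(-1)),
                                       tgt_out.reshape(-1),
                                       ignore_index=PAD)              # exclude padding
```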
25,000 tokens were inputted per step, according to [33]. Due to resource restrictions, the batch size was set to 12,500 tokens, and the optimizer was stepped after every 2 batches. We introduced bucketing [10], [33]; that is, we classified the SMILES strings in the training data into several ranges of length and generated batches from SMILES in the same length range to reduce padding. The number of batches in total amounted to about 150,000 (for 75,000 steps). We used the Adam optimizer [48] with a warmup scheduler (warmup steps = 4,000) and continued learning for 80,000 steps (slightly longer than one epoch).
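The exact scheduler is not written out in the paper; the sketch below (ours) assumes the warmup schedule of the original Transformer [33] with 4,000 warmup steps. The Adam hyperparameters `betas=(0.9, 0.98)` and `eps=1e-9` are the values of [33], not values reported here, and the placeholder module only stands in for the real model.

```python
import torch

d_model, warmup = 512, 4000
model = torch.nn.Linear(8, 8)            # placeholder module standing in for the Transformer

def transformer_lr(step):
    # Warmup schedule of [33]: lr = d_model^-0.5 * min(step^-0.5, step * warmup^-1.5)
    step = max(step, 1)
    return d_model ** -0.5 * min(step ** -0.5, step * warmup ** -1.5)

optimizer = torch.optim.Adam(model.parameters(), lr=1.0, betas=(0.9, 0.98), eps=1e-9)
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=transformer_lr)

for step in range(1, 80001):
    # ... accumulate gradients over 2 batches of 12,500 tokens here, then:
    optimizer.step()
    scheduler.step()
    optimizer.zero_grad()
    if step in (1, 4000, 80000):
        print(step, scheduler.get_last_lr())
```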
#### 3.1.4 Metrics
We measured the translation performance of each model by 2 metrics: _perfect accuracy_ and _partial accuracy_ [49]. _Perfect accuracy_ means the ratio of molecules whose target SMILES strings were completely translated by the model, ignoring tokens after the end-of-string token (i.e., padding tokens). _Partial accuracy_ means the character-wise ratio of coincidence between the target and predicted strings.
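For clarity, a simple implementation of the two metrics on padded integer arrays might look as follows (our sketch; partial accuracy is computed here over all non-padding characters pooled across molecules, which is one natural reading of the definition).

```python
import numpy as np

def perfect_and_partial_accuracy(pred, tgt, pad_id=0):
    """pred, tgt: integer arrays of shape (n_molecules, max_len), padded with pad_id."""
    mask = tgt != pad_id                       # ignore positions after the end of string
    match = (pred == tgt) & mask
    perfect = (match.sum(axis=1) == mask.sum(axis=1)).mean()   # whole-string agreement
    partial = match.sum() / mask.sum()                          # character-wise agreement
    return float(perfect), float(partial)

tgt  = np.array([[5, 6, 7, 0], [5, 5, 6, 7]])   # toy target token ids, pad_id = 0
pred = np.array([[5, 6, 7, 0], [5, 6, 6, 7]])
print(perfect_and_partial_accuracy(pred, tgt))   # (0.5, 0.857...)
```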
To evaluate the model's recognition of partial structures of molecules, we calculated MACCS keys and ECFP for both target and predicted SMILES and computed the Tanimoto similarity of these fingerprints between the target and predicted molecules for the test set. The radius of ECFP was varied from 1 to 3. Because the Transformer does not always output valid SMILES, we omitted molecules for which the Transformer at any step predicted invalid SMILES when calculating the Tanimoto similarity. Note that valid SMILES here means SMILES encoding molecules that satisfy the octet rule.
We also calculated the dimension-wise agreement between the MACCS keys of the predicted and target molecules. For each dimension, we calculated, among all molecules whose MACCS key is 1, the percentage for which the model makes a valid prediction and the predicted molecule also has a MACCS key of 1. Likewise, we calculated, among all molecules whose MACCS key is 0, the percentage for which the model makes a valid prediction and the predicted molecule also has a MACCS key of 0.
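A minimal RDKit sketch of the fingerprint comparison is given below (ours); invalid predictions are returned as `None`, mirroring their exclusion in the paper.

```python
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem, MACCSkeys

def fingerprint_similarities(target_smiles, predicted_smiles, radius=2):
    """Tanimoto similarity between target and predicted molecules; None if prediction is invalid."""
    m_t = Chem.MolFromSmiles(target_smiles)
    m_p = Chem.MolFromSmiles(predicted_smiles)
    if m_p is None:                       # invalid prediction: excluded in the paper
        return None
    maccs = DataStructs.TanimotoSimilarity(MACCSkeys.GenMACCSKeys(m_t),
                                           MACCSkeys.GenMACCSKeys(m_p))
    ecfp = DataStructs.TanimotoSimilarity(
        AllChem.GetMorganFingerprintAsBitVect(m_t, radius),
        AllChem.GetMorganFingerprintAsBitVect(m_p, radius))
    return maccs, ecfp

print(fingerprint_similarities("c1ccccc1O", "c1ccccc1N"))
```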
### 3.2 Performance improvement during training
#### 3.2.1 Dataset

We obtained physical and biological property data of molecules from MoleculeNet [50] with the DeepChem module [51]. **Table 2** shows the datasets we used and their information. We applied the same filtering and preprocessing as in Section 3.1.1 to molecules in each dataset, although we did not remove too long or short SMILES in order not to overestimate prediction performance, and removed duplicated SMILES. We trained and evaluated the model with 3 train-validation-test splits provided by ChemBench [52].
#### 3.2.2 Molecular property prediction task
We tested the property prediction ability of models at the steps when perfect accuracy reached 0.2, 0.5, 0.7, 0.9, 0.95, and 0.98 and at steps 0, 4,000, and 80,000 (end of training). In the property prediction experiments, we used only the encoder of the Transformer. We input randomized SMILES strings into it, and the memory (= output of the encoder) was then pooled and used as the descriptor of molecules. To minimize the effect of the pooling procedure, we tested 4 pooling methods: 1) averaging all memory along the axis of SMILES length; 2) extracting the memory corresponding to the first token of the SMILES; 3) obtaining the average and maximum memory along the axis of SMILES length and concatenating them with the memories of the first and last tokens [39]; 4) concatenating the average, maximum, minimum, standard deviation, beginning, and end of the memory. Note that the dimensions of the pooled descriptors are not the same: 1) 512, 2) 512, 3) 2,048, and 4) 3,072.
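The 4 pooling variants might look as follows for a single molecule; the memory shape [sequence length, model dimension] and the exact tensor operations are illustrative assumptions.

```python
import torch

def pool_memory(memory: torch.Tensor, method: int) -> torch.Tensor:
    """memory: [seq_len, 512] encoder output (the "memory") for one SMILES string."""
    first, last = memory[0], memory[-1]
    if method == 1:   # 1) mean over the SMILES-length axis -> 512
        return memory.mean(dim=0)
    if method == 2:   # 2) memory of the first token -> 512
        return first
    if method == 3:   # 3) mean, max, first and last token -> 2,048
        return torch.cat([memory.mean(dim=0), memory.max(dim=0).values, first, last])
    if method == 4:   # 4) mean, max, min, std, first and last token -> 3,072
        return torch.cat([memory.mean(dim=0), memory.max(dim=0).values,
                          memory.min(dim=0).values, memory.std(dim=0), first, last])
    raise ValueError(f"unknown pooling method: {method}")

memory = torch.randn(40, 512)
print([tuple(pool_memory(memory, m).shape) for m in (1, 2, 3, 4)])  # 512, 512, 2048, 3072
```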
From these pooled descriptors, we predicted the target molecular properties with SVM, XGBoost, and MLP in our preliminary study and chose XGBoost, which showed the best performance. For each of the 3 splits in ChemBench, we searched hyperparameters by Bayesian optimization using Optuna [53]. As baseline descriptors, we calculated ECFP (R = 2, dimension = 2,048) and CDDD [10] (dimension = 512) for molecules in the MoleculeNet datasets and measured their prediction performance on the MoleculeNet tasks. We also generated random values from a uniform distribution in [0, 1) and used them as baseline descriptors (dimension = 2,048).
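Such a search could be set up as in the hedged sketch below; the search space, trial count, objective metric, and placeholder data are illustrative and are not reported by the authors.

```python
import numpy as np
import optuna
import xgboost as xgb
from sklearn.metrics import mean_squared_error

# Placeholder data standing in for pooled descriptors and a target property
rng = np.random.default_rng(0)
X_train, y_train = rng.random((200, 512)), rng.random(200)
X_valid, y_valid = rng.random((50, 512)), rng.random(50)

def objective(trial):
    params = {  # hypothetical search space
        "n_estimators": trial.suggest_int("n_estimators", 100, 1000),
        "max_depth": trial.suggest_int("max_depth", 3, 10),
        "learning_rate": trial.suggest_float("learning_rate", 1e-3, 0.3, log=True),
        "subsample": trial.suggest_float("subsample", 0.5, 1.0),
    }
    model = xgb.XGBRegressor(**params).fit(X_train, y_train)
    return mean_squared_error(y_valid, model.predict(X_valid))

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=20)
print(study.best_params)
```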
### 3.3 Experiment in different initial weight and iteration order
14 different initial weights were randomly generated with different random seeds in PyTorch, and 2 different orders of data iteration were made by randomly sampling molecules for each batch and randomly shuffling the batch iteration order with 2 different random seeds. All hyperparameters were fixed during these 28 experiments in total. We aborted some of the experiments when accuracy reached 0.95 instead of continuing until step 80,000 to conduct numerous experiments. We calculated perfect accuracy and partial accuracy for every 2000 steps, and the step when perfect accuracy first reached 0.7/0.95 was called _step-0.7/0.95_, respectively. The mean step-0.7 and step-0.95 were compared between 2 iteration orders by two-sided Welch's t-test.
### 3.4 Research for the cause of stagnation
For each character in the vocabulary, we calculated perfect accuracy with the selected character being masked. This means we did not check whether the selected character was correctly predicted by the Transformer when calculating perfect accuracy. We computed partial accuracy for each character as well. Because the Transformer predicts each character from memory and previous characters it predicted, it is more likely to produce wrong predictions after it once made a mistake. We therefore adopted teacher-forcing when calculating this metric, meaning the model predicts each character with the correct preceding characters [47].
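One possible way to compute this character-masked perfect accuracy is sketched below (tensor shapes, the padding convention, and the token ids are assumptions):

```python
import torch

PAD_ID = 0  # assumed padding-token index

def masked_perfect_accuracy(pred, target, masked_id):
    """Perfect accuracy that ignores both padding and one selected character id."""
    ignore = (target == PAD_ID) | (target == masked_id)
    return ((pred == target) | ignore).all(dim=1).float().mean().item()

# Example: if the only error is at the masked character, the string still counts as perfect
pred = torch.tensor([[5, 8, 7, 0]])
target = torch.tensor([[5, 6, 7, 0]])
print(masked_perfect_accuracy(pred, target, masked_id=6))  # 1.0
```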
### 3.5 Structural modifications of the model to prevent stagnation
For AdamW, He normal initialization, and pre-LN (pre-layer normalization) structures, we used the PyTorch implementation. As pre-LN is not implemented in PyTorch version 1.8, we conducted experiments with the pre-LN structure in version 1.10. For experiments with more "@" and "@@", training data were sampled again from the training dataset we prepared in Section 3.1.1. SMILES strings that have either "@" or "@@" were sampled at 100% probability, and those that do not were sampled at 50%. The new training dataset contained about 135,000 molecules (about 67,500 steps). We did not alter the test set.
These modifications were introduced respectively, and the model was trained from 14 initial weights. The number of steps the model took until perfect accuracy reached 0.7 and 0.95 was compared to the control experiment with no modification by two-sided Welch's t-test with Bonferroni correction. Because we had to conduct many experiments in this section, we aborted experiments when accuracy reached 0.95 instead of continuing until step 80,000.
## 4 Results & Discussion
### 4.1 Partial/overall structure recognition of the Transformer in learning progress
To understand how the Transformer model learns diverse chemical structures, we first investigated the relationship between model performance and learning progress by comparing models at various training steps. In this study, we trained the Transformer to predict the canonical SMILES of molecules based on their randomized SMILES [17], [39], [44]. For models at various steps of training, we calculated the perfect accuracy and partial accuracy of the predicted SMILES expressions [49], as described in Section 3.1.4. We assumed that perfect accuracy, which evaluates the complete consistency of target and prediction, represents how well the models understand the overall molecular structures, whereas partial accuracy, which measures the position-wise accuracy of the prediction, indicates recognition of partial structures of molecules. The results showed that partial accuracy rapidly converged to 1.0, meaning almost complete translation, whereas perfect accuracy gradually increased as learning proceeded (**Figure 1a**). This suggests that the Transformer model recognizes partial structures of molecules at quite an early stage of training, when the overall structures are yet to be understood well. To further evaluate partial and overall recognition of molecules, we prepared models at the steps when perfect accuracy surpassed 0.2, 0.5, 0.7, 0.9, 0.95, and 0.98 and at steps 0, 4,000, and 80,000 (end of training). For models at these steps, we computed the MACCS keys [14] and ECFP [15] (radius \(R=1,2,3\)) of the predicted/target molecules and calculated the Tanimoto similarity for each prepared model. As these 2 descriptors represent partial structures of molecules, their similarity between target and prediction can be regarded as measuring the model's understanding of the partial structures of molecules. As a result, the Tanimoto similarity of molecular fingerprints saturated at nearly 1.0, meaning complete correspondence of fingerprints between prediction and target, when perfect accuracy was merely about 0.3 (**Figure 1a, 1b**). We also compared the Tanimoto similarity with the loss function (**Figure 1b**), which showed that the similarity of fingerprints reached about 1.0 before the loss had converged. These results also support the early recognition of partial structures and the late understanding of the overall structure of molecules by the Transformer model. We previously found that the GRU model, derived from NLP, shows a similar tendency [49]. It is therefore suggested that NLP models, when trained on chemical structures by learning SMILES, recognize partial structures of molecules at an early stage of training regardless of their architecture. This implies that enabling the model to refer to the overall structures of molecules, rather than only their partial structures, would improve the performance of the descriptor and downstream tasks.
MACCS keys consist of 166 binary flags, and each of them represents whether the molecule has a certain predefined substructure. To investigate what kind of substructures are easy or difficult for the model to predict, we conducted a dimension-wise analysis of the similarities between prediction and target. Here, we did not exclude molecules with invalid prediction; instead, for each dimension, we calculated the percentage of molecules that were validly decoded and for which MACCS key bits matched between the predicted and target structures, relative to all molecules.
For each dimension \(i\), predictions are classified against the target as follows: for target molecules with MACCS[\(i\)] = 0, the prediction is counted as invalid SMILES (A), valid with MACCS[\(i\)] = 0 (B), or valid with MACCS[\(i\)] = 1 (C); for target molecules with MACCS[\(i\)] = 1, the corresponding counts are D, E, and F. The dimension-wise agreement ratios are then defined as

\[Ratio_{0}[i]=\frac{B}{A+B+C},\qquad Ratio_{1}[i]=\frac{F}{D+E+F}.\]
### 4.2 Downstream task performance in the learning progress
Molecular descriptors are frequently used in solving cheminformatics tasks. Therefore, in many cases, the performance of descriptor generation methods is evaluated by how much downstream tasks, like prediction of molecular properties, are solved from their descriptor. On the other hand, we have shown that in the case of a descriptor generation method based on the GRU model, downstream task performance is mainly related to the recognition of partial structures of molecules [49]. We therefore worked on the evaluation of the downstream task performance over the learning progress in the Transformer model. To be specific, we predicted the molecular properties from intermediate expression in the Transformer at different steps, whose details are described in Section 3.2. We used benchmark datasets from MoleculeNet [50] as summarized in **Table 2**. In this study, we pooled the memory of the Transformer Encoder as a descriptor in several ways and predicted property from it. Previously reported methods such as ECFP [15] (radius \(R=2\)) and CDDD [10], and randomly generated vectors are also used as baseline descriptors of molecules. We adopted XGBoost as the algorithm to predict property from the descriptors. Note that to evaluate the performance of memory expression itself, rather than the inherent architecture of the model, we did not conduct fine-tuning. Metrics are based on recommendations in MoleculeNet. We used splits of data provided by [52] and experimented with each of the 3 splits.
**Figure 2 and Supplementary Figures 3 and 4** show the prediction scores of each descriptor (also summarized in **Table 3**). The results showed that descriptors from models at an early phase, or even at the beginning of training, can perform just as well as those of the fully trained model, except for the prediction of Lipophilicity, although the score for this task also saturated at an early phase (step 6,000). [54] showed that the neural fingerprint (NFP), a deep-learning and graph-based descriptor, correlates with ECFP and is able to predict molecular properties without training. Similarly, one explanation of the presented result is that the Transformer model, even with its initial weights, generates a meaningful descriptor through its inherent mechanisms such as self-attention. This implies that modifying the structure of the model is more helpful for improving the performance than changing what data the model is fine-tuned on. Note that the performance of the descriptor pooled from the Transformer memory is almost the same as that of ECFP and is slightly lower than that of CDDD. Because the Transformer model encodes structural information of molecules into variable-length memory, it is possible that the pooling process omitted part of the structural information, which is scattered across the whole memory, thereby degrading the performance. This also points to a limitation of this study; that is, a more sophisticated way of pooling may improve the performance of the descriptor at different steps of learning, although we obtained consistent results using 4 different ways of pooling.
### 4.3 Stagnation of perfect accuracy in learning chemical structures
When we experimented with different random seeds to reproduce the results in Section 4.1, we observed that the perfect accuracy of the Transformer sometimes stagnated at a low value for a while and then abruptly increased at a certain step. Intrigued by this phenomenon, we conducted an experiment in which the randomly determined conditions were varied. To be specific, we trained the model with 14 different initial weights and 2 different orders of iteration. Here, an order of iteration refers to the order in which the molecules are learned. **Figure 3a and Supplementary Figure 5a** show the perfect accuracy under these different conditions. Considering the computational cost, in this section we did not conduct training for 80,000 steps but aborted training when perfect accuracy reached 0.95, which is approximately the final accuracy of the model. The figure shows that while perfect accuracy uneventfully converges in many cases, there are cases where perfect accuracy stayed at \(\sim\)0.6 from approximately 10,000 to 70,000 steps and then surged to nearly 1.0, or even remained low after 1 epoch (\(\sim\)80,000 steps). **Figure 3b** shows the changes in the loss function for conditions in which stagnation did or did not occur. This shows that the loss decreased sharply at the same time as the accuracy surged.
To specify the determining factor of the stagnation, we calculated steps when accuracy exceeded 0.7 and 0.95, named _step-0.7_ and _step-0.95_ respectively. Based on **Figure 3a.**, we considered _step-0.7_ to represent the step when stagnation was resolved, and _step-0.95_ is the step when learning was almost completed. **Supplementary Figure 5b** shows the relationship between _step-0.7/0.95_ of the same seed and different iteration orders. The result shows that the trend of learning progress is similar for different iteration orders when the same initial weight was used. **Supplementary Figure 5c** shows the average _step-0.7/0.95_ of each iteration order, and no significant difference of _step-0.7/0.95_ was observed. These results suggest that whether the stagnation occurs or not depends on initial weight, rather than iteration orders.
We replicated the experiments in Sections 4.1 and 4.2 for a trial in which stagnation occurred. We studied the agreement of fingerprints and performance on downstream tasks at different steps of the learning with stagnation. As a result, the tendency as found in Sections 4.1 and 4.2 was conserved even when stagnation occurred, reinforcing our findings in the previous sections. Details are shown in **Supplementary Note 2**.
### 4.4 Cause of stagnation in learning chemical structures
Next, to clarify the cause of this stagnation, we investigated the model performance on each character of the SMILES strings using 2 metrics. The first is the perfect accuracy when each character is masked. This is calculated like the perfect accuracy defined in Section 4.1, except that the prediction for a certain _masked_ character in the target is not considered. This value is expected to rise when a difficult, or commonly mistaken, character is masked. The second metric is the accuracy of each character of the target when teacher forcing is used. In the test phase, as the model usually predicts each letter of SMILES from the previously predicted string, the model is likely to make a mistake when it has already made an incorrect prediction. This means characters that tend to appear later in the string (like ")" compared with "(") tend to show low accuracy. To remedy this, we adopted teacher-forcing [47] when predicting the SMILES, meaning the model always predicts each letter from the correct preceding SMILES string, and computed the accuracy of each character.
**Figure 4a** shows the transition of masked accuracy during training with or without stagnation. The results show that during stagnation, predictions of "@" and "@@" are wrong for a large number of molecules. These 2 characters are used to describe chirality in the SMILES representation (**Figure 4b**). This suggests that stagnation was caused by confusion in discriminating enantiomers, and the subsequent surge of perfect accuracy was the result of the resolution of this confusion.
**Supplementary Figures 6 and 7** show the accuracy for each character. Accuracy is plotted against the frequency of each character, which is likely to affect accuracy. The change in the score shows that "@" and "@@" are difficult to predict compared to characters with similar frequency. The results also indicate that the accuracy for "@" and "@@" increases slowly even after stagnation is resolved, or when learning proceeds smoothly without stagnation. In summary, understanding chirality is difficult for the Transformer model, and the model is sometimes confused about it for a long period.
### 4.5 Solution of stagnation in learning chemical structures
Why does the Transformer model, when learning the SMILES representation of molecules, fail to learn chirality? To answer this question, we applied the following perturbations to the learning process and evaluated their effects on stagnation.
#### 4.5.1 Increasing "@" and "@@" in training dataset
It is possible that learning from more enantiomers helps the model understand chirality. We therefore omitted half of the molecules in the training set whose SMILES have neither "@" nor "@@" and trained the model with data in which chirality appears more frequently.
#### 4.5.2 Introduction of AdamW
In deep-learning studies, one possible explanation for this kind of stagnation is that the model is stuck in a local optimum, and changing the optimizer might therefore avoid stagnation. We have so far used the Adam optimizer following [33], but here we tried the AdamW [55] optimizer. AdamW is a refinement of Adam in which weight decay is decoupled from the gradient-based update instead of being applied as \(L^{2}\) regularization in the loss function. [55] showed that this optimizer can be applied to a wider range of problems than Adam.
#### 4.5.3 He normal initialization
The experiments in Section 4.3 suggested that whether stagnation occurs depends on the initial weights of the model. Thus, changing the weight initialization might stabilize learning. Here, we introduced He normal initialization, which is reported to be suitable for the ReLU activation function used in our Transformer.
#### 4.5.4 pre-LN structure
Pre-LN is a structural modification of the Transformer first proposed in [56] to stabilize learning. In this structure, layer normalization is applied to the input of each sub-layer rather than after the residual addition, so the residual path is not affected by layer normalization; this prevents the vanishing gradients in the lower layers of the Transformer that the original post-LN arrangement can cause. This modification has been shown to stabilize the learning of the Transformer [56].
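In recent PyTorch versions the two variants differ only in the `norm_first` flag of the built-in encoder layer, as sketched below; the number of attention heads and the use of this flag (rather than a hand-written layer) are assumptions about the configuration.

```python
import torch.nn as nn

# Post-LN (original Transformer): layer normalization after the residual addition
post_ln = nn.TransformerEncoderLayer(d_model=512, nhead=8, dim_feedforward=2048,
                                     dropout=0.1, norm_first=False)

# Pre-LN: layer normalization on the sub-layer input, leaving the residual path untouched
pre_ln = nn.TransformerEncoderLayer(d_model=512, nhead=8, dim_feedforward=2048,
                                    dropout=0.1, norm_first=True)
```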
All these perturbations were respectively introduced to the baseline model, and training was conducted 14 times with different initial weights for each modification, except for the introduction of He normal, which showed a significant delay in learning and was aborted when 5 studies were finished. In this section, we aborted training when perfect accuracy reached 0.95, which is approximately the final accuracy of the model, considering the computational cost.
**Supplementary Figure 8** shows the average of _step-0.7/0.95_. In some cases where the accuracy did not reach 0.7/0.95, _step-0.7/0.95_ was defined as 80,000 (end of training). The result showed that the introduction of pre-LN significantly accelerated the learning speed, whereas other modifications did not achieve improvement. **Figure 5a** also shows the change in accuracy over time in the 14 trainings with pre-LN, compared with the control. This figure also demonstrates that pre-LN strongly stabilizes learning.
Then, does pre-LN facilitate understanding of chirality, or simply accelerate overall learning? **Figure 5b and Supplementary Figure 9** show "masked accuracy" and accuracy for each character in one of the studies in which pre-LN was adopted, respectively. These results show that learning of "@" and "@@" is relatively slow even in the model with pre-LN, and it is suggested that pre-LN accelerates the learning of not only chirality but also molecular structure in general.
### 4.6 Investigation of chirality misunderstanding with another chemical language
Finally, to clarify the generality of this difficulty of the Transformer with chirality, we conducted experiments with another textual representation of molecules. Instead of SMILES, here we used InChI, an alternative literal representation of molecules adopted in some cheminformatics studies with chemical language models, although the performance of chemical language models fed with InChI is reported to be inferior to that of models fed with SMILES [10], [57]. We trained the Transformer model to translate the InChI expressions of molecules into their canonical SMILES, and the experiment was conducted 5 times. We used the molecules extracted and preprocessed from ZINC in Section 3.1. As in Section 3.1.3, the batch size was adjusted according to the length of the strings so that about 25,000 tokens are processed per step [33]; the relatively long InChI expressions therefore reduced the number of molecules per batch and extended the length of 1 epoch to about 185,000 steps. We therefore trained the model for up to 200,000 steps, although we aborted training when perfect accuracy reached 0.95.
The results showed that the stagnation did occur in InChI-to-SMILES translation (**Figure 6a**), and character-wise analysis showed that confusion in discriminating enantiomers caused it (**Figure 6b and Supplementary Figure 10**). In addition, pre-LN introduction relieved the stagnation (**Figures 6a and 6b**). These results suggest that the difficulty in learning chirality for the Transformer is an innate property of this model rather than a grammatical or processive problem specific to SMILES.
## 5 Conclusion
In recent years, a new field of research has been established in which NLP models, especially the Transformer model, are applied to literal representations of molecules like SMILES to solve various tasks handling molecular structures: neural-network-based chemical language models [7]. In this paper, as a basic study of chemical language models for descriptor generation, we investigated how a Transformer model comes to understand diverse chemical structures as learning progresses. We compared the agreement between the output and the target, as well as the fingerprints related to substructures, for models in the process of learning. The performance of the models under training was also examined on the downstream task of predicting molecular properties. We further found that the perfect accuracy of translation sometimes stagnates at a low level depending on the initial weights of the model. To find the cause of this phenomenon, we compared the accuracy for each character of SMILES, and we experimented with some alterations to prevent stagnation. The major findings of this paper are as follows:
1. In the Transformer model, partial structures of molecules are recognized in the early steps of training, whereas recognition of the whole structures requires more training. Together with our previous study using RNN models[49], this finding can be generally true for various NLP models fed with SMILES strings. Enabling the Transformer model to refer to overall structural information as an auxiliary task more in its structure would help improve the descriptor generation model.
2. For molecular property prediction, the performance of the descriptor generated by the Transformer model may already have been saturated before it was trained, and it was not improved by the subsequent
training. This suggests that the descriptor of the initial model already contains enough information for downstream tasks, which is perhaps the partial structures of molecules. On the other hand, downstream tasks like property prediction of molecules may be "too easy" for the Transformer and inappropriate for evaluating Transformer-based descriptor generation methods.
3. Translation performance of the Transformer concerning chirality is relatively slow to rise compared to the other factors, and the model is sometimes confused about chirality for a long period, causing persistent stagnation in whole structure recognition. This suggests that additional structures or tasks that notice chirality can improve the performance of the descriptor of the model.
4. Introducing the pre-LN structure accelerates and stabilizes learning, including chirality.
These discoveries deepen the understanding of chemical language models for descriptor generation and are expected to activate the field. It is an intriguing future task to investigate whether these findings hold true in chemical language models for other applications with supervised natures such as structure generation and end-to-end property prediction, although we focused on descriptor generation in this study considering that it purely learns chemical structure in an unsupervised manner. NLP is one of the most advanced fields in deep learning; thus, chemical language models would be increasingly developed. On the other hand, there are many unknowns in the relationship between language models and chemical structures compared with prevalent neural network models in the field of chemistry, such as graph neural networks [58], [59]. Further basic research on the relationship between NLP models and chemical structures is expected to clarify the black box, "_How do NLP models recognize chemical structures?_", leading to the development and performance improvement of chemical language models for various tasks in chemistry.
## 6 Declarations
Code & Data AvailabilityCode, models, and data are available at: [https://github.com/mizuno-group/2023](https://github.com/mizuno-group/2023).
Author ContributionsYasuhiro Yoshikai: Methodology, Software, Investigation, Writing - Original Draft, Visualization.
Tadahaya Mizuno: Conceptualization, Resources, Supervision, Project administration, Writing - Original Draft, Writing - Review & Editing, Funding acquisition.
Shumpei Nemoto: Writing - Review & Editing.
Hiroyuki Kusuhara: Writing - Review & Editing
Competing interestsThe authors declare that they have no conflicts of interest.
AcknowledgementWe thank all those who contributed to the construction of the following data sets employed in the present study such as ZINC and MoleculeNet. This work was supported by AMED under Grant Number JP22mk0101250h and the JSPS KAKENHI Grant-in-Aid for Scientific Research (C) (grant number 21K06663) from the Japan Society for the Promotion of Science.
|
2305.05668
|
Neurosymbolic Artificial Intelligence (NSAI) based Algorithm for
predicting the Impact Strength of Additive Manufactured Polylactic Acid (PLA)
Specimens
|
In this study, we introduce application of Neurosymbolic Artificial
Intelligence (NSAI) for predicting the impact strength of additive manufactured
polylactic acid (PLA) components, representing the first-ever use of NSAI in
the domain of additive manufacturing. The NSAI model amalgamates the advantages
of neural networks and symbolic AI, offering a more robust and accurate
prediction than traditional machine learning techniques. Experimental data was
collected and synthetically augmented to 1000 data points, enhancing the
model's precision. The Neurosymbolic model was developed using a neural network
architecture comprising input, two hidden layers, and an output layer, followed
by a decision tree regressor representing the symbolic component. The model's
performance was benchmarked against a Simple Artificial Neural Network (ANN)
model by assessing mean squared error (MSE) and R-squared (R2) values for both
training and validation datasets. The results reveal that the Neurosymbolic
model surpasses the Simple ANN model, attaining lower MSE and higher R2 values
for both training and validation sets. This innovative application of the
Neurosymbolic approach in estimating the impact strength of additive
manufactured PLA components underscores its potential for optimizing the
additive manufacturing process. Future research could investigate further
refinements to the Neurosymbolic model, extend its application to other
materials and additive manufacturing processes, and incorporate real-time
monitoring and control for enhanced process optimization.
|
Akshansh Mishra, Vijaykumar S Jatti
|
2023-05-07T12:11:04Z
|
http://arxiv.org/abs/2305.05668v1
|
# Neurosymbolic Artificial Intelligence (NSAI) based Algorithm for predicting the Impact Strength of Additive Manufactured Polylactic Acid (PLA) Specimens
###### Abstract
In this study, we introduce application of Neurosymbolic Artificial Intelligence (NSAI) for predicting the impact strength of additive manufactured polylactic acid (PLA) components, representing the first-ever use of NSAI in the domain of additive manufacturing. The NSAI model amalgamates the advantages of neural networks and symbolic AI, offering a more robust and accurate prediction than traditional machine learning techniques. Experimental data was collected and synthetically augmented to 1000 data points, enhancing the model's precision. The Neurosymbolic model was developed using a neural network architecture comprising input, two hidden layers, and an output layer, followed by a decision tree regressor representing the symbolic component. The model's performance was benchmarked against a Simple Artificial Neural Network (ANN) model by assessing mean squared error (MSE) and R-squared (R\({}^{2}\)) values for both training and validation datasets.
The results reveal that the Neurosymbolic model surpasses the Simple ANN model, attaining lower MSE and higher R\({}^{2}\) values for both training and validation sets. This innovative application of the Neurosymbolic approach in estimating the impact strength of additive manufactured PLA components underscores its potential for optimizing the additive manufacturing process. Future research could investigate further refinements to the Neurosymbolic model, extend its application to other materials and additive manufacturing processes, and incorporate real-time monitoring and control for enhanced process optimization.
Neurosymbolic AI; Additive Manufacturing; Neural Networks; Impact Strength
## 1 Introduction
Additive manufacturing refers to the fabrication technique where an object is constructed by sequentially depositing material in layers. This approach contrasts with subtractive manufacturing, which involves carving out a desired shape from a solid block of material. Although the term "additive manufacturing" can encompass any process in which an object is formed by the accumulation of material, it is predominantly associated with 3D printing [1-4]. The technology first emerged in the 1980s as a method for rapid prototyping, enabling the quick production of non-functional models without the conventional time-consuming and costly procedures associated with prototype development [5-7]. As additive manufacturing technologies evolved, their applications expanded to include rapid tooling, which facilitated the creation of molds for end-use products. By the early 21st century, additive manufacturing
techniques were being employed for the fabrication of functional components. In recent years, industry giants such as Boeing and General Electric have adopted additive manufacturing as a crucial component of their production processes.
Traditional machining processes, such as milling, drilling, rolling, and forming, continue to dominate medium to large-scale production. However, over the past decade, additive manufacturing (AM) has emerged as a disruptive force, offering new opportunities for customized, small-scale production. Conventional manufacturing techniques struggle to achieve optimized production, whereas additive manufacturing can easily facilitate this with minimal tooling changes and significantly reduced manufacturing time. Despite its advantages, additive manufacturing presents unique challenges depending on the specific technology employed, including vat polymerization, powder bed fusion, material extrusion, material jetting, binder jetting, direct energy deposition, or sheet lamination [8-10]. Advances in post-processing techniques have significantly enhanced additive manufacturing capabilities, transforming it from a prototyping-focused approach to a viable method for producing finished products.
Many organizations view digitization and automation as critical factors for advancing additive manufacturing. Consequently, an increasing number of manufacturers are adopting cloud-based solutions and incorporating various algorithms into their 3D printing systems to fully harness the technology's potential. As a digital process, 3D printing is an integral component of Industry 4.0, an era characterized by the growing use of artificial intelligence (AI), such as machine learning, to optimize the value chain [11-15]. AI has the capacity to rapidly process vast amounts of complex data, making it increasingly valuable for decision-making. Machine learning, a subset of AI, refers to systems or software that employ algorithms to analyze data and subsequently identify patterns or derive solutions. While some may believe that machine learning is a recent development, its origins can be traced back to the 1940s, when researchers began emulating brain neurons using electrical circuits. In 1957, the Mark I Perceptron marked a significant milestone in the field, as the machine was capable of independently classifying input data. By learning from past errors, the device continuously improved its classification capabilities. This early success laid the groundwork for ongoing research, as scientists became captivated by the technology's possibilities and potential. Today, AI is encountered daily across various aspects of life, from speech recognition and intelligent chatbots to personalized treatment plans. Machine learning continues to be employed in a wide array of applications.
## 2 Problem Statement
In recent years, additive manufacturing techniques have gained prominence due to their ability to create complex and customized structures, particularly with the growing demand for sustainable materials like Polylactic Acid (PLA). However, accurately predicting the impact strength of additive manufactured PLA components remains a challenge, which directly affects their performance and application potential. This research paper aims to address this issue by employing two distinct predictive models: a Neurosymbolic-based algorithm and a simple Artificial Neural Network (ANN) model. The problem statement for this research work is to investigate the efficacy of these two approaches in accurately predicting the impact
strength of additive manufactured PLA components and to determine which model provides better prediction performance and insights for practical applications.
## 3 Concept of Neurosymbolic Artificial Intelligence (NSAI)
Neuro-symbolic artificial intelligence represents an emerging field in AI research, aiming to integrate the advantages of traditional rule-based AI methodologies with contemporary deep learning techniques [16]. Symbolic models offer several benefits, including the requirement for few input samples, effective generalization to new problems, and a conceptually straightforward internal functionality when compared to deep learning models. However, these models necessitate considerable manual tuning, making them challenging to develop for complex problems.
Neuro-symbolic AI strives to harness the strengths of both deep learning and symbolic approaches. Deep learning has demonstrated remarkable success in extracting intricate features from data for tasks such as object detection and natural language processing. In contrast, symbolic AI excels at formalizing human-like reasoning processes. The primary goal of neuro-symbolic AI is to employ deep learning techniques to extract features from data and then manipulate these features using symbolic methodologies, thus capitalizing on the best aspects of both fields.
The framework implemented in the present work is shown in Figure 1.
## 4 Need of Synthetic Data Generation in Additive Manufacturing
Synthetic data generation offers significant advantages in additive manufacturing (AM) by expanding available data, enabling design exploration, reducing costs, and maintaining data privacy. By supplementing existing datasets with artificially generated data, synthetic data generation can improve the accuracy and dependability of machine learning models and computational tools used for process optimization, defect detection, and material property prediction. In addition, synthetic data can facilitate the exploration of various design parameters and their effects on part performance, material properties, and manufacturing efficiency. This helps to optimize designs and reduce the reliance on costly and time-consuming physical prototyping. Synthetic data generation can also lower experimental costs by decreasing the number of physical experiments required, which is particularly beneficial in AM due to the high expenses associated with materials and equipment usage.
Figure 2 presents synthetic data generated using a sine function, augmented with random noise to emulate real-world observations. The original data points are depicted as blue
Figure 1: Framework of NSAI implemented in the present work
circular markers, while the magenta diamond markers represent the synthetic data points. This illustration highlights the capability of synthetic data generation to produce additional data points that maintain the fundamental pattern of the original data, while also introducing some variability.
Synthetic data generation primarily involves analyzing the given data to discern its distribution, correlations, and underlying patterns. Subsequently, new data points are generated based on this knowledge, frequently employing statistical models, sampling techniques, or machine learning algorithms. The objective is to produce synthetic data that mirrors the original data but includes added variation, ultimately enhancing the robustness of machine learning models and other analytical processes.
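A toy version of the noise-augmented generation illustrated in Figure 2 might look like this (a generic sketch, not the authors' exact procedure or noise level):

```python
import numpy as np

rng = np.random.default_rng(0)

# "Original" observations: a sine function sampled at a few points
x_orig = np.linspace(0, 2 * np.pi, 20)
y_orig = np.sin(x_orig)

# Synthetic points: resample the same range and add small Gaussian noise, so the
# underlying pattern is preserved while extra variability is introduced
x_syn = rng.uniform(0, 2 * np.pi, size=100)
y_syn = np.sin(x_syn) + rng.normal(scale=0.1, size=x_syn.shape)
```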
## 5 Material and Methods
Figure 3 showcases the Fused Deposition Modeling (FDM) process, which entails constructing three-dimensional objects in a layer-by-layer manner using thermoplastic materials like polylactic acid (PLA). The procedure commences with a computer-aided design (CAD) model that is transformed into a suitable file format and divided into thin horizontal layers with the help of dedicated software. These layers produce a set of instructions, or G-code, for the 3D printer to execute. The printer's extruder warms the PLA filament and deposits it through a nozzle onto the build platform, constructing the object one layer at a time. The PLA material merges with the preceding layer and solidifies upon
Figure 2: Visualizing Original Data and generated Synthetic data
cooling, culminating in the completed 3D object. Support structures may be necessary for intricate geometries or overhangs, and post-processing methods such as sanding or painting can be employed for final touches.
The FDM process was utilized to fabricate impact strength specimens in compliance with ASTM D256 standard specifications, employing a Creality Ender 3 machine with a bed size of 220 x 220 x 250 mm, as depicted in Figure 4. The component design was developed using CATIA software and transformed into an STL file. Subsequently, the file was converted into a machine-interpretable G-code file through the Cura engine within Repetier software, as demonstrated in Figure 5.
Table 1 presents the input and output parameters of the experimental work. The resulting experimental data is converted into a CSV format file and imported into the Google Colab platform for application of the neurosymbolic programming algorithm. To enhance the model's accuracy, the available data is synthetically expanded to 1,000 data points.
The neurosymbolic programming approach merges the advantages of neural networks and symbolic AI. The neural network structure consists of a series of densely connected layers, with 32 and 16 hidden units and a single output neuron. A Rectified Linear Unit (ReLU) activation function introduces nonlinearity to the model. The network is trained using the Adam optimizer and the mean squared error loss function. Training and validation datasets are employed to fit the model for 2,000 epochs, with a batch size of 32. The resulting trained neural network model is then utilized to extract learned features from the input data. A decision tree, functioning as the symbolic component of the model, is created using the
Figure 3: Schematic representation of FDM Process
DecisionTreeRegressor from the scikit-learn library. The decision tree's maximum depth is limited to four to prevent overfitting. The model's performance is evaluated by predicting output values for the training and validation sets, using the learned features as inputs. Mean squared error (MSE) and R-squared (R\({}^{2}\)) values are calculated to assess the model's performance. The obtained metric features are subsequently compared to a simple Artificial Neural Network (ANN) model.
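A condensed sketch of this pipeline is given below, assuming a Keras-style network and scikit-learn's DecisionTreeRegressor; the layer sizes, optimizer, loss, epochs, batch size, and tree depth follow the text, while the placeholder data and everything else are illustrative.

```python
import numpy as np
import tensorflow as tf
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error, r2_score

# Placeholder data: 4 process parameters -> impact strength (the study uses the
# experimental dataset synthetically expanded to 1,000 points)
rng = np.random.default_rng(0)
X_train, y_train = rng.random((800, 4)), rng.random(800)
X_val, y_val = rng.random((200, 4)), rng.random(200)

# Neural component: 32- and 16-unit ReLU hidden layers, single output neuron
net = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(16, activation="relu", name="features"),
    tf.keras.layers.Dense(1),
])
net.compile(optimizer="adam", loss="mse")
net.fit(X_train, y_train, validation_data=(X_val, y_val),
        epochs=2000, batch_size=32, verbose=0)

# Learned features: activations of the second (16-unit) hidden layer
extractor = tf.keras.Model(inputs=net.input, outputs=net.get_layer("features").output)
f_train, f_val = extractor.predict(X_train, verbose=0), extractor.predict(X_val, verbose=0)

# Symbolic component: shallow decision tree fitted on the learned features
tree = DecisionTreeRegressor(max_depth=4).fit(f_train, y_train)
for name, f, y in [("train", f_train, y_train), ("validation", f_val, y_val)]:
    pred = tree.predict(f)
    print(name, "MSE:", mean_squared_error(y, pred), "R2:", r2_score(y, pred))
```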
Figure 4: Ender 3 3D printer
Figure 5: Impact Test specimen a) Before slicing, b) After slicing
## 6 Results and Discussion
The neurosymbolic programming approach amalgamates the advantages of neural networks and symbolic AI, resulting in a more robust model. In this study, a neural network comprising an input layer, two hidden layers, and an output layer is initially established. The first hidden layer contains 32 units, and the second has 16 units, culminating in a single output
Table 1: Experimental Dataset. Columns: Infill percentage (\%), Layer height (mm), Print speed (mm/sec), Extrusion temperature (\({}^{\circ}\)C), Impact strength (kJ/m\({}^{2}\)). [Data rows not recovered from the source.]
neuron. The Rectified Linear Unit (ReLU) activation function is employed to incorporate nonlinearity into the model, as depicted in Figure 6.
The input layer takes the input data represented by a feature vector x, where 'n' is the number of input features as shown in Equation 1. The first hidden layer applies a linear transformation using weights (W\({}_{1}\)) and biases (b\({}_{1}\)), followed by the ReLU activation function to introduce nonlinearity as shown in Equation 2. This results in an activation vector a\({}_{1}\) with 32 units. The second hidden layer follows a similar process using weights (W\({}_{2}\)) and biases (b\({}_{2}\)), resulting in an activation vector a\({}_{2}\) with 16 units as shown in Equation 3. The output layer produces the predicted output value \(\hat{\mathcal{Y}}\) by applying a linear transformation using weights (W\({}_{3}\)) and biases (b\({}_{3}\)) as shown in Equation 4.
\[\mathbf{x}\in\mathbb{R}^{n} \tag{1}\] \[\mathbf{a}_{1}=\mathrm{ReLU}(\mathbf{W}_{1}\mathbf{x}+\mathbf{b}_{1}) \tag{2}\] \[\mathbf{a}_{2}=\mathrm{ReLU}(\mathbf{W}_{2}\mathbf{a}_{1}+\mathbf{b}_{2}) \tag{3}\] \[\hat{y}=\mathbf{W}_{3}\mathbf{a}_{2}+\mathbf{b}_{3} \tag{4}\]
Figure 6: Neural Network architecture used in the present work
During the training phase, the neural network strives to minimize the mean squared error loss function illustrated in Figure 7. This function measures the disparity between the predicted output value (\(\hat{y}\)) and the actual output value (\(y\)), as expressed in Equation 5. The Adam optimizer updates the network's weights and biases to minimize the loss function. Once trained, the neural network can extract learned features from the input data. In this instance, the activation vector \(\mathbf{a}_{2}\) from the second hidden layer, containing 16 units, is utilized as the learned features (f) for the subsequent step.
\[L(y,\hat{y})=(y-\hat{y})^{2} \tag{5}\]
Utilizing the learned features (f) and target output values (y), a decision tree regressor is trained. The decision tree serves as the symbolic component of the model, offering a human-interpretable depiction of the data relationships. To prevent overfitting, the maximum depth of the decision tree is limited to four. The trained decision tree regressor predicts output values for both the training and validation datasets using the learned features (f) as inputs. The model's performance is evaluated using the mean squared error (MSE) and R-squared (\(\text{R}^{2}\)) values. The MSE quantifies the average squared difference between the predicted and true output values, while the \(\text{R}^{2}\) score represents the proportion of output value variance that the model can explain.
Table 2 presents the calculated MSE and \(\text{R}^{2}\) values for both the simple ANN and the neurosymbolic approach-based algorithm.
Figure 7: Decreasing loss function with the number of epochs
Table 2 shows that the Neurosymbolic model outperforms the Simple ANN model in terms of both MSE and R\({}^{2}\) values. The MSE for the Neurosymbolic model on the training set is 2.7448, while it is 3.4174 for the Simple ANN model. Similarly, on the validation set, the Neurosymbolic model has a lower MSE (2.7026) compared to the Simple ANN model (3.3666). Lower MSE values indicate that the Neurosymbolic model has a better fit to the data, as it reduces the average squared difference between the predicted and true output values. Furthermore, the R\({}^{2}\) values of the Neurosymbolic model are higher than those of the Simple ANN model for both the training and validation sets. The R\({}^{2}\) values for the Neurosymbolic model are 0.9840 (training) and 0.9850 (validation), while the Simple ANN model has R\({}^{2}\) values of 0.9800 (training) and 0.9813 (validation). Higher R\({}^{2}\) values demonstrate that the Neurosymbolic model can explain a larger proportion of the variance in the output values compared to the Simple ANN model.
It is essential to highlight the significance of comparing true versus predicted values for both training and validation sets when evaluating a machine learning model, such as the neurosymbolic programming approach. This comparison enables researchers to measure the model's performance, assess its generalization capabilities, and ensure it is neither overfitting nor underfitting the data. The true values represent the actual, observed target outputs in the datasets, while the predicted values are generated by the model as its best estimate of the outputs based on the input features.
Comparing these values on the training set allows the assessment of the model's ability to learn the underlying patterns and relationships in the data during the training process. A good fit on the training set is crucial but not sufficient to ensure the model's effectiveness, as it may still overfit the data by memorizing noise or capturing spurious correlations. Evaluating the model on the validation set, which consists of data unseen by the model during training, provides an estimate of its generalization capabilities.
A high-performing model should maintain its accuracy and exhibit similar performance on both the training and validation sets. Discrepancies between the true and predicted values on the validation set can indicate that the model is not generalizing well to new data, potentially due to overfitting or underfitting. Figures 8 and 9 display the graphs of true versus predicted values for the training and validation sets for both the simple ANN and the neurosymbolic model.
Figure 8: Plot obtained in case of Simple ANN a) true vs. predicted values for training b) true vs. predicted values for validation
Figure 9: Plot obtained in case of Neurosymbolic developed algorithm a) true vs. predicted values for training b) true vs. predicted values for validation
## 7 Conclusion
In conclusion, this research demonstrates the effectiveness of the Neurosymbolic model in predicting the impact strength of additive manufactured polylactic acid. The model leverages the strengths of both neural networks and symbolic AI, resulting in a more robust and accurate prediction compared to the Simple ANN model. The performance metrics, including mean squared error (MSE) and R-squared (R\({}^{2}\)) values, show that the Neurosymbolic model outperforms the Simple ANN model on both the training and validation sets, exhibiting superior generalization capabilities.
Future research can build upon the findings of this study by exploring several avenues. First, the Neurosymbolic model can be further fine-tuned and optimized by adjusting the neural network architecture or employing different activation functions and optimization algorithms. This may lead to even better performance in predicting the impact strength of additive manufactured materials. Second, the application of the Neurosymbolic model can be extended to other additive manufacturing materials and processes, such as metal or ceramic-based materials, and other printing techniques like selective laser sintering (SLS) or stereolithography (SLA). This will help assess the generalizability and versatility of the model across various manufacturing scenarios. Third, the integration of advanced feature selection methods and dimensionality reduction techniques can be investigated to improve the model's performance further. These approaches can help identify the most relevant input features and reduce the complexity of the model, potentially enhancing its interpretability and efficiency.
|
2303.10855
|
Electron Wave Spin in Excited States
|
The wave spin of an electron can be fully characterized by the current
density calculated from the exact four-spinor solution of the Dirac equation.
In the excited states of the electron in a magnetic field-free quantum well,
the current density has a multiple vortex topology. The interaction of the
current with a magnetic potential produces a finer structure of anomalous
Zeeman splitting. When the magnetic potential is comparable to the size of the
individual vortices, fractional or zero spin effects can be observed.
|
Ju Gao, Fang Shen
|
2023-03-20T04:27:23Z
|
http://arxiv.org/abs/2303.10855v1
|
# Electron Wave Spin in Excited States
###### Abstract
The wave spin of an electron can be fully characterized by the current density calculated from the exact four-spinor solution of the Dirac equation. In the excited states of the electron in a magnetic field-free quantum well, the current density has a multiple vortex topology. The interaction of the current with a magnetic potential produces a finer structure of anomalous Zeeman splitting. When the magnetic potential is comparable to the size of the individual vortices, fractional or zero spin effects can be observed.
## I Electron Wave Spin Described by Current Density
Spin is a fundamental property of an electron that represents the electron's internal angular momentum. However, what actually spins remains an open question, because the particle spin interpretation would require the electron to spin on its own axis at a speed greater than that of light. In electromagnetism, the charge behavior is fully described by the Lorentz covariant four-current, defined as \(j=(c\rho,\mathbf{j})\), where \(\rho\) and \(\mathbf{j}\) stand for the charge and current densities, respectively, since it is responsible for both the generation and interaction of an electromagnetic field. This leads to the question: could the spin also be described by the current?
In a recent paper [1], we have shown that a stable circulating current density exists for a Dirac electron in a quantum well without a magnetic field. The circulating current density, denoted by \(j\), forms a spinning vortex around the center of the charge density, often referred to as an electron cloud. Expressing the current density in terms of the charge density and a spinning velocity distribution \(v(\mathbf{x})\) in \(j(\mathbf{x})=\rho(\mathbf{x})v(\mathbf{x})\), we find that \(v(\mathbf{x})\) is limited to the speed of light everywhere in space. In other words, the entire electron wave, or the electron cloud, spins.
Essentially, this is the wave spin interpretation first proposed by Belinfante [2; 3], who argued that spin should be regarded as a circulating flow of energy of the electron field. Ohanian [4] further elaborated the connection of the circulating momentum density and current density with the electron spin and the magnetic moment. Gao [1] showed that a confined electron has a stable circulating current density, but the wave packet of a free electron discussed by Ohanian is not stable and decoheres quickly because the wave packet is not constructed from a pure state.
In this paper we continue the discussion of wave spin. To first show that spin is an embedded property of the electron wave, we derive explicit expressions for the momentum and current densities of an electron in an eigenstate wavefunction \(\Psi\) of the Dirac equation, where \(i\hbar\frac{\partial}{\partial t}\Psi=\mathcal{E}\Psi\) and \(\mathcal{E}\) is the eigen energy. These equations are as follows:
\[\mathbf{j}=\frac{ec^{2}}{\mathcal{E}}\left\{\mathbf{\nabla}\times\left( \Psi^{\dagger}\frac{\hbar}{2}\mathbf{\Sigma}\Psi\right)+i\frac{\hbar}{2}\left[ \left(\mathbf{\nabla}\Psi^{\dagger}\right)\Psi-\Psi^{\dagger}\left(\mathbf{\nabla} \Psi\right)\right]\right\},\] \[\mathbf{G}=\left\{\frac{1}{2}\mathbf{\nabla}\times\left(\Psi^{\dagger} \frac{\hbar}{2}\mathbf{\Sigma}\Psi\right)+i\frac{\hbar}{2}\left[\left(\mathbf{\nabla} \Psi^{\dagger}\right)\Psi-\Psi^{\dagger}\left(\mathbf{\nabla}\Psi\right)\right] \right\}. \tag{1}\]
These equations show that both the current density \(\mathbf{j}\) and the momentum density \(\mathbf{G}\) contain the spin term with the spin operator \(\frac{\hbar}{2}\mathbf{\Sigma}\) and the translation term with the momentum operator \(-i\hbar\mathbf{\nabla}\). It is worth noting that these expressions are derived from different origins. The momentum density is derived from a symmetrical energy-momentum tensor known as the Belinfante-Rosenfeld tensor [5]. This symmetrical tensor is constructed to serve as a source for the gravitational field as required by the general relativity. The current density, on the other hand, is derived from the definition \(\mathbf{j}(\mathbf{x})=ec\Psi^{\dagger}(\mathbf{x})\mathbf{\alpha}\Psi(\mathbf{x})\) using the Gordon decomposition, where \(\mathbf{\alpha}\) is the \(\alpha-\)matrix in the Dirac equation. This definition ensures the conservation of charge for the Dirac electron. The momentum and current densities represent the mechanical and electrical nature of the wave spin, and have the same spin and translation terms, except for the factor \(\frac{1}{2}\) in the momentum term, which gives the gyromagnetic ratio \(g=2\) for the Dirac field.
We intend to direct our attention especially to the current density, because the wave spin manifests itself by the current density through the interaction with the electromagnetic field
\[jA=\rho\phi+\mathbf{j}\cdot\mathbf{A}. \tag{2}\]
The above equation accounts for the full electromagnetic interactions in both classical and quantum electrodynamics. In this work, we will show that the current-field interaction \(jA\) not only recovers the conventional spin-field interaction, but also reveals other geometric and topological properties that are missing in the particle spin picture, in particular for electrons in excited states.
## II Wave Spin in Excited States
To investigate the wave spin in the excited states of a confined electron, we first seek the exact solution of the Dirac equation
\[i\hbar\frac{\partial}{\partial t}\Psi(\mathbf{r},t)=\left[\mathbf{\alpha}\cdot(-i\hbar \mathbf{\nabla})+\gamma^{0}\text{mc}^{2}+U(\mathbf{r})\right]\Psi(\mathbf{r},t), \tag{3}\]
in a two-dimensional quantum well
\[U(\mathbf{r})=U(x,y)=\begin{cases}0,-L_{x}<x<L_{x},-L_{y}<y<L_{y}\\ \infty,\text{elsewhere}.\end{cases} \tag{4}\]
The four-component spinor wavefunction is expressed by the separation of the temporal and \(z\)-coordinate variables
\[\Psi(\mathbf{r},t)=Ne^{-i\mathcal{E}t/\hbar}e^{iP_{z}z/\hbar}\left(\begin{array}[] {c}\mu_{A}(x,y)\\ \mu_{B}(x,y)\end{array}\right), \tag{5}\]
where \(\mathcal{E}\) is the energy, \(P_{z}\) is the momentum along \(z\)-direction, and \(\mu_{A}(x,y)\) and \(\mu_{B}(x,y)\) are two-component spinor wavefunctions.
We now plug Eq. 5 into the Dirac equation Eq. 3 and set \(P_{z}=0\) to obtain the coupled equations
\[\left(\mathcal{E}-mc^{2}\right)\mu_{A}(x,y) =-i\hbar c\left(\sigma_{x}\frac{\partial}{\partial x}+\sigma_{y} \frac{\partial}{\partial y}\right)\mu_{B}(x,y);\] \[\left(\mathcal{E}+mc^{2}\right)\mu_{B}(x,y) =-i\hbar c\left(\sigma_{x}\frac{\partial}{\partial x}+\sigma_{y} \frac{\partial}{\partial y}\right)\mu_{A}(x,y), \tag{6}\]
where \(\sigma_{x}\) and \(\sigma_{y}\) are the Pauli matrixes.
Eqs. 6 is combined to obtain a second-order differential equation for \(\mu_{A}(x,y)\)
\[\left(\mathcal{E}^{2}-m^{2}c^{4}\right)\mu_{A}(x,y)=-\hbar^{2}c^{2}\left( \frac{\partial^{2}}{\partial x^{2}}+\frac{\partial^{2}}{\partial y^{2}}\right) \mu_{A}(x,y), \tag{7}\]
whose eigensolution for the spin-up electron is found to be
\[\mu_{A}(x,y)=\sin[k_{x}(x+L_{x})]\sin[k_{y}(y+L_{y})]\left(\begin{array}{c}1 \\ 0\end{array}\right), \tag{8}\]
where \(k_{x}=\frac{\pi n_{x}}{2L_{x}}\), \(k_{y}=\frac{\pi n_{y}}{2L_{y}}\) are the wave vectors and \(n_{x}\), \(n_{y}=1,2,3,\ldots\) are the quantum numbers of the eigenstates. The eigenenergy
\[\mathcal{E}=mc^{2}\sqrt{1+\eta^{2}} \tag{9}\]
is quantized by \(n_{x}\) and \(n_{y}\) according to the expression of a dimensionless geometric factor
\[\eta=\sqrt{n_{x}^{2}\left(\frac{\lambda_{c}}{4L_{x}}\right)^{2}+n_{y}^{2} \left(\frac{\lambda_{c}}{4L_{y}}\right)^{2}} \tag{10}\]
that measures the dimensions of the quantum well \((L_{x},L_{y})\) against the Compton wavelength \(\lambda_{c}=\frac{h}{mc}\) at different states \((n_{x},n_{y})\).
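As a quick numerical illustration of Eqs. 9 and 10 (our own sketch, not part of the original derivation), the snippet below evaluates \(\eta\) directly from \(\eta_{i}=\hbar k_{i}/mc\) and the eigenenergy for the well dimensions used later in Figs. 1-3; the electron constants come from `scipy.constants`, and the set of quantum numbers shown is an arbitrary choice.

```python
import numpy as np
from scipy import constants as sc

hbar, m, c, e = sc.hbar, sc.m_e, sc.c, sc.e

def eta_factor(nx, ny, Lx, Ly):
    """Dimensionless geometric factor of Eq. (10), via eta_i = hbar*k_i/(m*c)."""
    kx, ky = np.pi*nx/(2*Lx), np.pi*ny/(2*Ly)
    return np.hypot(hbar*kx/(m*c), hbar*ky/(m*c))

def eigen_energy(nx, ny, Lx, Ly):
    """Quantized eigenenergy of Eq. (9), rest energy included."""
    return m*c**2*np.sqrt(1 + eta_factor(nx, ny, Lx, Ly)**2)

Lx = Ly = 10e-9  # half-widths of the quantum well used in Figs. 1-3
for n in (1, 2, 3):
    eta = eta_factor(n, n, Lx, Ly)
    dE = eigen_energy(n, n, Lx, Ly) - m*c**2      # energy above the rest mass
    print(f"n_x=n_y={n}: eta={eta:.3e}, E - mc^2 = {dE/e*1e3:.2f} meV")
```

For a 10 nm well, \(\eta\) is of order \(10^{-4}\), so the confined electron is only very weakly relativistic, consistent with the meV-scale level spacings printed above.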
The wavefunction \(\mu_{B}(x,y)\) is subsequently derived via Eq. 6 and the complete four-component spinor wavefunction is then obtained
\[\Psi(\mathbf{r},t)=Ne^{-i\mathcal{E}t/\hbar}\left(\begin{array}{c}\sin[k_{x}(x+ L_{x})]\sin[k_{y}(y+L_{y})]\\ 0\\ 0\\ -i\frac{\eta_{x}}{1+\sqrt{\eta^{2}+1}}\cos[k_{x}(x+L_{x})]\sin[k_{y}(y+L_{y})] +\frac{\eta_{y}}{1+\sqrt{\eta^{2}+1}}\sin[k_{x}(x+L_{x})]\cos[k_{y}(y+L_{y})] \end{array}\right). \tag{11}\]
where \(\eta_{x}=\frac{\hbar k_{x}}{mc}\), \(\eta_{y}=\frac{\hbar k_{y}}{mc}\) are dimensionless factors along \(x\) and \(y\) directions. The normalization factor of the wavefunction is found
\[N=\sqrt{\frac{1+\sqrt{1+\eta^{2}}}{\sqrt{1+\eta^{2}}}}. \tag{12}\]
The wavefunction in Eq. 11 is used to calculate the stationary charge density \(\rho(x,y)=e\Psi^{\dagger}\Psi\) inside the quantum well
\[\rho(x,y)=eN^{2}\sin^{2}[k_{x}(x+L_{x})]\sin^{2}[k_{y}(y+L_{y})]\] \[+eN^{2}\frac{\eta_{x}^{2}}{(1+\sqrt{1+\eta^{2}})^{2}}\cos^{2}[k_ {x}(x+L_{x})]\sin^{2}[k_{y}(y+L_{y})]\] \[+eN^{2}\frac{\eta_{y}^{2}}{(1+\sqrt{1+\eta^{2}})^{2}}\sin^{2}[k_ {x}(x+L_{x})]\cos^{2}[k_{y}(y+L_{y})]. \tag{13}\]
and the corresponding current density
\[j_{x} =ec\frac{2\eta_{y}}{\sqrt{1+\eta^{2}}}\sin^{2}[k_{x}(x+L_{x})]\sin[2k _{y}(y+L_{y})],\] \[j_{y} =-ec\frac{2\eta_{x}}{\sqrt{1+\eta^{2}}}\sin^{2}[k_{y}(y+L_{y})] \sin[2k_{x}(x+L_{x})]. \tag{14}\]
Both the charge density and the current density exhibit properties of a standing wave, characterized by the quantum numbers \((n_{x},n_{y})\). As an example, we choose an excited state (\(n_{x}=2,n_{y}=2\)) of a quantum well (\(L_{x}=10\) nm, \(L_{y}=10\) nm). Fig. 1 is the density plot of the charge, as expected for the behavior of an electron cloud. Fig. 2 is the density plot of the current density of the same electron, showing multiple vortices around the peaks of the electron cloud. The wave spin picture is further illustrated by the vector plot of the current in Fig. 3, which shows the circulation of each vortex in the same direction. It is clear that the wave spin in the excited state is distributed among multiple vortices that are holographic to each other. This topology suggests that each wave spin vortex represents a part or fraction of the total spin that can be studied and observed by interaction with an external field, as we will elaborate later.
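A short script along the following lines reproduces this standing-wave structure qualitatively. It is our own illustration: Eqs. 13 and 14 are evaluated on a grid with the overall prefactors \(eN^{2}\) and \(ec\) dropped, and the grid resolution and matplotlib styling are arbitrary choices rather than those used for Figs. 1-3.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import constants as sc

hbar, m, c = sc.hbar, sc.m_e, sc.c
Lx = Ly = 10e-9                    # well half-widths of Figs. 1-3
nx = ny = 2                        # excited state of Figs. 1-3
kx, ky = np.pi*nx/(2*Lx), np.pi*ny/(2*Ly)
eta_x, eta_y = hbar*kx/(m*c), hbar*ky/(m*c)
eta = np.hypot(eta_x, eta_y)

x = np.linspace(-Lx, Lx, 200)
y = np.linspace(-Ly, Ly, 200)
X, Y = np.meshgrid(x, y)           # 'xy' indexing, as required by streamplot

sx, cx = np.sin(kx*(X+Lx)), np.cos(kx*(X+Lx))
sy, cy = np.sin(ky*(Y+Ly)), np.cos(ky*(Y+Ly))
d = 1 + np.sqrt(1 + eta**2)

rho = sx**2*sy**2 + (eta_x/d)**2*cx**2*sy**2 + (eta_y/d)**2*sx**2*cy**2  # Eq. (13) / (e N^2)
jx  =  2*eta_y/np.sqrt(1 + eta**2)*sx**2*np.sin(2*ky*(Y+Ly))             # Eq. (14) / (e c)
jy  = -2*eta_x/np.sqrt(1 + eta**2)*sy**2*np.sin(2*kx*(X+Lx))

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))
ax1.imshow(rho, extent=[-10, 10, -10, 10], origin="lower")     # electron cloud, cf. Fig. 1
ax2.streamplot(X*1e9, Y*1e9, jx, jy, density=1.3)              # current vortices, cf. Figs. 2-3
ax1.set_title("charge density"); ax2.set_title("current density")
plt.show()
```

The streamlines show the four co-rotating vortices, separated by the nodal lines of the current, that are discussed below.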
In Fig. 3 it can also be seen that the current flows continuously along the edge of the quantum well. A similar behavior of the edge current is observed and studied in the quantum Hall effect [6] when a strong magnetic field is applied to topological insulator materials. The intrinsic edge current shown here suggests that topological spin effects can be observed even without the presence of any magnetic field, internally and externally.
## III Topological current-field interaction
The interaction of a particle spin with an external field is expressed by \(\frac{e}{m}\frac{\hbar}{2}\mathbf{\Sigma}\cdot\mathbf{B}\), which excludes all geometrical and topological effects since \(\mathbf{\Sigma}\) depicts a dimensionless point. However, since the wave spin is encoded in the current density, its topological property should be
Figure 1: Charge distribution for the excited state \(n_{x}=2\) and \(n_{y}=2\) for a Dirac electron in a quantum well of \(L_{x}=10\) nm and \(L_{y}=10\) nm. The z-axis represents the charge density of the relative unit.
Figure 3: Vector plot of the current density for the excited state (\(n_{x}=2,n_{y}=2\)) for a Dirac electron in a quantum well of \(L_{x}=10\) nm and \(L_{y}=10\) nm.
Figure 2: Density plot of the circulating current density for the excited state \(n_{x}=2\) and \(n_{y}=2\) for a Dirac electron in a quantum well of \(L_{x}=10\) nm and \(L_{y}=10\) nm.
transferred to the interaction
\[\mathbf{j}\cdot\mathbf{A}=\Psi^{\dagger}(\mathbf{r},t)e\mathbf{\alpha}\cdot\mathbf{A}(\mathbf{r})\Psi(\mathbf{ r},t), \tag{15}\]
where the vector potential \(\mathbf{A}(\mathbf{r})\) may itself be a vortex field
\[\mathbf{A}(x,y)=\frac{B}{2}(-y,x,0) \tag{16}\]
to represent a uniform magnetic field along the \(z\) direction by the definition \(\mathbf{\nabla}\times\mathbf{A}(x,y)=(0,0,B)\). In this case, this interaction shall be characterized as the vortex-vortex interaction. When the vector potential \(\mathbf{A}(\mathbf{r})\) is larger in size than the electron wave, which is usually the case, the interaction of Eq. 15 can be evaluated by integration over the entire quantum well, yielding the following
\[\mathcal{E}^{(1)} =\frac{1}{2L_{x}}\frac{1}{2L_{y}}\int_{-L_{y}}^{L_{y}}\int_{-L_{x} }^{L_{x}}\mathbf{j}\cdot\mathbf{A}(x,y)dxdy\] \[=\frac{e\hbar B}{2m\sqrt{1+\eta^{2}}}=\frac{\mu_{B}B}{\sqrt{1+ \eta^{2}}}, \tag{17}\]
where \(\mathcal{E}^{(1)}\) is the first-order energy shift for a weak field and \(\mu_{B}=\frac{e\hbar}{2m}\) is the Bohr magneton. Thus, the difference between the spin-up and spin-down energy shifts is
\[\Delta\mathcal{E}=2\mathcal{E}^{(1)}=\frac{2\mu_{B}B}{\sqrt{1+\eta^{2}}}, \tag{18}\]
which recovers the expression for the anomalous Zeeman splitting \(2\mu_{B}B\) modified by additional geometric factors and quantum numbers via Eq. 10. This indicates a finer structure of the anomalous Zeeman effect. Since in the wave spin picture the spin state and the spatial state are coupled in the spinor expression, each excited state corresponds to a unique spin state of a unique topology to interact with an external field.
To show the topological footprints of the vortex-vortex interaction described by Eq. 15, we postulate a vector potential smaller in size than the electron wave,
\[\mathbf{\tilde{A}}(\mathbf{r})=\begin{cases}\frac{B}{2}(-y+b,x-a,0),&-L_{x}/2<x-a<L_{x}/2,\ -L_{y}/2<y-b<L_{y}/2\\ 0,&\text{elsewhere}.\end{cases} \tag{19}\]
to represent a field enclosed in a square that is only a quarter of the quantum well with center \((a,b)\). The vector potential of Eq. 19 produces the same uniform magnetic field within the enclosed region. Fig. 4 shows the vector plot of such a field superimposed on the electron current distribution. We now perform the same integration over the quantum well as in Eq. 17. We find that the current-field interaction depends on the relative position of the vortices. The energy shift for \((n_{x}=2,n_{y}=2)\) is now
\[\mathcal{E}^{(1)}_{a,b} =\frac{1}{2L_{x}}\frac{1}{2L_{y}}\int_{-L_{y}}^{L_{y}}\int_{-L_{x }}^{L_{x}}\mathbf{j}\cdot\mathbf{\tilde{A}}(x,y)dxdy \tag{20}\] \[=\begin{cases}\frac{1}{4}\frac{\mu_{B}B}{\sqrt{1+\eta^{2}}},a= \pm L_{x}/2,b=\pm L_{y}/2\\ \phantom{\frac{1}{4}}0,\phantom{\frac{1}{4}}a=0,b=0,\end{cases}\]
which is only \(\frac{1}{4}\) of the spin and magnetic field interaction in Eq. 17 when the field overlaps with one of the current vortices. The fractional spin effect is due to the partial participation of the wave spin represented by the current density. A special situation arises when the field lies at the center of the quantum well. Then a zero interaction is observed, because equal fractions of the current flowing in different directions are exactly cancelled out.
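These interaction energies can be cross-checked numerically by averaging \(\mathbf{j}\cdot\mathbf{A}\) from Eq. 14 over the well on a grid. The sketch below is our own construction (the midpoint-rule grid and its resolution are arbitrary choices); it reproduces the full-well result \(\mu_{B}B/\sqrt{1+\eta^{2}}\) of Eq. 17 and the quarter coupling of Eq. 20 when the small potential of Eq. 19 sits on the upper-left vortex.

```python
import numpy as np
from scipy import constants as sc

hbar, m, c, e = sc.hbar, sc.m_e, sc.c, sc.e
Lx = Ly = 10e-9                    # well half-widths used in Figs. 1-4
nx = ny = 2                        # excited state of Figs. 1-4
B = 1.0                            # test field (T); it cancels in the ratios below

kx, ky = np.pi*nx/(2*Lx), np.pi*ny/(2*Ly)
eta_x, eta_y = hbar*kx/(m*c), hbar*ky/(m*c)
eta = np.hypot(eta_x, eta_y)
mu_B = e*hbar/(2*m)

N = 800                            # cell-centred grid for a midpoint rule
x = (np.arange(N) + 0.5)*(2*Lx/N) - Lx
y = (np.arange(N) + 0.5)*(2*Ly/N) - Ly
X, Y = np.meshgrid(x, y, indexing="ij")

# current density of Eq. (14)
pref = 2*e*c/np.sqrt(1 + eta**2)
jx =  pref*eta_y*np.sin(kx*(X+Lx))**2*np.sin(2*ky*(Y+Ly))
jy = -pref*eta_x*np.sin(ky*(Y+Ly))**2*np.sin(2*kx*(X+Lx))

def energy_shift(Ax, Ay):
    """Average of j.A over the well, as in Eqs. (17) and (20)."""
    return (jx*Ax + jy*Ay).mean()

# full-size vortex potential of Eq. (16): recovers Eq. (17)
E_full = energy_shift(-B/2*Y, B/2*X)
print(E_full/(mu_B*B/np.sqrt(1 + eta**2)))        # -> ~1.0

# quarter-size potential of Eq. (19) centred on the upper-left vortex
a, b = -Lx/2, Ly/2
mask = (np.abs(X - a) < Lx/2) & (np.abs(Y - b) < Ly/2)
E_quarter = energy_shift(np.where(mask, -B/2*(Y - b), 0.0),
                         np.where(mask, B/2*(X - a), 0.0))
print(E_quarter/E_full)                           # -> ~0.25, the fraction in Eq. (20)
```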
The above results are certainly not to be expected from the particle spin picture and should be considered as
Figure 4: The upper figure shows the magnetic potential (blue) of Eq. 19 overlapping with the upper left current vortex (red) at \(a=-L_{x}/2,b=L_{y}/2\), producing a quarter spin-field coupling. The lower figure shows that the magnetic potential (blue) of Eq. 19 lies in the middle of the quantum well at \(a=0,b=0\), creating zero spin-field coupling.
properties of the wave spin alone. It is conceivable that we can image the entire current density profile by scanning the field against the current distribution. It is now possible to generate vortex optical fields [7] that can be focused on a smaller region than the quantum well. Then the interaction of wave spin and orbital angular momentum can be studied via the vortex-vortex interaction on a controllable scale. All of these discussions and developments mean that we can study and manipulate partial spins to gain knowledge and control over the entire wave spin due to the holographic nature of the multi-vortex current density.
## IV Discussions and conclusions
In summary:
1. We argue that spin is not an abstract two-valued property of the electron particle but a property of the electron wave that can be fully described by its momentum and current densities.
2. We show that in the excited state, the current density of an electron forms multiple vortices in a magnetic field-free quantum well. The topology of these vortices depends on the quantum numbers of the states and is holographic in nature.
3. We investigate the anomalous Zeeman effect of the wave spin and show that the anomalous Zeeman splitting contains finer structures than those of the conventional particle spin picture.
4. We show that the geometrical and topological properties of the wave spin can be observed and studied through its interaction with an electromagnetic field. By studying the interaction with a field smaller in size than the wave spin itself, we show that fractional and even zero spin effects can be observed.
Apparently, the discussion of wave spin could have implications in many areas, and each area requires in-depth study. Here we offer some preliminary discussions.
1. In the field of quantum technology, the electron spin has been proposed as a candidate for use as a quantum bit (qubit), which is a superposition of the spin-up and spin-down states \(\alpha\left(\begin{array}{c}1\\ 0\end{array}\right)+\beta\left(\begin{array}{c}0\\ 1\end{array}\right)\), where \(\alpha\) and \(\beta\) are complex numbers. In the wave spin picture, the spin states \(\left(\begin{array}{c}1\\ 0\end{array}\right)\) and \(\left(\begin{array}{c}0\\ 1\end{array}\right)\) are replaced by the spinor wave functions, as in Eq. 11, where the spatial wave function serves as the spinor component. Therefore, we have argued that the spin cannot be completely isolated from the physical environment in which it resides. Any change of the boundary conditions alters the wave functions, and hence the spin states. We further argue that the spin cannot be completely isolated from the Hilbert space of the electron either. Any transition to or from other spatial states alters the original spin state, leading to decoherence of the prepared qubit and loss of the quantum information. This means that additional protection and correction mechanisms need to be implemented to protect and preserve the spin qubit.
2. The multi-vortex wave spin topology and its partial interaction with electromagnetic fields show that the spin-field interaction is not dimensionless, suggesting novel schemes for parallel information processing using spin. Each vortex of the current is a holographic part of the entire spin and can interact simultaneously with multiple electromagnetic fields, potentially enabling parallel computing. Building such parallel computers shall benefit from the advancement of both spintronics and optics.
3. It is conceivable that the holographic spin interactions could already exist in nature. It is suggested that the electron spin could play a role in biohomochirality [8], which makes the wave spin perspective interesting for this biological topic [9] and for general spin effects in molecules. The wave spin could interact holographically with all atoms within the molecule via the shared electron cloud, leaving a coherent spin footprint on the entire molecular structure.
## V Acknowledgement
The authors would like to thank Jora L. Gao for her careful review of the manuscript. The authors would also like to thank Jane Y. Gao and W. Wang for stimulating discussions on the role of spin in biology.
|
2306.04026
|
Value Functions are Control Barrier Functions: Verification of Safe
Policies using Control Theory
|
Guaranteeing safe behaviour of reinforcement learning (RL) policies poses
significant challenges for safety-critical applications, despite RL's
generality and scalability. To address this, we propose a new approach to apply
verification methods from control theory to learned value functions. By
analyzing task structures for safety preservation, we formalize original
theorems that establish links between value functions and control barrier
functions. Further, we propose novel metrics for verifying value functions in
safe control tasks and practical implementation details to improve learning.
Our work presents a novel method for certificate learning, which unlocks a
diversity of verification techniques from control theory for RL policies, and
marks a significant step towards a formal framework for the general, scalable,
and verifiable design of RL-based control systems. Code and videos are
available at this https url: https://rl-cbf.github.io/
|
Daniel C. H. Tan, Fernando Acero, Robert McCarthy, Dimitrios Kanoulas, Zhibin Li
|
2023-06-06T21:41:31Z
|
http://arxiv.org/abs/2306.04026v4
|
# Value Functions as Control Barrier Functions: Verification of Safe Policies using Control Theory
###### Abstract
Guaranteeing safe behaviour of reinforcement learning (RL) policies poses significant challenges for safety-critical applications, despite RL's generality and scalability. To address this, we propose a new approach to apply verification methods from control theory to learned value functions. By analyzing task structures for safety preservation, we formalize original theorems that establish links between value functions and control barrier functions. Further, we propose novel metrics for verifying value functions in safe control tasks and practical implementation details to improve learning. Our work presents a novel method for certificate learning, which unlocks a diversity of verification techniques from control theory for RL policies, and marks a significant step towards a formal framework for the general, scalable, and verifiable design of RL-based control systems.
## 1 Introduction
Deep reinforcement learning (RL) [1] is a powerful and scalable tool for solving control problems, such as Atari games [2], robotic control [3], and protein folding [4]. However, because of their black-box nature, it is difficult to determine the behaviour of neural networks. In extreme cases, out-of-distribution or adversarially constructed inputs [5] can catastrophically degrade network performance. In the control context, this can lead to highly unsafe behaviour; it is thus risky to deploy such controllers in safety-critical applications, such as autonomous vehicles or human-robot interaction, as well as future applications for general-purpose robots.
The problem of safe control has been extensively studied in safe reinforcement learning, through the lens of constrained Markov Decision Processes [6]. Such methods implicitly assume that there are known constraints which are sufficient to guarantee safety. In contrast, our work assumes no prior knowledge of safe dynamics and aims to learn a constraint (in the form of a barrier function) to guarantee safety. This enables our approach to handle applications where safety cannot be easily expressed analytically, such as avoiding dynamic obstacles from raw pixel input [7].
On the other hand, there exists rich literature in control theory on proving properties of dynamical systems using _certificate functions_. The most well-known are Lyapunov functions, which prove the
stability of dynamical systems around a fixed point [8]. Traditionally, it is difficult to design certificate functions for complex systems of interest. We discuss recent learning-based methods in Section 5. Other prior work combines classical and RL-based control methods by learning high-level policies over programmatic low-level controllers [9], which could be designed to respect safety constraints.
However, designing effective and safe low-level controllers is still difficult and time-consuming. In both cases, the difficulty of manual design limits scalability to arbitrary tasks. Drawing inspiration from control theory, we aim to design a learning-based control method that benefits from the verifiability of certificate functions without sacrificing the generality and flexibility of reinforcement learning.
Our contributions are twofold. Firstly, we propose a **reinforcement learning method** for synthesizing control barrier certificates. Under mild assumptions on task structure, we prove a strong connection between barrier functions and value functions. We implement and ablate principled design choices for learning good barrier functions. Secondly, we propose and empirically validate novel **metrics** to evaluate the quality of learned barrier functions. We demonstrate that these metrics capture important structure not reflected in standard RL metrics. Control barrier certificates verified by these metrics successfully allow safety-constrained exploration of a large fraction of the safe state space, as shown in Figure 1.
Concretely, our method involves considering a safety-preserving task, where the reward function is given by \(r=0\) in safety-violating states and \(r=1\) otherwise. We show that the value function \(V\) satisfies properties to be a control barrier function and we derive a threshold for predicting safety. We then learn \(V\) by using standard RL techniques, propose new metrics to verify learned \(V\) as a control barrier function, and finally demonstrate that our metrics capture the safety-preserving capacity of \(V\). By connecting value functions to certificate functions, our work presents a novel perspective on learning certificate functions, which offers a new approach for applying the wealth of verification strategies in control theory to reinforcement learning.
## 2 Preliminaries
In this work, we consider Markov Decision Processes (MDPs) and reinforcement learning (RL), with states \(x\in\mathcal{X}\) and actions \(u\in\mathcal{U}\). Further exposition is provided in Appendix A.
### Indefinitely Safe Control
We consider augmenting an MDP with a set of _safety violations_\(\mathcal{X}_{\mathrm{unsafe}}\), unsafe states specified by the practitioner. This partitions the state space \(\mathcal{X}\) into three subsets \(\mathcal{X}_{\mathrm{unsafe}},\mathcal{X}_{\mathrm{safe}},\mathcal{X}_{ \mathrm{irrec}}\), illustrated in Figure 2. \(\mathcal{X}_{\mathrm{safe}}\) consists of _indefinitely safe_ states; i.e., there exists a controller \(\pi:\mathcal{X}\rightarrow\mathcal{U}\) such that \(\mathcal{X}_{\mathrm{safe}}\) is _forward-invariant_ under closed-loop dynamics \(f_{\pi}(x)=f(x,\pi(x))\). \(\mathcal{X}_{\mathrm{irrec}}\) consists of _irrecoverable_ states. For example, a car travelling at high velocity on a low-friction surface may
Figure 1: Top: Control barrier function (CBF) constrained exploration (blue) allows reaching both extremes of the CartPole safe state space (demarcated in black). Rolling out reward-optimal policy fails to do so (orange). Bottom: The CBF-constrained trajectory is visualized.
inevitably collide with an imminent obstacle despite applying maximum braking effort. We define \(\overline{\mathcal{X}}_{\mathrm{unsafe}}=\mathcal{X}_{\mathrm{unsafe}}\cup \mathcal{X}_{\mathrm{irrec}}\).
**Finite irrecoverability**. In general, due to \(\mathcal{X}_{\mathrm{irrec}}\), safe control requires perfect knowledge of dynamics for arbitrarily long horizons, which can be intractable; hence we assume stronger conditions on \(\mathcal{X}_{\mathrm{irrec}}\). We say \(x\) is \(k\)-irrecoverable if it is guaranteed to enter \(\mathcal{X}_{\mathrm{unsafe}}\) within \(k\in\mathbb{N}\) timesteps regardless of control. For \(x\in\mathcal{X}_{\mathrm{irrec}}\), let \(k_{x}\) be the minimum integer such that \(x\) is \(k\)-irrecoverable. We will assume \(\{k_{x}:x\in\mathcal{X}\}\) is upper bounded by a constant \(H<\infty\). Finite irrecoverability has been studied in previous work [10], and is expected to be satisfied for reasonably well-actuated dynamics \(f\) and well-behaved choices of \(\mathcal{X}_{\mathrm{unsafe}}\).
### Control Barrier Functions
Control barrier functions (CBFs) are a useful tool for solving safe control problems. A CBF \(h:\mathcal{X}\to\mathbb{R}\) can be thought of as a classifier that classifies safe and unsafe states according to its level set \(h(x)=0\). The set \(\{x:h(x)\geq 0\}\) defines a safe set \(\mathcal{X}_{\mathrm{safe}}(h)\). Loosely speaking, larger values of \(h(x)\) correspond to'safer' states. Formally, given \((M,\mathcal{X}_{\mathrm{unsafe}})\), and \(\alpha\in(0,1]\), we say that \(h:\mathcal{X}\to\mathbb{R}\) is a (discrete-time) CBF _against_\(\mathcal{X}_{\mathrm{unsafe}}\) if it satisfies:
\[\begin{split}(i)&\quad\forall x\in\mathcal{X}_{ \mathrm{unsafe}},\quad h(x)<0\\ (ii)&\quad\forall x:h(x)\geq 0,\quad\sup_{u}\{h(f(x,u))\} \geq(1-\alpha)h(x)\end{split} \tag{1}\]
We note the following properties, proved in the appendix:
**Lemma 2.1**.: _By condition (1)(i), \(\mathcal{X}_{\mathrm{safe}}(h)\cap\mathcal{X}_{\mathrm{unsafe}}=\emptyset\). By condition (1)(ii), there exists a policy \(\pi\) such that \(\mathcal{X}_{\mathrm{safe}}\) is forward-invariant under \(f_{\pi}\); if \(x\in\mathcal{X}_{\mathrm{safe}}\), then \(f(x,\pi(x))\in\mathcal{X}_{\mathrm{safe}}\)._
A CBF \(h\) is useful for safe control because it eliminates the need to reason about dynamics over long horizons. Instead, we only need to check a one-step bound in condition (1)(ii) to guarantee safety indefinitely. One edge case occurs when \(\mathcal{X}_{\mathrm{safe}}(h)=\emptyset\); we call such CBFs _trivial_. Subsequently we assume that we can always find nontrivial CBFs against \(\mathcal{X}_{\mathrm{unsafe}}\) (if not, this indicates that \(\mathcal{X}_{\mathrm{unsafe}}\) is 'too large' and we should reconsider the choice of \(\mathcal{X}_{\mathrm{unsafe}}\)).
**Transforms of CBFs.** Lastly, we note that certain classes of transformations preserve the control barrier function property, formalized as follows:
**Lemma 2.2**.: _Let \(h:\mathcal{X}\to\mathbb{R}\) be a CBF. Let \(w:\mathbb{R}\to\mathbb{R}\) such that \(\mathrm{Im}(h)\subseteq\mathrm{Dom}(w)\). Suppose there exists \(C\in\mathbb{R}\) such that for all \(x,y\in\mathrm{Dom}(w)\), we have:_
\[\begin{split}(i)&\quad w(x)\geq Cx\\ (ii)&\quad w(x)-w(y)\geq C(x-y)\\ (iii)&\quad\{x:w(x)\geq 0\}=\{x:x\geq 0\}.\end{split} \tag{2}\]
_Then \(\tilde{h}=w\circ h\) is also a CBF. We will say that such \(w\) are CBF-preserving transforms._
The proof is straightforward and given in the appendix.
## 3 Learning of Control Barrier Functions
This section presents the main results connecting value functions to control barrier functions, and then proposes a principled and practical algorithm for learning control barrier functions. Detailed proofs can be found in Appendix D.
### Safety Preserving Task Structure
In the safety-preserving task framework, we assume a reward structure of: \(r(x,u,x^{\prime})=0\) when \(x^{\prime}\in\mathcal{X}_{\mathrm{unsafe}}\); otherwise, \(r(x,u,x^{\prime})=1\). We also assume _early termination_, where the episode terminates immediately when \(x^{\prime}\in\mathcal{X}_{\mathrm{unsafe}}\). Within this task structure, we analyze the optimal value function \(V^{*}\) under the partition illustrated in Figure 2, which consists of:
* \(x\in\mathcal{X}_{\mathrm{unsafe}}\). Since the episode terminates immediately, we trivially have \(V(x)=0\).
* \(x\in\mathcal{X}_{\mathrm{safe}}\). In this case, we know there exists a policy which preserves safety indefinitely, hence we have \(V(x)=\sum_{j=0}^{\infty}\gamma^{j}(1)=\frac{1}{1-\gamma}\).
* \(x\in\mathcal{X}_{\mathrm{irrec}}\). Let \(x\) be \(k\)-irrecoverable. Then \(V^{*}(x)=\sum_{j=0}^{k-1}\gamma^{j}=\frac{1-\gamma^{k}}{1-\gamma}\).
We make two remarks from this analysis. Firstly, \(V^{*}\) is _bounded_ ; we have \(V^{*}(x)\in[0,\frac{1}{1-\gamma}]\). Secondly, the range of \(V^{*}\) is _partitioned_ by \(\mathcal{X}_{\mathrm{safe}},\overline{\mathcal{X}}_{\mathrm{unsafe}}\):
\[\sup_{\overline{\mathcal{X}}_{\mathrm{unsafe}}}\{V(x)\}=\frac{1-\gamma^{H}}{1 -\gamma}<\frac{1}{1-\gamma}=\inf_{\mathcal{X}_{\mathrm{safe}}}\{V(x)\} \tag{3}\]
These two observations motivate us to propose CBFs of the form \(h=V^{*}-R\), formalized below.
**Theorem 3.1**.: _Let \(M\) be an MDP and suppose (a) early termination is employed with termination condition \(c(x)=1\), (b) \(r\) has safety-preserving reward structure, and (c) there exists an upper bound \(H\) on irrecoverability. Then for any \(R\in(\frac{1-\gamma^{H}}{1-\gamma},\frac{1}{1-\gamma}]\), we have that \(h=V^{*}-R\) is a control barrier function against \(\mathcal{X}_{\mathrm{unsafe}}\)._
In practice, we do not have access to \(V^{*}\); we only have access to learned functions \(V\approx V^{*}\). Nonetheless, so long as \(V\) is 'not too far' from \(V^{*}\), we can use \(h=V(x)-R\) as a barrier function.
**Theorem 3.2**.: _Let \(M\) be an MDP and let the assumptions (a) - (c) of Theorem 3.1 hold. Additionally, assume that \(V\) satisfies (d) \(\epsilon\)-optimality; \(\sup_{x\in X}|V(x)-V^{*}(x)|<\epsilon\), (e) \(\epsilon<\frac{\gamma^{H}}{2(1-\gamma)}\). Then for \(\alpha\in[\frac{2\epsilon}{1+\gamma}+\epsilon-R,1]\) and any \(R\in(\frac{1-\gamma^{H}}{1-\gamma}+\epsilon,\frac{1}{1-\gamma}-\epsilon]\), we have that \(h=V-R\) is a control barrier function against \(\mathcal{X}_{\mathrm{unsafe}}\)._
We find that the bound on \(\epsilon\) is very permissive. To illustrate how loose the bound is, let \(H=10\)[10] and \(\gamma=0.99\). Then \(\epsilon\leq\frac{1-\gamma^{H}}{2(1-\gamma)}\approx 47\) suffices, inducing a corresponding \(R=\frac{1}{1-\gamma}-\epsilon\approx 53\) and \(\alpha=0.96\). For smaller \(\epsilon\), a wider range of values of \(R\) will be valid. In our experiments we find that \(R=\frac{1}{2(1-\gamma)}=50\) and \(\alpha=0.1\) work well empirically. Note that in our approach, we do not need to explicitly set \(H\); rather, it is defined implicitly by \(R\).
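As a small worked sketch of Theorem 3.1 (our own illustration, not code from the paper), the admissible interval for \(R\) and the experimentally used value \(R=\frac{1}{2(1-\gamma)}\) can be computed directly:

```python
def admissible_R_range(gamma: float, H: int):
    """Interval of valid thresholds from Theorem 3.1: h = V* - R is a CBF
    for R in ((1 - gamma**H)/(1 - gamma), 1/(1 - gamma)]."""
    lower = (1 - gamma**H) / (1 - gamma)
    upper = 1 / (1 - gamma)
    return lower, upper

gamma, H = 0.99, 10
lo, hi = admissible_R_range(gamma, H)
R = 1 / (2 * (1 - gamma))          # the threshold used in the experiments
print(f"R must lie in ({lo:.2f}, {hi:.2f}]; chosen R = {R:.1f}")
```

For \(\gamma=0.99\) and \(H=10\) this interval is roughly \((9.6, 100]\), so the experimental choice \(R=50\) sits comfortably inside it.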
### Reinforcement Learning Framework
We train a Deep Q-Network [2] for \(2\times 10^{6}\) timesteps on the CartPole environment in OpenAI Gym [11]. A detailed description is provided in Appendix C. The network parametrizes a Q-function; the corresponding value function is \(V(x)=\sup_{u\in\mathcal{U}}Q(x,u)\). The network is trained via standard temporal-difference learning [1] to minimize the TD error: \(\mathcal{L}_{TD}=\mathbb{E}_{(x,u,x^{\prime})\sim f_{\pi}}\|r(x,u)+\gamma V(x^{ \prime})-V(x)\|^{2}\). Our baseline uses the implementation in CleanRL [12]. Training results are visualized in Appendix B.
**Implementation details.** In theory, training a sufficiently expressive \(V\) for sufficiently long on the TD objective results in \(V\) converging uniformly to \(V^{*}\). In practice, we find that training a vanilla DQN is insufficient; certain additional implementation details are required to obtain high-quality barrier functions. Below, we describe and motivate these design choices.
**Bounded value**. Recall from Section 3.1 that \(V^{*}\) is bounded; this motivates us to consider a parametrization of the form \(V(x)=g(\sigma(\phi(x)))\) where \(\phi\) is a neural network, \(\sigma(x)=\frac{1}{1+\exp(-x)}\) is the sigmoid function and \(g(x)=\frac{x}{1-\gamma}\) is a linear mapping that allows \(V\) to have the correct range.
We hypothesize that this aids learning by stabilizing the learning signal on the network weights, by essentially converting the two-sided regression loss into a one-sided classification loss. We denote architectures with bounded value by \(\mathtt{SIGMOID}\), and those without by \(\mathtt{MLP}\).
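A minimal PyTorch sketch of such a bounded parametrization might look as follows; the hidden-layer widths and depth are our assumptions and not the configuration used in the paper.

```python
import torch
import torch.nn as nn

class BoundedQNetwork(nn.Module):
    """Q-network with bounded outputs (the SIGMOID parametrization):
    Q(x, .) = sigmoid(phi(x)) / (1 - gamma), so values lie in [0, 1/(1 - gamma)]."""
    def __init__(self, obs_dim: int, n_actions: int, gamma: float = 0.99):
        super().__init__()
        self.phi = nn.Sequential(              # hidden sizes are illustrative only
            nn.Linear(obs_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n_actions),
        )
        self.scale = 1.0 / (1.0 - gamma)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.phi(x)) * self.scale    # g(sigma(phi(x)))

    def value(self, x: torch.Tensor) -> torch.Tensor:
        return self.forward(x).max(dim=-1).values          # V(x) = sup_u Q(x, u)
```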
**Supervision of \(V\)**. Recall that we analytically know \(V^{*}(x)=0\) for \(x\in\mathcal{X}_{\mathrm{unsafe}}\). This motivates us to introduce a supervised loss \(\mathcal{L}_{\mathrm{unsafe}}=\mathbb{E}_{x\sim\mathcal{X}_{\mathrm{unsafe}}} \|V(x)\|\). Since we can specify \(\mathcal{X}_{\mathrm{unsafe}}\), this loss can be approximated by sampling \(\mathcal{X}_{\mathrm{unsafe}}\) (e.g. by rejection sampling). We hypothesize that the supervised loss provides a valuable auxiliary learning signal that complements the standard TD objective. Because it is undesirable to enter unsafe states, we expect such states to be sparsely sampled. Furthermore, due to early termination, it may be outright impossible to reach most of \(\mathcal{X}_{\mathrm{unsafe}}\) (beyond a thin boundary). Hence, the supervised loss over \(\mathcal{X}_{\mathrm{unsafe}}\) provides a learning signal exactly in the regions where the TD objective does not, and vice versa. We indicate models trained with supervision by \(\{\mathtt{SIGMOID},\mathtt{MLP}\}\)-\(\mathtt{SUP}\).
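A possible implementation of this supervised loss, reusing the `BoundedQNetwork` sketch above and assuming user-provided sampling and membership helpers for \(\mathcal{X}_{\mathrm{unsafe}}\) (hypothetical functions, not part of the paper's code), is:

```python
import torch

def unsafe_supervision_loss(q_net, sample_states, is_unsafe, n_samples=256):
    """L_unsafe = E_{x ~ X_unsafe} |V(x)|, approximated by rejection sampling.
    `sample_states(n)` draws candidate states, `is_unsafe(x)` is the
    practitioner-specified indicator of X_unsafe (both assumed given)."""
    xs = sample_states(n_samples)            # candidate states, shape (n, obs_dim)
    xs = xs[is_unsafe(xs)]                   # rejection sampling: keep unsafe states
    if xs.shape[0] == 0:
        return xs.new_zeros(())              # no unsafe samples drawn this batch
    return q_net.value(xs).abs().mean()
```

In training, this term would simply be added to the TD loss with some weight.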
**Exploration**. We implement stronger exploration by modifying the initial state distribution to be more diverse. Because the TD objective only acts on states experienced during rollout, improved exploration provides a learning signal to \(V\) over a larger region of \(\mathcal{X}\). Exploration through diverse reset initialization is enabled by default; to evaluate its impact, we perform an experiment using the original state distribution, denoted by \(\mathtt{NOEXP}\).
Recalling Lemma 2.2, we define barrier functions of the form \(h=w(V(x)-R)\) with \(R=\frac{1}{2(1-\gamma)}\). In the case where unbounded value functions are used, we let \(w\) be the identity; i.e., \(h(x)=V(x)-R\). In the case where bounded value functions are used, we define \(h(x)=\phi(x)=w(V(x)-R)\). The corresponding transform is \(w=\tilde{\sigma}^{-1}\circ g^{-1}\), with \(\tilde{\sigma}(x)=\sigma(x)-0.5\). We assert that \(w\) is CBF-preserving; a proof is given in the Appendix.
_Remark 3.3_.: \(w=\tilde{\sigma}^{-1}\circ g^{-1}\) is CBF-preserving.
\begin{table}
\begin{tabular}{l|c c c|c c} Experiment & \(\mathtt{MLP}\) & \(\mathtt{SIGMOID}\) & \(\mathtt{MLP}\)-\(\mathtt{SUP}\) & \(\mathtt{SIGMOID}\)-\(\mathtt{SUP}\) & \(\mathtt{NOEXP}\) \\ \hline Bounded & No & Yes & No & Yes & Yes \\ Supervised & No & No & Yes & Yes & Yes \\ Exploration & Yes & Yes & Yes & Yes & No \\ \hline \(\pi^{*}\) return & \(493\pm 14.8\) & \(\mathbf{500}\pm 0\) & \(465\pm 48.0\) & \(\mathbf{500}\pm 0.0\) & \(\mathbf{500}\pm 0.0\) \\ TD error & \(2.43\pm 0.71\) & \(2.09\pm 0.27\) & \(0.958\pm 0.14\) & \(0.746\pm 0.057\) & \(\mathbf{0.607}\pm 0.053\) \\ \(m_{valid}(h)\) & \(0.476\pm 0.140\) & \(0.752\pm 0.130\) & \(0.603\pm 0.046\) & \(0.991\pm 0.002\) & \(\mathbf{0.993}\pm 0.002\) \\ \(m_{cov}(h)\) & \(\mathbf{0.767}\pm 0.146\) & \(0.477\pm 0.141\) & \(0.595\pm 0.048\) & \(0.106\pm 0.010\) & \(0.063\pm 0.013\) \\ \(\pi_{h}\) return & \(9.36\pm 0.16\) & \(21.3\pm 14.4\) & \(21.3\pm 11.2\) & \(\mathbf{163.5}\pm 54.7\) & \(114.6\pm 85.1\) \\ \hline \end{tabular}
\end{table}
Table 1: Description and final metrics for \(5\) seeds of \(5\) settings. On all metrics except coverage, enabling both bounded parametrization and supervision outperformed all ablations. The lower coverage can be explained by the trade-off between \(m_{valid}\) and \(m_{cov}\).
## 4 Verification of Learned CBFs
After obtaining candidate barrier functions through the learning process, it is crucial to verify whether they meet the conditions in (1). We investigate a total of \(5\) experimental settings, ablating each design choice, summarized in Table 1, and perform \(5\) seeded runs of each setting. We visualize the learned barrier functions for each setting in Figure 3. Overall, the SIGMOID-SUP model is best. Supervision is essential to ensuring \(\mathcal{X}_{\text{safe}}(h)\cap\mathcal{X}_{\text{unsafe}}=\emptyset\). Exploration results in a larger \(\mathcal{X}_{\text{safe}}(h)\). We also remark that SIGMOID results in more even contours than MLP.
Despite clear differences in CBFs between model variants, we note that standard metrics used in RL such as episode return and TD error fail to capture this discrepancy, as evidenced in Appendix B and Figure 6. Therefore, we further propose metrics that evaluate the quality of learned barrier functions.
### Metrics on Control Barrier Functions
**Validity.** Given \(h\), we aim to quantify the extent to which it is valid across the state space, satisfying the conditions in (1). Concretely, we will define a _validity_ metric \(m_{\text{valid}}\) to measure the quality of the learned CBF. Hence we rewrite (1) as logical assertions \(p_{i}\) and define associated predicates \(\rho_{i}:\mathcal{X}\to\{0,1\}\) indicating whether \(p_{i}\) holds for \(x\in\mathcal{X}\).
* \(p_{1}(x):=x\in\mathcal{X}_{\text{unsafe}}\implies h(x)<0\). We define the associated predicate \(\rho_{1}(h)=1-\mathds{1}\{x\in\mathcal{X}_{\text{unsafe}},\,h(x)\geq 0\}\).
* \(p_{2}(x,\alpha):=h(x)\geq 0\implies\sup_{u}\{h(f(x,u))\}\geq(1-\alpha)h(x)\). We define the associated predicate \(\rho_{2}(h,\alpha)=1-\mathds{1}\{h(x)\geq 0,\,\sup_{u}\{h(f(x,u))\}<(1- \alpha)h(x)\}\).
We have defined \(\rho_{1},\rho_{2}\) such that \(\mathbb{E}_{x\in\mathcal{X}}[\rho_{1}(h)(x)]\) (respectively \(\rho_{2}(h,\alpha)\)) measures the fraction of states where condition (1)(i) (respectively (1)(ii)) holds. Since we need both conditions to hold for \(h\) to be a barrier function, it makes sense to define the metric \(m_{\text{valid}}(h)=\mathbb{E}_{x\in\mathcal{X}}[\rho_{1}(h)(x)\rho_{2}(h, \alpha)(x)]\). In all experiments, we use a value of \(\alpha=0.1\).
**Coverage.** Given \(h\), we would also like to measure the size of its safe set. A trivial barrier function (where \(\mathcal{X}_{\text{safe}}(h)=\emptyset\)) is of no practical use even if it is valid everywhere. We measure this with the _coverage_ metric \(m_{\text{cov}}(h)=\mathbb{E}_{x\in\mathcal{X}}[\mathds{1}\{h(x)\geq 0\}]\), computed by sampling. In practice, we sample from a bounded subset \(\mathcal{X}^{\prime}\) which is assumed to contain \(\mathcal{X}_{\text{safe}}\).
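Both metrics can be estimated by Monte-Carlo sampling. The sketch below is our own illustration: it assumes a vectorized one-step simulator `env_step(x, u)`, a state sampler over the bounded region \(\mathcal{X}^{\prime}\), and an `is_unsafe` indicator, all of which are hypothetical helpers, and the sample size is arbitrary.

```python
import torch

def cbf_metrics(h, env_step, sample_states, is_unsafe, actions, alpha=0.1, n=10_000):
    """Monte-Carlo estimates of the validity and coverage metrics for a
    candidate barrier function h over a discrete action set `actions`."""
    xs = sample_states(n)
    hx = h(xs)
    # condition (1)(i): unsafe states must have h < 0
    p1 = ~(is_unsafe(xs) & (hx >= 0))
    # condition (1)(ii): from h >= 0, the best action keeps h above (1 - alpha) h
    h_next_best = torch.stack([h(env_step(xs, u)) for u in actions]).amax(dim=0)
    p2 = ~((hx >= 0) & (h_next_best < (1 - alpha)*hx))
    m_valid = (p1 & p2).float().mean()
    m_cov = (hx >= 0).float().mean()
    return m_valid.item(), m_cov.item()
```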
**Discussion.** Throughout the training history of different architectures, we observe a trade-off between validity and coverage, demonstrated in Figure 4. Validity refers to the extent to which a barrier function satisfies the specified conditions, while coverage measures the proportion of the state space on which the barrier function is applicable. The goal is to find the best barrier functions that achieve a validity metric, \(m_{\text{valid}}\), equal to 1, indicating complete satisfaction of the conditions. Simultaneously, we aim to maximize the coverage, measured by the metric \(m_{\text{cov}}\), while still maintaining the high validity. We visualize training histories of \(m_{\text{cov}},m_{\text{valid}}\) in Figure 7 of Appendix B. The final results are also summarized in Table 1. Empirically, bounded parametrization and supervision both aid in improving validity, whereas exploration aids in improving coverage. Thus, our experimental design choices are vindicated by evaluation on barrier metrics. More importantly, we note that standard RL metrics such as episode reward and TD error did not accurately distinguish between the learned networks in this regard. This demonstrates that
Figure 4: Validity and coverage throughout training history of different architectures. As validity increases, coverage tends to decrease. The best barrier functions have \(m_{valid}=1\), and \(m_{cov}\) as high as possible subject to that.
our proposed barrier metrics provide a valuable and _orthogonal_ perspective for evaluating learned barrier functions.
### Safety Constraints with Barrier Functions
One common use of control barrier functions is to constrain a nominal policy \(\pi_{nom}\) to respect safety constraints. While this naively requires one-step lookahead to calculate \(h(f(x,u))\), we note that the Q-function allows us to perform _implicit_ one-step lookahead through the Bellman optimality condition \(Q^{*}(x,u)=r(x,u)+\gamma V^{*}(f(x,u))\). Thus, we define the safety-constrained policy:
\[\pi_{h}(x)=\begin{cases}\pi_{nom}(x)&\text{if }Q(x,\pi_{nom}(x))\geq R\\ \operatorname*{argmax}_{u}Q(x,u)&\text{if }Q(x,\pi_{nom}(x))<R\end{cases} \tag{4}\]
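A direct implementation of Eq. 4 for a discrete-action Q-network could look like the following sketch (our own illustration):

```python
import torch

def safety_constrained_action(q_net, x, nominal_action, R):
    """Policy pi_h of Eq. (4): keep the nominal action if it is predicted to
    remain safe (Q >= R, implicit one-step lookahead through Q), otherwise
    fall back to the greedy action argmax_u Q(x, u)."""
    q = q_net(x)                               # (n_actions,) values for state x
    if q[nominal_action] >= R:
        return nominal_action
    return int(torch.argmax(q).item())
```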
We take \(\pi_{nom}\) to be the uniform random policy and roll out \(100\) episodes of \(\pi_{h}\) for varying \(h\). For each architecture, we evaluate (i) the safety-constrained episode length, and (ii) the safety success rate, defined as the fraction of episodes without safety violations. The results are summarized in Figure 5. On the whole, the architectures SIGMOID-SUP and NOEXP with higher validity \(m_{valid}\) serve as better safety constraints, justifying the use of \(m_{valid}\) for model selection. However, we note that the best architecture failed to reach a safety success rate of \(100\%\); we attribute this to the fact that \(m_{valid}\) is not a rigorous measure of validity, but only provides statistical evidence of validity through sampling.
## 5 Related work
**Certificates and RL.** Previous work on leveraging certificate functions in RL focuses on practical algorithms for learning safe control. Cheng et al. [13] used CBFs during online learning to both guide exploration and guarantee safety with high probability. Westenbroek et al. [14] proposed a method to learn safe, stabilizing policies for locomotion by minimizing control Lyapunov-Barrier functions using reinforcement learning. A similar work used CLFs in the reward formulation for sample-efficient learning on real-world CartPole and quadruped systems [15]. Concurrently with our work, Zhao et al. [16] proposed an actor-critic method for learning discrete-time barrier functions, using an augmented Lagrangian method to constrain \(V\). Compared to previous work, we are the first to formalize and prove a connection between value functions and control barrier functions, and analyze the resulting implications. From a practical standpoint, our method makes minimal modifications to the RL algorithm, and thus is simpler and more general.
**Reward framework.** The reward framework we consider was originally proposed in Safe MBPO [10], which provided similar safety guarantees. However, the original formulation required \(H\) steps of _explicit_ lookahead, and hence considered only the model-based setting as safety was estimated using model-based imaginary rollouts. By leveraging control theory, we show that we only need to perform a single step of _implicit_ lookahead (through the \(Q\) function), allowing us to generalize to the model-free setting.
**Learned certificates.** Generally, there exists a wealth of literature on learning of neural certificates [17; 18; 19; 20]. While a full review of certificate learning is outside the scope of this paper, we
Figure 5: Safety-constrained exploration with learned CBFs. Left: Average safety-constrained episode length over \(100\) rollouts. Right: Safety success rate, defined as fraction of episodes with no safety violations. SIGMOID-SUP is the best safety constraint.
refer interested readers to Dawson et al. [21] for a comprehensive survey. Learning methods for neural certificates typically rely on self-supervised learning, consider continuous systems, assume knowledge of dynamics, and _control-affine_ dynamics. Certificates for discrete-time systems were studied in [22; 23]. Recent work studied certificate learning for black-box systems through learned dynamics models [24]. Compared to the main body of work on certificate learning, our method is applicable to a much wider range of systems as it works with black-box dynamics, discrete-time systems, and does not need control-affineness.
**Safe RL.** Finally, we discuss our work in the context of the safe RL literature. As discussed in the introduction, such methods aim to preserve safety by augmenting \(M\) with safety constraints of the form \(c_{i}(x,u)\leq 0\). Methods for learning to solve cMDPs have been widely studied, such as Lagrangian methods [25; 26] and Lyapunov-based methods [27]. Recent work considers building a trust region of policies [28; 29], projecting to a safer policy [30], and using conservative policy updates [31]. Within this context, our results show that learned Q-functions can be directly used in a constraint of the form \(c(x,u)=Q(x,u)-R\leq 0\) in order to guarantee safety. Hence, our method is orthogonal to and compatible with all of the safe RL methods discussed above.
## 6 Limitations and Future Work
**Safety violations during exploration**. Our method assumes no prior knowledge on dynamics. Hence, a barrier function trained _tabula rasa_ will likely need to encounter many safety violations in order to learn safe behaviour. This may be unsuitable for learning in real-world environments where safety must be preserved throughout exploration. An exciting direction for future work is to reduce safety violations during exploration by using nominal (and possibly imperfect) dynamics models to pre-train a CBF solution using self-supervised learning approaches [21], and subsequently fine-tune using our RL-based method.
**Soft safety guarantees**. Despite empirically correlating well with the capacity of barrier functions to constrain unsafe policies, our validity metric can only be interpreted as a statistical argument for safety, rather than a formal proof; indeed, provable guarantees are impossible so long as we assume completely black-box dynamics. By considering gray-box dynamics models instead, such as nominal models with unknown parameters, future work can explore methods that provide stronger guarantees such as rigorous verification through Lipschitz-continuity bounds [21], formal verification through symbolic logic [32], or exhaustive verification [33].
**Discrete-action environments**. In this work, we consider discrete-action environments as we parametrize barrier functions in terms of \(Q\)-functions and evaluate \(V(x)=\sup_{u\in\mathcal{U}}Q(x,u)\), which is more difficult in continuous-action environments. Nevertheless, the results in Section 3.1 show the applicability to any general MDP. An analogous method for continuous-action tasks could use an actor to obtain a variational lower bound on \(V\), or explore dueling network architectures [34] which parametrize and learn separate \(V,Q\).
**Sample efficiency**. Our work adopts a minimalist RL approach which can be sample-inefficient. Future work can improve sample efficiency through offline datasets [35], model-based reinforcement learning [36; 37], or better representation learning methods [38; 39].
## 7 Conclusion
This work presents theoretical contributions that establish a connection between barrier functions and value functions and demonstrates the feasibility of learning barrier functions through an RL approach. We explore and ablate critical implementation details for learning high-quality barrier functions using our method. We demonstrate that standard RL metrics fail to evaluate the capacity of learned barrier functions to act as safety constraints. To address this gap, we propose our own novel barrier metrics.
The proposed approach is especially suitable for learning **perceptual CBFs**, where safety can be defined as a direct function of sensor inputs. In one case study, perceptual CBFs on LiDAR scans enabled safe obstacle avoidance in cluttered environments [7]. In contrast to self-supervised learning, which requires careful handling of sensor dynamics, reinforcement learning naturally scales to end-to-end robot control [40], making it a promising alternative.
The theoretical contributions of this work have broad applicability and can extend to any MDP \(M\) with any choice of RL algorithm. This suggests that our method can be employed to learn barrier functions for safe control in diverse tasks. Future work will extend to tasks with different reward structures by defining an auxiliary safety-preserving reward for the unsafe set \(\mathcal{X}_{\text{unsafe}}\) and training an auxiliary value function as the CBF. This will enable joint learning of safety constraints and task-oriented behaviours.
In summary, our work contributes to the development of general, scalable, and _verifiable_ control methods that can be applied to various tasks. By introducing novel barrier metrics and leveraging reinforcement learning techniques, we provide a useful framework for developing verifiable control systems, enabling safer and more reliable autonomous behaviors in real-world environments.
|
2306.03991
|
Numerical evidence for a small-scale dynamo approaching solar magnetic
Prandtl numbers
|
Magnetic fields on small scales are ubiquitous in the universe. Though they
can often be observed in detail, their generation mechanisms are not fully
understood. One possibility is the so-called small-scale dynamo (SSD).
Prevailing numerical evidence, however, appears to indicate that an SSD is
unlikely to exist at very low magnetic Prandtl numbers ($Pr_M$) such as are
present in the Sun and other cool stars. We have performed high-resolution
simulations of isothermal forced turbulence employing the lowest $Pr_M$ values
so far achieved. Contrary to earlier findings, the SSD turns out to be not only
possible for $Pr_M$ down to 0.0031, but even becomes increasingly easier to
excite for $Pr_M$ below $\simeq\,$0.05. We relate this behaviour to the known
hydrodynamic phenomenon referred to as the bottleneck effect. Extrapolating our
results to solar values of $Pr_M$ indicates that an SSD would be possible under
such conditions.
|
Jörn Warnecke, Maarit J. Korpi-Lagg, Frederick A. Gent, Matthias Rheinhardt
|
2023-06-06T19:58:29Z
|
http://arxiv.org/abs/2306.03991v1
|
# Numerical evidence for a small-scale dynamo approaching solar magnetic Prandtl numbers
###### Abstract
Magnetic fields on small scales are ubiquitous in the universe. Though they can often be observed in detail, their generation mechanisms are not fully understood. One possibility is the so-called small-scale dynamo (SSD). Prevailing numerical evidence, however, appears to indicate that an SSD is unlikely to exist at very low magnetic Prandtl numbers (\(\mathrm{Pr}_{\mathrm{M}}\)) such as are present in the Sun and other cool stars. We have performed high-resolution simulations of isothermal forced turbulence employing the lowest \(\mathrm{Pr}_{\mathrm{M}}\) values so far achieved. Contrary to earlier findings, the SSD turns out to be not only possible for \(\mathrm{Pr}_{\mathrm{M}}\) down to \(0.0031\), but even becomes increasingly easier to excite for \(\mathrm{Pr}_{\mathrm{M}}\) below \(\simeq 0.05\). We relate this behaviour to the known hydrodynamic phenomenon referred to as the bottleneck effect. Extrapolating our results to solar values of \(\mathrm{Pr}_{\mathrm{M}}\) indicates that an SSD would be possible under such conditions.
*Corresponding author(s). E-mail(s): [email protected];
helicity, or more generally, lacking mirror-symmetry, due to rotation, shear, and/or stratification. It generates coherent, dynamically significant, magnetic fields on the global scales of the object in question [1]. Characteristics of LSDs vary depending on the dominating generative effects, such as differential rotation in the case of the Sun. Convective turbulence provides both generative and dissipative effects [2], and their presence and astrophysical relevance is no longer strongly debated.
The presence of the other type of dynamo instability, namely the small-scale or fluctuation dynamo (SSD), however, remains controversial in solar and stellar physics. In an SSD-active system, the magnetic field is generated at scales comparable to, or smaller than, the characteristic scales of the turbulent flow, enabled by chaotic stretching of field lines at high magnetic Reynolds number [3]. In contrast to the LSD, excitation of an SSD requires significantly stronger turbulence [1]. Furthermore, it has been theorized that it becomes more difficult to excite an SSD at very low magnetic Prandtl number \(\Pr_{\rm M}\)[4; 5; 6; 7; 8; 9; 10], the ratio of kinematic viscosity \(\nu\) and magnetic diffusivity \(\eta\). In the Sun, \(\Pr_{\rm M}\) can reach values as low as \(10^{-6}\)-\(10^{-4}\)[11], thus seriously calling into question whether an SSD can be present at all. Numerical models of SSD in near-surface solar convection typically operate at \(\Pr_{\rm M}\approx 1\)[12; 13; 14; 15; 16; 17; 18] and thus circumvent the issue of low-\(\Pr_{\rm M}\) dynamos.
A powerful SSD may potentially have a large impact on the dynamical processes in the Sun. It can, for example, influence the angular momentum transport and therefore the generation of differential rotation [19; 20], interact with the LSD [21; 22; 23; 24; 25] or contribute to coronal heating via enhanced photospheric Poynting flux [26]. Hence, it is of great importance to clarify whether or not an SSD can exist in the Sun. Observationally, it is still debated whether the small-scale magnetic field on the surface of the Sun has contributions from the SSD or is solely due to the tangling of the large-scale magnetic field by the turbulent motions [27; 28; 29; 30; 31; 32]. However, these studies show a slight preference of the small-scale fields to be cycle-independent. SSD at small \(\Pr_{\rm M}\) are also important for the interiors of planets and for liquid metal experiments [33].
Various numerical studies have reported increasing difficulties in exciting the SSD when decreasing \(\Pr_{\rm M}\)[6; 10; 34], confirming the theoretical predictions. However, current numerical models reach only \(\Pr_{\rm M}=0.03\) using explicit physical diffusion or slightly lower (estimated) \(\Pr_{\rm M}\), relying on artificial hyperdiffusion [7; 8]. To achieve even lower \(\Pr_{\rm M}\), one needs to increase the grid resolution massively, see also [35]. Exciting the SSD requires a magnetic Reynolds number (\(\Re_{\rm M}\)) typically larger than 100; hence, e.g., \(\Pr_{\rm M}=0.01\) implies a fluid Reynolds number \(\Re=10^{4}\), where \(\Re=u_{\rm rms}\ell/\nu\), \(u_{\rm rms}\) being the volume integrated root-mean-squared velocity, \(\ell\) a characteristic scale of the velocity, and \(\Re_{\rm M}=\Pr_{\rm M}\Re\). In this paper, we take this path and lower \(\Pr_{\rm M}\) significantly using high-resolution simulations.
## Results
We include simulations with resolutions of \(256^{3}\) to \(4608^{3}\) grid points and \(\mathrm{Re}=46\) to \(33000\). This allows us to explore the parameter space from \(\mathrm{Pr_{M}}=1\) to \(0.0025\), which is closer to the solar value than has been investigated in previous studies. For each run, we measure the growth rate \(\lambda\) of the magnetic field in its kinematic stage and determine whether or not an SSD is being excited.
To afford an in-depth exploration of the effect of \(\mathrm{Pr_{M}}\), we omit large-scale effects such as stratification, rotation and shear. We avoid the excessive integration times, required to simulate convection, by driving the turbulent flow explicitly under isothermal conditions. Our simulation setup consists of a fully periodic box with a random volume force, see Online Methods for details; the flow exhibits a Mach number of around \(0.08\). In Fig. 1, we visualize the velocity and magnetic fields of one of the highest resolution and Reynolds number cases. As might be anticipated for low \(\mathrm{Pr_{M}}\) turbulence, the flow exhibits much finer, fractal-like structures than the magnetic field. Note, that all our results refer to the kinematic stage of the SSD, where the magnetic field strength is far too weak to influence the flow and otherwise arbitrary.
### Growth rates and critical magnetic Reynolds numbers
In Fig. 2 we visualize the growth rate \(\lambda\) as a function of \(\mathrm{Re}\) and \(\mathrm{Re_{M}}\). We find positive growth rates for all sets of runs with constant \(\mathrm{Pr_{M}}\) if \(\mathrm{Re_{M}}\) is large enough. \(\lambda\) always increases with increasing \(\mathrm{Re_{M}}\), as expected. Surprisingly, the growth rates are significantly lower within the interval from \(\mathrm{Re}=2000\) to
Figure 1: Visualisation of flow and SSD solution. Flow speed (left) and magnetic field strength (right) from a high resolution SSD-active run with \(\mathrm{Re}=18200\) and \(\mathrm{Pr_{M}}=0.01\) on the surface of the simulation box.
10000 than below and above. With the \(\mathrm{Re}_{\mathrm{M}}\) values used, this maps roughly to a \(\mathrm{Pr}_{\mathrm{M}}\) interval from about 0.1 to 0.04.
The growth rates for \(\mathrm{Pr}_{\mathrm{M}}=0.1\) match very well the ones from [10], indicated by triangles. From Fig. 2, we clearly see that the critical magnetic Reynolds number \(\mathrm{Re}_{\mathrm{M}}^{\mathrm{crit}}\), defined by growth rate \(\lambda=0\), first rises as a function of \(\mathrm{Re}\) and then falls for \(\mathrm{Re}>3\times 10^{3}\), see the thin black line. Looking at \(\mathrm{Re}_{\mathrm{M}}^{\mathrm{crit}}\) as a function of magnetic Prandtl number \(\mathrm{Pr}_{\mathrm{M}}\), it first increases with decreasing \(\mathrm{Pr}_{\mathrm{M}}\) and then decreases for \(\mathrm{Pr}_{\mathrm{M}}<0.05\). Hence, an SSD is easier to excite here than for \(0.05<\mathrm{Pr}_{\mathrm{M}}<0.1\). We could even find a nearly marginal, positive growth rate for \(\mathrm{Pr}_{\mathrm{M}}=0.003125\). The decrease of \(\lambda\) at low \(\mathrm{Pr}_{\mathrm{M}}\) is an important result as the SSD was believed to be even harder [4; 9] or at least equally hard [7; 8] to excite when \(\mathrm{Pr}_{\mathrm{M}}\) was decreased further from previously investigated values. The growth rates agree qualitatively with the earlier work at low \(\mathrm{Pr}_{\mathrm{M}}\)[6; 7; 8].
For a more accurate determination of \(\mathrm{Re}_{\mathrm{M}}^{\mathrm{crit}}\), we next plot the growth rates for fixed \(\mathrm{Pr}_{\mathrm{M}}\) as a function of \(\mathrm{Re}_{\mathrm{M}}\), see Fig. 3(a). The data are consistent with \(\lambda\propto\ln\left(\mathrm{Re}_{\mathrm{M}}/\mathrm{Re}_{\mathrm{M}}^{ \mathrm{crit}}\right)\) as theoretically predicted [36; 37]. Fitting accordingly, we are able to determine \(\mathrm{Re}_{\mathrm{M}}^{\mathrm{crit}}\) as a function of \(\mathrm{Pr}_{\mathrm{M}}\), see Fig. 3(b). The latter plot clearly shows that there are three distinct regions of dynamo excitation: When \(\mathrm{Pr}_{\mathrm{M}}\) decreases in the range \(1\geq\mathrm{Pr}_{\mathrm{M}}\geq 0.1\) it becomes much harder to excite
Figure 2: **Small-scale dynamo growth rate as function of the fluid and magnetic Reynolds numbers (\(\mathrm{Re}\) and \(\mathrm{Re}_{\mathrm{M}}\)).** The diamonds represent the results of this work and the triangles those of [10]. The color coding indicates the value of the normalized growth rate \(\lambda\tau\) with \(\tau=1/u_{\mathrm{rms}}k_{\mathrm{f}}\), a rough estimate for the turnover time. Dotted lines indicate constant magnetic Prandtl number \(\mathrm{Pr}_{\mathrm{M}}\). White circles indicate zero growth rate for certain \(\mathrm{Pr}_{\mathrm{M}}\), obtained from fitting for the critical magnetic Reynolds number, as shown in Fig. 3; fitting errors are signified by yellow-black bars. see Supplementary Material, Section 5. The background colors including the thin black line (zero growth) are assigned via linear interpolation of the simulation data. The green dashed line shows the power-law fit of the critical \(\mathrm{Re}_{\mathrm{M}}\) for \(\mathrm{Pr}_{\mathrm{M}}\leq 0.08\), with power 0.125, see Fig. 3b.
the SSD. In the range \(0.1\geq\mathrm{Pr}_{\mathrm{M}}\geq 0.04\), excitation is most difficult, with little variation of \(\mathrm{Re}_{\mathrm{M}}^{\mathrm{crit}}\). For \(\mathrm{Pr}_{\mathrm{M}}\leq 0.04\), it again becomes easier as \(\mathrm{Pr}_{\mathrm{M}}\) decreases. In [7; 8], the authors already found an indication that \(\mathrm{Re}_{\mathrm{M}}^{\mathrm{crit}}\) levels off with decreasing \(\mathrm{Pr}_{\mathrm{M}}\), however only when using artificial hyperdiffusion. Similarly, within our error bars, a constant \(\mathrm{Re}_{\mathrm{M}}^{\mathrm{crit}}\) cannot be excluded for \(0.01<\mathrm{Pr}_{\mathrm{M}}<0.1\). However, at \(\mathrm{Pr}_{\mathrm{M}}=0.005\), the error bar allows us to conclude that \(\mathrm{Re}_{\mathrm{M}}^{\mathrm{crit}}\) is lower here than at \(\mathrm{Pr}_{\mathrm{M}}=0.05\). This again confirms our result that \(\mathrm{Re}_{\mathrm{M}}^{\mathrm{crit}}\) decreases with \(\mathrm{Pr}_{\mathrm{M}}\) for very low \(\mathrm{Pr}_{\mathrm{M}}\).
For \(\Pr_{\mathrm{M}}\leq 0.05\), the decrease of \(\mathrm{Re}_{\mathrm{M}}^{\mathrm{crit}}\) with \(\Pr_{\mathrm{M}}\) can be well represented by the power-law \(\mathrm{Re}_{\mathrm{M}}^{\mathrm{crit}}\propto\Pr_{\mathrm{M}}^{0.125}\). Extrapolating this to the Sun and solar-like stars would lead to \(\mathrm{Re}_{\mathrm{M}}^{\mathrm{crit}}\approx 40\) at \(\Pr_{\mathrm{M}}=10^{-6}\), which means that we could expect an SSD to be present. For increasing \(\mathrm{Re}\), by decreasing \(\nu\), it would be reasonable to assert that the statistical properties of the flow and hence \(\mathrm{Re}_{\mathrm{M}}^{\mathrm{crit}}\) become independent of \(\Pr_{\mathrm{M}}\). However, episodes of non-monotonic behavior of \(\mathrm{Re}_{\mathrm{M}}^{\mathrm{crit}}\) when approaching this limit cannot be ruled out.
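As a rough consistency check of this extrapolation (the reference value is read off Fig. 3b and is therefore approximate): taking \(\mathrm{Re}_{\mathrm{M}}^{\mathrm{crit}}\approx 150\) near \(\Pr_{\mathrm{M}}\approx 0.05\), the fitted power law gives

\[\mathrm{Re}_{\mathrm{M}}^{\mathrm{crit}}(10^{-6})\approx 150\left(\frac{10^{-6}}{0.05}\right)^{0.125}\approx 150\times 0.26\approx 39,\]

consistent with the quoted value of about 40.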
The well-determined \(\mathrm{Re}_{\mathrm{M}}^{\mathrm{crit}}\) dependency on \(\Pr_{\mathrm{M}}\) together with its error bars and the power-law fit have been added to Fig. 2, and agree very well with the thin black line for \(\lambda=0\) interpolated from the growth rates.
### Regions of dynamo excitation
Next we seek answers to the obvious question that arises: why is the SSD harder to excite in a certain intermediate range of \(\Pr_{\mathrm{M}}\) and easier at lower and higher values? For this, we investigate the kinetic and magnetic energy spectra of a representative subset of the runs, see Supplementary Table 2. We show in Fig. 4 spectra of two exemplary cases: Run F005, with \(\Pr_{\mathrm{M}}=0.05\), probes the
Figure 3: **Growth rate and critical Reynolds number.** Panel (a): normalized growth rate \(\lambda\tau\) as function of magnetic Reynolds number \(\mathrm{Re}_{\mathrm{M}}\) for simulation sets with fixed magnetic Prandtl number \(\Pr_{\mathrm{M}}\), indicated by different colors. Logarithmic functions \(\lambda\tau\propto\ln\left(\mathrm{Re}_{\mathrm{M}}/\mathrm{Re}_{\mathrm{M}}^{ \mathrm{crit}}\right)\) according to [36; 37] were fitted separately to the individual sets, as indicated by the colored lines; see the dashed-dotted line for the mean slope. Panel (b): critical magnetic Reynolds number \(\mathrm{Re}_{\mathrm{M}}^{\mathrm{crit}}\) as function of \(\Pr_{\mathrm{M}}\) obtained from the fits in panel (a). Error bars show the fitting error, see Supplementary Material, Section 5. The diamond indicates a run with growth rate \(\lambda\approx 0\), hence its \(\mathrm{Re}_{\mathrm{M}}\) represents \(\approx\mathrm{Re}_{\mathrm{M}}^{\mathrm{crit}}\) for the used \(\Pr_{\mathrm{M}}=0.003125\). The red dashed line is a power law fit \(\mathrm{Re}_{\mathrm{M}}^{\mathrm{crit}}\propto\Pr_{\mathrm{M}}^{0.125}\), valid for \(\Pr_{\mathrm{M}}\lesssim 0.08\). The grey shaded area indicates the \(\Pr_{\mathrm{M}}\) interval where the dynamo is hardest to excite (\(\mathrm{Re}_{\mathrm{M}}^{\mathrm{crit}}\gtrsim 150\)).
\(\mathrm{Pr_{M}}\) interval of impeded dynamo action, while Run H0005, with \(\mathrm{Pr_{M}}=0.005\), is clearly outside it; see Supplementary Fig. 1 and 2 for spectra of other cases.
In all cases the kinetic energy clearly follows a Kolmogorov cascade with \(E_{\mathrm{kin}}\propto k^{-5/3}\) in the inertial range. When compensating with \(k^{5/3}\), we find the well-known bottleneck effect [38; 39]: a local increase in spectral energy, deviating from the power-law, as found both in fluid experiments [40; 41; 42] and in numerical studies [43; 44]. It has been postulated to be detrimental to SSD growth [4; 10]. For the magnetic spectrum, on the other hand, we find a power law \(E_{\mathrm{mag}}\propto k^{-3}\), which is clearly visible only for \(\mathrm{Pr_{M}}\leq 0.005\). A \(3/2\) slope at low wavenumbers as predicted by [45] is seen only in the runs with \(\mathrm{Pr_{M}}\) close to one, while for the intermediate and low \(\mathrm{Pr_{M}}\) runs, the positive-slope part of the spectrum shrinks to cover only the lowest \(k\) values, and the steep negative slopes at high \(k\) values become prominent. A steep negative slope in the magnetic power spectra was also seen by [7] for \(\mathrm{Pr_{M}}\) slightly below unity. However, the authors propose a tentative power of \(-1\) given that the \(-3\) slope is not yet clearly visible for their \(\mathrm{Pr_{M}}\) values.
Analyzing our simulations, we adopt the following strategy: For each spectrum, we determine the wavenumber of the bottleneck, \(k_{\mathrm{b}}\), as the location of its maximum in the (smoothed) compensated spectrum, along with its starting
Figure 4: **Energy spectra.** Kinetic (top row) and magnetic (bottom row) energy spectra for two exemplary runs with \(\mathrm{Re}=7958\), \(\mathrm{Pr_{M}}=0.05\) (left column) and \(\mathrm{Re}=32930\), \(\mathrm{Pr_{M}}=0.005\) (right column). In the middle row, the kinetic spectra are compensated by \(k^{5/3}\). Vertical lines indicate the forcing wavenumber \(k_{\mathrm{f}}\) (green solid), the wavenumber of the bottleneck’s peak \(k_{\mathrm{b}}\) (red solid) and its starting point \(k_{\mathrm{bs}}\) (red dotted), the viscous dissipation wavenumber \(k_{\nu}\) (orange), the ohmic dissipation wavenumber \(k_{\eta}=k_{\nu}\mathrm{Pr_{M}^{3/4}}\) (dark blue) and the characteristic magnetic wavenumber \(k_{\mathrm{M}}\) (light blue). All spectra are averaged over the kinematic phase whereupon each individual magnetic spectrum was normalized by its maximum, thus taking out the exponential growth.
point \(k_{\rm bs}<k_{\rm b}\) at the location with 75% of the maximum, see the middle-row panels of Fig. 4. We additionally calculate a characteristic magnetic wavenumber, defined as \(k_{\rm M}=\int_{k}E_{\rm mag}(k)k\,dk/\int_{k}E_{\rm mag}(k)\,dk\), which is often connected with the energy-carrying scale. Furthermore, we calculate the viscous dissipation wavenumber \(k_{\nu}=(\epsilon_{\rm K}/\nu^{3})^{1/4}\) following Kolmogorov theory, where \(\epsilon_{\rm K}\) is the viscous dissipation rate \(2\nu\mathbf{S}^{2}\) with the traceless rate-of-strain tensor of the flow, \(\mathbf{S}\). From the relations between these four wavenumbers (listed in Supplementary Table 2), we will draw insights about the observed behavior of \(\rm Re_{M}^{\rm crit}\) with respect to \(\rm Pr_{M}\).
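For reference, these diagnostics can be reproduced from the stored one-dimensional spectra with a few lines of post-processing. The following is only an illustrative numpy sketch (the smoothing width is an assumption; function and variable names are ours, not those of the actual analysis scripts):

```python
import numpy as np

def spectral_diagnostics(k, E_kin, E_mag, eps_K, nu, smooth=5):
    """Bottleneck and characteristic wavenumbers as defined in the text."""
    # compensate the kinetic spectrum with k^{5/3} and smooth it slightly
    comp = E_kin * k**(5.0 / 3.0)
    comp = np.convolve(comp, np.ones(smooth) / smooth, mode="same")

    ib = int(np.argmax(comp))                    # peak of the bottleneck
    k_b = k[ib]
    below = np.nonzero(comp[:ib] <= 0.75 * comp[ib])[0]
    k_bs = k[below[-1]] if below.size else k[0]  # starting point (75% of the maximum)

    # slope of the low-wavenumber part of the bottleneck (alpha_b)
    sel = (k >= k_bs) & (k <= k_b)
    alpha_b = np.polyfit(np.log(k[sel]), np.log(E_kin[sel]), 1)[0]

    # characteristic magnetic wavenumber and viscous dissipation wavenumber
    k_M = np.trapz(E_mag * k, k) / np.trapz(E_mag, k)
    k_nu = (eps_K / nu**3) ** 0.25
    return k_b, k_bs, alpha_b, k_M, k_nu
```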
We plot \(k_{\rm b}/k_{\nu}\) and \(k_{\rm bs}/k_{\nu}\) as functions of \(\rm Pr_{M}\) in Fig. 5. As is expected, \(k_{\rm b}/k_{\nu}\), or the ratio of the viscous scale to the scale of the bottleneck, does not depend on \(\rm Pr_{M}\), as the bottleneck is a purely hydrodynamic phenomenon. The start of the bottleneck \(k_{\rm bs}\) should likewise not depend on \(\rm Pr_{M}\), but the low \(\rm Re\) values for \(\rm Pr_{M}=1\) to \(0.1\) lead to apparently thinner bottlenecks, hence an unsystematic weak dependency. The red shaded area between \(k_{\rm b}\) and \(k_{\rm bs}\) is the low-wavenumber part of the bottleneck where the slope of the spectrum is larger (less negative) than \(-5/3\); see Supplementary Table 2 for values of the modified slope \(\alpha_{\rm b}\) and Supplementary Material, Section 1, for a discussion. We note that \(\alpha_{\rm b}\approx-1.3\ldots-1.5\) and can thus deviate significantly from \(-5/3\). Overplotting the \(k_{\rm M}/k_{\nu}\) curve reveals that it intersects with the red shaded area exactly
Figure 5: **Relation of the characteristic magnetic wavenumber \(k_{\rm M}\) to the bottleneck.** We show its peak \(k_{\rm b}\) and its starting point \(k_{\rm bs}\) in red, the characteristic magnetic wavenumber \(k_{\rm M}\) in light blue and the ohmic dissipation wavenumber \(k_{\eta}\) in dark blue. The red shaded area between \(k_{\rm b}\) and \(k_{\rm bs}\) corresponds to the low-wavenumber part of the bottleneck where the turbulent flow is rougher than for a \(-5/3\) power-law. The Roman numbers indicate the three distinct regions of dynamo excitation. The region of the weakest growth (II) is over-plotted in grey. The characteristic magnetic wavenumber \(k_{\rm M}\) can be fitted by two power laws (black dotted lines): \(k_{\rm M}/k_{\nu}\propto\rm Pr_{M}^{0.54}\) for \(\rm Pr_{M}\geq 0.05\) and \(k_{\rm M}/k_{\nu}\propto\rm Pr_{M}^{0.71}\) for \(\rm Pr_{M}\leq 0.05\). All wavenumbers are normalized by the viscous one \(k_{\nu}\). We find that the dynamo is hardest to excite if \(k_{\rm M}\) lies within the low-wavenumber side of the bottleneck. Leaving this region towards lower or higher wavenumbers makes the dynamo easier to excite. The inset shows \(k_{\rm M}/k_{\eta}\) as a function of \(\rm Pr_{M}\).
where the dynamo is hardest to excite (region II). This lets us conclude that the shallower slope of the low-wavenumber part of the bottleneck may indeed be responsible for enhancing \(\mathrm{Re}_{\mathrm{M}}^{\mathrm{crit}}\) in the interval \(0.04\leq\mathrm{Pr}_{\mathrm{M}}\leq 0.1\). Using this plot, we can now clearly explain the three regions of dynamo excitation. For \(0.1\leq\mathrm{Pr}_{\mathrm{M}}\leq 1\), the low-wavenumber part of the bottleneck and the characteristic magnetic scale are completely decoupled. This makes the SSD easy to excite (region I). For \(0.04\leq\mathrm{Pr}_{\mathrm{M}}\leq 0.1\) (grey, region II), the dynamo is hardest to excite because of the shallower slope of the kinetic spectra. In region III, where \(\mathrm{Pr}_{\mathrm{M}}\leq 0.04\), the low-wavenumber part of the bottleneck and the characteristic magnetic scale are again completely decoupled, making the dynamo easier to excite.
Further, we find that the dependence of \(k_{\mathrm{M}}/k_{\nu}\) on \(\mathrm{Pr}_{\mathrm{M}}\) also differs between the regions. In region I \(k_{\mathrm{M}}/k_{\nu}\) depends on \(\mathrm{Pr}_{\mathrm{M}}\) via \(k_{\mathrm{M}}/k_{\nu}\propto\mathrm{Pr}_{\mathrm{M}}^{0.54}\) and in region II and III via \(k_{\mathrm{M}}/k_{\nu}\propto\mathrm{Pr}_{\mathrm{M}}^{0.71}\). This becomes particularly interesting when comparing the characteristic wavenumber \(k_{\mathrm{M}}\) with the ohmic dissipation wavenumber which is defined as \(k_{\eta}=k_{\nu}\mathrm{Pr}_{\mathrm{M}}^{3/4}\). In region I, we find a significant difference of \(k_{\mathrm{M}}\) and \(k_{\eta}\) in value and scaling. However, in region III the scaling of \(k_{\mathrm{M}}\) comes very close to the \(3/4\) scaling of \(k_{\eta}\). This behaviour can be even better seen in the inset of Fig. 5, where the ratio \(k_{\mathrm{M}}/k_{\eta}\) is \(0.3\) for \(\mathrm{Pr}_{\mathrm{M}}=1\) and tends towards unity for decreasing \(\mathrm{Pr}_{\mathrm{M}}\), but is likely to saturate below \(0.75\).
## Discussion
In conclusion, we find that the SSD is progressively easier to excite for magnetic Prandtl numbers below \(0.04\), in contrast to earlier findings, and thus is very likely to exist in the Sun and other cool stars. Provided saturation at sufficiently high levels, the SSD has been proposed to strongly influence the dynamics of solar-like stars: previous numerical studies, albeit at \(\mathrm{Pr}_{\mathrm{M}}\approx 1\), indicate that this influence concerns for example the angular momentum transport [19; 20], and the LSD [21; 22; 23; 24; 25]. Our kinematic study, however, only shows that a positive growth rate is possible at very low \(\mathrm{Pr}_{\mathrm{M}}\), but not whether an SSD is able to generate dynamically important field strengths. As the \(\mathrm{Re}_{\mathrm{M}}\) of the Sun and solar-like stars is several orders of magnitude higher than the extrapolated \(\mathrm{Re}_{\mathrm{M}}^{\mathrm{crit}}\) value of \(40\), we yet expect dynamically important SSDs as indicated by \(\mathrm{Pr}_{\mathrm{M}}=1\) simulations [15]. However, numerical simulations with \(\mathrm{Pr}_{\mathrm{M}}\) down to \(0.01\) show a decrease of the saturation strength with decreasing \(\mathrm{Pr}_{\mathrm{M}}\)[46].
The results of our study are well in agreement with previous numerical studies considering partly overlapping \(\mathrm{Pr}_{\mathrm{M}}\) ranges [6; 7; 8; 10]. Those found some discrepancies with the Kazantsev theory [45] for low \(\mathrm{Pr}_{\mathrm{M}}\), for example the narrowing down of the positive Kazantsev spectrum at low and intermediate wavenumbers, and the emergence of a negative slope instead at large wave numbers [7]. We could extend this regime to even lower \(\mathrm{Pr}_{\mathrm{M}}\) and therefore study these discrepancies further. For \(\mathrm{Pr}_{\mathrm{M}}\leq 0.005\) we find that the magnetic
spectrum shows a power-law scaling \(k^{-3}\), which is significantly steeper than the tentative \(k^{-1}\) one proposed in [7] for \(0.03\lesssim\Pr_{\mathrm{M}}\lesssim 0.07\) (but only for 8th-order hyperdiffusivity). This latest finding of such a steep power law in the magnetic spectrum challenges the current theoretical predictions and might indicate that the SSD operating at low \(\Pr_{\mathrm{M}}\) is fundamentally different from that at \(\Pr_{\mathrm{M}}\approx 1\).
Secondly, we find that the growth rates near the onset follow an \(\ln(\mathrm{Re}_{\mathrm{M}})\) dependence as predicted by [36; 37], and not a \(\mathrm{Re}_{\mathrm{M}}^{1/2}\) one as would result from an inertial-range-driven SSD [1; 7]. We do not observe a tendency of the growth rate to become independent of \(\mathrm{Re}_{\mathrm{M}}\) at the highest \(\Pr_{\mathrm{M}}\) either, which could be an indication of an outer-scale driven SSD, as postulated by [7]. Furthermore, we find that the pre-factor of \(\lambda\tau\propto\ln(\mathrm{Re}_{\mathrm{M}}/\mathrm{Re}_{\mathrm{M}}^{\mathrm{crit}})\) is nearly constant with its mean around 0.022, in agreement with the value 0.023 of [10]. A constant value means that the logarithmic scaling is independent of \(\Pr_{\mathrm{M}}\) and seems to be of general validity.
Thirdly, we find that the measured characteristic magnetic wavenumber \(k_{\mathrm{M}}\) is always smaller than the estimated \(k_{\eta}\), and furthermore, \(k_{\mathrm{M}}\) does not always follow the theory-predicted scaling of \(k_{\eta}\propto\Pr_{\mathrm{M}}^{3/4}\) with \(\Pr_{\mathrm{M}}\). For region I, where \(\Pr_{\mathrm{M}}\) is close to 1, this discrepancy is up to a factor of three, and the deviation from the expected \(\Pr_{\mathrm{M}}\)-scaling is most significant here. These discrepancies have been associated with the numerical setups injecting energy at a forcing scale far larger than the dissipation scale, i.e. \(k_{\mathrm{f}}\ll k_{\eta}\) [1]. Furthermore, our runs in region I also have relatively low \(\mathrm{Re}\), and therefore numerical effects cannot be dismissed. In region III (low \(\Pr_{\mathrm{M}}\)), \(k_{\mathrm{M}}/k_{\eta}\) approaches the constant offset factor 0.75. Hence, the scaling of \(k_{\mathrm{M}}/k_{\nu}\) with \(\Pr_{\mathrm{M}}\) gets close to the expected one. This result again indicates that the SSD at low \(\Pr_{\mathrm{M}}\) is different from that at \(\Pr_{\mathrm{M}}\approx 1\).
An increase of \(\mathrm{Re}_{\mathrm{M}}^{\mathrm{crit}}\) with decreasing \(\Pr_{\mathrm{M}}\) followed by an asymptotic levelling-off for \(\Pr_{\mathrm{M}}\ll 1\) was expected in the light of theory and previous numerical studies. Instead, we found non-monotonic behavior as function of \(\Pr_{\mathrm{M}}\); we could relate it to the hydrodynamical phenomenon of the bottleneck. If the characteristic magnetic wavenumber lies in the positive-gradient part of the compensated spectrum, where the spectral slope is significantly reduced from \(-5/3\) to \(\sim-1.4\), the dynamo is hardest to excite (\(0.1\geq\Pr_{\mathrm{M}}\geq 0.04\)). For higher or lower \(\Pr_{\mathrm{M}}\), the dynamo becomes increasingly easier to excite. The local change in slope due to the bottleneck has often been related to an increase of the "roughness" of the flow [1; 10; 43], which is expected to harden dynamo excitation based on theoretical predictions [4; 9] from kinematic Kazantsev theory [45]. In line with theory, the roughness-increasing part of the bottleneck appears decisive in our results, however, only when \(k_{\mathrm{M}}\) is used as a criterion. The usage of \(k_{\eta}\) would in contrast suggest that the peak of the bottleneck is decisive [10]. Such interpretation appears incorrect, as the rough estimate of \(k_{\eta}\) employed here does not represent the magnetic spectrum adequately and the peak of the bottleneck does not coincide with the maximum of "roughness".
## Online Methods
### Numerical setup
For our simulations, we use a cubic Cartesian box with edge length \(L\) and solve the isothermal magnetohydrodynamic equations without gravity, similarly to [5; 47].
\[\frac{\mathrm{D}\mathbf{u}}{\mathrm{D}t} = -c_{\mathrm{s}}^{2}\mathbf{\nabla}\ln\rho+\mathbf{J}\times\mathbf{B}/\rho+\mathbf{ \nabla}\cdot(2\rho\nu\mathbf{\mathsf{S}})/\rho+\mathbf{f}, \tag{1}\] \[\frac{\partial\mathbf{A}}{\partial t} = \mathbf{u}\times\mathbf{B}+\eta\mathbf{\nabla}^{2}\mathbf{A},\] (2) \[\frac{\mathrm{D}\rho}{\mathrm{D}t} = -\mathbf{\nabla}\cdot(\rho\mathbf{u}), \tag{3}\]
where \(\mathbf{u}\) is the flow speed, \(c_{\mathrm{s}}\) is the sound speed, \(\rho\) is the mass density, and \(\mathbf{B}=\mathbf{\nabla}\times\mathbf{A}\) is the magnetic field with \(\mathbf{A}\) being the vector potential. \(\mathbf{J}=\mathbf{\nabla}\times\mathbf{B}/\mu_{0}\) is the current density with magnetic vacuum permeability \(\mu_{0}\), while \(\nu\) and \(\eta\) are constant kinematic viscosity and magnetic diffusivity, respectively. The rate-of-strain tensor \(\mathbf{S}_{ij}=(u_{i,j}+u_{j,i})/2-\delta_{ij}\mathbf{\nabla}\cdot\mathbf{u}/3\) is traceless. The forcing function \(\mathbf{f}\) provides random white-in-time non-helical transversal plane waves, which are added in each time step to the momentum equation, see [5] for details. The wavenumbers of the forcing lie in a narrow band around \(k_{\mathrm{f}}=2k_{1}\) with \(k_{1}=2\pi/L\). Its amplitude is chosen such that the Mach number \(\mathrm{Ma}=u_{\mathrm{rms}}/c_{\mathrm{s}}\) is always around \(0.082\), where \(u_{\mathrm{rms}}=\sqrt{\langle\mathbf{u}^{2}\rangle_{V}}\) is the volume and time-averaged root-mean-square value. The \(\mathrm{Ma}\) values of all runs are listed in Supplementary Material Table 1. To normalize the growth rate \(\lambda\), we use an estimated turnover time \(\tau=1/u_{\mathrm{rms}}k_{\mathrm{f}}\approx 6/k_{1}c_{\mathrm{s}}\). The boundary conditions are periodic for all quantities and we initialise the magnetic field with weak Gaussian noise.
Diffusion is controlled by the prescribed parameters \(\nu\) and \(\eta\). Accordingly, we define the fluid and magnetic Reynolds numbers with the forcing wavenumber \(k_{\mathrm{f}}\) as
\[\mathrm{Re}=u_{\mathrm{rms}}/\nu k_{\mathrm{f}},\quad\mathrm{Re}_{\mathrm{M}} =u_{\mathrm{rms}}/\eta k_{\mathrm{f}}. \tag{4}\]
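For completeness, the derived run parameters are simple combinations of the prescribed diffusivities and the measured \(u_{\mathrm{rms}}\); a trivial helper (names are illustrative) is:

```python
def run_numbers(u_rms, nu, eta, k_f, c_s):
    """Reynolds numbers (Eq. 4), magnetic Prandtl number, Mach number and turnover time."""
    Re   = u_rms / (nu * k_f)     # fluid Reynolds number
    Re_M = u_rms / (eta * k_f)    # magnetic Reynolds number
    Pr_M = nu / eta               # magnetic Prandtl number, Pr_M = Re_M / Re
    Ma   = u_rms / c_s            # Mach number
    tau  = 1.0 / (u_rms * k_f)    # rough turnover-time estimate used to normalize growth rates
    return Re, Re_M, Pr_M, Ma, tau
```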
We performed numerical free decay experiments (see Supplementary Material, Section 7), from which we confirm that the numerical diffusivities are negligible.
The spectral kinetic and magnetic energy densities are defined via
\[\int_{k}E_{\mathrm{kin}}(k)\,dk = u_{\mathrm{rms}}^{2}\,\langle\rho\rangle_{V}/2, \tag{5}\] \[\int_{k}E_{\mathrm{mag}}(k)\,dk = B_{\mathrm{rms}}^{2}/2\mu_{0}, \tag{6}\]
where \(B_{\rm rms}=\sqrt{\langle\mathbf{B}^{2}\rangle_{V}}\) is the volume-averaged root-mean-square value and \(\langle\rho\rangle_{V}\) the volume-averaged density.
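In the runs these spectra are accumulated on the fly by the Pencil Code [49]; as an illustration of the normalization in Eqs. (5) and (6), a shell-integrated spectrum can be obtained from a snapshot roughly as in the following numpy sketch (for the magnetic spectrum one would pass \(\mathbf{B}/\sqrt{\mu_{0}}\) and omit the density weight):

```python
import numpy as np

def shell_spectrum(field, L, rho=None):
    """Shell-integrated energy spectrum of a 3-component field on a periodic cube.

    field: array of shape (3, N, N, N); rho (optional): density weight for kinetic energy.
    The sum over all shells approximates the mean energy density, cf. Eqs. (5)-(6).
    """
    N = field.shape[-1]
    w = np.sqrt(rho) if rho is not None else 1.0
    fhat = np.array([np.fft.fftn(w * f) / N**3 for f in field])
    power = 0.5 * np.sum(np.abs(fhat)**2, axis=0)        # energy per Fourier mode

    ik = np.fft.fftfreq(N, d=1.0 / N)                    # integer wavenumbers
    KX, KY, KZ = np.meshgrid(ik, ik, ik, indexing="ij")
    shells = np.rint(np.sqrt(KX**2 + KY**2 + KZ**2)).astype(int)

    E = np.bincount(shells.ravel(), weights=power.ravel())
    k = np.arange(E.size) * 2 * np.pi / L                # physical wavenumbers, k_1 = 2*pi/L
    return k, E
```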
Our numerical setup employs a drastically simplified model of turbulence compared to the actual one in the Sun. There, turbulence is driven by stratified rotating convection, which is of course neither isothermal nor isotropic. However, these simplifications are to date necessary for performing a parameter study at such high resolutions as ours. Nevertheless, we can connect our study to solar parameters in terms of \(\rm Pr_{M}\) and Ma. Their chosen values best represent the weakly stratified layers within the bulk of the solar convection zone, where \(\rm Pr_{M}\ll 1\) and \(\rm Ma\ll 1\). The anisotropy of the flow on small scales is much weaker there than near the surface, and these layers are therefore close to our simplified setup.
### Data availability
Data for reproducing Figs. 2, 3, and 5 are included in the article and its supplementary information files. The raw data (time-series, spectra, slices, and snapshots) are provided through IDA/Fair-data service hosted at CSC, Finland, under [https://doi.org/10.23729/206af669-07fd-4a30-9968-b4ded5003014](https://doi.org/10.23729/206af669-07fd-4a30-9968-b4ded5003014). From the raw data, Figs. 1 and 4 can be reproduced.
### Code availability
We use the Pencil Code [48] to perform all simulations, with parallelized fast-Fourier-transforms to calculate the spectra on the fly [49]. Pencil Code is freely available under [https://github.com/pencil-code/](https://github.com/pencil-code/).
### Acknowledgements
We acknowledge fruitful discussions with Axel Brandenburg, Igor Rogachevskii, Alexander Schekochihin, and Jennifer Schober during the Nordita program on "Magnetic field evolution in low density or strongly stratified plasmas". Computing resources from CSC during the Mahti pilot project and from the Max Planck Computing and Data Facility (MPCDF) are gratefully acknowledged. This project, including all authors, has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (Project UniSDyn, grant agreement n:o 818665). This work was done in collaboration with the COFFIES DRIVE Science Center.
### Author Contributions Statement
JW led the design and performance of the numerical simulations, with contributions from all authors, and led the data analysis. MJKL was in charge of acquiring the computational resources from CSC. All the authors contributed to the interpretation of the results and to writing the manuscript.
### Competing Interests Statement
The authors declare no competing interests.
## References
* (1) Brandenburg, A., Subramanian, K.: Astrophysical magnetic fields and nonlinear dynamo theory. Phys. Rep. **417**, 1-209 (2005) [https://arxiv.org/abs/arXiv:astro-ph/0405052](https://arxiv.org/abs/arXiv:astro-ph/0405052). [https://doi.org/10.1016/j.physrep.2005.06.005](https://doi.org/10.1016/j.physrep.2005.06.005)
* (2) Charbonneau, P.: Dynamo models of the solar cycle. Living Reviews in Solar Physics **17**(1), 4 (2020). [https://doi.org/10.1007/s41116-020-00025-6](https://doi.org/10.1007/s41116-020-00025-6)
* (3) Childress, S., Gilbert, A.D.: Stretch, twist, fold: The fast dynamo. Lecture Notes in Physics Monographs **37** (1995)
* (4) Boldyrev, S., Cattaneo, F.: Magnetic-Field Generation in Kolmogorov Turbulence. Phys. Rev. Lett. **92**(14), 144501 (2004) [https://arxiv.org/abs/astro-ph/0310780](https://arxiv.org/abs/astro-ph/0310780) [astro-ph]. [https://doi.org/10.1103/PhysRevLett.92.144501](https://doi.org/10.1103/PhysRevLett.92.144501)
* (5) Haugen, N.E.L., Brandenburg, A.: Suppression of small scale dynamo action by an imposed magnetic field. Phys. Rev. E **70**(3), 036408 (2004) [https://arxiv.org/abs/arXiv:astro-ph/0402281](https://arxiv.org/abs/arXiv:astro-ph/0402281). [https://doi.org/10.1103/PhysRevE.70.036408](https://doi.org/10.1103/PhysRevE.70.036408)
* (6) Schekochihin, A.A., Cowley, S.C., Maron, J.L., McWilliams, J.C.: Critical Magnetic Prandtl Number for Small-Scale Dynamo. Phys. Rev. Lett. **92**(5), 054502 (2004) [https://arxiv.org/abs/astro-ph/0308336](https://arxiv.org/abs/astro-ph/0308336) [astro-ph]. [https://doi.org/10.1103/PhysRevLett.92.054502](https://doi.org/10.1103/PhysRevLett.92.054502)
* (7) Schekochihin, A.A., Iskakov, A.B., Cowley, S.C., McWilliams, J.C., Proctor, M.R.E., Yousef, T.A.: Fluctuation dynamo and turbulent induction at low magnetic Prandtl numbers. New Journal of Physics **9**(8), 300 (2007) [https://arxiv.org/abs/0704.2002](https://arxiv.org/abs/0704.2002) [physics.flu-dyn]. [https://doi.org/10.1088/1367-2630/9/8/300](https://doi.org/10.1088/1367-2630/9/8/300)
* (8) Iskakov, A.B., Schekochihin, A.A., Cowley, S.C., McWilliams, J.C., Proctor, M.R.E.: Numerical Demonstration of Fluctuation Dynamo at Low Magnetic Prandtl Numbers. Phys. Rev. Lett. **98**(20), 208501 (2007) [https://arxiv.org/abs/astro-ph/0702291](https://arxiv.org/abs/astro-ph/0702291) [astro-ph]. [https://doi.org/10.1103/PhysRevLett.98.208501](https://doi.org/10.1103/PhysRevLett.98.208501)
* (9) Schober, J., Schleicher, D., Bovino, S., Klessen, R.S.: Small-scale dynamo at low magnetic Prandtl numbers. Phys. Rev. E **86**(6), 066412 (2012)
[https://arxiv.org/abs/1212.5979](https://arxiv.org/abs/1212.5979) [astro-ph.CO]. [https://doi.org/10.1103/PhysRevE.86.066412](https://doi.org/10.1103/PhysRevE.86.066412)
* (10) Brandenburg, A., Haugen, N.E.L., Li, X.-Y., Subramanian, K.: Varying the forcing scale in low Prandtl number dynamos. MNRAS **479**(2), 2827-2833 (2018) [https://arxiv.org/abs/1805.01249](https://arxiv.org/abs/1805.01249) [physics.flu-dyn]. [https://doi.org/10.1093/mnras/sty1570](https://doi.org/10.1093/mnras/sty1570)
* (11) Stix, M.: The Sun: an Introduction. Springer, Berlin (2002)
* (12) Cattaneo, F.: On the Origin of Magnetic Fields in the Quiet Photosphere. ApJ **515**(1), 39-42 (1999). [https://doi.org/10.1086/311962](https://doi.org/10.1086/311962)
* (13) Vogler, A., Schussler, M.: A solar surface dynamo. A&A **465**(3), 43-46 (2007) [https://arxiv.org/abs/astro-ph/0702681](https://arxiv.org/abs/astro-ph/0702681) [astro-ph]. [https://doi.org/10.1051/0004-6361:20077253](https://doi.org/10.1051/0004-6361:20077253)
* (14) Kitiashvili, I.N., Kosovichev, A.G., Mansour, N.N., Wray, A.A.: Realistic Modeling of Local Dynamo Processes on the Sun. ApJ **809**(1), 84 (2015) [https://arxiv.org/abs/1506.08924](https://arxiv.org/abs/1506.08924) [astro-ph.SR]. [https://doi.org/10.1088/0004-637X/809/1/84](https://doi.org/10.1088/0004-637X/809/1/84)
* (15) Hotta, H., Rempel, M., Yokoyama, T.: Efficient Small-scale Dynamo in the Solar Convection Zone. ApJ **803**(1), 42 (2015) [https://arxiv.org/abs/1502.03846](https://arxiv.org/abs/1502.03846) [astro-ph.SR]. [https://doi.org/10.1088/0004-637X/803/1/42](https://doi.org/10.1088/0004-637X/803/1/42)
* (16) Rempel, M.: Numerical Simulations of Quiet Sun Magnetism: On the Contribution from a Small-scale Dynamo. ApJ **789**, 132 (2014) [https://arxiv.org/abs/1405.6814](https://arxiv.org/abs/1405.6814) [astro-ph.SR]. [https://doi.org/10.1088/0004-637X/789/2/132](https://doi.org/10.1088/0004-637X/789/2/132)
* (17) Rempel, M.: Small-scale Dynamo Simulations: Magnetic Field Amplification in Exploding Granules and the Role of Deep and Shallow Recirculation. ApJ **859**(2), 161 (2018) [https://arxiv.org/abs/1805.08390](https://arxiv.org/abs/1805.08390) [astro-ph.SR]. [https://doi.org/10.3847/1538-4357/aabba0](https://doi.org/10.3847/1538-4357/aabba0)
* (18) Riva, F., Steiner, O.: Methodology for estimating the magnetic Prandtl number and application to solar surface small-scale dynamo simulations. A&A **660**, 115 (2022) [https://arxiv.org/abs/2202.12115](https://arxiv.org/abs/2202.12115) [astro-ph.SR]. [https://doi.org/10.1051/0004-6361/202142644](https://doi.org/10.1051/0004-6361/202142644)
* (19) Kapyla, P.J., Kapyla, M.J., Olspert, N., Warnecke, J., Brandenburg, A.: Convection-driven spherical shell dynamos at varying Prandtl numbers. A&A **599**, 4 (2017) [https://arxiv.org/abs/1605.05885](https://arxiv.org/abs/1605.05885) [astro-ph.SR]. [https://doi.org/10.1051/0004-6361/201628973](https://doi.org/10.1051/0004-6361/201628973)
* (20) Hotta, H., Kusano, K.: Solar differential rotation reproduced
with high-resolution simulation. Nature Astronomy **5**, 1100-1102 (2021) [https://arxiv.org/abs/2109.06280](https://arxiv.org/abs/2109.06280) [astro-ph.SR]. [https://doi.org/10.1038/s41550-021-01459-0](https://doi.org/10.1038/s41550-021-01459-0)
* (21) Tobias, S.M., Cattaneo, F.: Shear-driven dynamo waves at high magnetic Reynolds number. Nature **497**(7450), 463-465 (2013). [https://doi.org/10.1038/nature12177](https://doi.org/10.1038/nature12177)
* (22) Bhat, P., Subramanian, K., Brandenburg, A.: A unified large/small-scale dynamo in helical turbulence. MNRAS **461**(1), 240-247 (2016) [https://arxiv.org/abs/1508.02706](https://arxiv.org/abs/1508.02706) [astro-ph.GA]. [https://doi.org/10.1093/mnras/stw1257](https://doi.org/10.1093/mnras/stw1257)
* (23) Squire, J., Bhattacharjee, A.: The magnetic shear-current effect: generation of large-scale magnetic fields by the small-scale dynamo. Journal of Plasma Physics **82**(2), 535820201 (2016) [https://arxiv.org/abs/1512.04511](https://arxiv.org/abs/1512.04511) [astro-ph.HE]. [https://doi.org/10.1017/S0022377816000258](https://doi.org/10.1017/S0022377816000258)
* (24) Hotta, H., Rempel, M., Yokoyama, T.: Large-scale magnetic fields at high reynolds numbers in magnetohydrodynamic simulations. Science **351**(6280), 1427-1430 (2016) [https://arxiv.org/abs/http://science.sciencemag.org/content/351/6280/1427.full.pdf](https://arxiv.org/abs/http://science.sciencemag.org/content/351/6280/1427.full.pdf). [https://doi.org/10.1126/science.aad1893](https://doi.org/10.1126/science.aad1893)
* (25) Vaisala, M.S., Pekkila, J., Kapyla, M.J., Rheinhardt, M., Shang, H., Krasnopolsky, R.: Interaction of Large- and Small-scale Dynamos in Isotropic Turbulent Flows from GPU-accelerated Simulations. ApJ **907**(2), 83 (2021) [https://arxiv.org/abs/2012.08758](https://arxiv.org/abs/2012.08758) [physics.flu-dyn]. [https://doi.org/10.3847/1538-4357/abecca](https://doi.org/10.3847/1538-4357/abecca)
* (26) Rempel, M.: Extension of the MURaM Radiative MHD Code for Coronal Simulations. ApJ **834**, 10 (2017) [https://arxiv.org/abs/1609.09818](https://arxiv.org/abs/1609.09818) [astro-ph.SR]. [https://doi.org/10.3847/1538-4357/834/1/10](https://doi.org/10.3847/1538-4357/834/1/10)
* (27) Kleint, L., Berdyugina, S.V., Shapiro, A.I., Bianda, M.: Solar turbulent magnetic fields: surprisingly homogeneous distribution during the solar minimum. A&A **524**, 37 (2010). [https://doi.org/10.1051/0004-6361/201015285](https://doi.org/10.1051/0004-6361/201015285)
* (28) Buehler, D., Lagg, A., Solanki, S.K.: Quiet Sun magnetic fields observed by Hinode: Support for a local dynamo. A&A **555**, 33 (2013) [https://arxiv.org/abs/1307.0789](https://arxiv.org/abs/1307.0789) [astro-ph.SR]. [https://doi.org/10.1051/0004-6361/201321152](https://doi.org/10.1051/0004-6361/201321152)
* (29) Lites, B.W., Centeno, R., McIntosh, S.W.: The solar cycle dependence of the weak internetwork flux. PASJ **66**, 4 (2014). [https://doi.org/10.1093/pasj/psu082](https://doi.org/10.1093/pasj/psu082)
* [30] Bellot Rubio, L., Orozco Suarez, D.: Quiet Sun magnetic fields: an observational view. Living Reviews in Solar Physics **16**(1), 1 (2019). [https://doi.org/10.1007/s41116-018-0017-1](https://doi.org/10.1007/s41116-018-0017-1)
* [31] Faurobert, M., Ricort, G.: Magnetic flux structuring of the quiet Sun internetwork. Center-to-limb analysis of solar-cycle variations. A&A **651**, 21 (2021) [https://arxiv.org/abs/2105.08657](https://arxiv.org/abs/2105.08657) [astro-ph.SR]. [https://doi.org/10.1051/0004-6361/202140705](https://doi.org/10.1051/0004-6361/202140705)
* [32] Korpi-Lagg, M.J., Korpi-Lagg, A., Olspert, N., Truong, H.-L.: Solar-Cycle Variation of quiet-Sun Magnetism and Surface Gravity Oscillation Mode. arXiv e-prints, 2205-04419 (2022) [https://arxiv.org/abs/2205.04419](https://arxiv.org/abs/2205.04419) [astro-ph.SR]
* [33] Tobias, S.M.: The turbulent dynamo. Journal of Fluid Mechanics **912**, 1 (2021) [https://arxiv.org/abs/1907.03685](https://arxiv.org/abs/1907.03685) [physics.flu-dyn]. [https://doi.org/10.1017/jfm.2020.1055](https://doi.org/10.1017/jfm.2020.1055)
* [34] Schekochihin, A.A., Haugen, N.E.L., Brandenburg, A., Cowley, S.C., Maron, J.L., McWilliams, J.C.: The Onset of a Small-Scale Turbulent Dynamo at Low Magnetic Prandtl Numbers. ApJ **625**(2), 115-118 (2005) [https://arxiv.org/abs/astro-ph/0412594](https://arxiv.org/abs/astro-ph/0412594) [astro-ph]. [https://doi.org/10.1086/431214](https://doi.org/10.1086/431214)
* [35] Tobias, S.M., Cattaneo, F., Boldyrev, S.: In: Davidson, P.A., Kaneda, Y., Sreenivasan, K.R. (eds.) MHD Dynamos and Turbulence, pp. 351-404. Cambridge University Press, Cambridge, UK (2012). [https://doi.org/10.1017/CBO9781139032810.010](https://doi.org/10.1017/CBO9781139032810.010)
* [36] Rogachevskii, I., Kleeorin, N.: Intermittency and anomalous scaling for magnetic fluctuations. Phys. Rev. E **56**(1), 417-426 (1997). [https://doi.org/10.1103/PhysRevE.56.417](https://doi.org/10.1103/PhysRevE.56.417)
* [37] Kleeorin, N., Rogachevskii, I.: Growth rate of small-scale dynamo at low magnetic Prandtl numbers. Phys. Scr **86**(1), 018404 (2012) [https://arxiv.org/abs/1112.3926](https://arxiv.org/abs/1112.3926) [astro-ph.SR]. [https://doi.org/10.1088/0031-8949/86/01/018404](https://doi.org/10.1088/0031-8949/86/01/018404)
* [38] Falkovich, G.: Bottleneck phenomenon in developed turbulence. Physics of Fluids **6**(4), 1411-1414 (1994). [https://doi.org/10.1063/1.868255](https://doi.org/10.1063/1.868255)
* [39] Lohse, D., Muller-Groeling, A.: Bottleneck Effects in Turbulence: Scaling Phenomena in r versus p Space. Phys. Rev. Lett. **74**(10), 1747-1750 (1995) [https://arxiv.org/abs/chao-dyn/9405002](https://arxiv.org/abs/chao-dyn/9405002) [nlin.CD]. [https://doi.org/10.1103/PhysRevLett.74.1747](https://doi.org/10.1103/PhysRevLett.74.1747)
* [40] She, Z.-S., Jackson, E.: On the universal form of energy spectra in fully
developed turbulence. Physics of Fluids A **5**(7), 1526-1528 (1993). [https://doi.org/10.1063/1.858591](https://doi.org/10.1063/1.858591)
* (41) Saddoughi, S.G., Veeravalli, S.V.: Local isotropy in turbulent boundary layers at high Reynolds number. Journal of Fluid Mechanics **268**, 333-372 (1994). [https://doi.org/10.1017/S0022112094001370](https://doi.org/10.1017/S0022112094001370)
* (42) Kuchler, C., Bewley, G., Bodenschatz, E.: Experimental Study of the Bottleneck in Fully Developed Turbulence. Journal of Statistical Physics **175**(3-4), 617-639 (2019) [https://arxiv.org/abs/1812.01370](https://arxiv.org/abs/1812.01370) [physics.flu-dyn]. [https://doi.org/10.1007/s10955-019-02251-1](https://doi.org/10.1007/s10955-019-02251-1)
* (43) Dobler, W., Haugen, N.E., Yousef, T.A., Brandenburg, A.: Bottleneck effect in three-dimensional turbulence simulations. Phys. Rev. E **68**(2), 026304 (2003) [https://arxiv.org/abs/astro-ph/0303324](https://arxiv.org/abs/astro-ph/0303324) [astro-ph]. [https://doi.org/10.1103/PhysRevE.68.026304](https://doi.org/10.1103/PhysRevE.68.026304)
* (44) Donzis, D.A., Sreenivasan, K.R.: The bottleneck effect and the Kolmogorov constant in isotropic turbulence. Journal of Fluid Mechanics **657**, 171-188 (2010). [https://doi.org/10.1017/S0022112010001400](https://doi.org/10.1017/S0022112010001400)
* (45) Kazantsev, A.P.: Enhancement of a Magnetic Field by a Conducting Fluid. Soviet Journal of Experimental and Theoretical Physics **26**, 1031 (1968)
* (46) Brandenburg, A.: Nonlinear Small-scale Dynamos at Low Magnetic Prandtl Numbers. ApJ **741**(2), 92 (2011) [https://arxiv.org/abs/1106.5777](https://arxiv.org/abs/1106.5777) [astro-ph.SR]. [https://doi.org/10.1088/0004-637X/741/2/92](https://doi.org/10.1088/0004-637X/741/2/92)
* (47) Brandenburg, A.: The Inverse Cascade and Nonlinear Alpha-Effect in Simulations of Isotropic Helical Hydromagnetic Turbulence. ApJ **550**, 824-840 (2001) [https://arxiv.org/abs/arXiv:astro-ph/0006186](https://arxiv.org/abs/arXiv:astro-ph/0006186). [https://doi.org/10.1086/319783](https://doi.org/10.1086/319783)
* (48) Pencil Code Collaboration, Brandenburg, A., Johansen, A., Bourdin, P., Dobler, W., Lyra, W., Rheinhardt, M., Bingert, S., Haugen, N., Mee, A., Gent, F., Babkovskaia, N., Yang, C.-C., Heinemann, T., Dintrans, B., Mitra, D., Candelaresi, S., Warnecke, J., Kapyla, P., Schreiber, A., Chatterjee, P., Kapyla, M., Li, X.-Y., Kruger, J., Aarnes, J., Sarson, G., Oishi, J., Schober, J., Plasson, R., Sandin, C., Karchniwy, E., Rodrigues, L., Hubbard, A., Guerrero, G., Snodin, A., Losada, I., Pekkila, J., Qian, C.: The Pencil Code, a modular MPI code for partial differential equations and particles: multipurpose and multiuser-maintained. JOSS **6**(58), 2807 (2021). [https://doi.org/10.21105/joss.02807](https://doi.org/10.21105/joss.02807)
* (49) Bourdin, P.-A.: Driving solar coronal MHD simulations on high-performance computers. Geophysical and Astrophysical Fluid Dynamics **114**(1-2), 235-260 (2020) [https://arxiv.org/abs/1908.08557](https://arxiv.org/abs/1908.08557) [astro-ph.SR]. [https://doi.org/10.1080/03091929.2019.1643849](https://doi.org/10.1080/03091929.2019.1643849)
**Numerical evidence for a small-scale dynamo approaching solar magnetic Prandtl numbers**
**Supplementary Material**
Jörn Warnecke\({}^{1}\), Maarit J. Korpi-Lagg\({}^{2,1,3}\), Frederick A. Gent\({}^{2,4}\) and Matthias Rheinhardt\({}^{2}\)
\({}^{1}\) Max-Planck-Institut für Sonnensystemforschung, Justus-von-Liebig-Weg 3, D-37077 Göttingen, Germany
\({}^{2}\) Department of Computer Science, Aalto University, PO Box 15400, FI-00076 Espoo, Finland
\({}^{3}\) Nordita, KTH Royal Institute of Technology & Stockholm University, Hannes Alfvéns väg 12, SE-11419, Sweden
\({}^{4}\) School of Mathematics, Statistics and Physics, Newcastle University, NE1 7RU, UK
March 13, 2023
## 1 Discussion on the roughness of the flow
We calculate the slope in the low-wavenumber part of the bottleneck by fitting \(E_{\rm kin}\sim k^{\alpha_{\rm b}}\) in the interval \(k_{\rm bs}\ldots k_{\rm b}\). As shown in Supplementary Fig. 1, the values of \(\alpha_{\rm b}\) differ significantly from the \(-5/3\) slope of the inertial range without the bottleneck. Furthermore, we find no clear systematic dependence of \(\alpha_{\rm b}\) on Re.
Figure 1: Slope of the low-wavenumber part of the bottleneck, \(\alpha_{\rm b}\), as a function of fluid Reynolds number Re. The blue line indicates the slope \(-5/3\) in the inertial range without bottleneck; the red line shows a power-law fit with \({\rm Re}^{0.03}\).
## 2 Spectra of a representative subset of the runs
Supplementary Figure 2: Kinetic energy spectra, for explanations see Fig. 4. The notation "\(k_{\nu}\!\rightarrow\)" indicates that \(k_{\nu}\) is outside the accessible \(k\) range.
## 3 Table of all runs
Supplementary Table 1: Overview of all performed runs.
\begin{tabular}{l l l l l l l} \hline \hline Run & Resolution & Pr\({}_{\rm M}\) & Re & Re\({}_{\rm M}\) & Ma & \(\lambda\tau\) & \(\sigma_{\lambda}\tau\) \\ \hline P10 & 256\({}^{3}\) & 1.00 & 46 & 46 & 0.075 & 0.0102 & 0.0015 \\ \hline A80 & 256\({}^{3}\) & 8.00 & 99 & 790 & 0.080 & 0.0962 & 0.0013 \\ A40 & 256\({}^{3}\) & 4.00 & 99 & 394 & 0.080 & 0.0794 & 0.0016 \\ A20 & 256\({}^{3}\) & 2.00 & 99 & 197 & 0.080 & 0.0557 & 0.0009 \\ A15 & 256\({}^{3}\) & 1.50 & 99 & 148 & 0.080 & 0.0458 & 0.0010 \\ A10 & 256\({}^{3}\) & 1.00 & 99 & 99 & 0.080 & 0.0315 & 0.0007 \\ A08 & 256\({}^{3}\) & 0.80 & 98 & 78 & 0.079 & 0.0200 & 0.0008 \\ A05 & 256\({}^{3}\) & 0.50 & 98 & 49 & 0.079 & -0.0008 & 0.0019 \\ A04 & 256\({}^{3}\) & 0.40 & 97 & 39 & 0.079 & -0.0052 & 0.0016 \\ \hline B20 & 256\({}^{3}\) & 2.00 & 405 & 811 & 0.081 & 0.1129 & 0.0010 \\ B10 & 256\({}^{3}\) & 1.00 & 404 & 404 & 0.081 & 0.0707 & 0.0004 \\ B05 & 256\({}^{3}\) & 0.50 & 404 & 202 & 0.081 & 0.0334 & 0.0019 \\ B0375 & 256\({}^{3}\) & 0.375 & 403 & 151 & 0.081 & 0.0182 & 0.0011 \\ B025 & 256\({}^{3}\) & 0.25 & 405 & 101 & 0.081 & 0.0015 & 0.0010 \\ B02 & 256\({}^{3}\) & 0.2 & 405 & 81 & 0.082 & -0.0060 & 0.0025 \\ B0125 & 256\({}^{3}\) & 0.125 & 405 & 51 & 0.082 & -0.0217 & 0.0025 \\ B01 & 256\({}^{3}\) & 0.1 & 404 & 40 & 0.082 & -0.0287 & 0.0015 \\ \hline C10 & 256\({}^{3}\) & 1.00 & 1022 & 1022 & 0.082 & 0.1428 & 0.0075 \\ C08 & 256\({}^{3}\) & 0.80 & 1014 & 811 & 0.082 & 0.1079 & 0.0028 \\ C04 & 256\({}^{3}\) & 0.40 & 1013 & 405 & 0.082 & 0.0486 & 0.0007 \\ C02 & 256\({}^{3}\) & 0.20 & 1008 & 201 & 0.081 & 0.0132 & 0.0009 \\ C015 & 256\({}^{3}\) & 0.15 & 1014 & 152 & 0.082 & 0.0027 & 0.0008 \\ C01 & 256\({}^{3}\) & 0.10 & 1013 & 101 & 0.081 & -0.0074 & 0.0010 \\ C008 & 256\({}^{3}\) & 0.08 & 1015 & 81 & 0.082 & -0.0154 & 0.0022 \\ C005 & 256\({}^{3}\) & 0.05 & 1019 & 51 & 0.082 & -0.0239 & 0.0014 \\ \hline D10 & 512\({}^{3}\) & 1.00 & 2053 & 2053 & 0.083 & 0.1853 & 0.0052 \\ D04 & 512\({}^{3}\) & 0.40 & 2038 & 815 & 0.082 & 0.0682 & 0.0017 \\ D02 & 512\({}^{3}\) & 0.20 & 2026 & 405 & 0.082 & 0.0235 & 0.0006 \\ D01 & 512\({}^{3}\) & 0.10 & 2038 & 204 & 0.082 & 0.0053 & 0.0021 \\ D0075 & 512\({}^{3}\) & 0.075 & 2029 & 152 & 0.082 & -0.0005 & 0.0015 \\ D005 & 512\({}^{3}\) & 0.05 & 2031 & 102 & 0.082 & -0.0058 & 0.0017 \\ D004 & 512\({}^{3}\) & 0.04 & 2034 & 81 & 0.082 & -0.0121 & 0.0009 \\ D0025 & 512\({}^{3}\) & 0.025 & 2030 & 51 & 0.082 & -0.0236 & 0.0017 \\ \hline E10 & 1024\({}^{3}\) & 1.00 & 4069 & 4069 & 0.082 & 0.2769 & 0.0126 \\ E08 & 1024\({}^{3}\) & 0.80 & 3999 & 3199 & 0.081 & 0.2141 & 0.0091 \\ E04 & 1024\({}^{3}\) & 0.40 & 4087 & 1635 & 0.082 & 0.1148 & 0.0324 \\ E02 & 1024\({}^{3}\) & 0.20 & 4064 & 813 & 0.082 & 0.0401 & 0.0025 \\ E01 & 1024\({}^{3}\) & 0.10 & 4075 & 407 & 0.082 & 0.0155 & 0.0009 \\ E008 & 1024\({}^{3}\) & 0.08 & 4089 & 325 & 0.082 & 0.0191 & 0.0025 \\ E005 & 1024\({}^{3}\) & 0.05 & 4090 & 205 & 0.082 & 0.0044 & 0.0021 \\ E00375 & 1024\({}^{3}\) & 0.0375 & 4121 & 155 & 0.083 & 0.0006 & 0.0024 \\ E003 & 1024\({}^{3}\) & 0.03 & 4084 & 123 & 0.082 & -0.0035 & 0.0012 \\ E002 & 1024\({}^{3}\) & 0.02 & 4037 & 81 & 0.081 & -0.0163 & 0.0023 \\ E001 & 1024\({}^{3}\) & 0.01 & 4043 & 40 & 0.082 & -0.0232 & 0.0029 \\ \hline \end{tabular}
\(\rm Pr_{M}\) is the magnetic Prandtl number, Re and Re\({}_{\rm M}\) are the fluid and magnetic Reynolds numbers, Ma = \(u_{\rm rms}/c_{\rm s}\) is the Mach number, \(\tau=1/u_{\rm rms}k_{\rm f}\) is a rough estimate for the turnover time, \(\lambda\) is the SSD growth rate with its error \(\sigma_{\lambda}\).
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline Run & Resolution & Pr\({}_{\rm M}\) & Re & Re\({}_{\rm M}\) & Ma & \(\lambda\tau\) & \(\sigma_{\lambda}\tau\) \\ \hline F01 & 2048\({}^{3}\) & 0.10 & 8028 & 803 & 0.081 & 0.0320 & 0.0046 \\ F005 & 2048\({}^{3}\) & 0.05 & 7959 & 398 & 0.080 & 0.0192 & 0.0029 \\ F0025 & 2048\({}^{3}\) & 0.025 & 8050 & 201 & 0.081 & 0.0108 & 0.0026 \\ F001875 & 2048\({}^{3}\) & 0.01875 & 8093 & 152 & 0.082 & 0.0039 & 0.0027 \\ F00125 & 2048\({}^{3}\) & 0.0125 & 8171 & 102 & 0.082 & -0.0004 & 0.0022 \\ F001 & 2048\({}^{3}\) & 0.01 & 8178 & 82 & 0.082 & -0.0009 & 0.0033 \\ F0005 & 2048\({}^{3}\) & 0.005 & 8217 & 41 & 0.083 & -0.0162 & 0.0035 \\ \hline FG00125 & 2048\({}^{3}\) & 0.0125 & 12580 & 157 & 0.084 & 0.0026 & 0.0015 \\ FG001 & 2048\({}^{3}\) & 0.01 & 12313 & 123 & 0.082 & -0.0041 & 0.0021 \\ FG0005 & 2048\({}^{3}\) & 0.005 & 12415 & 62 & 0.083 & -0.0146 & 0.0039 \\ \hline G00125 & 4096\({}^{3}\) & 0.0125 & 18391 & 229 & 0.093 & 0.0372 & 0.0063 \\ G001 & 4096\({}^{3}\) & 0.01 & 18200 & 182 & 0.092 & 0.0065 & 0.0062 \\ G000625 & 4096\({}^{3}\) & 0.00625 & 16126 & 101 & 0.081 & -0.0049 & 0.0069 \\ G0005 & 4096\({}^{3}\) & 0.005 & 16071 & 80 & 0.081 & -0.0152 & 0.0008 \\ \hline H0005 & 4096\({}^{3}\) & 0.005 & 32930 & 165 & 0.083 & 0.0103 & 0.0019 \\ H00031 & 4096\({}^{3}\) & 0.003125 & 32717 & 102 & 0.082 & 0.0021 & 0.0052 \\ H00025 & 4608\({}^{3}\) & 0.0025 & 32910 & 82 & 0.083 & -0.0097 & 0.0035 \\ \hline \end{tabular}
\end{table}
Table 1: continued.
## 4 Table of spectral properties of a subset of the runs
Supplementary Table 2: Selected runs with spectral properties.
\begin{tabular}{c c c c c c c c c} \hline \hline Pr\({}_{\rm M}\) & Re & Re\({}_{\rm M}\) & \(k_{\nu}\) & \(k_{\rm bs}\) & \(k_{\rm b}\) & \(k_{\rm M}\) & \(k_{\eta}\) & \(\alpha_{\rm b}\) \\ \hline
\hline \end{tabular}
\({\rm Pr_{M}}\) is the magnetic Prandtl number, Re and Re\({}_{\rm M}\) are the fluid and magnetic Reynolds numbers, respectively. \(k_{\nu}=(\epsilon_{\rm K}/\nu^{3})^{1/4}\) is the viscous dissipation wavenumber with the viscous dissipation rate \(\epsilon_{\rm K}\). \(k_{\rm b}\) and \(k_{\rm bs}\) locate the maximum of the bottleneck and its starting point (75% of the maximum), respectively. \(k_{\rm M}=\int_{k}E_{\rm mag}(k)\,k\,dk/\int_{k}E_{\rm mag}(k)\,dk\) is the characteristic magnetic wavenumber. \(k_{\eta}=k_{\nu}{\rm Pr_{M}^{3/4}}\) is the ohmic dissipation wavenumber and \(\alpha_{\rm b}\) is the slope of the low-wavenumber part of the bottleneck. All wavenumbers are in units of \(k_{1}\).
## 5 Details of fitting procedures and error calculation
### Growth rates
The growth rate \(\lambda\) is calculated from the time series of \(B_{\rm rms}\) of the hydrodynamically saturated stage. For this, we fit \(B_{\rm rms}(t)\) with \(\exp(\lambda t)\) using a least-\(\chi^{2}\) fit. For a small number of runs, mostly at \({\rm Pr}_{\rm M}\approx 1\), we need to exclude the part of the time series where \(B_{\rm rms}\ll\sqrt{\rho u_{\rm rms}^{2}}\) no longer holds, i.e. where the kinematic stage has ended. The majority of runs always stays in the kinematic stage though. The error of the growth rate, \(\sigma_{\lambda}\), is estimated from the maximum deviation of the fit, which is determined as follows: the time series is divided into three equal parts, each fitted to obtain its value \(\lambda_{i}\), \(i=1,\ldots,3\). We then use half the maximum difference to \(\lambda\) for the error, i.e. \(\sigma_{\lambda}=\max|\lambda_{i}-\lambda|/2\). The minimal length of the time series was chosen to guarantee that \(\sigma_{\lambda}\) is small enough, while the maximum length is bounded by the resolution, hence by the computational cost. For resolutions of \(256^{3}-512^{3}\), we have \(t_{\rm max}/\tau\approx 500\), for \(1024^{3}-2048^{3}\), \(t_{\rm max}/\tau\approx 150\) and for \(4096^{3}\), \(t_{\rm max}/\tau\approx 15\). \(\lambda\) and its error \(\sigma_{\lambda}\) are listed in Supplementary Table 1.
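A minimal sketch of this procedure, assuming the time series has already been restricted to the kinematic stage (fitting \(\ln B_{\rm rms}\) linearly in time is equivalent to the exponential fit up to weighting):

```python
import numpy as np

def growth_rate(t, B_rms):
    """Growth rate lambda and its error sigma_lambda from the three-part splitting."""
    slope = lambda ts, Bs: np.polyfit(ts, np.log(Bs), 1)[0]
    lam = slope(t, B_rms)
    # fit each third of the series separately and take half the maximum deviation
    parts = np.array_split(np.arange(t.size), 3)
    lam_i = [slope(t[p], B_rms[p]) for p in parts]
    sigma = 0.5 * max(abs(li - lam) for li in lam_i)
    return lam, sigma
```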
### Critical magnetic Reynolds number
To calculate the critical magnetic Reynolds number \({\rm Re}_{\rm M}^{\rm crit}\) for each \({\rm Pr}_{\rm M}\) set, we fit a logarithmic function to the growth rate, i.e. \(\lambda\tau=C_{1}\ln({\rm Re}_{\rm M})+C_{2}\). This functional form is motivated by the theoretical expectation of [4, 1] and also by the distribution of the growth rates in Fig. 3a. We again use a least-\(\chi^{2}\) fit, but take into account the growth rate errors, \(\sigma_{\lambda}\), for the weights. It turns out that the pre-factor \(C_{1}\) varies between \(0.017\) and \(0.028\) with a mean of \(0.022\). Thus, we conclude that the chosen functional form is appropriate. \({\rm Re}_{\rm M}^{\rm crit}({\rm Pr}_{\rm M})\) is calculated analytically from the zero of the fitting function, \({\rm Re}_{\rm M}^{\rm crit}=\exp(-C_{2}/C_{1})\), and its error \(\sigma_{{\rm Re}_{\rm M}^{\rm crit}}\) is derived directly from each fit, representing a one-sigma uncertainty of \({\rm Re}_{\rm M}^{\rm crit}\). The values and errors of \({\rm Re}_{\rm M}^{\rm crit}\) are then used in Fig. 3b and are listed in Supplementary Table 3 together with \(C_{1}\). For the relative error \(\sigma_{{\rm Re}_{\rm M}^{\rm crit}}/{\rm Re}_{\rm M}^{\rm crit}\), one finds for most of the runs values around \(10-20\%\). However, taking these errors into account the decrease of \({\rm Re}_{\rm M}^{\rm crit}\) as function of \({\rm Pr}_{\rm M}\) is still significant.
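The corresponding fit can be written compactly as below; the error propagation shown is one standard way of turning the fit covariance into a one-sigma uncertainty on \({\rm Re}_{\rm M}^{\rm crit}\) and is meant as an illustration of the procedure, not as the exact script used here:

```python
import numpy as np
from scipy.optimize import curve_fit

def critical_ReM(Re_M, lam_tau, sigma_lam_tau):
    """Fit lambda*tau = C1*ln(Re_M) + C2 and return Re_M^crit = exp(-C2/C1) with its error."""
    model = lambda ReM, C1, C2: C1 * np.log(ReM) + C2
    (C1, C2), cov = curve_fit(model, Re_M, lam_tau,
                              sigma=sigma_lam_tau, absolute_sigma=True)
    ReM_crit = np.exp(-C2 / C1)
    grad = np.array([C2 / C1**2, -1.0 / C1]) * ReM_crit   # d(ReM_crit)/d(C1, C2)
    sigma_crit = float(np.sqrt(grad @ cov @ grad))
    return ReM_crit, sigma_crit, C1
```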
### Other fits
To obtain \({\rm Re}_{\rm M}^{\rm crit}({\rm Pr}_{\rm M})\propto{\rm Pr}_{\rm M}^{0.125}\) in Fig. 3b, we also use a least-\(\chi^{2}\) fit, employing the errors \(\sigma_{{\rm Re}_{\rm M}^{\rm crit}}\); we proceed similarly for the power-law fits \(k_{\rm M}/k_{\nu}\propto{\rm Pr}_{\rm M}^{\alpha}\) shown in Fig. 5.
## 6 Magnetic energy transfer functions
Seeking further insight into the dynamo operating in the three regimes of \(\mathrm{Pr_{M}}\), we look at the spectral magnetic energy transfer functions. We follow the approach of [2], but using the convention by [3], where the contribution of compressibility is subsumed in the stretching/shearing and advection terms, \(T_{\mathrm{Str}}\) and \(T_{\mathrm{Adv}}\), respectively, as follows
\[T_{\mathrm{Str}}(\mathbf{k})=\hat{\mathbf{B}}(\mathbf{k})\cdot\widehat{(+\mathbf{B}\cdot\mathbf{\nabla}\mathbf{u}-\mathbf{B}\,\mathbf{\nabla}\cdot\mathbf{u}/2)}^{*}(\mathbf{k})/2\mu_{0}+\mathrm{c.c.} \tag{1}\]
\[T_{\mathrm{Adv}}(\mathbf{k})=\hat{\mathbf{B}}(\mathbf{k})\cdot\widehat{(-\mathbf{u}\cdot\mathbf{\nabla}\mathbf{B}-\mathbf{B}\,\mathbf{\nabla}\cdot\mathbf{u}/2)}^{*}(\mathbf{k})/2\mu_{0}+\mathrm{c.c.} \tag{2}\]
Here, the hats indicate the Fourier transform and c.c. refers to the complex conjugate expressions. In Supplementary Fig. 4a, we show these functions after shell integration, i.e. as functions of \(k=|\mathbf{k}|\), for six runs with \(0.005\leq\mathrm{Pr_{M}}\leq 0.8\). As expected, the curves peak at higher wavenumbers for higher \(\mathrm{Pr_{M}}\). We also look at the ratio of the wavenumbers at which \(T_{\mathrm{Str}}\) and \(T_{\mathrm{Adv}}\) have their maximum and minimum, respectively, \(k_{\mathrm{Str}}/k_{\mathrm{Adv}}\), which is determined from the curves after smoothing. We find that this ratio is maximal where the dynamo is hardest to excite, shown as the grey shaded area in Supplementary Fig. 4b. However, we should note that the simulations with the two lowest \(\mathrm{Pr_{M}}\) have a higher resolution than the other four, which could also influence the result. It remains unclear whether, or how, this enhanced ratio would relate to the difficulty in exciting the dynamo at \(0.1\geq\mathrm{Pr_{M}}\geq 0.04\).
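The shell-integrated transfer terms can be evaluated from snapshots as in the following numpy sketch (normalizations and names are ours; the production analysis uses the code's internal diagnostics):

```python
import numpy as np

def transfer_functions(u, B, dx, mu0=1.0):
    """Shell-integrated T_Str(k) and T_Adv(k) of Supplementary Eqs. (1)-(2)."""
    N = u.shape[-1]
    kv = 2 * np.pi * np.fft.fftfreq(N, d=dx)
    KX, KY, KZ = np.meshgrid(kv, kv, kv, indexing="ij")

    def grad(f):  # spectral gradient of a scalar field
        fh = np.fft.fftn(f)
        return [np.real(np.fft.ifftn(1j * K * fh)) for K in (KX, KY, KZ)]

    du = [grad(u[i]) for i in range(3)]      # du[i][j] = d u_i / d x_j
    dB = [grad(B[i]) for i in range(3)]
    divu = du[0][0] + du[1][1] + du[2][2]

    BgradU = np.array([sum(B[j] * du[i][j] for j in range(3)) for i in range(3)])
    UgradB = np.array([sum(u[j] * dB[i][j] for j in range(3)) for i in range(3)])
    T_str_real = BgradU - 0.5 * B * divu     # B.grad u - B div(u)/2
    T_adv_real = -UgradB - 0.5 * B * divu    # -u.grad B - B div(u)/2

    Bhat = np.array([np.fft.fftn(b) / N**3 for b in B])
    shells = np.rint(np.sqrt(KX**2 + KY**2 + KZ**2) / (2 * np.pi / (N * dx))).astype(int)

    def shell_integrate(T):
        That = np.array([np.fft.fftn(t) / N**3 for t in T])
        # B^hat . conj(T^hat)/(2 mu0) + c.c.  ->  Re(B^hat . conj(T^hat))/mu0
        spec = np.sum(np.real(Bhat * np.conj(That)), axis=0) / mu0
        return np.bincount(shells.ravel(), weights=spec.ravel())

    return shell_integrate(T_str_real), shell_integrate(T_adv_real)
```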
## 7 Numerical diffusivity
In our low \(\Pr_{\rm M}\) simulations, \(\eta\gg\nu\), hence numerical diffusion might only play a role for the velocity field. We estimate the numerical viscosity in our simulations in two ways. Firstly, we follow the approach of [2] to estimate the fluid Reynolds number from the power spectra. For this we determine the Taylor microscale
\[\lambda_{\rm TM}=\left(\int_{k}E_{\rm kin}(k)dk\,/\!\int_{k}k^{2}\,E_{\rm kin}(k )dk\right)^{1/2}, \tag{3}\]
and the integral scale for turbulent motions
\[\lambda_{\rm int}=\int_{k}k^{-1}E_{\rm kin}(k)dk\,/\!\int_{k}E_{\rm kin}(k)dk. \tag{4}\]
These scales can be used to estimate the effective Reynolds number as
\[{\rm Re}_{\rm eff}\propto\left(\lambda_{\rm int}/\lambda_{\rm TM}\right)^{2}. \tag{5}\]
This is often used to estimate \({\rm Re}_{\rm eff}\) from observations or in simulations that use only numerical diffusivities, although the proportionality constant is unknown. As shown in Supplementary Fig. 5, the effective Re based on Equation (5) scales linearly with Re based on the prescribed value of \(\nu\). Assuming, plausibly, that numerical diffusion enters mainly in the form of hyperdiffusion, we infer from this linear relationship that a notable influence of numerical diffusion can be ruled out.
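For illustration, Equations (3)-(5) translate into a few lines of post-processing (sketch only; assumes the \(k=0\) mode is excluded so that the \(k^{-1}\) weighting is well defined):

```python
import numpy as np

def effective_Re(k, E_kin):
    """Taylor microscale, integral scale and the Re_eff proxy of Eqs. (3)-(5)."""
    lam_TM  = np.sqrt(np.trapz(E_kin, k) / np.trapz(k**2 * E_kin, k))
    lam_int = np.trapz(E_kin / k, k) / np.trapz(E_kin, k)
    return lam_TM, lam_int, (lam_int / lam_TM) ** 2   # last value is proportional to Re_eff
```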
In addition, we performed a flow decay experiment to estimate the numerical diffusion from the flow decay rate in the absence of all other terms. The resulting effective viscosity is to high precision the same as the prescribed \(\nu\) up to wavenumbers \(k\lesssim 1000\,k_{1}\), covering safely all scales relevant for the magnetic field. From these two results we conclude that our numerical simulations are not affected by numerical diffusion, and that the \(\Pr_{\rm M}\) regimes are accurately identified.
|
2308.15487
|
SA Unet Improved
|
Retinal vessel segmentation is a well-known problem in medical image processing. Good segmentation may help doctors make better decisions while diagnosing eye diseases. This paper describes our work taking up the DRIVE challenge, which involves segmentation of retinal vessels. We introduce a new method that combines StyleGAN2 and SA-UNet. Our approach can help with any small-dataset segmentation problem.
|
Nadav Potesman, Ariel Rechtman
|
2023-08-18T06:26:47Z
|
http://arxiv.org/abs/2308.15487v1
|
# SA-UNet for Retinal Vessel Improvement using StyleGAN2
###### Abstract
Retinal vessel segmentation is a well-known problem in medical image processing. Good segmentation may help doctors make better decisions while diagnosing eye diseases. This paper describes our work taking up the DRIVE challenge, which involves segmentation of retinal vessels. We introduce a new method that combines StyleGAN2 and SA-UNet. Our approach can help with any small-dataset segmentation problem.
Key words: Segmentation, retinal blood vessel, Neural Network, StyleGAN, U-Net.
## 1 Introduction
We introduce a method to improve the segmentation of retinal images by creating images and their corresponding segmentation maps. Our proposed solution includes training a StyleGAN2 [KLA\({}^{+}\)19] with the DRIVE dataset1 and then segmenting the newly synthesized images using the SA-Unet [GSY\({}^{+}\)20] model, which currently gives state-of-the-art results on segmenting DRIVE images. We used the newly generated images along with their segmentations in order to retrain SA-Unet and achieve better results.
Footnote 1: [https://drive.grand-challenge.org/Download/](https://drive.grand-challenge.org/Download/)
Over the past several years, Neural Networks (NNs) have become the best way to deal with many computer vision problems. While segmentation problems used to be solved with edge detection and other standard image processing algorithms, NNs have shown better performance.
GAN networks have had a tremendous effect on image generation in the last few years. In 2019, Karras et al. from the NVIDIA group showed a breakthrough development when they presented the StyleGAN network [KLA18], which is a style-based generator. StyleGAN images are very realistic, high-resolution, and can be controlled easily through latent vectors.
Changlu Guo et al. from Budapest University proposed a solution for the DRIVE challenge. Their solution is one of the most successful solutions to the challenge these days. In their paper, the researchers propose a change to the architecture of the U-Net [RFB15] and rename the new model SA-Unet. The main differences between U-Net and SA-Unet are in the backbone of the convolution layers and in the spatial attention block between the encoder and the decoder, as shown in Figure 1. Those changes increase the performance of SA-Unet, especially for tiny vessel segmentation. The performance of SA-Unet on the DRIVE dataset is shown below (Table 1).
## 2 Related Work
Small datasets have always been one of the most problematic settings for machine learning algorithms, and especially for Neural Network algorithms. The most familiar way to increase the size of the
\begin{table}
\begin{tabular}{l|l|l|l|l|l} Method & Sensitivity & Specificity & Accuracy & AUC & F1 Score \\ \hline SA-UNet & 0.8212 & 0.9840 & 0.9698 & 0.9864 & 0.8263 \\ U-Net & 0.7677 & 0.9857 & 0.9666 & 0.9789 & 0.8012 \\ IterNet & 0.7735 & 0.9838 & 0.9573 & 0.9816 & 0.8205 \\ \end{tabular}
\end{table}
Table 1: Performance comparison on the DRIVE dataset.
segment; it has always been a big challenge, because the width of a vessel may range from a single pixel up to ten pixels or more. Even for a human, this task is considered tough.
U-Net [14] has done a great job of segmenting the data. The IterNet [15] network takes advantage of U-Net by concatenating a U-Net with many mini-U-Nets. IterNet works by first producing a rough segmentation map and then improving that segmentation map by applying the mini-U-Nets to small regions.
SA-Unet [16] takes the U-Net architecture but changes the backbone of the convolution layers and adds spatial attention between the encoder and the decoder.
Gal Ofir et al. [1] took a step forward on this topic when they combined StyleGAN with a Neural Network in order to increase the size of the dataset: they took a pre-trained IterNet and, after generating synthetic images using StyleGAN, segmented those images; after that they made manual corrections to the segmented data. In this way they got around the problem of the small dataset, at the price of less realistic images and manual correction.
## 3 Data
The Digital Retinal Images for Vessel Extraction (DRIVE) dataset contains 20 images for training and 20 images for testing; the image resolution is 565\(\times\)584 in RGB format. Each image has a mask that covers the actual retinal area. The training images contain a single manual segmentation of the vasculature. All human observers who manually segmented the vasculature were instructed and trained by an experienced ophthalmologist. They were asked to mark all pixels for which they were at least 70 percent certain that they were vessels.
The data also contains a variety of eyes, both healthy and diseased. We did not make use of the eye status during our work.
## 4 Methods
### SA-Unet
The DRIVE dataset is a well-known challenge which has been solved in many ways. SA-Unet is one of the SOTA solutions. The big advantage of SA-Unet is that it combines U-Net, SD-UNet, and a
Figure 1: SA-UNet architecture.
spatial attention block. Besides that, they provide freely available, well-working code on a GitHub page2, which makes the work easier. The architecture of SA-Unet is very similar to U-Net, but SA-Unet has fewer parameters to optimize, so the training process is much shorter and overfitting problems are less likely to happen.
Footnote 2: [https://github.com/clguo/SA-UNet](https://github.com/clguo/SA-UNet)
Dropout has been replaced with DropBlock, which discards contiguous regions of the feature map instead of randomly discarding individual weights [11]; in addition, Batch Normalization is placed between every pair of convolutional layers, as shown in Figure 2. Moreover, SA-UNet comes with a spatial attention block between the encoder and the decoder. Attention has become one of the most powerful techniques in deep learning in recent years [12].
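For illustration, a minimal Keras sketch of a CBAM-style spatial attention block is shown below; the exact layer order, kernel size, and normalization in the official SA-UNet code may differ, so treat this as an assumption rather than the reference implementation.

```
import tensorflow as tf
from tensorflow.keras import layers

def spatial_attention(x, kernel_size=7):
    """CBAM-style spatial attention: pool over channels, infer a per-pixel weight map,
    and reweight the input feature map. Illustrative sketch, not the official SA-UNet code."""
    avg_pool = layers.Lambda(lambda t: tf.reduce_mean(t, axis=-1, keepdims=True))(x)
    max_pool = layers.Lambda(lambda t: tf.reduce_max(t, axis=-1, keepdims=True))(x)
    concat = layers.Concatenate(axis=-1)([avg_pool, max_pool])            # (B, H, W, 2)
    attn = layers.Conv2D(1, kernel_size, padding="same", activation="sigmoid")(concat)
    return layers.Multiply()([x, attn])                                    # (B, H, W, C)
```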
### StyleGAN
Data augmentation has always been a central problem in computer vision. There are many ways to augment data, starting from trivial classical image transforms such as flips, crops, and rotations. Those methods achieve decent performance at first, but there is still a need to create images that are close to the real ones while differing in substantial ways. Neural networks opened an opportunity to create "deep fake" images, and Generative Adversarial Networks (GANs) provide an alternative way to create such images. Nowadays the state-of-the-art (SOTA) GAN is StyleGAN2, when compared to SW-GAN and DCGAN [13, 1].
### Transfer Learning
One of the best ways to train a neural network model is to take a pre-trained network and train only the final layers. We examined transfer learning versus training a completely new network from scratch. To evaluate the generated images we chose the FID metric, given in Equation 1.
\[FID(x,g)=\|\mu_{x}-\mu_{g}\|_{2}^{2}+Tr\left(\sigma_{x}+\sigma_{g}-2(\sigma_{x}\sigma_{g})^{1/2}\right) \tag{1}\]
The FID metric measures the distance between \(x\), the original images, and \(g\), the generated images, after transforming the images into a feature space; \(\mu\) stands for the mean and \(\sigma\) for the covariance, as in a multivariate Gaussian distribution, and \(Tr\) stands for the trace.
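For reference, a minimal NumPy/SciPy sketch of Equation 1 is given below. It assumes the images have already been mapped to feature vectors (for example Inception activations); the function is illustrative and is not the exact implementation used in our experiments.

```
import numpy as np
from scipy.linalg import sqrtm

def fid(features_real, features_gen):
    """Frechet Inception Distance (Eq. 1) between two sets of feature vectors;
    rows are samples, columns are feature dimensions."""
    mu_r, mu_g = features_real.mean(axis=0), features_gen.mean(axis=0)
    sigma_r = np.cov(features_real, rowvar=False)
    sigma_g = np.cov(features_gen, rowvar=False)
    covmean = sqrtm(sigma_r @ sigma_g)        # matrix square root of the covariance product
    if np.iscomplexobj(covmean):              # numerical noise can add tiny imaginary parts
        covmean = covmean.real
    return np.sum((mu_r - mu_g) ** 2) + np.trace(sigma_r + sigma_g - 2.0 * covmean)
```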
Our tests showed that transfer learning performs almost the same as training StyleGAN from scratch; a StyleGAN-synthesized image compared to a real image is shown in Figure 4.
### Suggested Pipeline
In our work we suggest a pipeline that may help any small-dataset problem achieve better performance based on the same technique (Figure 3).
1. Train the best Neural Network with the original dataset.
Figure 2: DropBlock
2. Train StyleGAN2 with the original dataset; transfer learning is also a good option.
3. Segment the synthetic generated images using the network from step 1.
4. Train the same network from scratch using all the data.
5. (Optional) Repeat steps 2-4 until the results are good enough.
One of our conclusions is that it is recommended to start training the SOTA network on the generated data and to run the final epochs on the original data. This makes sense because the original data has the most accurate segmentations, so it yields the best fine-tuning. A sketch of the whole pipeline is given below.
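The following Python sketch orchestrates steps 1-5 above. Every callable passed in (training, generation, and segmentation routines) is a placeholder for the actual SA-UNet and StyleGAN2 code, so this is an illustration of the control flow rather than a definitive end-to-end script.

```
def synthetic_data_pipeline(train_from_scratch, fine_tune, train_stylegan, generate,
                            segment, real_images, real_masks, n_synth=1000, rounds=1):
    """Hypothetical orchestration of the suggested pipeline (steps 1-5)."""
    model = train_from_scratch(real_images, real_masks)            # step 1: best net on real data
    gan = train_stylegan(real_images)                              # step 2: GAN (or transfer learning)
    for _ in range(rounds):                                        # step 5: optional repetition
        synth_images = generate(gan, n_synth)
        synth_masks = segment(model, synth_images)                 # step 3: pseudo-label synthetics
        model = train_from_scratch(list(real_images) + list(synth_images),
                                   list(real_masks) + list(synth_masks))   # step 4: retrain on all data
    return fine_tune(model, real_images, real_masks)               # final epochs on the original data
```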
### Ensemble
Another approach we test in this paper is to use the above pipeline together with the original SA-UNet in order to create an ensemble of networks. This is described in detail in the next section.
## 5 Experiments
### Evaluation Metrics
To evaluate the model, we compare the segmentation result to the ground truth and classify each pixel as True Positive (TP), False Positive (FP), False Negative (FN), or True Negative (TN). Sensitivity (SE), also known as the True Positive Rate (TPR), is the proportion of actual vessel pixels that are correctly identified, SE = TP/(TP+FN). Specificity (SP) is the proportion of background pixels that are correctly identified, SP = TN/(TN+FP). Precision (PR), also known as the Positive Predictive Value (PPV), is the proportion of true positives among all predicted positive pixels, PR = TP/(TP+FP). Accuracy (ACC) is the ratio between the correctly classified pixels and the total number of pixels, ACC = (TP+TN)/(TP+TN+FP+FN). Finally, a metric widely used to evaluate segmentation is the F-measure (F1) [S\({}^{+}\)07], defined by:
\[F_{1}=\frac{2\cdot PR\cdot SE}{PR+SE} \tag{2}\]
This metric captures the similarity between the predicted and ground-truth segmentations. Finally, the most important metric is the Area Under the ROC Curve (AUC).
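A minimal NumPy sketch of these pixel-wise metrics is given below (AUC is omitted because it requires the continuous probability map rather than the binarized prediction); it is provided only as a reference implementation of the formulas above.

```
import numpy as np

def segmentation_metrics(pred, gt):
    """Pixel-wise metrics for binary masks `pred` and `gt` (same shape, values 0/1)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt)
    tn = np.sum(~pred & ~gt)
    fp = np.sum(pred & ~gt)
    fn = np.sum(~pred & gt)
    se = tp / (tp + fn)                        # sensitivity / TPR
    sp = tn / (tn + fp)                        # specificity
    pr = tp / (tp + fp)                        # precision / PPV
    acc = (tp + tn) / (tp + tn + fp + fn)      # accuracy
    f1 = 2 * pr * se / (pr + se)               # Eq. 2
    return {"SE": se, "SP": sp, "PR": pr, "ACC": acc, "F1": f1}
```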
### Training Details
The training of both models (StyleGAN2 and SA-UNet) was done using Google Colab, Pro+ edition. Preprocessing included resizing the images so that their sides are a power of 2; the chosen size is 512×512 pixels.
StyleGAN2 was trained using the StyleGAN2-ADA PyTorch implementation3, with rotation and scaling used for augmentation. The "fid50k-full" metric was chosen as the training quality metric, with the Adam optimizer, for 4K ticks, starting from a StyleGAN pre-trained on the "ffhq512" dataset. After training we created 1000 images with a truncation of 0.7.
Footnote 3: [https://github.com/NVlabs/stylegan2-ada-pytorch](https://github.com/NVlabs/stylegan2-ada-pytorch)
SA-UNet was initially trained with the parameters of the original paper [G
Figure 4: Comparison of an original image (left) to a StyleGAN2 image (right), with zoom-in
epochs; the first 100 epochs were trained with an adaptive learning rate starting at 0.001, and the last 50 epochs with a learning rate starting at 0.0001. For the optimizer we chose Adam, and the loss was chosen to be the average of the Dice coefficient loss and binary cross-entropy. The size of the discarded blocks of DropBlock is set to 7, as in the original paper.
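A minimal tf.keras sketch of such a combined loss is shown below; the exact weighting and smoothing constants used in the original SA-UNet code may differ, so this is only an assumption-level illustration.

```
import tensorflow as tf

def dice_bce_loss(y_true, y_pred, smooth=1.0):
    """Average of a soft Dice loss and binary cross-entropy over per-pixel probabilities."""
    y_true_f = tf.reshape(y_true, [-1])
    y_pred_f = tf.reshape(y_pred, [-1])
    intersection = tf.reduce_sum(y_true_f * y_pred_f)
    dice = (2.0 * intersection + smooth) / (
        tf.reduce_sum(y_true_f) + tf.reduce_sum(y_pred_f) + smooth)
    bce = tf.reduce_mean(tf.keras.losses.binary_crossentropy(y_true, y_pred))
    return 0.5 * (1.0 - dice) + 0.5 * bce
```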
After training StyleGAN2 and SA-UNet, we ran 100 epochs of SA-UNet from scratch with [50, 200, 500, 1000] synthesized images; the learning rate was 0.001, and GPU memory limited us to a batch size of 2. After that we fine-tuned for 50 epochs on the original DRIVE images. Using the synthesized data only, we already reached up to 0.98 on some metrics, with 0.96 accuracy and an F1 score of 0.89.
After fine-tuning we achieved better results; we compare our results to the original SA-UNet results and present them in Table 2.
Next we tried all possible training combinations: first the real data and then the synthetic data (and vice versa), with and without augmentation (on the real data only, on the synthetic data only, and on both), and with different numbers of synthetic images (50, 200, 500, or 1000).
### Result
We reached several interesting working points, each achieving a new SOTA on some metrics but also some degradation on others (sometimes very small). The best SA-UNet training configuration was to start with the real DRIVE images for 100 epochs with augmentations (color and affine), and then continue for another 50 epochs with 1000 synthetic images without augmentations. The AUC we obtained was 0.977 instead of 0.987 in the original paper, the accuracy matched the original result of 0.968, but the F1 score was in our favor: 0.9363 instead of 0.8820 in the original SA-UNet paper. The training process is exhibited in Figure 5.
Beyond that, in precision we got 0.887, well above the 0.802 of the original SA-UNet, but in sensitivity the performance decreased from 0.853 to 0.7295 for our model. When optimizing for the best F1 score we obtained an F1 of 0.983, with 0.94 AUC, 0.955 accuracy, and 0.969 precision. Another interesting working point we achieved is 0.982 AUC with 0.92 F1, 0.9674 accuracy, and 0.8726 precision.
After trying all the above configurations (described in Section 5.2), we tested all the networks again, this time together with the original SA-UNet model, in an ensemble as described above. Due to time limitations we tried only a very simple ensemble technique that combines the predictions of both models into one segmentation map. This ensemble gave us a small improvement in the results. All results are given in Table 2.
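The exact combination rule is not detailed here, so the sketch below shows one plausible instantiation, averaging the two per-pixel probability maps before thresholding; it should be read as an illustrative assumption rather than the precise rule used in the experiments.

```
import numpy as np

def ensemble_predictions(prob_a, prob_b, threshold=0.5):
    """Combine two per-pixel vessel probability maps into one binary segmentation map
    by averaging them; the averaging rule and threshold are illustrative assumptions."""
    combined = 0.5 * (prob_a + prob_b)
    return (combined >= threshold).astype(np.uint8)
```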
## 6 Conclusion
Adding synthetic images generated with a GAN can help achieve better results when ground-truth data is lacking. It is remarkable that training SA-UNet only on synthetic data can achieve results comparable to the SOTA model trained on real data.
With the help of StyleGAN, and without any need for manual work on each image, we managed to get SOTA results on some metrics while staying almost at the same level on the remaining metrics. At the same time, we cannot ignore that the images created by StyleGAN did not reach a good enough resolution in the small details, which cost us potentially better results, since the weakness of the original SA-UNet was precisely the very small vessels.
For future work we want to design a more sophisticated ensemble model that can better utilize the advantages of each model: the original SA-UNet and the one trained on synthetic images.
\begin{table}
\begin{tabular}{l|l|l|l|l|l|l} Method & Sensitivity & Specificity & Accuracy & AUC & F1 Score & Precision \\ \hline SA-UNet & 0.853 & 0.980 & 0.969 & 0.987 & 0.882 & 0.802 \\ Only Synthesized data & 0.862 & 0.977 & 0.967 & 0.969 & 0.870 & 0.792 \\ First Result & 0.767 & 0.991 & 0.965 & 0.978 & 0.938 & 0.889 \\ Best F1 & 0.504 & 0.998 & 0.955 & 0.940 & 0.983 & 0.968 \\ Ensemble & 0.729 & 0.991 & 0.968 & 0.983 & 0.936 & 0.734 \\ Ensemble- Sensitivity & 0.888 & 0.982 & 0.968 & 0.985 & 0.835 & 0.734 \\ \end{tabular}
\end{table}
Table 2: Final performance comparison on the DRIVE dataset.
|
2305.04576
|
An Enhanced Sampling-Based Method With Modified Next-Best View Strategy
For 2D Autonomous Robot Exploration
|
Autonomous exploration is a new technology in the field of robotics that has
found widespread application due to its objective to help robots independently
localize, scan maps, and navigate any terrain without human control. Up to
present, the sampling-based exploration strategies have been the most effective
for aerial and ground vehicles equipped with depth sensors producing
three-dimensional point clouds. Those methods utilize the sampling task to
choose random points or make samples based on Rapidly-exploring Random Trees
(RRT). Then, they decide on frontiers or Next Best Views (NBV) with useful
volumetric information. However, most state-of-the-art sampling-based
methodology is challenging to implement in two-dimensional robots due to the
lack of environmental knowledge, thus resulting in a bad volumetric gain for
evaluating random destinations. This study proposed an enhanced sampling-based
solution for indoor robot exploration to decide Next Best View (NBV) in 2D
environments. Our method makes RRT until have the endpoints as frontiers and
evaluates those with the enhanced utility function. The volumetric information
obtained from environments was estimated using non-uniform distribution to
determine cells that are occupied and have an uncertain probability. Compared
to the sampling-based Frontier Detection and Receding Horizon NBV approaches,
the methodology executed performed better in Gazebo platform-simulated
environments, achieving a significantly larger explored area, with the average
distance and time traveled being reduced. Moreover, the operated proposed
method on an author-built 2D robot exploring the entire natural environment
confirms that the method is effective and applicable in real-world scenarios.
|
Dong Huu Quoc Tran, Hoang-Anh Phan, Hieu Dang Van, Tan Van Duong, Tung Thanh Bui, Van Nguyen Thi Thanh
|
2023-05-08T09:34:55Z
|
http://arxiv.org/abs/2305.04576v1
|
An Enhanced Sampling-Based Method With Modified Next-Best View Strategy For 2D Autonomous Robot Exploration
###### Abstract
Autonomous exploration is a new technology in the field of robotics that has found widespread application because it helps robots independently localize, scan maps, and navigate any terrain without human control. Up to the present, sampling-based exploration strategies have been the most effective for aerial and ground vehicles equipped with depth sensors producing three-dimensional point clouds. Those methods utilize a sampling task to choose random points or to build samples based on Rapidly-exploring Random Trees (RRT). Then, they decide on frontiers or Next Best Views (NBV) with useful volumetric information. However, most state-of-the-art sampling-based methodologies are challenging to implement on two-dimensional robots due to the lack of environmental knowledge, which results in poor volumetric gain when evaluating random destinations. This study proposes an enhanced sampling-based solution for indoor robot exploration that decides the Next Best View (NBV) in 2D environments. Our method grows RRTs until their endpoints are frontiers and evaluates those frontiers with an enhanced utility function. The volumetric information obtained from the environment is estimated using a non-uniform distribution to determine cells that are occupied or have an uncertain probability. Compared to the sampling-based Frontier Detection and Receding Horizon NBV approaches, the proposed methodology performed better in Gazebo platform-simulated environments, achieving a significantly larger explored area while reducing the average distance and time traveled. Moreover, operating the proposed method on an author-built 2D robot that explored an entire real environment confirms that the method is effective and applicable in real-world scenarios.
Sampling-based exploration, Next Best View, random exploring, 2D autonomous robot.
## I Introduction
In robotics research fields, following the discoveries of control theories, for a mobile robot system to drive intelligently, it is necessary to utilize the three fundamental robot control theory approaches: mapping, localization, and navigation [1]. As delineated in [2], an autonomous robot must construct a model of its surrounding environment by integrating localization and mapping and facilitating safe navigation.
The advantage of this combination lies in its capacity to optimally discover and generate maps by employing specialized planners, such as goal planners, path planners, or motion planners, which perform adaptive motion and real-time decision-making for map exploration or deliberate movement. Planners that autonomously cover the map tend to belong to a category known as exploration. Subsequently, the primary tier in exploration approaches involves generating a set of feasible actions the robot could execute, such as goals.
Hitherto, Frontier Detection and Random Exploration have emerged as the most effective methodologies for goal planners [3]. These two procedures separated common exploratory planning strategies into frontier-based and sampling-based [4]. Frontier-based approaches determine their planning actions from the boundaries between free and known space, referred to as frontiers. While sampling-based strategies generally aim to select random points or develop Rapidly-exploring Random Trees (RRTs) to calculate the exploration path. Notably, the sampling-based method choosing the Next Best View (NBV) was initially introduced [5], demonstrating its advantages in dynamic or uncertain contexts where pathways cannot be dependably precomputed. Conversely, frontier-based planners prove effective when robots possess environmental references and knowledge, allowing for expedited exploration in two-dimensional or three-dimensional spaces.
Nevertheless, in a 2D environment, the Frontier-based method occasionally leads to the robot becoming entrapped in detrimental situations due to frequent deficiencies in environmental knowledge, such as encountering uncertain obstructions and being unable to advance significantly toward its objectives. In contrast, the sampling-based NBV planner prevents the robot from entering newly discovered areas owing to its inability to circumvent the local minimum. As described in [6], the performance of sampling-based planners deteriorates in expansive environments or constrained scenarios characterized by narrow openings or bottlenecks. This results in a more substantial time computation requirement when the robot attempts to determine the optimal goal, owing to the revisiting of previously traversed sections or irregular resultant movement, thereby causing the robot to consume a longer duration and distance to achieve a covered map.
In this paper, a method for simultaneously exploring and mapping an unknown 2D space is developed. The environment was mapped using a laser sensor-generated occupancy grid map and the NBV strategies for determining the robot's movement path. Using the RRT algorithm theory, the proposed NBV method searched for exploration paths and decided on the first destination as the NBV point. The branch's nodes
were randomly selected according to a uniform distribution. Unless the number of iterations exceeded the initial limit, these branches stopped at the borders between unknown and known map regions. Then, our reward equation was applied to the RRT vertices, decreasing computing costs and optimizing the predetermined objectives. Conditions for evaluating frontiers include the distance between two nodes and the information gain. This method achieves concentrated exploration similar to frontier-based methods, requiring fewer candidate locations for the random trees and being biased to grow towards regions where the robot has not yet traveled. It helps robots avoid local minima and reduces processing costs compared to RH-NBV sampling-based approaches.
## II Related Work
The study of robotics in the twenty-first century has advanced significantly, leading to consistent enhancements in robotic intelligence. Simultaneous localization and mapping (SLAM) is a collection of techniques designed to address the challenges of mapping and localization concurrently [7]. The term active SLAM was first coined by Davison [8], wherein SLAM is integrated with active perception to control robots and reduce the uncertainty of their localization and map representation. As a result, studying active perception (also called exploration) to find optimal actions for robots in the ASLAM setting is essential, eliminating the need for human control over robot movement. The first exploration using adaptive planners was introduced by Thrun et al. [9].
With exploration tasks in mind, [10] introduced planners that choose actions and maximize knowledge of two variables of interest. This led to a new research direction: Investigating unknown areas of the environment that robots need to navigate and make evaluated decisions based on utility computation. Notably, strategies based on the NBV [11] and Frontier [12] theories are popular. The first frontier exploration research selected the closest frontier to the robot. Umari and Mukhopadhyay demonstrated the first use of the Sampling-based method with Frontier Detection, employing RRT algorithms to find frontiers [13]. Quinn et al. developed several geometric frontier-detection methods to improve the performance of previous algorithms by evaluating only a subset of observed free space [14]. The sample-based frontier detector algorithm introduced by [15] reduces the computational load of sampling to find frontiers by sensing the surrounding environment structure and using non-uniform distributed sampling adjacent sliding windows. Soni et al. presented a novel frontier tree approach for multi-robot systems [16].
Concerning the NBV hypothesis, the most prevalent approach is the sampling method employing rapidly random trees to determine optimal paths in known space, referred to as Receding Horizon NBV (RH-NBV) [5]. Extending this work, Bircher et al. presented another sampling-based receding horizon path planning paradigm [17], and Papachristos et al. delivered an uncertainty-aware exploration and mapping planning strategy using a belief space-based approach [18]. To facilitate the evaluation of the NBV path cost, Wang et al. propose a graph-structured roadmap [19], while Batinovic introduced a cuboid-based evaluation method that results in an enviably short computation time [20].
In contrast, due to the advantages of Receding Horizon NBV and classic frontier exploration planning, Selin et al. proposed combining both techniques [21]. Dai et al. suggested a hybrid exploration approach based on sampling-based and frontier-based methods by sampling candidate next-views from the map's frontiers [22]. Lu et al. presented the Frontier enhanced NBV method using a frontier planner as a global and NBV as a local planner [23].
Alongside this research, numerous studies have combined different algorithms for routing and embedded them with SLAM. Trivun et al. developed an ASLAM system with FastSLAM, a particle filter, an A* global planner, and a DWA local planner for autonomous exploration and mapping in a dynamic indoor environment [24]. Bonetto et al. developed an omnidirectional robot that finds frontiers and adjusts its heading using a rotation sensor while actively navigating, aiming for the robot to consistently capture the richest features of the map representation [25].
## III Proposed Approach
The core idea of our exploration planner remains rooted in the NBV Sampling-based Planner theory: Determine a destination and reach it by sampling RRTs, evaluate the most suitable RRT for exploration, and select the first node of the RRT as the NBV. Fig. 1 visually represents the planner's structure.
Due to the robot's mobility and SLAM running continuously in the background, the robot's positions and scanned maps are constantly updated. A sampling-based method is employed to identify nodes as worthwhile destinations. The RRT algorithm is used as the sample-based and frontiers-based planner to find nodes or frontiers. A utility equation describing the expected
Fig. 1: Diagram illustrating the proposed autonomous exploration and mapping methodology. The SLAM module runs persistently in the background as our suggested planner identifies new destinations utilizing distribution sampling and information gain-based utility measurements.
information gain over time evaluates the nodes. Subsequently, the RRT vertex with the highest gain in the final node is selected. A navigation planner drives the robot to each destination from its current position to the NBV point of the chosen vertex.
Assume we have an occupancy grid map \(M\subset\mathbb{R}^{2}\) as the total space environment set covered by sensors, with cells \(\mathbf{m}\in M\) representing a two-dimensional point \(x_{M}\) on a coordinate system, which has \(P_{o}^{\mathbf{n}}(\mathbf{m})\) occupancy probabilities at time \(n\). This set, as the map, is frequently updated by adding new cells that the robot observes with \(P_{o}^{\mathbf{0}}(\mathbf{m})\). All previously added cells are updated using Bayesian Theory, with distribution \(p(\mathbf{m}|\operatorname{occ})\), where \(\operatorname{occ}\) denotes the likelihood of obstruction.
At the beginning of each planning iteration, the robot assumes a localized position and orientation, forming the two-dimensional vehicle configuration state, \(x_{0}=(x,y,\psi)^{T}\). We determine the cut-off steps as \(N_{max}\). In each planner sampling iteration, \(N_{T}\) increases by 1, and the planner terminates if \(N_{T}\) reaches \(N_{max}\). However, if the best gain \(g_{best}\) remains zero, the sampling loop continues until the final node of RRT is a frontier, or \(g_{best}>0\).
Predefined variables such as \(\epsilon\) for tree length and \(\alpha\) for overshoot view are included. A filter is applied to the RRT algorithm to adjust and eliminate uncertain nodes, dead locations, and out-of-map points. Our proposed approach steps can be outlined in Algorithm 1.
```
1:\(\epsilon,\alpha,N_{max}\gets initVariables()\)
2:\(N_{T}\gets 0\)
3:\(x_{best},g_{best}\gets x_{0},g_{0}\)
4:\(T_{0}\leftarrow(x_{r}^{0},g_{0})\)
5:while\(N_{T}<N_{max}\) or \(g_{best}=0\)do
6:\(x_{rand}\gets randomModel\)
7:\(x_{near}\gets nearestNeighbor(x_{rand},T)\)\(\triangleright\) (1)
8:\(x_{new}\gets steerOvershoot(x_{rand},x_{near},\epsilon,\alpha)\)\(\triangleright\) (2)
9:\(g_{new}\gets explorationGain(M,T,x_{new})\)\(\triangleright\) (3)
10:\(T\leftarrow(x_{new},g_{new})\)
11:\(N_{T}\gets N_{T}+1\)
12:if\(g_{new}>g_{best}\)then
13:\(x_{best}\gets x_{new}\)
14:\(g_{best}=g_{new}\)
15:endif
16:endwhile
17:\(\beta\gets extractNextBestView(T)\)
18:return\(\beta\)
```
**Algorithm 1** Exploration Planner
In each loop, the \(randomModel\) uniform distribution function randomly selects a point \(x_{rand}\) on the map, and the \(nearestNeighbor\) function returns the \(x_{near}\) vertex of the \(T\) tree closest to the point \(x_{rand}\), as described in (1).
\[x_{near}\leftarrow\operatorname*{arg\,min}_{\forall x\in T}\lVert x-x_{rand}\rVert \tag{1}\]
The state \(x_{new}\), located between \(x_{rand}\) and \(x_{near}\) on the map, is determined by the \( steerOvershoot\) function, ensuring that \(x_{new}-x_{rand}\) is minimized, \(\lvert x_{new}-x_{near}\rvert\leq\epsilon\), and no obstacles exist in the space between \(x_{new}+\alpha\) and \(x_{near}\), as defined in (2).
\[x_{new}=\begin{cases}x_{rand},&\text{if }\lVert x_{rand}-x_{near}\rVert\leq \epsilon\\ x_{near},&\text{if }P_{o}\left(x_{near}+\alpha,\mathbf{m}\right)\geq 0.5\\ x_{near}+\epsilon,&\text{otherwise}\end{cases} \tag{2}\]
The tree \(T\) is expanded by adding \(x_{new}\) as a new vertex, and an edge is formed by connecting \(x_{new}\) and \(x_{near}\).
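To make the steering step concrete, a small NumPy sketch of Eq. (2) follows. The handling of the overshoot term \(\alpha\) and the step along the unit direction toward \(x_{rand}\) are our interpretation, and `occupancy_at` is an assumed helper returning the occupancy probability of the cell containing a point.

```
import numpy as np

def steer_overshoot(x_rand, x_near, eps, alpha, occupancy_at):
    """Illustrative sketch of the steerOvershoot step in Eq. (2)."""
    direction = x_rand - x_near
    dist = np.linalg.norm(direction)
    if dist <= eps:
        return x_rand                                  # sample is close enough: take it directly
    unit = direction / dist
    # Look slightly beyond the new vertex (overshoot by alpha) to reject unsafe growth.
    if occupancy_at(x_near + (eps + alpha) * unit) >= 0.5:
        return x_near                                  # blocked: do not extend the tree here
    return x_near + eps * unit                         # otherwise grow by a step of length eps
```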
The Sampling-based Method uses the RRT algorithm to explore potential destinations for the robot without extending beyond the observed space. Only points \(x_{rand}\) within the known region of the space are sampled. The evaluation function \(explorationGain\) is employed to select the optimal nodes of the tree \(T\), calculated using (3).
\[g_{k}=g_{k-1}+G\left(M,\mathbf{m}_{k}\right)e^{-\lambda\lVert x_{k}-x_{k-1}\rVert} \tag{3}\]
Given that \(x_{k}\) is the node under consideration, \(x_{k-1}\) can be obtained through the nearest node of \(T\). The value \(\lambda\) represents the weight of the distance cost. The function \(G\left(M,\mathbf{m}_{k}\right)\) returns the gain of \(\mathbf{m}_{k}=H_{M}x_{k}\), referring to their \(n\) surrounding cells with radius \(r_{max}^{gain}\), weighted by (4), and \(\gamma\) is the weight of occupancy probability cost. Occupancy probability \(p(\mathbf{m}_{k}^{i})\) for each cell \(\mathbf{m}_{k}^{i}\) is calculated using (5).
\[G\left(M,\mathbf{m}_{k}\right)=\sum_{i=0}^{n}e^{-\gamma\left(1-2p(\mathbf{m}_{ k}^{i})\right)} \tag{4}\]
\[\mathbf{m}_{k}^{i}\sim U(0,2r_{max}^{gain})+m_{k} \tag{5}\]
Eventually, the point \(x_{new}\) is considered the current optimal destination of the search tree if its value \(g_{new}\) is greater than the previous value of \(g_{best}\) of the search tree. Once the loop concludes, the \(extractNextBestView\) function returns the first node of branch \(T\).
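A compact NumPy sketch of the gain computation in Eqs. (3)-(5) is given below. The symmetric sampling window around \(x_{k}\), the parameter values, and the `occupancy_at` helper are assumptions made for illustration; the implementation used in the experiments may differ.

```
import numpy as np

def exploration_gain(g_prev, x_k, x_prev, occupancy_at, r_max=1.0,
                     n_samples=64, lam=0.5, gamma=2.0, rng=None):
    """Distance-discounted information gain of a candidate node x_k (Eqs. (3)-(5))."""
    rng = rng or np.random.default_rng()
    # Eq. (5): sample cells in a window of radius r_max around x_k (interpreted symmetrically)
    cells = np.asarray(x_k) + rng.uniform(-r_max, r_max, size=(n_samples, 2))
    # Eq. (4): weight every sampled cell by exp(-gamma * (1 - 2 p)) of its occupancy probability
    p = np.array([occupancy_at(c) for c in cells])
    G = np.sum(np.exp(-gamma * (1.0 - 2.0 * p)))
    # Eq. (3): accumulate the parent's gain, discounted by the travel distance
    return g_prev + G * np.exp(-lam * np.linalg.norm(np.asarray(x_k) - np.asarray(x_prev)))
```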
## IV Experiments And Results
In the evaluation phase, we present a system for autonomous exploration using a mobile robot equipped with a differential drive and a laser sensor. The modified proposed method was assessed through simulated two-dimensional experiments and implemented in a realistic environment. It was compared to the RH-NBV based on [5], adapted for a 2D grid occupancy map, and the Sampling-based Frontier Detection method based on [13]. Note that our proposed technique, the RH-NBV, and the Frontier method utilized the same RRT methodology with identical parameters for constructing RRTs.
### _System Overview_
Our study, referring to [13], developed a system for autonomous exploration using an indoor mobile robot with a differential drive for two-dimensional environments. The robot platform includes an Nvidia AGX Xavier embedded computer and a Hokuyo UST-05LX laser sensor. The laser scanner is mounted to scan the environment around the robot, covering a 240-degree front view with 720 sampling points and a maximum range of \(8m\).
The proposed method and experimental system are implemented as a Robot Operating System package and tested in our customized environments. The system comprises four components: Cartographer SLAM, A* Global Planning, DWA Local Planning, and our proposed approach, as depicted in Fig. 2.
### _Scenarios_
This section presents the results of our experiments in both the simulated maze and the realistic environment using identical parameters. The grid occupancy map resolution was set to \(r=0.05m\). The robot traveled with a maximum linear velocity and angular velocity of \(v_{max}=0.3m/s\) and \(\psi_{max}=1.0rad/s\), respectively. Some initial RRT and gain function parameters were manually selected for our suggested method, the RH-NBV, and the frontier method, as listed in Table I.
### _Maze Simulated Environment_
Simulations were conducted in the \(20\times 20m^{2}\) maze environment. We evaluated the RH-NBV, Frontier, and our proposed methods by recording what was required to cover specific average explored areas over 20 iterations of 900 seconds each. The primary evaluation criteria include the mean and standard deviation of the distance, execution time, computation time, and average speed. Additionally, we analyze the successful iterations, i.e., the number of times the robot covered \(120\)\(m^{2}\), \(240\)\(m^{2}\), and \(360\)\(m^{2}\) of the map.
As shown in Table II, our proposed method consistently outperforms the other methods in distance traveled, execution time, computation time, and average speed across all coverage levels. Especially at \(360\)\(m^{2}\) (\(90\%\)) coverage, the RH-NBV method did not achieve any successful iterations, while our approach succeeded most of the time. This demonstrates that the customized NBV is more effective at covering larger map areas.
Fig. 3 depicts top-down view images of the iterations with the highest covered areas after \(900s\). Our method successfully solved the case, and the target exploration of the entire environment (\(99\%\)) was completed. The Frontier method could not see the goal because it was too far away, and the robot got stuck in a harmful state: it faced obstacles that were too close and could not make new motion decisions. With RH-NBV, poor actions and uncertain goals left the robot stuck in small, confined areas, and it could not replan to escape this state. The modified NBV approach enables the robot to avoid local minima better than the RH-NBV method and to be more aware of its surroundings than the Frontier method, resulting in minimized localization errors.
The performance of the three methods was also evaluated based on the mean and standard deviation of the time, computational time, and distance traveled to reach \(70\%\) of the explored area after 20 runs. As depicted in Fig. 4, the modified NBV method needs less time and distance to execute, showcasing efficient exploration performance and precise map traversal in less time and over a shorter path than the RH-NBV method.
### _Real Environment_
In this subsection, robot experiments are conducted in a natural indoor environment to evaluate the stability of our system using the proposed method. We aimed to determine whether the robot could easily map the entire environment. Fig. 5 shows a top-down map view after scanning most of the real indoor environment space.
The robot managed to cover \(203.54m^{2}\) (\(97\%\) of the environment) after traveling for \(292.93s\), with a distance of \(58.44m\), and using \(7.54s\) of computation time. These results demonstrate the effectiveness of our proposed Customized NBV method in a real-world setting, highlighting its robustness and applicability for efficient exploration tasks.
### _Discussion_
One of the critical factors contributing to the success of our modified NBV approach is its ability to avoid local minima more effectively than the RH-NBV method. This is primarily achieved through integrating the modified utility metric and boundaries-aware conditions, which enables the robot to make informed decisions about its next destination based on the current state of the environment. In addition, our modified NBV method maintains an uncertain awareness of the map by using additional gain summed from occupancy probability, allowing it to adapt its exploration strategy more effectively in response to environmental changes. This awareness resulted
Fig. 2: Overview of the experiment system framework. The main modules and their connections are shown in the system diagram.
in minimized localization errors and ensured more efficient exploration. Another notable aspect of our modified NBV approach is that it samples random map cells to compute the gain instead of clustering or counting unmapped voxels, thus avoiding high computational costs.
## V Conclusion
In this study, we presented a modified Next Best View (NBV) approach for autonomous exploration using a mobile robot in two-dimensional environments. Our proposed method combines the benefits of the Frontier approach with an innovative exploration gain function to improve the robot's exploration efficiency and adaptability to its surroundings. The experiments conducted in a simulated maze and a realistic environment demonstrated that our modified NBV method consistently outperforms the Receding Horizon NBV (RH-NBV) and Frontier methods regarding the explored area, time efficiency, and exploration consistency. Comparing our suggested planner to state-of-the-art autonomous sampling-based exploration planners such as RH-NBV and Frontier demonstrates that the proposed algorithm is applicable and can be further refined for specific applications.
Evaluations of the proposed approach in an actual environment using a self-developed mobile robot are in progress. In the future, by using LIDARs and cameras, we intend to
Fig. 4: Comparison of travel distance, travel time, and computation time needed to drive the robot to cover 70% of the Maze Environment area with the different approaches.
Fig. 3: Top-down view images representing the maps covered by the three methods. The blue lines show the paths. The red points show the NBV goals. A green marker is the start position of the robot, and a yellow marker is the robot’s position after 900s of executing each approach.
construct a comprehensive strategy that can be used in 2D and 3D situations. The scalability of our modified NBV approach to multi-robot systems should be considered. The development of a collaborative exploration strategy, where multiple robots work together to explore the environment, could significantly improve the efficiency and coverage of the exploration process.
|
2308.05694
|
Identically distributed random vectors on locally compact Abelian groups
|
L. Klebanov proved the following theorem. Let $\xi_1, \dots, \xi_n$ be
independent random variables. Consider linear forms
$L_1=a_1\xi_1+\cdots+a_n\xi_n,$ $L_2=b_1\xi_1+\cdots+b_n\xi_n,$
$L_3=c_1\xi_1+\cdots+c_n\xi_n,$ $L_4=d_1\xi_1+\cdots+d_n\xi_n,$ where the
coefficients $a_j, b_j, c_j, d_j$ are real numbers. If the random vectors
$(L_1,L_2)$ and $(L_3,L_4)$ are identically distributed, then all $\xi_i$ for
which $a_id_j-b_ic_j\neq 0$ for all $j=\overline{1,n}$ are Gaussian random
variables. The present article is devoted to an analog of the Klebanov theorem
in the case when random variables take values in a locally compact Abelian
group and the coefficients of the linear forms are integers.
|
Margaryta Myronyuk
|
2023-08-10T16:55:51Z
|
http://arxiv.org/abs/2308.05694v1
|
# Identically Distributed Random Vectors on Locally Compact Abelian Groups
###### Abstract
L. Klebanov proved the following theorem. Let \(\xi_{1},\ldots,\xi_{n}\) be independent random variables. Consider linear forms \(L_{1}=a_{1}\xi_{1}+\cdots+a_{n}\xi_{n}\), \(L_{2}=b_{1}\xi_{1}+\cdots+b_{n}\xi_{n}\), \(L_{3}=c_{1}\xi_{1}+\cdots+c_{n}\xi_{n}\), \(L_{4}=d_{1}\xi_{1}+\cdots+d_{n}\xi_{n}\), where the coefficients \(a_{j},b_{j},c_{j},d_{j}\) are real numbers. If the random vectors \((L_{1},L_{2})\) and \((L_{3},L_{4})\) are identically distributed, then all \(\xi_{i}\) for which \(a_{i}d_{j}-b_{i}c_{j}\neq 0\) for all \(j=\overline{1,n}\) are Gaussian random variables. The present article is devoted to an analog of the Klebanov theorem in the case when random variables take values in a locally compact Abelian group and the coefficients of the linear forms are integers.
_Key words and phrases_: locally compact Abelian group, Gaussian distribution, Haar distribution, random variable, independence
2020 _Mathematics Subject Classification_: Primary 60B15; Secondary 62E10.
## 1 Introduction
Much research is devoted to the study of linear forms of independent real-valued random variables ([16]). Their analytic theory started with the famous paper by Cramer. He proved the following decomposition theorem for a Gaussian distribution ([3]).
**The Cramer theorem.**_If a linear form \(L=\xi_{1}+\xi_{2}\) has a Gaussian distribution then the independent random variables \(\xi_{1},\xi_{2}\) also have Gaussian distributions._
Kac ([15]) and Bernstein ([1]) considered the sum \(L_{1}=\xi_{1}+\xi_{2}\) and the difference \(L_{2}=\xi_{1}-\xi_{2}\) and proved that the independence of the linear forms \(L_{1}\) and \(L_{2}\) implies that the independent random variables \(\xi_{1},\xi_{2}\) have Gaussian distributions. Darmois ([4]) and Skitovich ([28]) generalized the Kac-Bernstein theorem for two arbitrary linear forms.
**The Darmois-Skitovich theorem.**_Let \(a_{j}b_{j}\neq 0\) for all \(j=\overline{1,n}\). If linear forms \(L_{1}=a_{1}\xi_{1}+\cdots+a_{n}\xi_{n}\) and \(L_{2}=b_{1}\xi_{1}+\cdots+b_{n}\xi_{n}\) are independent then the independent random variables \(\xi_{1},\ldots,\xi_{n}\) have Gaussian distributions._
Heyde considered another statistical property of two arbitrary linear forms and proved the following theorem ([13]).
**The Heyde theorem.** _Let \(a_{j}b_{j}\neq 0\) and \(a_{i}b_{j}+a_{j}b_{i}\neq 0\) for all \(i,j=\overline{1,n}\). If the conditional distribution of \(L_{2}=b_{1}\xi_{1}+\cdots+b_{n}\xi_{n}\) given \(L_{1}=a_{1}\xi_{1}+\cdots+a_{n}\xi_{n}\) is symmetric then the independent random variables \(\xi_{1},\ldots,\xi_{n}\) have Gaussian distributions._
Klebanov considered four arbitrary linear forms of independent random variables and obtained the following result ([20]).
**The Klebanov theorem.** _Let \(\xi_{1},\ldots,\xi_{n}\) be independent random variables. Consider linear forms \(L_{1}=a_{1}\xi_{1}+\cdots+a_{n}\xi_{n},\)\(L_{2}=b_{1}\xi_{1}+\cdots+b_{n}\xi_{n},\)\(L_{3}=c_{1}\xi_{1}+\cdots+c_{n}\xi_{n},\)\(L_{4}=d_{1}\xi_{1}+\cdots+d_{n}\xi_{n},\) where the coefficients \(a_{j},b_{j},c_{j},d_{j}\) are real numbers. If the random vectors \((L_{1},L_{2})\) and \((L_{3},L_{4})\) are identically distributed, then all \(\xi_{i}\) for which \(a_{i}d_{j}-b_{i}c_{j}\neq 0\) for all \(j=\overline{1,n}\) are Gaussian random variables._
Note that the Klebanov theorem implies the Darmois-Skitovich theorem and the Heyde theorem. These and other studies devoted to linear forms of independent real-valued random variables can be found, e.g., in [1]-[4], [13], [14], [17]-[23], [28]; see also [16] for more references.
Many branches of classical probability theory and mathematical statistics have been generalized to different algebraic structures. In particular, there are many papers devoted to the study of linear forms of independent random variables with values in locally compact Abelian groups. The group analogue of the Cramer theorem was proved in the paper [5]. Many papers are devoted to generalizations of the Kac-Bernstein theorem, the Darmois-Skitovich theorem, and the Heyde theorem (see e.g. [5], [6], [9], [24], [26]; see also [8] and [11] for more references).
The aim of the present paper is to prove the analog of the Klebanov theorem in the case when random variables take values in a locally compact Abelian group and coefficients of the linear forms are integers.
## 2 Notation and definitions
In the article we use standard results on abstract harmonic analysis (see e.g. [12]).
Let \(X\) be a second countable locally compact Abelian group, \(Y=X^{*}\) be its character group, and \((x,y)\) be the value of a character \(y\in Y\) at an element \(x\in X\). Denote by \(Aut(X)\) the group of all topological automorphisms of the group \(X\). Let \(H\) be a subgroup of \(Y\). Denote by \(A(X,H)=\{x\in X:(x,y)=1\ \ \forall\ y\in H\}\) the annihilator of \(H\). For each integer \(n,\)\(n\neq 0,\) let \(f_{n}:X\mapsto X\) be the endomorphism \(f_{n}x=nx.\) Set \(X^{(n)}=f_{n}(X)\), \(X_{(n)}=Kerf_{n}\). Denote by \(\mathbb{Z}(n)\) the finite cyclic group of order \(n\).
Let \(M^{1}(X)\) be the convolution semigroup of probability distributions on \(X\),
\[\widehat{\mu}(y)=\int_{X}(x,y)d\mu(x)\]
be the characteristic function of a distribution \(\mu\in M^{1}(X)\), and \(\sigma(\mu)\) be the support of \(\mu\). If \(H\) is a closed subgroup of \(Y\) and \(\widehat{\mu}(y)=1\) for \(y\in H\), then \(\widehat{\mu}(y+h)=\widehat{\mu}(y)\) for all \(y\in Y\), \(h\in H\) and \(\sigma(\mu)\subset A(X,H)\). For \(\mu\in M^{1}(X)\) we define the distribution \(\bar{\mu}\in M^{1}(X)\) by the rule \(\bar{\mu}(B)=\mu(-B)\) for all Borel sets \(B\subset X\). Note that \(\widehat{\bar{\mu}}(y)=\overline{\widehat{\mu}(y)}\).
Let \(x\in X\). Denote by \(E_{x}\) the degenerate distribution concentrated at the point \(x\), and by \(D(X)\) the set of all degenerate distributions on \(X\). A distribution \(\gamma\in M^{1}(X)\) is called Gaussian ([27, §4.6]) if its characteristic function can be represented in the form
\[\widehat{\gamma}(y)=(x,y)\exp\{-\varphi(y)\}, \tag{1}\]
where \(x\in X\) and \(\varphi(y)\) is a continuous nonnegative function satisfying the equation
\[\varphi(u+v)+\varphi(u-v)=2[\varphi(u)+\varphi(v)],\quad u,\ v\in Y. \tag{2}\]
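For example, on \(X=\mathbb{R}\) (so that \(Y=\mathbb{R}\) as well) the characteristic function of the classical Gaussian distribution fits this definition with \(\varphi(y)=\sigma^{2}y^{2}/2\), since

\[\varphi(u+v)+\varphi(u-v)=\frac{\sigma^{2}}{2}\left[(u+v)^{2}+(u-v)^{2}\right]=\sigma^{2}(u^{2}+v^{2})=2[\varphi(u)+\varphi(v)],\quad u,v\in\mathbb{R}.\]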
Denote by \(\Gamma(X)\) the set of Gaussian distributions on \(X\). We note that according to this definition \(D(X)\subset\Gamma(X)\). Denote by \(I(X)\) the set of shifts of Haar distributions \(m_{K}\) of compact subgroups \(K\) of the group \(X\). Note that
\[\widehat{m}_{K}(y)=\left\{\begin{array}{ll}1,&y\in A(Y,K);\\ 0,&y\not\in A(Y,K).\end{array}\right. \tag{3}\]
We note that if a distribution \(\mu\in\Gamma(X)*I(X)\), i.e. \(\mu=\gamma*m_{K},\) where \(\gamma\in\Gamma(X)\), then \(\mu\) is invariant with respect to the compact subgroup \(K\subset X\) and under the natural homomorphism \(X\mapsto X/K\)\(\mu\) induces a Gaussian distribution on the factor group \(X/K\). Therefore the class \(\Gamma(X)*I(X)\) can be considered as a natural analogue of the class \(\Gamma(X)\) on locally compact Abelian groups.
Let \(f(y)\) be a function on \(Y\), and \(h\in Y.\) Denote by \(\Delta_{h}\) the finite difference operator
\[\Delta_{h}f(y)=f(y+h)-f(y).\]
A function \(f(y)\) on \(Y\) is called a polynomial if
\[\Delta_{h}^{l+1}f(y)=0\]
for some \(l\) and for all \(y,h\in Y\). The minimal \(l\) for which this equality holds is called the degree of the polynomial \(f(y)\).
An integer \(a\) is said to be admissible for a group \(X\) if \(X^{(a)}\neq\{0\}\). The admissibility of integers \(a\) is a group analogue of the condition \(a\neq 0\) for the case of \(X=\mathbb{R}\).
## 3 Main results
Let \(X\) be a second countable locally compact Abelian group. Let \(\xi_{1},\ldots,\xi_{n}\) be independent random variables with values in \(X\). Consider the linear forms
\[L_{1}=a_{1}\xi_{1}+\cdots+a_{n}\xi_{n}, \tag{4}\]
\[L_{2}=b_{1}\xi_{1}+\cdots+b_{n}\xi_{n}, \tag{5}\]
\[L_{3}=c_{1}\xi_{1}+\cdots+c_{n}\xi_{n}, \tag{6}\]
\[L_{4}=d_{1}\xi_{1}+\cdots+d_{n}\xi_{n}, \tag{7}\]
where the coefficients \(a_{j},b_{j},c_{j},d_{j}\) are integers. We will keep the designation \(L_{1}\), \(L_{2}\), \(L_{3}\) and \(L_{4}\) throughout the article.
The following theorem gives the description of locally compact Abelian groups for which the group analog of the Klebanov theorem takes place.
**Theorem 1**: _Let \(X\) be a second countable locally compact Abelian group. Let \(\xi_{1},\ldots,\xi_{n}\) be independent random variables with values in \(X\) and distributions \(\mu_{\xi_{j}}\) with non-vanishing characteristic functions. Consider the linear forms \((4)-(7)\), where the coefficients \(a_{j},b_{j},c_{j},d_{j}\) are integers. If the random vectors \((L_{1},L_{2})\) and \((L_{3},L_{4})\) are identically distributed, then for all those \(i\) for which_
\[a_{i}d_{j}-b_{i}c_{j}\mbox{ are admissible integers for $X$ for all $j=\overline{1,n}$}, \tag{8}\]
_the following statements hold:_
\((i)\) _If \(X\) is a torsion-free group then \(\mu_{\xi_{i}}\in\Gamma(X)\);_
\((ii)\) _If \(X=X_{(p)}\) then \(\mu_{\xi_{i}}\in D(X)\)._
**Remark 1**: The class of torsion-free locally compact Abelian groups is very wide. For example, it includes the real line \(\mathbb{R}\), the group of integers \(\mathbb{Z}\), the group of rational numbers \(\mathbb{Q}\), compact torsion-free Abelian groups, their products. Note that each compact torsion free Abelian group is topologically isomorphic to a group of the form
\[\Sigma_{a}^{\mathfrak{m}}\times\prod_{p\in P}\Delta_{p}^{\mathfrak{m}_{p}},\]
where \(P\) is the set of prime numbers, \(\mathfrak{m}\) and \(\mathfrak{m}_{p}\) are arbitrary cardinal numbers, \(\Sigma_{a}\) is the \(a\)-adic solenoid (\(a=(2,3,4,\dots)\)) and \(\Delta_{p}\) is the group of \(p\)-adic integers ([12, §25.8]).
**Remark 2**: If \(X=X_{(p)}\), where \(p\) is a fixed prime number, then \(X\) is topologically isomorphic to the group
\[\mathbb{Z}(p)^{\mathfrak{m}}\times\mathbb{Z}(p)^{\mathfrak{n}*}, \tag{9}\]
where \(\mathfrak{m}\) and \(\mathfrak{n}\) are arbitrary cardinal numbers, \(\mathbb{Z}(p)^{\mathfrak{m}}\) is considered in the product topology, and \(\mathbb{Z}(p)^{\mathfrak{n}*}\) is considered in the discrete topology ([12, §25.29]).
We need some lemmas. The following lemma allows us to reduce the proof of Theorem 1 to the study of solutions of some functional equation.
**Lemma 1**: _Let \(X\) be a second countable locally compact Abelian group, \(Y=X^{*}\). Let \(\xi_{1},\dots,\xi_{n}\) be independent random variables with values in \(X\) and distributions \(\mu_{\xi_{j}}\). Consider the linear forms \((4)-(7)\). The random vectors \((L_{1},L_{2})\) and \((L_{3},L_{4})\) are identically distributed if and only if the characteristic functions \(\widehat{\mu}_{\xi_{j}}(y)\) satisfy the equation_
\[\prod_{j=1}^{n}\widehat{\mu}_{\xi_{j}}(a_{j}u+b_{j}v)=\prod_{j=1}^{n}\widehat {\mu}_{\xi_{j}}(c_{j}u+d_{j}v),\quad u,v\in Y. \tag{10}\]
_Proof._ Using the independence of \(\xi_{1},...,\xi_{n}\), we obtain from the definition that
\[\widehat{\mu}_{(L_{1},L_{2})}(u,v)=\mathbf{E}\left[(L_{1},L_{2}) (u,v)\right]=\mathbf{E}\left[(L_{1},u)(L_{2},v)\right]=\] \[=\mathbf{E}\left[(a_{1}\xi_{1}+\cdots+a_{n}\xi_{n},u)(b_{1}\xi_{ 1}+\cdots+b_{n}\xi_{n},v)\right]=\] \[=\mathbf{E}\left[(a_{1}\xi_{1},u)\cdots(a_{n}\xi_{n},u)(b_{1}\xi_ {1},v)\cdots(b_{n}\xi_{n},v)\right]=\] \[=\mathbf{E}\left[(\xi_{1},a_{1}u)\cdots(\xi_{n},a_{n}u)(\xi_{1},b _{1}v)\cdots(\xi_{n},b_{n}v)\right]=\] \[=\mathbf{E}\left[(\xi_{1},a_{1}u+b_{1}v)\cdots(\xi_{n},a_{n}u+b_{ n}v)\right]=\prod_{j=1}^{n}\widehat{\mu}_{\xi_{j}}(a_{j}u+b_{j}v),\quad u,v\in Y.\]
Analogously we obtain that
\[\widehat{\mu}_{(L_{3},L_{4})}(u,v)=\prod_{j=1}^{n}\widehat{\mu}_{\xi_{j}}(c_{ j}u+d_{j}v),\quad u,v\in Y.\]
Since the vectors \((L_{1},L_{2})\) and \((L_{3},L_{4})\) are identically distributed, we have \(\widehat{\mu}_{(L_{1},L_{2})}(u,v)=\widehat{\mu}_{(L_{3},L_{4})}(u,v)\). Thus, equation (10) is valid. \(\square\)
The following lemma is a group analogue of the Cramer theorem on the decomposition of a Gaussian distribution.
**Lemma 2**: ([5]) _Let \(X\) be a second countable locally compact Abelian group containing no subgroup topologically isomorphic to the circle group. Let \(\gamma\in\Gamma(X)\) and \(\gamma=\gamma_{1}*\gamma_{2}\), where \(\gamma_{1},\gamma_{2}\in M^{1}(X)\). Then \(\gamma_{1},\gamma_{2}\in\Gamma(X)\)._
The following lemma is a group analogue of the Marcinkiewicz theorem.
**Lemma 3**: ([7]) _Let \(X\) be a second countable locally compact Abelian group containing no subgroup topologically isomorphic to the circle group. Let \(\mu\in M^{1}(X)\) and the characteristic function \(\widehat{\mu}(y)\) is of the form_
\[\widehat{\mu}(y)=\exp\{\varphi(y)\},\quad y\in Y,\]
_where \(\varphi(y)\) is a continuous polynomial. Then \(\mu\in\Gamma(X)\)._
**Lemma 4**: _Let \(Y\) be an Abelian group. Let \(\varphi_{j}(y)\) and \(\psi_{j}(y)\) be functions on \(Y\) which satisfy the equation_
\[\sum_{j=1}^{m}\varphi_{j}(a_{j}u+b_{j}v)=\sum_{j=1}^{n}\psi_{j}(c_{j}u+d_{j}v) +q(u,v),\quad u,v\in Y, \tag{11}\]
_where the coefficients \(a_{j},b_{j},c_{j},d_{j}\) are integers and \(q(u,v)\) is a continuous polynomial of degree \(l\) on the group \(Y^{2}\). Then each function \(\varphi_{j}(y)\) satisfies the equation_
\[\Delta_{a_{j}h+b_{j}k}^{l+1}\Delta_{(a_{j}b_{i_{1}}-b_{j}a_{i_{1}})l_{i_{1}}}...\Delta_{(a_{j}b_{i_{m-1}}-b_{j}a_{i_{m-1}})l_{i_{m-1}}}\Delta_{(a_{j}d_{1}- b_{j}c_{1})k_{1}}...\Delta_{(a_{j}d_{n}-b_{j}c_{n})k_{n}}\varphi_{j}(a_{j}u+b_{j}v)=\]
\[=0,\quad u,v\in Y, \tag{12}\]
_where \(h,k,k_{1},\ldots,k_{n}\) and \(l_{i_{1}},\ldots,l_{i_{m-1}}\) are arbitrary elements of \(Y\) and the indexes \(i_{1},\ldots,i_{m-1}\) take values \(1,\ldots,j-1,j+1,\ldots,m\). Moreover, if \(q(u,v)\equiv 0\) then the operator \(\Delta_{a_{j}h+b_{j}k}^{l+1}\) is missing from formula (12)._
_Proof._ Let \(k_{n}\) be an arbitrary element of \(Y\). Substitute \(u+d_{n}k_{n}\) for \(u\) and \(v-c_{n}k_{n}\) for \(v\) in (11) and subtract (11) from the resulting equation. We obtain
\[\sum_{j=1}^{m}\Delta_{(a_{j}d_{n}-b_{j}c_{n})k_{n}}\varphi_{j}(a_{j}u+b_{j}v)= \sum_{j=1}^{n-1}\Delta_{(c_{j}d_{n}-d_{j}c_{n})k_{n}}\psi_{j}(c_{j}u+d_{j}v)+ \Delta_{(d_{n}k_{n},-c_{n}k_{n})}q(u,v),\quad u,v\in Y. \tag{13}\]
The right-hand side of equation (13) no longer contains the function \(\psi_{n}\).
Let \(k_{n-1}\) be an arbitrary element of \(Y\). Substitute \(u+d_{n-1}k_{n-1}\) for \(u\) and \(v-c_{n-1}k_{n-1}\) for \(v\) in (13) and subtract (13) from the resulting equation. We obtain
\[\sum_{j=1}^{m}\Delta_{(a_{j}d_{n-1}-b_{j}c_{n-1})k_{n-1}}\Delta_{(a_{j}d_{n}-b _{j}c_{n})k_{n}}\varphi_{j}(a_{j}u+b_{j}v)=\]
\[\sum_{j=1}^{n-2}\Delta_{(c_{j}d_{n-1}-d_{j}c_{n-1})k_{n-1}}\Delta_{(c_{j}d_{n}-d_{j}c_{n})k_{n}}\psi_{j}(c_{j}u+d_{j}v)+\]
\[+\Delta_{(d_{n-1}k_{n-1},-c_{n-1}k_{n-1})}\Delta_{(d_{n}k_{n},-c_{n}k_{n})}q(u, v),\quad u,v\in Y. \tag{14}\]
The right-hand side of equation (14) no longer contains the function \(\psi_{n-1}\).
After \(n\) steps we come to the equation of the form
\[\sum_{j=1}^{m}\Delta_{(a_{j}d_{1}-b_{j}c_{1})k_{1}}...\Delta_{(a_{j}d_{n-1}-b_{j}c _{n-1})k_{n-1}}\Delta_{(a_{j}d_{n}-b_{j}c_{n})k_{n}}\varphi_{j}(a_{j}u+b_{j}v)=\]
\[=\Delta_{(d_{1}k_{1},-c_{1}k_{1})}...\Delta_{(d_{n}k_{n},-c_{n}k_{n})}q(u,v), \quad u,v\in Y, \tag{15}\]
where \(k_{1},\ldots,k_{n}\) are arbitrary elements of \(Y\).
Let \(l_{m}\) be an arbitrary element of \(Y\). Substitute \(u+b_{m}l_{m}\) for \(u\) and \(v-a_{m}l_{m}\) for \(v\) in (15) and subtract (15) from the resulting equation. We obtain
\[\sum_{j=1}^{m-1}\Delta_{(a_{j}b_{m}-b_{j}a_{m})l_{m}}\Delta_{(a_{j}d_{1}-b_{j} c_{1})k_{1}}...\Delta_{(a_{j}d_{n}-b_{j}c_{n})k_{n}}\varphi_{j}(a_{j}u+b_{j}v)=\]
\[=\Delta_{(b_{m}l_{m},-a_{m}l_{m})}\Delta_{(d_{1}k_{1},-c_{1}k_{1})}...\Delta_ {(d_{n}k_{n},-c_{n}k_{n})}q(u,v),\quad u,v\in Y. \tag{16}\]
The left-hand side of equation (16) no longer contains the function \(\varphi_{m}\).
After \(n+m-1\) steps we come to the equation of the form
\[\Delta_{(a_{1}b_{2}-b_{1}a_{2})l_{2}}...\Delta_{(a_{1}b_{m}-b_{1}a_{m})l_{m}} \Delta_{(a_{1}d_{1}-b_{1}c_{1})k_{1}}...\Delta_{(a_{1}d_{n}-b_{1}c_{n})k_{n}} \varphi_{1}(a_{1}u+b_{1}v)=\]
\[=\Delta_{(b_{2}l_{2},-a_{2}l_{2})}...\Delta_{(b_{m}l_{m},-a_{m}l_{m})}\Delta_{( d_{1}k_{1},-c_{1}k_{1})}...\Delta_{(d_{n}k_{n},-c_{n}k_{n})}q(u,v),\quad u,v\in Y, \tag{17}\]
where \(k_{1},\ldots,k_{n}\) and \(l_{2},\ldots,l_{m}\) are arbitrary elements of \(Y\).
Since \(q(u,v)\) is a polynomial of degree \(l\), applying the operator \(\Delta_{(h,k)}^{l+1}\) to (17), we get equation (12) for the function \(\varphi_{1}(y)\). By analogy we prove that each function \(\varphi_{j}(y)\) satisfies equation (12). \(\Box\)
_Proof of Theorem 1._ It follows from Lemma 1 that the characteristic functions \(\widehat{\mu}_{\xi_{j}}(y)\) satisfy equation (10). Renumbering functions in (10), we can assume that condition (8) is fulfilled for all \(i=\overline{1,m}\) and it is not fulfilled for all \(i=\overline{m+1,n}\), \(m\leq n\).
_Case 1._ First we consider case 1 when there exist admissible integers \(b_{i}a_{j}-b_{j}a_{i}\) for \(i\neq j\) (\(i,j=\overline{1,m}\)).
(i) Let \(X\) be a torsion-free group. In this case if an integer \(b_{i}a_{j}-b_{j}a_{i}\) is not admissible for \(X\) then \(b_{i}a_{j}-b_{j}a_{i}=0\). Also if an integer \(a\) is admissible for \(X\) then
\[\overline{Y^{(a)}}=Y\mbox{ for all nonzero integers }a. \tag{18}\]
Note that if there exists \(j_{0}\) (\(j_{0}\in\{1,\ldots,n\}\)) such that \(c_{j_{0}}=d_{j_{0}}=0\) then there does not exist such \(i_{0}\) (\(i_{0}\in\{1,\ldots,m\}\)) that condition (8) is fulfilled. Therefore we can suppose from the beginning that there do not exist \(j_{0}\) (\(j_{0}\in\{1,\ldots,n\}\)) such that \(c_{j_{0}}=d_{j_{0}}=0\).
Renumbering functions in (10), we can assume that
\[a_{1},\ldots,a_{t}\neq 0,\quad a_{t+1}=\cdots=a_{m}=0,\]
\[b_{1},\ldots,b_{s}\neq 0,\quad b_{s+1}=\cdots=b_{t}=0,\quad b_{t+1},\ldots,b_{m} \neq 0,\]
where \(s\leq t\leq m\).
If \(a_{t+1}=\cdots=a_{m}=0\), then condition (8) implies that
\[c_{j}\neq 0\mbox{ for all }j=\overline{1,n}. \tag{19}\]
If \(b_{s+1}=\cdots=b_{t}=0\), then condition (8) implies that
\[d_{j}\neq 0\text{ for all }j=\overline{1,n}. \tag{20}\]
Also renumbering first \(s\) functions in (10), we can assume that
\[\frac{a_{1}}{b_{1}}=\cdots=\frac{a_{r_{1}}}{b_{r_{1}}}=\alpha_{1},\frac{a_{r_{ 1}+1}}{b_{r_{1}+1}}=\cdots=\frac{a_{r_{2}}}{b_{r_{2}}}=\alpha_{2},\cdots,\frac{a _{r_{k}+1}}{b_{r_{k}+1}}=\cdots=\frac{a_{s}}{b_{s}}=\alpha_{k+1}, \tag{21}\]
where \(1\leq r_{1}<r_{2}<\cdots<r_{k}<r_{k+1}\leq s\), and \(\alpha_{i}\neq\alpha_{j}\) for all \(i\neq j\).
Consider integers
\[A=b_{1}b_{r_{1}+1}\cdots b_{r_{k}+1},\quad B=a_{1}a_{r_{1}+1}\cdots a_{r_{k}+1}, \tag{22}\]
\[A_{0}=\frac{A}{b_{1}},\quad A_{j}=\frac{A}{b_{r_{j}+1}},\quad j=1,2,\ldots,k, \tag{23}\]
\[B_{0}=\frac{B}{a_{1}},\quad B_{j}=\frac{B}{a_{r_{j}+1}},\quad j=1,2,\ldots,k, \tag{24}\]
Substitute \(Au\) for \(u\) and \(Bv\) for \(v\) in (10). We get
\[\prod_{j=1}^{s}\widehat{\mu}_{\xi_{j}}(a_{j}Au+b_{j}Bv) \prod_{j=s+1}^{t}\widehat{\mu}_{\xi_{j}}(a_{j}Au)\prod_{j=t+1}^{m} \widehat{\mu}_{\xi_{j}}(b_{j}Bv)\prod_{j=m+1}^{n}\widehat{\mu}_{\xi_{j}}(a_{j} Au+b_{j}Bv)= \tag{25}\] \[=\prod_{j=1}^{n}\widehat{\mu}_{\xi_{j}}(c_{j}Au+d_{j}Bv),\quad u, v\in Y.\]
Taking into account (22)-(24), we rewrite (25) in the following form
\[\prod_{j=1}^{r_{1}}\widehat{\mu}_{\xi_{j}}(a_{j}b_{1}A_{0}u+b_{j} a_{1}B_{0}v)\prod_{j=r_{1}+1}^{r_{2}}\widehat{\mu}_{\xi_{j}}(a_{j}b_{r_{1}+1}A_{ 1}u+b_{j}a_{r_{1}+1}B_{1}v)\cdots\] \[\cdots\prod_{j=r_{k}+1}^{s}\widehat{\mu}_{\xi_{j}}(a_{j}b_{r_{k} +1}A_{k}u+b_{j}a_{r_{k}+1}B_{k}v)\prod_{j=s+1}^{t}\widehat{\mu}_{\xi_{j}}(a_{j }Au)\prod_{j=t+1}^{m}\widehat{\mu}_{\xi_{j}}(b_{j}Bv)\times\] \[\times\prod_{j=m+1}^{n}\widehat{\mu}_{\xi_{j}}(a_{j}Au+b_{j}Bv)= \prod_{j=1}^{n}\widehat{\mu}_{\xi_{j}}(C_{j}u+D_{j}v),\quad u,v\in Y, \tag{26}\]
where \(C_{j}=c_{j}A\), \(D_{j}=d_{j}B\). Taking into account (21), we obtain from (26)
\[\prod_{j=1}^{r_{1}}\widehat{\mu}_{\xi_{j}}\left(a_{1}b_{j}(A_{0} u+B_{0}v)\right)\prod_{j=r_{1}+1}^{r_{2}}\widehat{\mu}_{\xi_{j}}\left(a_{r_{1}+1} b_{j}(A_{1}u+B_{1}v)\right)\cdots\] \[\cdots\prod_{j=r_{k}+1}^{s}\widehat{\mu}_{\xi_{j}}\left(a_{r_{k}+ 1}b_{j}(A_{k}u+B_{k}v)\right)\prod_{j=s+1}^{t}\widehat{\mu}_{\xi_{j}}(a_{j}Au) \prod_{j=t+1}^{m}\widehat{\mu}_{\xi_{j}}(b_{j}Bv)\times\] \[\times\prod_{j=m+1}^{n}\widehat{\mu}_{\xi_{j}}(a_{j}Au+b_{j}Bv)= \prod_{j=1}^{n}\widehat{\mu}_{\xi_{j}}(C_{j}u+D_{j}v),\quad u,v\in Y. \tag{27}\]
Put
\[\widehat{\mu}_{0}(y)=\prod_{j=1}^{r_{1}}\widehat{\mu}_{\xi_{j}}\left(a_{1}b_{j}y \right),\quad\widehat{\mu}_{1}(y)=\prod_{j=r_{1}+1}^{r_{2}}\widehat{\mu}_{\xi_{ j}}\left(a_{r_{1}+1}b_{j}y\right),\cdots,\quad\widehat{\mu}_{k}(y)=\prod_{j=r_{k}+1}^{s} \widehat{\mu}_{\xi_{j}}\left(a_{r_{k}+1}b_{j}y\right),\]
\[\widehat{\mu}_{k+1}(y)=\prod_{j=s+1}^{t}\widehat{\mu}_{\xi_{j}}(a_{j}Ay),\quad \widehat{\mu}_{k+2}(y)=\prod_{j=t+1}^{m}\widehat{\mu}_{\xi_{j}}(b_{j}By).\]
Note that each \(\widehat{\mu}_{i}(y)\) is a characteristic function of a distribution \(\mu_{i}\) on \(X\). Namely,
\[\mu_{0}=f_{a_{1}b_{1}}(\mu_{\xi_{1}})*\cdots*f_{a_{1}b_{r_{1}}}(\mu_{\xi_{r_{ 1}}}),\]
\[\mu_{1}=f_{a_{r_{1}+1}b_{r_{1}+1}}(\mu_{\xi_{r_{1}+1}})*\cdots*f_{a_{r_{1}+1}b_{r_{2}}}(\mu_{\xi_{r_{2}}}),\]
\[\cdots,\]
\[\mu_{k}=f_{a_{r_{k}+1}b_{r_{k}+1}}(\mu_{\xi_{r_{k}+1}})*\cdots*f_{a_{r_{k}+1}b _{s}}(\mu_{\xi_{s}}),\]
\[\mu_{k+1}=f_{a_{s+1}A}(\mu_{\xi_{s+1}})*\cdots*f_{a_{t}A}(\mu_{\xi_{t}}),\]
\[\mu_{k+2}=f_{b_{t+1}B}(\mu_{\xi_{t+1}})*\cdots*f_{b_{m}B}(\mu_{\xi_{m}}).\]
If we prove that all \(\widehat{\mu}_{i}(y)\) (\(i=\overline{0,k+2}\)) are characteristic functions of Gaussian distributions then, taking into account Lemma 2 and (18), we obtain that all distributions \(\mu_{\xi_{j}}\in\Gamma(X)\) (\(j=\overline{1,m}\)). Now we get from (27)
\[\prod_{i=0}^{k}\widehat{\mu}_{i}\left(A_{i}u+B_{i}v\right)\ \widehat{\mu}_{k+1}(A_{k+1}u+B_{k+1}v) \widehat{\mu}_{k+2}(A_{k+2}u+B_{k+2}v)\prod_{j=m+1}^{n}\widehat{\mu}_{\xi_{j }}(A_{j}u+B_{j}v)=\]
\[=\prod_{j=1}^{n}\widehat{\mu}_{\xi_{j}}(C_{j}u+D_{j}v),\quad u,v\in Y, \tag{28}\]
where \(A_{k+1}=1\), \(B_{k+1}=0\); \(A_{k+2}=0\), \(B_{k+2}=1\); \(A_{j}=a_{j}A\), \(B_{j}=b_{j}B\) (\(j=\overline{m+1,n}\)). We have that
\[A_{i}D_{j}-B_{i}C_{j}\neq 0,\quad i=\overline{0,k+2}\ \mbox{and}\ j=\overline{1,n}. \tag{29}\]
Indeed, taking into account (8), (19), (20), (22)-(24), we get
\[A_{i}D_{j}-B_{i}C_{j}=\frac{A}{b_{r_{i}+1}}\cdot d_{j}B-\frac{B}{a_{r_{i}+1}} \cdot c_{j}A=\frac{AB}{a_{r_{i}+1}b_{r_{i}+1}}(a_{r_{i}+1}d_{j}-b_{r_{i}+1}c_ {j})\neq 0,\quad i=\overline{0,k},\]
\[A_{k+1}D_{j}-B_{k+1}C_{j}=D_{j}=d_{j}B\neq 0,\]
\[A_{k+2}D_{j}-B_{k+2}C_{j}=-C_{j}=-c_{j}A\neq 0.\]
Also we have that
\[A_{i}B_{j}-B_{i}A_{j}\neq 0,\quad i\neq j,\quad i=\overline{0,k+2},\quad j= \overline{0,k+2}\ \mbox{and}\ j=\overline{m+1,n}. \tag{30}\]
Indeed, taking into account (21), we have
\[A_{i}B_{j}-B_{i}A_{j}=\frac{A}{b_{r_{i}+1}}\cdot\frac{B}{a_{r_{j}+1}}-\frac{B} {a_{r_{i}+1}}\cdot\frac{A}{b_{r_{j}+1}}=\frac{AB(a_{r_{i}+1}b_{r_{j}+1}-b_{r_{i }+1}a_{r_{j}+1})}{b_{r_{i}+1}a_{r_{j}+1}a_{r_{i}+1}b_{r_{j}+1}}\neq 0,\ i,j= \overline{0,k},i\neq j,\]
\[A_{i}B_{k+1}-B_{i}A_{k+1}=-B_{i}\neq 0,\ i=\overline{0,k},\]
\[A_{i}B_{k+2}-B_{i}A_{k+2}=A_{i}\neq 0,\quad i=\overline{0,k},\]
\[A_{k+1}B_{k+2}-B_{k+1}A_{k+2}=A_{k+1}B_{k+2}=1\neq 0.\]
We make some notes about \(a_{j},b_{j}\) when \(j=\overline{m+1,n}\). First we note that if \(a_{j_{0}}=b_{j_{0}}=0\) for some \(j_{0}\) then the corresponding multiplier \(\widehat{\mu}_{\xi_{j_{0}}}\) is absent in equation (3). Therefore we can ignore this case. Since condition (8) is not valid for such \(a_{j},b_{j}\), we have \(a_{j}d_{\alpha}-b_{j}c_{\alpha}=0\) for some \(c_{\alpha},d_{\alpha}\). Since \(c_{\alpha},d_{\alpha}\) are not zero simultaneously, we can suppose for definiteness that \(d_{\alpha}\neq 0\). Note that if \(d_{\alpha}\neq 0\) then \(b_{j}\neq 0\). Then \(a_{j}=\frac{b_{j}c_{\alpha}}{d_{\alpha}}\). Taking into account condition (8), we have
\[A_{i}B_{j}-B_{i}A_{j}=\frac{A}{b_{r_{i}+1}}\cdot b_{j}B-\frac{B}{a_{r_{i}+1}} \cdot a_{j}A=\frac{AB(a_{r_{i}+1}b_{j}-b_{r_{i}+1}a_{j})}{b_{r_{i}+1}a_{r_{i}+ 1}}=\frac{AB(a_{r_{i}+1}b_{j}-b_{r_{i}+1}\frac{b_{j}c_{\alpha}}{d_{\alpha}})}{ b_{r_{i}+1}a_{r_{i}+1}}=\]
\[=\frac{ABb_{j}(a_{r_{i}+1}d_{\alpha}-b_{r_{i}+1}c_{\alpha})}{d_{\alpha}b_{r_{i }+1}a_{r_{i}+1}}\neq 0,\ i=\overline{0,k},j=\overline{m+1,n}.\]
If there exists a nonzero coefficient \(A_{k+1}\) then condition (20) is fulfilled. Therefore \(b_{j}\neq 0\) for \(j=\overline{m+1,n}\).
\[A_{k+1}B_{j}-B_{k+1}A_{j}=B_{j}=b_{j}B\neq 0,\quad j=\overline{m+1,n}.\]
If there exists a nonzero coefficient \(B_{k+2}\) then condition (19) is fulfilled. Therefore \(a_{j}\neq 0\) for \(j=\overline{m+1,n}\).
\[A_{k+2}B_{j}-B_{k+2}A_{j}=-A_{j}=-a_{j}A\neq 0,\quad j=\overline{m+1,n}.\]
Now we can solve equation (3).
Put \(\rho_{i}=\mu_{i}*\overline{\mu}_{i}\), \(\varrho_{j}=\mu_{\xi_{j}}*\overline{\mu}_{\xi_{j}}\). Then we have \(\widehat{\rho}_{i}(y)=|\widehat{\mu}_{i}(y)|^{2}>0\), \(\widehat{\varrho}_{j}(y)=|\widehat{\mu}_{\xi_{j}}(y)|^{2}>0\), \(y\in Y\). The characteristic functions \(\widehat{\rho}_{i}(y)\) and \(\widehat{\varrho}_{j}(y)\) also satisfy equation (28), and all the factors in (28) are greater than zero.
Put \(\varphi_{i}(y)=\log\widehat{\rho}_{i}(y)\), \(\psi_{j}(y)=\log\widehat{\varrho}_{j}(y)\). It follows from (28) that the functions \(\varphi_{i}(y)\) and \(\psi_{j}(y)\) satisfy the following equation
\[\sum_{i=0}^{k+2}\varphi_{i}\left(A_{i}u+B_{i}v\right)+\sum_{j=m+1}^{n}\psi_{j }(A_{j}u+B_{j}v)=\sum_{j=1}^{n}\psi_{j}(C_{j}u+D_{j}v),\quad u,v\in Y. \tag{31}\]
It follows from Lemma 4 that for example the function \(\varphi_{0}(y)\) satisfies the equation (4) which takes the form
\[\Delta_{(A_{0}B_{1}-B_{0}A_{1})l_{1}}...\Delta_{(A_{0}B_{k+2}-B_{0}A_{k+2})l_{ m}}\Delta_{(A_{0}B_{m+1}-B_{0}A_{m+1})l_{1}}...\Delta_{(A_{0}B_{n}-B_{0}A_{n})l_{m}}\]
\[\Delta_{(A_{0}D_{1}-B_{0}C_{1})k_{1}}...\Delta_{(A_{0}D_{n}-B_{0}C_{n})k_{n}} \varphi_{0}(A_{0}y)=0,\quad y\in Y, \tag{32}\]
where \(k_{1},\ldots,k_{n}\) and \(l_{1},\ldots,l_{m}\) are arbitrary elements of \(Y\). It follows from equation (32) and conditions (18), (23), (24), (29), (30) that
\[\Delta_{h}^{m+n+1}\varphi_{0}(y)=0,\quad y,h\in Y. \tag{33}\]
Thus \(\varphi_{0}(y)\) is a continuous polynomial on the group \(Y\). Applying Lemma 3, we obtain that \(\rho_{0}\in\Gamma(X)\). Hence by Lemma 2\(\mu_{0}\in\Gamma(X)\) and respectively \(\mu_{\xi_{1}},\ldots,\mu_{\xi_{r_{1}}}\in\Gamma(X)\). By analogy we obtain that all \(\mu_{\xi_{j}}\in\Gamma(X)\)\((j=\overline{1,m})\).
(ii) Let \(X=X_{(p)}\), where \(p\) is a prime number (\(p>2\)). The proof of this case is the same as the proof of _Case 1_ (i). There is only one difference: if an integer \(b_{i}a_{j}-b_{j}a_{i}\) is not admissible for \(X\), then \(b_{i}a_{j}-b_{j}a_{i}=0\ (mod\ p)\). At the end we note that since the connected component of zero of the group \(X\) is equal to zero, we have in this case that \(\Gamma(X)=D(X)\).
_Case 2._ Now we consider case 2 when all integers \(b_{i}a_{j}-b_{j}a_{i}\) for \(i\neq j\) are not admissible for \(X\).
(i) Let \(X\) be a torsion-free group. In this case if all integers \(b_{i}a_{j}-b_{j}a_{i}\) are not admissible for \(X\) then all \(b_{i}a_{j}-b_{j}a_{i}=0\) for \(i\neq j\). Substitute \(b_{1}y\) for \(u\) and \(-a_{1}y\) for \(v\) in (10). Note that \(a_{j}b_{1}y-b_{j}a_{1}y=0\) for all \(j=\overline{1,n}\). We get
\[1=\prod_{j=1}^{n}\widehat{\mu}_{\xi_{j}}\left((b_{1}c_{j}-a_{1}d_{j})y\right), \quad y\in Y. \tag{34}\]
Taking into account condition (8), we get from (34) that all \(\widehat{\mu}_{\xi_{j}}(y)=1\) for all \(y\in Y\), i.e. all \(\mu_{\xi_{j}}\in D(X)\) (\(j=\overline{1,n}\)).
(ii) Let \(X=X_{(p)}\), where \(p\) is a prime number (\(p>2\)). In this case if all integers \(b_{i}a_{j}-b_{j}a_{i}\) are not admissible for \(X\) then all \(b_{i}a_{j}-b_{j}a_{i}=0\ (mod\ p)\) for \(i\neq j\). Arguing as in _Case 2_ (i) we obtain that all \(\mu_{\xi_{j}}\in D(X)\) (\(j=\overline{1,n}\)). \(\Box\)
Theorem 1 is exact in the following sense.
**Proposition 1**: _Let \(X\) be a second countable locally compact Abelian group. If \(X\) is not topologically isomorphic to any of the groups mentioned in Theorem 1, then there exist independent random variables \(\xi_{j},j=\overline{1,n},n\geq 2,\) with values in \(X\) and distributions \(\mu_{\xi_{j}}\) with non-vanishing characteristic functions, and integers \(a_{j},b_{j},c_{j},d_{j}\) such that the random vectors \((L_{1},L_{2})\) and \((L_{3},L_{4})\) are identically distributed, but for all those \(i\), for which condition (8) is valid, \(\mu_{\xi_{i}}\not\in\Gamma(X)\). The linear forms \(L_{1},L_{2},L_{3},L_{4}\) are defined by \((4)-(7)\)._
_Proof._ Let \(x_{0}\) be an element of order \(p\) in \(X\). Let \(K\) be a subgroup of \(X\) generated by the element \(x_{0}\). Then \(K\cong\mathbb{Z}(p)\). Let \(\xi_{1}\) and \(\xi_{2}\) be independent identically distributed random variables with values in \(K\) and with a distribution
\[\mu=mE_{0}+(1-m)E_{x_{0}}, \tag{35}\]
where \(1/2<m<1\). We consider the distribution \(\mu\) as a distribution on the group \(X\). We have that
\[\widehat{\mu}(y)=m+(1-m)(x_{0},y). \tag{36}\]
Let \(a_{j}=b_{j}=c_{j}=1\), \(d_{1}=d_{2}=1-p\), i.e. \(L_{1}=\xi_{1}+\xi_{2}\), \(L_{2}=\xi_{1}+\xi_{2}\), \(L_{3}=\xi_{1}+\xi_{2}\), \(L_{4}=(1-p)\xi_{1}+(1-p)\xi_{2}\). It is easy to verify that condition (8) is valid for \(i=1,2\). By Lemma 1 the vectors \((L_{1},L_{2})\) and \((L_{3},L_{4})\) are identically distributed if and only if the characteristic function \(\widehat{\mu}(y)\) satisfy equation (10) which takes the form
\[\widehat{\mu}(u+v)\widehat{\mu}(u+v)=\widehat{\mu}(u+(1-p)v)\widehat{\mu}(u+( 1-p)v),\quad u,v\in Y. \tag{37}\]
Rewrite this equation in the following form
\[\widehat{\mu}(u+v)\widehat{\mu}(u+v)=\widehat{\mu}(u+v-pv)\widehat{\mu}(u+v-pv ),\quad u,v\in Y. \tag{38}\]
Since \(\sigma(\mu)\subset K\subset X_{(p)}\), we have \(\widehat{\mu}(y+h)=\widehat{\mu}(y)\) for all \(y\in Y\), \(h\in Y^{(p)}\). Therefore equation (38) is an identity. Hence by Lemma 1 the vectors \((L_{1},L_{2})\) and \((L_{3},L_{4})\) are identically distributed. It remains to note that \(\mu\not\in\Gamma(X)\), since \(\mu\) is a non-degenerate distribution concentrated on two points. \(\Box\)
**Remark 3**: Let \(L_{1}\), \(L_{2}\) be as in Theorem 1. Suppose that \(L_{3}=L_{1}\) and \(L_{4}=-L_{2}\). If the vectors \((L_{1},L_{2})\) and \((L_{1},-L_{2})\) are identically distributed, then the conditional distribution of \(L_{2}\) given \(L_{1}\) is symmetric. Condition (8) takes the form
\[a_{i}b_{j}+b_{i}a_{j}\mbox{ are admissible integers for }X\mbox{ for all }j=\overline{1,n}. \tag{39}\]
If we suppose that \(a_{j},b_{j}\) are admissible integers for \(X\) then we obtain from Theorem 1 the description of locally compact Abelian groups for which the group analogue of the Heyde theorem takes place ([26, Theorem 3.1]).
**Remark 4**: We consider two sets of independent random variables \(\xi_{1},\ldots,\xi_{n}\) and \(\xi^{\prime}_{1},\ldots,\xi^{\prime}_{n}\), where \(\xi_{j}\) and \(\xi^{\prime}_{j}\) are identically distributed. Let \(L_{1}\), \(L_{2}\) be as in Theorem 1. Suppose that \(L_{3}=L_{1}\) and \(L_{4}=L^{\prime}_{2}=b_{1}\xi^{\prime}_{1}+\cdots+b_{n}\xi^{\prime}_{n}\). If the vectors \((L_{1},L_{2})\) and \((L_{1},L^{\prime}_{2})\) are identically distributed, then \(L_{1}\) and \(L_{2}\) are independent. Condition (8) takes the form
\[a_{i}b_{i}\mbox{ are admissible integers for all }i=\overline{1,n}. \tag{40}\]
We obtain from Theorem 1 the description of locally compact Abelian groups for which the group analogue of the Darmois-Skitovich theorem takes place ([6, Theorem 3]).
The following theorem describes locally compact Abelian groups for which the group analogue of the Klebanov theorem takes place without the assumption that the characteristic functions of the distributions of the random variables do not vanish. Note that the resulting class of groups changes (compare with Theorem 1).
**Theorem 2**: _Let \(X=\mathbb{R}^{n}\times D\), where \(n\geq 0\) and \(D\) is a discrete torsion-free group. Let \(\xi_{1},\ldots,\xi_{n}\) be independent random variables with values in \(X\) and distributions \(\mu_{\xi_{j}}\). Consider the linear forms (4)-(7), where the coefficients \(a_{j},b_{j},c_{j},d_{j}\) are integers. If the random vectors \((L_{1},L_{2})\) and \((L_{3},L_{4})\) are identically distributed, then for all those \(i\) for which condition (8) is valid \(\mu_{\xi_{i}}\in\Gamma(X)\)._
To prove Theorem 2, we need the following lemma.
**Lemma 5**: _Let \(K\) be a connected compact Abelian group and \(\mu_{j}\) be distributions on the group \(K^{*}\). Assume that the characteristic functions \(\widehat{\mu}_{j}(y)\) satisfy equation (10) on the group \(K\) and \(\widehat{\mu}_{j}(y)\geq 0\). Let \(a_{j},b_{j},c_{j},d_{j}\) be integers. For all those \(i\) for which \(a_{i}d_{j}-b_{i}c_{j}\neq 0\) for all \(j=\overline{1,n}\), the characteristic function \(\widehat{\mu}_{i}(y)=1\), \(y\in K\)._
The proof of Lemma 5 is carried out in the same way as the proof of Lemma 3.1 of the paper [6] or the proof of Lemma 3.8 of the paper [26], but in our case it is based on the Klebanov theorem on the real line. We omit the proof of Lemma 5.
_Proof of Theorem 2._ Lemma 1 implies that the characteristic functions of distributions \(\mu_{\xi_{j}}\) satisfy equation (10). Put \(\nu_{j}=\mu_{\xi_{j}}*\overline{\mu}_{\xi_{j}}\). Then we have \(\widehat{\nu}_{j}(y)=|\widehat{\mu}_{\xi_{j}}(y)|^{2}\geq 0\), \(y\in Y\). The characteristic functions \(\widehat{\nu}_{j}(y)\) also satisfy equation (10). If we prove that \(\nu_{j}\in\Gamma(X)\), then it follows from Lemma 2, that \(\mu_{\xi_{j}}\in\Gamma(X)\). Therefore we can assume from the beginning that all factors \(\widehat{\mu}_{\xi_{j}}(y)\geq 0\) in (10).
Renumbering functions in (10), we can assume that condition (8) is fulfilled for all \(i=\overline{1,m}\) and it is not fulfilled for all \(i=\overline{m+1,n}\), \(m\leq n\).
Since \(X=\mathbb{R}^{n}\times D\), we have \(Y\approx\mathbb{R}^{n}\times K\), where \(K\) is a connected compact group. We can assume without loss of generality that \(Y=\mathbb{R}^{n}\times K\).
Consider the restriction of equation (10) to \(K\). It follows from Lemma 5 that \(\widehat{\mu}_{\xi_{i}}(y)=1\) on \(K\) for \(i=\overline{1,m}\). Then \(\widehat{\mu}_{\xi_{i}}(y+h)=\widehat{\mu}_{\xi_{i}}(y)\) for all \(y\in Y\), \(h\in K\). We consider the restriction of equation (10) to each one-dimensional subspace in \(\mathbb{R}^{n}\) and obtain from the Klebanov theorem that all restrictions of the characteristic functions \(\widehat{\mu}_{\xi_{i}}(y)\) are characteristic functions of Gaussian distributions for \(i=\overline{1,m}\). Therefore \(\widehat{\mu}_{\xi_{i}}(y)\) are the characteristic functions of Gaussian distributions for \(y\in\mathbb{R}^{n}\) and \(i=\overline{1,m}\). Hence \(\mu_{\xi_{i}}\in\Gamma(X)\) for \(i=\overline{1,m}\). \(\Box\)
Theorem 2 is exact in the following sense.
**Proposition 2**: _Let \(X\) be a second countable locally compact Abelian group. If \(X\) is not topologically isomorphic to the group mentioned in Theorem 2, then there exist independent random variables \(\xi_{j},j=\overline{1,n},n\geq 2,\) with values in \(X\) and distributions \(\mu_{\xi_{j}}\), and integers \(a_{j},b_{j},c_{j},d_{j}\) such that the random vectors \((L_{1},L_{2})\) and \((L_{3},L_{4})\) are identically distributed, but for all those \(i\), for which condition (8) is valid, \(\mu_{\xi_{i}}\not\in\Gamma(X)*I(X)\). The linear forms \(L_{1},L_{2},L_{3},L_{4}\) are defined by \((4)-(7)\)._
To prove Proposition 2 we need the following two lemmas.
**Lemma 6**: ([26]) _Let \(X\) be a second countable locally compact Abelian group. If \(X\not\approx\mathbb{R}^{n}\times D\), where \(n\geq 0\) and \(D\) is a discrete torsion-free group, \(X\neq X_{(2)}\), and \(X\neq X_{(3)}\), then there exist independent random variables \(\xi_{j},j=\overline{1,n},n\geq 2,\) with values in \(X\) and distributions \(\mu_{j}\), and admissible integers \(a_{j},\ b_{j}\) such that \(b_{i}a_{j}+b_{j}a_{i}\) are admissible integers for \(X\) for all \(i,j\), such that the conditional distribution of \(L_{2}\) given \(L_{1}\) is symmetric, but all \(\mu_{j}\not\in\Gamma(X)*I(X)\). The linear forms \(L_{1},L_{2}\) are defined by \((4)-(5)\)._
**Lemma 7**: _Let \(X=\mathbb{Z}(2)\) or \(X=\mathbb{Z}(3)\). Then there exist independent random variables \(\xi_{j},j=\overline{1,n},n\geq 2,\) with values in \(X\) and distributions \(\mu_{\xi_{j}}\), and integers \(a_{j},b_{j},c_{j},d_{j}\) such that the random vectors \((L_{1},L_{2})\) and \((L_{3},L_{4})\) are identically distributed, but for all those \(i\), for which condition (8) is valid, \(\mu_{\xi_{i}}\not\in I(X)\)._
_Proof._ Let \(L_{1}=\xi_{1}+...+\xi_{n-2}+\xi_{n-1}\), \(L_{2}=\xi_{1}+...+\xi_{n-2}+\xi_{n}\), \(L_{3}=\xi_{1}+...+\xi_{n-2}+\xi_{n-1}\), \(L_{4}=\xi_{n}\). Condition (8) is valid for all \(i=\overline{1,n-2}\). Indeed, we have in this case \(a_{i}d_{j}-b_{i}c_{j}=d_{j}-c_{j}=\pm 1\). Note that condition (8) is not valid for \(i=n-1\) and \(i=n\). Suppose that \(\mu_{\xi_{1}},\ldots,\mu_{\xi_{n-2}}\) are arbitrary distributions. Put \(\mu_{\xi_{n-1}}=\mu_{\xi_{n}}=m_{X}\) and verify that the random vectors \((L_{1},L_{2})\) and \((L_{3},L_{4})\) are identically distributed. By Lemma 1 it suffices to show that the characteristic functions of distributions \(\mu_{\xi_{j}}\) satisfy equation (10) which takes the form
\[\prod_{j=1}^{n-2}\widehat{\mu}_{\xi_{j}}(u+v)\widehat{m}_{X}(u)\widehat{m}_{X} (v)=\prod_{j=1}^{n-2}\widehat{\mu}_{\xi_{j}}(u)\widehat{m}_{X}(u)\widehat{m}_{ X}(v),\quad u,v\in Y. \tag{41}\]
Obviously, equation (41) is satisfied if \(u=v=0\). Suppose that either \(u\neq 0\) or \(v\neq 0\). Then \(\widehat{m}_{X}(u)\widehat{m}_{X}(v)=0\), i.e. the left-hand side and the right-hand side of equation (41) are equal to zero. It is clear that we can choose distributions \(\mu_{\xi_{1}},\ldots,\mu_{\xi_{n-2}}\) in such a way that \(\mu_{\xi_{j}}\notin I(X)\). \(\Box\)
_Proof of Proposition 2._ Suppose that \(X\not\approx\mathbb{R}^{n}\times D\), where \(n\geq 0\) and \(D\) is a discrete torsion-free group, \(X\neq X_{(2)}\), and \(X\neq X_{(3)}\). Let \(L_{1}\), \(L_{2}\) be as in (4) and (5). We set \(L_{3}=L_{1}\), \(L_{4}=-L_{2}\). Then condition (8) takes the form (39). The random vectors \((L_{1},L_{2})\) and \((L_{1},-L_{2})\) are
identically distributed if and only if the conditional distribution of \(L_{2}\) given \(L_{1}\) is symmetric. Now the statement of Proposition 2 follows from Lemma 6.
Suppose now that \(X=X_{(2)}\) or \(X=X_{(3)}\). Then the statement of Proposition 2 follows from Lemma 7.
**Remark 5**: As in Remark 3 we obtain from Theorem 2 that the group analogue of the Heyde theorem takes place for groups \(X=\mathbb{R}^{n}\times D\), where \(n\geq 0\) and \(D\) is a discrete torsion-free group (compare with [26, Theorem 3.6]). But it was proved in [26] that the group analogue of the Heyde theorem takes place for groups \(X=X_{(3)}\).
In general the group analogue of the Klebanov theorem does not take place for such groups. Let us suppose that \(X=X_{(3)}\) and in addition all \(a_{j},b_{j},c_{j},d_{j}\) are admissible integers for \(X\). Then we can assume without loss of generality that \(a_{j},b_{j},c_{j},d_{j}\) are equal to \(\pm 1\). Suppose also that condition (8) is fulfilled for \(i=\overline{1,m}\), where \(m\leq n\). It follows from condition (8) that if \(a_{i}=-b_{i}\) (\(1\leq i\leq m\)) then \(c_{j}=d_{j}\) for all \(j=\overline{1,n}\), or if \(a_{i}=b_{i}\) then \(c_{j}=-d_{j}\) for all \(j=\overline{1,n}\). For definiteness, suppose that \(a_{i}=-b_{i}\) for all \(i=\overline{1,m}\). Then it follows from (8) that \(c_{j}=d_{j}\) for all \(j=\overline{1,n}\). Thus equation (10) has the form
\[\prod_{j=1}^{m}\widehat{\mu}_{\xi_{j}}(a_{j}(u-v))\prod_{j=m+1}^{n}\widehat{ \mu}_{\xi_{j}}(a_{j}(u+v))=\prod_{j=1}^{n}\widehat{\mu}_{\xi_{j}}(c_{j}(u+v)), \quad u,v\in Y. \tag{42}\]
Putting \(v=-u\) in (42), we get
\[\prod_{j=1}^{m}\widehat{\mu}_{\xi_{j}}(2a_{j}u)=1,\quad u\in Y. \tag{43}\]
Since \(f_{2a_{j}}\in Aut(Y)\) we get from (43) that all \(\widehat{\mu}_{\xi_{j}}(y)=1\) for \(y\in Y\). Therefore all \(\mu_{\xi_{j}}\in D(X)\). Thus, in this special case the group analogue of the Klebanov theorem implies the group analogue of the Heyde theorem for \(X=X_{(3)}\).
**Remark 6**: As in Remark 4 we obtain from Theorem 2 that the group analogue of the Darmois-Skitovich theorem takes place for groups \(X=\mathbb{R}^{n}\times D\), where \(n\geq 0\) and \(D\) is a discrete torsion-free group (compare with [6, Theorem 1]). But it was proved in [6] that the group analogue of the Darmois-Skitovich theorem takes place for groups \(X=X_{(2)}\).
In general the group analogue of the Klebanov theorem does not take place for such groups. Let us suppose that \(X=X_{(2)}\) in Theorem 2. In addition we suppose that \(n=2k\) and \(a_{j},b_{j},c_{j}\) are admissible for only \(j=\overline{1,k}\) and \(d_{j}\) are admissible for only \(j=\overline{k+1,n}\). Then we can assume without loss of generality that \(a_{j}=b_{j}=c_{j}=1\) for \(j=\overline{1,k}\), \(a_{j}=b_{j}=c_{j}=0\) for \(j=\overline{k+1,n}\), and \(d_{j}=1\) for \(j=\overline{k+1,n}\), \(d_{j}=0\) for \(j=\overline{1,k}\). Thus equation (10) has the form
\[\prod_{j=1}^{k}\widehat{\mu}_{\xi_{j}}(u+v)=\prod_{j=1}^{k}\widehat{\mu}_{\xi _{j}}(u)\prod_{j=k+1}^{n}\widehat{\mu}_{\xi_{j}}(v),\quad u,v\in Y. \tag{44}\]
Putting \(v=u\) in (44), we get
\[\prod_{j=1}^{n}\widehat{\mu}_{\xi_{j}}(u)=1,\quad u\in Y. \tag{45}\]
We get from (45) that all \(\widehat{\mu}_{\xi_{j}}(y)=1\) for \(y\in Y\). Therefore all \(\mu_{\xi_{j}}\in D(X)\). Thus, in this special case the group analogue of the Klebanov theorem implies the group analogue of the Darmois-Skitovich theorem for \(X=X_{(2)}\).
**Remark 7**: Theorems 1 and 2 fail if we omit condition (8). Indeed, let \(\xi_{1}\) and \(\xi_{2}\) be independent identically distributed random variables with a distribution \(\mu\). Put \(L_{1}=L_{3}=\xi_{1}+\xi_{2}\), \(L_{2}=-L_{4}=\xi_{1}-\xi_{2}\). In this case condition (8) is not fulfilled. If the random vectors \((L_{1},L_{2})\) and \((L_{1},-L_{2})\) are identically distributed then Lemma 1 implies that the characteristic function \(\widehat{\mu}(y)\) satisfies equation (10) which takes the form
\[\widehat{\mu}(u+v)\widehat{\mu}(u-v)=\widehat{\mu}(u-v)\widehat{\mu}(u+v), \quad u,v\in Y. \tag{46}\]
Equation (46) is the identity for any function \(\widehat{\mu}(y)\). Therefore we can suppose that \(\mu\not\in\Gamma(X)*I(X)\).
## 4 Case of Q-independent random variables
In the article [17] A.M. Kagan and G.J. Szekely introduced the notion of \(Q\)-independence of random variables, which generalizes the notion of independence of random variables. Then in [10] G.M. Feldman extended the notion of \(Q\)-independence in a natural way to random variables taking values in a locally compact Abelian group. These studies were continued in [25] and in [26].
Let \(\xi_{1},\ldots,\xi_{n}\) be random variables with values in the group \(X\). The random variables \(\xi_{1},\ldots,\xi_{n}\) are \(Q\)-independent if the characteristic function of the vector \((\xi_{1},\ldots,\xi_{n})\) can be represented in the form
\[\widehat{\mu}_{(\xi_{1},\ldots,\xi_{n})}(y_{1},\ldots,y_{n})=\prod_{j=1}^{n} \widehat{\mu}_{\xi_{j}}(y_{j})\exp\{q(y_{1},\ldots,y_{n})\},\quad y_{j}\in Y, \tag{47}\]
where \(q(y_{1},\ldots,y_{n})\) is a continuous polynomial on the group \(Y^{n}\) such that \(q(0,\ldots,0)=0\).
We formulate an analogue of Lemma 1 for \(Q\)-independent random variables.
**Lemma 8**: _Let \(X\) be a second countable locally compact Abelian group, \(Y=X^{*}\). Let \(\xi_{1},\ldots,\xi_{n}\) be \(Q\)-independent random variables with values in \(X\) and distributions \(\mu_{\xi_{j}}\). Consider the linear forms \((4)-(7)\) where the coefficients \(a_{j},b_{j},c_{j},d_{j}\) are integers. The random vectors \((L_{1},L_{2})\) and \((L_{3},L_{4})\) are identically distributed if and only if the characteristic functions \(\widehat{\mu}_{\xi_{j}}(y)\) satisfy the equation_
\[\prod_{j=1}^{n}\widehat{\mu}_{\xi_{j}}(a_{j}u+b_{j}v)=\prod_{j=1}^{n}\widehat {\mu}_{\xi_{j}}(c_{j}u+d_{j}v)\exp\{q(u,v)\},\quad u,v\in Y, \tag{48}\]
_where \(q(u,v)\) is a continuous polynomial on the group \(Y^{2}\) such that \(q(0,0)=0\)._
_Proof._ Using the \(Q\)-independence of \(\xi_{1},\ldots,\xi_{n}\), in the same way as in Lemma 1, we obtain from the definition that
\[\widehat{\mu}_{(L_{1},L_{2})}(u,v)={\bf E}\left[(\xi_{1},a_{1}u+b _{1}v)\cdots(\xi_{n},a_{n}u+b_{n}v)\right]=\] \[=\prod_{j=1}^{n}\widehat{\mu}_{\xi_{j}}(a_{j}u+b_{j}v)\exp\{q_{1} (a_{1}u+b_{1}v,\ldots,a_{n}u+b_{n}v)\},\quad u,v\in Y.\]
Analogously we obtain that
\[\widehat{\mu}_{(L_{3},L_{4})}(u,v)=\prod_{j=1}^{n}\widehat{\mu}_{\xi_{j}}(c_{ j}u+d_{j}v)\exp\{q_{2}(c_{1}u+d_{1}v,\ldots,c_{n}u+d_{n}v)\},\quad u,v\in Y.\]
Since the vectors \((L_{1},L_{2})\) and \((L_{3},L_{4})\) are identically distributed, we have \(\widehat{\mu}_{(L_{1},L_{2})}(u,v)=\widehat{\mu}_{(L_{3},L_{4})}(u,v)\). Put \(q(u,v)=q_{2}(c_{1}u+d_{1}v,\ldots,c_{n}u+d_{n}v)-q_{1}(a_{1}u+b_{1}v,\ldots,a_{ n}u+b_{n}v)\). Thus, equation (48) is valid. \(\square\)
Theorem 1 holds true if we consider \(Q\)-independent random variables instead of independent random variables.
**Theorem 3**: _Let \(X\) be a second countable locally compact Abelian group. Let \(\xi_{1},\ldots,\xi_{n}\) be \(Q\)-independent random variables with values in \(X\) and distributions \(\mu_{\xi_{j}}\) with non-vanishing characteristic functions. Consider the linear forms \((4)-(7)\), where the coefficients \(a_{j},b_{j},c_{j},d_{j}\) are integers. If the random vectors \((L_{1},L_{2})\) and \((L_{3},L_{4})\) are identically distributed, then for all those \(i\), for which condition \((8)\) is valid, the following statements hold:_
\((i)\) _If_ \(X\) _is a torsion-free group then_ \(\mu_{\xi_{i}}\in\Gamma(X)\)_;_
\((ii)\) _If_ \(X=X_{(p)}\) _then_ \(\mu_{\xi_{i}}\in D(X)\)_._
_Proof._ The proof of (i) of Theorem 3 is almost the same as the proof of Theorem 1. Instead of equation (10) we have to consider equation (48). Instead of equation (31) we get the following equation
\[\sum_{i=0}^{k+2}\varphi_{i}\left(A_{i}u+B_{i}v\right)+\sum_{j=m+1}^{n}\psi_{j}(A_{j}u+B_{j}v)=\sum_{j=1}^{n}\psi_{j}(C_{j}u+D_{j}v)+q(u,v),\quad u,v\in Y. \tag{49}\]
Then we apply Lemma 4 and complete the proof of (i) of Theorem 3 in the same way as Theorem 1.
The proof of (ii) of Theorem 3 follows from Theorem 1 and the following fact. It is well known that any continuous polynomial is constant on the set of compact elements (see e.g. [8, § 5.7]). If the group \(Y\) consists only of compact elements, then the connected component of zero of the group \(X\) is equal to zero (see e.g. [12, § 24.17]). Thus, independence and \(Q\)-independence of random variables are equivalent on groups \(X\) whose connected component of zero is equal to zero. Hence statement (ii) of Theorem 3 follows from Theorem 1. \(\square\)
Theorem 2 also holds true if we consider \(Q\)-independent random variables instead of independent random variables.
**Theorem 4**: _Let \(X=\mathbb{R}^{n}\times D\), where \(n\geq 0\) and \(D\) is a discrete torsion-free group. Let \(\xi_{1},\ldots,\xi_{n}\) be \(Q\)-independent random variables with values in \(X\) and distributions \(\mu_{\xi_{j}}\). Consider the linear forms \((4)\)-\((7)\), where the coefficients \(a_{j},b_{j},c_{j},d_{j}\) are integers. If the random vectors \((L_{1},L_{2})\) and \((L_{3},L_{4})\) are identically distributed, then for all those \(i\) for which condition \((8)\) is valid \(\mu_{\xi_{i}}\in\Gamma(X)\)._
**Remark 8**: Since independent random variables are also \(Q\)-independent, Propositions 1 and 2 prove the exactness of Theorems 3 and 4.
## Acknowledgements
The author would like to thank the Volkswagen Foundation (VolkswagenStiftung), the Bielefeld University and Prof. Dr. Friedrich Gotze for the support and warm reception.
## Funding
This research was supported by VolkswagenStiftung - Az. 9C108.
|
2306.02457
|
Adaptive and Personalized Exercise Generation for Online Language
Learning
|
Adaptive learning aims to provide customized educational activities (e.g.,
exercises) to address individual learning needs. However, manual construction
and delivery of such activities is a laborious process. Thus, in this paper, we
study a novel task of adaptive and personalized exercise generation for online
language learning. To this end, we combine a knowledge tracing model that
estimates each student's evolving knowledge states from their learning history
and a controlled text generation model that generates exercise sentences based
on the student's current estimated knowledge state and instructor requirements
of desired properties (e.g., domain knowledge and difficulty). We train and
evaluate our model on real-world learner interaction data from Duolingo and
demonstrate that LMs guided by student states can generate superior exercises.
Then, we discuss the potential use of our model in educational applications
using various simulations. These simulations show that our model can adapt to
students' individual abilities and can facilitate their learning efficiency by
personalizing learning sequences.
|
Peng Cui, Mrinmaya Sachan
|
2023-06-04T20:18:40Z
|
http://arxiv.org/abs/2306.02457v1
|
# Adaptive and Personalized Exercise Generation for Online Language Learning
###### Abstract
Adaptive learning aims to provide customized educational activities (e.g., exercises) to address individual learning needs. However, manual construction and delivery of such activities is a laborious process. Thus, in this paper, we study a novel task of _adaptive_ and _personalized_ exercise generation for online language learning. To this end, we combine a knowledge tracing model that estimates each student's evolving knowledge states from their learning history and a controlled text generation model that generates exercise sentences based on the student's current estimated knowledge state and instructor requirements of desired properties (e.g., domain knowledge and difficulty). We train and evaluate our model on real-world learner interaction data from Duolingo and demonstrate that LMs guided by student states can generate superior exercises. Then, we discuss the potential use of our model in educational applications using various simulations. These simulations show that our model can adapt to students' individual abilities and can facilitate their learning efficiency by personalizing learning sequences.1
Footnote 1: Our implementation is available at [https://github.com/nlpcui/AdaptiveQG](https://github.com/nlpcui/AdaptiveQG).
## 1 Introduction
Adaptive learning technologies which continuously monitor student progress to dynamically adjust the level or type of learning materials based on the individual's abilities are quite popular Becker et al. (2018). Empirical studies have shown various benefits of adaptive learning, such as improved student learning outcomes Bailey et al. (2018); Holthaus et al. (2019), lower dropout rates Daines et al. (2016), and increased instructor satisfaction Yarnall et al. (2016). Despite their effectiveness, designing adaptive systems is challenging as it usually involves planning a series of exercises that is personalized and adaptive to each student, which requires diverse exercise planning as well as an understanding of the student learning process.
On the other hand, powered by advances in neural NLP, work has been done on automatically generating text-based exercises or questions for educational purposes in second language learning Heck and Meurers (2022); Perez and Cuadros (2017), mathematics Polozov et al. (2015); Zhou and Huang (2019); Wang et al. (2021), and computer science Susanti et al. (2017). Nevertheless, how to apply these approaches in adaptive systems remains an open question. First, existing methods largely rely on pre-defined question templates or specified information sources (e.g., a passage), thereby resulting in limited knowledge coverage and low question difficulty control, and as a consequence, do not meet each student's individual and nuanced learning needs. Besides, they are usually designed to generate standalone exercises, whereas adaptive learning systems usually require a continuous supply of exercises. Another related line of research studies exercise recommendation to customize learning content to students' capabilities and goals (Wu et al., 2020; Huang et al., 2022).
Figure 1: We first assess student knowledge states from their learning history and then generate exercises based on estimated states and instructor control of desired properties including domain knowledge (vocabulary) and difficulty levels (expected error numbers).
However, these systems are limited by the diversity of the exercise pool.
To address the above limitations, we study the task of exercise generation in the context of adaptive learning, where we hypothesize that a student's _dynamic knowledge state_ holds the key to generating _adaptive_ and _personalized_ exercises. Specifically, we ground our study in the domain of language learning to create exercise sentences for translation, of which Figure 1 illustrates the overall process. We start with an assumption about the dynamics between exercise difficulty, vocabulary, and a student's knowledge state (§ 3). Then, we propose an approach (§ 4) that marries knowledge tracing (KT; Corbett and Anderson (1994)), a technique for estimating students' mastery states of knowledge components from their learning history, with a controlled text generation model that generates the next exercise based on instructor requirements, such as specified _domain knowledge_ and _target difficulty_. We further explore various strategies to adapt the generation of exercises based on students' changing knowledge states. In doing this, our model not only supports personalized generation where the instructor (or the system) can express some desired properties of the generated exercises but is also adaptive to each student's learning progress.
We conduct extensive experiments on real-world student learning data from Duolingo2, a popular online language learning platform that offers structured and individualized learning content. Our results (§ 5) show that pre-trained LMs can help KT assess student language knowledge while student states estimated by KT can guide LMs to generate adaptive and personalized exercises. We further discuss the potential use of our model in educational applications with simulations. The simulations show that our model can dynamically adjust exercise difficulty to match individual learning progress and facilitate their learning efficiency by customizing exercise sequences.
Footnote 2: [https://www.duolingo.com/](https://www.duolingo.com/)
## 2 Related Work
**Adaptive Learning** technologies that dynamically monitor student progress and adjust the course content based on an individual's abilities have demonstrated various benefits in education (Becker et al., 2018). Such systems usually consist of three core components: (1) a _domain model_ which refers to the content and structure of the topic to be taught, (2) a _learner model_ which repeatedly measures and updates learner characteristics, and (3) an _adaption model_ which combines information from the domain and learner model to offer adaptive instructions (Vagale and Niedrite, 2012; Imhof et al., 2020). In this study, we build the learner model based on the KT technique and combine the domain and adaption model into an LM which generates learning content adaptively based on user features captured by the learner model.
**Knowledge Tracing** (Corbett and Anderson, 1994) is the technique to estimate students' knowledge mastery \(\mathbf{s}\) from their practiced exercises (\(\mathbf{e}\)) and responses (\(\mathbf{r}\)):
\[\mathbf{s_{t+1}}=f_{KT}((\mathrm{e_{1}},\mathrm{r_{1}}),(\mathrm{e_{2}}, \mathrm{r_{2}}),...,(\mathrm{e_{t}},\mathrm{r_{t}})). \tag{1}\]
Early KT approaches model \(f_{KT}\) as variants of logistic regression, such as Item Response Theory (IRT) and Additive Factor Model (AFM) (Cen et al., 2008), or probabilistic models such as Bayesian Knowledge Tracing (Corbett and Anderson, 1994) and its variants (Yudelson et al., 2013; Kaser et al., 2017). These approaches heavily rely on their assumptions of the learning process which are often incomplete. In recent years, neural networks have become the dominant method in this area. Piech et al. (2015) proposed the first Deep Knowledge Tracing model based on Recurrent Neural Networks. After that, various architectures have been applied to model different characteristics of learning, such as self-attention (Pandey and Karypis, 2019; Shin et al., 2021), memory networks (Abdelrahman and Wang, 2019), and graph neural networks (Tong et al., 2020).
**Exercise Generation**. Previous exercise generation approaches for language learning primarily retrieve and manipulate text to create fixed types of exercises, such as gap fill and multiple-choice exercises (Agarwal and Mannem, 2011; Perez and Cuadros, 2017; Heck and Meurers, 2022), which are limited by the richness of the corpus. Besides them, some Question Generation (QG) approaches have been proposed for educational purposes (Zhao et al., 2022; Wang et al., 2021). While some of them allow for user control of certain question properties, they do not consider learners' individual and dynamic learning needs and progress. Thus, they cannot achieve the goal of adaptive learning. Recently, Srivastava and Goodman (2021) proposed an adaptive question generation model that connects question difficulty with student knowledge.
However, it neither models students' fine-grained knowledge states nor provides control over domain knowledge. Consequently, it is insufficient for practical use.
**Controlled Text Generation (CTG)** methods aim to steer text generation toward certain attributes. Existing CTG approaches can be broadly classified into three types: directly training a class-conditional language model (CCLM) [21, 22, 23], guiding a model via an attribute discriminator [15, 16], or manipulating decoder's logits (also referred to as weighted decoding) [13, 17]. This study explores difficulty and lexical control in generating language learning exercises. Additionally, we seek to adapt the model's controllability to different users by building the dependency between control signals and individual states.
## 3 Problem Formalization
Let \(\mathcal{H}_{\leq n}=\{(e_{1},r_{1}),...,(e_{n},r_{n})\}\) be a student's **learning history** consisting of \(n\) exercises and responses. Here, \(e_{i}=\{w_{i,1},...,w_{i,|e_{i}|}\}\) is an **exercise sentence** for translation and \(r_{i}\in\{0,1\}^{|e_{i}|}\) is the **correctness label** for each word in \(e_{i}\). We generate the next exercise \(e_{n+1}\) based on:
* \(C_{n+1}\): **knowledge components** that should be involved in \(e_{n+1}\). In language learning, we consider a word as a knowledge component, and therefore \(C_{n+1}=\{c_{1},...,c_{|C_{n+1}|}|c_{*}\in\mathcal{V}\}\) is a subset of vocabulary \(\mathcal{V}\) that should be included in the output. In general, the knowledge components can be user or system defined based on the current learning material.
* \(\mathbf{s_{n+1}}\): a student's **knowledge state** for the knowledge components (the vocabulary) after \(n\) interactions. \(\mathbf{s_{n+1}}\) can be formalized as a \(|\mathcal{V}|\)-dimensional vector with each entry between 0 and 1 indicating the mastery probability of that word.
* \(d_{n+1}\): the **expected difficulty** of \(e_{n+1}\). We use individual performance to estimate problem difficulty. For a particular student, the difficulty of an exercise is defined as the expected number of word errors the student would make in translating it.
Given the above setting, we formalize our task as:
\[e_{n+1}=\operatorname*{arg\,max}_{e}P(e|\mathbf{s_{n+1}},d_{n+1},C_{n+1}), \tag{2}\]
where \(e_{n+1}\) satisfies the following constraints:
\[\forall c\in C_{n+1}:\exists i,e_{n+1:i+|c|}=c, \tag{3}\] \[d_{n+1}=\sum_{w\in e_{n+1}}(1-\mathbf{s_{n+1}}[w]), \tag{4}\]
corresponding to _word constraint_ and _difficulty constraint_, respectively. Here, \(\mathbf{s_{n+1}}[w]\) represents the correct probability of translating word \(w\); therefore, the sum of \(\{1-\mathbf{s}[w],w\in e\}\) is the expected number of errors in translating \(e\), which can be seen as a measure of the difficulty of \(e\).
Our task is distinct from previous CTG works in two aspects: 1) our control is _dynamic_; student states acting as control are also learnable; 2) there is a strong dependency among control signals (Eqs. 3 and 4), which is non-trivial to learn. Note that in this work, we measure difficulty via student performance and only consider vocabulary knowledge in defining s for simplicity. Other definitions of sentence difficulty (e.g., definitions that incorporate other types of linguistic knowledge such as syntax) can be explored in future work.
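To make the difficulty constraint of Eq. 4 concrete, the short sketch below computes the expected error count from a vocabulary-indexed knowledge state. It is our illustration rather than code from the released implementation; the function name, the dictionary representation of \(\mathbf{s}\), and the treatment of unseen words are assumptions of the sketch.

```python
from typing import Dict, List

def exercise_difficulty(exercise: List[str], state: Dict[str, float]) -> float:
    """Expected number of word errors (Eq. 4): the sum of (1 - mastery) over all words.

    `state` maps each vocabulary word to the probability that the student translates
    it correctly; words missing from `state` are treated as unmastered, which is an
    assumption of this sketch, not of the paper.
    """
    return sum(1.0 - state.get(w, 0.0) for w in exercise)

# Toy example: a student who has nearly mastered "the" and "cat" but not "jumps".
state = {"the": 0.95, "cat": 0.90, "jumps": 0.30}
print(exercise_difficulty(["the", "cat", "jumps"], state))  # ~0.85 expected errors
```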
## 4 Methodology
Our model is illustrated in Figure 2. We first employ a knowledge tracer \(\mathcal{T}\) (§ 4.1) to estimate a student's time-varying knowledge states. Then, we build an LM-based exercise generator \(\mathcal{G}\) (§ 4.2) to create exercises based on estimated states and specified difficulty and knowledge components (words). We jointly optimize the two modules with an inconsistency loss (§ 4.3) at training and apply a constrained decoding strategy (§ 4.4) at inference. Finally, we discuss how our model can accommodate personalized learning recommendation algorithms on the fly (§ 4.5).
### Knowledge Tracing
The goal of our knowledge tracing model \(\mathcal{T}\) is to estimate a student's latest knowledge state \(\mathbf{s_{n+1}}\) given previous interactions \(\mathcal{H}_{\leq n}\). We adopt the deep knowledge tracing (DKT) model proposed by Piech et al. (2015). We concatenate past exercises as a word sequence \(\mathbf{e}_{1:n}=\{w_{1,1},...,w_{n,|e_{n}|}\}\) and past responses as a label sequence \(\mathbf{r}_{1:n}=\{r_{1,1},...,r_{n,|e_{n}|}\}\), where \(w_{i,j}\) and \(r_{i,j}\) represent the \(jth\) word or label of the \(ith\) exercise. Then we
convert the two sequences into word embeddings \(\vec{\mathbf{e}}_{1:n}\) and label embeddings \(\vec{\mathbf{r}}_{1:n}\) and send them to an LSTM encoder to predict the next state \(\mathbf{s}_{\mathbf{n+1}}\):
\[\mathbf{h}_{\mathbf{n}}=\mathrm{LSTM}(\vec{\mathbf{e}}_{n}+\vec{ \mathbf{r}}_{n};\mathbf{h}_{\mathbf{n-1}}), \tag{5}\] \[\mathbf{s}_{\mathbf{n+1}}=sigmoid(\mathrm{W}_{\mathrm{s}}*\mathbf{ h}_{\mathbf{n}}+\mathrm{b}_{\mathrm{s}}). \tag{6}\]
The model is trained to predict the binary word labels of the next exercise using the estimated knowledge state. The cross-entropy loss for a single student's history of \(N\) interactions is computed as:
\[\mathcal{L}_{ce}=\sum_{i=1}^{|N|}\sum_{j=1}^{|e_{i}|}\mathrm{CE}(r_{i,j}, \mathbf{s}_{i}[w_{i,j}]). \tag{7}\]
We adopt the regularization strategy proposed by Yeung and Yeung (2018) to stabilize training:
\[\mathcal{L}_{r_{\{1,2\}}}=\sum_{n=2}^{N}\sum_{i=1}^{|\mathcal{V}|}|\mathbf{s} _{\mathbf{n}}{}^{(i)}-\mathbf{s}_{\mathbf{n-1}}{}^{(i)}|^{\{1,2\}}, \tag{8}\]
where \(\mathcal{L}_{r_{1}}\) ensures that only the states of relevant knowledge components are updated, and \(\mathcal{L}_{r_{2}}\) penalizes the vibration of consecutive estimates. The final objective of \(\mathcal{T}\) is \(\mathcal{L}_{\mathcal{T}}=\mathcal{L}_{ce}+\lambda_{1}*\mathcal{L}_{r_{1}}+\lambda_{2}*\mathcal{L}_{r_{2}}\), where \(\lambda_{1}\) and \(\lambda_{2}\) are balancing weights.
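For concreteness, the following PyTorch sketch mirrors the tracer of Eqs. 5-8 at the word level. It is a minimal illustration, not the authors' released implementation: the embedding size, the step-shifted word-level loss, and the toy data are our assumptions (the paper computes the loss of Eq. 7 per exercise).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DKTTracer(nn.Module):
    """Minimal LSTM knowledge tracer in the spirit of Eqs. 5-6; sizes are illustrative."""

    def __init__(self, vocab_size: int, dim: int = 64):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, dim)   # embeddings of practiced words
        self.label_emb = nn.Embedding(2, dim)           # embeddings of correctness labels
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.state_head = nn.Linear(dim, vocab_size)    # W_s, b_s

    def forward(self, words, labels):
        # words, labels: (batch, seq_len) flattened word / response histories
        x = self.word_emb(words) + self.label_emb(labels)
        h, _ = self.lstm(x)                              # hidden state after every step
        return torch.sigmoid(self.state_head(h))        # estimated states: (batch, seq_len, |V|)

def kt_loss(states, next_words, next_labels):
    """Word-level variant of Eq. 7: predict the next word's correctness from the state."""
    pred = states.gather(-1, next_words.unsqueeze(-1)).squeeze(-1)
    return F.binary_cross_entropy(pred, next_labels.float())

# Toy usage with random data.
tracer = DKTTracer(vocab_size=100)
words = torch.randint(0, 100, (2, 12))
labels = torch.randint(0, 2, (2, 12))
states = tracer(words, labels)
loss = kt_loss(states[:, :-1], words[:, 1:], labels[:, 1:])

# Waviness regularizers in the spirit of Eq. 8: penalize changes between consecutive states.
diff = states[:, 1:] - states[:, :-1]
reg = diff.abs().sum() + diff.pow(2).sum()
```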
### Controllable Exercise Generator
Our exercise generator \(\mathcal{G}\) is fine-tuned from a pre-trained LM. Specifically, we generate an exercise \(e\) based on a student's current knowledge state \(\mathbf{s}\), target words \(C\), and expected difficulty \(d\) (we drop the interaction index to reduce clutter). We parameterize the inputs as follows:
\[\mathbf{x}=[f_{s}(\mathbf{s});f_{d}(d);Emb(c_{1},...,c_{|C|})], \tag{9}\]
where knowledge state \(\mathbf{s}\) and scalar difficulty \(d\) are projected to control vectors via two feedforward layers \(f_{s}\) and \(f_{d}\), and \(C\) are mapped to word embeddings. The training objective for generating a single exercise is defined as:
\[\mathcal{L}_{\mathcal{G}}=-\sum_{t}^{|e|}logP(w_{t}|w_{1},...,w_{t-1},\mathbf{ x}). \tag{10}\]
During training, we sample a proportion of words from reference exercises as \(C\) and calculate difficulty \(d\) from ground-truth correctness labels, whereas states \(\mathbf{s}\) are estimated by \(\mathcal{T}\). At inference, \(d\) and \(C\) can be determined by instructors or the system, allowing automated and human intervention.
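A minimal sketch of how the control inputs of Eq. 9 could be assembled is given below; the projection sizes, the way the prefix is consumed by the pre-trained LM, and all names are illustrative assumptions rather than the authors' code.

```python
import torch
import torch.nn as nn

class ControlPrefix(nn.Module):
    """Sketch of Eq. 9: project the state s and difficulty d into control vectors and
    concatenate them with embeddings of the target words C as a prefix for the LM."""

    def __init__(self, vocab_size: int, lm_dim: int = 768):
        super().__init__()
        self.f_s = nn.Linear(vocab_size, lm_dim)   # knowledge-state projection f_s
        self.f_d = nn.Linear(1, lm_dim)            # scalar-difficulty projection f_d
        self.word_emb = nn.Embedding(vocab_size, lm_dim)

    def forward(self, state, difficulty, target_words):
        s_vec = self.f_s(state).unsqueeze(1)                     # (B, 1, D)
        d_vec = self.f_d(difficulty.unsqueeze(-1)).unsqueeze(1)  # (B, 1, D)
        c_vec = self.word_emb(target_words)                      # (B, |C|, D)
        return torch.cat([s_vec, d_vec, c_vec], dim=1)           # x in Eq. 9

# The prefix x would be prepended to the exercise token embeddings and the LM
# fine-tuned with the left-to-right objective of Eq. 10 (not shown here).
prefix = ControlPrefix(vocab_size=100)
x = prefix(torch.rand(2, 100), torch.tensor([1.5, 0.4]), torch.randint(0, 100, (2, 3)))
print(x.shape)  # torch.Size([2, 5, 768])
```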
### Joint Learning with Inconsistency Loss
We jointly optimize the knowledge tracer \(\mathcal{T}\) and exercise generator \(\mathcal{G}\) with an _inconsistency loss_ inspired by Cui and Hu (2021), enabling the two modules to learn from each other. Concretely, after generating an exercise \(e\), we calculate its difficulty using input state \(\mathbf{s}\) via Eq. 4, which should be as close to the input difficulty \(d\) as possible:
\[\mathcal{L}_{inc}=|d-\sum_{w\in e}(1-\mathbf{s}[w])|. \tag{11}\]
Since the second term is non-differentiable due to the \(argmax\) operation involved in producing \(e\), we replace it with "soft" tokens:
\[\mathcal{L}_{inc}=|d-\sum_{t}^{|e|}(1-\mathbf{p}_{t}\odot\mathbf{s})|, \tag{12}\]
where \(\mathbf{p}_{t}=softmax(\mathbf{o}_{t}/\tau)\) is the \(t^{th}\) distribution normalized from its logits \(\mathbf{o}_{t}\in\mathbb{R}^{|\mathcal{V}|}\) with a temperature parameter \(\tau\), and \(\odot\) represents dot product.
For the generator \(\mathcal{G}\), this loss constrains the generation toward the target difficulty. For \(\mathcal{T}\), the LM distributions \(p_{\theta}\) provide similarity information between vocabulary words. This is analogous to the relationship of knowledge components, which has been shown helpful in knowledge tracing Tong et al. (2020). The final objective of our model is \(\mathcal{L}=\mathcal{L}_{\mathcal{T}}+\gamma_{1}\mathcal{L}_{\mathcal{G}}+ \gamma_{2}\mathcal{L}_{inc}\).
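The soft inconsistency term of Eq. 12 reduces to a few tensor operations; the sketch below is our illustration (shapes and the temperature value are assumptions), not the released implementation.

```python
import torch

def inconsistency_loss(logits, state, target_difficulty, tau: float = 1.0):
    """Soft inconsistency term of Eq. 12: |d - sum_t (1 - p_t . s)|.

    logits: (T, |V|) decoder logits for one generated exercise,
    state:  (|V|,) estimated mastery probabilities,
    target_difficulty: scalar tensor holding d.
    """
    probs = torch.softmax(logits / tau, dim=-1)     # "soft" tokens p_t
    expected_errors = (1.0 - probs @ state).sum()   # sum_t (1 - p_t . s)
    return (target_difficulty - expected_errors).abs()

loss = inconsistency_loss(torch.randn(6, 100), torch.rand(100), torch.tensor(1.2))
```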
### Lexical Difficulty Constrained Decoding
We propose a beam search-based decoding algorithm to enforce the constraints introduced in § 3. At each step, we update the beam according to:
\[Y_{t}=\operatorname*{arg\,topk}_{\mathbf{y}_{1:t}\,\in\,Y_{t-1}\times\mathcal{V}}\ \log P(\mathbf{y}_{1:t}\mid\mathbf{x})+\sum_{F\in\mathcal{F}}\lambda_{F}F(\mathbf{y}_{1:t}), \tag{13}\]
where \(Y_{t}\) is the set of decoded hypotheses in step \(t\) and \(k\) is the beam size. The first term is the standard objective of beam search and the second term is a weighted combination of additional scoring functions in terms of the satisfaction of different constraints. We formulate our constraints \(\mathcal{F}\) in Eqs. 3 and 4 as:
\[F_{c}(\mathbf{y})=\sum_{c\in\mathcal{C}}I(c,\mathbf{y}),\text{ \ and }F_{d}( \mathbf{y})=-|d-h(\mathbf{y})|,\]
corresponding to the satisfaction of word constraint and difficulty constraint, respectively. \(I(c,y)\) is a Boolean predicate indicating whether word \(c\) is included in sequence \(\mathbf{y}\) and \(h(\mathbf{y})\) calculates its difficulty via Eq. 4.
Succinctly, the decoding algorithm works in three steps. First, we **expand** the current \(k\) hypotheses to \(k\times|\mathcal{V}|\) candidates. Then, we **prune** the search space by dropping candidates that are not in the top-\(k_{F}\) list of any scoring functions \(F\). Finally, we **rescore** the pruned candidates based on the full objective (Eq. 13) and select the \(k\)-best ones to update the beam.
However, we found that greedily applying \(F_{d}\) in the rescoring step would bias the decoder toward sequences with difficult words in the earlier steps. Drawing inspiration from Lu et al. (2022), we use lookahead heuristics that incorporate future estimates into the decoding process. Concretely, to score a subsequence \(\mathbf{y}_{<t}\), we first greedily decode the next \(l+1\) steps "soft" tokens (i.e., distributions): \(\mathbf{\tilde{y}}_{t:t+l}\)=\([\mathbf{p}_{t},...,\mathbf{p}_{t+l}]\). Then, we combine the constraint satisfaction of decoded \(\mathbf{y}_{<t}\) and the estimated future \(\mathbf{\tilde{y}}_{t:t+l}\):
\[\tilde{F}_{c}(\mathbf{y}_{<t})\!=\!\sum_{c\in\mathcal{C}}\max(I(c, \mathbf{y}_{<t}),\max_{j\in[t,t+l]}P(y_{j}=c)),\] \[\tilde{F}_{d}(\mathbf{y}_{<t})=-|d-h(\mathbf{y}_{<t})-\sum_{j=t} ^{t+l}1-\mathbf{p}_{j}\odot\mathbf{s}|.\]
The procedure of our decoding algorithm is in Appendix A.
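As a rough illustration of the scoring functions and their lookahead variants (the full beam-search procedure is in the paper's Appendix A), the sketch below scores one partial hypothesis; the representation of tokens as vocabulary indices, the greedy rollout that produces `future_probs`, and the function names are our assumptions.

```python
import torch

def f_c(tokens, targets):
    """Word-constraint satisfaction F_c: number of target words already generated."""
    return float(sum(1 for c in targets if c in tokens))

def f_d(tokens, state, target_difficulty):
    """Difficulty-constraint satisfaction F_d: negative gap to the target difficulty."""
    difficulty = sum(1.0 - state[t].item() for t in tokens)
    return -abs(target_difficulty - difficulty)

def lookahead_scores(tokens, future_probs, state, targets, target_difficulty):
    """Lookahead variants: include greedily estimated "soft" tokens p_t, ..., p_{t+l}."""
    # ~F_c: a target word counts if it is already present or likely in the rollout.
    cov = sum(max(float(c in tokens), future_probs[:, c].max().item()) for c in targets)
    # ~F_d: add the expected future difficulty (1 - p_j . s) to the decoded part.
    future_difficulty = (1.0 - future_probs @ state).sum().item()
    decoded_difficulty = sum(1.0 - state[t].item() for t in tokens)
    gap = -abs(target_difficulty - decoded_difficulty - future_difficulty)
    return cov, gap

# Toy usage: 4 decoded token ids and a 3-step greedy rollout over a 100-word vocabulary.
state = torch.rand(100)
rollout = torch.softmax(torch.randn(3, 100), dim=-1)
cov, gap = lookahead_scores([5, 17, 3, 42], rollout, state, targets=[17, 9],
                            target_difficulty=2.0)
```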
### Plug-and-Play Personalized Generation
Our model can be flexibly plugged into an existing personalized learning recommendation algorithm to automatically generate novel and customized exercises. We showcase this functionality using the \(\mathtt{ExpectMax}\) curriculum planning strategy derived from DKT. Given a student's current state \(\mathbf{s}_{\mathbf{n}}\), we can calculate the expected knowledge state after practicing a new exercise \(e\) using our KT model \(\mathcal{T}\):
\[\tilde{\mathbf{s}}_{n+1}=\sum_{r\in\{0,1\}^{|c|}}P(r)*\mathcal{T}(\mathbf{s}_{ n},(e,r)), \tag{14}\]
where \(\mathcal{T}(\cdot)\) computes the updated knowledge state given a new interaction \((e,r)\). The probability of label sequence \(r\) is computed from \(\mathbf{s}_{n}\) assuming conditional independence \(P(r)=\prod_{i=1}^{|e|}P(r_{i})\), where \(P(r_{i}=1)=\mathbf{s}_{n}[e_{i}]\). \(\mathtt{ExpectMax}\) scores \(e\) based on how well it can improve a student's average knowledge state, i.e., \(F_{k}(e)=\overline{\tilde{\mathbf{s}}}_{n+1}-\overline{\mathbf{s}}_{n}\), where \(\overline{\mathbf{s}}\) denotes the mean of the vector. We incorporate \(F_{k}\) into the decoding objective (Eq. 13) and call it \(\mathtt{ExpectMax}\)-\(\mathtt{Gen}\).
In principle, our model can accommodate different recommendation algorithms with different ranking functions \(F_{k}\). The key benefit is that our model can _generate_ novel exercises, while retrieval-based systems can only _select_ exercises from an existing pool.
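A sketch of the ExpectMax score \(F_{k}\) under the conditional-independence assumption of Eq. 14 is shown below. The `tracer` interface and the toy update rule are hypothetical, and exact enumeration of label sequences is only feasible for short exercises; the paper does not spell out these implementation details.

```python
import itertools
import torch

def expectmax_score(tracer, state, exercise):
    """F_k(e): expected gain in the mean knowledge state after practicing e (Eq. 14).

    `tracer(state, exercise, labels)` is assumed to return the updated state for one
    hypothetical interaction; exact enumeration of label sequences is only feasible
    for short exercises.
    """
    expected_next = torch.zeros_like(state)
    for labels in itertools.product([0, 1], repeat=len(exercise)):
        p = 1.0
        for w, r in zip(exercise, labels):
            p *= state[w].item() if r == 1 else 1.0 - state[w].item()
        expected_next += p * tracer(state, exercise, labels)
    return expected_next.mean().item() - state.mean().item()

# Hypothetical tracer: nudge the practiced words toward their observed outcome.
def toy_tracer(state, exercise, labels):
    new_state = state.clone()
    for w, r in zip(exercise, labels):
        new_state[w] = 0.5 * new_state[w] + 0.5 * float(r)
    return new_state

gain = expectmax_score(toy_tracer, torch.rand(100), exercise=[3, 57])
```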
## 5 Experimental Results and Analysis
We experiment on the English track of the Duolingo Second Language Acquisition Modeling (SLAM) dataset (Settles et al., 2018), which contains about 1 million interactions of 2.6k learners over the first 30 days of learning a second language. For each student, we use the first 80% of interactions for training, and the subsequent 10% and the last 10% for validation and testing, respectively. Details of the dataset and experimental setup are in Appendix B.
We first evaluate the ability of the KT model to estimate student knowledge states in § 5.1. Then, we analyze the effectiveness of the exercise generator in § 5.2. Lastly, we showcase the superiority of our model in two educational scenarios with simulation experiments in § 5.3.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{2}{c}{**Word-level**} & \multicolumn{2}{c}{**Exercise-level**} \\ \cline{2-5} & **Seen** & **Unseen** & **Seen** & **Unseen** \\ \hline Ensemble & 73.41 & 70.58 & 65.55 & 64.93 \\ Standard DKT & 80.46 & 75.54 & 72.32 & 71.54 \\ \hline \(\mathtt{DKT}_{\mathrm{LM},r=0.5}\) & 80.47 & 75.51 & 72.39 & 71.47 \\ \(\mathtt{DKT}_{\mathrm{LM},r=1.0}\) & 80.49 & 75.54 & 72.38 & 71.49 \\ \(\mathtt{DKT}_{\mathrm{LM},r=2.0}\) & **80.55** & **75.69** & **72.41** & **71.74** \\ \(\mathtt{DKT}_{\mathrm{LM},r=3.0}\) & 80.54 & 75.48 & 72.33 & 71.52 \\ \(\mathtt{DKT}_{\mathrm{LM},r=5.0}\) & 80.31 & 75.46 & 72.28 & 71.50 \\ \hline \hline \end{tabular}
\end{table}
Table 1: AUC (\(\times\) 100) performance of knowledge tracing models on seen and unseen text examples. Exercise-level results are obtained by averaging word-level predictions.
### Knowledge Tracing Evaluation
We use the standard **AUC (ROC)** as the metric of knowledge tracing in accordance with Settles et al. (2018). We denote our DKT model jointly trained with the LM-based exercise generator as \(\text{DKT}_{\text{LM}}\) and compare it with the following baselines: 1) Ensemble (Osika et al., 2018), which is one of the winning methods of the SLAM challenge that combines an RNN and a GBDT classifier. We reimplement this model to use texts only as input and remove other side features, such as response time. We do this because we are interested in its performance in a _general_ setting where we do not assume the availability of diverse side information; 2) the standard DKT (Piech et al., 2015), which is trained only with the KT loss \(\mathcal{L}_{\mathcal{T}}\). We use it to verify whether jointly learning with an LM can help predict student language knowledge.
We present the results in Table 1, where we can see that DKT outperforms the Ensemble model when only text features are used, and our best model \(\text{DKT}_{\text{LM},\tau=2}\) outperforms DKT on all metrics. We hypothesize the performance gain comes from the word similarity information entailed in the output distributions \(p_{\theta}\) of the LM. This can be regarded as the relationship between knowledge components, which is demonstrated effective in knowledge tracing (Tong et al., 2020). To verify this, we tune the temperature \(\tau\) which controls the sparsity of output distributions: \(\tau\to 0\) produces a sparse distribution that is too assertive and provides little relationship information, while \(\tau\rightarrow\infty\) produces a uniform distribution where all words are evenly related. The results in the second section of Table 1 suggest that a medium \(\tau\) improves the performance, while a small (\(\tau\)=1) or large (\(\tau\)=5) one is harmful, particularly for predicting unseen data. The broader message from this observation is that the knowledge encoded in pre-trained LMs has the potential to improve knowledge tracing in the domain of language learning. We also conduct an analysis of the influence of regularization terms Eq. 8, detailed in Appendix C.
### Exercise Generation Evaluation
The main results of exercise generation are presented in Table 2, split according to whether the exercises are seen in the training set. Evaluation metrics include reference-based **BLEU** (Papineni et al., 2002) and **METEOR** (Banerjee and Lavie, 2005), **KC-Coverage**, which is the percentage of target knowledge components (words) that appear in the outputs, **D-MAE**, which is the mean absolute error between the input difficulty and output difficulty, and **Invalid**, which is the percentage of exercises that have grammar errors detected using an automatic tool3. Since we generate exercises for language learning, we expect a valid exercise to be grammatically correct. We analyze the performance from the following aspects.
Footnote 3: [https://github.com/jxmorris12/language_tool_python](https://github.com/jxmorris12/language_tool_python).
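For concreteness, KC-Coverage and D-MAE could be computed as in the following sketch; the exact tokenization and aggregation used in the paper's evaluation may differ, so treat this as an assumption-laden illustration.

```python
def kc_coverage(generated_words, target_words):
    """Fraction of target knowledge components (words) that appear in the output."""
    return sum(1 for c in target_words if c in generated_words) / max(len(target_words), 1)

def d_mae(input_difficulties, output_difficulties):
    """Mean absolute error between requested and realized exercise difficulty."""
    pairs = list(zip(input_difficulties, output_difficulties))
    return sum(abs(a - b) for a, b in pairs) / max(len(pairs), 1)

print(kc_coverage(["the", "cat", "sleeps"], ["cat", "dog"]))  # 0.5
print(d_mae([1.0, 2.0], [1.2, 1.5]))                          # ~0.35
```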
**Lexical Controllability**. We first examine the lexical controllability of our model, which is crucial for generating personalized exercises for language learning. We compare our model with two baselines: 1) \(\text{EG}_{\mathcal{H}}\), which generates the next exercise based on the student's historical interactions; and 2) \(\text{AQG}_{\mathcal{H}+d}\)4, which generates the next exercise based on historical interactions and a target difficulty. The two baselines perform poorly on BLEU, METEOR, and KC-Coverage metrics, particularly
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline \multirow{2}{*}{**Models**} & \multicolumn{2}{c}{**BLEU \(\uparrow\)**} & \multicolumn{2}{c}{**METEOR \(\uparrow\)**} & \multicolumn{2}{c}{**KC-Coverage (\%) \(\uparrow\)**} & \multicolumn{2}{c}{**D-MAE \(\downarrow\)**} & \multirow{2}{*}{**Invalid (\%) \(\downarrow\)**} \\ \cline{2-3} \cline{5-10} & **Seen** & **Unseen** & **Seen** & **Unseen** & **Seen** & **Unseen** & **Seen** & **Unseen** \\ \hline \(\text{EG}_{M}\) & 9.23 & \textless{}0.01 & 18.79 & 6.05 & 14.26 & 2.49 & 0.396 & 1.500 & **0.071** \\ \(\text{AQ}_{t+d}\) & 10.28 & \textless{}0.01 & 20.15 & 7.16 & 15.84 & 2.95 & 0.463 & 0.985 & 1.674 \\ \(\text{EG}_{C}\) & 18.41 & 5.21 & 45.36 & 36.14 & **99.77** & 90.63 & 0.367 & 0.837 & 0.301 \\ \(\text{EG}_{C+d}\) & 11.84 & 15.94 & 40.89 & 42.10 & 96.23 & 91.62 & 0.564 & 0.679 & 0.385 \\ \hline \(\text{APEG}_{\text{s}+C+d}\) & **22.47** & **34.60** & **56.15** & **44.01** & 99.61 & **95.71** & **0.246** & **0.604** & 0.283 \\ - joint learning & 22.01 & 33.15 & 55.80 & 42.85 & 99.63 & 94.08 & 0.251 & 0.619 & 0.281 \\ - constrained decoding & 21.58 & 32.06 & 55.43 & 40.49 & 99.59 & 94.77 & 0.263 & 0.681 & 0.277 \\ \hline Upper bound & 53.65 & 41.24 & 74.97 & 52.10 & 99.75 & 95.96 & 0.060 & 0.302 & 0.233 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Results of exercise generation. APEG is our proposed model, and AQG is an adaptively difficulty-controlled question generation model proposed by Srivastava and Goodman (2021). The subscripts represent whether historical interactions (\(\mathcal{H}\)), target words (\(C\)), difficulty (\(d\)), and student state (\(\mathbf{s}\)) are used to generate exercises.
for unseen data. This indicates that they cannot accurately predict the content of the next exercise based on historical data or difficulty information, possibly because there is no strong connection within a sequence of exercises or such a connection cannot be captured by an LM. We note that EG\({}_{\mathcal{H}}\) performs well on the validness metric. However, upon inspecting its results, we found the model almost only copies exercises from history, with less than 0.02% novel generations. The same issue is observed in AQG\({}_{\mathcal{H}+d}\), where more than 90% of exercises are repetitive. We follow Srivastava and Goodman (2021) to improve its novelty using a repetition penalty during the generation, but this results in far more invalid exercises (1.7%). In comparison, our model achieves a better balance between generalization ability and fluency.
**Effect of Student Modeling**. To investigate whether student modeling helps exercise generation, we build two baselines without student knowledge states: 1) EG\({}_{C}\) which conditions generation on target KCs (words) only, and 2) EG\({}_{C+d}\) on both target words and difficulty. The former variant can be considered a keyword-to-text generation model, while the latter imposes additional difficulty control. Our full model APEG\({}_{\mathbf{s}+C+d}\) significantly outperforms both of them, which proves our aforementioned hypothesis that a student's dynamic knowledge states must be considered in generating adaptive and personalized exercises. An interesting observation is that incorporating difficulty control improves the performance on unseen data, indicating the model to some degree learns generalizable difficulty information. Nevertheless, our further analysis shows the model is not adaptive to students of different abilities, which will be discussed in SS 5.3.
**Ablation Study**. The key challenge of our task is to learn the dependency between student knowledge, vocabulary, and exercise difficulty (Eqs. 3 and 4). To understand which parts of our model contribute to this goal, we build two ablated variants by removing the joint learning strategy (SS 4.3) and the constrained decoding algorithm (SS 4.4), respectively. As shown in the second section of Table 2, the search-based method is slightly better than the learning-based method, while combining them leads to the best performance.
We further explore the effect of the lookahead strategy on difficulty constraints. Table 3 presents the ablation results on the validation set, where we can see lookahead strategy improves both generation quality and controllability. To understand how it works, we measure the distribution of difficulty in different regions of exercise sentences. Such distribution is computed as the accumulated word difficulty in four equally sized segments of 2000 sampled sentences. As shown in Figure 3, the difficult words of reference exercises are largely concentrated in the \(2^{nd}\) and \(4^{th}\) quarter. Our decoding algorithm with lookahead produces a similar result, while removing lookahead would bias the distribution toward \(2^{nd}\) and \(3^{rd}\) quarter. This confirms our assumption that naively applying \(F_{d}\) would greedily select difficult words in the early steps, which is not the distribution of reference exercises. Our decoding algorithm avoids this issue by estimating the future and therefore achieves better results.
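A minimal sketch of the quarter-wise analysis behind Figure 3 is given below; it assumes a per-word difficulty lookup (here a hypothetical `word_difficulty` dictionary) and accumulates difficulty over four equally sized segments of each sentence.

```python
def quarter_difficulty_distribution(sentences, word_difficulty):
    """Accumulate word difficulty in four equally sized segments of each sentence."""
    totals = [0.0, 0.0, 0.0, 0.0]
    for sent in sentences:
        words = sent.split()
        if not words:
            continue
        for i, w in enumerate(words):
            quarter = min(4 * i // len(words), 3)  # index of the segment (0..3)
            totals[quarter] += word_difficulty.get(w.lower(), 0.0)
    norm = sum(totals) or 1.0
    return [t / norm for t in totals]  # normalized share per quarter

# Toy example with a hypothetical difficulty table.
diffs = {"the": 0.05, "white": 0.2, "mushroom": 0.8, "is": 0.05, "bitter": 0.7}
print(quarter_difficulty_distribution(["The white mushroom is bitter"], diffs))
```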
**Upper Bound Analysis**. When we train our model, we use ground-truth difficulty \(d\) and target words \(C\) obtained from references; however, the student states \(\mathbf{s}\) are estimated from the KT model. We conduct an upper bound analysis to understand the influence of the accuracy of \(\mathbf{s}\) on the generation performance. Since a student's actual mastery of every vocabulary word is not available, we choose to replace the ground-truth difficulty levels \(d\) with those estimated from \(\mathbf{s}\). As shown in the last section of Table 2, all metrics are considerably boosted when the inconsistency between states \(\mathbf{s}\) and difficulty \(d\) is eliminated. This again proves the effect
Figure 3: Distributions of accumulated word difficulty in four equally sized segments of 2000 sampled exercise sentences.
\begin{table}
\begin{tabular}{l c c c} \hline \hline & **BLEU \(\uparrow\)** & **Coverage (\%) \(\uparrow\)** & **D-MAE \(\downarrow\)** \\ \hline w/o lookahead & 20.46 & 99.18 & 0.263 \\ w/ lookahead & **21.20** & **99.30** & **0.257** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparison of generation performance with and without lookahead on the validation set.
of incorporating student states and explains how such information comes to play: the knowledge states explicitly convey the dynamics between control signals \(d\), \(\mathcal{C}\), and target exercises \(e\), which is non-trivial to learn by the model itself.
**Case Study**. We provide a few cases in Table 4. We can see our model can dynamically adjust the exercise content according to specified words, target difficulty, as well as students' different mastery states of the vocabulary. The exercises generated for advanced students (avg. state = 0.65) are generally more difficult than for poor students (avg. state = 0.32) under the same input difficulty.
### Educational Applications
In this subsection, we showcase the potential applications of our model in two educational scenarios with simulation experiments.
#### 5.3.1 Adaptive Difficulty Calibration
A crucial requirement for adaptive learning systems is to dynamically adjust the difficulty of learning items to match each student's learning progress (Becker et al., 2018). However, previous difficulty-controlled question generation approaches are mainly based on inherent problem difficulty, independent of individual abilities (Susanti et al., 2017; Kumar et al., 2019). Ideally, our model can achieve this goal by learning the dependency between difficulty and student knowledge states. To verify this, we generate 50 additional exercises of specified difficulties for each student after their existing interactions. At each step, we construct input by sampling a target word from the vocabulary and a difficulty level from a uniform distribution \([1,3]\). We compare our full model APEG\({}_{s+C+d}\) with its variant EG\({}_{C+d}\) which achieves the best difficulty controllability for unseen data. This baseline can be considered a vanilla non-adaptive difficulty-controlled exercise generation model.
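The simulation loop (50 steps per student in our setup) can be sketched as follows; `generate_exercise`, `estimate_difficulty`, and `update_state` are placeholders standing in for the APEG generator, the KT-based difficulty estimator, and the knowledge-state update, respectively.

```python
import random

def simulate_difficulty_calibration(state, vocab, n_steps,
                                    generate_exercise, estimate_difficulty, update_state):
    """Generate n_steps exercises with sampled targets/difficulties and track d_out/d_in."""
    ratios = []
    for _ in range(n_steps):
        target_word = random.choice(vocab)
        d_in = random.uniform(1.0, 3.0)           # target difficulty sampled from [1, 3]
        exercise = generate_exercise(state, {target_word}, d_in)
        d_out = estimate_difficulty(state, exercise)
        ratios.append(d_out / d_in)               # the closer to 1, the better calibrated
        state = update_state(state, exercise)     # student answers, knowledge state evolves
    return ratios
```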
In this simulation, we are interested in whether the difficulty controllability of our model can adapt to students of various knowledge levels. To this end, we rank students based on their average knowledge states \(\mathbf{\bar{s}}\) and split the result accordingly. As shown in Figure 4, the difficulty controllability of the baseline is not reliable across different groups. In particular, it tends to generate harder (up to \(2\times d_{in}\)) exercises for the bottom 10 percentile students but easier (up to \(\frac{1}{2}\times d_{in}\)) ones for the top 10 percentile students, although it performs well for the intermediate 80 percentile students. In comparison, our adaptive model is also slightly biased toward the intermediate group but much more consistent than the baseline, with less than 20% fluctuations on average. Besides, we can see from the shadows that the baseline experiences huge variances at each step, indicating it is not adaptive to different knowledge states, even though the students within a group are at a similar level.
\begin{table}
\begin{tabular}{l l l l} \hline \hline \(\mathbf{d_{in}}\) & **Target words** & **Generated exercises** & \(\mathbf{d_{out}}\) \\ \hline \multicolumn{4}{c}{Avg. knowledge state \(\overline{\mathbf{s}}\) = 0.32} \\ \hline
1.0 & \{men\} & Fifteen men & 1.25 \\
2.0 & \{study\} & I study English & 2.18 \\
3.0 & \{airport\} & Where is the airport? & 2.73 \\ \hline \multicolumn{4}{c}{Avg. knowledge state \(\overline{\mathbf{s}}\) = 0.65} \\ \hline
1.0 & \{profile\} & He has a famous profile & 0.94 \\
2.0 & \{white, bitter\} & The white mushroom is bitter & 1.75 \\
3.0 & \{hit, nail\} & She hit the nail on the head & 2.89 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Examples of exercises based on different controls. \(d_{in}\) is the input difficulty while \(d_{out}\) is the output difficulty estimated by our knowledge tracing model. The degree of highlight represents a student’s mastery of vocabulary words (the darker the harder).
Figure 4: Generating 50 additional exercises of specified difficulty levels for different student groups using APEG\({}_{s+C+d}\) (adaptive) and non-adaptive EG\({}_{C+d}\) models. The Y-axis is the ratio of output difficulty \(d_{out}\) to input difficulty \(d_{in}\); the closer to 1 (dotted line) the better. Solid lines are averaged results of group students at each step, and shadows represent standard deviations.
#### 5.3.2 Improving Learning Efficiency
We now examine whether our model can be used to improve student learning efficiency by personalizing exercise sequences. To this end, we customize 30 continuous exercises for 50 sampled students using our proposed ExpectiMax-Gen (SS 4.5) and the original ExpectiMax. Both of them aim to maximize the expected knowledge state of the next step \(\overline{\mathbf{s}}_{n+1}\). For the former, at each step, we first find the best single word that can maximize \(\overline{\mathbf{s}}_{n+1}\) and then generate the next exercise based on the selected word and a fixed difficulty of 1. For the latter, we directly select the best exercise from the pool. We update students' knowledge states after each practice and repeat this process until we collect 30 exercises. We compare the change in \(\overline{\mathbf{s}}\) to measure which strategy is more efficient in improving students' knowledge.
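Below is a condensed sketch of the ExpectiMax-Gen loop used in this simulation; `expected_next_state`, `generate_exercise`, and `update_state` are placeholders for the DKT-based expectation, the APEG generator, and the knowledge-state update.

```python
def expectimax_gen(state, vocab, n_exercises=30, fixed_difficulty=1.0,
                   expected_next_state=None, generate_exercise=None, update_state=None):
    """Pick the word maximizing the expected next knowledge state, then generate from it."""
    exercises = []
    for _ in range(n_exercises):
        # 1) single best word under the expected next-step average knowledge state
        best_word = max(vocab, key=lambda w: expected_next_state(state, w))
        # 2) generate a new exercise conditioned on that word and a fixed difficulty
        exercise = generate_exercise(state, {best_word}, fixed_difficulty)
        exercises.append(exercise)
        # 3) simulate the practice attempt and update the student's knowledge state
        state = update_state(state, exercise)
    return exercises, state
```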
The simulation results are shown in Figure 5. We also include a randomly selected exercise sequence as a lower bound, which turns out to harm student learning most of the time. The decrease in knowledge state is possibly caused by overly difficult exercises which would lead to wrong answers and reduce the predicted probability. Under the same practice opportunities, exercises generated by ExpectiMax-Gen lead to faster knowledge growth than those selected by ExpectiMax. Upon further inspection, we found about 70% of them are unseen in the corpus. This explains the efficiency of ExpectiMax-Gen as it can create novel exercises targeting individual needs on the fly while ExpectiMax is limited by the pool.
#### 5.3.3 Qualitative Discussions on Simulation
Our simulations are based on the DKT model. We note that some previous studies have observed inconsistencies between DKT behaviors and the human learning process (Shen et al., 2021). Thus, we adopt a simple regularization approach (Eqs. 5 and 6) to alleviate such inconsistencies (Yeung and Yeung, 2018), which we found can reduce the variance of simulation results and improve KT performance (Appendix C).
A popular argument regarding the relationship between the difficulty of learning content and student outcomes is that the level of difficulty should be set just above the learner's current knowledge, i.e., \(d\approx 0.5\)(Settles and Meeder, 2016; Gallego-Duran et al., 2018). During the simulations, we found ExpectiMax does not follow this heuristic but tends to generate relatively easy exercises (\(d<0.3\) mostly) repeatedly using certain words, consistent with the finding in Tschiatschek et al. (2022). One possible reason is that easier exercises are more likely to produce correct answers, which in turn increases the averaged predicted probability of DKT (i.e., estimated knowledge state).
Nevertheless, the above observations do not influence our conclusion as the superiority of our model comes from its ability to adapt to students' knowledge (SS 5.3.1) and generate customized exercises targeting individual needs (SS 5.3.2), independent of the simulation policy.
## 6 Conclusion
We propose an adaptive and personalized exercise generation model combining recent advances in knowledge tracing and controllable generation using pre-trained LMs. Our approach works by learning the dynamics between exercise difficulty and student vocabulary knowledge in the domain of language learning. Experimental results on real-world language learning data from Duolingo demonstrate that our model can generate adaptive and personalized exercises needed in an educational setting. We further showcase our model's applicability in education with simulation studies.
## Ethics Statement
The learner data used in this study are anonymized by Settles et al. (2018) and, to the best of our knowledge, do not contain sensitive information. We foresee no further ethical or privacy concerns with the work.
Figure 5: Simulation results over 30 exercises. The X-axis is the number of exercises, and the Y-axis is students’ average predicted knowledge state \(\overline{\mathbf{s}}\) indicating a student’s overall mastery of the vocabulary.
### Limitations
We state the limitations of this work from the following aspects. First, we make an initial assumption about the dynamics between exercise difficulty, vocabulary, and student knowledge. While we believe our assumption is sensible in the domain of language learning, we acknowledge that we make some simplifications for the ease of modeling. For example, we measure difficulty using individual performance, whereas a better way could be combining it with inherent problem difficulty, e.g., text complexity. Besides, we only consider vocabulary mastery in defining student knowledge and predicting their performance. Exploring more dimensions of language knowledge (e.g., syntax) might lead to a finer-grained personalization. Second, our model relies on student learning logs to estimate their real-time knowledge states. This model might face the cold start problem when dealing with insufficient history. Though it is beyond the scope of this study, techniques like computerized adaptive testing can be used to combat this problem. Lastly, due to the lack of a real learning environment, we discuss the educational promise of our model with simulation experiments. In the future, a user study can be incorporated to validate our conclusions.
|
2304.04893
|
EVKG: An Interlinked and Interoperable Electric Vehicle Knowledge Graph
for Smart Transportation System
|
Over the past decade, the electric vehicle industry has experienced
unprecedented growth and diversification, resulting in a complex ecosystem. To
effectively manage this multifaceted field, we present an EV-centric knowledge
graph (EVKG) as a comprehensive, cross-domain, extensible, and open geospatial
knowledge management system. The EVKG encapsulates essential EV-related
knowledge, including EV adoption, electric vehicle supply equipment, and
electricity transmission network, to support decision-making related to EV
technology development, infrastructure planning, and policy-making by providing
timely and accurate information and analysis. To enrich and contextualize the
EVKG, we integrate the developed EV-relevant ontology modules from existing
well-known knowledge graphs and ontologies. This integration enables
interoperability with other knowledge graphs in the Linked Data Open Cloud,
enhancing the EVKG's value as a knowledge hub for EV decision-making. Using six
competency questions, we demonstrate how the EVKG can be used to answer various
types of EV-related questions, providing critical insights into the EV
ecosystem. Our EVKG provides an efficient and effective approach for managing
the complex and diverse EV industry. By consolidating critical EV-related
knowledge into a single, easily accessible resource, the EVKG supports
decision-makers in making informed choices about EV technology development,
infrastructure planning, and policy-making. As a flexible and extensible
platform, the EVKG is capable of accommodating a wide range of data sources,
enabling it to evolve alongside the rapidly changing EV landscape.
|
Yanlin Qi, Gengchen Mai, Rui Zhu, Michael Zhang
|
2023-04-10T23:01:02Z
|
http://arxiv.org/abs/2304.04893v1
|
EVKG: An Interlinked and Interoperable Electric Vehicle Knowledge Graph for Smart Transportation System
###### Abstract
Over the past decade, the electric vehicle industry has experienced unprecedented growth and diversification, resulting in a complex ecosystem. To effectively manage this multifaceted field, we present an EV-centric knowledge graph (EVKG) as a comprehensive, cross-domain, extensible, and open geospatial knowledge management system. The EVKG encapsulates essential EV-related knowledge, including EV adoption, electric vehicle supply equipment, and electricity transmission network, to support decision-making related to EV technology development, infrastructure planning, and policy-making by providing timely and accurate information and analysis. To enrich and contextualize the EVKG, we integrate the developed EV-relevant ontology modules from existing well-known knowledge graphs and ontologies. This integration enables interoperability with other knowledge graphs in the Linked Data Open Cloud, enhancing the EVKG's value as a knowledge hub for EV decision-making. Using six competency questions, we demonstrate how the EVKG can be used to answer various types of EV-related questions, providing critical insights into the EV ecosystem. Our EVKG provides an efficient and effective approach for managing the complex and diverse EV industry. By consolidating critical EV-related knowledge into a single, easily accessible resource, the EVKG supports decision-makers in making informed choices about EV technology development, infrastructure planning, and policy-making. As a flexible and extensible platform, the EVKG is capable of accommodating a wide range of data sources, enabling it to evolve alongside the rapidly changing EV landscape.
**Keywords:** Electric Vehicle, Knowledge Graph, Ontology, Transportation Management
## 1 Introduction
As electric vehicles (EVs) resonate across the global automotive industry, technological innovations and environmental regulations on gasoline-powered vehicles further accelerate the expansion of the EV market (Bonges III and Lusk, 2016). While global light auto sales lost 8.1%, the
market of battery electric vehicles (BEVs) and plug-in hybrid electric vehicles (PHEVs) increased by 62% with a total of 4.3 million sales during the first half of 2022 (Irle, 2022). Such a trend shows an emphatic global shift to vehicle electrification. The characteristics of the EV market also distinguish itself from the traditional internal combustion engine (ICE) vehicles (Jenn et al., 2020; Husain et al., 2021).
Firstly, the advent of EVs has brought significant changes to the automobile industry, particularly in terms of vehicle configuration and charger compatibility. Unlike ICE vehicles, which have a standardized design, EVs require unique components that are specifically developed for them, such as the battery pack (Bonges III and Lusk, 2016). This key difference not only affects the way that EVs are designed and manufactured but also has significant implications for the way that they are charged and maintained (Das et al., 2020). Such a transition has led many established automobile makers to develop new models to support EVs, alongside numerous start-ups entering the market. Vehicle configurations therefore vary widely across EV manufacturers and models, especially in battery capacities and charger types. Today, no universal charger or connector type exists in the EV market (Center and Kettles, 2015), and no vehicle is compatible with every type of charger. For instance, Tesla's Superchargers are exclusively designed to be used on certain Tesla EV models. The diverse configurations of EVs as well as their chargers have created a pressing need for advanced data and knowledge management techniques to organize and interlink this vast amount of information. In this context, the capability of an interlinked data repository such as a knowledge graph to explicitly describe the relationships between different EV models and chargers would prove valuable, as it would provide a solution for managing the complex EV market.
Secondly, there are compatibility issues between EVs and electric vehicle supply equipment (EVSE). Unlike filling up a gas tank within a few minutes, replenishing the power of EVs can take a significant amount of time. Most EV charging stations are also decentralized and have a complex structure, operated by diverse providers with different constraints, such as parking limits, membership requirements, charging levels, and connector constraints (Rajendran et al., 2021). One main challenge posed by such heterogeneity of EV and EVSE systems is the significant variation in their spatial distributions, which can cause supply and demand imbalances in certain local regions (Carlton and Sultana, 2022). EV adoption rates can be influenced by various external factors, such as regulatory mandates, sustainability goals, and user income levels (Song and Potoglou, 2020). These factors often lead to uneven spatial distributions of EV adoption across different spatial scales. However, the installation of EVSEs is struggling to keep pace with the rate of EV adoption at the current stage (Csiszar et al., 2019). Additionally, the data format heterogeneity of EVs and EVSEs makes semantic interoperability impossible. Different types of electric vehicles and charging stations have their own unique features and specifications, which can result in differences in communication protocols and data formats (Anil and Sivraj, 2020). These differences in EV and EVSE characteristics can create barriers to interoperability, making it complicated to charge different types of electric vehicles at different types of charging stations. This data format heterogeneity also complicates charging infrastructure planning. To overcome these challenges, integrating EV and EVSE data into an interlinked data repository such as a knowledge graph is a viable solution to breaking down data silos and gaining a more comprehensive understanding of the overall system. By ensuring data interoperability, it is possible to establish consistent data collection and reporting standards, enabling easier comparisons and insights.
Meanwhile, it is crucial to carefully select charging station sites to ensure the sustainability of
the whole power grid system. Unlike gas stations for refueling ICE vehicles, EVSE infrastructure plays a critical role in the flow of electricity and significantly affects the power grid sustainability (Das et al., 2020). As the number of EVs increases, the high demand for electricity from EVSEs may overload the electricity system, leading to power blackouts with far-reaching consequences if not managed correctly (Dharmakeerthi et al., 2012). In contrast, the power grid may need to control the electricity flow to charging stations to ensure they receive a consistent power supply (Yang et al., 2014). However, in reality, EVSE management and power grid management are usually operated independently by different agencies without a framework for sufficient data sharing, which may lead to unpredictable consequences in the future. Without integrating data from EVSEs and the power grid, it can be challenging to deploy charging infrastructure strategically. This mismatch between charging infrastructure demand and supply can result in inconvenience for EV users and a slower rate of EV adoption. Therefore, an integrated data repository (e.g., a knowledge graph) of EVSE and power grid data is also essential for efficient EV charging infrastructure planning, which can accelerate EV adoption and achieve sustainable transportation.
The EV industry is a rapidly evolving and complex field, with numerous stakeholders, technologies, and policies involved (Kley et al., 2011). While there are many open EV datasets available (Calearo et al., 2021; Amara-Ouali et al., 2021), the challenge lies in effectively managing, sharing, and interoperating knowledge across different systems. This complexity makes it challenging to integrate and manage diverse sources of information and knowledge related to the EV industry, which highlights the need for a comprehensive and cohesive EV knowledge management system. A smart EV knowledge management system is crucial for the efficient integration and analysis of data from various sources, including EV manufacturers, charging station operators, utilities, and government agencies. It can provide valuable insights into key industry trends, challenges, and opportunities, help to achieve data and semantic interoperability, and facilitate communication and collaboration among different stakeholders in the EV industry (Cao et al., 2021). Moreover, such a system can support decision-making related to EV technology development, infrastructure planning, and policymaking by providing timely and accurate information and analysis.
To address the challenges in the EV industry, knowledge graphs (KGs) have emerged as a promising solution (Noy et al., 2019). As a novel data paradigm, knowledge graphs are a combination of technologies, terminologies, and data cultures for densely interconnecting (Web-scale) data across domains in a human and machine-readable format (Bizer et al., 2011; Janowicz et al., 2022). With an ontology, or so-called knowledge graph schema, to encode the terminology semantically, KGs also foster interoperability across different domains (Hitzler, 2021; Zhu et al., 2022). Today, open KGs such as _DBpedia_(Auer et al., 2007), and _Wikidata_(Vrandecic, 2012; Vrandecic and Krotzsch, 2014) are considered valuable assets for exploiting a broad scope of cross-domain linked data. Other than these general-purpose graphs, we also have various large-scale geospatial knowledge graphs such as _GeoNames1_(Ahlers, 2013) and _KnowWhereGraph2_(Janowicz, 2021; Janowicz et al., 2022) which have shown unique superiority in uplifting environmental intelligence as well.
Footnote 1: [https://www.geonames.org/](https://www.geonames.org/)
Footnote 2: [https://knowwheregraph.org/](https://knowwheregraph.org/)
In this work, inspired by the challenges faced by EV charging systems and recent advancements in KG, we develop an EV-centric knowledge graph, called EVKG, that serves as an interlinked,
cross-domain, scalable, and open data repository to help pace toward a smarter EV knowledge management system. Meanwhile, this work will provide an ontology for various EV-related knowledge, which enables rigorous logical interpretation and machine-actionability. With the proposed ontology, EVKG would enable effective integration of critical spatial and semantic information of electric vehicles including the EV charging infrastructures, the electricity transmission network, and the electric vehicle adoptions at different spatial scales. The contributions of this paper can be summarized as follows:
1. To introduce reusability and interoperability, we design an ontology for electric vehicles including the Electric Vehicle Adoption Ontology Module, the Electric Vehicle Charging Infrastructure Ontology Module, and the Electric Transmission Network Ontology Module. Instead of reinventing the wheel, we reuse many ontology design patterns to model the spatial, temporal, and semantic aspects of EV data based on GeoSPARQL (Battle and Kolas, 2011), Time Ontology (Hobbs and Pan, 2006), SOSA SSN Ontology (Janowicz et al., 2019), and KnowWhereGraph (Janowicz et al., 2022) ontology. Figure 1 illustrates the overall ontology design patterns.
2. Based on the proposed EV ontology, we construct an electric vehicle knowledge graph called EVKG which includes different geospatial and semantic information about EV data, such as EV charging infrastructures, the electricity transmission network, and the electric vehicle adoptions at different administrative scales. GraphDB3 is used as the triple store to support GeoSPARQL-enabled KG queries. Footnote 3: [https://www.ontotext.com/products/graphdb/](https://www.ontotext.com/products/graphdb/)
3. We link the constructed EVKG with some existing knowledge graphs such as GNIS-LD (Regalia et al., 2018), Wikidata (Vrandecic and Krotzsch, 2014), and KnowWhereGraph (Janowicz, 2021). For example, instead of redefining the place hierarchy, we reuse the administration region entities kwg-ont:ZipCodeArea and kwg-ont:AdministrativeRegion_3 from KnowWhereGraph which are also linked (owl:sameAs) to Wikidata, GNIS-LD, and DBpedia. Instead of regenerating the transportation network triples, we directly use the national highway planning network subgraph from the KnowWhereGraph.
4. We propose six competency questions from different EV usage scenarios which are classified into three different question groups to illustrate how we can use EVKG to effectively solve EV and transportation-related questions.
This paper is organized as follows: we discuss some related work in Section 2. Then, in Section 3, we discuss how the EVKG is constructed including the data sources as well as the EVKG ontology design patterns. Next, we demonstrate how we can use EVKG to answer various EV-related competency questions in Section 4. We conclude this paper in Section 5 and discuss the limitations and future works.
## 2 Related Work
As the electric vehicle sector has become a research hotspot, many studies in the EV sector have been conducted to tackle various issues related to transportation electrification. These studies span a range of subjects, including strategic charging infrastructure placement (Dong et al., 2014; Micari et al., 2017), E-mobility recommendation (Lee and Wood, 2020), transportation equity (Hardinghaus et al., 2020), emergency response (Li et al., 2022; Feng et al., 2020), and many others (Namdeo et al., 2014; Chen et al., 2013). Due to the interconnected nature of the EV sector with other systems, many of these studies require interdisciplinary strategies and the integration of multiple, cross-domain data sources. To address this problem, ontologies have been developed to achieve semantic interoperability across different data sources. For instance, Scrocca et al. (2021) introduced the Urban IoT ontology, which conceptualized the data exchange between service providers and operating IoT devices in the urban area. However, this ontology primarily focuses on the micro-level interactions between EVs and charging infrastructure and does not offer links to external knowledge graph resources or their ontologies. Additionally, the Urban IoT and many other ontology works (Santos et al., 2018) do not deploy their ontologies for real-world data silos to provide accessible knowledge graph resources. In contrast, in our work, we not only provide an ontology to model different aspects of EVs, but also utilize the developed ontology to construct a large-scale EV knowledge graph and connect it to various external knowledge graphs.
Knowledge graphs (KGs) are a novel paradigm for retrieving, reusing, and integrating data from heterogeneous data sources and representing data in a human and machine-readable format (Noy et al., 2019; Janowicz et al., 2022). As an important type of knowledge graph, geospatial knowledge graphs (GeoKG) are essentially a symbolic representation of geospatial knowledge. It has become an indispensable component of Symbolic GeoAI (Mai et al., 2022) and supports various intelligent geospatial applications such as qualitative spatial reasoning (Freksa, 1991; Zhu et al., 2022; Cai et al., 2022), geographic entity recognition and resolution (Alex et al., 2015; Gritta et al., 2018), geographic knowledge graph summarization (Yan et al., 2019), geographic question answering (Mai et al., 2020; Scheider et al., 2021; Mai et al., 2021), and so on. Nowadays, there are multiple large-scale, open-sourced geospatial knowledge graphs available to use including _GeoNames_(Ahlers, 2013), _LinkedGeoData_(Auer et al., 2009), _YAGO2_(Hoffart et al., 2013), _GNIS-LD_(Regalia et al., 2018), and _KnowWhereGraph_(Janowicz, 2021; Janowicz et al., 2022).
Among all these geospatial knowledge graphs, KnowWhereGraph (KWG) (Janowicz, 2021; Janowicz et al., 2022) is a newly created large-scale cross-domain knowledge graph that integrates datasets at the human-environment interface. It contains various geospatial, demographic, and environmental datasets. Currently, it hosts more than 12 billion information triples which makes it one of the largest geographic knowledge graphs. Please refer to KnowWhereGraph website4 for a detailed description of the graph. Moreover, KWG also provides a collection of geoenrichment services (Mai et al., 2019, 2022) and visualization interfaces (Liu et al., 2022) on top of the graph which allows the no-expert users to explore, utilize, and analyze graph data seamlessly without any knowledge of Semantic Web. The graph also contains co-reference resolution links to multiple existing knowledge graphs including Wikidata (Vrandecic, 2012), GNIS-LD (Regalia et al., 2018), GeoNames (Ahlers, 2013), etc. In this work, instead of regenerating subgraphs to describe place
hierarchy and transportation networks, we build co-reference resolution links between EVKG and KWG on the US zip code area entities and reuse the ontology and subgraph of the national highway planning network. Please refer to Section 3.1 for a detailed description.
## 3 Electric Vehicle Knowledge Graph
### Data Sources
In this work, we construct the electric vehicle knowledge graph (EVKG) by retrieving, cleaning, integrating, and synthesizing information from various electric vehicle-related public data sources. Specifically, we develop the EVKG based on external data sources in terms of four aspects including electric vehicle basic specifications, electric vehicle registration information, electric vehicle charging infrastructure, and electricity transmission networks. Moreover, we also link EVKG with other open-sourced knowledge graphs such as KnowWhereGraph (Janowicz et al., 2022), GNIS-LD (Regalia et al., 2018) which are also connected with DBpedia (Auer et al., 2007), Wikidata (Vrandecic, 2012), and GeoNames (Ahlers, 2013). Three criteria are used to select EV-related data sources: 1) finer spatial resolution: EV-data should be recorded in small geographic units (e.g., zip code level); 2) finer temporal resolution: EV-data should be updated frequently (e.g., annually); 3) reliable data resources: EV-data should be collected from reliable organizations and institutions (e.g., governments and large NGOs). Based on these criteria, we collect data from the open-source data repositories listed below.
Electric Vehicle Adoption. To collect the electric vehicle's basic specifications (e.g., makes, models, manufacturers, etc.) and track the adoption of electric vehicles annually across different geographic units, we use the electric vehicle registration records amongst local administrative regions to indicate snapshots in time of the electric vehicles "on the road". As part of the groundswell of support for improving public data accessibility in transportation electrification, the Atlas EV Hub5 provides a website for electric vehicle registration databases for multiple US states. In this work, we utilize the Atlas EV registration data and aggregate data records to the zip code level to protect user privacy while enabling fine-grained geographic data access. We develop a triplification pipeline to consume the Atlas EV Hub and build the electric vehicle adoption sub-graph. This sub-graph will be continuously updated automatically along with the Atlas EV Hub. The Atlas EV Hub collects raw data from vehicle registration agencies, with each row containing a vehicle registration record including the first 8 digits of the Vehicle Identification Number (VIN-8), the corresponding zip code, the model year, and registration valid/expiration dates. This enables policymakers to track the rapidly-evolving conditions of the electric vehicle market without leaking individual EV registration data, since the last six digits of the whole VIN are erased. Meanwhile, with the VIN-8 code and model year provided, we can identify the vehicle information at product level including vehicle make/model, duty-level category, use case, etc. For each unique type of electric vehicle product, we additionally collect its compatible charger types and connector types from the public repository of EV specifications 6.
Footnote 5: [https://www.atlasevhub.com/materials/state-ev-registration-data/#data](https://www.atlasevhub.com/materials/state-ev-registration-data/#data)
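As a rough sketch of the aggregation step behind this sub-graph (not the authors' exact pipeline), registration rows can be grouped into collections that share the same zip code, registration year, and VIN-8-derived product before triplification; the column names below are assumptions.

```python
import pandas as pd

# Hypothetical columns for an Atlas-EV-Hub-style registration table.
rows = pd.DataFrame([
    {"vin8": "WBY7Z2C5", "zip_code": "07677", "reg_year": 2019, "model_year": 2018},
    {"vin8": "WBY7Z2C5", "zip_code": "07677", "reg_year": 2019, "model_year": 2018},
    {"vin8": "5YJ3E1EA", "zip_code": "95814", "reg_year": 2019, "model_year": 2019},
])

# One ElectricVehicleRegistrationCollection per (zip code, year, product) group.
collections = (
    rows.groupby(["zip_code", "reg_year", "vin8", "model_year"])
        .size()
        .reset_index(name="num_registrations")
)
print(collections)
```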
Electric Vehicle Charging Infrastructure. Charging stations are essential infrastructures that safely deliver energy from the electric grid to the battery of an electric vehicle. The U.S. Department of Energy establishes an open-source repository of nationwide electric vehicle charging stations7 and provides continuously updated data to the public. We build a triplification pipeline which uses this repository as the main data source to build and continuously update the proposed charging station sub-graph of EVKG. More specifically, each station record consists of information regarding its geo-location, charging capacity, usage restrictions, and other contextual attributes.
Footnote 7: [https://afdc.energy.gov/fuels/electricity_locations.html#/find/nearest?fuel=ELEC](https://afdc.energy.gov/fuels/electricity_locations.html#/find/nearest?fuel=ELEC)
Electric Transmission Network. Electric transmission can be seen as the bulk movement of electrical energy from generating sites such as power plants to electrical substations. The electric transmission network acts as the backbone for the transport of electric power across large geographic regions. We integrate the electric transmission network data into our EVKG and enrich them with geographical and contextual information. This aims at facilitating the construction of a macro-grid system with seamless interconnection with the charging infrastructure and the electric vehicle adoption market, which further helps to meet the incoming surge of electric vehicle charging demand. We, therefore, collect the electric power transmission network data across the U.S. from the Homeland Infrastructure Foundation Level Database (HIFLD)8. This repository depicts the main infrastructure components and their interconnections of the electric transmission network, including electric energy transmission lines, power plants, and substations in the U.S. Both the spatial and contextual features in the power sector (e.g., voltage levels, capacities, and installation types) are integrated into our electric transmission network subgraph.
Footnote 8: [https://hifld-geoplatform.opendata.arcgis.com/datasets/geoplatform::electric-power-transmission-lines/](https://hifld-geoplatform.opendata.arcgis.com/datasets/geoplatform::electric-power-transmission-lines/)
Place Hierarchy and Transportation Network. Since KWG covers several types of multiscale geospatial regions which are also essential for electric vehicle management in various place hierarchies (e.g., zip code areas, transportation networks, U.S. cities, counties, states, etc.), we link the constructed EVKG with KnowWhereGraph instead of reinventing the knowledge graph schema and regenerating knowledge graph triples for this information. More concretely, we first build co-reference resolution links (i.e., owl:sameAs) between EVKG and KWG on the U.S. zip code area entities. Moreover, as the road network is also a crucial subsystem to encompass in EVKG, we reuse the ontology and subgraph of the national highway planning network from the KWG graph. There are three advantages of this practice. First, by co-referencing our zip code entities to those in the KWG, we can skip the process of regenerating all the place hierarchy triples and directly use those from KWG. Second, KWG also links its zip code entities to many geographic features such as natural disasters, soil profiles, climate zones, etc. By using the zip code entities as transit nodes, we can ask questions across graphs like _which zip code areas had more than 10 electric vehicle charging stations in 2021 but were affected by multiple wildfires in the past_. Third, since KWG also contains various co-reference resolution links to other knowledge graphs, this practice essentially makes our EVKG an important component in the Linked Open Data Cloud (Assaf et al., 2015).
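A hypothetical sketch of such a cross-graph question is shown below; the namespace IRIs, the wildfire class, and the spatial predicate on the KnowWhereGraph side are placeholders for illustration, not the exact terms used by either graph.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Placeholder namespaces and endpoint; adjust to the published EVKG/KWG terms.
query = """
PREFIX ev-ont:  <http://example.org/ev-ont/>
PREFIX kwg-ont: <http://example.org/kwg-ont/>
SELECT ?zip (COUNT(DISTINCT ?station) AS ?numStations)
WHERE {
  ?zip a kwg-ont:ZipCodeArea ;
       kwg-ont:sfContains ?station , ?fire .
  ?station a ev-ont:ChargingStation .
  ?fire a kwg-ont:Wildfire .              # class name assumed for illustration
}
GROUP BY ?zip
HAVING (COUNT(DISTINCT ?station) > 10)
"""

endpoint = SPARQLWrapper("https://example.org/evkg/sparql")  # placeholder endpoint
endpoint.setQuery(query)
endpoint.setReturnFormat(JSON)
# results = endpoint.query().convert()  # run against a live endpoint
```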
### EVKG Ontology
While a broad range of domains and application facets are worth encompassing, the EVKG ontology strikes a balance between intricate expressiveness and brief logical statements. In contrast with other already available microscopic ontology suites in the EV space, the EVKG takes a more macroscopic perspective and targets to support the backbone of an integrated ecosystem of the EV-centric environment across geographic regions. Concretely, EVKG models and revolves around the most important classes and relationships of electric vehicle adoption, electric vehicle charging infrastructure availability, electric transmission networks, and major road networks with geo-enrichment.
The proposed EVKG ontology could serve as the backbone for building an ecosystem that represents, retrieves, and integrates these heterogeneous data in the EV domain. By linking the EVKG with external knowledge graphs, we aim to further address the bottlenecks of scalability and data silos while incorporating diverse data across a wide range of domains outside of the EV domain. The ontology is formally expressed in the W3C-recommended framework and uses the languages RDF9 and OWL10. Notably, the designed ontology integrates not just disparate data silos, but importantly, also their relationships, in a way that is readable for both humans and machines. The ontology module of each subgraph is visualized in Figure 1 and discussed as follows.
Footnote 9: [https://www.w3.org/RDF/](https://www.w3.org/RDF/)
Footnote 10: [https://www.w3.org/OWL/](https://www.w3.org/OWL/)
#### 3.2.1 Electric Vehicle Adoption Ontology Module
The upper left dashed box in Figure 1 illustrates our electric vehicle adoption ontology module. One critical class in this module is ev-ont:ElectricVehicleRegistrationCollection which indicates the class of a collection of electric vehicle registrations that share the same spatial and temporal scope, as well as product information, by using the properties ev-ont:hasSpatialScope, ev-ont:hasTemporalScope, and ev-ont:hasProductInfo. The idea of using a class to denote a collection of EV registration records is inspired by the sosa:ObservationCollection class proposed by the extensions to the Semantic Sensor Network Ontology [22, 23]. In the following, we will use an instance evr:evregcol_1 of the ev-ont:ElectricVehicleRegistrationCollection class as an example to demonstrate how to use this module. Here, evr:evregcol_1 represents a specific registration collection of 36 electric vehicle registration records which share the same spatial and temporal scope, as well as product information. First, all these records have the same spatial scope by using property ev-ont:hasSpatialScope - the ZIP code area 07677 with type kwg-ont:ZipCodeArea. Second, they have the same temporal scope (ev-ont:hasTemporalScope) of the year 2019, denoted as "2019"^^xsd:gYear. Third, this collection has the same electric vehicle product information of the type ev-ont:ElectricVehicleProduct by using property ev-ont:hasProductInfo. This product information has a maker type evr:BMW which is an instance of ev-ont:MakeType. It also has a product type - i3 - which is an instance of ev-ont:ModelType and has a model year "2018"^^xsd:gYear by using the property ev-ont:hasModelYear. We also use ev-ont:isWithTechnology to associate each product information entity with its technology type, e.g., battery electric vehicle of type ev-ont:Technology. The product information entity also has an ev-ont:Manufacturer - BMW of North America Inc., an ev-ont:VehicleUseCase of compact, and an ev-ont:WeightLevel of light-duty vehicles whose weight level is less than 6,000 lbs. This product information entity also specifies the charger type (ev-ont:ChargerType) - Level 2 and DC fast charging, and connector type (ev-ont:ConnectorType) - J1772 connector and CCS connector. Here, EVKG defines two properties, ev-ont:hasMatachableChargerType and ev-ont:hasMatachableConnectorType, to link the charging specification differences with respect to vehicle makes and models to the charging infrastructure.
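The running example can be written down in Turtle roughly as follows; the namespace IRIs, the instance IRIs, and any property not explicitly named in the text are placeholders, and the snippet parses with rdflib.

```python
from rdflib import Graph

ttl = """
@prefix ev-ont: <http://example.org/ev-ont/> .
@prefix evr:    <http://example.org/evr/> .
@prefix kwgr:   <http://example.org/kwgr/> .
@prefix xsd:    <http://www.w3.org/2001/XMLSchema#> .

evr:evregcol_1 a ev-ont:ElectricVehicleRegistrationCollection ;
    ev-ont:hasSpatialScope  kwgr:zipCodeArea_07677 ;
    ev-ont:hasTemporalScope "2019"^^xsd:gYear ;
    ev-ont:hasProductInfo   evr:product_BMW_i3_2018 .

evr:product_BMW_i3_2018 a ev-ont:ElectricVehicleProduct ;
    ev-ont:hasModelYear "2018"^^xsd:gYear ;
    ev-ont:hasMatachableChargerType   evr:Level2 , evr:DCFastCharging ;
    ev-ont:hasMatachableConnectorType evr:J1772 , evr:CCS .
"""

g = Graph()
g.parse(data=ttl, format="turtle")
print(len(g), "triples")  # 10
```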
Figure 1: The ontology design of the electric vehicle knowledge graph. Orange nodes indicate classes while yellow nodes indicate literals. Blue nodes indicate these classes are reused from other ontologies including GeoSPARQL, Time Ontology, and KnowWhereGraph ontology. White arrows without edge labels refer to the rdfs:subClassOf predicate.
#### 3.2.2 Electric Vehicle Charging Infrastructure Ontology Module
The lower part of Figure 1 shows our Electric Vehicle Charging Infrastructure Ontology Module. As the key class in this module, ev-ont:ChargingStation indicates a class of units of electric vehicle charging infrastructure for refueling electric vehicles. In EVKG, we model ev-ont:ChargingStation as a subclass of geo:Feature which is defined by the GeoSPARQL ontology (Battle and Kolas, 2011). So each charging station will have a sf:Point type node to indicate its spatial footprint. Moreover, we explicitly materialize the spatial relations between a charging station and a zip code area (kwg-ont:ZipCodeArea) as knowledge graph triples by reusing the spatial relation properties defined by KnowWhereGraph - kwg-ont:sfWithIn and kwg-ont:sfContains. The basic attributes of a charging station such as its opening time, operating hours, parking restrictions, pricing scheme are simply modeled by dedicated properties: ev-ont:hasOpenTime, ev-ont:hasOperatingTime, ev-ont:hasParkingRestriction, and ev-ont:hasPricingScheme.
A charging station may host multiple chargers and some of them could share the same characteristics. Chargers may have different charging levels and connector types since different electric vehicles have different matchable connector types and not all chargers offer all types of EVs. However, given a set of chargers that are provided by the same charging station and share the same characteristics, generating assertions for each of them could be bothersome and cause unnecessary redundancy. To strike the balance between triple store storage efficiency and semantic accuracy, EVKG defines the concept of ev-ont:ChargerCollection to model a set of chargers provided by one charging station that belongs to the same ev-ont:ChargerType and with the same ev-ont:ConnectorType. The number of chargers contained in an ev-ont:ChargerCollection is expressed by using the ev-ont:hasAmount property. Notably, EVKG only distinguishes different ev-ont:ChargerType by using the charging levels that represent their charging speeds, not based on different brands of them. In the U.S. market, electric vehicle supply equipment (EVSE) comes in three levels: Level 1, Level 2, and Level 3. While Level 1 and Level 2 alternating-current (AC) chargers require long hours to recharge the battery to its full capacity, direct-current fast chargers (DCFCs) can recharge a battery by 80% in a 15-minute charge session which reduces charging waiting time and indirectly decreases the demand for a larger number of charging stations. Moreover, unlike gas stations, there is no universal connector type (i.e., the socket through which you connect a charger to the car) for EV charging. For instance, J1772COMBO (Combined Charging System, CCS), CHAdeMO, and TESLA are three widely accepted connector standards in the U.S. Therefore, EVKG models ev-ont:ConnectorType and ev-ont:ChargerType for each ev-ont:ChargerCollection.
It is also important to distinguish between different types of ev-ont:ChargingStation. As the user access permission of an ev-ont:ChargingStation determines its role in providing public services, EVKG distinguishes between ev-ont:PublicChargingStation and ev-ont:PrivateChargingStation. The former is characterized by its availability to the general public, including shopping center parking, on-street parking, and non-reserved multi-family parking lots, while the latter specifies a charging station that provides exclusive services for designated user groups (ev-ont:ChargingUserGroup) (e.g., designated employee parking). Connecting a charging station to a charging network (ev-ont:ChargingNetwork) improves its online accessibility to EV users and enables it to set pricing of EV charging or resell the electricity. An ev-ont:ChargingNetwork denotes a data management system deployed on electric vehicle supply equipment and connected via an online connection. EVKG defines this type of charging station as ev-ont:NetworkedChargingStation. Each ev-ont:NetworkedChargingStation is connected to its ev-ont:ChargingNetwork using the ev-ont:isUnderChargingNetwork property. In comparison, EVKG uses the concept of ev-ont:NonNetworkedChargingStation to represent a non-networked station that is a stand-alone unit without access control and not available online. The outlined subclasses of ev-ont:ChargingStation are linked to it via the rdfs:subClassOf relation. An illustrative encoding of a charging station and its charger collections is sketched below.
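For illustration, a networked public station hosting a collection of DC fast chargers might be encoded as follows; the namespaces, instance IRIs, and the station-to-collection and type-linking properties are placeholders, since their exact names are not spelled out in the text.

```python
from rdflib import Graph

ttl = """
@prefix ev-ont: <http://example.org/ev-ont/> .
@prefix evr:    <http://example.org/evr/> .
@prefix geo:    <http://www.opengis.net/ont/geosparql#> .
@prefix sf:     <http://www.opengis.net/ont/sf#> .

evr:station_42 a ev-ont:NetworkedChargingStation , ev-ont:PublicChargingStation ;
    ev-ont:isUnderChargingNetwork evr:ChargePoint ;
    ev-ont:hasOperatingTime "24 hours daily" ;
    ev-ont:hasChargerCollection evr:station_42_cc1 ;   # linking property assumed
    geo:hasGeometry evr:station_42_geom .

evr:station_42_geom a sf:Point ;
    geo:asWKT "POINT(-121.49 38.58)"^^geo:wktLiteral .

evr:station_42_cc1 a ev-ont:ChargerCollection ;
    ev-ont:hasAmount 4 ;
    ev-ont:hasChargerType   evr:DCFastCharging ;       # property names assumed
    ev-ont:hasConnectorType evr:CHAdeMO .
"""

print(len(Graph().parse(data=ttl, format="turtle")), "triples")
```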
#### 3.2.3 Electric Transmission Network Ontology Module
The upper right dashed box in Figure 1 contains our Electric Transmission Network Ontology Module which encompasses the important concepts and roles of the power transmission infrastructure defined in the EVKG. Unlike some ontologies in the power system domain that provide detailed descriptions of the grid assets (Huang and Zhou, 2015), EVKG focuses on enabling the geo-enrichment service (Mai et al., 2019) and cross-domain integrations with other EVKG's subgraphs. In EVKG, the core components of a typical transmission network are conceptualized as geo:Feature, which includes the power generator (ev-ont:PowerPlant) where the electricity is produced, the transmission line (ev-ont:TransmissionLine) that carries electricity over long distances, and the transmission substation (ev-ont:Substation) that connects two or more transmission lines. Each of them is associated with their spatial footprints with type geo:Geometry.
In addition to geometric and topological properties, the developed ontology module introduces a list of data type properties to describe the attributes of each power transmission infrastructure component. First, the electricity capacities of ev-ont:PowerPlant and voltage capacities of ev-ont:Substation are defined as data type properties - ev-ont:hasSummerCapacity, ev-ont:hasWinterCapacity, ev-ont:hasOperatingCapacity, and ev-ont:hasMinVoltage, ev-ont:hasMaxVoltage respectively. The serving status of transmission lines, power plants, and substations is defined as ev-ont:ServingStaus by using object type properties - ev-ont:hasLineStatus, ev-ont:hasPlantStatus, and ev-ont:hasStationStatus correspondingly. The attributes of transmission lines such as voltage class, current type, installation type, etc. are modeled as object-type properties. All attribute nodes are entities of type ev-ont:LineAttribute. The reason for using object type properties instead of datatype properties is that those line attributes contain more complex and type-level information. For example, ev-ont:VoltageClass indicates various voltage levels of transmission lines. We model the owner of transmission lines as type ev-ont:TransmissionLineOwner.
## 4 Result and Case Study
### Statistics about the Electric Vehicle Knowledge Graph
As of the date of submitting this paper (January 2023), EVKG consists of 83 classes and 69 properties that describe the core concepts in the EV sector. Based on the collected data, more than 27 million statements are currently included in the EVKG. The entities used for describing the road network are directly collected from KnowWhereGraph and encapsulated into our EVKG. More detailed statistics can be found in Table 1. Moreover, EVKG will be updated regularly by including newly installed charging stations, incoming EV registrations, new electric vehicle products, upgraded transmission networks, and others. The ontology and knowledge graph of EVKG are available here11.
Footnote 11: [https://github.com/EVKG/evkg](https://github.com/EVKG/evkg)
### Competency Questions
One strength of the presented EVKG is that it is highly versatile, allowing for answering various competency questions focusing on different domains and use cases. These competency questions can be answered by executing SPARQL queries12 (SPARQL is a standard query language for RDF-based knowledge graphs) that span multiple topics of interest under the EV theme, making a wide range of information readily accessible to users.
Footnote 12: [https://www.w3.org/TR/rdf-sparql-query/](https://www.w3.org/TR/rdf-sparql-query/)
In this section, we discuss multiple exemplary competency questions together with the corresponding queries to showcase the intended use of the designed EVKG ontology and its associated EVKG resource. These examples highlight the diverse range of possibilities offered by the graph, from semantic and geospatial queries to cross-domain queries that are expected to be answered based on information across different data silos. These queries are grouped into three main categories. Note that the contents inside the brackets of each competency question indicate properties, specific entities, and classes, which can be easily replaced with similar properties, entities, or classes. The discussed queries can be found in the published GitHub repository and executed with ease against the graph endpoint.
#### Group 1: Semantic and Geospatial Questions
**Q1. Semantic Questions** _Which [electric vehicle products] have charging cables that match the [CHAdeMO connector type]?_
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|} \hline Key class & Charging & Charger & ElectricVehicle & ElectricVehicle & Transmission & Substation & Power & Road & RoadSegment \\ & Station & Collection & RegistrationCollection & Product & Line & & Plant & Segment & Node \\ \hline Number of Entities & 50,143 & 52,370 & 429,682 & 10,602 & 93,047 & 75,327 & 12,556 & 538,014 & 1,076,028 \\ \hline \multicolumn{10}{|c|}{Total number of statements: 27,608,442} \\ \multicolumn{10}{|c|}{Total number of entities: 4,298,217} \\ \multicolumn{10}{|c|}{Total number of properties: 69} \\ \multicolumn{10}{|c|}{Total number of classes: 83} \\ \hline \end{tabular}
\end{table}
Table 1: **Statistics of the electric vehicle knowledge graph (EVKG)**
Unlike the simple and universal fuel fill inlet found on fuel vehicles, not all electric vehicles are manufactured with the same charging capabilities. The variety of connector types and charging cable standards across different EV models makes it difficult to identify the charging capability of different electric vehicle products. This complexity can be further compounded by factors such as weight, cost, and space limitations, which may affect the acceptance rate and type of on-board fast charging cables for specific EV models. EVKG addresses this confusion by providing quick and easy access to the semantic descriptions of different electric vehicle products through SPARQL queries (see Listing 1 in Appendix 6.1).
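Listing 1 itself is not reproduced here, but a query in its spirit could look like the following; the namespace IRIs and the connector-instance IRI are placeholders.

```python
q1 = """
PREFIX ev-ont: <http://example.org/ev-ont/>
PREFIX evr:    <http://example.org/evr/>
SELECT DISTINCT ?product
WHERE {
  ?product a ev-ont:ElectricVehicleProduct ;
           ev-ont:hasMatachableConnectorType evr:CHAdeMO .
}
"""
# Execute with, e.g., rdflib:  results = Graph().parse("evkg.ttl").query(q1)
```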
**Q2. Geospatial Questions** _Which [charging stations/road segments/transmission lines/power plants/substations] are [located in/pass through] [King County]?_
To answer the above geospatial questions in a typical GIS environment, we need to collect geospatial datasets from various domains, clean them, and reproject them into a shared proper geographic/projection coordinate system as various GIS layers. Then we can write spatial queries across different GIS layers to answer these questions. In contrast, EVKG can answer these geospatial questions with simple geospatial SPARQL queries (see Listing 2 in Appendix 6.1). EVKG contains various types of geospatial features. As described in Section 3.2, we use the GeoSPARQL ontology to encode each geospatial feature's spatial footprints and use KnowWhereGraph's spatial relation properties (e.g., kwg-ont:sfWithin) to explicitly generate some spatial relation triples to speed up the geospatial queries. So the above geospatial questions can be answered either by using SPARQL queries with GeoSPARQL functions such as geo:sfContains, which explicitly compute spatial relations based on the stored spatial footprints of geospatial entities, or by using the prematerialized spatial relation properties such as kwg-ont:sfContains. The drawback of the first approach is that, as shown by multiple studies (Regalia et al., 2019; Mai et al., 2021, 2022c), answering topological questions by explicitly computing the spatial relations among the stored geometries suffers from the sliver polygon problem, which might lead to wrong answers. For example, as shown in Mai et al. (2022c), dbr:Seal_Beach,_California intersects dbr:Orange_County,_California based on their OpenStreetMap polygon geometries, although we know the former should be located inside the latter. By using the prematerialized spatial relation triples, we can avoid this kind of problem. Both query styles are sketched below.
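The two query styles discussed above can be sketched as follows; the namespaces, the county IRI, and the charging-station class IRI are placeholders. The first variant computes containment on the fly with a GeoSPARQL function, the second reuses prematerialized kwg-ont:sfContains triples.

```python
# Variant 1: compute spatial containment from the stored WKT geometries.
q_geof = """
PREFIX geo:    <http://www.opengis.net/ont/geosparql#>
PREFIX geof:   <http://www.opengis.net/def/function/geosparql/>
PREFIX ev-ont: <http://example.org/ev-ont/>
PREFIX evr:    <http://example.org/evr/>
SELECT ?station WHERE {
  evr:KingCounty geo:hasGeometry/geo:asWKT ?countyWkt .
  ?station a ev-ont:ChargingStation ;
           geo:hasGeometry/geo:asWKT ?stationWkt .
  FILTER(geof:sfContains(?countyWkt, ?stationWkt))
}
"""

# Variant 2: reuse the prematerialized spatial relation triples.
q_materialized = """
PREFIX kwg-ont: <http://example.org/kwg-ont/>
PREFIX ev-ont:  <http://example.org/ev-ont/>
PREFIX evr:     <http://example.org/evr/>
SELECT ?station WHERE {
  evr:KingCounty kwg-ont:sfContains ?station .
  ?station a ev-ont:ChargingStation .
}
"""
```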
**Q3. Semantic and Geospatial Questions** _Which and where are the [public charging stations] operating ["24 hours daily"] that a [Nissan Leaf 2021] vehicle with a membership of the [ChargePoint] network can use for [fast charging] within ZIP code [95814]?_
The EVKG can serve as a search engine for personalized E-mobility with its rich semantic and geospatial resources. Compared with general fuel station retrieval services, charging station retrieval involves a much wider variety of semantic preferences. When people search for charging stations, they typically consider both geospatial location and other specific characteristics: EV drivers searching for charging may consider user restrictions, charging network membership requirements, and connector types, in addition to the distance and charging speed of a station. The answer to Q3 obtained from EVKG (see Listing 3 in Appendix 6.1) is visualized as the target selection in Figure 2.
### Group 2: Spatial and Temporal Aggregation Questions
The EVKG offers a powerful and versatile platform for quantitatively understanding the complex interplay between different types of EV chargers, geospatial regions, and EV adoption, and it supports a wide range of data analysis and aggregation. Questions 4-5 are two examples presenting a thorough examination of the distribution of electric vehicle adoption and charging infrastructure, broken down by connector type over time and space. These examples showcase the extensive capabilities of EVKG in delivering a comprehensive overview of the electric vehicle theme.
Acting as the crucial coupling between electric vehicles and charging equipment, connectors play a critical role in the seamless transmission of power. The strategic placement and design of charging infrastructure equipped with the proper types of connectors can greatly impact the growth and evolution of the automotive industry, as well as the efficiency and stability of power grid systems. In short, connectors are essential building blocks that shape the future of sustainable transportation and energy. Therefore, EV charging infrastructure should not only match EV adoption (demand) in terms of total numbers, but should also provide chargers with matching connector types to ensure compatibility with different types of electric vehicles. If electric vehicles with a specific connector type are produced and sold in large numbers without sufficient infrastructure support, the result is a critical shortage of public charging resources; conversely, an excessive deployment of charging infrastructure for a particular connector type wastes investment and resources. This highlights the importance of carefully balancing vehicle production and infrastructure to ensure optimal utilization and the long-term success of the EV market. A comprehensive and inclusive approach should encompass the variations in both the spatial and temporal dimensions of these elements. From a temporal perspective, the deployment of charging stations, with their varying numbers of EVSEs, and the adoption of electric vehicles have both evolved significantly over time. In the spatial dimension, there is noticeable heterogeneity across local regions. Understanding these changes and distributions can provide valuable insights for policymakers and planners, helping them make informed decisions about public funding investments and improve policies accordingly.

Figure 2: The illustration of the charging station distribution with semantic conditions within the ZIP code area 95814 in Sacramento City. The red triangle represents the target selection that satisfies all the geospatial and semantic conditions defined by Competency Question Q3.
**Q4. Temporal Aggregation Questions** _How does the fast charging resource of the [CCS], [CHAdeMO], and [TESLA] types per matchable electric vehicle evolve over the temporal scope in [New Jersey]?_
While the J1772 has become the universal standard for Level 2 charging in North America, the variety of connector types for fast charging demands a more comprehensive understanding when evaluating the markets of EV adoption and charging infrastructure. By querying the direct-current fast chargers (DCFCs) and grouping them according to connector category, temporal scope, and administrative level, such as by state (see Listing 4 in Appendix 6.1), it is possible to gain valuable insights into the current state of different types of EVs and the corresponding fast charging infrastructure. Additionally, by comparing the number of DCFCs with the number of registered electric vehicles (see Listing 5 in Appendix 6.1), we can calculate the DCFC resource per registered EV (see Listing 6 in Appendix 6.1), providing a clear picture of fast charging availability for different types of electric vehicles.
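To make the last step concrete, here is a minimal sketch (in Python with pandas) of the per-EV ratio computation, assuming the results of the two aggregation queries have been exported as tables; the column names and all numbers are illustrative placeholders, not values from EVKG.

```python
import pandas as pd

# Illustrative aggregates; in practice these come from the SPARQL queries
# corresponding to Listings 4-6 (counts per connector type and year).
dcfc = pd.DataFrame({
    "connector": ["CCS", "CHAdeMO", "TESLA"],
    "year": [2021, 2021, 2021],
    "dcfc_count": [310, 180, 420],        # assumed numbers, for illustration only
})
evs = pd.DataFrame({
    "connector": ["CCS", "CHAdeMO", "TESLA"],
    "year": [2021, 2021, 2021],
    "ev_count": [21000, 4000, 30000],     # assumed numbers, for illustration only
})

merged = dcfc.merge(evs, on=["connector", "year"])
merged["dcfc_per_ev"] = merged["dcfc_count"] / merged["ev_count"]
print(merged)
```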
As seen in Figure 3, our EVKG data show that both the number of electric vehicles and the amount of fast charging infrastructure in New Jersey continued to grow over the years for which data are available. However, the figure also highlights that the availability of public fast charging for specific types of vehicles, including Tesla vehicles and vehicles compatible with the CCS connector, may not be increasing as quickly as their adoption rates. This is likely because the construction of these specific types of charging stations is not keeping pace with the surging demand for them. By breaking down the data silos of EV registration records and charging infrastructure repositories, our EVKG provides an integrated view of these two domains and a more consolidated and accurate understanding of such questions.
**Q5. Spatial Aggregation Questions** _How many registered electric vehicles equipped with the [CCS] type connector are there in each [ZIP code area] of [New Jersey] in [2021]? How many [CCS chargers] are there in those [ZIP code areas]? What is the number of CCS chargers per EV with a CCS-type connector in each [ZIP code area]?_

Figure 3: The annual variation trend of the EV registration numbers, direct-current fast charging (DCFC) infrastructure numbers, and average charger share per registered EV across the entire state of New Jersey from 2017 to 2021, by connector type. For the years in which no data are shown, there are gaps in the raw data source that have yet to be filled. These three figures visualize the results of the SPARQL query that answers Competency Question Q4.
Public charging stations are not as ubiquitous as gas stations. This disparity underscores the importance of evaluating the spatial distribution of EV registrations and charging resources in order to plan charging infrastructure at a more granular level. Among the various fast charging options available, the CCS connector market is particularly complex. While companies like Tesla have established their own charging network exclusively for their vehicles, and only a limited number of EVs are compatible with CHAdeMO connectors, the CCS standard is supported by a diverse range of charge point operators and automobile manufacturers. This adds an extra layer of consideration for those involved in charging infrastructure planning and deployment. With simple queries (see Listing 7-8 in Appendix 6.1), our EVKG gives end users easy access to spatial analyses of the targeted elements at the ZIP code level. Figure 4(a) and 4(b) visualize the results of Q5.
### Group 3: Cross-Domain Questions
**Q6. Cross-Domain Complex Questions** _Which zip code areas in the New Jersey State with significant charging resource shortage can potentially take advantage of the high-voltage transmission lines that pass through for installing the direct electricity source for DCFC stations?_
Figure 4: The spatial distributions of the CCS-type electric vehicle registrations and the average charging infrastructure share, aggregated at the ZIP code level across New Jersey in 2021. Figure 4(a) and 4(b) are visualizations of the results of the SPARQL queries that answer Competency Question Q5.

The adverse impacts of EV charging station loads on the voltage profile of distribution networks have been studied by a number of researchers (Geske et al., 2010; Juanuwattanakul and Masoum, 2011; Zhang et al., 2016). Typically, fast charging stations can overload the distribution system when connected on the distribution side, and the immediate connection/disconnection of EV loads may also cause power quality issues. Therefore, fast/super-fast charging stations are best fed from transmission lines (Rahman et al., 2020). From an equity perspective, the placement of EV charging stations may also need to be prioritized in areas that experience a significant shortage of charging resources. From the viewpoint of charging infrastructure planning, our EVKG can offer informative insights to enhance the strategic planning of charging station placement. To achieve this, we can take further advantage of our EVKG to answer this complex question. Based on the results of Q5, and the fact that the available DC fast chargers require electricity inputs of at least 480 volts, we can query the EVKG SPARQL endpoint to retrieve the ZIP code areas that are passed through by transmission lines above this voltage threshold, subject to the additional constraints of a low charging resource share and a larger number of registered EVs (see Listing 9-10 in Appendix 6.1). The target ZIP code areas are illustrated in Figure 5. As this is only an illustrative answer, the result could be refined with more carefully defined criteria.
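The selection logic described above can also be sketched outside the SPARQL endpoint. The following hedged Python/pandas example filters hypothetical per-ZIP summaries by the three criteria (a transmission line above the 480 V threshold, a larger EV population, and a low CCS charger share); all data values and cut-offs are illustrative assumptions, not EVKG results.

```python
import pandas as pd

# Illustrative per-ZIP summaries; in practice these come from the SPARQL
# queries in Listings 7-10.  All numbers below are made up.
zips = pd.DataFrame({
    "zip": ["07030", "07102", "08608", "08701"],
    "ev_count": [900, 1500, 400, 1200],
    "ccs_chargers": [2, 30, 1, 3],
    "max_line_voltage_kv": [230, 69, 500, 345],  # highest transmission-line voltage passing through
})

zips["chargers_per_ev"] = zips["ccs_chargers"] / zips["ev_count"]

candidates = zips[
    (zips["max_line_voltage_kv"] * 1000 >= 480)   # the 480 V DCFC input threshold from the text
    & (zips["ev_count"] >= 1000)                  # "larger number of registered EVs" (assumed cut-off)
    & (zips["chargers_per_ev"] < 0.01)            # "significant charging resource shortage" (assumed cut-off)
]
print(candidates[["zip", "ev_count", "chargers_per_ev", "max_line_voltage_kv"]])
```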
## 5 Conclusion and Discussion
While an emphatic global shift toward vehicle electrification is underway, several factors still impede its success, and inadequate data sharing and integration is among the most important. In this work, the EVKG is developed to serve as a comprehensive knowledge management framework that supports more efficient EV charging and infrastructure planning. The ontology integrates critical aspects of EVs, including EV adoption, EV charging infrastructure, and the electricity transmission network. The EVKG is complemented with several EV-related subgraphs from KnowWhereGraph that cover transportation networks and place hierarchies.
Figure 5: The distribution of the zip code areas in New Jersey which satisfy the requirement specified in Competency Question Q6. The boundaries of the selected zip code areas are highlighted in bright blue.
The development of the EVKG includes identifying core modules of the EV industry and collecting data repositories that represent these aspects. An ontology is designed to model these key concepts and their logical relationships, which are encapsulated into different ontology modules. The EVKG contextualizes and enriches the data sources into a unified graph structure, making them efficient to consolidate and query. The strengths of the EVKG are demonstrated through several exemplary competency questions. In this way, we provide an integration framework that can efficiently support smart EV knowledge management and make an extensive range of EV applications effortless.
Although only selected data sources are included in the current version of EVKG, the developed ontology enables the further incorporation of other data resources. With existing open KGs and more data silos to be identified, the EVKG will continue to grow and become increasingly useful. For instance, the constructed EVKG can serve as the backbone of E-mobility: with adequate integration of the geo-locations and semantic annotations of EVSE infrastructure and EV models, decision-making on matching EVs to charging stations becomes much more efficient. Moreover, with its scalable nature, the EV-charging-centric knowledge graph enables convenient linking and integration with existing open geo-enrichment service stacks (Mai et al., 2019, 2022b; Janowicz et al., 2022). With additional use-case-focused data, for example on environmental disasters (e.g., floods or wildfires) or urban planning, a wide range of knowledge reasoning and discovery can be conducted downstream. This work also makes EV-related data-driven decision-making and data analytics substantially more effective, accessible, and affordable.
In the future, we plan to further integrate EVKG with other open knowledge graph repositories, such as Wikidata and GeoNames, so that more socioeconomic and environmental challenges beyond electric transportation can be comprehensively addressed. Furthermore, a set of shape constraints can be developed to validate the incoming data to EVKG so as to improve the data quality (Zhu et al., 2021a). Last but not least, new analytical methods that can work on knowledge graphs could be studied to advance the discovery of insights from a multidisciplinary context.
|
2303.11587
|
A study on sensitivity and stability analysis of non-stationary
$α$-fractal functions
|
This article aims to study fractal interpolation functions corresponding to a
sequence of iterated function systems (IFSs). For a suitable choice of a
sequence of IFS parameters, the corresponding non-stationary fractal function
is a better approximant for the non-smooth approximant. In this regard, we
first construct the non-stationary interpolant in the Lipschitz space and study
some topological properties of the associated non-linear fractal operator.
Next, we discuss the stability of the interpolant having small perturbations.
Also, we investigate the sensitivity with respect to the perturbations of the
IFS parameters by providing an upper bound of errors acquired in the
approximation process. In the end, we study the continuous dependence of the
proposed interpolant on different IFS parameters.
|
Anarul Islam Mondal, Sangita Jha
|
2023-03-21T04:37:52Z
|
http://arxiv.org/abs/2303.11587v1
|
# A study on sensitivity and stability analysis of non-stationary \(\alpha\)-fractal functions
###### Abstract.
This article aims to study fractal interpolation functions corresponding to a sequence of iterated function systems (IFSs). For a suitable choice of a sequence of IFS parameters, the corresponding non-stationary fractal function is a better approximant for the non-smooth approximant. In this regard, we first construct the non-stationary interpolant in the Lipschitz space and study some topological properties of the associated non-linear fractal operator. Next, we discuss the stability of the interpolant having small perturbations. Also, we investigate the sensitivity with respect to the perturbations of the IFS parameters by providing an upper bound of errors acquired in the approximation process. In the end, we study the continuous dependence of the proposed interpolant on different IFS parameters.
Key words and phrases: Fractal functions, non-stationary iterated function system, approximation, stability.
2020 Mathematics Subject Classification: 28A80 (primary), 26A18, 35B41, 41A30, 46B70.
## 1. Introduction
The representation of arbitrary functions or data sets in terms of simple classical functions like polynomials, trigonometric functions, or exponentials is one of the main ideas of numerical analysis and approximation theory. Traditional approaches may not produce an approximant with the required precision when dealing with irregular forms, such as real-world signals like time series, financial series, climatic data, and bioelectric recordings. To address the irregularity in different practical situations, Barnsley [3] introduced the notion of fractal interpolation functions (FIFs). The reader can consult the books [4, 15] and the sources provided there for a clear explanation of several crucial subjects in this direction.
FIFs have some advantages over traditional interpolation functions. FIFs, in general, are self-similar/affine, and the Hausdorff-Besicovitch dimensions of their graphs are non-integers. The reader can consult [2, 8, 21, 23] for the dimensional analysis of stationary fractal functions. The primary advantage of fractal interpolants over classical interpolants is that they can generate both smooth and nonsmooth interpolants, depending on the choice of the IFS parameters. It is important to note that relatively few techniques, subdivision schemes being another popular one, produce nonsmooth interpolants [10]. Subdivision methods and fractal interpolation have recently been linked in some attempts [11, 14].
Motivated by the work of Barnsley, a family of non-affine fractal functions \(f^{\alpha}\), also known as \(\alpha\)-fractal functions, corresponding to a continuous function \(f\) on a closed and bounded interval \(I\) of \(\mathbb{R}\), was studied by Navascues [17]. The function \(f^{\alpha}\) approximates and interpolates \(f\). This method of fractal perturbation produces an operator, which links the theory of FIFs to operator theory, approximation theory, functional analysis, and harmonic analysis.
Evidently, a lot of work has been done in the development of fractal approximations (stationary) [1, 5, 6, 7, 9, 20, 22], and it is still a subject of very active research, with an extensive list of connections and applications. But on the other hand, many questions remain to be settled in the non-stationary case, and more specifically in the case of a sequence of IFSs. The difficulty lies in the fact that the standard tools are not well developed in the setting of a sequence of IFSs. Consequently, advances in non-stationary IFS are required.
Using the fixed points of contractive operators for a specific type of IFS is a helpful way to build fractals [12]. In the case of usual fractal functions, a common construction is to consider an operator on a complete space and define the fractal function as its unique fixed point [25]. However, if the operator is replaced by a sequence of maps, a similar construction runs into difficulties. Levin et al. [14] investigate trajectories of contraction mappings, which produce limit attractors at various scales with different features or shapes. Recently, Massopust [16] advanced fractal functions to a new and more general non-stationary setting based on sequences of IFSs.
**Definition 2.2**.: Let \(T:\mathbf{X}\rightarrow\mathbf{X}\) be a contraction map on a complete metric space \((\mathbf{X},d)\). The forward iterates of \(T\) are transformations \(T^{\circ\,n}:\mathbf{X}\rightarrow\mathbf{X}\) defined by \(T^{\circ\,0}(x)=x\) and \(T^{\circ\,(n+1)}(x)=T(T^{\circ\,n}(x))=\underbrace{T\circ T\circ\cdots\circ T }_{(n+1\text{ times})}(x)\) for \(n\in\mathbb{N}\cup\{0\}\).
**Definition 2.3**.: (Forward and Backward Trajectories) Let \(\mathbf{X}\) be a metric space and \(\{T_{r}\}_{r\in\mathbb{N}}\) be a sequence of Lipschitz maps on \(\mathbf{X}\). We define forward and backward trajectories respectively
\[\phi_{r}:=T_{r}\ \circ\ T_{r-1}\ \circ\ \ldots\ \circ\ T_{1}\ \text{ and }\ \psi_{r}:=T_{1}\ \circ\ T_{2}\ \circ\ \ldots\ \circ\ T_{r}.\]
The question now concerns the convergence of general trajectories, i.e., under what conditions the forward and backward trajectories converge, and whether the trajectories produce new types of fractal sets. Recently, Levin et al. [14] observed that backward trajectories converge under relatively mild conditions and may produce a new class of fractal sets; in [14], the authors use the assumption of a compact invariant domain to guarantee the convergence of backward trajectories. In [19], Navascues and Verma replace the compact invariant domain condition with a weaker one, which we now recall.
**Proposition 2.4**.: [19, Proposition 2.6] Let \(\{T_{r}\}_{r\in\mathbb{N}}\) be a sequence of Lipschitz maps on a complete metric space \((\mathbf{X},d)\) with Lipschitz constant \(c_{r}\). If \(\exists\ x^{*}\) in the space such that the sequence \(\{d(x^{*},T_{r}(x^{*}))\}\) is bounded, and \(\sum_{r=1}^{\infty}\prod_{i=1}^{r}c_{i}<\infty,\) then the sequence \(\{\psi_{r}(x)\}\) converges for all \(x\in\mathbf{X}\) to a unique limit \(\bar{x}\).
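To make Proposition 2.4 concrete, the following minimal numerical sketch (with maps of our own illustrative choice, not taken from the cited works) uses affine contractions \(T_{r}(x)=c_{r}x+d_{r}\) on \(\mathbb{R}\) with \(c_{r}=1/(r+1)\), so that \(\sum_{r=1}^{\infty}\prod_{i=1}^{r}c_{i}<\infty\) and \(d(x^{*},T_{r}(x^{*}))\) stays bounded for \(x^{*}=0\); the backward trajectories then converge to the same limit from different starting points.

```python
# Sequence of contractions on the real line: T_r(x) = c_r * x + d_r.
def T(r, x):
    c_r = 1.0 / (r + 1)      # Lipschitz constants, so prod_{i<=r} c_i = 1/(r+1)! is summable
    d_r = 1.0 + 1.0 / r      # keeps d(x*, T_r(x*)) bounded for x* = 0
    return c_r * x + d_r

def backward_trajectory(x, depth):
    """Evaluate psi_depth(x) = T_1 o T_2 o ... o T_depth (x)."""
    for r in range(depth, 0, -1):   # apply T_depth first, T_1 last
        x = T(r, x)
    return x

for x0 in (-100.0, 0.0, 100.0):
    print(x0, [round(backward_trajectory(x0, n), 6) for n in (1, 5, 10, 20)])
```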
### Stationary Fractal Interpolation Functions
For a fixed \(k\in\mathbb{N}\), we denote by \(\mathbb{N}_{k}\) the first \(k\) natural numbers and \(\mathbb{N}_{k}^{0}=\mathbb{N}_{k}\cup\{0\}\). Let \(I=[a,b]\) and define a partition \(\Delta\) on \(I\) by
\[\Delta=\{(x_{0},x_{1},\ldots,x_{N}):a=x_{0}<x_{1}<\cdots<x_{N}=b\}.\]
For \(i\in\mathbb{N}_{N}\), let \(I_{i}=[x_{i-1},x_{i}]\). Suppose the affine maps \(l_{i}:I\to I_{i}\) are such that
\[l_{i}(x_{0})=x_{i-1}\,\ l_{i}(x_{N})=x_{i},\text{ and }|l_{i}(z_{1})-l_{i}(z_{ 2})|\leq l|z_{1}-z_{2}|\ \forall z_{1},\ z_{2}\in I, \tag{2.1}\]
where \(0\leq l<1\). Set \(\mathcal{M}=I\times\mathbb{R}\). Let the \(N\) continuous maps \(F_{i}:\mathcal{M}\rightarrow\mathbb{R},\ i\in\mathbb{N}_{N}\) be such that
\[F_{i}(x_{0},y_{0})=y_{i-1},\ \ \ \ F_{i}(x_{N},y_{N})=y_{i}, \tag{2.2}\]
and \(F_{i}\) is a contraction in the second variable for \(i\in\mathbb{N}_{N}\). The most commonly chosen maps for forming the IFS are
\[\begin{cases}l_{i}(x)=a_{i}x+e_{i},\\ F_{i}(x,y)=\alpha_{i}(x)y+q_{i}(x),\ \ \ i\in\mathbb{N}_{N},\end{cases} \tag{2.3}\]
where \(a_{i},e_{i}\) can be determined by conditions (2.1). \(\alpha_{i}(x)\) are scaling functions satisfying \(\|\alpha_{i}\|_{\infty}<1\) and \(q_{i}(x)\) are suitable continuous functions such that the condition (2.2) is satisfied. For \(i\in\mathbb{N}_{N}\), we define \(W_{i}:\mathcal{M}\to I_{i}\times\mathbb{R}\) by
\[W_{i}(x,y)=(l_{i}(x),F_{i}(x,y))\ \ \forall(x,y)\in\mathcal{M}.\]
The complete metric space \((\mathcal{M},d_{H})\) with the above \(N\) maps \(\{W_{i}:i\in\mathbb{N}_{N}\}\) forms an IFS. The uniqueness of the attractor of this IFS was established by Barnsley, as stated below.
**Theorem 2.5**.: [3] _The IFS \(\mathcal{I}=\{\mathcal{M};W_{i}:i\in\mathbb{N}_{N}\}\) admits a unique attractor \(G\) and \(G\) is the graph of a continuous function \(f:I\rightarrow\mathbb{R}\) which interpolates the given data._
### Stationary \(\alpha\)-fractal Functions
Let \(\mathcal{C}(I)\) be the set of all continuous functions on a compact interval \(I\). Let \(f\in\mathcal{C}(I)\) be a prescribed function. Let us choose the function \(q_{i}(x)\) in (2.3) as
\[q_{i}(x)=f(l_{i}(x))-\alpha_{i}(x)b(x),\ i\in\mathbb{N}_{N},\]
where \(\alpha_{i}:I\rightarrow\mathbb{R}\) are continuous scaling functions satisfying \(\|\alpha_{i}\|_{\infty}<1\) and \(b:I\rightarrow\mathbb{R}\) be continuous function such that \(b\neq f\), \(b(x_{0})=f(x_{0})\) and \(b(x_{N})=f(x_{N})\). By Theorem 2.5, the corresponding IFS has a unique attractor \(G\), which is the graph of a continuous map, namely \(f^{\alpha}_{\Delta,b}:I\rightarrow\mathbb{R}\) that interpolates the given data set. The map is known as the \(\alpha\)-fractal function.
Though stationary fractal interpolation interpolates irregular functions very well, it depends on the local data points. To obtain fractal interpolation that is independent of the local data points and gains more flexibility, Massopust [16] introduced non-stationary fractal interpolation by taking a sequence of IFSs. In [19], the authors studied parameterized non-stationary FIFs in \(\mathcal{C}(I)\).
### Non-stationary \(\alpha\)-fractal Functions
Let \(f\in\mathcal{C}(I)\) and \(r\in\mathbb{N}\). We use the following notation:
\[\alpha_{r}:=(\alpha_{1,r},\alpha_{2,r},\ldots,\alpha_{N,r}),\ \ \alpha:=\{\alpha_{r}\}_{r\in\mathbb{N}}\ \text{ and }\ b:=\{b_{r}\}_{r\in\mathbb{N}}.\]
Let \(\alpha_{i,r}:I\to\mathbb{R}\) be continuous functions such that
\[\|\alpha\|_{\infty}=\sup\{\|\alpha_{r}\|_{\infty}:r\in\mathbb{N}\}<1,\ \ \text{where}\ \ \|\alpha_{r}\|_{\infty}=\sup\{\|\alpha_{i,r}\|_{\infty}:i\in\mathbb{N}_{N}\},\]
and \(b_{r}\in\mathcal{C}(I)\) such that
\[b_{r}\neq f,\ b_{r}(x_{0})=f(x_{0})\ \text{and}\ b_{r}(x_{N})=f(x_{N}). \tag{2.4}\]
To define a sequence of IFSs, we use the following sequence of continuous maps
\[\begin{cases}l_{i}(x)=a_{i}x+e_{i}=\frac{x_{i}-x_{i-1}}{x_{N}-x_{0}}x+\frac{x_ {N}x_{i-1}-x_{0}x_{i}}{x_{N}-x_{0}},\\ F_{i,r}(x,y)=\alpha_{i,r}(x)y+f(l_{i}(x))-\alpha_{i,r}(x)b_{r}(x),\ \ \ i\in\mathbb{N}_{N}.\end{cases} \tag{2.5}\]
For each \(i\in\mathbb{N}_{N}\), we define
\[W_{i,r}:\mathcal{M}\to I_{i}\times\mathbb{R}\ \text{by}\ W_{i,r}(x,y)=(l_{i}(x),F_{i,r}(x,y)).\]
Now we have a sequence of IFSs \(\mathcal{I}_{r}=\{\mathcal{M};W_{i,r}:i\in\mathbb{N}_{N}\}\). Let
\[\mathcal{C}_{f}(I)=\{g\in\mathcal{C}(I):g(x_{i})=f(x_{i}),\ i=0,N\}.\]
Then \(\mathcal{C}_{f}(I)\) is a complete metric space. For \(r\in\mathbb{N}\), we define a sequence of Read-Bajraktarevic (RB) operators \(T^{\alpha_{r}}:\mathcal{C}_{f}(I)\to\mathcal{C}_{f}(I)\) by
\[\begin{split}(T^{\alpha_{r}}g)(x)&=F_{i,r}({l_{i}}^ {-1}(x),\ g({l_{i}}^{-1}(x)))\\ &=f(x)+\alpha_{i,r}(Q_{i}(x))\cdot g(Q_{i}(x))-\alpha_{i,r}(Q_{i} (x))\cdot b_{r}(Q_{i}(x)),\end{split} \tag{2.6}\]
for \(x\in I_{i},\ i\in\mathbb{N}_{N}\), where \(Q_{i}(x)=l_{i}^{-1}(x)\).
The above operator is well defined, and for any function \(h\in\mathcal{C}_{f}(I)\), the sequence of backward trajectories \(\{T^{\alpha_{1}}\circ T^{\alpha_{2}}\circ\cdots\circ T^{\alpha_{r}}h\}\) converges to a map \(f^{\alpha}\in\mathcal{C}_{f}(I)\) [19]. The map \(f^{\alpha}\) is the unique map that satisfies the following equation
\[f^{\alpha}(x)=f(x)+\lim_{r\to\infty}\sum_{j=1}^{r}\alpha_{i,1}(Q_{i}(x))\ldots \alpha_{i,j}(Q_{i}^{j}(x))(f-b_{j})(Q_{i}^{j}(x)). \tag{2.7}\]
\(f^{\alpha}\) is called the non-stationary \(\alpha\)-fractal interpolation function.
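The construction above can be approximated numerically by truncating the backward trajectory after finitely many steps. The sketch below is a rough illustration with our own choices of germ function, uniform partition, constant scalings \(\alpha_{i,r}\), and base functions \(b_{r}\); function values between grid points are obtained by linear interpolation, so this is an approximation of (2.6)-(2.7) rather than an exact construction.

```python
import numpy as np

# --- setup: interval, partition, germ function, base functions, scalings ---
x0, xN, N = 0.0, 1.0, 4                       # I = [0, 1] with a uniform partition
knots = np.linspace(x0, xN, N + 1)
f = lambda x: np.sin(2 * np.pi * x) + x       # germ function (illustrative choice)

def b_r(r, x):
    """Base function b_r: the line through (x0, f(x0)) and (xN, f(xN)), as in (2.4)."""
    return f(x0) + (f(xN) - f(x0)) * (x - x0) / (xN - x0)

def alpha(i, r, x):
    """Scaling functions alpha_{i,r}: constants shrinking in r (illustrative choice)."""
    return 0.4 / (1 + 0.5 * r)

def rb_operator(r, g_vals, grid):
    """One application of T^{alpha_r} from (2.6), with g given by its grid values."""
    out = np.empty_like(grid)
    for k, x in enumerate(grid):
        i = min(np.searchsorted(knots, x, side="right"), N)   # x lies in I_i = [x_{i-1}, x_i]
        q = (x - knots[i - 1]) / (knots[i] - knots[i - 1])     # Q_i(x) = l_i^{-1}(x)
        q = x0 + q * (xN - x0)
        g_q = np.interp(q, grid, g_vals)                       # approximate g(Q_i(x))
        out[k] = f(x) + alpha(i, r, q) * (g_q - b_r(r, q))
    return out

grid = np.linspace(x0, xN, 801)
g = f(grid)                                    # any seed in C_f(I) works; we start from f itself
R = 12                                         # truncation depth of the backward trajectory
for r in range(R, 0, -1):                      # T^{alpha_1} o ... o T^{alpha_R}, innermost first
    g = rb_operator(r, g, grid)

print("max |f_alpha - f| on the grid:", np.max(np.abs(g - f(grid))))
```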
## 3. Non-stationary \(\alpha\)-fractal function on Lipschitz Space
Let \(g:I\to\mathbb{R}\) be a function. For \(0<d\leq 1\), define
\[Lip_{d}(g)\ =\ \sup\left\{\frac{|g(x)-g(y)|}{|x-y|^{d}}:x,y\in I\ \text{and}\ x\neq y\right\}.\]
The Lipschitz space is defined as \(Lip_{d}(I)\ =\ \{g:I\to\mathbb{R}\ :\ Lip_{d}(g)<\infty\}\). Define \(\|g\|_{d}=\max\{\|g\|_{\infty},Lip_{d}(g)\}\). It is routine to show that \((Lip_{d}(I),\|.\|_{d})\) is a Banach space. For more details of the Lipschitz functions in an arbitrary Banach space, please refer to [13]. Let
\[Lip_{d,f}(I)=\{g\in Lip_{d}(I):g(x_{0})=f(x_{0}),g(x_{N})=f(x_{N})\}.\]
Then being a closed subspace of the Banach space \(Lip_{d}(I)\), \(Lip_{d,f}(I)\) is a Banach space.
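As a small numerical companion to these definitions, the following sketch estimates \(Lip_{d}(g)\) and \(\|g\|_{d}\) for a function sampled on a finite grid; on a finite sample this only gives a lower bound for the supremum, and the example function is our own illustrative choice.

```python
import numpy as np
from itertools import combinations

def lip_seminorm(g, grid, d):
    """Lower estimate of Lip_d(g) = sup |g(x)-g(y)| / |x-y|^d over a finite grid."""
    vals = g(grid)
    ratios = [abs(vals[i] - vals[j]) / abs(grid[i] - grid[j]) ** d
              for i, j in combinations(range(len(grid)), 2)]
    return max(ratios)

grid = np.linspace(0.0, 1.0, 200)
g = lambda x: np.sqrt(x)            # sqrt is in Lip_{1/2}([0,1]) but not Lipschitz (d = 1)
for d in (0.5, 1.0):
    lip = lip_seminorm(g, grid, d)
    norm_d = max(np.max(np.abs(g(grid))), lip)
    print(f"d = {d}: estimated Lip_d(g) = {lip:.3f}, ||g||_d >= {norm_d:.3f}")
```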
**Theorem 3.1**.: _Let \(f\in Lip_{d}(I)\). For \(r\in\mathbb{N}\), let \(b_{r}\in Lip_{d,f}(I)\) be such that \(\|b\|_{d}:=\sup\limits_{r\in\mathbb{N}}\|b_{r}\|_{d}<\infty\) and let the scaling functions \(\alpha_{i,r}\in Lip_{d}(I)\) be chosen such that \(\max\limits_{i\in\mathbb{N}_{N}}\left(\frac{\|\alpha_{i,r}\|_{d}}{a_{i}^{d}}\right)<\frac{1}{2}\). We define a sequence of RB operators \(T^{\alpha_{r}}:Lip_{d,f}(I)\to Lip_{d,f}(I)\) by_
\[(T^{\alpha_{r}}g)(x)=f(x)+\alpha_{i,r}(Q_{i}(x))\cdot g(Q_{i}(x))-\alpha_{i,r} (Q_{i}(x))\cdot b_{r}(Q_{i}(x)), \tag{3.1}\]
_for \(x\in I_{i},\ i\in\mathbb{N}_{N}\). Then the following hold._
1. _The RB operator_ \(T^{\alpha_{r}}\) _defined in equation (_3.1_) is well defined on_ \(Lip_{d,f}(I)\)_._
2. _In fact,_ \(T^{\alpha_{r}}:Lip_{d,f}(I)\to Lip_{d,f}(I)\subset Lip_{d}(I)\) _is a contraction map._
3. _There exists a unique function_ \(f^{\alpha}_{b,Lip_{d}}\in Lip_{d,f}(I)\) _such that the sequence_ \(\{T^{\alpha_{1}}\circ T^{\alpha_{2}}\circ\cdots\circ T^{\alpha_{r}}g\}\) _converges to the map_ \(f^{\alpha}_{b,Lip_{d}}\) _for every_ \(g\in Lip_{d,f}(I)\)_._
Proof.:
1. The norm defined on \(Lip_{d,f}(I)\) is \(\|f\|_{d}=\max\{\|f\|_{\infty},Lip_{d}(f)\}\) so that, \(\|T^{\alpha_{r}}f\|_{d}=\max\{\|T^{\alpha_{r}}f\|_{\infty},Lip_{d}(T^{\alpha_ {r}}f)\}\). From the definition of RB operators, we have \[(T^{\alpha_{r}}g)(x)=f(x)+\alpha_{i,r}(Q_{i}(x))\cdot(g-b_{r})(Q_{i}(x)).\] Now, \[Lip_{d}(T^{\alpha_{r}}g) =\sup_{\begin{subarray}{c}x,y\in I\\ x\neq y\end{subarray}}\frac{|T^{\alpha_{r}}g(x)-T^{\alpha_{r}}g(y)|}{|x-y|^{d}}\] \[=\sup_{\begin{subarray}{c}x,y\in I_{i}\\ x\neq y\end{subarray}}\frac{|f(x)-f(y)+\alpha_{i,r}(Q_{i}(x)).(g-b_{r})(Q_{i}( x))-\alpha_{i,r}(Q_{i}(y)).(g-b_{r})(Q_{i}(y))|}{|x-y|^{d}}\] \[\leq\sup_{\begin{subarray}{c}x,y\in I\\ x\neq y\end{subarray}}\frac{|f(x)-f(y)|}{|x-y|^{d}}+\max_{i\in\mathbb{N}_{N}} \sup_{\begin{subarray}{c}x,y\in I_{i}\\ x\neq y\end{subarray}}\frac{|\alpha_{i,r}(Q_{i}(x)).[(g-b_{r})(Q_{i}(x))-(g-b_{ r})(Q_{i}(y))]|}{|x-y|^{d}}\] \[\qquad\qquad\qquad\qquad\qquad+\max_{i\in\mathbb{N}_{N}}\sup_{ \begin{subarray}{c}x,y\in I_{i}\\ x\neq y\end{subarray}}\frac{|(g-b_{r})(Q_{i}(y))[\alpha_{i,r}(Q_{i}(x))-\alpha_ {i,r}(Q_{i}(y))]|}{|x-y|^{d}}\] \[\leq Lip_{d}(f)+\max_{i\in\mathbb{N}_{N}}\|\alpha_{i,r}\|_{ \infty}\sup_{\begin{subarray}{c}x,y\in I_{i}\\ x\neq y\end{subarray}}\frac{|(g(Q_{i}(x))-g(Q_{i}(y)))|+|(b_{r}(Q_{i}(x))-b_{r }(Q_{i}(y)))|}{|x-y|^{d}}\] \[\qquad\qquad\qquad+\|g-b_{r}\|_{\infty}\max_{i\in\mathbb{N}_{N}} \sup_{\begin{subarray}{c}x,y\in I_{i}\\ x\neq y\end{subarray}}\frac{|(\alpha_{i,r}(Q_{i}(x))-\alpha_{i,r}(Q_{i}(y)))|}{| x-y|^{d}}\] \[=Lip_{d}(f)+\max_{i\in\mathbb{N}_{N}}\|\alpha_{i,r}\|_{\infty} \sup_{\begin{subarray}{c}x,y\in I_{i}\\ x\neq y\end{subarray}}\frac{|(g(Q_{i}(x))-g(Q_{i}(y)))|+|(b_{r}(Q_{i}(x))-b_{r }(Q_{i}(y)))|}{a_{i}^{d}|Q_{i}(x)-Q_{i}(y)|^{d}}\] \[\qquad\qquad\qquad+\|g-b_{r}\|_{\infty}\max_{i\in\mathbb{N}_{N}} \sup_{\begin{subarray}{c}x,y\in I_{i}\\ x\neq y\end{subarray}}\frac{|(\alpha_{i,r}(Q_{i}(x))-\alpha_{i,r}(Q_{i}(y)))|}{a_{ i}^{d}|Q_{i}(x)-Q_{i}(y)|^{d}}\] \[=Lip_{d}(f)+\max_{i\in\mathbb{N}_{N}}\left(\frac{\|\alpha_{i,r}\|_{ \infty}}{a_{i}^{d}}\right)\sup_{\begin{subarray}{c}\tilde{x},\tilde{y}\in I \\ \tilde{x}\neq\tilde{y}\end{subarray}}\frac{|g(\tilde{x})-g(\tilde{y})|+|b_{r}( \tilde{x})-b_{r}(\tilde{y}))|}{|\tilde{x}-\tilde{y}|^{d}}\] \[\qquad\qquad\qquad+\|g-b_{r}\|_{\infty}\max_{i\in\mathbb{N}_{N}} \sup_{\begin{subarray}{c}\tilde{x},\tilde{y}\in I\\ \tilde{x}\neq\tilde{y}\end{subarray}}\frac{|(\alpha_{i,r}(\tilde{x})-\alpha_{i,r }(\tilde{y}))|}{a_{i}^{d}|\tilde{x}-\tilde{y}|^{d}}\] \[\leq Lip_{d}(f)+\max_{i\in\mathbb{N}_{N}}\left(\frac{\|\alpha_{i,r}\|_{ \infty}}{a_{i}^{d}}\right)(Lip_{d}(g)+Lip_{d}(b_{r}))+\max_{i\in\mathbb{N}_{N}} \left(\frac{Lip_{d}(\alpha_{i,r})}{a_{i}^{d}}\right)(\|g\|_{\infty}+\|b_{r}\|_{ \infty})\] \[\leq Lip_{d}(f)+\max_{i\in\mathbb{N}_{N}}\left(\frac{\|\alpha_{i,r}\|_{ d}}{a_{i}^{d}}\right)(Lip_{d}(g)+Lip_{d}(b_{r})+\|g\|_{\infty}+\|b_{r}\|_{ \infty}).\] As \(f,g,b_{r}\in Lip_{d,f}(I)\), the above estimation ensures that \(Lip_{d}(T^{\alpha_{r}}g)<\infty\) and so that \(T^{\alpha_{r}}g\in Lip_{d}(I)\). Also \(T^{\alpha_{r}}g(x_{0})=f(x_{0})\) and \(T^{\alpha_{r}}g(x_{N})=f(x_{N})\). Hence \(T^{\alpha_{r}}g\in Lip_{d,f}(I)\) and the RB operator \(T^{\alpha_{r}}\) defined in equation (2.6) is well defined on \(Lip_{d,f}(I)\).
2. For \(x\in I_{i}\), \[|(T^{\alpha_{r}}g_{1}-T^{\alpha_{r}}g_{2})(x)| =|\alpha_{i,r}(Q_{i}(x))\|(g_{1}-g_{2})(Q_{i}(x))|\] \[\leq\max_{i\in\mathbb{N}_{N}}(\|\alpha_{i,r}\|_{\infty})\|g_{1}-g_{ 2}\|_{\infty},\]
and hence (3.2) \[\|(T^{\alpha_{r}}g_{1}-T^{\alpha_{r}}g_{2})\|_{\infty}\leq\max_{i\in\mathbb{N}_{N }}(\|\alpha_{i,r}\|_{\infty})\|g_{1}-g_{2}\|_{\infty}.\] Using similar steps in the estimation of \(Lip_{d}(T^{\alpha_{r}}g)\), we obtain (3.3) \[Lip_{d}(T^{\alpha_{r}}g_{1}-T^{\alpha_{r}}g_{2})\leq\max_{i\in\mathbb{N}_{N }}\left(\frac{\|\alpha_{i,r}\|_{d}}{a_{i}^{d}}\right)(Lip_{d}(g_{1}-g_{2})+\|g _{1}-g_{2}\|_{\infty}).\] Combining (3.2) and (3.3), we get \[\|(T^{\alpha_{r}}g_{1}-T^{\alpha_{r}}g_{2})\|_{d}\] \[=\max\left\{\|(T^{\alpha_{r}}g_{1}-T^{\alpha_{r}}g_{2})\|_{ \infty},Lip_{d}(T^{\alpha_{r}}g_{1}-T^{\alpha_{r}}g_{2})\right\}\] \[\leq\max\left\{\max_{i\in\mathbb{N}_{N}}(\|\alpha_{i,r}\|_{\infty })\|g_{1}-g_{2}\|_{\infty},\max_{i\in\mathbb{N}_{N}}\left(\frac{\|\alpha_{i,r} \|_{d}}{a_{i}^{d}}\right)(Lip_{d}(g_{1}-g_{2})+\|g_{1}-g_{2}\|_{\infty})\right\}\] \[\leq\max_{i\in\mathbb{N}_{N}}\left(\frac{\|\alpha_{i,r}\|_{d}}{a_ {i}^{d}}\right)\max\left\{\|g_{1}-g_{2}\|_{d},2\|g_{1}-g_{2}\|_{d}\right\}\] \[=2\max_{i\in\mathbb{N}_{N}}\left(\frac{\|\alpha_{i,r}\|_{d}}{a_{i }^{d}}\right)\|g_{1}-g_{2}\|_{d}.\] By assumptions on the sequence of scaling functions, we can ensure that \(T^{\alpha_{r}}\) is a contraction.
3. Let \(g\in Lip_{d}(I)\) be an arbitrary function. We have to check if the sequence \(\{\|T^{\alpha_{r}}g-g\|_{d}\}\) is bounded. Now, \[Lip_{d}(T^{\alpha_{r}}g-g) =\sup_{\begin{subarray}{c}x,y\in I\\ x\neq y\end{subarray}}\frac{|(T^{\alpha_{r}}g-g)(x)-(T^{\alpha_{r}}g-g)(y)|}{| x-y|^{d}}\] \[\leq\sup_{\begin{subarray}{c}x,y\in I\\ x\neq y\end{subarray}}\frac{|T^{\alpha_{r}}g(x)-T^{\alpha_{r}}g(y)|+|g(x)-g(y)| }{|x-y|^{d}}\] \[=\sup_{\begin{subarray}{c}x,y\in I\\ x\neq y\end{subarray}}\frac{|T^{\alpha_{r}}g(x)-T^{\alpha_{r}}g(y)|}{|x-y|^{d} }+\sup_{\begin{subarray}{c}x,y\in I\\ x\neq y\end{subarray}}\frac{|g(x)-g(y)|}{|x-y|^{d}}\] \[=Lip_{d}(T^{\alpha_{r}}g)+Lip_{d}(g)\] \[\leq Lip_{d}(f)+\max_{i\in\mathbb{N}_{N}}\left(\frac{\|\alpha_{i,r}\|_{d}}{a_{i}^{d}}\right)(Lip_{d}(g)+Lip_{d}(b_{r})+\|g\|_{\infty}+\|b_{r }\|_{\infty})+Lip_{d}(g)\] \[\leq Lip_{d}(f)+\left(1+\max_{i\in\mathbb{N}_{N}}\left(\frac{\| \alpha_{i}\|_{d}}{a_{i}^{d}}\right)\right)Lip_{d}(g)+2\max_{i\in\mathbb{N}_{N }}\left(\frac{\|\alpha_{i}\|_{d}}{a_{i}^{d}}\right)\|b\|_{d}\] (i) where \(\|\alpha_{i}\|_{d}:=\sup_{r\in\mathbb{N}}\|\alpha_{i,r}\|_{d}\). Also, \[|(T^{\alpha_{r}}g-g)(x)| =|(f-g)(x)|+|\alpha_{i,r}(Q_{i}(x))|\cdot|(g-b_{r})(Q_{i}(x))|\] \[\leq\|f-g\|_{\infty}+\|\alpha\|_{\infty}\|g-b_{r}\|_{\infty}\] \[\leq\|f-g\|_{\infty}+\|\alpha\|_{\infty}(\|g\|_{\infty}+\|b_{r}\| _{\infty})\] \[\leq\|f-g\|_{d}+\|\alpha\|_{\infty}(\|g\|_{d}+\|b_{r}\|_{d}).\] Hence, (ii) \[\|T^{\alpha_{r}}g-g\|_{\infty}\leq\|f-g\|_{d}+\|\alpha\|_{\infty}(\|g\|_{d}+\| b\|_{d}).\] Combining (i) and (ii), we get that the bound of \(\|T^{\alpha_{r}}g-g\|_{d}\) is independent of \(r\). Using Proposition 2.4, \(\exists\) a unique \(f_{b,Lip_{d}}^{\alpha}\in Lip_{d,f}(I)\) such that \(f_{b,Lip_{d}}^{\alpha}=\lim_{r\to\infty}T^{\alpha_{1}}\)\(o\)\(T^{\alpha_{2}}\)\(o\)\(\ldots\)\(o\)\(T^{\alpha_{r}}g\) for any \(g\in Lip_{d,f}(I)\). This completes the proof of the theorem.
**Definition 3.2**.: The function \(f^{\alpha}_{b,Lip_{d}}\) is called a Lipschitz non-stationary \(\alpha\)-fractal function with respect to \(f,\alpha,b\) and the partition \(\Delta\).
_Remark 3.3_.: As each \(T^{\alpha_{r}}\) is a contraction, there exists a unique stationary \(\alpha\)-fractal function \(f^{\alpha}_{r}\) such that \(T^{\alpha_{r}}(f^{\alpha}_{r})=f^{\alpha}_{r}\) and it satisfies the functional equation:
\[f^{\alpha}_{r}(x)=F_{i,r}\ (\ Q_{i}(x),\ f^{\alpha}_{r}(Q_{i}(x))))\ \ \forall\ \ x\in I_{i},\]
where \(Q_{i}(x):={l_{i}}^{-1}(x)\). That is,
\[f^{\alpha}_{r}(x)=f(x)+\alpha_{i,r}(Q_{i}(x)).f^{\alpha}_{r}(Q_{i}(x))-\alpha_ {i,r}(Q_{i}(x))b_{r}(Q_{i}(x)).\]
## 4. A Nonlinear Fractal Operator on \(Lip_{d}(I)\)
Suppose \(L_{r}:Lip_{d}(I)\to Lip_{d}(I)\) is a sequence of operators such that \(\|L\|_{\infty}:=\sup\limits_{r\in\mathbb{N}}\|L_{r}\|_{\infty}<\infty\) and satisfy \((L_{r}(f))(x_{0})=f(x_{0})\) and \((L_{r}(f))(x_{N})=f(x_{N})\). We set \(b_{r}=L_{r}f\). The corresponding non-stationary \(\alpha\)-fractal function will be denoted by \(f^{\alpha}_{b}\).
**Definition 4.1**.: Let \(f\in Lip_{d}(I)\) and \(\Delta\) be fixed. We define the \(\alpha\)-fractal operator \(\mathfrak{F}^{\alpha}_{b}\equiv\mathfrak{F}^{\alpha}_{\Delta,b}\) as
\[\mathfrak{F}^{\alpha}_{b}:Lip_{d}(I)\subset\mathcal{C}(I)\to\mathcal{C}(I),\ \ \mathfrak{F}^{\alpha}_{b}(f)=f^{\alpha}_{b}.\]
_Remark 4.2_.: In the case of a stationary fractal function, a similar construction is well studied in the literature [17, 18]. If we take \(\alpha_{i,r}=\alpha_{i}\forall\ r\in\mathbb{N},i\in\mathbb{N}_{N}\) and \(b_{r}=Lf\ \forall\ r\in\mathbb{N}\), where \(L:Lip_{d}(I)\to Lip_{d}(I)\) is an operator such that \((L(f))(x_{0})=f(x_{0})\) and \((L(f))(x_{N})=f(x_{N})\). Then the non-stationary \(\alpha\)-fractal function will coincide with the stationary one.
Our next concern is to study the error approximation in the non-stationary perturbation process. The error bound in the different fractal approximations is well-studied in the stationary case [18].
**Proposition 4.3**.: Let \(f^{\alpha}_{b}\) be the non-stationary FIF corresponding to the seed function \(f\in Lip_{d}(I)\). Then we have the following error bound
\[\|f^{\alpha}_{b}-f\|_{\infty}\leq\frac{\|\alpha\|_{\infty}}{1-\|\alpha\|_{ \infty}}\sup\limits_{r\in\mathbb{N}}\{\|f-L_{r}(f)\|_{\infty}\}. \tag{4.1}\]
Proof.: The proof is similar to that of Theorem 4.1 of [19].
_Corollary 4.4_.: Let \(f\in Lip_{d}(I)\) be the germ function and \(f^{\alpha}_{b}\) be the corresponding non-stationary FIF. Then for any \(j\in\mathbb{N}\), we have the following inequality
\[\|f^{\alpha}_{b}-L_{j}(f)\|_{\infty}\leq\frac{1}{1-\|\alpha\|_{\infty}}\sup \limits_{r\in\mathbb{N}}\{\|f-L_{r}(f)\|_{\infty}\}.\]
Proof.: Let \(j\in\mathbb{N}\). Using inequality (4.1), we get
\[\|f^{\alpha}_{b}-L_{j}(f)\|_{\infty} =\|f^{\alpha}_{b}-f+f-L_{j}(f)\|_{\infty}\] \[\leq\|f^{\alpha}_{b}-f\|_{\infty}+\|f-L_{j}(f)\|_{\infty}\] \[\leq\frac{\|\alpha\|_{\infty}}{1-\|\alpha\|_{\infty}}\sup\limits_ {r\in\mathbb{N}}\{\|f-L_{r}(f)\|_{\infty}\}+\|f-L_{j}(f)\|_{\infty}\] \[\leq\frac{1}{1-\|\alpha\|_{\infty}}\sup\limits_{r\in\mathbb{N}}\{ \|f-L_{r}(f)\|_{\infty}\}.\]
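For a quick feel for these bounds, take illustrative values (our own, not tied to any particular construction) \(\|\alpha\|_{\infty}=0.3\) and \(\sup_{r\in\mathbb{N}}\|f-L_{r}(f)\|_{\infty}=0.1\). Then (4.1) and Corollary 4.4 give
\[\|f^{\alpha}_{b}-f\|_{\infty}\leq\frac{0.3}{1-0.3}\cdot 0.1\approx 0.043,\qquad\|f^{\alpha}_{b}-L_{j}(f)\|_{\infty}\leq\frac{1}{1-0.3}\cdot 0.1\approx 0.143\quad\forall\ j\in\mathbb{N},\]
so the fractal perturbation stays uniformly close to both the germ function and every base function.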
Based on the same arguments used in [19], we know that if \(L_{r}\) is linear for every \(r\in\mathbb{N}\), then \(\mathfrak{F}^{\alpha}_{b}\) is a linear operator. For the record, we state this in the next proposition:
**Proposition 4.5**.: The fractal operator \(\mathfrak{F}^{\alpha}_{b}\) is a linear operator, provided that the sequence of operators \(L_{r}:Lip_{d}(I)\to Lip_{d}(I)\) are linear for each \(r\in\mathbb{N}\).
Unless otherwise specified, we do not assume that \(L_{r}\) is linear. As a result, the fractal operator is typically nonlinear (not necessarily linear). In contrast to the conventional setting of fractal operators in the literature [17, 18], the present results drop the general assumption of linearity and boundedness of the maps \(L_{r}\). Consequently, the research presented here may uncover possible applications of the fractal operator within the theory of unbounded and nonlinear operators.
Let us now collect some standard definitions of operators of interest in nonlinear functional analysis and perturbation theory. Let \(A,B\) be two normed linear spaces.
**Definition 4.6**.: If an operator \(\mathcal{T}:A\to B\) maps bounded sets to bounded sets, then it is said to be topologically bounded.
**Definition 4.7**.: Let \(\mathcal{T}_{1}:D(\mathcal{T}_{1})\subset A\to B\) and \(\mathcal{T}_{2}:D(\mathcal{T}_{2})\subset A\to B\) be two operators such that \(D(\mathcal{T}_{2})\subset D(\mathcal{T}_{1})\). If \(\mathcal{T}_{1},\mathcal{T}_{2}\) satisfy the following inequality
\[\|\mathcal{T}_{1}(u)\|\leq t_{1}\|u\|+t_{2}\|\mathcal{T}_{2}(u)\|\;\;\forall \;u\in D(\mathcal{T}_{2}),\]
where \(t_{1}\) and \(t_{2}\) are some non-negative constants, then \(\mathcal{T}_{1}\) is said to be relatively (norm) bounded with respect to \(\mathcal{T}_{2}\) or simply \(\mathcal{T}_{2}\)-bounded. The \(\mathcal{T}_{2}\)-bound of \(\mathcal{T}_{1}\) is defined as the infimum of all possible values of \(t_{2}\) satisfying the aforementioned inequality.
**Definition 4.8**.: An operator \(\mathcal{T}:A\to B\) is said to be Lipschitz if there exists a constant \(q>0\) such that
\[\|\mathcal{T}(u)-\mathcal{T}(v)\|\leq q\|u-v\|\;\forall\;u,v\in A.\]
For a Lipschitz operator \(\mathcal{T}:A\to B\), the Lipschitz constant of \(\mathcal{T}\) is denoted by \(|\mathcal{T}|\).
**Definition 4.9**.: Let \(\mathcal{T}_{1}:D(\mathcal{T}_{1})\subset A\to B\) and \(\mathcal{T}_{2}:D(\mathcal{T}_{2})\subset A\to B\) be two operators such that \(D(\mathcal{T}_{2})\subset D(\mathcal{T}_{1})\). If \(\mathcal{T}_{1},\mathcal{T}_{2}\) satisfy the following inequality
\[\|\mathcal{T}_{1}(u)-\mathcal{T}_{1}(v)\|\leq M_{1}\|u-v\|+M_{2}\|\mathcal{T}_ {2}(u)-\mathcal{T}_{2}(v)\|\;\forall u,v\in D(\mathcal{T}_{1}),\]
where \(M_{1}\) and \(M_{2}\) are non-negative constants, then we say that \(\mathcal{T}_{1}\) is relatively Lipschitz with respect to \(\mathcal{T}_{2}\) or simply \(\mathcal{T}_{2}\)-Lipschitz. The infimum of all such values of \(M_{2}\) is called the \(\mathcal{T}_{2}\)-Lipschitz constant of \(\mathcal{T}_{1}\).
**Proposition 4.10**.: The non-stationary fractal operator \(\mathfrak{F}_{b}^{\alpha}:Lip_{d}(I)\to\mathcal{C}(I)\) is continuous whenever \(L_{r}:Lip_{d}(I)\to Lip_{d}(I)\) is continuous for each \(r\in\mathbb{N}\).
Proof.: Let \((f_{n})_{n\in\mathbb{N}}\) be a sequence in \(Lip_{d}(I)\) converging to \(f\in Lip_{d}(I)\). We have,
\[f_{b}^{\alpha}(x)=f(x)+\lim_{r\to\infty}\sum_{j=1}^{r}\alpha_{i,1}(Q_{i}(x)) \ldots\alpha_{i,j}(Q_{i}^{j}(x))(f-L_{j}f)(Q_{i}^{j}(x)).\]
Now,
\[|(f_{n})_{b}^{\alpha}(x)-f_{b}^{\alpha}(x)|\] \[\leq|f_{n}(x)-f(x)|+|\lim_{r\to\infty}\sum_{j=1}^{r}\alpha_{i,1}( Q_{i}(x))\ldots\alpha_{i,j}(Q_{i}^{j}(x))(f_{n}-f-L_{j}f_{n}+L_{j}f)(Q_{i}^{j}(x))|\] \[\leq\|f_{n}-f\|_{\infty}+\lim_{r\to\infty}\sum_{j=1}^{r}\|\alpha \|_{\infty}^{j}(\|f_{n}-f\|_{\infty}+\|L_{j}f_{n}-L_{j}f\|_{\infty}).\]
Since the inequality holds for all \(x\in I\), we have
\[\|(f_{n})_{b}^{\alpha}-f_{b}^{\alpha}\|\leq\|f_{n}-f\|_{\infty}+\lim_{r\to \infty}\sum_{j=1}^{r}\|\alpha\|_{\infty}^{j}(\|f_{n}-f\|_{\infty}+\|L_{j}f_{n}- L_{j}f\|_{\infty}).\]
As the sequence \((f_{n})\) converges to \(f\), we obtain the desired result using the continuity of \(L_{j}\), \(j\in\mathbb{N}\).
**Proposition 4.11**.: If for each \(r\in\mathbb{N}\), the operator \(L_{r}:Lip_{d}(I)\to Lip_{d}(I)\) is a Lipschitz operator with Lipschitz constant \(|L_{r}|\), then the non-stationary fractal operator \(\mathfrak{F}_{b}^{\alpha}:Lip_{d}(I)\to\mathcal{C}(I)\) is also a Lipschitz operator, and \(|\mathfrak{F}_{b}^{\alpha}|\leq\dfrac{1+|L|\cdot\|\alpha\|_{\infty}}{1-\|\alpha\|_{\infty}}\), where \(|L|:=\sup_{r\in\mathbb{N}}|L_{r}|<\infty\).
Proof.: Let \(f,g\in Lip_{d}(I)\). Then
\[f_{b}^{\alpha}(x) =f(x)+\lim_{r\to\infty}\sum_{j=1}^{r}\alpha_{i,1}(Q_{i}(x))\ldots \alpha_{i,j}(Q_{i}^{j}(x))(f-L_{j}f)(Q_{i}^{j}(x)),\] \[g_{b}^{\alpha}(x) =g(x)+\lim_{r\to\infty}\sum_{j=1}^{r}\alpha_{i,1}(Q_{i}(x))\ldots \alpha_{i,j}(Q_{i}^{j}(x))(g-L_{j}g)(Q_{i}^{j}(x)).\]
Therefore,
\[|f_{b}^{\alpha}(x)-g_{b}^{\alpha}(x)| =|f(x)+\lim_{r\to\infty}\sum_{j=1}^{r}\alpha_{i,1}(Q_{i}(x))\ldots \alpha_{i,j}(Q_{i}^{j}(x))(f-L_{j}f)(Q_{i}^{j}(x))\] \[\qquad\quad-g(x)-\lim_{r\to\infty}\sum_{j=1}^{r}\alpha_{i,1}(Q_{i} (x))\ldots\alpha_{i,j}(Q_{i}^{j}(x))(g-L_{j}g)(Q_{i}^{j}(x))|\] \[\leq|f(x)-g(x)|+|\lim_{r\to\infty}\sum_{j=1}^{r}\alpha_{i,1}(Q_{i }(x))\ldots\alpha_{i,j}(Q_{i}^{j}(x))(f-g-L_{j}f+L_{j}g)(Q_{i}^{j}(x))|\] \[\leq\|f-g\|_{\infty}+\lim_{r\to\infty}\sum_{j=1}^{r}\|\alpha\|_{ \infty}^{j}(\|f-g\|_{\infty}+\|L_{j}f-L_{j}g\|_{\infty})\] \[\leq\|f-g\|_{\infty}+\lim_{r\to\infty}\sum_{j=1}^{r}\|\alpha\|_{ \infty}^{j}(\|f-g\|_{\infty}+|L_{j}|\cdot\|f-g\|_{\infty})\] \[\leq\left(1+\sum_{j=1}^{\infty}\|\alpha\|_{\infty}^{j}(1+|L|) \right)\cdot\|f-g\|_{\infty}\] \[=\left(1+\frac{\|\alpha\|_{\infty}}{1-\|\alpha\|_{\infty}}(1+|L|) \right)\cdot\|f-g\|_{\infty}\] \[=\frac{1+|L|\cdot\|\alpha\|_{\infty}}{1-\|\alpha\|_{\infty}}\cdot \|f-g\|_{\infty}.\]
This holds for every \(x\in I\), hence
\[\|\mathfrak{F}_{b}^{\alpha}(f)-\mathfrak{F}_{b}^{\alpha}(g)\|=\|f_{b}^{\alpha }-g_{b}^{\alpha}\|\leq\frac{1+|L|\cdot\|\alpha\|_{\infty}}{1-\|\alpha\|_{ \infty}}\|f-g\|_{\infty}.\]
This concludes the proof.
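As an illustrative instance of Proposition 4.11 (with numbers chosen by us), if \(\|\alpha\|_{\infty}=0.3\) and \(|L|=2\), then
\[|\mathfrak{F}_{b}^{\alpha}|\leq\frac{1+2\cdot 0.3}{1-0.3}=\frac{1.6}{0.7}\approx 2.29.\]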
**Proposition 4.12**.: The non-stationary fractal operator \(\mathfrak{F}_{b}^{\alpha}:Lip_{d}(I)\to\mathcal{C}(I)\) is topologically bounded provided that \(L_{r}:Lip_{d}(I)\to Lip_{d}(I)\) is uniformly bounded.
Proof.: Let \(f\) be a function in \(Lip_{d}(I)\). We have,
\[|f_{b}^{\alpha}(x)| \leq|f(x)|+|\lim_{r\to\infty}\sum_{j=1}^{r}\alpha_{i,1}(Q_{i}(x) )\ldots\alpha_{i,j}(Q_{i}^{j}(x))(f-L_{j}f)(Q_{i}^{j}(x))|\] \[\leq\|f\|_{\infty}+\lim_{r\to\infty}\sum_{j=1}^{r}\|\alpha\|_{ \infty}^{j}\|f-L_{j}f\|_{\infty}\] \[\leq\|f\|_{\infty}+\lim_{r\to\infty}\sum_{j=1}^{r}\|\alpha\|_{ \infty}^{j}(\|f\|_{\infty}+\|L_{j}f\|_{\infty})\] \[\leq(1+\sum_{j=1}^{\infty}\|\alpha\|_{\infty}^{j})\|f\|_{\infty}+ \sum_{j=1}^{\infty}\|\alpha\|_{\infty}^{j}\|L_{j}f\|_{\infty}\] \[=\frac{1}{1-\|\alpha\|_{\infty}}\|f\|_{\infty}\ +\sum_{j=1}^{\infty}\| \alpha\|_{\infty}^{j}\|L_{j}f\|_{\infty}.\]
Hence
\[\|\mathfrak{F}_{b}^{\alpha}(f)\|_{\infty}\ =\ \|f_{b}^{\alpha}\|_{\infty}\ \leq\ \frac{1}{1-\|\alpha\|_{\infty}}\|f\|_{\infty}\ +\sum_{j=1}^{\infty}\|\alpha\|_{\infty}^{j}\|L_{j}f\|_{\infty}.\]
Since \(L_{j}(j\in\mathbb{N})\) is uniformly bounded, it follows from the above inequality that the operator \(\mathfrak{F}_{b}^{\alpha}\) is topologically bounded.
In the remaining propositions of this section, we assume that \(L_{r}\) is a sequence of linear operators such that there exists a linear operator \(L\) satisfying \(\|Lf\|_{\infty}=\sup_{r\in\mathbb{N}}\|L_{r}f\|_{\infty}\). With this assumption, we proceed to the following proposition.
**Proposition 4.13**.: The non-stationary fractal operator \(\mathfrak{F}_{b}^{\alpha}:Lip_{d}(I)\to\mathcal{C}(I)\) is relatively Lipschitz with respect to \(L\), with the \(L\)-Lipschitz constant of \(\mathfrak{F}_{b}^{\alpha}\) not exceeding \(\frac{\|\alpha\|_{\infty}}{1-\|\alpha\|_{\infty}}\).
Proof.: Let \(f,g\in Lip_{d}(I)\). Then the functions will satisfy the following equation
\[f_{b}^{\alpha}(x)=f(x)+\lim_{r\to\infty}\sum_{j=1}^{r}\alpha_{i,1}(Q_{i}(x)) \ldots\alpha_{i,j}(Q_{i}^{j}(x))(f-L_{j}f)(Q_{i}^{j}(x)).\]
Now,
\[|f_{b}^{\alpha}(x)-g_{b}^{\alpha}(x)|\] \[=|f(x)+\lim_{r\to\infty}\sum_{j=1}^{r}\alpha_{i,1}(Q_{i}(x)) \ldots\alpha_{i,j}(Q_{i}^{j}(x))(f-L_{j}f)(Q_{i}^{j}(x))\] \[\qquad\qquad-g(x)-\lim_{r\to\infty}\sum_{j=1}^{r}\alpha_{i,1}(Q_{ i}(x))\ldots\alpha_{i,j}(Q_{i}^{j}(x))(g-L_{j}g)(Q_{i}^{j}(x))|\] \[\leq|f(x)-g(x)|+|\lim_{r\to\infty}\sum_{j=1}^{r}\alpha_{i,1}(Q_{ i}(x))\ldots\alpha_{i,j}(Q_{i}^{j}(x))(f-g-L_{j}f+L_{j}g)(Q_{i}^{j}(x))|\] \[\leq\|f-g\|_{\infty}+\lim_{r\to\infty}\sum_{j=1}^{r}\|\alpha\|_{ \infty}^{j}(\|f-g\|_{\infty}+\|L_{j}f-L_{j}g\|_{\infty})\] \[\leq\|f-g\|_{\infty}+\lim_{r\to\infty}\sum_{j=1}^{r}\|\alpha\|_{ \infty}^{j}(\|f-g\|_{\infty}+\|L_{j}(f-g)\|_{\infty})\] \[\leq\left(1+\sum_{j=1}^{\infty}\|\alpha\|_{\infty}^{j}\right)\|f- g\|_{\infty}+\left(\sum_{j=1}^{\infty}\|\alpha\|_{\infty}^{j}\right)\|L(f-g)\|_{\infty}\] \[=\left(\frac{1}{1-\|\alpha\|_{\infty}}\right)\|f-g\|_{\infty}+ \left(\frac{\|\alpha\|_{\infty}}{1-\|\alpha\|_{\infty}}\right)\|Lf-Lg\|_{ \infty}.\]
The above inequality holds for all \(x\), hence
\[\|\mathfrak{F}_{b}^{\alpha}(f)-\mathfrak{F}_{b}^{\alpha}(g)\|_{\infty}=\|f_{ b}^{\alpha}-g_{b}^{\alpha}\|_{\infty}\leq\left(\frac{1}{1-\|\alpha\|_{\infty}} \right)\|f-g\|_{\infty}+\left(\frac{\|\alpha\|_{\infty}}{1-\|\alpha\|_{ \infty}}\right)\|Lf-Lg\|_{\infty}.\]
This completes the proof.
**Proposition 4.14**.: The non-stationary fractal operator \(\mathfrak{F}_{b}^{\alpha}:Lip_{d}(I)\to\mathcal{C}(I)\) is relatively bounded with respect to \(L\) with \(L\)-bound is less than or equal to \(\frac{\|\alpha\|_{\infty}}{1-\|\alpha\|_{\infty}}\).
Proof.: Let \(f\) be an arbitrary function in \(Lip_{d}(I)\). From (2.6), we have
\[f_{b}^{\alpha}(x)=f(x)+\lim_{r\to\infty}\sum_{j=1}^{r}\alpha_{i,1}(Q_{i}(x)) \ldots\alpha_{i,j}(Q_{i}^{j}(x))(f-L_{j}f)(Q_{i}^{j}(x)).\]
Therefore,
\[|f_{b}^{\alpha}(x)| =|f(x)|+|\lim_{r\to\infty}\sum_{j=1}^{r}\alpha_{i,1}(Q_{i}(x))\dots \alpha_{i,j}(Q_{i}^{j}(x))(f-L_{j}f)(Q_{i}^{j}(x))|\] \[\leq\|f\|_{\infty}+\lim_{r\to\infty}\sum_{j=1}^{r}\|\alpha\|_{ \infty}^{j}\|f-L_{j}f\|_{\infty}\] \[\leq\|f\|_{\infty}+\lim_{r\to\infty}\sum_{j=1}^{r}\|\alpha\|_{ \infty}^{j}(\|f\|_{\infty}+\|L_{j}f\|_{\infty})\] \[\leq\|f\|_{\infty}+\lim_{r\to\infty}\sum_{j=1}^{r}\|\alpha\|_{ \infty}^{j}(\|f\|_{\infty}+\|Lf\|_{\infty})\] \[=\|f\|_{\infty}+\frac{\|\alpha\|_{\infty}}{1-\|\alpha\|_{\infty}} (\|f\|_{\infty}+\|Lf\|_{\infty})\] \[=\frac{1}{1-\|\alpha\|_{\infty}}\|f\|_{\infty}+\frac{\|\alpha\|_ {\infty}}{1-\|\alpha\|_{\infty}}\|Lf\|_{\infty}.\]
The aforementioned inequality holds for all \(x\), hence
\[\|\mathfrak{F}_{b}^{\alpha}(f)\|_{\infty}\ =\ \|f_{b}^{\alpha}\|_{\infty}\ \leq\ \frac{1}{1-\|\alpha\|_{\infty}}\|f\|_{\infty}\ +\ \frac{\|\alpha\|_{\infty}}{1-\|\alpha\|_{\infty}}\|Lf\|_{\infty}. \tag{4.2}\]
This proves our claim.
## 5. Stability and sensitivity analysis
Let us now investigate the stability of the FIF with variable parameters produced by the IFS \(\mathcal{I}_{r}=\{\mathcal{M};W_{i,r}(x,y)=(l_{i}(x),F_{i,r}(x,y)),i\in\mathbb{N}_{N}\}\), where the maps are defined in (2.5) and \(\mathcal{M}=I\times[k_{1},k_{2}]\subset\mathbb{R}^{2}\). Similar results for the stationary case can be found in [26]. Let \(\bar{\mathbf{D}}:=\{(x_{i},\bar{y}_{i}):i\in\mathbb{N}_{N}^{0}\}\) be another set of interpolation points in \(\mathcal{M}\), obtained by perturbing the ordinates of the points in \(\mathbf{D}:=\{(x_{i},y_{i})\in I\times[k_{1},k_{2}]:i\in\mathbb{N}_{N}^{0}\}\). For the data set \(\bar{\mathbf{D}}\), an IFS can be defined by \(\bar{\mathcal{I}}_{r}=\{\mathcal{M};\bar{W}_{i,r}(x,y)=(l_{i}(x),\bar{F}_{i,r}(x,y)),i\in\mathbb{N}_{N}\}\), where \(l_{i}(x),\ i\in\mathbb{N}_{N},\) are the maps defined in (2.5), and \(\bar{F}_{i,r}\) are defined as
\[\bar{F}_{i,r}=\alpha_{i,r}(x)y+\hat{f}(l_{i}(x))-\alpha_{i,r}(x)\hat{b}_{r}(x ),\ \ i\in\mathbb{N}_{N},\ \ r\in\mathbb{N}. \tag{5.1}\]
Here we consider the base functions \(b_{r}\) and the perturbed base functions \(\hat{b}_{r}\) in \(\mathcal{C}_{f}(I)\) such that \(\sup_{r\in\mathbb{N}}\|b_{r}\|_{\infty}<\infty\) and \(\sup_{r\in\mathbb{N}}\|\hat{b}_{r}\|_{\infty}<\infty\).
**Theorem 5.1**.: _Let \(\mathbf{D}:=\{(x_{i},y_{i}):i\in\mathbb{N}_{N}^{0}\}\) and \(\bar{\mathbf{D}}:=\{(x_{i},\bar{y}_{i}):i\in\mathbb{N}_{N}^{0}\}\) be two data sets in \(\mathcal{M}\). Let \(f_{b}^{\alpha}\) be the non-stationary FIF for \(\mathbf{D}\) generated by the sequence of IFSs \(\mathcal{I}_{r}=\{\mathcal{M};W_{i,r}(x,y)=(l_{i}(x),F_{i,r}(x,y)),i\in\mathbb{ N}_{N}\}\) defined in (2.5) and \(\bar{f}_{b}^{\alpha}\) be the non-stationary FIF for \(\bar{\mathbf{D}}\) generated by the sequence of IFSs \(\bar{\mathcal{I}}_{r}=\{\mathcal{M};\bar{W}_{i,r}(x,y)=(l_{i}(x),\bar{F}_{i,r }(x,y)),i\in\mathbb{N}_{N}\}\) defined through (5.1). Then we have,_
\[\|f_{b}^{\alpha}-\bar{f}_{b}^{\alpha}\|_{\infty}\leq\frac{\|f-\hat{f}\|_{ \infty}+\|\alpha\|_{\infty}\cdot\sup_{r\in\mathbb{N}}\{\|b_{r}-\hat{b}_{r}\|_{ \infty}\}}{1-\|\alpha\|_{\infty}}. \tag{5.2}\]
Proof.: From (2.7), we have
\[f_{b}^{\alpha}(x)=f(x)+\lim_{r\to\infty}\sum_{j=1}^{r}\alpha_{i,1}(Q_{i}(x)) \dots\alpha_{i,j}(Q_{i}^{j}(x))(f-b_{j})(Q_{i}^{j}(x)).\]
Therefore,
\[|f_{b}^{\alpha}(x)-\hat{f}_{b}^{\alpha}(x)|\] \[=|f(x)+\lim_{r\to\infty}\sum_{j=1}^{r}\alpha_{i,1}(Q_{i}(x))\dots \alpha_{i,j}(Q_{i}^{j}(x))(f-b_{j})(Q_{i}^{j}(x))-\hat{f}(x)\] \[\qquad\qquad-\lim_{r\to\infty}\sum_{j=1}^{r}\alpha_{i,1}(Q_{i}(x)) \dots\alpha_{i,j}(Q_{i}^{j}(x))(\hat{f}-\hat{b}_{j})(Q_{i}^{j}(x))|\] \[\leq|f(x)-\hat{f}(x)|+|\lim_{r\to\infty}\sum_{j=1}^{r}\alpha_{i,1} (Q_{i}(x))\dots\alpha_{i,j}(Q_{i}^{j}(x))(f-\hat{f}-b_{j}+\hat{b}_{j})(Q_{i}^{ j}(x))|\] \[\leq\|f-\hat{f}\|_{\infty}+\lim_{r\to\infty}\sum_{j=1}^{r}\| \alpha\|_{\infty}^{j}(\|f-\hat{f}\|_{\infty}+\|b_{j}-\hat{b}_{j}\|_{\infty})\] \[\leq\left(1+\sum_{j=1}^{\infty}\|\alpha\|_{\infty}^{j}\right)\|f -\hat{f}\|_{\infty}+\left(\lim_{r\to\infty}\sum_{j=1}^{r}\|\alpha\|_{\infty}^ {j}\right)\sup_{r\in\mathbb{N}}\{\|b_{r}-\hat{b}_{r}\|_{\infty}\}\] \[=\left(1+\frac{\|\alpha\|_{\infty}}{1-\|\alpha\|_{\infty}}\right) \|f-\hat{f}\|_{\infty}+\left(\frac{\|\alpha\|_{\infty}}{1-\|\alpha\|_{\infty}} \right)\sup_{r\in\mathbb{N}}\{\|b_{r}-\hat{b}_{r}\|_{\infty}\}\] \[=\frac{\|f-\hat{f}\|_{\infty}+\|\alpha\|_{\infty}\cdot\sup_{r\in \mathbb{N}}\{\|b_{r}-\hat{b}_{r}\|_{\infty}\}}{1-\|\alpha\|_{\infty}}.\]
The above inequality holds for all \(x\in I\); hence the inequality (5.2) follows.
_Remark 5.2_.: Let \(f,\hat{f}\) be two piecewise linear functions through the interpolation data sets \(\mathbf{D}\) and \(\bar{\mathbf{D}}\) respectively. Also assume that \(b_{r}=\hat{b}_{r}=b\) is a linear function passing through the points \((x_{0},y_{0})\) and \((x_{N},y_{N})\). Then we have
\[\|f_{b}^{\alpha}-\bar{f}_{b}^{\alpha}\|_{\infty}\leq\frac{1+\|\alpha\|_{ \infty}}{1-\|\alpha\|_{\infty}}\max_{i\in\mathbb{N}_{N}^{\prime}}\{|y_{i}- \bar{y}_{i}|\},\]
which is the same result for stationary FIF given in [26]. So, our result can be treated as a generalisation of the existing result.
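As an illustrative computation for Remark 5.2 (with numbers chosen by us), if \(\|\alpha\|_{\infty}=0.4\) and the ordinates are perturbed by at most \(\max_{i}|y_{i}-\bar{y}_{i}|=0.05\), then
\[\|f_{b}^{\alpha}-\bar{f}_{b}^{\alpha}\|_{\infty}\leq\frac{1+0.4}{1-0.4}\cdot 0.05\approx 0.117,\]
so small perturbations of the data lead to controllably small perturbations of the non-stationary FIF.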
_Remark 5.3_.: One may also consider perturbations of the abscissas of the interpolation points, or of both abscissas and ordinates, and examine how they affect the non-stationary FIF associated with the data. For more details, we refer the reader to the paper of Wang and Yu [26].
Next, we discuss the sensitivity of the non-stationary \(\alpha\)-FIF defined by the IFS \(\mathcal{I}_{r}\). Let \(f,b_{r},\alpha_{i,r}\) be as defined before and \(T_{i,r}:\mathcal{M}\to\mathbb{R},i\in\mathbb{N}_{N},r\in\mathbb{N}\), be a sequence of continuous functions on \(\mathcal{M}\) such that for all \((x,y)\in\mathcal{M}\),
\[T_{i,r}(x,y)=f(l_{i}(x))+[\alpha_{i,r}(x)+t_{i,r}\theta_{i,r}(x)]\,(y-b_{r}(x))+s_{i,r}\phi_{i,r}(x),\]
where \(t_{i,r},s_{i,r}\) are perturbation parameters satisfying \(|t_{i,r}|<1\) and \(|s_{i,r}|<1\), and \(\phi_{i,r},\theta_{i,r}\in Lip_{d}(I)\) satisfy \(\max\limits_{i\in\mathbb{N}_{N}}\{\|\alpha_{i,r}+t_{i,r}\theta_{i,r}\|_{\infty}\}<1\) and \(\phi_{i,r}(x_{0})=\phi_{i,r}(x_{N})=0.\) The function \(T_{i,r}\) is a perturbation of the function \(F_{i,r}\) for each \(i\in\mathbb{N}_{N},r\in\mathbb{N}\). Thus the sequence of IFSs \(\mathcal{I}^{\prime}_{r}=\{\mathcal{M};(l_{i}(x),T_{i,r}(x,y)),i\in\mathbb{N}_{N}\}\) may be treated as a perturbation of the sequence \(\mathcal{I}_{r}=\{\mathcal{M};(l_{i}(x),F_{i,r}(x,y)),i\in\mathbb{N}_{N}\}\). For each \(r\in\mathbb{N},i\in\mathbb{N}_{N}\), \(T_{i,r}\) is also contractive in the second variable and satisfies
\[T_{i,r}(x_{0},y_{0})=y_{i-1},\ \ T_{i,r}(x_{N},y_{N})=y_{i}.\]
Therefore the sequence of IFSs \(\mathcal{I}^{\prime}_{r}=\{\mathcal{M};(l_{i}(x),T_{i,r}(x,y)),i\in\mathbb{N}_{N}\}\) determines a unique non-stationary FIF, denoted by \(f_{b,s}^{\alpha,t}\).
**Proposition 5.4**.: Let \(\mathbf{D}:=\{(x_{i},y_{i}):i\in\mathbb{N}_{N}^{0}\}\) be a data set in \(\mathcal{M}\). Let \(f_{b}^{\alpha}\) be the non-stationary FIF corresponding to the sequence of IFSs \(\mathcal{I}_{r}=\{\mathcal{M};(l_{i}(x),F_{i,r}(x,y)),i\in\mathbb{N}_{N}\}\) defined in (2.5) and \(f_{b,s}^{\alpha,t}\) be the non-stationary FIF generated by the sequence of IFSs \(\mathcal{I}^{\prime}_{r}=\{\mathcal{M};(l_{i}(x),T_{i,r}(x,y)),i\in\mathbb{N}_{N}\}\). Then
\[\|f_{b,s}^{\alpha,t}-f_{b}^{\alpha}\|_{\infty}\leq\frac{\|\phi\|_{\infty}}{1-\| \alpha\|_{\infty}-\|t\|_{\infty}\|\theta\|_{\infty}}\|s\|_{\infty}+\frac{\| \theta\|_{\infty}\sup\{\|f-b_{r}\|_{\infty}\}}{(1-\|\alpha\|_{\infty})(1-\| \alpha\|_{\infty}-\|t\|_{\infty}\|\theta\|_{\infty})}\|t\|_{\infty}, \tag{5.3}\]
where
\[\|\phi\|_{\infty} =\sup_{r\in\mathbb{N}}\{\max_{i\in\mathbb{N}_{N}}\|\phi_{i,r}\|_ {\infty}\},\ \|\theta\|_{\infty}=\sup_{r\in\mathbb{N}}\{\max_{i\in\mathbb{N}_{N}}\| \theta_{i,r}\|_{\infty}\},\] \[\|s\|_{\infty} =\sup_{r\in\mathbb{N}}\{\max_{i\in\mathbb{N}_{N}}|s_{i,r}|\},\ \|t\|_{\infty}=\sup_{r\in\mathbb{N}}\{\max_{i\in\mathbb{N}_{N}}|t_{i,r}|\}.\]
Proof.: From (2.7), we have
\[f_{b}^{\alpha}(x)-f(x)=\lim_{r\to\infty}\sum_{j=1}^{r}\alpha_{i,1}(Q_{i}(x)) \ldots\alpha_{i,j}(Q_{i}^{j}(x))(f-b_{j})(Q_{i}^{j}(x)). \tag{5.4}\]
For \(r\in\mathbb{N}\), let us define an RB operator \(V^{\alpha_{r}}\) on \(\mathcal{M}\) by
\[(V^{\alpha_{r}}g)(x) =T_{i,r}({l_{i}}^{-1}(x),\ g({l_{i}}^{-1}(x)))\] \[=f(x)+[\alpha_{i,r}(Q_{i}(x))+t_{i,r}\theta_{i,r}(Q_{i}(x))](g-b_ {r})(Q_{i}(x))+s_{i,r}\phi_{i,r}(Q_{i}(x))\] \[=f(x)+[\mathbf{a}_{r}(x)+\mathbf{c}_{r}(x)](g-b_{r})(Q_{i}(x))+s_ {i,r}\phi_{i,r}(Q_{i}(x)),\]
where \(\mathbf{a}_{r}(x)=\alpha_{i,r}(Q_{i}(x))\) and \(\mathbf{c}_{r}(x)=t_{i,r}\theta_{i,r}(Q_{i}(x))\).
\[V^{\alpha_{1}}\circ V^{\alpha_{2}}\circ\cdots\circ V^{\alpha_{r }}f(x)-f(x)\] \[=\Big{[}\mathbf{a}_{1}(x)+\mathbf{c}_{1}(x)\Big{]}\Big{(}V^{ \alpha_{2}}\circ V^{\alpha_{3}}\circ\cdots\circ V^{\alpha_{r}}f-b_{1}\Big{)}( Q_{i}(x))+s_{i,1}\phi_{i,1}(Q_{i}(x))\]
Using induction, we obtain
\[V^{\alpha_{1}}\circ V^{\alpha_{2}}\circ\cdots\circ V^{\alpha_{ r}}f(x)-f(x)\] \[=\sum_{j=1}^{r}\Big{[}\mathbf{a}_{1}(x)+\mathbf{c}_{1}(x)\Big{]} \Big{[}\mathbf{a}_{2}(x)+\mathbf{c}_{2}(x)\Big{]}\ldots\Big{[}\mathbf{a}_{j}( x)+\mathbf{c}_{j}(x)\Big{]}(f-b_{j})(Q_{i}^{j}(x))\] \[\quad+\sum_{j=1}^{r}s_{i,j}\phi_{i,j}(Q_{i}^{j}(x))\Big{[} \mathbf{a}_{1}(x)+\mathbf{c}_{1}(x)\Big{]}\Big{[}\mathbf{a}_{2}(x)+\mathbf{c} _{2}(x)\Big{]}\ldots\Big{[}\mathbf{a}_{j-1}(x)+\mathbf{c}_{j-1}(x)\Big{]},\]
where \(Q_{i}^{j}\) is a suitable finite composition of mappings \(Q_{i}\). Now, taking the limit as \(r\to\infty\), we get
\[f_{b,s}^{\alpha,t}(x)-f(x)\] \[=\lim_{r\to\infty}\sum_{j=1}^{r}\Big{[}\mathbf{a}_{1}(x)+\mathbf{ c}_{1}(x)\Big{]}\Big{[}\mathbf{a}_{2}(x)+\mathbf{c}_{2}(x)\Big{]}\ldots\Big{[} \mathbf{a}_{j}(x)+\mathbf{c}_{j}(x)\Big{]}(f-b_{j})(Q_{i}^{j}(x))\] \[\quad+\lim_{r\to\infty}\sum_{j=1}^{r}s_{i,j}\phi_{i,j}(Q_{i}^{j}( x))\Big{[}\mathbf{a}_{1}(x)+\mathbf{c}_{1}(x)\Big{]}\Big{[}\mathbf{a}_{2}(x)+ \mathbf{c}_{2}(x)\Big{]}\ldots\Big{[}\mathbf{a}_{j-1}(x)+\mathbf{c}_{j-1}(x) \Big{]}. \tag{5.5}\]
Subtracting (5.4) from (5.5), we get
\[f_{b,s}^{\alpha,t}(x)-f_{b}^{\alpha}(x)\]
\[=\lim_{r\to\infty}\sum_{j=1}^{r}s_{i,j}\phi_{i,j}(Q_{i}^{j}(x))\Big[\mathbf{a}_{1}(x)+\mathbf{c}_{1}(x)\Big]\Big[\mathbf{a}_{2}(x)+\mathbf{c}_{2}(x)\Big]\ldots\Big[\mathbf{a}_{j-1}(x)+\mathbf{c}_{j-1}(x)\Big]\]
\[\quad+\lim_{r\to\infty}\sum_{j=1}^{r}\Big(\Big[\mathbf{a}_{1}(x)+\mathbf{c}_{1}(x)\Big]\Big[\mathbf{a}_{2}(x)+\mathbf{c}_{2}(x)\Big]\ldots\Big[\mathbf{a}_{j}(x)+\mathbf{c}_{j}(x)\Big]-\mathbf{a}_{1}(x)\mathbf{a}_{2}(x)\ldots\mathbf{a}_{j}(x)\Big)(f-b_{j})(Q_{i}^{j}(x))\]
\[=\lim_{r\to\infty}\sum_{j=1}^{r}s_{i,j}\phi_{i,j}(Q_{i}^{j}(x))\Big[\mathbf{a}_{1}(x)+\mathbf{c}_{1}(x)\Big]\Big[\mathbf{a}_{2}(x)+\mathbf{c}_{2}(x)\Big]\ldots\Big[\mathbf{a}_{j-1}(x)+\mathbf{c}_{j-1}(x)\Big]\]
\[\quad+\lim_{r\to\infty}\sum_{j=1}^{r}\bigg[\mathbf{a}_{1}(x)\mathbf{a}_{2}(x)\ldots\mathbf{a}_{j-1}(x)\cdot\mathbf{c}_{j}(x)+\mathbf{a}_{1}(x)\mathbf{a}_{2}(x)\ldots\mathbf{a}_{j-2}(x)\cdot\mathbf{c}_{j-1}(x)[\mathbf{a}_{j}(x)+\mathbf{c}_{j}(x)]\]
\[\qquad+\mathbf{a}_{1}(x)\mathbf{a}_{2}(x)\ldots\mathbf{a}_{j-3}(x)\cdot\mathbf{c}_{j-2}(x)[\mathbf{a}_{j-1}(x)+\mathbf{c}_{j-1}(x)][\mathbf{a}_{j}(x)+\mathbf{c}_{j}(x)]\]
\[\qquad+\dots+\mathbf{c}_{1}(x)[\mathbf{a}_{2}(x)+\mathbf{c}_{2}(x)][\mathbf{a}_{3}(x)+\mathbf{c}_{3}(x)]\ldots[\mathbf{a}_{j}(x)+\mathbf{c}_{j}(x)]\bigg](f-b_{j})(Q_{i}^{j}(x)).\]
Let \(\mathbf{a}=\sup_{r\in\mathbb{N}}\{\|\mathbf{a}_{r}\|_{\infty}\}=\|\alpha\|_{\infty}\) and \(\mathbf{c}=\sup_{r\in\mathbb{N}}\{\|\mathbf{c}_{r}\|_{\infty}\}=\|t\|_{\infty} \|\theta\|_{\infty}\). Therefore,
\[|f_{b,s}^{\alpha,t}(x)-f_{b}^{\alpha}(x)|\]
\[\leq\lim_{r\to\infty}\sum_{j=1}^{r}\|s\|_{\infty}\|\phi\|_{\infty}(\mathbf{a}+\mathbf{c})^{j-1}+\sup\{\|f-b_{r}\|_{\infty}\}\]
\[\qquad\times\lim_{r\to\infty}\sum_{j=1}^{r}\Big[\mathbf{a}^{j-1}\cdot\mathbf{c}+\mathbf{a}^{j-2}\cdot\mathbf{c}(\mathbf{a}+\mathbf{c})+\mathbf{a}^{j-3}\cdot\mathbf{c}(\mathbf{a}+\mathbf{c})^{2}+\dots+\mathbf{c}(\mathbf{a}+\mathbf{c})^{j-1}\Big],\]
\[=\|s\|_{\infty}\|\phi\|_{\infty}\lim_{r\to\infty}\sum_{j=1}^{r}(\mathbf{a}+\mathbf{c})^{j-1}+\sup\{\|f-b_{r}\|_{\infty}\}\]
\[\qquad\times\mathbf{c}\lim_{r\to\infty}\sum_{j=1}^{r}\Big[\mathbf{a}^{j-1}+\mathbf{a}^{j-2}\cdot(\mathbf{a}+\mathbf{c})+\mathbf{a}^{j-3}\cdot(\mathbf{a}+\mathbf{c})^{2}+\dots+(\mathbf{a}+\mathbf{c})^{j-1}\Big]\]
\[=\frac{\|s\|_{\infty}\|\phi\|_{\infty}}{1-\mathbf{a}-\mathbf{c}}+\sup\{\|f-b_{r}\|_{\infty}\}\]
\[\qquad\times\mathbf{c}\sum_{j=1}^{\infty}(\mathbf{a}+\mathbf{c})^{j-1}\bigg[1+\left(\frac{\mathbf{a}}{\mathbf{a}+\mathbf{c}}\right)+\left(\frac{\mathbf{a}}{\mathbf{a}+\mathbf{c}}\right)^{2}+\dots+\left(\frac{\mathbf{a}}{\mathbf{a}+\mathbf{c}}\right)^{j-1}\bigg]\]
\[=\frac{\|\phi\|_{\infty}\|s\|_{\infty}}{1-\mathbf{a}-\mathbf{c}}+\sup\{\|f-b_{r}\|_{\infty}\}\,\mathbf{c}\sum_{j=1}^{\infty}(\mathbf{a}+\mathbf{c})^{j-1}\times\left(\frac{1-\left(\frac{\mathbf{a}}{\mathbf{a}+\mathbf{c}}\right)^{j}}{1-\left(\frac{\mathbf{a}}{\mathbf{a}+\mathbf{c}}\right)}\right)\]
\[=\frac{\|\phi\|_{\infty}\|s\|_{\infty}}{1-\mathbf{a}-\mathbf{c}}+\sup\{\|f-b_{r}\|_{\infty}\}\sum_{j=1}^{\infty}\Big((\mathbf{a}+\mathbf{c})^{j}-\mathbf{a}^{j}\Big)\]
\[=\frac{\|\phi\|_{\infty}\|s\|_{\infty}}{1-\mathbf{a}-\mathbf{c}}+\sup\{\|f-b_{r}\|_{\infty}\}\left(\frac{\mathbf{a}+\mathbf{c}}{1-\mathbf{a}-\mathbf{c}}-\frac{\mathbf{a}}{1-\mathbf{a}}\right)\]
\[=\frac{\|\phi\|_{\infty}}{1-\mathbf{a}-\mathbf{c}}\,\|s\|_{\infty}+\frac{\|\theta\|_{\infty}\sup\{\|f-b_{r}\|_{\infty}\}}{(1-\mathbf{a})(1-\mathbf{a}-\mathbf{c})}\,\|t\|_{\infty}.\]
The above inequality holds for each \(x\in I\), hence
\[\|f_{b,s}^{\alpha,t}-f_{b}^{\alpha}\|_{\infty} \leq\frac{\|\phi\|_{\infty}}{1-\|\alpha\|_{\infty}-\|t\|_{\infty}\| \theta\|_{\infty}}\|s\|_{\infty}\] \[\quad+\frac{\|\theta\|_{\infty}\sup\{\|f-b_{r}\|_{\infty}\}}{(1- \|\alpha\|_{\infty})(1-\|\alpha\|_{\infty}-\|t\|_{\infty}\|\theta\|_{\infty})} \|t\|_{\infty}.\]
## 6. Continuous dependence on parameters \(b,\alpha,\text{ and }\triangle\).
In this section, we will investigate the continuous dependence of the non-stationary \(\alpha\)-fractal function on different IFS parameters. The reader can refer to [24] for the same study in the stationary case. We will start with the continuous dependence of \(f_{\Delta,b}^{\alpha}\) on the sequence of base functions \(b:=\{b_{r}\}\).
**Theorem 6.1**.: _Let \(f\in\mathcal{C}(I)\), and let the partition \(\triangle\) and the sequence of scale functions \(\alpha_{r}\in\mathcal{C}(I),\ r\in\mathbb{N}\), with \(\|\alpha\|_{\infty}<1\) be fixed. Let \(A=\{b=\{b_{r}\}:b_{r}\in\mathcal{C}(I),\ b_{r}(x)=f(x)\ \text{for}\ x=x_{0},x_{N}\}.\) Then, the map \(\mathcal{A}:A\to\mathcal{C}(I)\) defined by_
\[\mathcal{A}(b)=f_{\triangle,b}^{\alpha}\]
_is Lipschitz continuous._
Proof.: From Section 2.4, we obtain that \(f_{\triangle,b}^{\alpha}\) is unique for a fixed sequence of scale functions \(\alpha_{r}\), a partition \(\triangle\), and a suitable sequence of base functions \(b_{r}\in\mathcal{C}(I)\). Further, \(f_{\triangle,b}^{\alpha}\) satisfies the functional equation: for all \(x\in I_{i},\ i\in\mathbb{N}_{N}\), we have
\[f_{\Delta,b}^{\alpha}(x)=f(x)+\lim_{r\to\infty}\sum_{j=1}^{r}\alpha_{i,1}(Q_{i }(x))\ldots\alpha_{i,j}(Q_{i}^{j}(x))(f-b_{j})(Q_{i}^{j}(x)).\]
Let \(b=\{b_{r}\},\ c=\{c_{r}\}\in A.\) Then
\[\mathcal{A}(b)(x)=f_{\Delta,b}^{\alpha}(x)=f(x)+\lim_{r\to\infty}\sum_{j=1}^{ r}\alpha_{i,1}(Q_{i}(x))\ldots\alpha_{i,j}(Q_{i}^{j}(x))(f-b_{j})(Q_{i}^{j}(x)),\]
and
\[\mathcal{A}(c)(x)=f_{\Delta,c}^{\alpha}(x)=f(x)+\lim_{r\to\infty}\sum_{j=1}^{ r}\alpha_{i,1}(Q_{i}(x))\ldots\alpha_{i,j}(Q_{i}^{j}(x))(f-c_{j})(Q_{i}^{j}(x)).\]
On subtraction, we get for \(x\in I_{i},\)
\[\mathcal{A}(b)(x)-\mathcal{A}(c)(x)=\lim_{r\to\infty}\sum_{j=1}^{r}\alpha_{i, 1}(Q_{i}(x))\ldots\alpha_{i,j}(Q_{i}^{j}(x))(c_{j}-b_{j})(Q_{i}^{j}(x)).\]
Therefore,
\[\big{|}\mathcal{A}(b)(x)-\mathcal{A}(c)(x)\big{|} \leq\lim_{r\to\infty}\sum_{j=1}^{r}\big{|}\alpha_{i,1}(Q_{i}(x))\ldots\alpha_{i,j}(Q_{i}^{j}(x))(c_{j}-b_{j})(Q_{i}^{j}(x))\big{|}\] \[\leq\sum_{j=1}^{\infty}\big{\|}\alpha\big{\|}_{\infty}^{j}\|c_{j}-b_{j}\|_{\infty}\] \[\leq\frac{\|\alpha\|_{\infty}}{1-\|\alpha\|_{\infty}}\|b-c\|_{\infty}.\]
For all \(x\in I\), the aforementioned inequality holds. Hence,
\[\|\mathcal{A}(b)-\mathcal{A}(c)\|_{\infty}\leq\frac{\|\alpha\|_{\infty}}{1- \|\alpha\|_{\infty}}\|b-c\|_{\infty}.\]
This shows that \(\mathcal{A}\) is a Lipschitz continuous map with Lipschitz constant \(\frac{\|\alpha\|_{\infty}}{1-\|\alpha\|_{\infty}}.\)
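The bound in Theorem 6.1 can be checked numerically by truncating the limit defining \(f_{\triangle,b}^{\alpha}\) at a finite depth. The following Python sketch is only an illustration and is not taken from the paper: the germ function, the uniform partition, the scale functions and the two base-function sequences are illustrative assumptions, and the recursive evaluator simply unfolds the composed RB operators.

```python
import numpy as np

# Minimal numerical sketch (illustrative assumptions only): evaluate the
# non-stationary alpha-fractal function by truncating the limit defining it
# at a finite depth, then test the Lipschitz bound of Theorem 6.1.

N = 4
knots = np.linspace(0.0, 1.0, N + 1)          # partition Delta: x_0 < ... < x_N
f = lambda x: np.sin(np.pi * x)               # germ function, f(x_0) = f(x_N) = 0

def Q(i, x):
    """Inverse of the affine map l_i : I -> I_i = [x_{i-1}, x_i]."""
    return (x - knots[i - 1]) / (knots[i] - knots[i - 1])

def alpha(i, r, x):
    """Scale functions alpha_{i,r}; here ||alpha||_inf < 0.5 < 1."""
    return 0.4 + 0.1 * np.sin(x) / r

def fractal(x, b, depth=30, level=1):
    """V^{alpha_1} o ... o V^{alpha_depth} f evaluated at x (truncated limit)."""
    if level > depth:
        return f(x)
    i = int(min(max(np.searchsorted(knots, x), 1), N))   # index with x in I_i
    y = Q(i, x)
    return f(x) + alpha(i, level, y) * (fractal(y, b, depth, level + 1) - b(level, y))

# Two base-function sequences agreeing with f at x_0 and x_N.
b = lambda r, x: (0.50 + 0.3 / r) * f(x)
c = lambda r, x: (0.45 + 0.3 / r) * f(x)

grid = np.linspace(0.0, 1.0, 801)
lhs = max(abs(fractal(x, b) - fractal(x, c)) for x in grid)
norm_alpha = 0.5                                          # crude upper bound on ||alpha||_inf
norm_b_minus_c = max(abs(b(r, x) - c(r, x)) for r in range(1, 31) for x in grid)
rhs = norm_alpha / (1 - norm_alpha) * norm_b_minus_c
print(f"||A(b) - A(c)||_inf ~ {lhs:.4f} <= {rhs:.4f}")    # Lipschitz bound of Theorem 6.1
```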
**Theorem 6.2**.: _Let \(f\in\mathcal{C}(I)\), the sequence of base functions \(b_{r}\), and the partition \(\triangle\) be fixed. Let \(B=\{\alpha=\{\alpha_{r}\}:\alpha_{i,r}\in Lip_{d}(I)\) and \(\|\alpha\|_{\infty}\leq s<1\), where \(s\) is a fixed number\(\}\). Then the map \(\mathcal{B}:B\rightarrow\mathcal{C}(I)\) defined by_
\[\mathcal{B}(\alpha)=f_{\triangle,b}^{\alpha}\]
_is continuous._
Proof.: For a fixed partition \(\triangle\), a fixed sequence of scale functions \(\alpha=\{\alpha_{r}\}\), and a suitable sequence of base functions \(b_{r}\), the map \(f_{\triangle,b}^{\alpha}\) is unique. Further, \(f_{\triangle,b}^{\alpha}\) satisfies the functional equation: for all \(x\in I_{i},\ i\in\mathbb{N}_{N}\), we have
\[f_{\Delta,b}^{\alpha}(x)=f(x)+\lim_{r\rightarrow\infty}\sum_{j=1}^{r}\alpha_{ i,1}(Q_{i}(x))\ldots\alpha_{i,j}(Q_{i}^{j}(x))(f-b_{j})(Q_{i}^{j}(x)). \tag{6.1}\]
Let \(\alpha,\beta\in B\), then from the above functional equation, we have
\[\mathcal{B}(\alpha)(x)=f(x)+\lim_{r\rightarrow\infty}\sum_{j=1}^{r}\alpha_{ i,1}(Q_{i}(x))\ldots\alpha_{i,j}(Q_{i}^{j}(x))(f-b_{j})(Q_{i}^{j}(x)),\]
and
\[\mathcal{B}(\beta)(x)=f(x)+\lim_{r\rightarrow\infty}\sum_{j=1}^{r}\beta_{i,1} (Q_{i}(x))...\beta_{i,j}(Q_{i}^{j}(x))(f-b_{j})(Q_{i}^{j}(x)).\]
To show that \(\mathcal{B}\) is continuous at \(\alpha\), we subtract one of the above two equations from the other: for \(x\in I_{i},\ i\in\mathbb{N}_{N},\) we have
\[\mathcal{B}(\alpha)(x)-\mathcal{B}(\beta)(x)\] \[= \lim_{r\rightarrow\infty}\sum_{j=1}^{r}\alpha_{i,1}(Q_{i}(x)) \ldots\alpha_{i,j}(Q_{i}^{j}(x))(f-b_{j})(Q_{i}^{j}(x))\] \[-\lim_{r\rightarrow\infty}\sum_{j=1}^{r}\beta_{i,1}(Q_{i}(x)) \ldots\beta_{i,j}(Q_{i}^{j}(x))(f-b_{j})(Q_{i}^{j}(x))\] \[= \lim_{r\rightarrow\infty}\sum_{j=1}^{r}\Big{(}\alpha_{i,1}(Q_{i} (x))\ldots\alpha_{i,j}(Q_{i}^{j}(x))-\beta_{i,1}(Q_{i}(x))\ldots\beta_{i,j}(Q _{i}^{j}(x))\Big{)}(f-b_{j})(Q_{i}^{j}(x))\] \[= \lim_{r\rightarrow\infty}\sum_{j=1}^{r}\Big{(}(\alpha_{i,1}- \beta_{i,1})(Q_{i}(x))\beta_{i,2}(Q_{i}^{2}(x))\ldots\beta_{i,j}(Q_{i}^{j}(x))\] \[+\alpha_{i,1}(Q_{i}(x))(\alpha_{i,2}-\beta_{i,2})(Q_{i}^{2}(x)) \beta_{i,3}(Q_{i}^{3}(x))\ldots\beta_{i,j}(Q_{i}^{j}(x))\] \[+\cdots+\alpha_{i,1}(Q_{i}(x))\alpha_{i,2}(Q_{i}^{2}(x))\ldots \alpha_{i,j-1}(Q_{i}^{j-1}(x))(\alpha_{i,j}-\beta_{i,j})(Q_{i}^{j}(x))\Big{)} (f-b_{j})(Q_{i}^{j}(x)).\]
Therefore,
\[\Big{|}\mathcal{B}(\alpha)(x)-\mathcal{B}(\beta)(x)\Big{|}\] \[\leq \lim_{r\rightarrow\infty}\sum_{j=1}^{r}\Big{|}(\alpha_{i,1}-\beta _{i,1})(Q_{i}(x))\beta_{i,2}(Q_{i}^{2}(x))\ldots\beta_{i,j}(Q_{i}^{j}(x))\Big{|}\] \[+\Big{|}\alpha_{i,1}(Q_{i}(x))(\alpha_{i,2}-\beta_{i,2})(Q_{i}^{2 }(x))\beta_{i,3}(Q_{i}^{3}(x))\ldots\beta_{i,j}(Q_{i}^{j}(x))\Big{|}+\ldots\] \[+\Big{|}\alpha_{i,1}(Q_{i}(x))\alpha_{i,2}(Q_{i}^{2}(x))\ldots \alpha_{i,j-1}(Q_{i}^{j-1}(x))(\alpha_{i,j}-\beta_{i,j})(Q_{i}^{j}(x))\Big{|} \Big{|}(f-b_{j})(Q_{i}^{j}(x))\Big{|}\] \[\leq \lim_{r\rightarrow\infty}\sum_{j=1}^{r}\Big{(}\|\alpha_{1}-\beta _{1}\|_{\infty}\|\beta_{i,2}\|_{\infty}\ldots\|\beta_{i,j}\|_{\infty}+\|\alpha _{i,1}\|_{\infty}\|\alpha_{2}-\beta_{2}\|_{\infty}\|\beta_{i,3}\|_{\infty} \ldots\|\beta_{i,j}\|_{\infty}+\ldots\] \[+\ \|\alpha_{i,1}\|_{\infty}\|\alpha_{i,2}\|_{\infty}\ldots\|\alpha _{i,j-1}\|_{\infty}\|\alpha_{j}-\beta_{j}\|_{\infty}\Big{)}\|f-b_{j}\|_{\infty}\]
\[\leq \lim_{r\to\infty}\sum_{j=1}^{r}\Big{(}\|\alpha-\beta\|_{\infty}\|\beta \|_{\infty}^{j-1}+\|\alpha\|_{\infty}\|\alpha-\beta\|_{\infty}\|\beta\|_{ \infty}^{j-2}+\cdots+\|\alpha\|_{\infty}^{j-1}\|\alpha-\beta\|_{\infty}\Big{)} \|f-b\|_{\infty}\] \[= \|f-b\|_{\infty}\|\alpha-\beta\|_{\infty}\lim_{r\to\infty}\sum_{j =1}^{r}\|\alpha\|_{\infty}^{j-1}\bigg{\{}1+\bigg{(}\frac{\|\beta\|_{\infty}}{ \|\alpha\|_{\infty}}\bigg{)}+\bigg{(}\frac{\|\beta\|_{\infty}}{\|\alpha\|_{ \infty}}\bigg{)}^{2}+\cdots+\bigg{(}\frac{\|\beta\|_{\infty}}{\|\alpha\|_{ \infty}}\bigg{)}^{j-1}\bigg{\}}.\]
Without loss of generality, let \(\|\beta\|_{\infty}<\|\alpha\|_{\infty}\). Then
\[\Big{|}\mathcal{B}(\alpha)(x)-\mathcal{B}(\beta)(x)\Big{|}\] \[\leq\|f-b\|_{\infty}\|\alpha-\beta\|_{\infty}\sum_{j=1}^{\infty} \|\alpha\|_{\infty}^{j-1}\left(\frac{1-\bigg{(}\frac{\|\beta\|_{\infty}}{\| \alpha\|_{\infty}}\bigg{)}^{j}}{1-\bigg{(}\frac{\|\beta\|_{\infty}}{\|\alpha \|_{\infty}}\bigg{)}}\right)\] \[=\|f-b\|_{\infty}\frac{\|\alpha-\beta\|_{\infty}}{\|\alpha\|_{ \infty}-\|\beta\|_{\infty}}\sum_{j=1}^{\infty}\big{(}\|\alpha\|_{\infty}^{j}- \|\beta\|_{\infty}^{j}\big{)}\] \[=\|f-b\|_{\infty}\frac{\|\alpha-\beta\|_{\infty}}{\big{(}1-\| \alpha\|_{\infty}\big{)}\cdot(1-\|\beta\|_{\infty})}\] \[\leq\|\alpha-\beta\|_{\infty}\frac{\|f-b\|_{\infty}}{(1-s)^{2}}.\]
The aforementioned inequality holds for all \(x\in I\), therefore
\[\big{\|}\mathcal{B}(\alpha)-\mathcal{B}(\beta)\big{\|}_{\infty}\leq\|\alpha-\beta\|_{\infty}\frac{\|f-b\|_{\infty}}{(1-s)^{2}}.\]
Since \(\|f-b\|_{\infty}\) is finite, the right-hand side tends to zero as \(\|\alpha-\beta\|_{\infty}\to 0\); hence \(\mathcal{B}\) is continuous at \(\alpha\). As \(\alpha\) was taken arbitrarily, \(\mathcal{B}\) is continuous on \(B\).
Our next goal is to study the continuous dependence of \(f_{\Delta,b}^{\alpha}\) on the partition \(\Delta\). In this regard, let us recall the following theorem.
**Theorem 6.3**.: _[_4_, Theorem 11.1]_ _Let \((X,d)\) be a complete metric space and \((P,d_{p})\) be a metric space of parameters. Let \(\{X;w_{1},w_{2},\ldots,w_{N}\}\) be a hyperbolic IFS with contractivity \(c\). For \(n\in\mathbb{N}_{N}\), let \(w_{n}\) depend on the parameter \(p\in(P,d_{p})\) subject to the condition \(d\big{(}w_{n,p}(x),w_{n,q}(x)\big{)}\leq K\,d_{p}(p,q)\) for all \(x\in X\), with \(K\) independent of \(n\), \(p\), \(q\), and \(x\). Then the attractor \(A(p)\in\mathcal{H}(X)\) depends continuously on the parameter \(p\in P\) with respect to the Hausdorff metric \(h\) induced by \(d\)._
_Remark 6.4_.: Let \(b_{r}:\mathcal{C}(I)\to\mathcal{C}(I)\) be such that \(\|b\|_{\infty}:=\sup\limits_{r\in\mathbb{N}}\|b_{r}\|_{\infty}<\infty\). Then from the bound of non-stationary \(\alpha\)-fractal function, we have
\[\|f_{b}^{\alpha}\|_{\infty}\leq\|f\|_{\infty}+\frac{\|\alpha\|_{\infty}}{1-\| \alpha\|_{\infty}}\|f-b\|_{\infty},\]
where \(\|f-b\|_{\infty}:=\sup\limits_{r\in\mathbb{N}}\{\|f-b_{r}\|_{\infty}\}.\) This shows that the non-stationary \(\alpha\)-fractal function is bounded by a fixed value independent of the partition \(\triangle\). For any acceptable partition of \(I\), it is enough to work with \(X=I\times[-R,R],\) where \(R=\|f\|_{\infty}+\frac{\|\alpha\|_{\infty}}{1-\|\alpha\|_{\infty}}\|f-b\|_{\infty}\).
**Theorem 6.5**.: _Let \(f,\;b_{r}\in Lip_{d}(I)\) be such that \(b_{r}(x_{0})=f(x_{0})\) and \(b_{r}(x_{N})=f(x_{N})\), with Lipschitz constants \(k_{f},\;k_{b_{r}}\) respectively. Suppose that the scale functions \(\alpha_{i,r}\in Lip_{d}(I)\) have Lipschitz constants \(k_{\alpha_{r}}\) and that \(\|\alpha\|_{\infty}<1\). Let the collection of all partitions of \(I\) be denoted by \(\mathcal{P}(I),\) that is,_
\[\mathcal{P}(I)=\{\triangle:\triangle=\{x_{0},x_{1},\ldots,x_{N}\}\text{ such that }x_{0}<x_{1}<\cdots<x_{N}\}.\]
_Then, the mapping \(\mathcal{D}:\mathcal{P}(I)\to\mathcal{C}(I)\) defined by \(\mathcal{D}(\triangle)=f_{\triangle,b}^{\alpha}\) is continuous._
Proof.: Let \(\Delta:=\{x_{i}:i=0,1,2,\ldots,N;x_{0}<x_{1}<\cdots<x_{N}\}\) and \(\tilde{\Delta}:=\{\tilde{x}_{i}:i=0,1,2,\ldots,N;x_{0}=\tilde{x}_{0}<\tilde{x}_{1}<\cdots<\tilde{x}_{N}=x_{N}\}\) be two partitions of \(I\). Let \(W_{i,r}(x,y)=\big{(}l_{i}(x),F_{i,r}(x,y)\big{)},\) where \(l_{i}(x)\) and \(F_{i,r}(x,y)\) are as in (2.5) and \(X=I\times[-R,R]\). Since the maps \(l_{i},F_{i,r}\) and \(W_{i,r}\) depend on the partition chosen, we denote the maps corresponding to the partition \(\Delta\) by \(l_{i}^{\Delta},F_{i,r}^{\Delta}\) and \(W_{i,r}^{\Delta}\), respectively. Therefore,
\[\big{|}l_{i}^{\Delta}(x)-l_{i}^{\tilde{\Delta}}(x)\big{|} =\frac{1}{x_{N}-x_{0}}\big{|}(x_{i}-\tilde{x}_{i})(x-x_{0})+( \tilde{x}_{i-1}-x_{i-1})(x-x_{N})\big{|}\] \[\leq|x_{i}-\tilde{x}_{i}|+|\tilde{x}_{i-1}-x_{i-1}|\] \[\leq 2\|\Delta-\tilde{\Delta}\|_{2},\]
and
\[\big{|}F_{i,r}^{\Delta}(x,y)-F_{i,r}^{\tilde{\Delta}}(x,y)\big{|} =\Big{|}\alpha_{i,r}(x)y+f\big{(}l_{i}^{\Delta}(x)\big{)}-\alpha_{i,r}(x)b_{r}(x)\] \[\quad-[\alpha_{i,r}(x)y+f\big{(}l_{i}^{\tilde{\Delta}}(x)\big{)}-\alpha_{i,r}(x)b_{r}(x)]\Big{|}\] \[\leq k_{f}\big{|}l_{i}^{\Delta}(x)-l_{i}^{\tilde{\Delta}}(x)\big{|}\] \[\leq 2k_{f}\|\Delta-\tilde{\Delta}\|_{2}.\]
We define a metric \(\mu\) on \(\mathbb{R}^{2}\) by
\[\mu\big{(}(x,y),(x^{\prime},y^{\prime})\big{)}=|x-x^{\prime}|+\theta|y-y^{ \prime}|,\]
where \(\theta\) is a suitable constant mentioned below.
Using calculations similar to those in Theorem 2.1 of [4], we obtain that \(\{X;W_{i,r}\}\) is a hyperbolic IFS with respect to the metric \(\mu\) for
\[\theta<\frac{1-A}{Rk_{\alpha}+Ak_{f}+\|\alpha\|_{\infty}k_{b}+\|b\|_{\infty}k _{\alpha}},\]
where \(k_{\alpha}=\max\{k_{\alpha_{r}}:r\in\mathbb{N}\},\;k_{b}=\max\{k_{b_{r}}:r\in\mathbb{N}\},\;A=\max\{a_{i}:i\in\mathbb{N}_{N}\}\).
Consequently,
\[\mu\big{(}W_{i,r}^{\Delta}(x,y),W_{i,r}^{\tilde{\Delta}}(x,y)\big{)} \leq 2\|\Delta-\tilde{\Delta}\|_{2}+\theta 2k_{f}\|\Delta- \tilde{\Delta}\|_{2}\] \[=2(1+\theta k_{f})\|\Delta-\tilde{\Delta}\|_{2}.\]
Therefore, the IFS maps \(W_{i,r}\) depend continuously on the partition \(\Delta\in\mathcal{P}(I)\). Consequently, Theorem 6.3 asserts that the attractor \(G(\triangle)\in\mathcal{H}(X)\) depends continuously on \(\Delta\in\mathcal{P}(I)\) with respect to the Hausdorff metric \(h\). Therefore, for given \(\epsilon>0\) and \(\Delta\in\mathcal{P}(I)\), there exists \(\delta>0\) such that
\[\|\mathcal{D}(\tilde{\Delta})-\mathcal{D}(\Delta)\|_{\infty}=\|f_{\tilde{\Delta},b}^{\alpha}-f_{\Delta,b}^{\alpha}\|_{\infty}<\epsilon\text{ whenever }\|\tilde{\Delta}-\Delta\|_{2}<\delta.\]
That is, \(\mathcal{D}\) is continuous at \(\Delta\). Since \(\Delta\in\mathcal{P}(I)\) is arbitrary, \(\mathcal{D}\) is continuous on \(\mathcal{P}(I)\).
## Conclusion
In this article, we studied several analytical properties of the non-stationary fractal operator corresponding to the non-stationary FIFs. Note that in the construction of our proposed interpolant, the crucial IFS parameters are the sequences of base functions and scale functions. In the literature, it has been shown that the fractal dimension of (stationary) FIFs depends on the IFS parameters. Therefore, in future work we will attempt to compute the fractal dimension of the non-stationary interpolant by selecting suitable parameters.
|
2307.03474
|
Realizing the $s$-permutahedron via flow polytopes
|
Ceballos and Pons introduced the $s$-weak order on $s$-decreasing trees, for
any weak composition $s$. They proved that it has a lattice structure and
further conjectured that it can be realized as the $1$-skeleton of a polyhedral
subdivision of a polytope. We answer their conjecture in the case where $s$ is
a strict composition by providing three geometric realizations of the
$s$-permutahedron. The first one is the dual graph of a triangulation of a flow
polytope of high dimension. The second one, obtained using the Cayley trick, is
the dual graph of a fine mixed subdivision of a sum of hypercubes that has the
conjectured dimension. The third one, obtained using tropical geometry, is the
$1$-skeleton of a polyhedral complex for which we can provide explicit
coordinates of the vertices and whose support is a permutahedron as
conjectured.
|
Rafael S. González D'León, Alejandro H. Morales, Eva Philippe, Daniel Tamayo Jiménez, Martha Yip
|
2023-07-07T09:27:50Z
|
http://arxiv.org/abs/2307.03474v1
|
# Realizing the s-permutahedron via flow polytopes
###### Abstract.
Ceballos and Pons introduced the s-weak order on s-decreasing trees, for any weak composition s. They proved that it has a lattice structure and further conjectured that it can be realized as the 1-skeleton of a polyhedral subdivision of a polytope. We answer their conjecture in the case where s is a strict composition by providing three geometric realizations of the s-permutahedron. The first one is the dual graph of a triangulation of a flow polytope of high dimension. The second one, obtained using the Cayley trick, is the dual graph of a fine mixed subdivision of a sum of hypercubes that has the conjectured dimension. The third one, obtained using tropical geometry, is the 1-skeleton of a polyhedral complex for which we can provide explicit coordinates of the vertices and whose support is a permutahedron as conjectured.
Key words and phrases: s-weak order, s-decreasing trees, Stirling s-permutations, flow polytopes, geometric realization, polyhedral subdivision, Cayley trick. Partially supported by the MATH-AMSUD project 22-MATH-01 ALGonCOMB. AHM is partially supported by NSF grants DMS-1855536 and DMS-22030407, EP is supported by grants ANR-17-CE40-0018 and ANR-21-CE48-0020 of the French National Research Agency ANR (projects CAPPS and PAGCAP), DTJ is supported by grant ANR-21-CE48-0020 of the French National Research Agency ANR (project PAGCAP), and MY is partially supported by Simons collaboration grant 964456.
## 1. Introduction
The starting point of this work is a conjecture of Ceballos and Pons ([8, Conjecture 1], also Conjecture 1.2 below) stating that a certain combinatorial complex on s-decreasing trees can be geometrically realized as a polyhedral subdivision of a polytope. The family of s-decreasing trees is parameterized by _weak compositions_\(\mathrm{s}=(s_{1},\ldots,s_{n})\) where \(s_{i}\) are nonnegative integers for \(i=1,2,\ldots,n\). Ceballos and Pons [8, 9] showed that for every s, the set of s-decreasing trees admits a lattice structure called the s-weak order. In the special case when \(\mathrm{s}=(1,\ldots,1)\), the set of s-decreasing trees is in bijection with the set of permutations of \([n]:=\{1,\ldots,n\}\), and the s-weak order is the classical (right) weak order on the permutations of \([n]\). In the same way that the weak order on permutations restricts to the lattice on Catalan objects introduced by Tamari in [44] (as implied by a classical bijection of Stanley [42, Section 1.5]), Ceballos and Pons show in [9, Theorem 2.2] that the s-weak order restricts to the s-Tamari lattice. The s-Tamari lattice was first introduced by Preville-Ratelle and Viennot [36] as the \(\nu\)-Tamari lattice on grid paths weakly above the path \(\nu=NE^{s_{n}}\ldots NE^{s_{1}}\) (see [8, Theorem 3.5] for the isomorphism between the \(\nu\)-Tamari and the s-Tamari lattices). It is a further generalization of the \(m\)-Tamari lattice, that is the case when \(\mathrm{s}=(m,\ldots,m)\), introduced by Bergeron and Preville-Ratelle in [4] to study the Frobenius characteristic of the space of higher diagonal coinvariant spaces.
### s-decreasing trees, \(\mathrm{s}\)-weak order and the \(\mathrm{s}\)-permutahedron
Let \(\mathrm{s}=(s_{1},\ldots,s_{n})\) be a weak composition. An _\(\mathrm{s}\)-decreasing tree_\(T\) is a planar rooted tree on \(n\) internal vertices (called nodes), labeled by \([n]\), such that the node labeled \(i\) has \(s_{i}+1\) children and any descendant \(j\) of \(i\) satisfies \(j<i\). We denote by \(\mathcal{T}_{0}^{i},\ldots,\mathcal{T}_{s_{i}}^{i}\) the subtrees of node \(i\) from left to right.
We denote by \(\mathcal{T}_{\mathrm{s}}\) the set of all s-decreasing trees. Note that the value of \(s_{1}\) is inconsequential for determining the combinatorial properties of \(\mathcal{T}_{\mathrm{s}}\), so without loss of generality we may assume that \(s_{1}=1\) throughout this article. It is known (see for example [6, Section 5.1] and Corollary 3.7) that the number of s-decreasing trees is given by
\[\#\mathcal{T}_{\mathrm{s}}=(1+s_{n})(1+s_{n}+s_{n-1})\cdots(1+s_{n}+s_{n-1}+ \cdots+s_{2}), \tag{1.1}\]
which can be viewed as a generalization of the factorial numbers since this formula reduces to \(n!\) in the case \(\mathrm{s}=(1,\ldots,1)\).
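For concreteness, formula (1.1) is straightforward to evaluate; the following small sketch (the helper name is ours) also checks that it reduces to \(n!\) when \(\mathrm{s}=(1,\ldots,1)\).

```python
from math import factorial

def num_s_decreasing_trees(s):
    """Evaluate (1.1): (1 + s_n)(1 + s_n + s_{n-1}) ... (1 + s_n + ... + s_2)."""
    count, partial = 1, 0
    for s_i in reversed(s[1:]):       # s_n, s_{n-1}, ..., s_2
        partial += s_i
        count *= 1 + partial
    return count

assert num_s_decreasing_trees((1,) * 5) == factorial(5)   # reduces to n! for s = (1,...,1)
print(num_s_decreasing_trees((1, 2, 1)))                  # the composition used in Figure 1
```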
Let \(T\) be an s-decreasing tree. We denote by \(\mathrm{inv}(T)\) the multiset of _tree-inversions_ of \(T\) formed by pairs \((c,a)\) with multiplicity (also called cardinality)
\[\#_{T}(c,a)=\left\{\begin{array}{ll}0,&\mbox{ if $a$ is left of $c$,}\\ i,&\mbox{ if $a\in T_{i}^{c}$,}\\ s_{c},&\mbox{ if $a$ is right of $c$,}\end{array}\right.\]
for all \(1\leq a<c\leq n\).
In [8, Definition 2.5] Ceballos and Pons introduced the _\(\mathrm{s}\)-weak order_\(\trianglelefteq\) on \(\mathcal{T}_{\mathrm{s}}\) and showed in [9, Theorem 1.21] that it has the structure of a lattice. For s-decreasing trees \(R\) and \(T\) we define \(R\trianglelefteq T\) if \(\mathrm{inv}(R)\subseteq\mathrm{inv}(T)\).
To understand the cover relations in the s-weak order we define the notion of ascents. An _ascent_ on an s-decreasing tree \(T\) is a pair \((a,c)\) satisfying
1. \(a\in T^{c}_{i}\) for some \(0\leq i<s_{c}\),
2. if \(a<b<c\) and \(a\in T^{b}_{i}\), then \(i=s_{b}\),
3. if \(s_{a}>0\), then \(T^{a}_{s_{a}}\) consists of only one leaf.
Similarly, a _descent_ of \(T\) is a pair \((a,c)\) such that \(a\in T^{c}_{i}\) for some \(0<i\leq s_{c}\), if \(a<b<c\) and \(a\in T^{b}_{j}\) then \(j=0\), and if \(s_{a}>0\) then \(T^{a}_{0}\) consists of only one leaf. The notions of ascents and descents on s-decreasing trees generalize the same concepts from classical permutations, see Lemma 2.1.
A cover relation of the s-weak order can be seen either as an s_-tree rotation_ along an ascent \((a,c)\) (see [9, Definition 1.30]) or equivalently as taking the transitive closure of the multiset of inversions obtained from \(\operatorname{inv}(T)\) after increasing \(\#_{T}(c,a)\) by \(1\) ([9, Theorem 1.32]). If \(A\) is a subset of ascents of \(T\), we denote by \(T+A\) the s-decreasing tree whose inversion set is given by the transitive closure of \(\operatorname{inv}(T)+A\).
**Definition 1.1** (Definition 4.1 [8]).: The (combinatorial) s_-permutahedron_, denoted \(\operatorname{Perm_{s}}\), is the combinatorial complex with faces \((T,A)\) where \(T\) is an s-decreasing tree and \(A\) is a subset of ascents of \(T\).
In the s-permutahedron, the face \((T,A)\) is contained in \((T^{\prime},A^{\prime})\) if and only if \([T,T+A]\subseteq[T^{\prime},T^{\prime}+A^{\prime}]\) as intervals in the s-weak order. In particular, the vertices of \(\operatorname{Perm_{s}}\) are the s-decreasing trees and the edges correspond to s-tree rotations.
It turns out that when s is a (strict) composition, that is \(s_{i}>0\) for all \(i\), the properties of the s-weak order and the s-permutahedron can also be described in terms of Stirling s_-permutations_. These are multipermutations of \([n]\) avoiding the pattern \(121\) (a number \(j\) somewhere in between two occurrences \(i\) with \(i<j\)) and with \(s_{i}\) occurrences of \(i\) for each \(i\in[n]\). These multipermutations generalize the family of permutations (the case when \(\operatorname{s}=(1,\ldots,1)\)) and the family of Stirling permutations (the case when \(\operatorname{s}=(2,\ldots,2)\)) initially introduced by Gessel and Stanley in [18]. A further generalization of Stirling permutations (to the case when \(\operatorname{s}=(m,\ldots,m)\)) was studied by Park in [34, 33, 32]. Further study of combinatorial formulas and statistics on s-Stirling permutations such as descents, ascents and plateaux have been carried out by many other authors (see for example [5, 23, 27]). We refer the reader to Gessel's note in [19] which includes a list of articles on the family of s-Stirling permutations.
Figure 1 shows the Hasse diagram of the s-weak order for the case \(\operatorname{s}=(1,2,1)\). The vertices are indexed by s-decreasing trees and Stirling s-permutations.
### Geometric realizations of the s-permutahedron
From Figure 1 the reader can already appreciate how the s-permutahedron may be geometrically realizable. Ceballos and Pons posed the following conjecture on realizations of \(\operatorname{Perm_{s}}\).
**Conjecture 1.2** ([8, Conjecture 1]).: _Let \(\operatorname{s}\) be a weak composition. The s-permutahedron \(\operatorname{\mathit{Perm_{s}}}\) can be realized as a polyhedral subdivision of a polytope which is combinatorially isomorphic to the zonotope \(\sum_{1\leq i<j\leq n}s_{j}[\mathbf{e_{i}},\mathbf{e_{j}}]\), where \((\mathbf{e_{i}})_{1\leq i\leq n}\) is the canonical basis of \(\mathbb{R}^{n}\) and \([\mathbf{e_{i}},\mathbf{e_{j}}]\) indicates the convex hull of \(\mathbf{e_{i}}\) and \(\mathbf{e_{j}}\)._
Our goal in the present article is to provide solutions to Conjecture 1.2 in the case when s is a strict composition (see Theorem 1.3). We will use techniques similar to those that were previously employed for realizing the s-associahedron, which is a closely related combinatorial complex whose 1-skeleton is the (undirected) Hasse diagram of the s-Tamari lattice.
Ceballos, Padrol and Sarmiento [7] realized the Hasse diagram of the s-Tamari lattice as the edge graph of a polyhedral complex which is dual to a subdivision of a subpolytope of a product of simplices and to a fine mixed subdivision of a generalized permutahedron.
A different realization of the s-Tamari lattice was given by Bell, Gonzalez D'Leon, Mayorga Cetina and Yip [3] via flow polytopes.
A flow polytope \(\mathcal{F}_{G}\) is the set of valid flows on a directed acyclic graph \(G\). Geometric information about this polytope can be recovered from combinatorial information of the graph, for example computing the volume or constructing certain triangulations. In the recent literature there has been an increased interest in developing techniques in this direction, see for example [10, 22, 28, 29, 30]. In particular, Danilov, Karzanov and Koshevoy [11] described a method for obtaining regular unimodular triangulations for \(\mathcal{F}_{G}\) by placing a structure on \(G\) called a _framing_ (see Section 3.1). Bell et al. [3] used the method of Danilov, Karzanov and Koshevoy to show that the flow polytope on the s-_caracol graph_ car(s) (see the left side of Figure 2) has a framed triangulation whose dual graph is the s-Tamari lattice.
Figure 1. The s-weak order (1-skeleton of the s-permutahedron) for the case \(\mathrm{s}=(1,2,1)\). The vertices are indexed by s-decreasing trees and Stirling s-permutations.
In a similar vein, we introduce the graph \(\mathrm{oru(s)}\), which we call the _s-oruga graph_ (Definition 3.4). Our main result (Theorem 1.3) provides three geometric realizations of the s-permutahedron: as the dual graph of a DKK triangulation of the flow polytope \(\mathcal{F}_{\mathrm{oru(s)}}\), as the dual graph of a fine mixed subdivision of a sum of hypercubes obtained via the Cayley trick, and as the \(1\)-skeleton of a polyhedral complex, obtained via tropical geometry, whose support is a permutahedron.
**Structure of the paper.** In Section 2 we translate the combinatorics of s-decreasing trees, and the definitions of the s-weak order and the s-permutahedron, to the language of Stirling s-permutations, which we will use extensively throughout this paper. In Section 3 we provide the necessary background on flow polytopes which we will need for our geometric constructions and present our first geometric realization of the s-permutahedron as the dual of a framed triangulation of the flow polytope \(\mathcal{F}_{\mathrm{oru(s)}}\). The subsequent sections give the realizations obtained via the Cayley trick and via tropical geometry, as well as Lidskii-type formulas related to the volume of \(\mathcal{F}_{\mathrm{oru(s)}}\).
We leave an open question on realizing those formulas as a geometric Lidskii-type decomposition of \(\mathcal{F}_{\mathrm{oru(s)}}\). Finally, in Section 7 we discuss ongoing work and some ramifications of our results.
## 2. Combinatorics of the s-permutahedron in the language of Stirling s-permutations
The s-weak order can also be described by Stirling s-permutations, which we now define. Throughout the remainder of this article, unless specified, s is assumed to be a composition (i.e. a vector with positive integer entries). This restriction is necessary for us to connect the combinatorics of s-decreasing trees to the underlying geometry.
Let \(\mathrm{s}=(s_{1},\ldots,s_{n})\) be a composition. A _Stirling_\(\mathrm{s}\)-_permutation_ is a permutation of the word \(1^{s_{1}}2^{s_{2}}\ldots n^{s_{n}}\) that avoids the pattern \(121\), which means that there is never a letter \(j\) in between two occurrences of \(i\) with \(i<j\). We denote by \(\mathcal{W}_{\mathrm{s}}\) the set of all Stirling s-permutations.
The set of Stirling s-permutations is in bijection with the set of s-decreasing trees. This bijection is obtained by reading nodes along the in-order traversal of the _caverns_ (spaces between consecutive siblings) of an s-decreasing tree (see Figure 3). Note that this bijection induces a correspondence between the prefixes of a Stirling s-permutation \(w\) and the leaves of its corresponding tree \(T(w)\).
Analogous to the case of classical permutations, the cover relation in the s-weak order can be described in terms of transpositions of substrings in Stirling s-permutations.
Let \(w\) be a Stirling s-permutation. For \(a\in[n]\), we define the _\(a\)-block_\(B_{a}\) of \(w\) to be the shortest substring of \(w\) containing all \(s_{a}\) occurrences of \(a\). In Example 2.3, we see that the 5-block of \(w=33725455716\) is \(B_{5}=5455\). Note that an \(a\)-block of \(w\) necessarily starts and ends with \(a\) by minimality, and contains only letters in \([a]\) because \(w\) is 121-avoiding. Furthermore for \(a<c\), \(w\) contains the consecutive substring \(ac\) if and only if it is of the form \(w=u_{1}B_{a}cu_{2}\), where \(u_{1}\) and \(u_{2}\) denote consecutive substrings of \(w\).
Let \(w\) be a Stirling s-permutation. A pair \((a,c)\) with \(1\leq a<c\leq n\) is called an _ascent_ of \(w\) if \(ac\) is a consecutive substring of \(w\). It is a _descent_ of \(w\) if \(ca\) is a substring of \(w\). If \(w\) is of the form \(w=u_{1}B_{a}cu_{2}\) and \(a<c\), the _transposition_ of \(w\) along the ascent \((a,c)\) is the Stirling s-permutation \(u_{1}cB_{a}u_{2}\). We denote by \(\mathrm{inv}(w)\) the multiset of inversions formed by pairs \((c,a)\) with multiplicity \(\#_{w}(c,a)\in[0,s_{c}]\) equal to the number of occurrences of \(c\) that precede the \(a\)-block in \(w\). As in the case of tree-rotations, if \(A\) is a subset of ascents of \(w\), we denote by \(w+A\) the Stirling s-permutation whose inversion set is the transitive closure of \(\mathrm{inv}(w)+A\).
Figure 3. A \((2,1,1,2)\)-decreasing tree with vertices labeled via in-order. The corresponding Stirling s-permutation is \(w=244113\).
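These notions are easy to manipulate on a computer. Below is a small Python sketch (the list encoding of a Stirling s-permutation and the helper names are our own assumptions, not notation from the paper); it checks 121-avoidance, extracts \(a\)-blocks, lists ascents, computes the inversion multiplicities \(\#_{w}(c,a)\), and performs the transposition along an ascent.

```python
def is_stirling(word, s):
    """Check `word` is a permutation of 1^{s_1} ... n^{s_n} avoiding the pattern 121."""
    n = len(s)
    if sorted(word) != [a for a in range(1, n + 1) for _ in range(s[a - 1])]:
        return False
    return not any(word[i] == word[k] and word[j] > word[i]
                   for i in range(len(word))
                   for j in range(i + 1, len(word))
                   for k in range(j + 1, len(word)))

def a_block(word, a):
    """The a-block B_a: the shortest substring containing all occurrences of a."""
    first = word.index(a)
    last = len(word) - 1 - word[::-1].index(a)
    return word[first:last + 1]

def ascents(word):
    """Ascents (a, c): a < c and 'ac' is a consecutive substring of word."""
    return sorted({(a, c) for a, c in zip(word, word[1:]) if a < c})

def inv(word, s):
    """Inversion multiplicities #_w(c, a): occurrences of c before the a-block."""
    n = len(s)
    return {(c, a): word[:word.index(a)].count(c)
            for c in range(2, n + 1) for a in range(1, c)}

def transpose(word, a, c):
    """Transposition along the ascent (a, c): u1 B_a c u2 -> u1 c B_a u2."""
    start, b = word.index(a), a_block(word, a)
    assert word[start + len(b)] == c, "(a, c) is not an ascent of word"
    return word[:start] + [c] + b + word[start + len(b) + 1:]
```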
We have the following correspondence between concepts on the family of \(s\)-decreasing trees and on the family of Stirling \(s\)-permutations, whose proof follows easily from the definitions.
**Lemma 2.1**.: _Let \(w\) be a Stirling \(\mathrm{s}\)-permutation and \(T(w)\) its corresponding \(\mathrm{s}\)-decreasing tree. Let \(1\leq a<c\leq n\)._
* _The pair_ \((a,c)\) _is an ascent of_ \(T(w)\) _if and only if it is an ascent of_ \(w\)_._
* _The pair_ \((a,c)\) _is a descent of_ \(T(w)\) _if and only if it is a descent of_ \(w\)_._
* \(\#_{T(w)}(c,a)=\#_{w}(c,a)\)_._
_Moreover, suppose \((a,c)\) is an ascent of \(T=T(w)\) so that \(w\) is of the form \(w=u_{1}B_{a}cu_{2}\). Then \(T^{\prime}\) is the \(\mathrm{s}\)-tree rotation of \(T\) along \((a,c)\) if and only if \(T^{\prime}=T(w^{\prime})\) where \(w^{\prime}=u_{1}cB_{a}u_{2}\)._
**Corollary 2.2**.: _Let \(w\) and \(w^{\prime}\) be Stirling \(\mathrm{s}\)-permutations. Then \(w^{\prime}\) covers \(w\) in the \(\mathrm{s}\)-weak order if and only if \(w^{\prime}\) is the transposition of \(w\) along an ascent._
_Example 2.3_.: Let \(\mathrm{s}=(1,1,2,1,3,1,2)\) and consider the \(\mathrm{s}\)-permutation \(w=33725455716\). The transposition of \(w\) along the ascent \((5,7)\) switches the \(5\)-block of \(w\) with the \(7\) that immediately follows it and yields \(w^{\prime}=33727545516\). The corresponding \(\mathrm{s}\)-decreasing tree \(T=T(w)\) is shown on the left of Figure 4. The rotation of \(T\) along the ascent \((5,7)\) yields \(T^{\prime}=T(w^{\prime})\).
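Using the helpers sketched above, Example 2.3 can be verified directly (the list encoding of the word is an assumption of the sketch).

```python
s = (1, 1, 2, 1, 3, 1, 2)
w = [3, 3, 7, 2, 5, 4, 5, 5, 7, 1, 6]                            # w = 33725455716
assert is_stirling(w, s)
assert a_block(w, 5) == [5, 4, 5, 5]                             # B_5 = 5455
assert (5, 7) in ascents(w)
assert transpose(w, 5, 7) == [3, 3, 7, 2, 7, 5, 4, 5, 5, 1, 6]   # w' = 33727545516
```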
_Remark 2.4_.: The \(\mathrm{s}\)-permutahedron \(\mathrm{Perm_{s}}\) of Definition 1.1 can be alternatively defined as the combinatorial complex with faces \((w,A)\) where \(w\) is a Stirling \(\mathrm{s}\)-permutation and \(A\) is a subset of ascents of \(w\).
## 3. Subdivisions of flow polytopes
### Background on DKK triangulations of flow polytopes
Let \(G=(V,E)\) be a loopless connected oriented graph on vertices \(V=\{v_{0},\ldots,v_{n}\}\) with edges oriented from \(v_{i}\) to \(v_{j}\) if \(i<j\) and such that \(v_{0}\) (respectively \(v_{n}\)) is the only source (respectively sink) of \(G\). A vertex is an _inner vertex_ if it is not a source and not a sink. We allow multiple edges between pairs of vertices. For any vertex \(v_{i}\) we denote by \(\mathcal{I}_{i}\) its set of incoming edges, and by \(\mathcal{O}_{i}\) its set of outgoing edges.
Figure 4. A rotation of the \(\mathrm{s}\)-decreasing tree \(T\) along the ascent \((5,7)\) yields \(T^{\prime}\). \(T\) corresponds with \(w=33725455716\) and \(T^{\prime}\) corresponds with \(w^{\prime}=33727545516\), which is the transposition of the \(5\)-block of \(w\) with the \(7\) that immediately follows it.
Given a vector \(\mathbf{a}=(a_{0},a_{1}\ldots,a_{n-1},a_{n})\) such that \(\sum_{i}a_{i}=0\), a _flow_ of \(G\) with _netflow_\(\mathbf{a}\) is a vector \((f_{e})_{e\in E}\in(\mathbb{R}_{\geq 0})^{E}\) such that: \(\sum_{e\in\mathcal{I}_{i}}f_{e}+a_{i}=\sum_{e\in\mathcal{O}_{i}}f_{e}\) for all \(i\in[0,n]\). A flow \((f_{e})_{e\in E}\) of \(G\) is called an _integer flow_ if all \(f_{e}\) are integers. We denote by \(\mathcal{F}_{G}^{\mathbb{Z}}(\mathbf{a})\) the set of integer flows of \(G\) with netflow \(\mathbf{a}\). The _flow polytope_ of \(G\) is
\[\mathcal{F}_{G}(\mathbf{a})=\Big{\{}(f_{e})_{e\in E}\text{ flow of $G$ with netflow $\mathbf{a}$}\Big{\}}\subseteq\mathbb{R}^{E}.\]
When the netflow is not specified, i.e. when we write \(\mathcal{F}_{G}\), it is assumed to be \(\mathbf{a}=(1,0,\ldots,0,-1)\). In this case, \(\mathcal{F}_{G}\) is a polytope of dimension \(|E|-|V|+1\), and the vertices of \(\mathcal{F}_{G}\) correspond exactly to indicator vectors of the routes of \(G\) ([17, Corollary 3.1]), where a _route_ of \(G\) is a path from \(v_{0}\) to \(v_{n}\) i.e. a sequence of edges \(((v_{0},v_{k_{1}}),(v_{k_{1}},v_{k_{2}}),\ldots,(v_{k_{l}},v_{n}))\), with \(0<k_{1}<k_{2}<\ldots<k_{l}<n\).
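As a quick illustration of these definitions (on a toy graph of our own choosing, not a graph from the paper), the following sketch enumerates routes and checks that their indicator vectors are flows with netflow \((1,0,\ldots,0,-1)\).

```python
def routes(edges, n):
    """All routes (as lists of edge indices) from v_0 to v_n; `edges` is a list
    of arcs (i, j) with i < j, and parallel arcs are allowed."""
    out = {}
    for k, (i, j) in enumerate(edges):
        out.setdefault(i, []).append((k, j))
    def extend(v, path):
        if v == n:
            yield path
            return
        for k, w in out.get(v, []):
            yield from extend(w, path + [k])
    return list(extend(0, []))

def is_flow(vec, edges, netflow):
    """Check the conservation constraints defining F_G(a), plus nonnegativity."""
    ok = all(x >= 0 for x in vec)
    for v in range(len(netflow)):
        inflow = sum(vec[k] for k, (i, j) in enumerate(edges) if j == v)
        outflow = sum(vec[k] for k, (i, j) in enumerate(edges) if i == v)
        ok = ok and (inflow + netflow[v] == outflow)
    return ok

edges = [(0, 1), (1, 2), (1, 2), (2, 3), (0, 3)]      # toy graph with a doubled arc
a = (1, 0, 0, -1)
R = routes(edges, 3)
assert all(is_flow([int(k in r) for k in range(len(edges))], edges, a) for r in R)
print(len(R), "routes; dim F_G =", len(edges) - 4 + 1)   # 3 routes; dim = |E| - |V| + 1 = 2
```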
Flow polytopes admit several nice subdivisions that can be understood via combinatorial properties of the graph \(G\), in particular the triangulations defined by Danilov, Karzanov and Koshevoy in [11], that will be our main tool for obtaining geometric realizations of s-permutahedra.
Let \(P\) be a route of \(G\) that contains vertices \(v_{i}\) and \(v_{j}\). We denote by \(\mathit{Pv}_{i}\) the prefix of \(P\) that ends at \(v_{i}\), \(v_{i}P\) the suffix of \(P\) that starts at \(v_{i}\) and \(v_{i}\mathit{Pv}_{j}\) the subroute of \(P\) that starts at \(v_{i}\) and ends at \(v_{j}\).
A _framing_\(\preceq\) of \(G\) is a choice of linear orders \(\preceq_{\mathcal{I}_{i}}\) and \(\preceq_{\mathcal{O}_{i}}\) on the sets of incoming and outgoing edges for each inner vertex \(v_{i}\). This induces a total order on the set of partial routes from \(v_{0}\) to \(v_{i}\) (respectively from \(v_{i}\) to \(v_{n}\)) by taking \(\mathit{Pv}_{i}\preceq\mathit{Qv}_{i}\) if \(e_{P}\preceq_{\mathcal{I}_{j}}e_{Q}\) where \(v_{j}\) is the first vertex after which the two partial routes coincide, and \(e_{P}\), \(e_{Q}\) are the edges of \(P\) and \(Q\) that end at \(v_{j}\) (respectively \(v_{i}P\preceq v_{i}Q\) if \(e_{P}\preceq_{\mathcal{O}_{j}}e_{Q}\) where \(v_{j}\) is the last vertex before which the two partial routes coincide, and \(e_{P}\), \(e_{Q}\) are the edges of \(P\) and \(Q\) that start at \(v_{j}\)). When \(G\) is endowed with such a framing \(\preceq\), we say that \(G\) is _framed_. See Figure 5(a) for an example.
Let \(P\) and \(Q\) be routes of \(G\) with a common subroute between inner vertices \(v_{i}\) and \(v_{j}\) (possibly with \(v_{i}=v_{j}\)). We say that \(P\) and \(Q\) are _in conflict_ at \([v_{i},v_{j}]\) if the initial parts \(\mathit{Pv}_{i}\) and \(Qv_{i}\) are ordered differently than the final parts \(v_{j}P,v_{j}Q\). Otherwise we say that \(P\) and \(Q\) are _coherent_ at \([v_{i},v_{j}]\). We say that \(P\) and \(Q\) are _coherent_ if they are coherent at each common inner subroute. See Figure 5.
The relation of coherence on routes is reflexive and symmetric, and we can consider sets of mutually coherent routes, which are called the _cliques_ of \((G,\preceq)\). We denote by \(\mathit{Cliques}(G,\preceq)\) the set of cliques of \((G,\preceq)\), and by \(\mathit{MaxCliques}(G,\preceq)\) the set of cliques that are maximal under inclusion. For a set of routes \(C\), let \(\Delta_{C}\) be the convex hull of the vertices of \(\mathcal{F}_{G}\) corresponding to the elements in \(C\).
**Theorem 3.1** ([11, Theorem 1 & 2]).: _The simplices \(\{\Delta_{C}\,|\,C\in\mathit{MaxCliques}(G,\preceq)\}\) are the maximal cells of a regular triangulation of \(\mathcal{F}_{G}\)._
Figure 5. Illustration of routes \(P\) and \(Q\) that are coherent (on the left) and in conflict (on the right) with respect to the framing where the incoming/outgoing edges at each vertex are ordered from top to bottom.
Proof.: The formulation in [11] is in terms of the cone \(\mathcal{F}_{+}\) of flows with any netflow \(\mathbf{a}=(\lambda,0,\ldots,0,-\lambda)\), for \(\lambda\in\mathbb{R}_{+}\). We only need to intersect this cone with the affine hyperplane corresponding to taking \(\lambda=1\) to obtain the theorem in our formulation for the flow polytope \(\mathcal{F}_{G}\).
The triangulation obtained this way is the _DKK triangulation_ of \(\mathcal{F}_{G}\) with respect to the framing \(\preceq\) and we denote it by \(\mathit{Triang}_{\mathit{DKK}}(G,\preceq)\). The word _regular_ in the above theorem is related to the existence of an admissible height function, see the definition in Section 5.1 and Section 5.2.1.
Postnikov [35] and Stanley [39] used a recursive _subdivision lemma_ (see [28, Proposition 4.1]) to show that the volume of \(\mathcal{F}_{G}\) equals the number of integer flows in \(\mathcal{F}_{G}^{\mathbb{Z}}(\mathbf{d})\), with \(\mathbf{d}=(0,d_{1},\ldots,d_{n-1},-\sum_{i}d_{i})\) and \(d_{i}=\mathrm{indeg}_{G}(v_{i})-1\) (see Corollary 3.3). This recursive subdivision can be made compatible with DKK triangulations in what are called _framed Postnikov-Stanley triangulations_[30, Section 7.1]. As explained by Meszaros, Morales, and Striker [30, Definition 7.5], this leads to the following explicit bijection between the maximal cliques of \((G,\preceq)\) and the integer flows on \(G\) with netflow \(\mathbf{d}\).
Let \((G,\preceq)\) be a framed graph with netflow \(\mathbf{d}=(0,d_{1},\ldots,d_{n-1},-\sum_{i}d_{i})\) where \(d_{i}=\mathrm{indeg}_{G}(v_{i})-1\). We define the function
\[\Omega_{G,\preceq}:\mathrm{MaxCliques}(G,\preceq)\to\mathcal{F}_{G}^{\mathbb{ Z}}(\mathbf{d}):C\mapsto(n_{C}(e)-1)_{e\in E(G)},\]
where \(n_{C}(e)\) is the number of times the edge \(e=(v_{i},v_{j})\) appears in the set of prefixes \(\{Pv_{j}\mid P\in C\}\) of the maximal clique.
**Theorem 3.2** ([30, Theorem 7.8]).: _Given a framed graph \((G,\preceq)\), the map \(\Omega_{G,\preceq}\) is a bijection between maximal cliques in \(\mathrm{MaxCliques}(G,\preceq)\) and integer flows in \(\mathcal{F}_{G}^{\mathbb{Z}}(\mathbf{d})\)._
As a corollary, we obtain the following result of Postnikov-Stanley (unpublished, see [28, Sec. 6] and [31, Sec. 3] for proofs) and Baldoni-Vergne [1, Thm. 38].
**Corollary 3.3** ([39] and [1, Thm. 38]).: _For a graph \(G\) on \(\{v_{0},\ldots,v_{n}\}\) with netflow \(\mathbf{d}=(0,d_{1},\ldots,d_{n-1},-\sum_{i}d_{i})\) where \(d_{i}=\mathrm{indeg}(v_{i})-1\), we have that_
\[\mathrm{vol}\,\mathcal{F}_{G}(1,0,\ldots,0,-1)=\#\mathcal{F}_{G}^{\mathbb{Z}} (\mathbf{d}),\]
_where \(\mathrm{vol}\) denotes the normalized volume of a polytope._
### The flow polytope realization
We introduce the s-oruga graphs along with a fixed framing, and apply the combinatorial method of Danilov, Karzanov and Koshevoy to obtain a triangulation of the associated flow polytope. Combining previous results, we will have bijections between s-decreasing trees \(\mathcal{T}_{\mathrm{s}}\), Stirling s-permutations \(\mathcal{W}_{\mathrm{s}}\), integer \(\mathbf{d}\)-flows on \(\mathrm{oru}(\mathrm{s})\), and maximal cliques in the framed graph \((\mathrm{oru}(\mathrm{s}),\preceq)\).
#### 3.2.1. The \(\mathrm{s}\)-oruga graph and a DKK triangulation of its flow polytope
**Definition 3.4**.: Let \(\mathrm{s}=(s_{1},\ldots,s_{n})\) be a composition, and for convenience of notation we also set \(s_{n+1}=2\). The framed graph \((\mathrm{oru}(\mathrm{s}),\preceq)\) consists of vertices \(\{v_{-1},v_{0},\ldots,v_{n}\}\) and
* for \(i\in[1,n+1]\), there are \(s_{i}-1\)_source-edges_\((v_{-1},v_{n+1-i})\) labeled \(e_{1}^{i}\),..., \(e_{s_{i}-1}^{i}\),
* for \(i\in[1,n]\), there are two edges \((v_{n+1-i-1},v_{n+1-i})\) called _bump_ and _dip_ labeled \(e_{0}^{i}\) and \(e_{s_{i}}^{i}\),
* the incoming edges of \(v_{n+1-i}\) are ordered \(e_{j}^{i}\prec_{\mathcal{I}_{n+1-i}}e_{k}^{i}\) for \(0\leq j<k\leq s_{i}\),
* the outgoing edges of \(v_{n+1-i}\) are ordered \(e_{0}^{i-1}\prec_{\mathcal{O}_{n+1-i}}e_{s_{i-1}}^{i-1}\).
We call \(\operatorname{\text{\rm{oru}}(s)}\) the _s-oruga graph_. We will also denote by \(\operatorname{\text{\rm{oru}}}_{n}\) the induced subgraph of \(\operatorname{\text{\rm{oru}}(s)}\) with vertices \(\{v_{0},\ldots,v_{n}\}\) and call this the _oruga graph_ of length \(n\).
Figure 6a and Figure 6b show examples of this construction. In this article, we choose to draw the graph \(\operatorname{\text{\rm{oru}}(s)}\) in such a way that the framing of the incoming and outgoing edges at each inner vertex is ordered from "top to bottom". Note that the corresponding flow polytope \(\mathcal{F}_{\operatorname{\text{\rm{oru}}(s)}}\) has dimension \(|\operatorname{s}|=\sum_{i=1}^{n}s_{i}\).
The routes of \(\operatorname{\text{\rm{oru}}(s)}\) will play a key role, thus we will describe them as \(\operatorname{\text{\rm{R}}}(k,t,\delta)\) intuitively as follows. Every route of \(\operatorname{\text{\rm{oru}}(s)}\) starts from \(v_{-1}\), lands in a vertex \(v_{n+1-k}\) via a source-edge labelled \(e_{t}^{k}\) and then follows \(k-1\) edges that are either bumps or dips denoted by a \(01\)-vector \(\delta\).
Formally, for \(k\in[n+1]\), \(t\in[1,s_{k}-1]\), and \(\delta=(\delta_{1},\ldots,\delta_{k-1})\in\{0,1\}^{k-1}\), we denote by \(\operatorname{\text{\rm{R}}}(k,t,\delta)\) the sequence of edges \((e_{t_{k}}^{k}\), \(e_{t_{k-1}}^{k-1}\),..., \(e_{t_{1}}^{1})\) where
* \(t_{k}=t\),
* for all \(j\in[1,k-1]\), \(t_{j}=\delta_{j}s_{j}\).
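The construction of \(\operatorname{oru(s)}\) and of the routes \(\mathrm{R}(k,t,\delta)\) is easy to code. The sketch below is an illustration only, with our own label encoding \((i,j)\) standing for \(e_{j}^{i}\); it also checks the dimension count \(\dim\mathcal{F}_{\operatorname{oru(s)}}=|\mathrm{s}|\) mentioned above.

```python
def oru_edges(s):
    """Edges of oru(s) following Definition 3.4, as (tail, head, label) with
    label (i, j) standing for e_j^i; vertices are -1, 0, ..., n."""
    n, t = len(s), list(s) + [2]            # convention s_{n+1} = 2
    edges = []
    for i in range(1, n + 2):               # source-edges e_1^i, ..., e_{s_i - 1}^i
        for j in range(1, t[i - 1]):
            edges.append((-1, n + 1 - i, (i, j)))
    for i in range(1, n + 1):               # bump e_0^i and dip e_{s_i}^i
        edges.append((n - i, n + 1 - i, (i, 0)))
        edges.append((n - i, n + 1 - i, (i, s[i - 1])))
    return edges

def route_R(k, t, delta, s):
    """R(k, t, delta) as its list of edge labels (e_t^k, e_{t_{k-1}}^{k-1}, ..., e_{t_1}^1)."""
    return [(k, t)] + [(j, delta[j - 1] * s[j - 1]) for j in range(k - 1, 0, -1)]

s = (1, 1, 2, 1, 3, 1, 2)                   # the composition of Figure 7
E = oru_edges(s)
assert len(E) - (len(s) + 2) + 1 == sum(s)  # dim F_oru(s) = |E| - |V| + 1 = |s|
print(route_R(5, 2, (1, 0, 1, 1), s))       # the bold route of Figure 7:
                                            # [(5, 2), (4, 1), (3, 2), (2, 0), (1, 1)]
```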
_Remark 3.5_.: Although the graph \(\operatorname{\text{\rm{oru}}(s)}\) starts with vertex \(v_{-1}\) instead of \(v_{0}\) all the technology of Section 3.1 can be applied to it. To see this, we can either contract the edge \(e_{1}^{n+1}\) to obtain a graph whose flow polytope is integrally equivalent to \(\mathcal{F}_{G}\), or simply relabel the vertices
Figure 6. (a) The graph \(\operatorname{\text{\rm{oru}}(s)}\) for \(s=(2,3,2,2)\) with framing shown in red. That is, an ordering on incoming edges and outgoing edges at each inner vertex from “top to bottom”. (b) The graph \(\operatorname{\text{\rm{oru}}(s)}\) for \(s=(1,2,1)\) with edge labels.
Figure 7. The s-oruga graph \(\operatorname{\text{\rm{oru}}(s)}\) for \(\operatorname{\text{\rm{s}}}=(1,1,2,1,3,1,2)\). The framing orders \(\mathcal{I}_{3}\) and \(\mathcal{O}_{3}\) are shown at vertex \(v_{3}\). The route \(\operatorname{\text{\rm{R}}}(5,2,(1,0,1,1))\) is shown in bold blue.
with \([0,n+1]\). The resulting graph has flows and routes directly in bijection with the flows and routes of \(\operatorname{oru(s)}\).
**Proposition 3.6**.: _Let \(\operatorname{s}\) be a composition and let \(\mathbf{d}=(0,0,s_{n},s_{n-1},\ldots,s_{2},-\sum_{i=2}^{n}s_{i})\). The set of Stirling \(\operatorname{s}\)-permutations is in bijection with the set of integer \(\mathbf{d}\)-flows of \(\operatorname{oru(s)}\)._
Proof.: First, we notice that an integer flow \((f_{e})_{e}\) on \(\operatorname{oru(s)}\) with netflow \(\mathbf{d}\) necessarily has zero flow on every source-edge, so \((f_{e})_{e}\) is characterized by the fact that the total flow on each pair of bump and dip edges satisfies \(f_{e_{0}^{i}}+f_{e_{s_{i}}^{i}}=s_{n}+\cdots+s_{i+1}\) for all \(i\in[1,n-1]\). Thus to describe an integer \(\mathbf{d}\)-flow on \(\operatorname{oru(s)}\), it is enough to determine the flow on the bump edges \(e_{0}^{i}\) for all \(i\in[1,n-1]\). Given a Stirling \(\operatorname{s}\)-permutation \(w\), let \(f_{e_{0}^{i}}\) be the number of letters strictly greater than \(i\) that occur before the \(i\)-block \(B_{i}\) in \(w\). This implies \(0\leq f_{e_{0}^{i}}\leq s_{n}+\cdots+s_{i+1}\), and thus defines an integer \(\mathbf{d}\)-flow on \(\operatorname{oru(s)}\).
Conversely, any Stirling \(\operatorname{s}\)-permutation can be built iteratively by an insertion algorithm associated to a choice of integers \(f_{e_{0}^{i}}\in[0,s_{n}+\cdots+s_{i+1}]\) for \(i\in[1,n-1]\) in the following way. Start with a block of \(s_{n}\) consecutive copies of \(n\) (step \(i=0\)). At step \(i\) for \(i\in[1,n-1]\), there are \(s_{n}+\cdots+s_{n-i+1}+1\) possible positions for the next insertion. We insert a block of \(s_{n-i}\) consecutive copies of \((n-i)\) in the \((f_{e_{0}^{n-i}})\)-th position. This creates a \(121\)-avoiding permutation of the word \(1^{s_{1}}2^{s_{2}}\cdots n^{s_{n}}\).
See Figure 8 for an example illustrating the insertion algorithm described in the proof of Proposition 3.6.
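The insertion algorithm is straightforward to carry out by computer. The following short Python sketch (purely illustrative; the function names are ours and not part of the construction) performs the insertion algorithm of the proof of Proposition 3.6 together with its inverse, and checks them on the example of Figure 8.

```python
def flows_to_stirling(s, f):
    """Insertion algorithm from the proof of Proposition 3.6.

    s is the composition [s_1, ..., s_n]; f[i] is the flow on the bump edge
    e_0^i for i in [1, n-1], with 0 <= f[i] <= s_n + ... + s_{i+1}.
    Insertion positions are counted from 0.
    """
    n = len(s)
    w = [n] * s[n - 1]                      # step i = 0: a block of s_n copies of n
    for i in range(1, n):
        letter = n - i
        pos = f[letter]
        w = w[:pos] + [letter] * s[letter - 1] + w[pos:]
    return w


def stirling_to_flows(s, w):
    """f_{e_0^i} = number of letters strictly greater than i before the i-block."""
    n = len(s)
    return {i: sum(1 for x in w[:w.index(i)] if x > i) for i in range(1, n)}


# Example of Figure 8: s = (1,1,2,1,3,1,2) and w = 33725455716.
s = [1, 1, 2, 1, 3, 1, 2]
w = [3, 3, 7, 2, 5, 4, 5, 5, 7, 1, 6]
assert flows_to_stirling(s, stirling_to_flows(s, w)) == w
```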
Since, by Corollary 3.3, the normalized volume of the flow polytope \(\mathcal{F}_{\operatorname{oru(s)}}\) is the number of integer \(\mathbf{d}\)-flows on \(\operatorname{oru(s)}\), we obtain the following as an immediate corollary.
**Corollary 3.7**.: _Given a composition \(\operatorname{s}\), then_
\[\operatorname{vol}\mathcal{F}_{\operatorname{oru(s)}}=\#\mathcal{T}_{ \operatorname{s}}=\#\mathcal{W}_{\operatorname{s}}=\prod_{i=1}^{n-1}\left(1+s _{n-i+1}+s_{n-i+2}+\cdots+s_{n}\right).\]
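For small compositions this product formula is easy to verify by brute force. The sketch below (illustrative only; the function names are ours) counts the \(121\)-avoiding permutations of the word \(1^{s_{1}}\cdots n^{s_{n}}\) directly and compares the count with the product.

```python
from itertools import permutations
from math import prod

def is_121_avoiding(w):
    # no letter strictly larger than x occurs between two occurrences of x
    return all(not (w[i] == w[k] and w[j] > w[i])
               for i in range(len(w))
               for j in range(i + 1, len(w))
               for k in range(j + 1, len(w)))

def stirling_s_permutations(s):
    word = [i + 1 for i, m in enumerate(s) for _ in range(m)]
    return {w for w in permutations(word) if is_121_avoiding(w)}

def volume_formula(s):
    # prod_{i=1}^{n-1} (1 + s_{n-i+1} + ... + s_n)
    n = len(s)
    return prod(1 + sum(s[n - i:]) for i in range(1, n))

for s in ([1, 2, 1], [2, 2], [1, 1, 2, 1]):
    assert len(stirling_s_permutations(s)) == volume_formula(s)
```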
_Remark 3.8_.: We can also give an explicit correspondence between \(\operatorname{s}\)-decreasing trees and integer \(\mathbf{d}\)-flows of \(\operatorname{oru(s)}\). Note that this correspondence holds in the more general setting where \(\mathrm{s}\) is a weak composition and we consider integer \(\mathbf{d}\)-flows on the oruga graph \(\mathrm{oru}_{n}\), since the source-edges do not play any role.
Figure 8. An integer \(\mathbf{d}\)-flow of \(\operatorname{oru(s)}\) (only the flow on the bump edges is shown) and the steps of the insertion algorithm of Proposition 3.6 that output the corresponding Stirling \(\operatorname{s}\)-permutation \(w=33725455716\).
Given an integer \(\mathbf{d}\)-flow \((f_{e})_{e}\) of \(\mathrm{oru}(\mathrm{s})\) (note again that it is enough to know the values \(f_{e_{0}^{i}}\) for \(i\in[1,n-1]\) to determine the entire integer flow), we build an \(\mathrm{s}\)-decreasing tree inductively as follows. Start with the tree given by the node \(n\) and \(s_{n}+1\) leaves. At step \(i\) for \(i\in[1,n-1]\), we have a partial \(\mathrm{s}\)-decreasing tree with labelled nodes \(n\) to \(n+1-i\), and \(1+\sum_{k=n+1-i}^{n}s_{k}\) leaves that we momentarily label from \(0\) to \(\sum_{k=n+1-i}^{n}s_{k}\) along the counterclockwise traversal of the partial tree. Attach the next node \(n-i\), with \(s_{n-i}+1\) pending leaves, to the leaf of the partial tree labeled \(f_{e_{0}^{n-i}}\). This procedure produces decreasing trees with the correct number of children at each node. Hence, after the \(n\)-th step we obtain an \(\mathrm{s}\)-decreasing tree. Reciprocally, any \(\mathrm{s}\)-decreasing tree can be built iteratively in this way, so it is associated to a choice of integers \(f_{e_{0}^{i}}\in[0,\sum_{k=n+1-i}^{n}s_{k}]\) for all \(i\in[1,n-1]\). The interested reader can verify that this procedure applied to the flow in the example of Figure 8 produces the tree \(T\) on the left of Figure 4.
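The tree-building procedure above can be sketched in a few lines of Python (illustrative only). We take the counterclockwise traversal of the partial plane tree to be a depth-first, left-to-right traversal of its leaves; this convention is an assumption on our part and only affects how the resulting tree is drawn.

```python
class Node:
    def __init__(self, label, arity):
        self.label = label
        self.children = [None] * arity      # None marks a pending leaf

def flow_to_s_decreasing_tree(s, f):
    """Build the s-decreasing tree of Remark 3.8 from the bump-edge flows f[i]."""
    n = len(s)
    root = Node(n, s[n - 1] + 1)            # node n with s_n + 1 leaves
    for i in range(1, n):
        leaves = []                          # (parent, slot) pairs in traversal order
        def collect(node):
            for slot, child in enumerate(node.children):
                if child is None:
                    leaves.append((node, slot))
                else:
                    collect(child)
        collect(root)
        parent, slot = leaves[f[n - i]]      # attach node n-i to the leaf labelled f_{e_0^{n-i}}
        parent.children[slot] = Node(n - i, s[n - i - 1] + 1)
    return root
```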
We can now explicitly describe the DKK maximal cliques of coherent routes of \(\mathrm{oru}(\mathrm{s})\) via Stirling \(\mathrm{s}\)-permutations. This is an important construction for the results which follow.
Let \(\mathrm{s}\) be a composition, and \(u\) a (possibly empty) prefix of a Stirling \(\mathrm{s}\)-permutation. For all \(a\in[1,n]\), we denote by \(t_{a}\) (or \(t_{a}(u)\) if \(u\) is not clear from the context) the number of occurrences of \(a\) in \(u\), and we denote by \(c\) the smallest value in \([1,n]\) such that \(0<t_{c}<s_{c}\). If there is no such value, we set \(c=n+1\) and \(t_{n+1}=1\). The definition of \(c\) implies that for all \(a<c\), either \(t_{a}=0\) or \(t_{a}=s_{a}\). Then we define \(\mathrm{R}[u]\) to be the route \((e_{t_{c}}^{c},\,e_{t_{c-1}}^{c-1},\,\ldots,\,e_{t_{1}}^{1})\).
For example, for the prefix \(u=3372545\) of \(w=33725455716\) in the example of Figure 8 we have that \(c=5\), \(t_{5}=2,t_{4}=1,t_{3}=2,t_{2}=1,t_{1}=0\), so \(\mathrm{R}[u]=(e_{2}^{5},\,e_{1}^{4},e_{2}^{3},e_{1}^{2},e_{0}^{1})=\mathrm{R}(5,2,(0,1,1,1))\).
Let \(w\) be a Stirling \(\mathrm{s}\)-permutation. For \(i\in[\,|\,\mathrm{s}\,|]\), we denote by \(w_{i}\) the \(i\)-th letter of \(w\), and for \(i\in[0,|\,\mathrm{s}\,|]\) we denote by \(w_{[i]}\) the prefix of \(w\) of length \(i\), with \(w_{[0]}:=\emptyset\). Let \(\Delta_{w}\) be the set of routes \(\{\mathrm{R}[w_{[i]}]\,|\,i\in[0,|\,\mathrm{s}\,|]\}\) and identify it with the simplex whose vertices are the indicator vectors of these routes.
Note that each maximal clique always contains the routes \(\mathrm{R}[w_{[0]}]=(e_{1}^{n+1},e_{0}^{n},\ldots,e_{0}^{1})=\mathrm{R}(n+1,1,( 0)^{n})\) and \(\mathrm{R}[w_{[|\,\mathrm{s}\,|]}]=(e_{1}^{n+1},e_{s_{n}}^{n},\ldots,e_{s_{1}}^ {1})=\mathrm{R}(n+1,1,(1)^{n})\). See Figure 9 for the example of \(\Delta_{w}\) corresponding to the Stirling \((1,2,1)\)-permutation \(w=3221\).
Figure 9. The maximal clique \(\Delta_{w}=\{\mathrm{R}[w_{[0]}],\ldots,\mathrm{R}[w_{[|\,\mathrm{s}\,|]}]\}\) corresponding to the Stirling \((1,2,1)\)-permutation \(w=3221\).
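The cliques \(\Delta_{w}\) are easy to generate from this description. In the sketch below (illustrative only; names are ours) a route is recorded as its list of edge labels \((c,t_{c}),(c-1,t_{c-1}),\ldots,(1,t_{1})\), read from the source side; for \(\mathrm{s}=(1,2,1)\) and \(w=3221\) it produces the five routes of the clique \(\Delta_{w}\) depicted in Figure 9.

```python
def route_of_prefix(s, u):
    """R[u], encoded as [(c, t_c), ..., (1, t_1)], where (a, t) stands for the edge e^a_t."""
    n = len(s)
    t = {a: u.count(a) for a in range(1, n + 1)}
    c = next((a for a in range(1, n + 1) if 0 < t[a] < s[a - 1]), n + 1)
    t[n + 1] = 1
    return [(a, t[a]) for a in range(c, 0, -1)]

def maximal_clique(s, w):
    """Delta_w = (R[w_[0]], R[w_[1]], ..., R[w_[|s|]])."""
    return [route_of_prefix(s, w[:i]) for i in range(len(w) + 1)]

for route in maximal_clique([1, 2, 1], [3, 2, 2, 1]):
    print(route)
# [(4, 1), (3, 0), (2, 0), (1, 0)]   <- R[w_[0]]
# [(4, 1), (3, 1), (2, 0), (1, 0)]
# [(2, 1), (1, 0)]
# [(4, 1), (3, 1), (2, 2), (1, 0)]
# [(4, 1), (3, 1), (2, 2), (1, 1)]   <- R[w_[|s|]]
```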
**Lemma 3.9**.: _The maximal simplices of \(\operatorname{Triang}_{DKK}(\operatorname{oru(s)},\preceq)\) are exactly the simplices \(\Delta_{w}\) where \(w\) ranges over all Stirling \(\operatorname{s}\)-permutations._
Proof.: Recall that by Theorem 3.1 the maximal simplices of \(\operatorname{Triang}_{DKK}(\operatorname{oru(s)},\preceq)\) are the simplices \(\Delta_{C}\), where \(C\) is a maximal clique of coherent routes of \((\operatorname{oru(s)},\preceq)\).
Let \(w\) be a Stirling s-permutation. We will check that \(\Delta_{w}\) is a clique of coherent routes of \((\operatorname{oru(s)},\preceq)\). Let \(1\leq i<i^{\prime}\leq|\operatorname{s}\,|\) index two routes \(\operatorname{R}[w_{[i]}]\) and \(\operatorname{R}[w_{[i^{\prime}]}]\) in \(\Delta_{w}\). Since \(i<i^{\prime}\), we have that \(t_{a}(w_{[i]})\leq t_{a}(w_{[i^{\prime}]})\) for all \(a\in[n]\). Thus, for any vertex \(v_{n+1-a}\) that appears in both routes \(\operatorname{R}[w_{[i]}]\) and \(\operatorname{R}[w_{[i^{\prime}]}]\), we have that the incoming (respectively outgoing) edge of \(\operatorname{R}[w_{[i]}]\) precedes the incoming (respectively outgoing) edge of \(\operatorname{R}[w_{[i^{\prime}]}]\) for the order \(\preceq\). Hence, the routes \(\operatorname{R}[w_{[i]}]\) and \(\operatorname{R}[w_{[i^{\prime}]}]\) are coherent, and \(\Delta_{w}\) is a clique for the coherence relation. Moreover, since \(\Delta_{w}\) has \(|\operatorname{s}\,|+1=\dim(\mathcal{F}_{\operatorname{oru(s)}})+1\) elements, it is a maximal clique, and corresponds to a maximal simplex in the DKK triangulation of \(\mathcal{F}_{\operatorname{oru(s)}}\).
Now, suppose that \(w^{\prime}\) is a Stirling s-permutation distinct from \(w\). We need to check that \(\Delta_{w}\neq\Delta_{w^{\prime}}\). Suppose, without loss of generality, that the minimal index \(i\in[1,|\operatorname{s}\,|-1]\) such that \(w_{i}\neq w^{\prime}_{i}\) satisfies \(w_{i}<w^{\prime}_{i}\). Then \(\operatorname{R}[w^{\prime}_{[i]}]\) cannot belong to \(\Delta_{w}\). Indeed, if we denote by \(a\) the value of \(w_{i}\), we have that \(e^{a}_{t_{a}(w_{[i]})-1}\) is an edge of the route \(\operatorname{R}[w^{\prime}_{[i]}]\), but for any \(j>i\), \(t_{a}(w_{[j]})\geq t_{a}(w_{[i]})\), so the route \(\operatorname{R}[w_{[j]}]\) does not contain this edge. Thus, the map \(w\mapsto\Delta_{w}\) is an injection from Stirling s-permutations to maximal simplices of \(\operatorname{Triang}_{DKK}(\operatorname{oru(s)},\preceq)\).
Then, it follows from the bijection between s-decreasing trees and maximal simplices of \(\operatorname{Triang}_{DKK}(\operatorname{oru(s)},\preceq)\) (Remark 3.8) and the bijection between s-decreasing trees and Stirling s-permutations (Section 2) that this injection is a bijection.
#### 3.2.2. The (unoriented) Hasse diagram of the \(\operatorname{s}\)-weak order
We show that the graph dual to the triangulation \(\operatorname{Triang}_{DKK}(\operatorname{oru(s)},\preceq)\) coincides with the (unoriented) Hasse diagram of the s-weak order.
**Theorem 3.10**.: _Let \(s=(s_{1},\ldots,s_{n})\) be a composition. Let \(w\) and \(w^{\prime}\) be two Stirling \(\operatorname{s}\)-permutations. There is a cover relation between \(w\) and \(w^{\prime}\) in the \(\operatorname{s}\)-weak order if and only if the simplices \(\Delta_{w}\) and \(\Delta_{w^{\prime}}\) are adjacent in \(\operatorname{Triang}_{DKK}(\operatorname{oru(s)},\preceq)\)._
Figure 10 shows the graph dual to the DKK triangulation of \(\mathcal{F}_{\operatorname{oru(s)}}\) for \(s=(1,2,1)\), which corresponds to the (unoriented) Hasse diagram of the \((1,2,1)\)-weak order. Note that in this figure we are omitting the routes \(\operatorname{R}[w_{[0]}]\) and \(\operatorname{R}[w_{[|\operatorname{s}|]}]\) since both appear in \(\Delta_{w}\) for every \(w\in\mathcal{W}_{(1,2,1)}\).
Proof.: Suppose that \(w^{\prime}\) is obtained from \(w\) by a transposition along the ascent \((a,c)\). It follows from Corollary 2.2 that \(w=u_{1}B_{a}cu_{2}\) and \(w^{\prime}=u_{1}cB_{a}u_{2}\), where \(B_{a}\) is the \(a\)-block of \(w\). We denote by \(\ell(u)\) the length of a word \(u\). For all \(i\in[0,\ell(u_{1})]\) and \(i\in[\ell(u_{1})+\ell(B_{a})+1,|\operatorname{s}\,|]\), the routes \(\operatorname{R}[w_{[i]}]\) and \(\operatorname{R}[w^{\prime}_{[i]}]\) are equal since \(w_{[i]}\) and \(w^{\prime}_{[i]}\) have the same letters. For all \(i\in[\ell(u_{1})+1,\ell(u_{1})+\ell(B_{a})-1]\), the routes \(\operatorname{R}[w_{[i]}]\) and \(\operatorname{R}[w^{\prime}_{[i+1]}]\) are equal as well. Indeed, for such \(i\) we have that \(t_{b}(w_{[i]})=t_{b}(w^{\prime}_{[i+1]})\) for all \(b\in[n]\setminus\{c\}\), and since \(0<t_{a}(w_{[i]})<s_{a}\) (because we are reading the substring \(B_{a}\)) with \(a<c\), the value of \(t_{c}(w_{[i]})\) (respectively \(t_{c}(w^{\prime}_{[i+1]})\)) does not play a role in the route \(\operatorname{R}[w_{[i]}]\) (respectively \(\operatorname{R}[w^{\prime}_{[i+1]}]\)). Hence, the vertices of \(\Delta_{w}\) and \(\Delta_{w^{\prime}}\) differ only in one element: \(\operatorname{R}[w_{[\ell(u_{1})+\ell(B_{a})]}]\in\Delta_{w}\) corresponding to the prefix \(u_{1}B_{a}\) of \(w\), and \(\operatorname{R}[w^{\prime}_{[\ell(u_{1})+1]}]\in\Delta_{w^{\prime}}\) corresponding to the prefix \(u_{1}c\) of \(w^{\prime}\). This means that the corresponding simplices in the DKK triangulation of \(\mathcal{F}_{\operatorname{oru(s)}}\) share a common facet.
Reciprocally, suppose that the vertices of \(\Delta_{w}\) and \(\Delta_{w^{\prime}}\) differ only in one element. We denote \(u_{1}\) the longest common prefix of \(w\) and \(w^{\prime}\). We denote \(a:=w_{\ell(u_{1})+1}\), \(c:=w^{\prime}_{\ell(u_{1})+1}\) and suppose that \(a<c\). Let \(B_{a}\) be the \(a\)-block of \(w\). Then:
* The substring \(u_{1}\) contains no occurrence of \(a\), because otherwise there would be a subword \(aca\) in \(w^{\prime}\), which contradicts the 121-pattern avoidance.
* The route \(\mathrm{R}[w_{[\ell(u_{1})+\ell(B_{a})]}]\), corresponding to the prefix \(u_{1}B_{a}\) of \(w\), is in \(\Delta_{w}\) but not in \(\Delta_{w^{\prime}}\).
* The route \(\mathrm{R}[w^{\prime}_{[\ell(u_{1})+1]}]\), corresponding to the prefix \(u_{1}c\) of \(w^{\prime}\), is in \(\Delta_{w^{\prime}}\) but not in \(\Delta_{w}\).
Thus, the only way for \(\Delta_{w}\) and \(\Delta_{w^{\prime}}\) to differ in exactly these elements is that \(w=u_{1}B_{a}cu_{2}\) and \(w^{\prime}=u_{1}cB_{a}u_{2}\), where \(u_{2}\) is their longest common suffix. This means that there is an s-tree rotation along the ascent \((a,c)\) between \(w\) and \(w^{\prime}\).
In this situation, we will say that the common facet of \(\Delta_{w}\) and \(\Delta_{w^{\prime}}\) is associated to the transposition of \(w\) along \((a,c)\). Note that such facets are exactly the interior facets (codimension-1 simplices) of \(\mathrm{Triang}_{DKK}(\mathrm{oru(s)},\preceq)\).
Figure 10. The s-permutahedron for the case \(\mathrm{s}=(1,2,1)\). The vertices are indexed by the following combinatorial objects: s-decreasing trees, Stirling s-permutations, maximal cliques of routes (omitting \(\mathrm{R}[w_{[0]}]\) and \(\mathrm{R}[w_{[|\,\mathrm{s}\,|]}]\)), and integer \(\mathbf{d}\)-flows (in red on the topmost graph). They correspond to simplices or maximal mixed cells in our first and second realizations.
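Theorem 3.10 can also be checked computationally for small cases. The following self-contained sketch (illustrative only; names are ours) generates all maximal cliques for \(\mathrm{s}=(1,2,1)\) and counts the pairs of simplices that share a facet; it finds 8 vertices and 9 adjacent pairs, matching the Hasse diagram shown in Figure 10.

```python
from itertools import combinations, permutations

def is_121_avoiding(w):
    return all(not (w[i] == w[k] and w[j] > w[i])
               for i in range(len(w))
               for j in range(i + 1, len(w))
               for k in range(j + 1, len(w)))

def stirling_s_permutations(s):
    word = [i + 1 for i, m in enumerate(s) for _ in range(m)]
    return sorted({w for w in permutations(word) if is_121_avoiding(w)})

def route_of_prefix(s, u):
    n = len(s)
    t = {a: u.count(a) for a in range(1, n + 1)}
    c = next((a for a in range(1, n + 1) if 0 < t[a] < s[a - 1]), n + 1)
    t[n + 1] = 1
    return tuple((a, t[a]) for a in range(c, 0, -1))

def maximal_clique(s, w):
    return {route_of_prefix(s, w[:i]) for i in range(len(w) + 1)}

s = [1, 2, 1]
perms = stirling_s_permutations(s)
cliques = {w: maximal_clique(s, w) for w in perms}
# two maximal simplices are adjacent when they share |s| routes, i.e. a common facet
adjacent = [(v, w) for v, w in combinations(perms, 2)
            if len(cliques[v] & cliques[w]) == sum(s)]
print(len(perms), "vertices and", len(adjacent), "dual edges")   # 8 vertices and 9 dual edges
```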
#### 3.2.3. Higher faces of the \(\mathrm{s}\)-permutahedron
We show that the faces of the \(\mathrm{s}\)-permutahedron other than vertices and edges are also encoded in the triangulation \(\mathrm{Triang}_{DKK}(\mathrm{oru}(\mathrm{s}),\preceq)\), after proving several technical results.
We say that a simplex of \(\mathrm{Triang}_{DKK}(\mathrm{oru}(\mathrm{s}),\preceq)\) is _interior_ if it is not contained in the boundary of the polytope \(\mathcal{F}_{\mathrm{oru}(\mathrm{s})}\). Otherwise it is in the boundary of \(\mathrm{Triang}_{DKK}(\mathrm{oru}(\mathrm{s}),\preceq)\).
**Lemma 3.11**.: _Let \(w\) be a Stirling \(\mathrm{s}\)-permutation. Let \(\mathrm{R}=\mathrm{R}(k,t,\delta)\) be a route of \(\mathrm{oru}(\mathrm{s})\). Then, \(\mathrm{R}\) is a vertex of \(\Delta_{w}\) if and only if the multiset of inversions of \(w\) satisfies the following inequalities:_
1. \(\#_{w}(k,i)\geq t\) _for all_ \(1\leq i<k\) _such that_ \(\delta_{i}=0\)_,_
2. \(\#_{w}(k,i)\leq t\) _for all_ \(1\leq i<k\) _such that_ \(\delta_{i}=1\)_,_
3. \(\#_{w}(j,i)=0\) _for all_ \(1\leq i<j<k\) _such that_ \((\delta_{i},\delta_{j})=(1,0)\)_,_
4. \(\#_{w}(j,i)=s_{j}\) _for all_ \(1\leq i<j<k\) _such that_ \((\delta_{i},\delta_{j})=(0,1)\)_._
We say that the route \(\mathrm{R}\)_implies_ these inequalities on inversion sets.
Proof.: (\(\Rightarrow\)) Suppose that \(\mathrm{R}\) is a vertex of \(\Delta_{w}\). It means that \(\mathrm{R}=\mathrm{R}[w_{[r]}]\) for a certain \(r\in[0,|\mathrm{s}\,|]\) and it conveys information on the prefix \(u=w_{[r]}\). More precisely, \(t\) is the number of occurrences of \(k\) in \(u\), and for all \(1\leq i<k\), the number of occurrences of \(i\) in \(u\) is either \(0\) if \(\delta_{i}=0\) or \(s_{i}\) if \(\delta_{i}=1\). This gives the announced inequalities on the inversion set of \(w\).
(\(\Leftarrow\)) Reciprocally, suppose that the inversion set of \(w\) satisfies these inequalities. Then there is a prefix \(u=w_{[r]}\) of \(w\) that contains no occurrence of \(i\) for \(i\) such that \(\delta_{i}=0\), all \(s_{i}\) occurrences of \(i\) for \(i\) such that \(\delta_{i}=1\), and exactly \(t\) occurrences of \(k\). Then this prefix is exactly the one associated to the route \(\mathrm{R}[u]=\mathrm{R}(k,t,\delta)=\mathrm{R}\).
Let \(w\) be a Stirling \(\mathrm{s}\)-permutation, \(A\) a subset of ascents of \(w\), and \(1\leq a<c\leq n\) such that \(\#_{w}(c,a)<s_{c}\). We say that the pair \((a,c)\) is _A-dependent_ in \(w\) if there is a sequence \(a\leq b_{1}<\ldots<b_{k}<b_{k+1}=c\) such that:
1. \(b_{1}\) is the greatest letter strictly smaller than \(c\) such that the \(a\)-block is contained in the \(b_{1}\)-block,
2. for all \(i\in[k-1]\), the \(b_{i}\)-block is directly followed by the \(b_{i+1}\)-block,
3. the \(b_{k}\)-block is directly followed by an occurrence of \(c\),
4. \((b_{i},b_{i+1})\in A\) for all \(i\in[k]\).
Note that in particular, every ascent \((a,c)\) in \(A\) is \(A\)-dependent taking \(k=1\), \(b_{1}=a\) and \(b_{2}=c\).
For example for the Stirling \(\mathrm{s}\)-permutation \(w=33725455716\) and \(A=\{(2,5),(5,7),(1,6)\}\) there is an \(A\)-dependency between \(2\) and \(7\) given by the sequence \(b_{1}=2\), \(b_{2}=5\) and \(b_{3}=7\) but there is no \(A\)-dependency between \(2\) and \(6\) since the second occurrence of \(7\) does not form a block and it is followed by \(1\).
**Proposition 3.12**.: _Let \(w\) be a Stirling \(\mathrm{s}\)-permutation and \(A\) a subset of its ascents. Then we have that_
\[\#_{w+A}(c,a)=\begin{cases}\#_{w}(c,a)+1&\text{if $(a,c)$ is $A$-dependent in $w$}\\ \#_{w}(c,a)&\text{otherwise.}\end{cases}\]
_Example 3.13_.: If \(w=33725455716\) and \(A=\{(2,5),(5,7),(1,6)\}\), then the resulting Stirling \(\mathrm{s}\)-permutation is \(w+A=33775245561\) and the pairs whose inversion number has been increased by \(1\) are \(\{(2,5),(2,7),(4,7),(5,7),(1,6)\}\).
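The notion of \(A\)-dependency is completely combinatorial and can be tested mechanically. The sketch below (illustrative only; names are ours) follows the definition verbatim. It reads \(\#_{w}(c,a)\) as the number of occurrences of \(c\) preceding the \(a\)-block, which is our reading of the inversion sets of Section 2 and is stated here as an assumption; with this reading it recovers exactly the five increased pairs of Example 3.13.

```python
def block(w, x):
    """First and last positions of the x-block (0-indexed)."""
    idx = [i for i, y in enumerate(w) if y == x]
    return idx[0], idx[-1]

def is_A_dependent(w, s, A, a, c):
    first_a, last_a = block(w, a)
    if sum(1 for y in w[:first_a] if y == c) >= s[c - 1]:
        return False                          # precondition #_w(c,a) < s_c fails
    # condition (1): b_1 is the greatest letter < c whose block contains the a-block
    b = max(x for x in range(a, c)
            if block(w, x)[0] <= first_a and last_a <= block(w, x)[1])
    while True:
        nxt = block(w, b)[1] + 1
        if nxt >= len(w):
            return False
        if w[nxt] == c:                       # condition (3), then condition (4) for (b_k, c)
            return (b, c) in A
        b2 = w[nxt]
        # conditions (2) and (4): the next block starts right here, increases towards c,
        # and the pair is an ascent of A
        if not (b < b2 < c) or block(w, b2)[0] != nxt or (b, b2) not in A:
            return False
        b = b2

s = [1, 1, 2, 1, 3, 1, 2]
w = [3, 3, 7, 2, 5, 4, 5, 5, 7, 1, 6]        # w = 33725455716
A = {(2, 5), (5, 7), (1, 6)}
n = len(s)
print(sorted((a, c) for c in range(2, n + 1) for a in range(1, c)
             if is_A_dependent(w, s, A, a, c)))
# [(1, 6), (2, 5), (2, 7), (4, 7), (5, 7)]  -- the pairs of Example 3.13
```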
Proof.: Let \(I\) be the multiset of inversions defined by
\[\#_{I}(c,a):=\begin{cases}\#_{w}(c,a)+1&\text{if $(a,c)$ is $A$-dependent in $w$}\\ \#_{w}(c,a)&\text{otherwise.}\end{cases}\]
Note that if \((a,c)\) is \(A\)-dependent and \(d>c\) this implies that \(\#_{I}(d,c)=\#_{I}(d,a)\). Indeed, in this case either both \((a,d)\) and \((c,d)\) are \(A\)-dependent or both are not.
We will prove that \(I=\operatorname{inv}(w+A)\) by showing that \(I\) is the smallest transitive multiset of inversions that contains \(\operatorname{inv}(w)+A\). Following [8, Definition 2.3] or [9, Definition 1.14], by _transitivity_ of \(I\) we understand that if \(a<b<c\), then \(\#_{I}(b,a)=0\) or \(\#_{I}(c,a)\geq\#_{I}(c,b)\).
First, it is clear that \(I\) contains \(\operatorname{inv}(w)+A\) since every pair in \(A\) is \(A\)-dependent.
Let us show that any transitive multiset of inversions \(I^{\prime}\) that contains \(\operatorname{inv}(w)+A\) necessarily contains \(I\). Note that for such \(I^{\prime}\) we have \(\#_{I^{\prime}}(c,a)\geq\#_{w}(c,a)\) for all pairs \((a,c)\). Let \((a,c)\) be \(A\)-dependent with an associated sequence \(a\leq b_{1}<\ldots<b_{k}<b_{k+1}=c\) and we proceed by induction on \(k\).
If \(k=1\), we have that:
* either \(b_{1}=a\) and \((a,c)\) is in \(A\), and directly \(\#_{I^{\prime}}(c,a)\geq\#_{w}(c,a)+1\),
* or \(a<b_{1}<c\), in which case \(\#_{I^{\prime}}(b_{1},a)\geq\#_{w}(b_{1},a)>0\) since the \(a\)-block is contained in the \(b_{1}\)-block in \(w\). We get that \[\#_{I^{\prime}}(c,a)\geq\#_{I^{\prime}}(c,b_{1})\geq\#_{w}(c,b_{1})+1=\#_{w}(c,a)+1 \tag{3.1}\] where the first inequality comes from transitivity, the second from the previous case since \((b_{1},c)\in A\), and the last equality because the \(a\)-block is contained in the \(b_{1}\)-block, so \(\#_{w}(c,a)=\#_{w}(c,b_{1})\).
Suppose that \(k>1\). Then the induction hypothesis implies that \(\#_{I^{\prime}}(b_{k},a)\geq\#_{w}(b_{k},a)+1>0\). Just like in equation (3.1), applying transitivity to \(a<b_{k}<c\) and using that \((b_{k},c)\in A\) and \((a,b_{k})\) is \(A\)-dependent (so there cannot be any occurrence of \(c\) between \(a\) and \(b_{k}\)) gives us the inequalities \(\#_{I^{\prime}}(c,a)\geq\#_{I^{\prime}}(c,b_{k})\geq\#_{w}(c,b_{k})+1=\#_{w} (c,a)+1\).
Finally, we check that \(I\) is indeed transitive. Let \(1\leq a<b<c\leq n\) and \(\#_{I}(b,a)>0\). We need to check that \(\#_{I}(c,a)\geq\#_{I}(c,b)\).
**Case 1:** if \(\#_{w}(b,a)=0\), then \((a,b)\) is \(A\)-dependent and \(\#_{I}(c,a)=\#_{I}(c,b)\).
**Case 2:** Suppose that \(\#_{w}(b,a)>0\).
**Case 2.1:** If \(\#_{I}(c,b)=\#_{w}(c,b)\), due to the inclusion \(\operatorname{inv}(w)\subset I\) and the transitivity of \(\operatorname{inv}(w)\) we have that \(\#_{I}(c,a)\geq\#_{w}(c,a)\geq\#_{w}(c,b)=\#_{I}(c,b)\).
**Case 2.2:** Suppose that \(\#_{I}(c,b)=\#_{w}(c,b)+1\), i.e. \((b,c)\) is \(A\)-dependent. If \(\#_{w}(c,a)\geq\#_{w}(c,b)+1\), we have \(\#_{I}(c,a)\geq\#_{w}(c,a)\geq\#_{w}(c,b)+1=\#_{I}(c,b)\). Otherwise, we have \(\#_{w}(c,a)=\#_{w}(c,b)=:i\). It follows from the assumption \(\#_{w}(b,a)>0\) that the \(a\)-block appears in \(w\) between the first occurrence of \(b\) and the \((i+1)\)-st occurrence of \(c\). This implies that \((a,c)\) is also \(A\)-dependent, with a corresponding sequence that is included in the one giving the \(A\)-dependency of \((b,c)\). The two \(A\)-dependencies together with the transitivity of \(w\) for \(a<b<c\) imply that \(\#_{I}(c,a)=\#_{w}(c,a)+1\geq\#_{w}(c,b)+1=\#_{I}(c,b)\).
Let \((w,A)\) be a face of \(\operatorname{Perm_{s}}\). We define \(\Delta_{(w,A)}\) to be the following intersection of facets of \(\Delta_{w}\):
\[\Delta_{(w,A)}:=\bigcap_{(a,c)\in A}\left\{\Delta_{w}\cap\Delta_{w^{\prime}} \,|\,w^{\prime}\text{ is the transposition of $w$ along $(a,c)$}\right\}, \tag{3.2}\]
and \(\Delta_{(w,A)}:=\Delta_{w}\) if \(A=\emptyset\).
Note that the \(|A|\) routes that are in \(\Delta_{w}\) and not in \(\Delta_{(w,A)}\) correspond to the prefixes of \(w\) that end at an ascent in \(A\).
**Lemma 3.14**.: _Let \((w,A)\) be a face of \(\text{Perm}_{\text{\rm s}}\) and \(w^{\prime}\) a Stirling \(\text{\rm s}\)-permutation. We denote by \([w,w+A]\) the interval of the \(\text{\rm s}\)-weak order defined by \(w\) and \(w+A\)._
_Then, \(\Delta_{(w,A)}\subseteq\Delta_{w^{\prime}}\) if and only if \(w^{\prime}\in[w,w+A]\)._
Proof.: Recall that the inversion set of \(w+A\) is described in Proposition 3.12 and that \(w^{\prime}\in[w,w+A]\) if and only if its inversion set satisfies that for all \(1\leq a<c\leq n\), \(\#_{w}(c,a)\leq\#_{w^{\prime}}(c,a)\leq\#_{w+A}(c,a)\). We show that these inequalities are exactly the ones implied by the union of routes that give vertices of \(\Delta_{(w,A)}\), in the sense of Lemma 3.11.
Let \((a,c)\) be a pair with \(\#_{w}(c,a)=t\). We have to show these three inequalities:
1. There is a route \(R\) in \(\Delta_{(w,A)}\) such that \(R\in\Delta_{w^{\prime}}\) implies the inequality \(\#_{w^{\prime}}(c,a)\geq t\). We can take the route that corresponds to the first prefix of \(w\) containing the \(t\)-th occurrence of \(c\) that does not end at an ascent in \(A\). Such a prefix cannot contain any \(a\) since \(a<c\).
2. There is a route \(R\) in \(\Delta_{(w,A)}\) such that \(\#_{w^{\prime}}(c,a)\leq t\) for all \(w^{\prime}\) with \(R\in\Delta_{w^{\prime}}\) if and only if \(\#_{w+A}(c,a)=t\), that is, the pair \((a,c)\) is not \(A\)-dependent. Indeed, this inequality is only implied by routes that contain the edges \(e_{t}^{c}\) and \(e_{s_{a}}^{a}\). Such routes in \(\Delta_{w}\) correspond to prefixes in \(w\) that contain the \(a\)-block and the \(t\)-th occurrence of \(c\) and that do not end inside a \(b\)-block for any \(b<c\). The pair \((a,c)\) is \(A\)-dependent exactly when all such prefixes end at an ascent in \(A\), so the corresponding routes are removed in \(\Delta_{(w,A)}\).
3. If \(t+1<s_{c}\) and \(\#_{w+A}(c,a)=t+1\), i.e. \((a,c)\) is an \(A\)-dependent pair, there is a route \(R\) in \(\Delta_{(w,A)}\) such that \(R\in\Delta_{w^{\prime}}\) implies \(\#_{w^{\prime}}(c,a)\leq t+1\). Indeed, we can take the route that corresponds to the prefix of \(w\) that ends at the \((t+1)\)-th occurrence of \(c\). Since \(t+1<s_{c}\), \(c\) appears afterwards so this prefix does not end at an ascent. (Note that if \(t+1=s_{c}\) there is no need to check that \(\#_{w^{\prime}}(c,a)\leq s_{c}\)).
Lemma 3.14 leads to the following alternative characterization of \(\Delta_{(w,A)}\).
**Corollary 3.15**.: \(\Delta_{(w,A)}=\bigcap_{w^{\prime}\in[w,w+A]}\Delta_{w^{\prime}}\)_._
**Lemma 3.16**.: _If \(C\) is a clique of routes of \((\operatorname{\mathrm{oru}(s)},\preceq)\) that contains \(\operatorname{R}(n+1,1,(0)^{n}),\operatorname{R}(n+1,1,(1)^{n})\) and at least one route that starts with \(e\) for each source-edge \(e\) that is not \((v_{-1},v_{0})\), then \(\Delta_{C}\) is in the interior of \(\text{Triang}_{DKK}(\operatorname{\mathrm{oru}(s)},\preceq)\)._
Proof.: Suppose that \(\Delta_{C}\) is a boundary simplex of \(\text{Triang}_{DKK}(\operatorname{\mathrm{oru}(s)},\preceq)\). Then it is contained in a facet that is in the boundary of \(\text{Triang}_{DKK}(\operatorname{\mathrm{oru}(s)},\preceq)\). This facet corresponds to a clique of the form \(\Delta_{w}\setminus R\), where \(w\) is a Stirling \(\text{\rm s}\)-permutation and \(R\) is a route of \(\Delta_{w}\) that does not correspond to an ascent nor a descent of \(w\). Hence, either \(R\in\{\operatorname{R}(n+1,1,(0)^{n}),\operatorname{R}(n+1,1,(1)^{n})\}\), or \(R\) corresponds to a prefix \(w_{[i]}\) such that \(w_{i}=w_{i+1}\). In this case, suppose that \(w_{i}\) is the \(t\)-th occurrence of \(c\) in \(w\). Then \(R\) is the only route of \(\Delta_{w}\) that starts with the edge \(e_{t}^{c}\). In any case, since \(C\subseteq\Delta_{w}\setminus R\), it does not satisfy the condition of the lemma.
**Corollary 3.17**.: _Let \(w\) be a Stirling \(\text{\rm s}\)-permutation and \(A\) a subset of its ascents. Then \(\Delta_{(w,A)}\) is an interior simplex of \(\text{Triang}_{DKK}(\operatorname{\mathrm{oru}(s)},\preceq)\)._
Proof.: It is sufficient to show that \(\Delta_{(w,A)}\) contains \(\operatorname{R}(n+1,1,(0)^{n}),\operatorname{R}(n+1,1,(1)^{n})\) and at least one route that starts with \(e\) for each source-edge \(e\) that is not \((v_{-1},v_{0})\).
First, it is clear that \(\mathrm{R}(n+1,1,(0)^{n})\) and \(\mathrm{R}(n+1,1,(1)^{n})\) are in \(\Delta_{(w,A)}\) since they do not correspond to ascents in \(w\).
Let \(c\in[n]\) and \(t\in[s_{c}-1]\). Then the prefix of \(w\) that ends with the \(t\)-th occurrence of \(c\) corresponds to a route \(R\) that contains the source-edge \(e_{t}^{c}\). Moreover, this prefix cannot end at an ascent of \(w\), since there are still occurrences of \(c\) afterwards. Thus the route \(R\) is not removed from \(\Delta_{w}\) to \(\Delta_{(w,A)}\).
**Theorem 3.18**.: _The map \((w,A)\mapsto\Delta_{(w,A)}\) induces a poset isomorphism between the face poset of the \(\mathrm{s}\)-permutahedron \(\mathrm{Perm}_{\mathrm{s}}\) and the set of interior simplices of \(\mathrm{Triang}_{DKK}(\mathrm{oru(s)},\preceq)\) ordered by reverse inclusion._
Proof.: The fact that all \(\Delta_{(w,A)}\) are interior simplices of \(\mathrm{Triang}_{DKK}(\mathrm{oru(s)},\preceq)\) is stated in Corollary 3.17. The injectivity follows from Lemma 3.14.
Let us show the surjectivity. Let \(F\) be an interior simplex of \(\mathrm{Triang}_{DKK}(\mathrm{oru(s)},\preceq)\). Let \(w\) be a Stirling \(\mathrm{s}\)-permutation that is minimal for the \(\mathrm{s}\)-weak order with respect to the condition that \(F\subseteq\Delta_{w}\). Then, \(F\) is an intersection of facets of \(\Delta_{w}\). These facets correspond to certain transpositions involving \(w\). We denote by \(A\) the set of ascents corresponding to these transpositions. The minimality of \(w\) implies that all elements in \(A\) are ascents (and not descents) of \(w\). Thus, \(F=\Delta_{(w,A)}\), and the choice of \(w\) was unique.
Finally, let \(w,w^{\prime}\) be Stirling \(\mathrm{s}\)-permutations and \(A,A^{\prime}\) subsets of their respective ascents. Lemma 3.14 implies that \([w,w+A]\subseteq[w^{\prime},w^{\prime}+A^{\prime}]\) if and only if \(\Delta_{(w^{\prime},A^{\prime})}\subseteq\Delta_{(w,A)}\), which proves that the map is a poset isomorphism.
Just as the minimal elements of the face poset of \(\mathrm{Perm}_{\mathrm{s}}\) are characterized as the maximal cliques of \(\mathrm{Triang}_{DKK}(\mathrm{oru(s)},\preceq)\), the maximal elements of the face poset also have an explicit characterization in terms of cliques.
**Corollary 3.19**.: _A simplex \(\Delta_{C}\) of \(\mathrm{Triang}_{DKK}(\mathrm{oru(s)},\preceq)\) corresponds with a maximal interior face of \(\mathrm{Perm}_{\mathrm{s}}\) if and only if \(C\) is a clique of size \(|s|-n+2\) that satisfies the following:_
* \(\mathrm{R}[w_{[0]}]\) _and_ \(\mathrm{R}[w_{[|s|]}]\) _are in_ \(C\)_, and_
* _each source-edge of_ \(\mathrm{oru(s)}\) _that is different from_ \((v_{-1},v_{0})\) _is contained in exactly one route in_ \(C\)_._
Proof.: We first note that for each \(i\in[n]\), the graph \(\mathrm{oru(s)}\) has \(s_{i}-1\) source-edges, so \(\mathrm{oru(s)}\) indeed has \(\sum_{i=1}^{n}(s_{i}-1)=|s|-n\) source-edges that are not \((v_{-1},v_{0})\). By Lemma 3.16, a clique \(C\) with the above stated properties corresponds with a maximal interior face of \(\mathrm{Perm}_{\mathrm{s}}\).
Conversely, let \((w,A)\) be a maximal face of \(\mathrm{Perm}_{\mathrm{s}}\). We will check that \(C=\Delta_{(w,A)}\) satisfies the specified properties. Let \(N\subset[0,|s|]\) denote the set of non-ascent positions in \(w\), so that \(C=\{\mathrm{R}[w_{[j]}]\,|\,j\in N\}\).
Observe that since \(w\) is \(121\)-avoiding, if it has \(n-1\) ascents, then the ascents are of the form \((i,c_{i})\) where \(i<c_{i}\) for each \(i\in[n-1]\). Moreover, for each \(i\in[n-1]\), it is the \(s_{i}\)-th occurrence of \(i\) in \(w\) which produces an ascent pair in \(w\). Therefore, the set \(N\backslash\{0,|s|\}\) indexes the first \(s_{i}-1\) occurrences of \(i\) in \(w\).
Now suppose \(j\in N\backslash\{0,|s|\}\) is a non-ascent position of \(w\), so that \(\mathrm{R}[w_{[j]}]\in C\). We denote by \(a\) the letter \(w_{j}\). If \(w_{j}\) is the \(k\)-th occurrence of \(a\) in \(w\) for some \(k\in[s_{a}-1]\), then the route \(\mathrm{R}[w_{[j]}]\) contains the proper source-edge \(e_{k}^{a}\). Lastly, since \(|N\backslash\{0,|s|\}|=|s|-n\), \(C\) has the desired properties.
#### 3.2.4. On the \(h\) and \(h^{*}\)-polynomials
Given a (simplicial, polytopal) complex \(\Delta\) one can define its \(f\)- and \(h\)-polynomials as follows. The _\(f\)-polynomial_ of \(\Delta\) is defined as
\[f_{\Delta}(x)=\sum_{F\in\Delta}x^{\dim(F)},\]
where the sum is over all the faces \(F\) of \(\Delta\). Then its _\(h\)-polynomial_ is defined by the relation
\[f(x)=h(x+1).\]
Let \(\mathcal{W}_{\mathrm{s}}(k)\) denote the subset of Stirling s-permutations with \(k\) descents. We generalize the notion of the Eulerian polynomial and define the s_-order Eulerian polynomial_ by \(A_{\mathrm{s}}(x)=\sum_{k\geq 0}|\mathcal{W}_{\mathrm{s}}(k)|x^{k}\). This recovers the classical Eulerian polynomial when \(\mathrm{s}=(1,\ldots,1)\) and the second-order Eulerian polynomial (see for example [20, Section 6.2]) when \(\mathrm{s}=(2,\ldots,2)\) (see Footnote 3).
Footnote 3: Savage and Visontai [38] have previously defined a notion of s-Eulerian polynomials in the context of s-_lecture hall polytopes_ that is different from \(A_{\mathrm{s}}(x)\).
The \(f\)-polynomial of the s-permutahedron is
\[f_{\mathrm{Perm}_{\mathrm{s}}}(x) =\sum_{\begin{subarray}{c}w\in\mathcal{W}_{\mathrm{s}}\\ A\subseteq\mathrm{ASC}(w)\end{subarray}}x^{|A|}\] \[=\sum_{w\in\mathcal{W}_{\mathrm{s}}}(1+x)^{|\,\mathrm{ASC}(w)|}\] \[=\sum_{k\geq 0}|\mathcal{W}_{\mathrm{s}}(k)|(x+1)^{k}\] \[=A_{\mathrm{s}}(x+1),\]
where \(\mathrm{ASC}(w)\) denotes the set of ascents of a Stirling s-permutation \(w\).
In the second-to-last equality we used the fact that the number of elements in \(\mathcal{W}_{\mathrm{s}}\) with \(k\) ascents is equal to the number of elements in \(\mathcal{W}_{\mathrm{s}}\) with \(k\) descents, which can be seen by reading \(w\) in reverse.
**Proposition 3.20**.: _The \(h\)-polynomial of \(\mathrm{Perm}_{\mathrm{s}}\) is \(A_{\mathrm{s}}(x)\)._
_Example 3.21_.: As an example, the reader can compute the \(h\)-polynomial of \(\mathrm{Perm}_{\mathrm{s}}\) when \(\mathrm{s}=(1,2,1)\) from Figure 1 to get \(h_{\mathrm{Perm}_{\mathrm{s}}}(x)=1+5x+2x^{2}\).
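The polynomial \(A_{\mathrm{s}}(x)\) is also easy to compute by brute force for small \(\mathrm{s}\). The sketch below (illustrative only; a descent is taken to be a position \(i\) with \(w_{i}>w_{i+1}\)) recovers \(1+5x+2x^{2}\) for \(\mathrm{s}=(1,2,1)\).

```python
from itertools import permutations

def is_121_avoiding(w):
    return all(not (w[i] == w[k] and w[j] > w[i])
               for i in range(len(w))
               for j in range(i + 1, len(w))
               for k in range(j + 1, len(w)))

def s_order_eulerian(s):
    """Coefficient of x^k in A_s(x) = number of Stirling s-permutations with k descents."""
    word = [i + 1 for i, m in enumerate(s) for _ in range(m)]
    perms = {w for w in permutations(word) if is_121_avoiding(w)}
    coeffs = {}
    for w in perms:
        k = sum(1 for i in range(len(w) - 1) if w[i] > w[i + 1])
        coeffs[k] = coeffs.get(k, 0) + 1
    return dict(sorted(coeffs.items()))

print(s_order_eulerian([1, 2, 1]))   # {0: 1, 1: 5, 2: 2}, i.e. A_s(x) = 1 + 5x + 2x^2
```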
Ehrhart showed in [14] that the lattice point enumerator \(|tP\cap\mathbb{Z}^{d}|\) of a \(t\) dilated \(d\)-dimensional lattice polytope \(P\) is a polynomial in \(t\), known as the _Ehrhart polynomial_ of \(P\). In addition, he showed that the generating series has the form
\[1+\sum_{t\geq 1}|tP\cap\mathbb{Z}^{d}|x^{t}=\frac{h^{*}(x)}{(1-x)^{d+1}},\]
where \(h^{*}(x)\) is a polynomial of degree \(d\) known as the _\(h^{*}\)-polynomial_ of \(P\). This polynomial was shown by Stanley to have nonnegative coefficients [41] and it turns out that the \(h^{*}\)-polynomial coincides with the \(h\)-polynomial of a shellable unimodular triangulation (see for example [2]). A triangulation is said to be _shellable_ if its facets can be totally ordered \(F_{1},\ldots,F_{m}\) such that for any pair \(p<q\), there exists \(r<q\) satisfying \(|F_{r}\cap F_{q}|=|F_{q}|-1\) and \(F_{p}\cap F_{q}\subseteq F_{r}\cap F_{q}\).
**Theorem 3.22**.: _The \(h^{*}\)-polynomial of \(\mathcal{F}_{\mathrm{oru(s)}}\) is \(A_{\mathrm{s}}(x)\)._
Proof.: Similar to the proof of [3, Lemma 6.1], we will show that any linear extension of the s-weak order gives a shelling \(\Delta_{w^{(1)}},\ldots,\Delta_{w^{(|\mathcal{W}_{\mathrm{s}}|)}}\) of \(\mathrm{Triang}_{DKK}(\mathrm{oru(s)},\preceq)\).
Fix a linear extension of the s-weak order and let \(p<q\) such that \(w^{(p)}<w^{(q)}\) in the linear extension. Let \(r<q\) be such that \(w^{(p)}\wedge w^{(q)}<w^{(r)}\lessdot w^{(q)}\) in the s-weak order. It is clear that \(|\Delta_{w^{(r)}}\cap\Delta_{w^{(q)}}|=|\Delta_{w^{(q)}}|-1\) since \(\Delta_{w^{(r)}}\) and \(\Delta_{w^{(q)}}\) intersect in a facet. Now suppose that there is a route \(R=R(k,t,\delta)\) such that \(R\in\Delta_{w^{(p)}}\cap\Delta_{w^{(q)}}\) but \(R\not\in\Delta_{w^{(r)}}\). By Lemma 3.11 one of the following cases will occur:
1. for some \(1\leq i<k\) such that \(\delta_{i}=0\) we will have \(\#_{w^{(r)}}(k,i)<t\) but \(\#_{w^{(p)}}(k,i)\geq t\) and \(\#_{w^{(q)}}(k,i)\geq t\), or
2. for some \(1\leq i<k\) such that \(\delta_{i}=1\) we will have \(\#_{w^{(r)}}(k,i)>t\), but \(\#_{w^{(p)}}(k,i)\leq t\) and \(\#_{w^{(q)}}(k,i)\leq t\), or
3. for some \(1\leq i<j<k\) such that \((\delta_{i},\delta_{j})=(1,0)\) we have that \(\#_{w^{(r)}}(j,i)=s_{j}\) but \(\#_{w^{(p)}}(j,i)=\#_{w^{(q)}}(j,i)=0\), or
4. for some \(1\leq i<j<k\) such that \((\delta_{i},\delta_{j})=(0,1)\) we have that \(\#_{w^{(r)}}(j,i)=0\) but \(\#_{w^{(p)}}(j,i)=\#_{w^{(q)}}(j,i)=s_{j}\).
Any one of these cases will contradict the fact that
\[\mathrm{inv}(w^{(p)})\cap\mathrm{inv}(w^{(q)})=\mathrm{inv}(w^{(p)}\wedge w^{ (q)})\subseteq\mathrm{inv}(w^{(r)})\subseteq\mathrm{inv}(w^{(q)}).\]
Hence, \(R\in\Delta_{w^{(r)}}\) and we have \(\Delta_{w^{(p)}}\cap\Delta_{w^{(q)}}\subseteq\Delta_{w^{(r)}}\cap\Delta_{w^{ (q)}}\).
Now, let
\[V_{q}=\{v\in\Delta_{w^{(q)}}\mid v\text{ is a vertex in }\Delta_{w^{(q)}} \text{ and }\Delta_{w^{(q)}}\backslash v\subseteq\Delta_{w^{(p)}}\text{ for some }1\leq p<q\}.\]
Then the \(k\)-th coefficient of the \(h\)-polynomial is \(|\{q\mid|V_{q}|=k,1\leq q\leq|\mathcal{W}_{\mathrm{s}}|\}|\), which is the number of Stirling s-permutations that cover exactly \(k\) elements in the s-weak order and hence is the number of Stirling s-permutations with exactly \(k\) descents.
_Example 3.23_.: Again as an example, the reader can confirm that the \(h^{*}\)-polynomial of \(\mathcal{F}_{\mathrm{oru(s)}}\) when \(\mathrm{s}=(1,2,1)\) is \(h^{*}_{\mathcal{F}_{\mathrm{oru(s)}}}(x)=1+5x+2x^{2}\).
## 4. Cayley trick and mixed subdivisions
The Cayley trick allows us to give another geometric realization of the s-permutahedron as a fine mixed subdivision of an \(n\)-dimensional polytope (or even as an \((n-1)\)-dimensional one). This technique was first developed by Sturmfels in [43, Section 5] for coherent subdivisions and by Huber, Rambau, and Santos in [21] for arbitrary subdivisions. It was applied to flow polytopes by Mészáros and Morales in [29, Section 7]. We slightly modify their work for our special case of the flow polytope \(\mathcal{F}_{\mathrm{oru(s)}}\). To this end we need some basic definitions.
### Background on the Cayley trick
Consider the polytopes \(P_{1},\ldots,P_{k}\) in \(\mathbb{R}^{d}\). Their _Minkowski sum_ is the polytope
\[P_{1}+\ldots+P_{k}:=\{x_{1}+\cdots+x_{k}\,|\,x_{i}\in P_{i}\}.\]
For the Minkowski sum of \(k\) copies of a polytope \(P\) we simply write \(kP\). A _Minkowski cell_ of this Minkowski sum is a sum \(\sum B_{i}\) where \(B_{i}\) is the convex hull of a subset of vertices of \(P_{i}\). A _mixed subdivision_ of a Minkowski sum is a collection of Minkowski cells such that their union covers the Minkowski sum and they intersect properly as Minkowski sums (see [37, Definition 1.1]). A _fine mixed subdivision_ is a mixed subdivision that is minimal with respect to containment of its summands.
Let \(e_{1},\ldots,e_{k}\) be a basis of \(\mathbb{R}^{k}\). We call the polytope
\[\mathcal{C}(P_{1},\ldots,P_{k}):=\text{\it conv}(\{\mathbf{e_{1}}\}\times P_{1},\ldots,\{\mathbf{e_{k}}\}\times P_{k})\subseteq\mathbb{R}^{k}\times\mathbb{R }^{d}\]
the _Cayley embedding_ of \(P_{1},\ldots,P_{k}\).
**Proposition 4.1** (The Cayley trick [43, Section 5]).: _Let \(P_{1},\ldots,P_{k}\) be polytopes in \(\mathbb{R}^{d}\). The regular polytopal subdivisions (respectively triangulations) of \(\mathcal{C}(P_{1},\ldots,P_{k})\) are in bijection with the regular mixed subdivisions (respectively fine mixed subdivisions) of \(P_{1}+\ldots+P_{k}\)._
A concrete way to realize this bijection (see the “one-picture-proof” [21, Figure 1]) is to intersect a subdivision of \(\mathcal{C}(P_{1},\ldots,P_{k})\) with the subspace \((\frac{1}{k},\ldots,\frac{1}{k})\times\mathbb{R}^{d}\) of \(\mathbb{R}^{k}\times\mathbb{R}^{d}\). Up to dilation by the factor \(k\) we obtain a mixed subdivision of \(P_{1}+\ldots+P_{k}\).
For regular subdivisions, this also gives a way to obtain an admissible height function for a mixed subdivision of \(P_{1}+\ldots+P_{k}\) from an admissible height function for a subdivision of \(\mathcal{C}(P_{1},\ldots,P_{k})\), see Section 5.2.1.
_Remark 4.2_.: Note that the Cayley trick induces a poset isomorphism between the interior faces of the subdivision of \(\mathcal{C}(P_{1},\ldots,P_{k})\) and the interior faces of the corresponding mixed subdivision of \(P_{1}+\ldots+P_{k}\) (both sets of faces being ordered by inclusion).
### The sum of cubes realization
To apply the Cayley trick to our triangulation \(\operatorname{Triang}_{DKK}(\operatorname{oru(s)},\preceq)\) of the flow polytope \(\mathcal{F}_{\operatorname{oru(s)}}\), we need to describe it as the Cayley embedding of some lower-dimensional polytopes. Recall that \(\mathcal{F}_{\operatorname{oru(s)}}\) lives in the space of edges of the graph \(\operatorname{oru(s)}\). We parameterize this space as \(\mathbb{R}^{p}\times\mathbb{R}^{2n}\), where \(p=1+\sum_{i=1}^{n}(s_{i}-1)\) and \(\mathbb{R}^{p}\) corresponds to the space of source-edges and \(\mathbb{R}^{2n}\) to the space of bumps and dips (edges of \(\operatorname{oru}_{n}\), see Definition 3.4). Moreover, for all \(i\in[n]\) and for any point in \(\mathcal{F}_{\operatorname{oru(s)}}\), (i.e. a flow of \(\mathcal{F}_{\operatorname{oru(s)}}\)), we have that the sum of its coordinates along edges \(e_{0}^{i}\) and \(e_{s_{i}}^{i}\) is determined by the coordinates along the source-edges \(e_{t}^{k}\) for \(k\in[i+1,n+1]\), \(t\in[s_{k}-1]\). Thus, \(\mathcal{F}_{\operatorname{oru(s)}}\) is affinely equivalent to its projection on the space \(\mathbb{R}^{p}\times\mathbb{R}^{n}\) where \(\mathbb{R}^{n}\) corresponds to the space of edges \(e_{0}^{i}\) for \(i\in[n]\).
With this parameterization, the indicator vector of the route of \(\operatorname{oru(s)}\) denoted \(\operatorname{R}(k,t,\delta)\) (as in the discussion after Definition 3.4) with \(k\in[n+1]\), \(t\in[s_{k}-1]\) and \(\delta\in\{0,1\}^{k-1}\) is:
\[e_{t}^{k}\times\sum_{i\in[k-1],\,\delta_{i}=0}e_{0}^{i}.\]
Thus, if we denote by \(\square_{k-1}\) these \((k-1)\)-dimensional hypercubes with vertices \(\{0,1\}^{k-1}\times 0^{n-k+1}\) embedded in \(\mathbb{R}^{n}\), we see that \(\mathcal{F}_{\operatorname{oru(s)}}\) is the Cayley embedding of \(\square_{n}\) and \(\square_{k-1}\) repeated \(s_{k}-1\) times for \(k\in[n]\).
We denote by \(\text{\it Subdiv}_{\square}(\text{s})\) the fine mixed subdivision of the Minkowski sum of hypercubes \(\square_{n}+\sum_{i=1}^{n}(s_{i}-1)\square_{i-1}\subseteq\mathbb{R}^{n}\) obtained by intersecting the triangulation \(\operatorname{Triang}_{DKK}(\operatorname{oru(s)},\preceq)\) (projected onto \(\mathbb{R}^{p}\times\mathbb{R}^{n}\)) with the subspace \(\left\{\frac{1}{p}\right\}^{p}\times\mathbb{R}^{n}\).
The following theorem follows directly from the Cayley trick (Proposition 4.1), and the isomorphism between the face poset of \(\operatorname{Perm}_{\text{s}}\) and the interior simplices of the DKK triangulation given in Theorem 3.18.
**Theorem 4.3**.: _The face poset of the \(\mathrm{s}\)-permutahedron \(\mathit{Perm}_{\mathrm{s}}\) is isomorphic to the set of interior cells of \(\mathit{Subdiv}_{\square}(\mathrm{s})\) ordered by reverse inclusion._
In particular, the \(\mathrm{s}\)-decreasing trees are in bijection with the maximal cells of \(\mathrm{Subdiv}_{\square}(\mathrm{s})\).
_Remark 4.4_.: We can use a different parameterization of the space where \(\mathcal{F}_{\mathrm{oru}(\mathrm{s})}\) lives by considering the cube \(\square_{n}\) as the Cayley embedding of two hypercubes \(\square_{n-1}\), or equivalently intersect \(\mathbb{R}^{n}\) with the hyperplane \(x_{n}=\frac{1}{2}\). This allows us to lower the dimension and obtain a fine mixed subdivision of the Minkowski sum of hypercubes \((s_{n}+1)\square_{n-1}+\sum_{i=1}^{n-1}(s_{i}-1)\square_{i-1}\). We use this representation for the figures.
Figure 11(a) shows the mixed cell corresponding to the Stirling \((1,2,1)\)-permutation \(w=3221\), obtained from the clique \(\Delta_{w}\) with the Cayley trick. Figure 11(b) shows the entire mixed subdivision for the case \(s=(1,2,1)\). Both figures are represented in the coordinate system \((e_{0}^{2},e_{0}^{3})\). In Figure 10 the dual graph of the cells of this mixed subdivision is portrayed with edges oriented perpendicular to each inner wall.
## 5. Intersection of tropical hypersurfaces
In the two realizations that we provided in Sections 3.2 and 4.2, the \(\mathrm{s}\)-decreasing trees index the maximal cells of a polytopal complex. However, Conjecture 1.2 asks for a polytopal complex where the \(\mathrm{s}\)-decreasing trees index the vertices.
In this section, we explain how to dualize our previous realizations in order to obtain such a polytopal realization and fully answer the conjecture for strict compositions. Tropical geometry offers a nice setting to dualize regular polyhedral subdivisions, and it moreover behaves well with respect to the Cayley trick.
### Background on tropical dualization
This section is based on the work of Joswig in [25] and [24, Chapter 1].
Figure 11. (a) Summands of the Minkowski cell corresponding to \(w=3221\) together with their corresponding routes in the clique \(\Delta_{w}\). (b) Mixed subdivision of \(2\square_{2}+\square_{1}\) corresponding dually to the \((1,2,1)\)-permutahedron. The cells are numbered according to Figure 10. The highlighted cell in blue corresponds to \(w=3221\) as obtained in (a).
Let \(\mathcal{A}=\{\mathbf{a}^{1},\ldots,\mathbf{a}^{m}\}\) be a point configuration in \(\mathbb{R}^{d}\) with integer coordinates, and \(\mathcal{S}\) a subdivision of \(\mathcal{A}\).
The subdivision \(\mathcal{S}\) is said to be _regular_ if there is a function \(\mathrm{h}:[m]\to\mathbb{R},i\mapsto\mathrm{h}^{i}\) such that the faces of \(\mathcal{S}\) are the images of the lower faces of the lift of \(\mathcal{A}\) (the polytope with vertices \((\mathbf{a}^{i},\mathrm{h}^{i})\in\mathbb{R}^{d+1}\) for \(i\in[m]\)) by the projection that omits the last coordinate. In this case, the function \(\mathrm{h}\) is called an _admissible height function_ for \(\mathcal{S}\).
Such a point configuration together with a height function \(\mathrm{h}\) is associated to the _tropical polynomial_ (in the min-plus algebra):
\[F(\mathbf{x})=\bigoplus_{i\in[m]}h^{i}\odot\mathbf{x}^{\mathbf{a}^{i}}=\min \left\{h^{i}+\langle\mathbf{a}^{i},\mathbf{x}\rangle\,|\,i\in[m]\right\},\]
where \(\mathbf{x}\in\mathbb{R}^{d}\) and \(\langle\cdot,\cdot\rangle\) denotes the usual scalar product in \(\mathbb{R}^{d}\).
The _tropical hypersurface defined by \(F\)_, or _vanishing locus of \(F\)_ is
\[\mathcal{T}(F):=\left\{\mathbf{x}\in\mathbb{R}^{d}\,|\,\text{the minimum of $F(\mathbf{x})$ is attained at least twice}\right\}.\]
It is the image of the codimension-2-skeleton of the _dome_
\[\mathcal{D}(F)=\left\{(\mathbf{x},y)\in\mathbb{R}^{d+1}\mid\mathbf{x}\in \mathbb{R}^{d},\,y\in\mathbb{R},\,y\leq F(\mathbf{x})\right\}\]
under the orthogonal projection that omits the last coordinate [24, Corollary 1.6]. The _cells_ of \(\mathcal{T}(F)\) are the projections of the faces of \(\mathcal{D}(F)\) (here we include the regions of \(\mathbb{R}^{d}\) delimited by \(\mathcal{T}(F)\) as its \(d\)-dimensional cells; in fact we are considering the normal complex \(NC(F)\) defined in [24, after Example 1.7]).
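Evaluating \(F\) and testing membership in \(\mathcal{T}(F)\) only amounts to comparing the affine forms \(\mathrm{h}^{i}+\langle\mathbf{a}^{i},\mathbf{x}\rangle\). The toy sketch below (illustrative only; the point configuration and heights are ours) makes this concrete.

```python
def tropical_eval(points, heights, x, tol=1e-9):
    """Return F(x) = min_i (h^i + <a^i, x>) and whether x lies on T(F),
    i.e. whether the minimum is attained at least twice."""
    values = [h + sum(ak * xk for ak, xk in zip(a, x))
              for a, h in zip(points, heights)]
    m = min(values)
    return m, sum(1 for v in values if v <= m + tol) >= 2

# The tropical polynomial min(0, x_1, x_2): a tropical line with vertex at the origin.
points = [(0, 0), (1, 0), (0, 1)]
heights = [0, 0, 0]
print(tropical_eval(points, heights, (0.0, 0.0)))   # (0, True): on the tropical hypersurface
print(tropical_eval(points, heights, (1.0, 2.0)))   # (0, False): the minimum is attained only once
```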
We say that \(\mathcal{T}(F)\) is the _tropical dual_ of the subdivision \(\mathcal{S}\) with admissible height function \(\mathrm{h}\), since we have the following theorem:
**Theorem 5.1** ([24, Theorem 1.13]).: _There is a bijection between the \(k\)-dimensional cells of \(\mathcal{S}\) and the \((d-k)\)-dimensional cells of \(\mathcal{T}(F)\), that reverses the inclusion order._
This bijection sends a vertex \(\mathbf{a}^{\mathrm{j}}\) to the region
\[\left\{\mathbf{x}\in\mathbb{R}^{d}\,\bigg{|}\,\mathrm{h}^{j}+\langle\mathbf{a }^{j},\mathbf{x}\rangle=\min_{i\in[m]}\{\mathrm{h}^{i}+\langle\mathbf{a}^{i}, \mathbf{x}\rangle\}\right\},\]
and a cell of \(\mathcal{S}\) to the intersection of the regions corresponding to its vertices.
**Lemma 5.2**.: _The bijection of Theorem 5.1 restricts to a bijection between the interior cells of \(\mathcal{S}\) and the bounded cells of \(\mathcal{T}(F)\)._
Proof.: It is sufficient to show that the bijection restricts to a bijection between the interior facets (\((d-1)\)-dimensional cells) of \(\mathcal{S}\) and the bounded edges of \(\mathcal{T}(F)\). Indeed, suppose that it is the case. Any cell of \(\mathcal{S}\) is either maximal and associated to a vertex of \(\mathcal{T}(F)\), or it is an intersection of facets of \(\mathcal{S}\). A non-maximal cell of \(\mathcal{S}\) is interior if and only if it is included only in interior facets of \(\mathcal{S}\). Thus it is sent via the bijection to a cell of \(\mathcal{T}(F)\) that only contains bounded edges. Reciprocally, a non-bounded cell of \(\mathcal{T}(F)\) contains a non-bounded edge, so it is sent to a boundary cell of \(\mathcal{S}\).
Let us show the statement about the interior facets of \(\mathcal{S}\) in a fashion similar to the proof of [24, Theorem 1.13]. Let \(\widetilde{\mathcal{N}}(F):=\operatorname{conv}\{(\mathbf{a}^{i},r)\,|\,i\in[m ],r\geq h^{i}\}\subseteq\mathbb{R}^{d+1}\) be the extended Newton polyhedron of \(F\), whose lower faces project bijectively onto the cells of \(\mathcal{S}\). Let \(\mathbf{e}\) be an edge of \(\mathcal{T}(F)\) and \(H\) its corresponding facet in \(\mathcal{S}\) via the bijection. Suppose that \(\mathbf{e}\) is
unbounded, of the form \(\mathbf{e}=\mathbf{w}+\mathbb{R}_{+}\mathbf{v}\) for some \(\mathbf{v},\mathbf{w}\in\mathbb{R}^{d}\). Then, for any \(\lambda\in\mathbb{R}_{+}\) the vector \(-(\mathbf{w}+\lambda\mathbf{v},1)\) is in the normal cone of the lift of \(H\) in \(\widetilde{\mathcal{N}}(F)\). Taking the limit as \(\lambda\to+\infty\) of \(-\left(\frac{1}{\lambda}\mathbf{w}+\mathbf{v},\frac{1}{\lambda}\right)\), we obtain that \(-(\mathbf{v},0)\) is in the normal cone of the lift of \(H\), hence \(H\) is in the boundary of \(\mathcal{S}\).
Reciprocally, if \(H\) is a boundary facet of \(\mathcal{S}\), it means that the normal cone of the lift of \(H\) in \(\widetilde{\mathcal{N}}(F)\) is a two-dimensional cone whose extremal rays can be written \(-\mathbb{R}_{+}(\mathbf{v},0)\) and \(-\mathbb{R}_{+}(\mathbf{w},1)\), for some \(\mathbf{v},\mathbf{w}\in\mathbb{R}^{d}\). For any \(\lambda\in\mathbb{R}_{+}\), the vector \(-(\mathbf{w}+\lambda\mathbf{v},1)=-\lambda(\frac{1}{\lambda}\mathbf{w}+ \mathbf{v},\frac{1}{\lambda})\) is in this cone, so the point \(\mathbf{w}+\lambda\mathbf{v}\) belongs to the edge \(\mathbf{e}\) in \(\mathcal{T}(F)\). Hence, this edge is unbounded.
In the case where \(\mathcal{A}\) is a Cayley embedding, Joswig explains in [24, Corollary 4.9] how the Cayley trick allows us to describe the tropical dual of a regular mixed subdivision with an arrangement of tropical hypersurfaces. This extends what was known for triangulations of a product of simplices \(\Delta_{m-1}\times\Delta_{d-1}\), which is the Cayley embedding of \(m\) copies of the simplex \(\Delta_{d-1}\) (the canonical simplex in \(\mathbb{R}^{d}\)) and gives arrangements of tropical hyperplanes, see [12, Section 4], [15].
We consider \(\mathcal{A}\) given by the vertices of the Cayley embedding \(\mathcal{C}(P_{1},\ldots,P_{k})\), with \(P_{j}=\operatorname{conv}(\mathbf{a}^{j,1},\ldots,\mathbf{a}^{j,m_{j}})\) a polytope in \(\mathbb{R}^{d}\) with integer coordinate vertices, and consider a regular subdivision given by the height \(\operatorname{h}=(h^{1,1},\ldots,h^{1,m_{1}},\ldots,h^{k,m_{k}})\in\mathbb{R} ^{[m_{1}]\times\ldots\times[m_{k}]}\).
After the Cayley trick we obtain the subdivision \(\widetilde{\mathcal{S}}\) of the point configuration \(\widetilde{\mathcal{A}}\) given by the points of the form \(\sum_{j=1}^{k}\mathbf{a}^{j,i_{j}}\) for \((i_{1},\ldots,i_{k})\in[m_{1}]\times\ldots\times[m_{k}]\) with height \(h^{(i_{1},\ldots,i_{k})}=\sum_{j=1}^{k}h^{j,i_{j}}\).
The corresponding tropical polynomial is
\[\widetilde{F}(\mathbf{x}) = \bigoplus_{(i_{1},\ldots,i_{k})\in[m_{1}]\times\ldots\times[m_{k} ]}h^{(i_{1},\ldots,i_{k})}\odot\mathbf{x}^{\sum_{j=1}^{k}\mathbf{a}^{j,i_{j}}}\] \[= \bigoplus_{(i_{1},\ldots,i_{k})\in[m_{1}]\times\ldots\times[m_{k} ]}\bigodot_{j=1}^{k}h^{j,i_{j}}\odot\mathbf{x}^{\mathbf{a}^{j,i_{j}}}\] \[= \bigodot_{j=1}^{k}\bigoplus_{i_{j}\in[m_{j}]}h^{j,i_{j}}\odot \mathbf{x}^{\mathbf{a}^{j,i_{j}}}\] \[= \bigodot_{j=1}^{k}F_{j}(\mathbf{x}),\]
where \(F_{j}\) is the tropical polynomial \(F_{j}(\mathbf{x})=\bigoplus_{i_{j}\in[m_{j}]}h^{j,i_{j}}\odot\mathbf{x}^{ \mathbf{a}^{j,i_{j}}}\).
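At a fixed point \(\mathbf{x}\), this factorization is just the elementary fact that a minimum of sums over the product index set splits as a sum of minima; the short sketch below (illustrative only) checks this numerically.

```python
from itertools import product

def factorization_check(summand_values):
    """summand_values[j] lists the values h^{j,i} + <a^{j,i}, x> at a fixed x.
    The (min,+) identity behind the factorization above: min of sums = sum of mins."""
    lhs = min(sum(choice) for choice in product(*summand_values))
    rhs = sum(min(v) for v in summand_values)
    return lhs == rhs

print(factorization_check([[3, 1], [5, 2, 7]]))   # True: both sides equal 1 + 2 = 3
```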
Then, the vanishing locus \(\mathcal{T}(\widetilde{F})\) is obtained by taking the union of the vanishing loci \(\mathcal{T}(F_{j})\) for \(j\in[k]\), and the cells of \(\mathcal{T}(\widetilde{F})\) are the intersections of the cells of all \(\mathcal{T}(F_{j})\), \(j\in[k]\). We say that these cells are _induced_ by the arrangement of tropical hypersurfaces \(\{\mathcal{T}(F_{j})\,|\,j\in[k]\}\). We have the following theorem as a consequence of Theorem 5.1.
**Theorem 5.3**.: _The tropical dual of the mixed subdivision \(\widetilde{\mathcal{S}}\) is the polyhedral complex of cells induced by the arrangement of tropical hypersurfaces \(\{\mathcal{T}(F_{j})\,|\,j\in[k]\}\)._
### The tropical realization
Before applying this theorem to our mixed subdivision \(\operatorname{Subdiv}_{\square}(\operatorname{s})\), we explain how to obtain admissible height functions.
#### 5.2.1. DKK admissible height functions
Danilov et al. provided explicit constructions of admissible height functions for the DKK triangulation of a flow polytope ([11, Lemma 2 & 3]) that we can adapt to our particular graph \(\operatorname{oru}(\mathrm{s})\). Note that since their definition of regular subdivisions is in terms of upper faces (linearity areas of a concave function), we change the sign from their \(w\) to our \(\mathrm{h}\). We slightly refine their results.
Let \((G,\preceq)\) be a framed graph. Let \(P\) and \(Q\) be a pair of non-coherent routes of \(G\) that are in conflict at subroutes \([x_{1},y_{1}]\),..., \([x_{k},y_{k}]\), where \(x_{1}\leq y_{1}<x_{2}\leq y_{2}\ldots<x_{k}\leq y_{k}\) and the subroutes \([x_{i},y_{i}]\) are as long as possible. We define the route \(P^{\prime}\) as the concatenation of subroutes \(Px_{1}\), \(x_{1}Qx_{2}\), \(x_{2}Px_{3}\),..., that we denote \(Px_{1}Qx_{2}Px_{3}\dots\) and \(Q^{\prime}\) the concatenation \(Qx_{1}Px_{2}Qx_{3}\dots\). It is clear that \(P+Q=P^{\prime}+Q^{\prime}\) (where \(P+Q\) denotes the union of edges in \(P\) and edges in \(Q\)) and \(P^{\prime}\) and \(Q^{\prime}\) are coherent. We call \(P^{\prime}\) and \(Q^{\prime}\) the _resolvents_ of \(P\) and \(Q\).
We say that there is a _minimal conflict_ between routes \(P\) and \(Q\) if they are in conflict at exactly one subroute \([v_{i},v_{j}]\) and the edges of \(P\) and \(Q\) that end at \(v_{i}\) are adjacent for the total order \(\preceq_{\mathcal{I}_{i}}\), (resp. the edges of \(P\) and \(Q\) that start at \(v_{j}\) are adjacent for the total order \(\preceq_{\mathcal{O}_{j}}\)).
**Lemma 5.4** (adaptation of [11, Lemma 2]).: _Let \((G,\preceq)\) be a framed graph. A function \(\operatorname{h}\) from the routes of \(G\) to \(\mathbb{R}\) is an admissible height function of \(\operatorname{\mathit{Triang}}_{DKK}(G,\preceq)\) if and only if:_
_For any two non-coherent routes \(P\) and \(Q\) with resolvents \(P^{\prime}\) and \(Q^{\prime}\) we have:_
\[\operatorname{h}(P)+\operatorname{h}(Q)>\operatorname{h}(P^{\prime})+ \operatorname{h}(Q^{\prime}). \tag{5.1}\]
Proof.: The original statement of [11, Lemma 2] is that a weaker version of this condition, where \(P^{\prime}\) and \(Q^{\prime}\) are not necessarily the resolvents but can be any two routes that satisfy \(P+Q=P^{\prime}+Q^{\prime}\), is sufficient. Let us show (with the same ideas as their proof) that it is also necessary and that we can even choose \(P^{\prime},Q^{\prime}\) to be the resolvents of \(P\) and \(Q\). Suppose that \(\mathrm{h}\) is an admissible height function of \(\mathrm{Triang}_{DKK}(G,\preceq)\) and let \(P,Q\) be two non-coherent routes of \((G,\preceq)\) with resolvents \(P^{\prime},Q^{\prime}\). Since they form a clique, \(P^{\prime}\) and \(Q^{\prime}\) are the vertices of an edge of the DKK triangulation of \(\mathcal{F}_{G}\). The point \(F=\frac{1}{2}(P+Q)=\frac{1}{2}(P^{\prime}+Q^{\prime})\) belongs to this edge. Since this edge has to be lifted to a lower face of the lift of the flow polytope \(\mathcal{F}_{G}\) given by the height function \(\mathrm{h}\), which is admissible for \(\mathrm{Triang}_{DKK}(G,\preceq)\), we necessarily have \(\mathrm{h}(P^{\prime})+\mathrm{h}(Q^{\prime})<\mathrm{h}(P)+\mathrm{h}(Q)\).
This statement can be made slightly stronger by restricting condition 5.1 to minimal conflicts.
**Lemma 5.5**.: _Let \((G,\preceq)\) be a framed graph. A function \(\operatorname{h}\) from the routes of \(G\) to \(\mathbb{R}\) is an admissible height function of \(\operatorname{\mathit{Triang}}_{DKK}(G,\preceq)\) if and only if:_
_For any minimal conflict between two routes \(P\) and \(Q\) with resolvents \(P^{\prime}\) and \(Q^{\prime}\), we have_
\[\mathrm{h}(P)+\mathrm{h}(Q)>\mathrm{h}(P^{\prime})+\mathrm{h}(Q^{\prime}). \tag{5.2}\]
Figure 12. Two routes in conflict (left) and their resolvents (right).
Proof.: Let \(\mathrm{h}\) be a function from the routes of \(G\) to \(\mathbb{R}\) such that for any minimal conflict between two routes \(P\) and \(Q\) with resolvents \(P^{\prime}\) and \(Q^{\prime}\), we have \(\mathrm{h}(P)+\mathrm{h}(Q)>\mathrm{h}(P^{\prime})+\mathrm{h}(Q^{\prime})\). It follows from Lemma 5.4 that we only need to show that for any two non-coherent routes \(P\) and \(Q\), there exist routes \(P^{\prime}\) and \(Q^{\prime}\) such that \(P+Q=P^{\prime}+Q^{\prime}\) and \(\mathrm{h}(P)+\mathrm{h}(Q)>\mathrm{h}(P^{\prime})+\mathrm{h}(Q^{\prime})\).
First, suppose that \(P\) and \(Q\) are conflicting at exactly one subroute \([v_{i},v_{j}]\). We can build partial routes \(R_{1}=Pv_{i},R_{2},\ldots,R_{k}=Qv_{i}\) that end at \(v_{i}\) and such that \(R_{1}\prec R_{2}\prec\ldots\prec R_{k}\), their ending edges are adjacent in \(\preceq_{\mathcal{I}_{i}}\) and they are not in conflict. This can be done by building these partial routes from right to left: the ending edge is determined, and we can choose the other ones as we want, but if we arrive at a vertex common to a previously built partial route, we choose the same edges as in that partial route. Similarly we can build partial routes \(S_{1}=v_{j}Q,S_{2},\ldots,S_{t}=v_{j}P\) that start at \(v_{j}\) and such that \(S_{1}\prec S_{2}\prec\ldots\prec S_{t}\), their starting edges are adjacent in \(\preceq_{\mathcal{O}_{j}}\) and they are not in conflict. Then for any \(x\in[k-1]\), \(y\in[t-1]\) the routes \(R_{x}v_{i}Pv_{j}S_{y+1}\) and \(R_{x+1}v_{i}Pv_{j}S_{y}\) are in minimal conflict, with resolvents \(R_{x}v_{i}Pv_{j}S_{y}\) and \(R_{x+1}v_{i}Pv_{j}S_{y+1}\). Hence the condition on \(\mathrm{h}\) implies the following inequality:
\[\mathrm{h}(R_{x}v_{i}Pv_{j}S_{y+1})+\mathrm{h}(R_{x+1}v_{i}Pv_{j}S_{y})> \mathrm{h}(R_{x}v_{i}Pv_{j}S_{y})+\mathrm{h}(R_{x+1}v_{i}Pv_{j}S_{y+1}).\]
When we sum all these inequalities for all \(x\in[k-1]\), \(y\in[t-1]\) we see that all terms of the form \(\mathrm{h}(R_{x}v_{i}Pv_{j}S_{y})\) are cancelled out by pairs, except for \((x,y)\in\{(1,1),(k,t),(1,t),(k,1)\}\). We end up with:
\[\mathrm{h}(R_{1}v_{i}Pv_{j}S_{t})+\mathrm{h}(R_{k}v_{i}Pv_{j}S_{1})>\mathrm{h }(R_{1}v_{i}Pv_{j}S_{1})+\mathrm{h}(R_{k}v_{i}Pv_{j}S_{t}),\]
which is exactly \(\mathrm{h}(P)+\mathrm{h}(Q)>\mathrm{h}(P^{\prime})+\mathrm{h}(Q^{\prime})\), where \(P^{\prime}\) and \(Q^{\prime}\) are the resolvents of \(P,Q\).
Now, we can finish the proof by induction on the number of conflicts. Suppose that \(\mathrm{h}\) satisfies that for any pair of non-coherent routes \(P\) and \(Q\) with at most \(n\) conflicts their resolvents \(P^{\prime},Q^{\prime}\) satisfy \(\mathrm{h}(P)+\mathrm{h}(Q)>\mathrm{h}(P^{\prime})+\mathrm{h}(Q^{\prime})\). Let \(P\) and \(Q\) be non-coherent routes with \(n+1\) conflicts at subroutes \([x_{1},y_{1}],\ldots,[x_{n+1},y_{n+1}]\). Since the routes \(P\) and \(Px_{1}Q\) have \(n\) conflicts and their resolvents are \(Px_{1}P^{\prime}=P^{\prime}\) and \(Px_{1}Q^{\prime}\), the induction hypothesis gives us:
\[\mathrm{h}(P)+\mathrm{h}(Px_{1}Q)>\mathrm{h}(P^{\prime})+\mathrm{h}(Px_{1}Q^{ \prime}).\]
Similarly we have:
\[\mathrm{h}(Q)+\mathrm{h}(Qx_{1}P)>\mathrm{h}(Qx_{1}P^{\prime})+\mathrm{h}(Q^{ \prime}).\]
Moreover, the routes \(P\) and \(Qx_{1}P^{\prime}\) only have one conflict and their resolvents are \(P^{\prime}\) and \(Qx_{1}P\), so we have
\[\mathrm{h}(P)+\mathrm{h}(Qx_{1}P^{\prime})>\mathrm{h}(P^{\prime})+\mathrm{h}(Qx _{1}P),\]
and similarly:
\[\mathrm{h}(Q)+\mathrm{h}(Px_{1}Q^{\prime})>\mathrm{h}(Px_{1}Q)+\mathrm{h}(Q^{ \prime}).\]
When we sum up these four inequalities, some terms cancel out and we recover:
\[\mathrm{h}(P)+\mathrm{h}(Q)>\mathrm{h}(P^{\prime})+\mathrm{h}(Q^{\prime}).\qed\]
Recall that the routes of \(\mathrm{oru}(\mathrm{s})\) are denoted \(\mathrm{R}(k,t,\delta)\) as in the discussion after Definition 3.4. Adapting [11, Lemma 3] to our context gives us the following lemma.
**Lemma 5.6**.: _Let \({\rm s}\) be a composition and \(\varepsilon>0\) a sufficiently small real number. Consider \({\rm h}_{\varepsilon}\) to be the function that associates to a route \({\rm R}={\rm R}(k,t_{k},\delta)\) of \({\rm{oru(s)}}\) the quantity_
\[{\rm h}_{\varepsilon}({\rm R})=-\sum_{k\geq c>a\geq 1}\varepsilon^{c-a}(t_{c}+ \delta_{a})^{2}, \tag{5.3}\]
_where \(t_{c}=\begin{cases}0&\text{if }\delta_{c}=0,\\ s_{c}&\text{if }\delta_{c}=1,\end{cases}\) for all \(c\in[k-1]\)._
_Then \({\rm h}_{\varepsilon}\) is an admissible height function for \(\text{Triang}_{DKK}({\rm{oru(s)}},\preceq)\)._
**Proposition 5.7**.: _In Lemma 5.6, it is enough to take \(\varepsilon<\frac{1}{n(1+\sum_{j=2}^{n}(2s_{j}+1))}\)._
Proof.: Let \(P={\rm R}(k,t,\delta)\) and \(Q={\rm R}(k^{\prime},t^{\prime},\delta^{\prime})\) be two routes of \({\rm{oru(s)}}\) that are in minimal conflict at a common subroute \([v_{n+1-y},v_{n-x}]\). We can suppose that \(Pv_{n+1-y}\prec Qv_{n+1-y}\). Note that this implies that \(\delta_{x}=1\) and \(\delta^{\prime}_{x}=0\). We deal separately with the following three cases (which are the only possible ones for a minimal conflict) and compute the quantity \(H:={\rm h}_{\varepsilon}(P)+{\rm h}_{\varepsilon}(Q)-{\rm h}_{\varepsilon}(P^{\prime})-{\rm h}_{\varepsilon}(Q^{\prime})\).
**Case 1: \(k=k^{\prime}=y\), \(t\in[s_{y}-2]\), \(t^{\prime}=t+1\).**
In the computation of \({\rm h}_{\varepsilon}(P)+{\rm h}_{\varepsilon}(Q)-{\rm h}_{\varepsilon}(P^{ \prime})-{\rm h}_{\varepsilon}(Q^{\prime})\), we see that all pairs \((a,c)\) in formula 5.3 cancel out either with \({\rm h}_{\varepsilon}(P)-{\rm h}_{\varepsilon}(Q^{\prime})\) or \({\rm h}_{\varepsilon}(Q)-{\rm h}_{\varepsilon}(P^{\prime})\), except for \((a,c)=(x,y)\). Thus we have:
\[H ={\rm h}_{\varepsilon}(P)+{\rm h}_{\varepsilon}(Q)-{\rm h}_{ \varepsilon}(P^{\prime})-{\rm h}_{\varepsilon}(Q^{\prime})\] \[=-\varepsilon^{y-x}\Big{(}(t+1)^{2}+((t+1)+0)^{2}-(t+0)^{2}-((t+ 1)+1)^{2}\Big{)}\] \[=2\ \varepsilon^{y-x}>0.\]
**Case 2: \(k>k^{\prime}=y\), \(\delta_{y}=0\), \(t^{\prime}=1\).**
Here the pairs that do not cancel out are all pairs \((a,c)\) for \(k\geq c\geq y\) and \(x\geq a\). Then we have:
\[H =-\sum_{x\geq a}\varepsilon^{y-a}\Big{(}\delta_{a}^{2}+(1+\delta^ {\prime}_{a})^{2}-\delta^{\prime\ 2}_{a}-(1+\delta_{a})^{2}\Big{)}-\sum_{k\geq c>y,\ x\geq a} \varepsilon^{c-a}\Big{(}(t_{c}+\delta_{a})^{2}-(t_{c}+\delta^{\prime}_{a})^{2 }\Big{)}\] \[=2\ \varepsilon^{y-x}-2\ \sum_{x>a}\varepsilon^{y-a}(\delta^{\prime}_{a }-\delta_{a})-\sum_{k\geq c>y,\ x\geq a}\varepsilon^{c-a}\Big{(}2\ t_{c}( \delta_{a}-\delta^{\prime}_{a})+\delta_{a}^{2}-\delta^{\prime\ 2}_{a}\Big{)}\] \[\geq 2\ \varepsilon^{y-x}-2\ \sum_{x>a}\varepsilon^{y-a}-\sum_{k \geq c>y,\ x\geq a}\varepsilon^{c-a}(2s_{c}+1)\] \[\geq 2\ \varepsilon^{y-x}-2\ \varepsilon^{y-x+1}\Big{(}x-1+x\sum_{k \geq c>y}(2s_{c}+1)\Big{)}\] \[\geq 2\ \varepsilon^{y-x}\Big{(}1-\varepsilon\Big{(}y-2+(y-1)\sum_{k \geq c>y}(2s_{c}+1)\Big{)}\Big{)}\]
Then, we see that if \(\varepsilon<\frac{1}{n(1+\sum_{j=2}^{n}(2s_{j}+1))}\), then for any \(y\in[2,n]\) we have
\[1-\varepsilon\Big{(}y-2+(y-1)\sum_{k\geq c>y}(2s_{c}+1)\Big{)}>0,\]
thus \(H>0\).
**Case 3: \(k^{\prime}>k=y\), \(t=s_{y}-1,\delta^{\prime}_{y}=1\).** Here again, the pairs that do not cancel out are all pairs \((a,c)\) for \(k\geq c\geq y\) and \(x\geq a\) and we have:
\[H =-\sum_{x\geq a}\varepsilon^{y-a}\Big{(}(s_{y}-1+\delta_{a})^{2}+ (s_{y}+\delta^{\prime}_{a})^{2}-(s_{y}-1+\delta^{\prime}_{a})^{2}-(s_{y}+ \delta_{a})^{2}\Big{)}\] \[\quad-\sum_{k\geq c>y,\ x\geq a}\varepsilon^{c-a}\Big{(}(t^{ \prime}_{c}+\delta^{\prime}_{a})^{2}-(t^{\prime}_{c}+\delta_{a})^{2}\Big{)}\] \[=2\ \varepsilon^{y-x}+2\sum_{x>a}\varepsilon^{y-a}(\delta^{\prime }_{a}-\delta_{a})-\sum_{k\geq c>y,\ x\geq a}\varepsilon^{c-a}\Big{(}2\ t^{ \prime}_{c}(\delta^{\prime}_{a}-\delta_{a})+{\delta^{\prime}_{a}}^{2}-\delta _{a}^{2}\Big{)},\]
and the rest of the computations are very similar to those of Case 2.
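Since the height function of Lemma 5.6 is completely explicit, it is easy to evaluate in practice. The following short Python sketch (an illustration written for this exposition, not code from the paper; the helper names are ours, and the encoding of a route of \(\mathrm{oru}(\mathrm{s})\) by a triple \((k,t,\delta)\) is the one recalled after Definition 3.4) evaluates \(\mathrm{h}_{\varepsilon}(\mathrm{R}(k,t,\delta))\) via Equation (5.3), together with the bound on \(\varepsilon\) from Proposition 5.7.

```python
# Illustrative sketch: evaluate the admissible height function h_eps of
# Lemma 5.6 for a route R(k, t, delta) of oru(s), following Equation (5.3).
# Conventions assumed here: s = (s_1, ..., s_n), k in [2, n+1],
# delta = (delta_1, ..., delta_{k-1}) in {0,1}^{k-1}, and t = t_k.

def eps_bound(s):
    """The bound of Proposition 5.7: epsilon < 1 / (n (1 + sum_{j=2}^n (2 s_j + 1)))."""
    n = len(s)
    return 1.0 / (n * (1 + sum(2 * s[j] + 1 for j in range(1, n))))

def h_eps(s, k, t, delta, eps):
    """h_eps(R(k, t, delta)) = - sum_{k >= c > a >= 1} eps^(c-a) (t_c + delta_a)^2."""
    def t_of(c):                       # t_c: equal to t for c = k, otherwise set by delta_c
        if c == k:
            return t
        return s[c - 1] if delta[c - 1] == 1 else 0
    return -sum(eps ** (c - a) * (t_of(c) + delta[a - 1]) ** 2
                for c in range(2, k + 1) for a in range(1, c))

# Example for the composition s = (1, 3, 2).
s = (1, 3, 2)
eps = 0.9 * eps_bound(s)
print(h_eps(s, k=2, t=1, delta=(0,), eps=eps))
print(h_eps(s, k=4, t=1, delta=(0, 1, 0), eps=eps))
```

The helper `eps_bound` returns one concrete admissible value of \(\varepsilon\); any smaller positive value works as well.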
#### 5.2.2. Coordinates for the \(\mathrm{s}\)-permutahedron
For the remainder of this section \(\mathrm{s}\) is assumed to be a composition and \(\mathrm{h}\) an admissible height function for \(\mathrm{Triang}_{DKK}(\mathrm{oru}(\mathrm{s}),\preceq)\).
Since we defined in Section 4.2 the mixed subdivision \(\mathrm{Subdiv}_{\square}(\mathrm{s})\) from the regular triangulation \(\mathrm{Triang}_{DKK}(\mathrm{oru}(\mathrm{s}),\preceq)\) via the Cayley trick, the following theorem directly follows from Theorem 5.3.
**Theorem 5.8**.: _The tropical dual of the mixed subdivision \(\mathrm{Subdiv}_{\square}(\mathrm{s})\) is the polyhedral complex of cells induced by the arrangement of tropical hypersurfaces_
\[\mathcal{H}_{\mathrm{s}}(\mathrm{h}):=\left\{\mathcal{T}(F^{k}_{t})\,|\,k\in[ 2,n+1],\,t\in[s_{k}-1]\right\},\]
_where \(F^{k}_{t}(\mathbf{x})=\bigoplus\mathrm{h}(\mathrm{R}(k,t,\delta))\odot \mathbf{x}^{\delta}=\min\Big{\{}\mathrm{h}(\mathrm{R}(k,t,\delta))+\sum_{i\in[ k-1]}\delta_{i}x_{i}\,|\,\delta\in\{0,1\}^{k-1}\Big{\}}\)._
**Definition 5.9**.: We denote by \(\mathit{Perm}_{\mathrm{s}}(\mathrm{h})\) the polyhedral complex of bounded cells induced by the arrangement \(\mathcal{H}_{\mathrm{s}}(\mathrm{h})\).
**Theorem 5.10**.: _The face poset of the geometric polyhedral complex \(\mathit{Perm}_{\mathrm{s}}(\mathrm{h})\) is isomorphic to the face poset of the combinatorial \(\mathrm{s}\)-permutahedron \(\mathit{Perm}_{\mathrm{s}}\)._
Figure 13. The \((1,1,1,2)\)-permutahedron (left) and the \((1,1,1,2)\)-permutahedron (right) via their tropical realization.
Proof.: We showed in Theorem 4.3 that the face poset of \(\operatorname{Perm}_{\mathrm{s}}\) is anti-isomorphic to the face poset of interior cells of the mixed subdivision \(\operatorname{Subdiv}_{\square}(\mathrm{s})\). It then follows from Lemma 5.2 and Theorem 5.8 that this poset is isomorphic to the poset of bounded cells of \(\mathcal{H}_{\mathrm{s}}(\mathrm{h})\), which is the face poset of \(\operatorname{Perm}_{\mathrm{s}}(\mathrm{h})\).
Figure 13 shows some examples of such realizations of the \(s\)-permutahedron.
Moreover, we can describe the explicit coordinates of the vertices of \(\operatorname{Perm}_{\mathrm{s}}(\mathrm{h})\). For a Stirling \(\mathrm{s}\)-permutation \(w\), \(a\in[n]\) and \(t\in[s_{a}]\), we denote by \(i(a^{t})\) the length of the prefix of \(w\) that precedes the \(t\)-th occurrence of \(a\). As explained in the argument leading to Lemma 3.9, this prefix is associated to the route \(\mathrm{R}[w_{[i(a^{t})]}]\) in the clique \(\Delta_{w}\).
**Theorem 5.11**.: _The vertices of \(\operatorname{Perm}_{\mathrm{s}}(\mathrm{h})\) are in bijection with Stirling \(\mathrm{s}\)-permutations. Moreover, the vertex \(\mathbf{v}(w)=(\mathbf{v}(w)_{a})_{a\in[n]}\) associated to a Stirling \(\mathrm{s}\)-permutation \(w\) has coordinates_
\[\mathbf{v}(w)_{a}=\sum_{t=1}^{s_{a}}\left(\mathrm{h}(\mathrm{R}[w_{[i(a^{t})]} ])-\mathrm{h}(\mathrm{R}[w_{[i(a^{t})+1]}])\right). \tag{5.4}\]
Proof.: The bijection between vertices of \(\operatorname{Perm}_{\mathrm{s}}(\mathrm{h})\) and Stirling \(\mathrm{s}\)-permutations is a direct consequence of Theorem 5.10.
Let \(w\) be a Stirling \(\mathrm{s}\)-permutation. It is associated via Theorem 5.8 to the intersection of all regions of the form
\[\Big{\{}\mathbf{x}\in\mathbb{R}^{n}\Bigm{|}\mathrm{h}(\mathrm{R}(c,t,\delta))+ \sum_{a\in[c-1]}\delta_{a}x_{a}=\min_{\theta\in\{0,1\}^{c-1}}\{\mathrm{h}( \mathrm{R}(c,t,\theta))+\sum_{a\in[c-1]}\theta_{a}x_{a}\}\Big{\}}, \tag{5.5}\]
where \(\mathrm{R}(c,t,\delta)\) is a route in the clique \(\Delta_{w}\). It follows from the previous remark that this intersection is a single point, that we denote \(\mathbf{v}\). We show that \(\mathbf{v}\) necessarily has the coordinates given by the theorem. Let \(a\in[n]\). Both routes \(\mathrm{R}[w_{[i(a^{1})]}]\) and \(\mathrm{R}[w_{[i(a^{s_{a}})+1]}]\) are of the form \(\mathrm{R}(c,t,\delta)\) and \(\mathrm{R}(c,t,\delta^{\prime})\) respectively, where \(c\) is the smallest letter such that the \(a\)-block is contained in the \(c\)-block in \(w\), and \(t\) denotes the number of occurrences of \(c\) that precedes the \(a\)-block. If the \(a\)-block is contained in no other block we set \(c=n+1\) and \(t=1\). The indicator vectors \(\delta\) and \(\delta^{\prime}\) satisfy that \(\delta^{\prime}-\delta\) is the indicator vector of the letters \(b\leq a\) such that the \(b\)-block is contained in the \(a\)-block in \(w\). The fact that both routes belong to \(\Delta_{w}\) implies that \(\mathrm{h}(\mathrm{R}[w_{[i(a^{1})]}])+\sum_{b\in[c-1]}\delta_{b}v_{b}=\mathrm{ h}(\mathrm{R}[w_{[i(a^{s_{a}})+1]}])+\sum_{b\in[c-1]}\delta^{\prime}_{b}v_{b}\), thus
\[\sum_{\begin{subarray}{c}b\in[a]\mathrm{s.t.}\\ \text{$b$-block $\subseteq a$-block}\end{subarray}}v_{b}=\mathrm{h}( \mathrm{R}[w_{[i(a^{1})]}])-\mathrm{h}(\mathrm{R}[w_{[i(a^{s_{a}})+1]}]).\]
Then, we obtain Equation 5.4 by induction on \(a\). Indeed, if the equation is true for all \(b<a\), then all terms in \(\sum_{\begin{subarray}{c}b\in[a-1]\mathrm{s.t.}\\ \text{$b$-block $\subset a$-block}\end{subarray}}v_{b}\) cancel by pairs except the terms that correspond to a prefix ending at or just before an occurrence of \(a\) in \(w\), which are of the form \(-\mathrm{h}(\mathrm{R}[w_{[i(a^{r})]}])\) for \(r\in[2,s_{a}]\) or \(\mathrm{h}(\mathrm{R}[w_{[i(a^{r})+1]}])\) for \(r\in[s_{a}-1]\).
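The coordinate formula (5.4) is straightforward to implement once the heights of the routes attached to the prefixes of \(w\) are available. The sketch below (illustrative only; the function name is ours, and the map from a prefix \(w_{[i]}\) to its route \(\mathrm{R}[w_{[i]}]\), described before Lemma 3.9, is taken as a given callable rather than re-implemented) computes \(\mathbf{v}(w)\) as in Theorem 5.11.

```python
# Illustrative sketch of Equation (5.4): coordinates of the vertex v(w) of
# Perm_s(h) attached to a Stirling s-permutation w.  The argument
# height_of_prefix(i) is assumed to return h(R[w_[i]]), the height of the
# route attached to the prefix of w of length i.

def vertex_coordinates(w, n, height_of_prefix):
    """w: Stirling s-permutation given as a list of letters in {1, ..., n}."""
    v = [0.0] * (n + 1)                       # v[a] will hold v(w)_a for a = 1..n
    for a in range(1, n + 1):
        for i, letter in enumerate(w):
            if letter == a:                   # i is i(a^t): prefix length before this occurrence
                v[a] += height_of_prefix(i) - height_of_prefix(i + 1)
    return v[1:]
```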
**Corollary 5.12**.: _The \(\mathrm{s}\)-permutahedron \(\operatorname{Perm}_{\mathrm{s}}(\mathrm{h})\) is contained in the hyperplane_
\[\left\{\mathbf{x}\in\mathbb{R}^{n}\ \Big{|}\ \sum_{i=1}^{n}x_{i}=\mathrm{h}(\mathrm{R}(n+1,1,(0)^{n}))-\mathrm{h}(\mathrm{R}(n+1,1,(1)^{n}))\right\}. \tag{5.6}\]
**Theorem 5.13**.: _Let \(1\leq a<c\leq n\). Let \(w\) and \(w^{\prime}\) be Stirling \(\mathrm{s}\)-permutations of the form \(u_{1}B_{a}cu_{2}\) and \(u_{1}cB_{a}u_{2}\) respectively, where \(B_{a}\) is the \(a\)-block of \(w\) and \(w^{\prime}\)._
_Then the edge of \(\text{Perm}_{\mathrm{s}}(\mathrm{h})\) corresponding to the transposition between \(w\) and \(w^{\prime}\) is:_
\[\mathbf{v}(w^{\prime})-\mathbf{v}(w)=\left(\mathrm{h}(\mathrm{R}[u_{1}c])+ \mathrm{h}(\mathrm{R}[u_{1}B_{a}])-\mathrm{h}(\mathrm{R}[u_{1}])-\mathrm{h}( \mathrm{R}[u_{1}B_{a}c])\right)(\mathbf{e}_{a}-\mathbf{e}_{c}), \tag{5.7}\]
_where \((\mathbf{e}_{i})_{i\in[n]}\) is the canonical basis of \(\mathbb{R}^{n}\)._
Proof.: We denote \(t:=\#_{w}(c,a)+1\), so that the transposition from \(w\) to \(w^{\prime}\) exchanges the \(a\)-block with the \(t\)-th occurrence of \(c\). We use the expression of the explicit coordinates given in Theorem 5.11 to compute \(\mathbf{v}(w^{\prime})-\mathbf{v}(w)\). The only routes that do not cancel out are the ones corresponding to prefixes \(u_{1}\), \(u_{1}c\), \(u_{1}B_{a}\) and \(u_{1}B_{a}c\), which gives the same route as \(u_{1}cB_{a}\). Indeed, a prefix contained in \(u_{1}\) is common to \(w\) and \(w^{\prime}\) ; a prefix that ends inside the \(a\)-block does not give information on \(c\), so the corresponding route will be common to \(w\) and \(w^{\prime}\), and a prefix that ends inside \(u_{2}\) does not give information on the relative order of the \(a\)-block and letter \(c\), so the corresponding route will also be common to \(w\) and \(w^{\prime}\). Hence:
\[\mathbf{v}(w^{\prime})-\mathbf{v}(w) =(\mathbf{v}(w^{\prime})_{a}-\mathbf{v}(w)_{a})\,\mathbf{e}_{a}+ (\mathbf{v}(w^{\prime})_{c}-\mathbf{v}(w)_{c})\,\mathbf{e}_{c}\] \[=\left(\mathrm{h}(\mathrm{R}[w^{\prime}_{[i(a^{1})]}])-\mathrm{h }(\mathrm{R}[w^{\prime}_{[i(a^{a})+1]}])-\mathrm{h}(\mathrm{R}[w_{[i(a^{1})]}] )+\mathrm{h}(\mathrm{R}[w_{[i(a^{a})+1]}])\right)\mathbf{e}_{a}\] \[\quad+\left(\mathrm{h}(\mathrm{R}[w^{\prime}_{[i(c^{t})]}])- \mathrm{h}(\mathrm{R}[w^{\prime}_{[i(c^{t})+1]}])-\mathrm{h}(\mathrm{R}[w_{[i (c^{t})]}])+\mathrm{h}(\mathrm{R}[w_{[i(c^{t})+1]}])\right)\mathbf{e}_{c}\] \[=(\mathrm{h}(\mathrm{R}[u_{1}c])-\mathrm{h}(\mathrm{R}[u_{1}cB_{ a}])-\mathrm{h}(\mathrm{R}[u_{1}])+\mathrm{h}(\mathrm{R}[u_{1}B_{a}]))\,\mathbf{e}_{a}\] \[\quad+\left(\mathrm{h}(\mathrm{R}[u_{1}])-\mathrm{h}(\mathrm{R}[ u_{1}c])-\mathrm{h}(\mathrm{R}[u_{1}B_{a}])+\mathrm{h}(\mathrm{R}[u_{1}B_{a}c]) \right)\mathbf{e}_{c}\] \[=(\mathrm{h}(\mathrm{R}[u_{1}c])+\mathrm{h}(\mathrm{R}[u_{1}B_{a }])-\mathrm{h}(\mathrm{R}[u_{1}])-\mathrm{h}(\mathrm{R}[u_{1}B_{a}c]))\,( \mathbf{e}_{a}-\mathbf{e}_{c}).\]
It follows from Lemma 5.5 that we have
\[\mathrm{h}(\mathrm{R}[u_{1}c])+\mathrm{h}(\mathrm{R}[u_{1}B_{a}])-\mathrm{h}( \mathrm{R}[u_{1}])-\mathrm{h}(\mathrm{R}[u_{1}B_{a}c])>0,\]
since \(P:=\mathrm{R}[u_{1}B_{a}]\) and \(Q:=\mathrm{R}[u_{1}c]\) are in minimal conflict at \([v_{n+1-c},v_{n+1-a}]\) and \(P^{\prime}:=\mathrm{R}[u_{1}]\) and \(Q^{\prime}:=\mathrm{R}[u_{1}B_{a}c]\) are their resolvents.
**Lemma 5.14**.: _For any strictly decreasing sequence of real numbers \(\kappa_{1}>\ldots>\kappa_{n}\), the direction \(\sum_{i=1}^{n}\kappa_{i}\mathbf{e}_{i}\) orients the edges of \(\text{Perm}_{\mathrm{s}}(\mathrm{h})\) according to the \(\mathrm{s}\)-weak order covering relations._
Proof.: This is a direct consequence of Theorem 5.13 and the remark at the end of its proof.
**Lemma 5.15**.: _The support \(\text{supp}(\text{Perm}_{\mathrm{s}}(\mathrm{h}))\), i.e. the union of faces of \(\text{Perm}_{\mathrm{s}}(\mathrm{h})\), is a polytope combinatorially isomorphic to the \((n-1)\)-dimensional permutahedron. More precisely it has:_
1. _vertices_ \(\mathbf{v}(w^{\sigma})\) _for all permutation_ \(\sigma\) _of_ \([n]\) _where_ \(w^{\sigma}\) _is the Stirling_ \(\mathrm{s}\)_-permutation_ \[w^{\sigma}=\underbrace{\sigma(1)\ldots\sigma(1)}_{s_{\sigma(1)}\text{ times}}\ldots\underbrace{\sigma(n)\ldots\sigma(n)}_{s_{\sigma(n)}\text{ times}},\]
2. _facet defining inequalities_ (5.8) \[\langle\delta,\mathbf{x}\rangle\geq\mathrm{h}(\mathrm{R}(n+1,1,(0)^{n}))- \mathrm{h}(\mathrm{R}(n+1,1,\delta)),\] (5.9) \[\langle\mathbf{1}-\delta,\mathbf{x}\rangle\leq\mathrm{h}(\mathrm{R}(n+1,1, \delta))-\mathrm{h}(\mathrm{R}(n+1,1,(1)^{n})),\] _for all_ \(\delta\in\{0,1\}^{n}\)_._
Proof.:
1. Let \(\sigma\) be a permutation of \([n]\) and consider the linear functional \(f(x)=\sum_{a\in[n]}\sigma(a)x_{a}\). Among the vertices of \(\operatorname{Perm_{s}(h)}\), \(f\) is maximized at \(\mathbf{v}(w^{\sigma})\). Indeed, let \(w^{\prime}\) be a Stirling \(\mathrm{s}\)-permutation.
* If \(w^{\prime}\) contains an ascent \((a,c)\) such that \(\sigma(a)>\sigma(c)\), then \(f\) is increasing along the edge in direction \(\mathbf{e}_{a}-\mathbf{e}_{c}\) corresponding to the transposition of \(w^{\prime}\) along the ascent \((a,c)\).
* If \(w^{\prime}\) contains a descent \((c^{\prime},a^{\prime})\) such that \(\sigma(a^{\prime})<\sigma(c^{\prime})\), then \(f\) is increasing along the edge in direction \(\mathbf{e}_{c^{\prime}}-\mathbf{e}_{a^{\prime}}\) corresponding to the transposition of \(w^{\prime}\) along the descent \((c^{\prime},a^{\prime})\).
* If \(w^{\prime}\) is in neither of the above cases, then necessarily \(w^{\prime}=w^{\sigma}\).
This shows that the vertices of \(\operatorname{supp}(\operatorname{Perm_{s}(h)})\) have the same normal cones as the \((n-1)\)-permutahedron (embedded in \(\mathbb{R}^{n}\)), hence its normal fan is the braid fan.
2. It follows from the fact that all cliques \(\Delta_{w}\) contain the routes \(\operatorname{R}(n+1,1,(0)^{n})\) and \(\operatorname{R}(n+1,1,(1)^{n})\) (see the remark before Lemma 3.9) that all vertices of \(\operatorname{Perm_{s}(h)}\) are contained in the region \[\Big{\{}\mathbf{x}\in\mathbb{R}^{n}\,\Big{|}\,\operatorname{h}( \operatorname{R}(n+1,1,(0)^{n})) =\operatorname{h}(\operatorname{R}(n+1,1,(1)^{n}))+\sum_{a\in[n] }x_{a}\] \[=\min_{\theta\in\{0,1\}^{n}}\{\operatorname{h}(\operatorname{R}(n +1,1,\theta))+\sum_{a\in[n]}\theta_{a}x_{a}\}\Big{\}}.\] This region is exactly defined by intersecting the half-spaces defined by 5.8 and 5.9. Moreover, let \(\delta\in\{0,1\}^{n}\) and \(I:=\{i\in[n]\,|\,\delta_{i}=1\}\). The equality in 5.8 and 5.9 is achieved exactly on vertices \(\mathbf{v}(w^{\sigma})\) where \(\{\sigma(1),\ldots,\sigma(|I|)\}=I\). Hence, these inequalities define the facets of \(\operatorname{supp}(\operatorname{Perm_{s}(h)})\).
_Remark 5.16_.: With similar arguments we can see that the restriction of the s-weak order to a face of \(\operatorname{supp}(\operatorname{Perm_{s}(h)})\), associated to an ordered partition, will correspond to a product of s\({}^{\prime}\)-weak orders, one for each part of the ordered partition.
Note that Lemma 5.15 finishes to answer Conjecture 1.2 in the case where s is a composition, because then the zonotope \(\sum_{1\leq i<j\leq n}s_{j}[\mathbf{e}_{i},\mathbf{e}_{j}]\) is combinatorially isomorphic to the \((n-1)\)-dimensional permutahedron.
In the case where h is given by Lemma 5.6, we can even go a bit further.
**Proposition 5.17**.: _Let \(\varepsilon>0\) be a small enough real number so that \(\operatorname{h}_{\varepsilon}\) is an admissible height function for \(\operatorname{Triang}_{DKK}(\operatorname{oru}(\operatorname{s}),\preceq)\)._
_Then the support \(\operatorname{supp}(\operatorname{Perm_{s}(h_{\varepsilon})})\) is a translation of the zonotope \(2\sum_{1\leq a<c\leq n}s_{c}\,\varepsilon^{c-a}[\mathbf{e}_{a},\mathbf{e}_{c}]\)._
Proof.: It follows from Lemma 5.15 that the edges of \(\operatorname{supp}(\operatorname{Perm_{s}(h_{\varepsilon})})\) are of the form \([\mathbf{v}(w^{\sigma}),\mathbf{v}(w^{\sigma^{\prime}})]\), where \(\sigma\) and \(\sigma^{\prime}\) are permutations of \([n]\) related by a transposition along an ascent \((a,c)\). When we plug the expression 5.3 of \(\operatorname{h}_{\varepsilon}\) into the formula 5.7, where the letter \(c\) is replaced by \(s_{c}\) occurrences of \(c\), we see that the only terms that do not cancel out are those involving the pair \((a,c)\):
\[\mathbf{v}(w^{\sigma^{\prime}})-\mathbf{v}(w^{\sigma}) =-\varepsilon^{c-a}\Big{(}(0+1)^{2}+(s_{c}+0)^{2}-(0+0)^{2}-(s_{c }+1)^{2}\Big{)}(\mathbf{e}_{a}-\mathbf{e}_{c})\] \[=2\ s_{c}\ \varepsilon^{c-a}(\mathbf{e}_{a}-\mathbf{e}_{c}).\]
Hence all edges of the same direction have the same length, and since \(\operatorname{supp}(\operatorname{Perm}_{\operatorname{s}}(\operatorname{h}_{ \varepsilon}))\) is combinatorially equivalent to a permutahedron, it follows that it is a zonotope.
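For instance (a direct instantiation of Proposition 5.17, spelled out here for concreteness), for \(\mathrm{s}=(1,3,2)\) the support \(\operatorname{supp}(\operatorname{Perm}_{\mathrm{s}}(\mathrm{h}_{\varepsilon}))\) is a translate of
\[2\,s_{2}\,\varepsilon\,[\mathbf{e}_{1},\mathbf{e}_{2}]+2\,s_{3}\,\varepsilon^{2}\,[\mathbf{e}_{1},\mathbf{e}_{3}]+2\,s_{3}\,\varepsilon\,[\mathbf{e}_{2},\mathbf{e}_{3}]=6\varepsilon\,[\mathbf{e}_{1},\mathbf{e}_{2}]+4\varepsilon^{2}\,[\mathbf{e}_{1},\mathbf{e}_{3}]+4\varepsilon\,[\mathbf{e}_{2},\mathbf{e}_{3}],\]
a hexagon lying in a plane where \(x_{1}+x_{2}+x_{3}\) is constant, in agreement with Corollary 5.12.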
## 6. A Lidskii-type decomposition of the s-permutahedron
In this section we apply the Lidskii formulas for the volume and lattice points of flow polytopes to the polytope \(\mathcal{F}_{\mathrm{oru}(\mathrm{s})}\).
Next, we use the Lidskii formulas in Theorem 6.1 to calculate the LHS above. The graph \(\operatorname{oru}_{n}\) has shifted outdegrees \(o_{i}=1\) and shifted indegrees \(d_{i}=1\), for \(i=1,\ldots,n\). By Equations (6.1) and (6.2) we have that
\[\#\mathcal{F}^{\mathbb{Z}}_{\operatorname{oru}_{n}}\Big{(}s_{n},s_{n-1},\ldots,s_{2},-\sum_{i}s_{i}\Big{)}=\sum_{\mathbf{j}}\binom{s_{n}+1}{j_{1}}\binom{s_{n-1}+1}{j_{2}}\cdots\binom{s_{2}+1}{j_{n-1}}\,\#\mathcal{F}^{\mathbb{Z}}_{\operatorname{oru}_{n}}(\mathbf{j}-\mathbf{1}).\]
to \(n+1-i\) and \(j_{1}+\ldots+j_{i-1}-(i-1)\) unlabelled blocks, where \(j_{k}\) is the number of blocks covered by the \((n+1-k)\)-block, whose positions were chosen at step \(k\). At step \(i\) we choose the number \(j_{i}\) of blocks that will be covered by the \((n+1-i)\)-block, together with one of the possible choices of which of the available blocks it covers.
The _Gale order_ on \(\left(\!\!\binom{[m]}{k}\!\!\right)\) is given by \(A\leq B\) if \(a_{i}\leq b_{i}\) for all \(i\in[k]\), where \(a_{1},\ldots,a_{k}\) (respectively \(b_{1},\ldots,b_{k}\)) are the elements of \(A\) (respectively \(B\)) ordered increasingly (see [16]). We denote this by \(\mbox{\it Gale}(k,m)\).
See Figure 14 for the example of \(s=(1,3,2)\). The central blue piece corresponds to \(\mathbf{j}=(1,1)\), which is the product of \(\mbox{\it Gale}(1,3)\) and \(\mbox{\it Gale}(1,2)\) where each is a chain of respective sizes \(3\) and \(2\). The two side pieces each correspond to \(\mathbf{j}=(2,0)\), and they are the Hasse diagrams of \(\mbox{\it Gale}(2,3)\).
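As a small illustration (added here for concreteness; the function below is a direct transcription of the comparison rule in the definition above, with hypothetical names, and is written for sorted tuples so that it covers both the subset and the multiset readings of the notation):

```python
# Illustrative sketch: the Gale order on k-element subsets or multisets of [m],
# encoded as tuples.  A <= B holds when, after sorting both increasingly, each
# entry of A is at most the matching entry of B.

def gale_leq(A, B):
    a, b = sorted(A), sorted(B)
    assert len(a) == len(b)
    return all(x <= y for x, y in zip(a, b))

# In Gale(2, 4): (1,2) <= (1,3) <= (2,3), while (1,4) and (2,3) are incomparable.
print(gale_leq((1, 2), (1, 3)), gale_leq((1, 3), (2, 3)))   # True True
print(gale_leq((1, 4), (2, 3)), gale_leq((2, 3), (1, 4)))   # False False
```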
## 7. Further directions
_Relation with the \(\mathrm{s}\)-associahedron._ Ceballos and Pons also conjectured ([8, Conjecture 2]) that there exists a geometric realization of the \(\mathrm{s}\)-permutahedron (when \(\mathrm{s}\) is a strict composition) such that the \(\mathrm{s}\)-associahedron can be obtained from it by removing certain facets. Our realizations seem very promising for providing a geometric relation between \(\mathrm{s}\)-permutahedra and \(\mathrm{s}\)-associahedra but this is still work in progress.
_Other poset structures on \(\mathrm{s}\)-decreasing trees._ In this article, we considered a specific triangulation of the flow polytope \(\mathcal{F}_{\mathrm{oru}(\mathrm{s})}\) induced by a fixed framing of \(\mathrm{oru}(\mathrm{s})\) described in Definition 3.4. It would be interesting to study DKK triangulations of \(\mathcal{F}_{\mathrm{oru}(\mathrm{s})}\) arising from other framings. For example, Bell et al. [3] showed that both the \(\mathrm{s}\)-Tamari lattice and the principal order ideal \(I(\nu)\) in Young's lattice, where \(\nu_{i}=1+s_{n-i+1}+s_{n-i+2}+\cdots+s_{n}\), can be realized as the graph dual to DKK triangulations of a flow polytope of the \(\nu\)-caracol graph. Do other interesting posets on \(\mathrm{s}\)-decreasing trees arise from other framed triangulations of \(\mathcal{F}_{\mathrm{oru}(\mathrm{s})}\)?
Figure 14. Subdivision of the complex of \(\mathrm{s}\)-trees for \(s=(1,3,2)\) that yields Equation (6.7).
_The case where \(s\) is a weak composition._ The s-weak order of Ceballos and Pons is defined for weak compositions but our realizations, which rely on a triangulation of the flow polytope \(\mathcal{F}_{\mathrm{oru(s)}}\) (see Sections 3.2, 4.2, and 5.2) hold only for strict compositions, since the graph \(\mathrm{oru(s)}\) is not defined for weak compositions. Furthermore, although Corollary 3.7 does not hold for weak compositions, Equation (6.5) does, so it would be interesting to extend our story to the case of weak compositions.
## 8. Acknowledgements
This work began under the collaborative project MATH-AMSUD with code 22-MATH-01 ALGonCOMB and we are very grateful for their support. We thank Viviane Pons for helpful comments and suggestions and for proposing this problem during the open problem session of the VIII Encuentro Colombiano de Combinatoria ECCO 2022. We extend our gratitude to all of the organizers of ECCO 2022. We also thank Cesar Ceballos, Balthazar Charles, Arnau Padrol, Vincent Pilaud, Germain Poullot, Francisco Santos, Yannic Vargas, and the combinatorics team of LIGM for helpful comments and proofreading of previous versions of this manuscript. For this work we used the open-source software Sage[13]. Rafael S. Gonzalez D'Leon is very grateful for the summer research stipend program of Loyola University Chicago since part of this work happened under their support.
|
2304.05555
|
Ultrastable super-Tonks-Girardeau gases under weak dipolar interactions
|
The highly excited super-Tonks-Girardeau (sTG) gas was recently observed to
be extremely stable in the presence of a weak dipolar repulsion. Here we reveal
the underlying reason for this mysterious phenomenon. By exactly solving the
trapped small clusters with both contact and dipolar interactions, we show that
the reason lies in the distinct spectral responses between sTG gas and its
decaying channel (bound state) when turning on a weak dipolar interaction.
Specifically, a tiny dipolar force can produce a visible energy shift for the
localized bound state, but can hardly affect the extended sTG branch. As a
result, the avoided level crossing between two branches is greatly modified in
both location and width in the parameter axis of coupling strength, leading to
a more (less) stable sTG gas for a repulsive (attractive) dipolar force. These
results, consistent with experimental observations, are found to robustly apply
to both bosonic and fermionic systems.
|
Yu Chen, Xiaoling Cui
|
2023-04-12T01:23:45Z
|
http://arxiv.org/abs/2304.05555v3
|
# On the mystery of ultrastable super-Tonks-Girardeau gases under weak dipolar interactions
###### Abstract
The highly excited super-Tonks-Girardeau (sTG) gas was recently observed to be extremely stable in the presence of a weak dipolar repulsion. Here we reveal the underlying reason for this mysterious phenomenon. By exactly solving the trapped small clusters with both contact and dipolar interactions, we show that the reason lies in the distinct spectral responses between the sTG gas and its decaying channel (bound state) when turning on a weak dipolar interaction. Specifically, a tiny dipolar force can produce a visible energy shift for the localized bound state, but can hardly affect the extended sTG branch. As a result, the avoided level crossing between the two branches is greatly modified in both location and width in the parameter axis of coupling strength, leading to a more (less) stable sTG gas for a repulsive (attractive) dipolar force. These results, consistent with experimental observations, are found to robustly apply to both bosonic and fermionic systems.
The super-Tonks-Girardeau (sTG) state stays in a highly excited branch in one dimension (1D) under inter-particle attractions, and hosts an even stronger correlation than the Tonks-Girardeau regime with hard-core repulsions. Such an intriguing state was first predicted for identical bosons by quantum Monte-Carlo[1] and Bethe-ansatz methods[2], and subsequently realized in a quasi-1D ultracold Bose gas by tuning the 1D coupling strength across resonance[3]. Later, the sTG state of spin-1/2 fermions was also discovered with Bethe-ansatz solutions[4] and observed experimentally in trapped small clusters[5; 6]. Recently the fermionic sTG gas has attracted great interest for exploring itinerant ferromagnetism in 1D and various spin-chain configurations without a lattice[7; 8; 9; 10; 11; 12; 13; 14].
However, the sTG gas is not always stable in practice -- as one moves away from resonance, the gas will eventually collapse to low-lying bound states at intermediate attraction strength[3], making it impossible to approach stronger correlations with higher repulsive energies. Surprisingly, this instability has recently been overcome in experiment simply by adding a weak dipolar repulsion among the atoms[15]. There, the gaseous repulsive branch was shown to become extremely stable over the whole sTG regime, and could even be evolved adiabatically through two rounds of interaction cycles with continuously increasing energies, realizing quantum holonomy in a physical system[16]. On the other hand, when switching to a weak dipolar attraction, the sTG gas was found to be less stable than in the usual case without the dipolar interaction. These observations raise two big puzzles. First, how can a weak dipolar force, which barely changes the energy of the sTG gas, influence its stability so significantly? Secondly, why does this influence depend on the sign of the dipolar force? To date, no definitive answers to these puzzles have emerged.
In this work, we attempt to resolve these puzzles by exactly solving three trapped atoms (bosons or spin-1/2 fermions) with both contact and dipolar interactions. Such a three-body system comprises a minimal yet fundamental model to describe the instability of the sTG branch, as manifested by its avoided level crossings with many excited bound states as the coupling strength is tuned. Based on this, we observe that the modified stability of the sTG gas originates from its distinct spectral response under a weak dipolar interaction, as compared to all the bound-state channels it decays into. Specifically, given the form of the dipolar repulsion as \(\sim 1/r^{3}\) (\(r\) is the inter-particle distance), it can accumulate much more interaction energy for the localized bound states than for the extended sTG branch. As a result, as illustrated in Fig.1, the avoided level crossing between the two branches shifts to the strong-attraction regime if the dipolar force is repulsive, leading to a smaller wave-function overlap and thus a narrower width at their crossing. This enhances the stability of the sTG gas. Alternatively, when switching to an attractive dipolar force, the inter-branch crossing moves to the weak-attraction side with a broader width, giving a less stable sTG gas. These effects, consistent with the experimental observations in [15], are found to universally apply to identical bosons and spin-1/2 fermions. Our results thus suggest a general tool to tune the stability of a target state by manipulating its decay channel under designed potentials.
Figure 1: (Color online) Illustration for the modified stability of sTG gas by dipolar interaction \(V_{dd}\). Red dashed line marks the energy level of sTG gas, which can become unstable due to the hybridization with an excited bound state (EBS, dotted line) at their avoided level crossing. In the presence of a weak \(V_{dd}\), sTG is hardly affected in energy while the EBS spectrum can be shifted visibly due to its localized wave function and thus large response to \(V_{dd}(\sim 1/r^{3})\). For a repulsive \(V_{dd}(>0)\), the EBS energy is up-shifted and the avoided crossing moves to \(1/g\to 0^{-}\) with a narrower width, giving rise to a more stable sTG gas with less hybridization with EBS. In comparison, an attractive \(V_{dd}(<0)\) leads to a down-shifted EBS level and thus the avoided crossing moves to \(1/g\to-\infty\) with a broader width, giving a less stable sTG gas.
We consider the following Hamiltonian (\(\hbar=1\)):
\[H = \sum_{i}\left(-\frac{1}{2m}\frac{\partial^{2}}{\partial x_{i}^{2 }}+\frac{1}{2}m\omega^{2}x_{i}^{2}\right) \tag{1}\] \[+\sum_{\langle i,j\rangle}\Big{(}g\delta(x_{i}-x_{j})+V_{dd}(x_{ i}-x_{j})\Big{)};\]
here \(x_{i}\) is the 1D coordinate; \(\omega\) is the harmonic trap frequency, and accordingly we define the trap length \(l=1/\sqrt{\mu\omega}\) (\(\mu=m/2\) is the reduced mass); \(g=-1/(\mu a)\) is the contact coupling with 1D scattering length \(a\); for the dipolar interaction \(V_{dd}(r)\), since its short-range part is greatly modified by higher transverse modes in realistic quasi-1D geometry[17; 18; 19], here we take a short-range cutoff \(r_{c}(=0.15l)\) and simplify it as \(D/|r|^{3}\) for \(r>r_{c}\) and 0 otherwise.
The three-body problem of identical bosons or spin-1/2 fermions can be exactly solved based on (1). To facilitate later discussions, we shall mainly focus on the fermion case (\(\downarrow\uparrow\uparrow\)), where some analytical results can be obtained. Considering a spin-\(\downarrow\) atom at \(x_{1}\) and two \(\uparrow\) atoms at \(x_{2},x_{3}\), we define \(r=x_{2}-x_{1}\) and \(\rho=\frac{2}{\sqrt{3}}(x_{3}-(x_{1}+x_{2})/2)\) to describe the relative motions, respectively, within a \(\downarrow\)-\(\uparrow\) dimer and between the dimer and the remaining fermion. Writing the three-body ansatz in the center-of-mass (CoM) frame as \(\Psi(r,\rho)=\sum_{mn}c_{mn}\phi_{m}(r)\phi_{n}(\rho)\), where \(\phi_{m}\) and \(\phi_{n}\) are harmonic eigen-states along \(r\) and \(\rho\) with eigen-energies \(\epsilon_{k}=(k+1/2)\omega\) (\(k=m,n\)), and plugging it into the Schrödinger equation \(H\Psi(r,\rho)=E\Psi(r,\rho)\), we obtain[20]
\[(E-\epsilon_{m}-\epsilon_{n})c_{mn}=g\sum_{ij}c_{ij}\phi_{i}(0) \Big{(}\phi_{m}(0)\delta_{j,n}+\eta A_{mn,j}\Big{)}\] \[+D\sum_{ij}c_{ij}\Big{(}B_{m,i}\delta_{j,n}+B_{mn,ij}^{+}+B_{mn, ij}^{-}\Big{)}. \tag{2}\]
where \(\eta=-1\) for fermions and 2 for bosons; \(A_{mn,j}=\int d\rho\phi_{m}(\sqrt{3}\rho/2)\phi_{n}(-\rho/2)\phi_{j}(\rho)\); \(B_{m,i}=\int_{|r|>r_{c}}dr\phi_{m}(r)\phi_{i}(r)/|r|^{3}\); and \(B_{mn,ij}^{\pm}=\int_{|r_{\pm}|>r_{c}}drd\rho\phi_{m}(r)\phi_{n}(\rho)\phi_{i}( r)\phi_{j}(\rho)/|r_{\pm}|^{3}\) (here \(r_{\pm}=\pm r/2+\sqrt{3}\rho/2\)). By diagonalizing the matrix equation (2), both \(E\) and \(\{c_{mn}\}\) can be obtained.
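To make the procedure concrete, a minimal numerical sketch of this diagonalization is given below (written for this discussion, not the authors' code; all function names and the cutoff `NMAX` are illustrative). It builds the truncated matrix form of Eq. (2) for the fermionic case with \(D=0\), in units \(\hbar=\omega=\mu=1\) (so \(l=1\)); identifying the physically antisymmetrized branch among the eigenvalues, and the slow basis convergence for a contact interaction, are left aside here.

```python
# Illustrative sketch: truncate and diagonalize the matrix equation (2) with
# D = 0 (contact interaction only), eta = -1 for the fermionic case.
# Units: hbar = omega = mu = 1, so the trap length l = 1 and eps_k = k + 1/2.
import numpy as np
from numpy.polynomial.hermite import hermval
from scipy.integrate import quad
from math import factorial, pi, sqrt

NMAX = 10          # basis cutoff along each relative coordinate (convergence is slow)
ETA = -1           # eta = -1 for fermions, 2 for bosons

def phi(k, x):
    """Harmonic-oscillator eigenfunction phi_k(x) for mu = omega = 1."""
    c = np.zeros(k + 1); c[k] = 1.0
    return hermval(x, c) * np.exp(-x**2 / 2.0) / sqrt(2.0**k * factorial(k) * sqrt(pi))

def A(m, n, j):
    """A_{mn,j} = int d(rho) phi_m(sqrt(3) rho / 2) phi_n(-rho / 2) phi_j(rho)."""
    f = lambda u: phi(m, sqrt(3) * u / 2) * phi(n, -u / 2) * phi(j, u)
    return quad(f, -np.inf, np.inf)[0]

def spectrum(g):
    """Eigen-energies of the truncated version of Eq. (2) at coupling g, D = 0."""
    idx = [(m, n) for m in range(NMAX) for n in range(NMAX)]
    phi0 = np.array([phi(k, 0.0) for k in range(NMAX)])
    Atab = np.array([[[A(m, n, j) for j in range(NMAX)]
                      for n in range(NMAX)] for m in range(NMAX)])
    H = np.zeros((len(idx), len(idx)))
    for p, (m, n) in enumerate(idx):
        for q, (i, j) in enumerate(idx):
            H[p, q] = g * phi0[i] * (phi0[m] * (j == n) + ETA * Atab[m, n, j])
            if (m, n) == (i, j):
                H[p, q] += (m + 0.5) + (n + 0.5)
    return np.sort(np.linalg.eigvals(H).real)

print(spectrum(g=-2.0)[:6])   # a few lowest levels at an attractive coupling
```

Scanning such a spectrum over \(1/g\) traces, in spirit, the branches of Fig. 2(a); a quantitative comparison requires a much larger basis than this sketch uses.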
We remark that the trapped three-body system comprises a minimal yet fundamental model to describe the instability of the sTG gas. To begin with, let us consider three fermions (\(\downarrow\uparrow\uparrow\)) without the dipolar interaction (\(D=0\)). Its spectrum has been studied previously[7; 8; 10], and here we shall focus on the avoided level crossings of the sTG branch with a sequence of excited bound states, as labeled by \(n=1,2,3...\) in Fig.2(a) from the weak- to the strong-coupling regime. Near \(1/g\to 0^{-}\), these bound states are essentially composed of a tight \(\uparrow\downarrow\) dimer plus a free \(\uparrow\) atom at excited levels, as described by the atom-dimer wavefunction:
\[\psi_{ad}^{(m)}=\Phi_{d}(r)\phi_{m}(\rho)-(x_{2}\leftrightarrow x_{3}), \tag{3}\]
with energy
\[E_{ad}^{(m)}=E_{d}+\epsilon_{m}. \tag{4}\]
Here \(\Phi_{d}\) is the dimer wavefunction with energy \(E_{d}\), and \(\phi_{m}\) is the free fermion state with energy \(\epsilon_{m}\). On the other hand, for the repulsive sTG branch near resonance, one can treat \(1/g\) as a small parameter and construct the effective spin-chain model \(H=J/g\sum_{i}(\mathbf{s}_{i}\cdot\mathbf{s}_{i+1}-1/4)\)[11; 12; 13; 14], with \(J\) the spin-exchange amplitude between neighboring spin-ordered states. This gives the wavefunction and energy of sTG gas as:
\[\Psi_{sTG} = \Psi_{0}-\frac{1}{g}\Psi_{1}; \tag{5}\] \[E_{sTG} = E_{0}-\frac{3J}{2g}, \tag{6}\]
where \(\Psi_{0}\) is the fermionized wavefunction in the hard-core limit with energy \(E_{0}\), and \(\Psi_{1}\) denotes the first-order correction when an \(\uparrow\downarrow\) pair comes close together (see Supplementary materials[20]). In Fig.2(b), we take the second avoided crossing (\(n=2\)) as an example and show that (3,5) can indeed well approximate the two branches far from their level crossings.
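The effective spin-chain Hamiltonian quoted above is simple enough to write down explicitly. The following small sketch (illustrative only; the names are ours, and \(J/g\) is supplied as an input number rather than computed from the trap wavefunctions) builds it for three sites and returns its spectrum; for \(J/g<0\) (the sTG side) its largest eigenvalue is \(-3J/(2g)\), matching the correction in Eq. (6).

```python
# Illustrative sketch: the effective spin-chain model H = (J/g) sum_i (s_i . s_{i+1} - 1/4)
# quoted above, written for a three-site chain (two bonds).  J/g is an input number here.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
I2 = np.eye(2, dtype=complex)

def site_op(single, site, n=3):
    """Embed a single-site spin operator at position `site` of an n-site chain."""
    mats = [single if k == site else I2 for k in range(n)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def spin_chain(J_over_g, n=3):
    H = np.zeros((2**n, 2**n), dtype=complex)
    for i in range(n - 1):
        bond = sum(site_op(s, i, n) @ site_op(s, i + 1, n) for s in (sx, sy, sz))
        H += J_over_g * (bond - 0.25 * np.eye(2**n))
    return H

# For J/g < 0 (sTG side), the largest eigenvalue equals -3J/(2g), as in Eq. (6).
print(np.round(np.linalg.eigvalsh(spin_chain(J_over_g=-0.5)), 6))
```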
Importantly, Eqs.(3,5) suggest qualitatively different real-space distributions of the sTG and atom-dimer states. To be concrete, all atom-dimer states have a dominant weight when one \(\uparrow\)-\(\downarrow\) pair comes close together, i.e., \(r\to 0\) or \(r_{+}=(r+\sqrt{3}\rho)/2\to 0\), given that they contain very localized dimer components. In contrast, the sTG state is dominated by its hard-core part, which is much more extended in real space and has only a little weight (\(\propto 1/g\)) along the dimer lines. This difference is numerically confirmed in Fig.2(c1,c3), where we have plotted the real-space \(\Psi\) for different branches; the results are consistent with the theoretical predictions from Eqs.(3, 5) shown in Fig.2(d1,d3).
The above wave-function analyses are crucial for understanding the loss mechanism of the sTG gas. As shown in Fig.2(b), at a certain \(g_{c}\) where the sTG state and one atom-dimer branch have a perfect energy match, they can hybridize strongly and open an energy gap. Accordingly, an avoided level crossing is generated near \(g_{c}\), and the resulting wavefunction inherits all the key features of the sTG and atom-dimer states, see Fig.2(c2). Therefore, when the sTG gas is driven to \(\sim g_{c}\), it tends to develop a visible atom-dimer feature and accumulate a large probability of finding \(\uparrow\) and \(\downarrow\) close together. Such an accumulation causes the instability of the sTG branch, since it can easily undergo an inelastic decay to a deep molecule and cause atom loss. We note that a similar inelastic loss due to couplings to excited molecular states was found previously for two atoms in anharmonic potentials[21; 22; 23].
Practically, the loss probability of the sTG gas depends on how strongly it couples to the excited bound states, which can be evaluated through the energy gap \(E_{G}\) at each avoided crossing. Numerically, we extract \(E_{G}\) as the minimal energy difference near each avoided crossing, and accordingly the location \(g_{c}\) can also be identified. The results for \(1/g_{c}\) and \(E_{G}\) are shown in Fig.2(e1) and (e2). We can see that as the crossing point moves away from resonance (smaller index \({}^{\prime}n^{\prime}\)), \(E_{G}\) becomes larger, consistent with a larger wavefunction overlap between \(\psi_{sTG}\) and \(\psi_{ad}^{(n)}\) (see the comparison in Fig.2(e2)). This indicates a less stable sTG gas, as it has a stronger hybridization with excited bound states in a broader parameter regime and thus can easily transition into decay channels. This picture is supported by the experimental observations that the sTG gas eventually collapses at intermediate attractive \(g\) as it moves away from resonance[3; 15].
Figure 2: (Color online). Hybridization between sTG branch and excited bound states for three harmonically trapped fermions (\(\uparrow\)\(\downarrow\)) without dipolar interaction. (a) Spectrum in the center-of-mass frame, with the lowest repulsive branch highlighted as red (the part at \(1/g<0\) is called as sTG gas). Indices \({}^{\prime}n=1,2,3...^{\prime}\) mark the locations of avoided level crossing between sTG and various excited atom-dimer states from weak to strong couplings. (b) Magnified spectrum near the second avoided crossing (\(n=2\)). The RGB color map is provided according to the wavefunction overlap with sTG (Eq.5, red) and atom-dimer (Eq.3, blue) states. (c1-c3) Contour plots of three-body wave-function \(\Psi\) in \((r,\rho)\) plane for three typical coupling strengths as marked in (b). For comparison, (d1,d3) show theoretical predictions to (c1,c3) using Eqs.(3,5). (e1,e2) show the location \(1/g_{c}\) and energy gap \(E_{G}\) for each avoided level crossing. For comparison, the theoretical prediction to \(1/g_{c}\) by comparing (4) and (6) is shown in (e1), and the wavefunction overlap between (3) and (5) is shown in (e2). Here the energy, length and coupling \(g\) are respectively in units of \(\omega\), \(l\) and \(\omega l\).
Given the loss mechanism of sTG gas as above, now we are ready to study the effect of dipolar interaction \(V_{dd}\). In accordance with Ref.[15], we focus on small dipolar interaction with \(|D|\ll\omega l^{3},gl^{2}\). We will show below that even such a weak \(V_{dd}\) can dramatically enhance or reduce the stability of sTG gas, depending on whether it is repulsive or attractive. The key lies in the distinct spectral responses between sTG and excited bound states when \(V_{dd}\) is turned on.
Taking a typical \(g\) away from any \(g_{c}\), in Fig.3(a) we plot the energy shifts \(\Delta E\) for the sTG gas and its nearest atom-dimer branch as varying \(|D|\). Clearly, the atom-dimer energy changes rapidly as \(|D|\) increases, while the sTG energy changes much more slowly. This can be attributed to very different real-space distributions of the two states. Namely, the atom-dimer is more localized along \(r\), \(r_{+}\to 0\) and thus it produces a significant spectral response to \(V_{dd}\sim(1/|r|^{3}+1/|r_{+}|^{3}+...)\); on the contrary, sTG is more extended and has little weight near \(r\), \(r_{+}\to 0\), thus producing a negligible energy shift to \(V_{dd}\). At small \(D\), \(\Delta E\) of each branch can be well approximated by mean-field shift \(\langle V_{dd}\rangle\), as shown by dotted lines in Fig.3(a). This allows us to analytically determine the shift of crossing point, \(\Delta(1/g_{c})\), by equating (4) and (6) after adding up \(\langle V_{dd}\rangle\) for each branch:
\[\Delta(1/g_{c})=\frac{\langle V_{dd}\rangle_{ad}-\langle V_{dd}\rangle_{sTG} }{\partial E_{d}/\partial(1/g)-3J/2}. \tag{7}\]
Eq.(7) tells that the distinct spectral responses, \(\langle V_{dd}\rangle_{sTG}\neq\langle V_{dd}\rangle_{ad}\), directly lead to a finite shift \(\Delta(1/g_{c})\) of the inter-branch crossing. Moreover, the sign of \(\Delta(1/g_{c})\) exactly follows \(D\): a positive \(D\) will drive the crossing point towards resonance (\(\Delta(1/g_{c})>0\)), while a negative \(D\) drives it to the opposite direction (\(\Delta(1/g_{c})<0\)). All these features are verified numerically in Fig.3(b), where Eq.(7) provides a reasonably good fit to the change of \(1/g_{c}\) at small \(|D|\). In addition, we observe that the shift \(\Delta(1/g_{c})\) becomes less pronounced for those crossings near resonance. This can be attributed to the large denominator of Eq.7 produced by \(\partial E_{d}/\partial(1/g)\propto g^{3}\) for deep dimers. Therefore, \(V_{dd}\) can hardly modify the level crossings near resonance, but visibly affect the crossings with small indices \({}^{\prime}n^{\prime}\) at intermediate \(g\).
Given the intimate relation between \(1/g_{c}\) and \(E_{G}\) (see Fig.2(e1,e2)), the shift of \(1/g_{c}\) by \(V_{dd}\) inevitably leads to a change of \(E_{G}\), as plotted in Fig.3(c). For a repulsive \(V_{dd}(D>0)\), all \(1/g_{c}\) shift towards the resonance regime with a decreasing energy gap \(E_{G}\), indicating a more stable sTG gas; while for an attractive \(V_{dd}(D<0)\), all \(1/g_{c}\) shift away from resonance with an increasing \(E_{G}\), indicating a less stable sTG gas. Again, \(E_{G}\) changes most visibly for the outermost crossing (\({}^{\prime}n=1^{\prime}\)). Remarkably, all \(E_{G}\) become vanishingly small at a weak \(D/(\omega l^{3})\sim 0.01\), suggesting an extremely stable sTG gas in the whole \(g<0\) regime. We note that if we take the same \(D\) as used in the experiment[15], as marked by the vertical dotted line in Fig.3(c), the first \(E_{G}\) is reduced to about one third of its original value at \(D=0\), and all the remaining \(E_{G}\) become negligibly small.
Above we have shown how a weak repulsive or attractive \(V_{dd}\) can significantly affect the stability of the fermionic sTG gas, while leaving its energy barely changed. The key ingredient here is the dramatic energy response of the excited bound state to \(V_{dd}\), which leads to a modified avoided crossing (in both location and width) between it and the sTG branch. In this way, the stability of the sTG gas is changed, and its dependence on the sign of \(V_{dd}\) (or \(D\)) naturally results. We have checked that the same physics also holds for three trapped bosons[20], with the minor difference that bosons support cluster bound states that cannot fall into the atom-dimer description. Finally, we remark that since the physics above does not rely on the atom number, it also applies to large systems as in the experiment[15]. In that case, there are many more excited bound states from higher harmonic levels that have (avoided) level crossings with the sTG branch. However, only those far from resonance are responsible for the instability of the sTG gas, due to their stronger hybridization in-between. The presence of a weak dipolar interaction is expected to shift their energies visibly due to their localized nature (cf. Fig.1), giving rise to the significant change in the stability of the sTG gas.
Figure 3: (Color online). Response of three fermions to a weak dipolar interaction with strength \(D\). (a) Energy shifts of sTG and excited atom-dimer branches as functions of \(|D|\) at given \(1/g=-0.27\). Dotted lines show mean-field shifts \(\langle V_{dd}\rangle\). (b) Locations of three avoided crossings (as marked by \({}^{\prime}n=1,2,3^{\prime}\) in Fig.2(a)) as functions of \(|D|\). Dotted lines shows linear fits according to Eq.7. (c) Associated energy gap \(E_{G}\) of each avoided crossing as a function of \(|D|\). Dotted vertical line marks the strength of repulsive \(D\) used in experiment[15]. Here the energy \(E\), coupling \(g\), and dipolar force \(D\) are respectively in units of \(\omega\), \(\omega l\) and \(\omega l^{3}\).
In summary, we have revealed the underlying mechanism for a mysterious phenomenon recently observed in 1D sTG gas, namely, its greatly enhanced (reduced) stability by a weak repulsive (or attractive) dipolar interaction. The key to this phenomenon is the significant spectral response of excited bound states -- the decay channel of sTG gas -- to the dipolar force. Therefore the sTG gas is indirectly affected due to the inter-branch hybridization at their level crossing, leading to a modification of sTG stability but not its energy. In this regard, our results suggest a powerful tool to tune the stability of target state by manipulating its decay channel under designed potentials. Such state-selective manipulation may help to engineer many more fascinating long-lived quantum states in cold atoms platform in future.
**Acknowledgement.** We thank Benjamin Lev for positive and helpful feedback on our manuscript. This work is supported by the National Key Research and Development Program of China (2018YFA0307600), the National Natural Science Foundation of China (12074419, 12134015), and the Strategic Priority Research Program of Chinese Academy of Sciences (XDB3300000).
|
2303.10841
|
Gradient-flowed order parameter for spontaneous gauge symmetry breaking
|
The gauge-invariant two-point function of the Higgs field at the same
spacetime point can make a natural gauge-invariant order parameter for
spontaneous gauge symmetry breaking. However, this composite operator is
ultraviolet divergent and is not well defined. We propose using a gradient flow
to cure the divergence from putting the fields at the same spacetime point. As
a first step, we compute it for the Abelian Higgs model with a positive mass
squared at the one-loop order in the continuum theory using the saddle-point
method to estimate the finite part. The order parameter consistently goes to
zero in the infrared limit of the infinite flow time.
|
Kengo Kikuchi, Kenji Nishiwaki, Kin-ya Oda
|
2023-03-20T03:12:26Z
|
http://arxiv.org/abs/2303.10841v2
|
# **Gradient-flowed order parameter for spontaneous gauge symmetry breaking**
###### Abstract
The gauge-invariant two-point function of the Higgs field at the same spacetime point can make a natural gauge-invariant order parameter for spontaneous gauge symmetry breaking. However, this composite operator is ultraviolet divergent and is not well-defined. We propose using a gradient flow to cure the divergence from putting the fields at the same spacetime point. As a first step, we compute it for the Abelian Higgs model with a positive mass-squared at the one-loop order in the continuum theory using the saddle-point method to estimate the finite part. The order parameter consistently goes to zero in the infrared limit of the infinite flow time.
\({}^{*}\) _RIKEN iTHEMS, Wako, Saitama 351-0198, Japan_
\({}^{\dagger}\) _Department of Physics, Shiv Nadar Institution of Eminence,_
_Gautam Buddha Nagar 201314, India_
\({}^{\ddagger}\) _Department of Mathematics, Tokyo Woman's Christian University, Tokyo 167-8585, Japan_
###### Contents
* 1 Introduction
* 2 Gradient flow of Abelian Higgs model
* 2.1 Setup
* 2.2 Gradient Flow
* 3 UV-finiteness of gauge two-point function
* 3.1 Configurations
* 4 Higgs two-point function as flowed order parameter
* 4.1 Configurations
* 4.2 Evaluation of momentum integrals
* 4.2.1 Divergent part in saddle-point approximation
* 4.2.2 UV divergent part without approximation
* 4.3 Asymptotic form in large flow time
* 5 Summary and discussions
* A Details on Higgs two-point function at one loop
* A.1 Leading order
* A.2 One-loop order
* A.2.1 Diagram A
* A.2.2 Diagram B
* A.2.3 Diagram C
* A.2.4 Diagram D
* A.2.5 Diagram E
* A.2.6 Diagram a
* A.2.7 Diagram b
* A.2.8 Diagram c
Introduction
A non-perturbative definition of gauge theory has been given only on a lattice so far. A gauge symmetry cannot be spontaneously broken by the Higgs mechanism when one path integrates over all the gauge configurations on the lattice [1]. In the continuum perturbative calculation, the vacuum expectation value (VEV) of the Higgs field, \(\left\langle\widehat{\Phi}(x)\right\rangle\), is regarded as the order parameter of the spontaneous symmetry breaking (SSB). The crucial argument for the impossibility of the SSB in the non-perturbative lattice gauge theory is that \(\widehat{\Phi}(x)\) is not a gauge-invariant operator, and hence its expectation value \(\left\langle\widehat{\Phi}(x)\right\rangle=\int D\Phi\,\Phi(x)\,e^{-S[\Phi]}\) becomes zero, due to the Elitzur theorem when one path-integrates over all the gauge configurations without a gauge fixing [1, 2, 3]. In the lattice gauge theory, the SSB can be measured e.g. by the two-point correlation function of Higgs field at two different points, which is not gauge invariant, by limiting the path-integral to a particular fixed gauge slice; see, e.g., Ref. [4] and also Ref. [5] for a recent review. It is desirable to have a gauge-invariant order parameter that does not require the gauge fixing.
A natural gauge invariant operator is \(\widehat{\Phi}^{\dagger}(x)\,\widehat{\Phi}(x)\). The problem is that this operator suffers from ultraviolet (UV) divergences by placing two operators \(\widehat{\Phi}^{\dagger}(x)\) and \(\widehat{\Phi}(x)\) at the same spacetime point \(x\). One might then turn to a two-point function inserted with a path-ordered Wilson line \(\widehat{\Phi}^{\dagger}(x)\,\mathrm{P}\exp\!\left(ig\int_{y}^{x}\mathrm{d}z^ {\mu}\widehat{A}_{\mu}(z)\right)\widehat{\Phi}(y)\) in the infrared (IR) limit \(|x-y|\to\infty\). This operator contains both the Higgs VEV and the Wilson-line contribution. Therefore one must examine the scaling behavior in the limit \(|x-y|\to\infty\) in order to distinguish the symmetric, Higgs, and confinement phases.1 It would be worthwhile if one can provide a regularized composite operator for \(\widehat{\Phi}^{\dagger}(x)\,\widehat{\Phi}(x)\).2
Footnote 1: Here the confinement phase denotes when the Wilson line acquires a nontrivial (non-unity) expectation value; the Higgs phase denotes when only the Higgs acquires an expectation value; the symmetric phase denotes when both the expectation values of Higgs and Wilson line vanish.
Footnote 2: Other discussions on gauge dependence in phase structures are found, e.g., in [6, 7, 8, 9].
The gradient flow is a powerful tool to cure UV divergences by smearing the field by going into the extra flow-time direction [10, 11, 12, 13]. In particular, one can completely remove the UV divergences from placing the gauge fields at the same spacetime point in the correlation functions [12, 13]. The same argument holds for the quark fields except for the requirement for an extra field (wave-function) renormalization [14]. (Various other aspects have been explored in the context of the gradient flow, such as non-perturbative renormalization group [15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26], holographic theories [27, 28, 29, 30, 31, 32, 33], \(O(N)\) nonlinear sigma model and its large-\(N\) expansion [34, 35, 36, 37], supersymmetric gradient flow [38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49], phenomenological applications [50, 51, 52, 53, 54, 55, 56, 57, 58], and formal issues in the quantum field theory [59, 60, 61, 62, 63].)
In this paper, we propose a gauge invariant order parameter for the SSB using a gradient flow--the _flowed order parameter_: \(\left\langle\widehat{\Psi}^{\dagger}(t,x)\,\widehat{\Psi}(s,x)\right\rangle\), where \(\Psi^{\dagger}(t,x)\) and \(\Psi(s,x)\) are the classical solutions to the gradient flow equation toward the flow times \(t\) and \(s\) with the boundary conditions \(\Psi^{\dagger}(0,x)=\Phi^{\dagger}(x)\) and \(\Psi(0,x)=\Phi(x)\), respectively. This order parameter naturally cures the above-mentioned UV divergence from placing Higgs fields at the same spacetime point thanks to the smearing from the gradient flow.
As a first step, in one of the simplest examples, we compute the gauge-invariant flowed order parameter in gauge-fixed perturbation theory in the symmetric phase of the Abelian Higgs model, which is asymptotically non-free.
The organization of this paper is as follows. In section 2, we describe our setup of the
model and its gradient flow equation. In section 3, we show the UV-finiteness of the gauge-boson two-point function at the one-loop level. In section 4, we consider the Higgs two-point function as a flowed order parameter and analyze the behavior of the finite part at the large flow time limit. Section 5 provides a summary and discussions. In Appendix A, we provide details on the calculations of the Higgs two-point function at the one-loop level.
## 2 Gradient flow of Abelian Higgs model
We first present the setup for the analysis in this paper and then show the gradient flow equations and their solutions.
### Setup
We study the Abelian Higgs model with a \(U(1)\) Higgs field \(\Phi\) and a gauge field \(A_{\mu}\) in the \(d\)-dimensional Euclidean spacetime, with the metric \(\delta_{\mu\nu}=\delta^{\mu\nu}=\left(\mathrm{I}_{d}\right)_{\mu\nu}=\left( \mathrm{I}_{d}\right)^{\mu\nu}\),
\[S :=S_{A}+S_{\Phi}, \tag{1}\] \[S_{A} :=\frac{1}{g_{0}^{2}}\int\mathrm{d}^{d}x\left\{\frac{1}{4}F_{\mu \nu}F_{\mu\nu}+\frac{1}{2\xi_{0}}\left(\partial_{\mu}A_{\mu}\right)^{2}\right\},\] (2) \[S_{\Phi} :=\frac{1}{g_{0}^{2}}\int\mathrm{d}^{d}x\left\{\left(D_{\mu} \Phi\right)^{\dagger}D_{\mu}\Phi+m_{0}^{2}\left(\Phi^{\dagger}\Phi\right)+ \frac{\lambda_{0}}{g_{0}^{2}}\left(\Phi^{\dagger}\Phi\right)^{2}\right\}, \tag{3}\]
where \(\mathrm{I}_{d}\) is the \(d\)-dimensional identity matrix; \(\Phi\) is a complex scalar field with the unit charge; \(g_{0}\), \(m_{0}\), \(\xi_{0}\), and \(\lambda_{0}\) are the bare gauge coupling, the bare mass, the bare gauge-fixing parameter, and the bare self-coupling of \(\Phi\), respectively; \(F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}\) is the Abelian field strength; and the covariant derivative \(D_{\mu}\) takes the form
\[D_{\mu}:=\partial_{\mu}-iA_{\mu}. \tag{4}\]
We have employed the non-canonically normalized fields \(\Phi=g_{0}\Phi^{\mathrm{can}}\) and \(A_{\mu}=g_{0}A_{\mu}^{\mathrm{can}}\), where the superscript "can" denotes a canonically normalized field, so that the loop counting coincides with the counting of powers of \(g_{0}\) in the gradient flow, as we will see. Here and hereafter, we write bare fields such as \(\Phi\) and \(A_{\mu}\) without any subscript, while renormalized fields carry the subscript "R", e.g., \(\Phi_{\mathrm{R}}\) and \(A_{\mathrm{R}\mu}\).
Hereafter, we frequently use the short-hand notation for the momentum integral,
\[\int_{p}:=\int\frac{\mathrm{d}^{d}p}{(2\pi)^{d}}, \tag{5}\]
where \(d=4-2\varepsilon\) with an infinitesimal variable \(\varepsilon>0\). The propagators are described as
\[\langle A_{\mu}(x)\,A_{\nu}(y)\rangle_{0} =\left(g_{0}^{2}\mu^{2\varepsilon}\right)\int_{p}\frac{e^{ip\cdot (x-y)}}{p^{2}}\left[\left(\delta_{\mu\nu}-\frac{p_{\mu}p_{\nu}}{p^{2}}\right)+ \xi_{0}\frac{p_{\mu}p_{\nu}}{p^{2}}\right], \tag{6}\] \[\langle\Phi(x)\,\Phi^{\dagger}(y)\rangle_{0} =\left(g_{0}^{2}\mu^{2\varepsilon}\right)\int_{p}\frac{e^{ip\cdot (x-y)}}{p^{2}+m_{0}^{2}}, \tag{7}\]
where the expectation value is defined in the language of path integral as
\[\langle\mathcal{O}\rangle_{0}:=\frac{\int\mathcal{D}A\mathcal{D}\Phi\mathcal{D }\Phi^{\dagger}\mathcal{O}\,e^{-S_{\mathrm{free}}}}{\int\mathcal{D}A\mathcal{D }\Phi\mathcal{D}\Phi^{\dagger}e^{-S_{\mathrm{free}}}}, \tag{8}\]
in which \(S_{\text{free}}\) denotes the free quadratic part of the action (1). Here, we introduce the renormalization scale \(\mu\) to compensate for the physical dimension of the gauge coupling \(g_{0}\).
The relations between the bare fields and renormalized fields are introduced,
\[A_{\mu} =\sqrt{Z_{A}}A_{\text{R}\mu}, \Phi =\sqrt{Z_{\Phi}}\Phi_{\text{R}}, \tag{9}\]
where the wave function renormalization factors are parametrized as
\[Z_{A} :=1+\delta Z_{A}, Z_{\Phi} :=1+\delta Z_{\Phi}. \tag{10}\]
We further introduce the relations between the bare and renormalized couplings,
\[m_{0}^{2} =Z_{\Phi}^{-1}\left(m^{2}+\delta m^{2}\right), g_{0} =Z_{A}^{-1/2}\left(g+\delta g\right),\] \[\xi_{0} =1+\delta\xi, \lambda_{0} =Z_{\Phi}^{-2}\left(\lambda+\delta\lambda\right), \tag{11}\]
where we adopt the notation that renormalized couplings carry no subscript, such as \(m\) and \(g\); \(\delta m^{2}\), \(\delta g\), \(\delta\xi\), and \(\delta\lambda\) are the counterterms for the scalar's mass, the gauge coupling, the gauge-fixing parameter, and the scalar's self-coupling, respectively.
In the minimal-subtraction scheme, the counterterms are determined at the one-loop level as
\[\delta Z_{A} =\frac{1}{\left(4\pi\right)^{2}\varepsilon}\left(-\frac{g^{2}}{3 }\right), \delta Z_{\Phi} =\frac{1}{\left(4\pi\right)^{2}\varepsilon}\left(2g^{2}\right),\] \[\delta m^{2} =\frac{1}{\left(4\pi\right)^{2}\varepsilon}\left(4\lambda m^{2} -g^{2}m^{2}\right), \delta g =0,\] \[\delta\xi =\frac{1}{\left(4\pi\right)^{2}\varepsilon}\left(-\frac{g^{2}}{3 }\right), \delta\lambda =\frac{1}{\left(4\pi\right)^{2}\varepsilon}\left(10\lambda^{2}+5g ^{4}-2g^{2}\lambda\right), \tag{12}\]
which leads to3
Footnote 3: The one-loop relationship between \(\lambda_{0}\) and \(\lambda\) is not necessary for the following discussion.
\[m_{0}^{2} =m^{2}+\frac{1}{\left(4\pi\right)^{2}\varepsilon}\left(4\lambda m ^{2}-3g^{2}m^{2}\right), \tag{13}\] \[g_{0} =g+\frac{1}{\left(4\pi\right)^{2}\varepsilon}\left(\frac{g^{3}}{ 6}\right),\] (14) \[\lambda_{0} =\lambda+\frac{1}{\left(4\pi\right)^{2}\varepsilon}\left(10 \lambda^{2}+5g^{4}-6g^{2}\lambda\right). \tag{15}\]
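As a small cross-check of the above relations (our addition, not part of the original derivation), one can expand Eqs. (9)-(11) with the counterterms of Eq. (12) to one-loop order symbolically. The sketch below uses sympy with a hypothetical bookkeeping parameter \(h\) that counts loop factors.

```python
# Minimal sympy sketch (assumption: h is a bookkeeping parameter counting loops).
import sympy as sp

g, lam, m2, eps, h = sp.symbols('g lambda m2 epsilon h', positive=True)
loop = h / ((4*sp.pi)**2 * eps)

# One-loop counterterms of Eq. (12)
dZA   = loop * (-g**2/3)
dZPhi = loop * (2*g**2)
dm2   = loop * (4*lam*m2 - g**2*m2)
dg    = 0
dlam  = loop * (10*lam**2 + 5*g**4 - 2*g**2*lam)

# Bare quantities from Eqs. (9)-(11)
m2_0  = (1 + dZPhi)**(-1) * (m2 + dm2)
g_0   = (1 + dZA)**sp.Rational(-1, 2) * (g + dg)
lam_0 = (1 + dZPhi)**(-2) * (lam + dlam)

expand = lambda expr: sp.expand(sp.series(expr, h, 0, 2).removeO()).subs(h, 1)
print(expand(m2_0))   # -> m^2 + (4*lambda - 3*g^2) m^2 /((4 pi)^2 eps), i.e. Eq. (13)
print(expand(g_0))    # -> g + g^3 /(6 (4 pi)^2 eps),                    i.e. Eq. (14)
print(expand(lam_0))  # -> lambda + (10 lambda^2 + 5 g^4 - 6 g^2 lambda)/((4 pi)^2 eps), Eq. (15)
```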
### Gradient Flow
The gradient flow is an efficient method to obtain well-defined expectation values of composite operators through diffusion along an extra direction parametrized by \(t\), called the flow time [10, 11, 12, 13, 14].4 For \(A_{\mu}(x)\) and \(\Phi(x)\), we introduce a flow-time-dependent gauge field \(B_{\mu}(t,x)\) and a complex scalar field \(\Psi(t,x)\), respectively, for \(t\geq 0\) under the initial condition at \(t=0\):
Footnote 4: See also the lecture materials provided by Hiroshi Suzuki for the lectures at Osaka University from 14th to 16th Nov. 2018 [64, 65].
\[B_{\mu}(t=0,x)=A_{\mu}(x)\,,\hskip 28.452756pt\Psi(t=0,x)=\Phi(x)\,, \hskip 28.452756pt\Psi^{\dagger}(t=0,x)=\Phi^{\dagger}(x)\,. \tag{16}\]
We introduce a flowed action:
\[\widetilde{S} =\frac{1}{g_{0}^{2}}\int_{0}^{\infty}\mathrm{d}t\int\mathrm{d}^{d}x \left\{\frac{1}{4}G_{\mu\nu}(t,x)\,G_{\mu\nu}(t,x)+\frac{\alpha_{0}}{2}\big{(} \partial_{\mu}B_{\mu}(t,x)\big{)}^{2}+\big{(}D_{\mu}\Psi(t,x)\big{)}^{\dagger}D _{\mu}\Psi(t,x)\right\}, \tag{17}\]
where \(G_{\mu\nu}:=\partial_{\mu}B_{\nu}-\partial_{\nu}B_{\mu}\) and \(\alpha_{0}\) is a dimensionless gauge-fixing constant in the bulk (\(t>0\)). In the flowed action, we have dropped the scalar mass and self-interaction terms, since they would involve the bare parameters whose divergent counterterms are tied to the boundary theory at \(t=0\), and there is no bulk divergence that would cancel such counterterms [49, 66].
The gradient-flow equations are a kind of diffusion equation introduced as
\[\partial_{t}B_{\mu}(t,x) =-g_{0}^{2}\frac{\delta\widetilde{S}}{\delta B_{\mu}(t,x)}\] \[=\left(\delta_{\mu\nu}\,\partial^{2}-\partial_{\mu}\partial_{\nu }\right)B_{\nu}(t,x)+\alpha_{0}\,\partial_{\mu}\partial_{\nu}B_{\nu}(t,x)+R_{ \mu}^{B}(t,x)\,, \tag{18}\] \[\partial_{t}\Psi(t,x) =-g_{0}^{2}\frac{\delta\widetilde{S}}{\delta\Psi^{\dagger}(t,x)}\] \[=\partial^{2}\Psi(t,x)+R^{\Psi}(t,x)\,,\] (19) \[\partial_{t}\Psi^{\dagger}(t,x) =-g_{0}^{2}\frac{\delta\widetilde{S}}{\delta\Psi(t,x)}\] \[=\partial^{2}\Psi^{\dagger}(t,x)+R^{\Psi^{\dagger}}(t,x)\,, \tag{20}\]
with the non-linear interaction parts,
\[R_{\mu}^{B} :=i\left(\Psi^{\dagger}\partial_{\mu}\Psi-\left(\partial_{\mu} \Psi^{\dagger}\right)\Psi\right)+2B_{\mu}\Psi^{\dagger}\Psi, \tag{21}\] \[R^{\Psi} :=-2i\,B_{\mu}\partial_{\mu}\Psi-i\left(\partial_{\mu}B_{\mu} \right)\Psi-B_{\mu}B_{\mu}\Psi,\] (22) \[R^{\Psi^{\dagger}} :=2i\,B_{\mu}\partial_{\mu}\Psi^{\dagger}+i\left(\partial_{\mu} B_{\mu}\right)\Psi^{\dagger}-B_{\mu}B_{\mu}\Psi^{\dagger}, \tag{23}\]
where the symbol \(\delta\) represents the variation and we introduced \(\partial^{2}:=\partial_{\mu}\partial_{\mu}\).
We comment that if all the flowed fields have fixed values at \(t\to\infty\), the left-hand sides of the gradient flow equations become zero, and hence the limiting values of the flowed fields are given by the variation of the flowed action (17). Since the flowed Higgs field has no potential in the flowed action, any constant flowed Higgs field (independent of \(x\)) can be a solution to the variation of the flowed action. That is, this setup allows any value for the flowed Higgs field in the large \(t\) limit. It is non-trivial whether the flowed order parameter takes a non-zero value or not.
The flow equations can be formally solved as
\[B_{\mu}(t,x) =\int\mathrm{d}^{d}y\left\{\left[K_{t}^{(\alpha_{0})}\left(x-y \right)\right]_{\mu\nu}A_{\nu}(y)+\int_{0}^{t}\mathrm{d}s\left[K_{t-s}^{( \alpha_{0})}\left(x-y\right)\right]_{\mu\nu}R_{\nu}^{B}(s,y)\right\}, \tag{24}\] \[\Psi(t,x) =\int\mathrm{d}^{d}y\left\{K_{t}\left(x-y\right)\Phi(y)+\int_{0} ^{t}\mathrm{d}sK_{t-s}\left(x-y\right)R^{\Psi}(s,y)\right\},\] (25) \[\Psi^{\dagger}(t,x) =\int\mathrm{d}^{d}y\left\{K_{t}\left(x-y\right)\Phi^{\dagger}(y) +\int_{0}^{t}\mathrm{d}sK_{t-s}\left(x-y\right)R^{\Psi^{\dagger}}(s,y)\right\}, \tag{26}\]
where the following heat kernels are incorporated:
\[\left[K_{t}^{\left(\alpha_{0}\right)}\left(x\right)\right]_{\mu\nu} :=\int_{p}e^{ip\cdot x}\left[\left(\delta_{\mu\nu}-\frac{p_{\mu}p_ {\nu}}{p^{2}}\right)e^{-tp^{2}}+\frac{p_{\mu}p_{\nu}}{p^{2}}e^{-\alpha_{0}tp^{2 }}\right], \tag{27}\] \[K_{t}\left(x\right) :=\int_{p}e^{ip\cdot x}e^{-tp^{2}}. \tag{28}\]
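As a side remark (our addition, not in the original text), the scalar heat kernel of Eq. (28) is simply the Gaussian \(K_{t}(x)=(4\pi t)^{-d/2}\,e^{-x^{2}/(4t)}\) in position space; the minimal sympy sketch below checks in \(d=4\) that this form satisfies the free diffusion equation \(\partial_{t}K=\partial^{2}K\).

```python
# Minimal sympy sketch in d = 4, using the radial Laplacian f'' + (3/r) f'.
import sympy as sp

t, r = sp.symbols('t r', positive=True)
K = sp.exp(-r**2/(4*t)) / (4*sp.pi*t)**2          # position-space heat kernel in d = 4
laplacian = sp.diff(K, r, 2) + (3/r)*sp.diff(K, r)
print(sp.simplify(sp.diff(K, t) - laplacian))      # 0: K solves the diffusion equation
```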
Concrete forms of the classical formal solutions in Eqs. (24), (25) and (26) can be derived in a recursive way, depending on how many times each of the corresponding non-linear interaction terms in Eqs. (21), (22) and (23) is incorporated and also on the order of the incorporations. We adopt the following short-hand notation to reduce the complexity of the following description:
\[\widetilde{B}_{\mu}(t,x) :=\int\mathrm{d}^{d}y\left[K_{t}\left(x-y\right)\right]_{\mu\nu} A_{\nu}(y)\,,\] \[\widetilde{\Psi}(t,x) :=\int\mathrm{d}^{d}yK_{t}\left(x-y\right)\Phi(y)\,, \tag{29}\] \[\widetilde{\Psi}^{\dagger}(t,x) :=\int\mathrm{d}^{d}yK_{t}\left(x-y\right)\Phi^{\dagger}(y)\,,\]
where
\[\left[K_{t}\left(x\right)\right]_{\mu\nu}:=\left[K_{t}^{\left( \alpha_{0}=1\right)}\left(x\right)\right]_{\mu\nu}=\int_{p}e^{ip\cdot x}\delta _{\mu\nu}e^{-tp^{2}}. \tag{30}\]
The quantum two-point correlation functions of the flowed gauge bosons and the flowed Higgs bosons are evaluated through the path-integral operation in Eq. (8) as
\[\left\langle B_{\mu}(t,x)\,B_{\nu}(s,y)\,\mathcal{X}\right\rangle_{0},\qquad \left\langle\Psi(t,x)\,\Psi^{\dagger}(s,y)\,\mathcal{X}\right\rangle_{0}, \tag{31}\]
where \(\mathcal{X}\) symbolically represents possible contributions from perturbative expansions of \(e^{-S_{\mathrm{int}}}\), where \(S_{\mathrm{int}}\) is the interaction part of the original action in Eq. (1): \(\mathcal{X}\) is unity for leading-order (LO) calculations, while \(\mathcal{X}\) can become nontrivial (non-unity) at a loop level. It is straightforward to derive the LO result of the two-point functions:
\[\left\langle B_{\mu}(t,x)\,B_{\nu}(s,y)\right\rangle_{\mathrm{LO}} =\left\langle\widetilde{B}_{\mu}(t,x)\,\widetilde{B}_{\nu}(s,y) \right\rangle_{0}\] \[=\left\langle\int\mathrm{d}^{d}z\left[K_{t}^{\left(\alpha_{0} \right)}(x-z)\right]_{\mu\rho}A_{\rho}(z)\int\mathrm{d}^{d}\widetilde{z}\left[ K_{s}^{\left(\alpha_{0}\right)}(y-\widetilde{z})\right]_{\nu\sigma}A_{\sigma}( \widetilde{z})\right\rangle_{0}\] \[=\left(g_{0}^{2}\mu^{2\varepsilon}\right)\int_{p}\frac{e^{ip \cdot(x-y)}}{p^{2}}\left[\left(\delta_{\mu\nu}-\frac{p_{\mu}p_{\nu}}{p^{2}} \right)e^{-\left(t+s\right)p^{2}}+\xi_{0}\frac{p_{\mu}p_{\nu}}{p^{2}}e^{- \alpha_{0}(t+s)p^{2}}\right], \tag{32}\] \[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,y)\right\rangle_{\mathrm{ LO}} =\left\langle\widetilde{\Psi}(t,x)\,\widetilde{\Psi}^{\dagger}(s,y) \right\rangle_{0}\] \[=\left\langle\int\mathrm{d}^{d}zK_{t}(x-z)\,\Phi(z)\int\mathrm{d }^{d}\widetilde{z}K_{s}(y-\widetilde{z})\,\Phi^{\dagger}(\widetilde{z}) \right\rangle_{0}\] \[=\left(g_{0}^{2}\mu^{2\varepsilon}\right)\int_{p}\frac{e^{ip\cdot (x-y)-(t+s)p^{2}}}{p^{2}+m_{0}^{2}}, \tag{33}\]
where we have used the definition of the operation (8), Wick's theorem and the form of the heat kernels in Eqs. (27) and (28).5
Footnote 5: Currently, \(\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,y)\right\rangle=\left\langle\Psi^{ \dagger}(s,y)\,\Psi(t,x)\right\rangle\) is realised since we consider an Abelian gauge theory. For a flowed complex scalar field obeying the fundamental representation of the \(SU(N)\) non-Abelian gauge theory \(\Psi_{a}\left(a=1,2,...,N\right)\), the corresponding order parameter should be \(\left\langle\sum_{a=1}^{N}\Psi_{a}(t,x)\,\Psi_{a}^{\dagger}(s,y)\right\rangle =\left\langle\sum_{a=1}^{N}\Psi_{a}^{\dagger}(s,y)\,\Psi_{a}(t,x)\right\rangle\).
As we will see shortly, the two-point function of the flowed massless gauge boson \(B_{\mu}\) becomes finite without a wave-function renormalization of the flowed field, while that of the flowed complex scalar field \(\Psi\) still contains a divergent part. To eliminate this part systematically, we introduce a wave-function renormalization factor for \(\Psi\) as follows:
\[\Psi=\sqrt{\mathcal{Z}_{\Psi}}\Psi_{\mathrm{R}}, \tag{34}\]
where the counterterm \(\delta\mathcal{Z}_{\Psi}\) is defined as
\[\mathcal{Z}_{\Psi}:=1+\delta\mathcal{Z}_{\Psi}. \tag{35}\]
Finally, in addition to Eq. (29), we introduce the following symbols for later convenience,
\[\begin{split}\mathcal{B}_{\mu}(u,z)&:=\int \mathrm{d}^{d}w\int_{0}^{u}\mathrm{d}u^{\prime}K_{u-u^{\prime}}(z-w)\left[i \left(\widetilde{\Psi}^{\dagger}\partial_{\mu}\widetilde{\Psi}-\left( \partial_{\mu}\widetilde{\Psi}^{\dagger}\right)\widetilde{\Psi}\right) \right]_{(u^{\prime},w)},\\ \mathcal{P}(u,z)&:=\int\mathrm{d}^{d}w\int_{0}^{u} \mathrm{d}u^{\prime}K_{u-u^{\prime}}(z-w)\left[-2i\,\widetilde{B}_{\mu} \partial_{\mu}\widetilde{\Psi}-i\left(\partial_{\mu}\widetilde{B}_{\mu} \right)\widetilde{\Psi}\right]_{(u^{\prime},w)},\\ \mathcal{P}^{\dagger}(u,z)&:=\int\mathrm{d}^{d}w \int_{0}^{u}\mathrm{d}u^{\prime}K_{u-u^{\prime}}(z-w)\left[2i\,\widetilde{B} _{\mu}\partial_{\mu}\widetilde{\Psi}^{\dagger}+i\left(\partial_{\mu} \widetilde{B}_{\mu}\right)\widetilde{\Psi}^{\dagger}\right]_{(u^{\prime},w)}, \end{split} \tag{36}\]
where we introduced the short-hand notation for a function \(f\) of the flowed fields,
\[f\Big{(}B_{\mu}(u,z)\,,\Psi(u,z)\,,\Psi^{\dagger}(u,z)\Big{)}=\Big{[}f\Big{(} B_{\mu},\Psi,\Psi^{\dagger}\Big{)}\Big{]}_{(u,z)}\,. \tag{37}\]
## 3 UV-finiteness of gauge two-point function
As we saw in Eqs. (32) and (33), at LO the two-point functions of the gauge field and the Higgs field are constructed from the leading-order solutions, shown in Eq. (29), of the gradient flow equations (18)-(20). The next-to-leading-order two-point functions of the flowed fields can be calculated in a similar way, namely, by taking an expectation value of two iteratively constructed solutions [refer to Eq. (8), also Eqs. (24)-(26)]. At the one-loop level, the possible configurations are classified into two categories:
* It involves at least one field that is evolved non-linearly in its flow equation.
* It does not involve fields that are evolved non-linearly in their flow equations, while a one-loop structure is observed at the boundary with the zero value of the flow time.
Another critical point is that we take into account only the configurations depicted by one-particle irreducible (1PI) flow Feynman diagrams. At the one-loop level, in the language of the flow Feynman diagram [13], possible configurations contributing to the gauge-boson
two-point function are summarized in the eleven diagrams shown in Fig. 1. In the following loop calculation, we will set the two gauge-fixing parameters as
\[\alpha_{0}=\xi_{0}=1, \tag{38}\]
which is a Feynman-gauge-like configuration both in the bulk (at a flow-time \(t>0\)) and the boundary (\(t=0\)); see also (11) and (12) in the tree calculation.
### Configurations
Adopting the short-hand notations in Eqs. (29) and (36) allows us to describe the details of each configuration efficiently. First, we write down the five configurations classified into Category-(i), which involve at least one field that is evolved non-linearly in its flow equation:
\[\left\langle B_{\mu}(t,x)\,B_{\nu}(s,y)\right\rangle_{\rm A} :=\left\langle\mathcal{B}_{\mu}(t,x)\,\mathcal{B}_{\nu}(s,y)\right\rangle_{\rm 1PI}\] \[= \left(g_{0}^{4}\mu^{4\varepsilon}\right)\int_{p,\ell}\int_{0}^{t}{\rm d}u\int_{0}^{s}{\rm d}u^{\prime}\frac{(p+2\ell)_{\mu}\left(p+2\ell\right)_{\nu}e^{ip\cdot(x-y)-(s+t)p^{2}}}{\left(\ell^{2}+m_{0}^{2}\right)\left[(p+\ell)^{2}+m_{0}^{2}\right]}e^{-2(u+u^{\prime})\left(\ell^{2}+p\cdot\ell\right)}, \tag{39}\]
Figure 1: All the configurations at the one-loop level for the gauge-boson two-point function are shown in the language of the flow Feynman diagram [13]. In each flow Feynman diagram, the left and right squares indicate the spacetime positions of \((t,x)\) and \((s,y)\), respectively. The double solid and wavy lines are flow propagator of the scalar and vector fields, respectively, i.e., their flow-time evolutions with the non-linear terms (18)–(20), where both of two endpoints are located in the bulk (at a flow time \(t>0\)). Black arrows and white-out arrows indicate the directions of the Higgs particle number and of the flow time, respectively. Each white circle, which is called a flow vertex, indicates a branching in the bulk due to one of the possible non-linear interactions in the flow equations. Each black dot shows an ordinary vertex at the boundary (\(t=0\)). All the diagrams are drawn by the package feynMF[67].
\[\left\langle B_{\mu}(t,x)\,B_{\nu}(s,y)\right\rangle_{\rm B} :=\left\langle\mathcal{B}_{\mu}(t,x)\left\{-\frac{1}{g_{0}^{2}} \int{\rm d}^{d}X\left[iA_{\sigma}\left(\Phi^{\dagger}\partial_{\alpha}\Phi- \left(\partial_{\alpha}\Phi^{\dagger}\right)\Phi\right)\right]_{\!(X)}\right\} \widetilde{B}_{\nu}(s,y)\right\rangle_{\rm 1PI}\] \[=\left(-g_{0}^{4}\mu^{4\varepsilon}\right)\int_{p,\ell}\int_{0}^{ t}{\rm d}u\frac{\left(p+2\ell\right)_{\mu}\left(p+2\ell\right)_{\nu}e^{ip\cdot(x-y)-(s+t) p^{2}}}{p^{2}\left(\ell^{2}+m_{0}^{2}\right)\left[\left(p+\ell\right)^{2}+m_{0}^{2} \right]}e^{-2u\left(\ell^{2}+p\cdot\ell\right)}, \tag{40}\] \[\left\langle B_{\mu}(t,x)\,B_{\nu}(s,y)\right\rangle_{\rm C} :=\left\langle\int{\rm d}^{d}z\int_{0}^{t}{\rm d}uK_{t-u}(x-z) \left[2\widetilde{B}_{\mu}\widetilde{\Psi}^{\dagger}\widetilde{\Psi}\right]_ {\!(u,z)}\widetilde{B}_{\nu}(s,y)\right\rangle_{\rm 1PI}\] \[=\left(2g_{0}^{4}\mu^{4\varepsilon}\right)\int_{p,\ell}\int_{0}^{ t}{\rm d}u\frac{\delta_{\mu\nu}\,e^{ip\cdot(x-y)-(s+t)p^{2}}}{p^{2}\left(\ell^{2}+m_{0}^ {2}\right)}e^{-2u\ell^{2}},\] (41) \[\left\langle B_{\mu}(t,x)\,B_{\nu}(s,y)\right\rangle_{\rm D} :=\left\langle\int{\rm d}^{d}z\int_{0}^{t}{\rm d}uK_{t-u}(x-z) \left\{i\left[\widetilde{\Psi}^{\dagger}\partial_{\mu}\mathcal{P}-\left( \partial_{\mu}\widetilde{\Psi}^{\dagger}\right)\mathcal{P}\right]_{\!(u,z)} \right\}\widetilde{B}_{\nu}(s,y)\right\rangle_{\rm 1PI}\] \[=\left(-g_{0}^{4}\mu^{4\varepsilon}\right)\int_{p,\ell}\int_{0}^{ t}{\rm d}u\int_{0}^{u}{\rm d}\widehat{u}\frac{\left(2\ell-p\right)_{\mu}\left(2 \ell-p\right)_{\nu}e^{ip\cdot(x-y)-(s+t)p^{2}}}{p^{2}\left[\left(p-\ell\right) ^{2}+m_{0}^{2}\right]}e^{-2u\left(\ell^{2}-p\cdot\ell\right)-2\widehat{u} \left(p^{2}-p\cdot\ell\right)},\] (42) \[\left\langle B_{\mu}(t,x)\,B_{\nu}(s,y)\right\rangle_{\rm E} :=\left\langle\int{\rm d}^{d}z\int_{0}^{t}{\rm d}uK_{t-u}(x-z) \left\{i\left[\mathcal{P}^{\dagger}\partial_{\mu}\widetilde{\Psi}-\left( \partial_{\mu}\mathcal{P}^{\dagger}\right)\widetilde{\Psi}\right]_{\!(u,z)} \right\}\widetilde{B}_{\nu}(s,y)\right\rangle_{\rm 1PI}\] \[=\left(-g_{0}^{4}\mu^{4\varepsilon}\right)\int_{p,\ell}\int_{0}^{ t}{\rm d}u\int_{0}^{u}{\rm d}\widehat{u}\frac{\left(2\ell-p\right)_{\mu}\left(2 \ell-p\right)_{\nu}e^{ip\cdot(x-y)-(s+t)p^{2}}}{p^{2}\left[\left(p-\ell\right) ^{2}+m_{0}^{2}\right]}e^{-2u\left(\ell^{2}-p\cdot\ell\right)-2\widehat{u} \left(p^{2}-p\cdot\ell\right)}, \tag{43}\]
where the labels (A) to (E) indicate how these contributions relate to the flow Feynman diagrams in Fig. 1, and the label "1PI" indicates that non-1PI configurations, such as disconnected ones and ones including a tadpole, are dropped. Here, \(p\) and \(\ell\) represent the physical momentum flowing from the point \(y\) to \(x\) and a loop momentum, respectively. To describe the configuration (B) simply, we introduced for the fields on the boundary a notation similar to that of Eq. (37). Note that the result in (D) is the same as that in (E). The remaining four contributions classified into Category-(i), namely (B\({}^{\prime}\)) to (E\({}^{\prime}\)), are simply obtained by the parameter replacement as
\[\left\langle B_{\mu}(t,x)\,B_{\nu}(s,y)\right\rangle_{\rm B^{ \prime}} =\left(\left\langle B_{\nu}(s,y)\,B_{\mu}(t,x)\right\rangle_{\rm B} \right)^{*}, \left\langle B_{\mu}(t,x)\,B_{\nu}(s,y)\right\rangle_{\rm C^{\prime}} =\left(\left\langle B_{\nu}(s,y)\,B_{\mu}(t,x)\right\rangle_{\rm C }\right)^{*},\] \[\left\langle B_{\mu}(t,x)\,B_{\nu}(s,y)\right\rangle_{\rm D^{ \prime}} =\left(\left\langle B_{\nu}(s,y)\,B_{\mu}(t,x)\right\rangle_{\rm D }\right)^{*}, \left\langle B_{\mu}(t,x)\,B_{\nu}(s,y)\right\rangle_{\rm E^{\prime}} =\left(\left\langle B_{\nu}(s,y)\,B_{\mu}(t,x)\right\rangle_{\rm E }\right)^{*}. \tag{44}\]
Two configurations are classified into Category-(ii) "with a loop on the boundary", where their corresponding flow Feynman diagrams labeled as (a) and (b) are shown in Fig. 1:
\[\left\langle B_{\mu}(t,x)\,B_{\nu}(s,y)\right\rangle_{\rm a} :=\left\langle\widetilde{B}_{\mu}(t,x)\left\{-\frac{1}{g_{0}^{2}} \int{\rm d}^{d}X\left[A_{\alpha}A_{\alpha}\Phi^{\dagger}\Phi\right]_{\!(X)} \right\}\widetilde{B}_{\nu}(s,y)\right\rangle_{\rm 1PI}\] \[=\left(-2g_{0}^{4}\mu^{4\varepsilon}\right)\int_{p}\frac{\delta_{ \mu\nu}\,e^{ip\cdot(x-y)-(s+t)p^{2}}}{\left(p^{2}\right)^{2}}\int_{\ell}\frac{1} {\ell^{2}+m_{0}^{2}}, \tag{45}\] \[\left\langle B_{\mu}(t,x)\,B_{\nu}(s,y)\right\rangle_{\rm b} :=\left\langle\widetilde{B}_{\mu}(t,x)\,\frac{1}{2!}\left\{-\frac{1} {g_{0}^{2}}\int{\rm d}^{d}X\left[iA_{\alpha}\left(\Phi^{\dagger}\partial_{\alpha} \Phi-\left(\partial_{\alpha}\Phi^{\dagger}\right)\Phi\right)\right]_{\!(X)}\right\}\]
\[\left.\left\langle B_{\mu}(t,x)\,B_{\nu}(s,y)\right\rangle_{\text{LO}} \right|_{\text{UV-div}} =\left.\left(g_{0}^{2}\mu^{2\varepsilon}\right)\int_{p}\frac{e^{ip \cdot\left(x-y\right)-\left(s+t\right)p^{2}}}{p^{2}}\left[\left(\delta_{\mu \nu}-\frac{p_{\mu}p_{\nu}}{p^{2}}\right)+\xi_{0}\frac{p_{\mu}p_{\nu}}{p^{2}} \right]\right|_{\text{UV-div}}\] \[=\int_{p}\frac{e^{ip\cdot\left(x-y\right)-\left(s+t\right)p^{2}}} {p^{2}}\left\{\frac{g^{4}}{3\left(4\pi\right)^{2}}\frac{1}{\varepsilon}\left[ \left(\delta_{\mu\nu}-\frac{p_{\mu}p_{\nu}}{p^{2}}\right)+\frac{p_{\mu}p_{\nu} }{p^{2}}\right]-\frac{g^{4}}{3\left(4\pi\right)^{2}}\frac{1}{\varepsilon}\frac {p_{\mu}p_{\nu}}{p^{2}}\right\}, \tag{53}\]
where we used the one-loop relationships between the bare couplings and renormalized ones, especially \(\xi_{0}=1+\delta\xi\) with \(\delta\xi\) given in Eq. (12). Note that the first and second terms of Eq. (53) originate from \(g_{0}^{2}\) and \(\xi_{0}\), respectively. It is straightforward to sum over the contributions (48)-(53) and to reach the conclusion,
\[\left.\left\langle B_{\mu}(t,x)\,B_{\nu}(s,y)\right\rangle_{\text{total}} \right|_{\text{UV-div}}=0, \tag{54}\]
which means that no extra wave-function renormalization is necessary to regularize the gauge-boson two-point function, as expected via the \((d+1)\)-dimensional BRST symmetry that ensures the cancellation [12, 13].
## 4 Higgs two-point function as flowed order parameter
Next, we focus on the gauge-invariant two-point function of the flowed Higgs field at the one-loop level. Again, the primary calculation method is the same as in the previous section, and the same gauge fixings in (11) and \(\alpha_{0}=1\) are used. Also, we take into account the bulk wave-function renormalization (34), where for the current one-loop calculation, the following relation is used (except for the counterterm; see Eqs. (65) and (103) below):
\[\Psi(t,x)=\Psi_{\rm R}(t,x)\,. \tag{55}\]
Figure 2: All the configurations at the one-loop level for the Higgs two-point function are shown in the language of flow Feynman diagram [13]. In each flow Feynman diagram, the left and right squares indicate the spacetime positions of \((t,x)\) and \((s,y)\), respectively. The double solid and wavy lines are flow propagators of the scalar and vector fields, respectively, i.e., their flow-time evolutions with the non-linear terms (18)–(20), where both of two endpoints are located in the bulk (at a flow-time \(t>0\)). Black arrows and white-out arrows indicate the directions of the Higgs particle number and of the flow time, respectively. Each white circle, which is the flow vertex, indicates a branching in the bulk due to one of the possible non-linear interactions in the flow equations. Each black dot shows an ordinary vertex on the boundary (\(t=0\)). All the diagrams are drawn by the package feynMF[67].
### Configurations
The following five configurations involve at least one field that is evolved non-linearly in its flow equation [in Category-(i)]:
\[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,y)\right\rangle_{\mathrm{A}} :=\left\langle\mathcal{P}(t,x)\,\mathcal{P}^{\dagger}(s,y)\right\rangle _{\mathrm{1PI}}\] \[=\left(g_{0}^{4}\mu^{4\varepsilon}\right)\int_{0}^{t}\mathrm{d}u \int_{0}^{s}\mathrm{d}\widetilde{u}\int_{p,\ell}\frac{\left(p+\ell\right)^{2}}{ \left(p-\ell\right)^{2}\left(\ell^{2}+m_{0}^{2}\right)}e^{+ip\cdot\left(x-y \right)-\left(t+s\right)p^{2}-2\left(u+\widetilde{u}\right)\ell^{2}+2\left(u+ \widetilde{u}\right)\ell\cdot p}, \tag{56}\] \[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,y)\right\rangle_{\mathrm{B}} :=\left\langle\int\mathrm{d}^{d}z\int_{0}^{t}\mathrm{d}uK_{t-u}( x-z)\left[-\widetilde{B}_{\mu}\widetilde{B}_{\mu}\widetilde{\Psi}\right]_{\left(u,z \right)}\widetilde{\Psi}(s,y)\right\rangle_{\mathrm{1PI}}\] \[=\left(-g_{0}^{4}\mu^{4\varepsilon}\right)\int_{p}\frac{e^{+ip \cdot\left(x-y\right)-\left(t+s\right)p^{2}}}{p^{2}+m_{0}^{2}}\int_{0}^{t} \mathrm{d}u\int_{\ell}\frac{d}{\ell^{2}}e^{-2u\ell^{2}},\] (57) \[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,y)\right\rangle_{\mathrm{C}} :=\] \[=\left(g_{0}^{4}\mu^{4\varepsilon}\right)\int_{p}\frac{e^{+ip \cdot\left(x-y\right)-\left(t+s\right)p^{2}}}{p^{2}+m_{0}^{2}}\int_{0}^{t} \mathrm{d}u\int_{\ell}\frac{\left(p+\ell\right)^{2}}{\left(p-\ell\right)^{2} \left(\ell^{2}+m_{0}^{2}\right)}e^{-2u\left(\ell^{2}-\ell\cdot p\right)},\] (58) \[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,y)\right\rangle_{\mathrm{ D}} := \left\langle\,\int\mathrm{d}^{d}z\int_{0}^{t}\mathrm{d}uK_{t-u}(x- z)\left[-2i\,\widetilde{B}_{\mu}\partial_{\mu}\mathcal{P}-i\left(\partial_{\mu} \widetilde{B}_{\mu}\right)\mathcal{P}\right]_{\left(u,z\right)}\widetilde{\Psi }^{\dagger}(s,y)\,\right\rangle_{\mathrm{1PI}}\] \[=\left(g_{0}^{4}\mu^{4\varepsilon}\right)\int_{p}\frac{e^{+ip \cdot\left(x-y\right)-\left(t+s\right)p^{2}}}{p^{2}+m_{0}^{2}}\int_{0}^{t} \mathrm{d}u\int_{0}^{u}\mathrm{d}\widetilde{u}\int_{\ell}\frac{\left(\ell+p \right)^{2}}{\left(\ell-p\right)^{2}}e^{-2u\ell^{2}+2\left(u-\widetilde{u} \right)\ell\cdot p},\] (59) \[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,y)\right\rangle_{\mathrm{E}} := \left\langle\,\int\mathrm{d}^{d}z\int_{0}^{t}\mathrm{d}uK_{t-u}( x-z)\left[-2i\,\mathcal{B}_{\mu}\partial_{\mu}\widetilde{\Psi}-i\left(\partial_{ \mu}\mathcal{B}_{\mu}\right)\widetilde{\Psi}\right]_{\left(u,z\right)} \widetilde{\Psi}^{\dagger}(s,y)\,\right\rangle_{\mathrm{1PI}}\] \[=\left(-g_{0}^{4}\mu^{4\varepsilon}\right)\int_{p}\frac{e^{+ip \cdot\left(x-y\right)-\left(t+s\right)p^{2}}}{p^{2}+m_{0}^{2}}\int_{0}^{t} \mathrm{d}u\int_{0}^{u}\mathrm{d}\widetilde{u}\int_{\ell}\frac{\left(\ell+p \right)^{2}}{\ell^{2}+m_{0}^{2}}e^{-2u\ell^{2}+2\left(u-\widetilde{u}\right) \ell\cdot p}, \tag{60}\]
where the corresponding flow Feynman diagrams are shown in Fig. 2.
The three configurations, (a) to (c) in Fig. 2, do not involve fields evolved non-linearly in their flow equations, while one-loop structures are observed at the boundary with the zero value of the flow time [in Category-(ii)]:
\[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,y)\right\rangle_{\mathrm{ a}} :=\] \[=\left(-g_{0}^{4}\mu^{4\varepsilon}\right)\int_{p}\frac{e^{+ip \cdot\left(x-y\right)-\left(t+s\right)p^{2}}}{\left(p^{2}+m_{0}^{2}\right)^{2 }}\int_{\ell}\frac{d}{\ell^{2}+\mu_{A}^{2}}, \tag{61}\] \[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,y)\right\rangle_{\mathrm{ b}} :=\]
\[=\left(-4\lambda_{0}g_{0}^{2}\mu^{4\varepsilon}\right)\int_{p}\frac{e^ {+ip\cdot(x-y)-\left(t+s\right)p^{2}}}{\left(p^{2}+m_{0}^{2}\right)^{2}}\int_{ \ell}\frac{1}{\ell^{2}+m_{0}^{2}}, \tag{62}\] \[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,y)\right\rangle_{\rm c} :=\left\langle\widetilde{\Psi}(t,x)\,\widetilde{\Psi}^{\dagger}(s, y)\,\frac{1}{2}\left\{\left(-1\right)\int{\rm d}^{d}X\left[iA_{\alpha}\left( \Phi^{\dagger}\partial_{\alpha}\Phi-\left(\partial_{\alpha}\Phi^{\dagger} \right)\Phi\right)_{\left(X\right)}\right]\right\}\] \[\quad\times\left\{\left(-1\right)\int{\rm d}^{d}Y\left[iA_{\beta} \left(\Phi^{\dagger}\partial_{\beta}\Phi-\left(\partial_{\beta}\Phi^{\dagger} \right)\Phi\right)_{\left(Y\right)}\right]\right\}\Bigg{\rangle}_{\rm 1PI}\] \[=\left(g_{0}^{4}\mu^{4\varepsilon}\right)\int_{p}\frac{e^{+ip \cdot(x-y)-\left(t+s\right)p^{2}}}{\left(p^{2}+m_{0}^{2}\right)^{2}}\int_{ \ell}\frac{\left(\ell+p\right)^{2}}{\left(\ell-p\right)^{2}\left(\ell^{2}+m_{ 0}^{2}\right)}, \tag{63}\]
where a virtual mass for the gauge boson \(\mu_{A}\) is introduced for the calculation of diagram (a).
The last four contributions, (B\({}^{\prime}\)) to (E\({}^{\prime}\)), are obtained by the parameter replacement and the complex conjugation as
\[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,y)\right\rangle_{\rm B^{ \prime}} =\left(\left\langle\Psi(s,y)\,\Psi^{\dagger}(t,x)\right\rangle_{\rm B }\right)^{*}, \left\langle\Psi(t,x)\,\Psi^{\dagger}(s,y)\right\rangle_{\rm C^{ \prime}} =\left(\left\langle\Psi(s,y)\,\Psi^{\dagger}(t,x)\right\rangle_{\rm C} \right)^{*},\] \[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,y)\right\rangle_{\rm D^{ \prime}} =\left(\left\langle\Psi(s,y)\,\Psi^{\dagger}(t,x)\right\rangle_{\rm D} \right)^{*}, \left\langle\Psi(t,x)\,\Psi^{\dagger}(s,y)\right\rangle_{\rm E^{ \prime}} =\left(\left\langle\Psi(s,y)\,\Psi^{\dagger}(t,x)\right\rangle_{\rm E} \right)^{*}. \tag{64}\]
From the above forms, we recognize that no singularity emerges in the limit \(y\to x\), and the expectation value of the smeared version of the composite operator \(\widehat{\Psi}^{\dagger}(x)\,\widehat{\Psi}(x)\) is well defined.
Also, we need the counter term (CT) from the bulk wave function renormalization of the flowed field, which is easily estimated from Eqs. (33) and (34) as,
\[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,y)\right\rangle_{\rm CT} :=\delta\mathcal{Z}_{\Psi}\left\langle\widetilde{\Psi}(t,x)\, \widetilde{\Psi}^{\dagger}(s,y)\right\rangle_{0}\] \[=\delta\mathcal{Z}_{\Psi}\left(g_{0}^{2}\mu^{2\varepsilon}\right) \int_{p}\frac{e^{ip\cdot(x-y)-\left(t+s\right)p^{2}}}{p^{2}+m_{0}^{2}}. \tag{65}\]
### Evaluation of momentum integrals
The smeared two-point functions are regular in the limit \(y\to x\), and we can approximate them well by means of the saddle-point method when the flow times \(s\) and \(t\) are sufficiently large, since the solution to the gradient flow has a form such as \(\int_{p}f(p)e^{-tp^{2}}\). We frequently utilize the saddle-point approximation formula,
\[\int_{p}f(p)\,e^{-\alpha\left(p-p_{*}\right)^{2}}\simeq\left(\frac{1}{4\pi \alpha}\right)^{d/2}f(p_{*})\,, \tag{66}\]
where this approximation works fine for a sufficiently large positive \(\alpha\), \(f(p)\) is a rational function of the \(d\)-dimensional momentum \(p\), and \(p_{*}\) represents the position of the saddle point.
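As an illustration (our addition), the quality of Eq. (66) can be checked numerically; the snippet below does so in \(d=1\) with the measure of Eq. (5), for the hypothetical choice \(f(p)=1/(p^{2}+1)\) and \(p_{*}=2\).

```python
# Minimal mpmath sketch (assumptions: d = 1, f(p) = 1/(p^2+1), p_* = 2).
from mpmath import mp, quad, exp, pi, sqrt, inf

mp.dps = 30
f      = lambda p: 1/(p**2 + 1)
p_star = 2.0
for alpha in [10, 100, 1000]:
    # exact integral (1/2pi) * int dp f(p) exp(-alpha (p - p_*)^2), split at the peak
    exact  = quad(lambda p: f(p)*exp(-alpha*(p - p_star)**2), [-inf, p_star, inf]) / (2*pi)
    approx = f(p_star) / sqrt(4*pi*alpha)   # right-hand side of Eq. (66) for d = 1
    print(alpha, exact, approx)             # the two values approach each other as alpha grows
```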
UV divergences appear in the present calculations, performed in continuous spacetime, due to intermediate loop diagrams. We also encounter IR divergences, which may originate from saddle-point evaluations around \(p_{*}\simeq 0\) and are considered artifacts of our saddle-point approximation. As we will see,
if we evaluate the divergent parts without the approximation, no IR divergences emerge, as expected.6 Details of the following calculations are available in Appendix A.
Footnote 6: It is understood that the IR divergence in Eq. (61) is cut off by \(\mu_{A}\) as in the ordinary QED calculations, and we safely take the limit \(\mu_{A}\to 0\) after the integral.
After the \(\varepsilon\) expansion around zero, we get the form for the LO part,
\[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\text{LO}} = \frac{g_{0}^{2}}{\left(4\pi\right)^{2}}m_{0}^{2}e^{m_{0}^{2}(s+t) }\Gamma\big{(}{-1,m_{0}^{2}\left(s+t\right)}\big{)}+\mathcal{O}(\varepsilon) \tag{67}\] \[\xrightarrow[\text{large}\,s,t]{}\frac{g_{0}^{2}}{\left(4\pi \right)^{2}}\frac{1}{m_{0}^{2}\left(s+t\right)^{2}}+\mathcal{O}(\varepsilon)\,,\]
where we took the first term of the series expansion of \(e^{m_{0}^{2}(s+t)}\Gamma\big{(}{-1,m_{0}^{2}\left(s+t\right)}\big{)}\) around infinity. This form is suitable for our current strategy, where the saddle-point approximation is adopted for the one-loop diagrams, assuming that \(s\) and \(t\) are sufficiently large. Also, we derive the \(\varepsilon\)-expanded form of the counterterm in the same way,
\[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\text{CT}} \xrightarrow[\text{large}\,s,\,t]{}\left.\mathcal{Z}_{\Psi}\frac{g_{0}^{2}}{ \left(4\pi\right)^{2}}\frac{1}{m_{0}^{2}\left(s+t\right)^{2}}\right|_{\text{ CT}}+\mathcal{O}(\varepsilon)\,. \tag{68}\]
For the contributions (A) to (E), the results read
\[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\text{A}} \xrightarrow[\text{s.p.}]{}\frac{g_{0}^{4}}{\left(4\pi\right)^{4}\left(s+t\right)^{2}m_{0}^{2}}\frac{1}{4}\left[-1-\log(a)\right] \tag{69}\] \[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\text{B}}\xrightarrow[\text{s.p.}]{}-\frac{g_{0}^{4}}{\left(4\pi\right)^{4}\left(s+t\right)^{2}m_{0}^{2}}\left\{\frac{2}{\varepsilon}+1+2\log\left[32\pi^{2}t\left(s+t\right)\mu^{4}\right]\right\}+\mathcal{O}(\varepsilon)\,, \tag{70}\] \[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\text{C}}\xrightarrow[\text{s.p.}]{}\frac{g_{0}^{4}}{\left(4\pi\right)^{4}\left(s+t\right)^{2}m_{0}^{2}}\frac{1}{2}\left\{\frac{1}{\varepsilon}-\gamma+1+\log\left[\frac{16\pi^{2}(s+t)\,\mu^{4}}{m_{0}^{2}}\right]\right\}\] \[\quad+\frac{6g_{0}^{4}}{\left(4\pi\right)^{4}t^{2}m_{0}^{2}}e^{\frac{1}{2}m_{0}^{2}\left(2s+t\right)}\left[\Gamma\bigg{(}0,\frac{1}{2}m_{0}^{2}\left(2s+t\right)\bigg{)}-e^{\frac{3}{2}m_{0}^{2}\left(2s+t\right)}\Gamma\big{(}0,2m_{0}^{2}\left(2s+t\right)\big{)}\right]+\mathcal{O}(\varepsilon)\,, \tag{71}\] \[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\text{D}}\xrightarrow[\text{s.p.}]{}\frac{g_{0}^{4}}{\left(4\pi\right)^{4}\left(s+t\right)^{2}m_{0}^{2}}\frac{1}{4}\bigg{(}\frac{1}{\varepsilon}-\gamma+\log\left[\frac{16\pi^{2}(s+t)\,\mu^{4}}{\mu_{\text{IR}}^{2}}\right]\bigg{)}\] \[\quad-\frac{g_{0}^{4}}{\left(4\pi\right)^{4}t^{2}m_{0}^{2}}\frac{9}{2}\left\{\frac{1}{\varepsilon}+1+\log\left[16\pi^{2}t\left(2s+t\right)\mu^{4}\right]+e^{\frac{1}{2}m_{0}^{2}\left(2s+t\right)}G_{1,2}^{2,0}\bigg{(}\genfrac{}{}{0.0pt}{}{1}{0,0}\,\Big{|}\,\frac{1}{2}m_{0}^{2}(2s+t)\bigg{)}\right\}\]
\[-\frac{g_{0}^{4}(s+3t)^{2}(s+5t)^{2}}{\left(4\pi\right)^{4}8t(s+t)^{5}m_{0}^{2}}\left\{\frac{1}{\varepsilon}+1+\log\left[32\pi^{2}t(s+t)\mu^{4}\right]+e^{\frac{m_{0}^{2}(s+t)(s+3t)}{2t}}G_{1,2}^{2,0}\bigg{(}\genfrac{}{}{0.0pt}{}{1}{0,0}\,\Big{|}\,\frac{m_{0}^{2}(s+t)(s+3t)}{2t}\bigg{)}\right\} \tag{72}\] \[+\mathcal{O}(\varepsilon)\,,\] \[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\rm E} \xrightarrow[\text{s.p.}]{}-\frac{g_{0}^{4}}{\left(4\pi\right)^{4}\left(s+t\right)^{2}m_{0}^{2}}\left\{-\frac{1}{\varepsilon}+\gamma-1+\log\left[\frac{m_{0}^{2}}{16\pi^{2}\left(s+t\right)\mu^{4}}\right]\right\}+\mathcal{O}(\varepsilon)\,, \tag{73}\]
where "s.p." declares the adoption of the saddle-point approximation. Here, \(a\) is a dimensionless regulator for an infrared divergence, \(\mu_{\rm IR}\) is a virtual mass for the massless gauge boson to regularize an infrared divergence. \(\Gamma(a,z)\) is the incomplete gamma function defined by
\[\Gamma(a,z):=\int_{z}^{\infty}t^{a-1}e^{-t}\mathrm{d}t, \tag{74}\]
where \(\Gamma(a,0)\) reduces to the Euler gamma function \(\Gamma(a)\) and \(\gamma\) is the Euler-Mascheroni constant. \(G_{1,2}^{2,0}\left(\genfrac{}{}{0.0pt}{}{1}{0,0}\,\Big{|}\,z\right)\) is the Meijer G-function with the designated arguments. The definition of the Meijer G-function is
\[G_{p,q}^{m,n}\left(\genfrac{}{}{0.0pt}{}{a_{1},\dots,a_{p}}{b_{1},\dots,b_{q}}\,\Big{|}\,z\right):=\frac{1}{2\pi i}\int_{\gamma_{L}}\frac{\Pi_{j=1}^{m}\Gamma(b_{j}+s)\,\Pi_{j=1}^{n}\Gamma(1-a_{j}-s)}{\Pi_{j=n+1}^{p}\Gamma(a_{j}+s)\,\Pi_{j=m+1}^{q}\Gamma(1-b_{j}-s)}z^{-s}\mathrm{d}s, \tag{75}\]
where \(m\), \(n\), \(p\), and \(q\) are integers obeying \(0\leq n\leq p\) and \(0\leq m\leq q\), and the contour \(\gamma_{L}\) lies between the poles of \(\Gamma(1-a_{j}-s)\) and the poles of \(\Gamma(b_{j}+s)\) [68]. The four corresponding results for (B\({}^{\prime}\)) to (E\({}^{\prime}\)) are easily obtained using the replacement rule in Eq. (64).
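For readers who wish to evaluate these special functions numerically, the following sketch (our addition) uses mpmath. As a sanity check of the parameter placement it relies on the standard identity \(\Gamma(0,z)=G_{1,2}^{2,0}\left(\genfrac{}{}{0.0pt}{}{1}{0,0}\,\big{|}\,z\right)=E_{1}(z)\), which is not itself used in the derivation above.

```python
# Minimal mpmath sketch; z = 2.5 is an arbitrary sample point.
from mpmath import mp, gammainc, meijerg, expint

mp.dps = 20
z = 2.5
print(gammainc(0, z))                      # upper incomplete gamma Gamma(0, z)
print(meijerg([[], [1]], [[0, 0], []], z)) # G^{2,0}_{1,2}(1; 0,0 | z): same value
print(expint(1, z))                        # E_1(z) of Eq. (79): also equals Gamma(0, z)
```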
For the configurations (a) to (c), we reach the \(\varepsilon\)-expanded forms,
\[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\rm a} =0, \tag{76}\] \[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\rm b} =4\lambda_{0}g_{0}^{2}\frac{m_{0}^{2}}{\left(4\pi\right)^{4}}\left\{\frac{1}{\varepsilon}\left[-1+e^{m_{0}^{2}\left(s+t\right)}\left(1+m_{0}^{2}\left(s+t\right)\right)E_{1}\bigl{(}m_{0}^{2}\left(s+t\right)\bigr{)}\right]\right.\] (77) \[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\rm c}\xrightarrow[\text{s.p.}]{}\frac{g_{0}^{4}}{\left(4\pi\right)^{4}\left(s+t\right)^{2}m_{0}^{2}}\left\{-\frac{1}{\varepsilon}+\gamma-1+\log\left[\frac{m_{0}^{2}}{16\pi^{2}\left(s+t\right)\mu^{4}}\right]\right\}+\mathcal{O}(\varepsilon)\,, \tag{78}\]
where \(E_{n}(z)\) represents the exponential integral function defined by
\[E_{n}(z):=\int_{1}^{\infty}\frac{e^{-zt}}{t^{n}}\mathrm{d}t, \tag{79}\]
and \(G_{2,3}^{3,0}\left(\genfrac{}{}{0.0pt}{}{1,1}{0,0}\,\Big{|}\,z\right)\) is another specific case of the Meijer G-function with the designated arguments.
#### 4.2.1 Divergent part in saddle-point approximation
In each of the final expressions below, written in terms of the renormalized couplings, we drop higher-order terms arising in the multiplications,
\[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\rm A}^{ \rm(s.p.)}\biggr{|}_{\rm div} =\frac{g^{4}}{(4\pi)^{4}m^{2}}\left(\frac{1}{t^{2}}+\frac{1}{s^{2} }-\frac{1}{(t+s)^{2}}\right)\frac{9}{\varepsilon}+\frac{g^{4}}{\left(4\pi \right)^{4}\left(t+s\right)^{2}m^{2}}\frac{1}{4}\log(a), \tag{80}\] \[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\rm B}^{ \rm(s.p.)}\biggr{|}_{\rm div} =-\frac{g^{4}}{\left(4\pi\right)^{4}\left(t+s\right)^{2}m^{2}} \frac{2}{\varepsilon},\] (81) \[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\rm C}^{ \rm(s.p.)}\biggr{|}_{\rm div} =\frac{g^{4}}{\left(4\pi\right)^{4}\left(t+s\right)^{2}m^{2}} \frac{1}{2}\frac{1}{\varepsilon},\] (82) \[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\rm D}^{ \rm(s.p.)}\biggr{|}_{\rm div} =\frac{g^{4}}{\left(4\pi\right)^{4}m^{2}}\left\{\frac{1}{4}\frac{1 }{(t+s)^{2}}-\frac{9}{2}\frac{1}{t^{2}}+\frac{(s+3t)^{2}(s+5t)^{2}}{8t(s+t)^{5 }}\right\}\frac{1}{\varepsilon}\] \[\left.+\frac{g^{4}}{\left(4\pi\right)^{4}\left(t+s\right)^{2}m^{2 }}\left[\frac{1}{4}\log\biggl{(}\frac{16\pi^{2}\left(t+s\right)\mu^{4}}{\mu_{ \rm IR}^{2}}\biggr{)}\right],\] (83) \[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\rm E}^{ \rm(s.p.)}\biggr{|}_{\rm div} =-\frac{g^{4}}{\left(4\pi\right)^{4}\left(t+s\right)^{2}m^{2}} \frac{1}{4}\frac{1}{\varepsilon},\] (84) \[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\rm a} \biggr{|}_{\rm div} =0,\] (85) \[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\rm b} \biggr{|}_{\rm div} =4\lambda g^{2}\frac{m^{2}}{(4\pi)^{4}}\frac{1}{\varepsilon}\left\{-1 +e^{m^{2}\left(t+s\right)}\left[1+m^{2}\left(t+s\right)\right]E_{1}\bigl{(}m^ {2}\left(t+s\right)\bigr{)}\right\},\] (86) \[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\rm C}^{ \rm(s.p.)}\biggr{|}_{\rm div} =-\frac{g^{4}}{\left(4\pi\right)^{4}\left(t+s\right)^{2}m^{2}} \frac{1}{\varepsilon},\] (87) \[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\rm LO}^{ \rm(s.p.)}\biggr{|}_{\rm div} =\frac{g_{0}^{2}}{\left(4\pi\right)^{2}}\frac{1}{m_{0}^{2}\left(t+s \right)^{2}},\] \[=\frac{1}{\left(4\pi\right)^{2}\left(t+s\right)^{2}m^{2}}\left[ \frac{g^{4}}{3\left(4\pi\right)^{2}}\frac{1}{\varepsilon}-\frac{1}{\left(4\pi \right)^{2}}\left(4g^{2}\lambda-3g^{4}\right)\frac{1}{\varepsilon}\right],\] (88) \[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\rm CT}^{ \rm(s.p.)}\biggr{|}_{\rm div} =\delta\mathcal{Z}_{\Psi}\frac{g^{2}}{\left(4\pi\right)^{2}}\frac{1 }{m^{2}\left(t+s\right)^{2}}, \tag{89}\]
where the first and second terms of the result for the LO part originate from \(g_{0}^{2}\) and \(m_{0}^{2}\), respectively. Here, we straightforwardly determine the condition for \(\delta\mathcal{Z}_{\Psi}\) to eliminate both the UV and IR divergences listed above as
\[\delta\mathcal{Z}_{\Psi}= -\frac{g^{2}}{384\pi^{2}}\frac{1}{\varepsilon}\frac{108\,s^{6}+4 35\,s^{5}t+1244\,s^{4}t^{2}+682\,s^{3}t^{3}+1244\,s^{2}t^{4}+435\,st^{5}+108\, t^{6}}{s^{2}t^{2}\left(t+s\right)^{2}}\] \[-\frac{g^{2}}{64\pi^{2}}\left\{\log(a)+2\log\left[\frac{16\pi^{2 }\left(t+s\right)\mu^{4}}{\mu_{\rm IR}^{2}}\right]\right\}. \tag{90}\]
#### 4.2.2 UV divergent part without approximation
On the other hand, after safely taking the limit \(y\to x\), we can identify the UV poles arising from the loop-momentum (\(\ell\)) integrals before performing the inverse Fourier transforms, as follows:
\[\Big{\langle}\Psi(t,x)\,\Psi^{\dagger}(s,x)\Big{\rangle}_{\text{A}} \Big{|}_{\text{UV-div}} =0, \tag{91}\] \[\Big{\langle}\Psi(t,x)\,\Psi^{\dagger}(s,x)\Big{\rangle}_{\text{B }}\Big{|}_{\text{UV-div}} =\Big{\langle}\Psi(t,x)\,\Psi^{\dagger}(s,x)\Big{\rangle}_{\text{ B}^{\prime}}\Big{|}_{\text{UV-div}} =\int_{p}\frac{e^{-(t+s)p^{2}}}{p^{2}+m^{2}}\left(\frac{-2g^{4}}{ \left(4\pi\right)^{2}}\frac{1}{\varepsilon}\right),\] (92) \[\Big{\langle}\Psi(t,x)\,\Psi^{\dagger}(s,x)\Big{\rangle}_{\text{ C}}\Big{|}_{\text{UV-div}} =\Big{\langle}\Psi(t,x)\,\Psi^{\dagger}(s,x)\Big{\rangle}_{\text{ C}^{\prime}}\Big{|}_{\text{UV-div}} =\int_{p}\frac{e^{-(t+s)p^{2}}}{p^{2}+m^{2}}\left(\frac{g^{4}}{ 2\left(4\pi\right)^{2}}\frac{1}{\varepsilon}\right),\] (93) \[\Big{\langle}\Psi(t,x)\,\Psi^{\dagger}(s,x)\Big{\rangle}_{\text{ D}}\Big{|}_{\text{UV-div}} =\Big{\langle}\Psi(t,x)\,\Psi^{\dagger}(s,x)\Big{\rangle}_{\text{ D}^{\prime}}\Big{|}_{\text{UV-div}} =\int_{p}\frac{e^{-(t+s)p^{2}}}{p^{2}+m^{2}}\left(\frac{+g^{4}}{ 4\left(4\pi\right)^{2}}\frac{1}{\varepsilon}\right),\] (94) \[\Big{\langle}\Psi(t,x)\,\Psi^{\dagger}(s,x)\Big{\rangle}_{\text{ E}}\Big{|}_{\text{UV-div}} =\Big{\langle}\Psi(t,x)\,\Psi^{\dagger}(s,x)\Big{\rangle}_{\text{ E}^{\prime}}\Big{|}_{\text{UV-div}} =\int_{p}\frac{e^{-(t+s)p^{2}}}{p^{2}+m^{2}}\left(\frac{-g^{4}}{ 4\left(4\pi\right)^{2}}\frac{1}{\varepsilon}\right),\] (95) \[\Big{\langle}\Psi(t,x)\,\Psi^{\dagger}(s,x)\Big{\rangle}_{\text{ a}}\Big{|}_{\text{UV-div}} =0,\] (96) \[\Big{\langle}\Psi(t,x)\,\Psi^{\dagger}(s,x)\Big{\rangle}_{\text{ b}}\Big{|}_{\text{UV-div}} =\int_{p}\frac{e^{-(t+s)p^{2}}}{\left(p^{2}+m^{2}\right)^{2}}\left( \frac{4\lambda g^{2}m^{2}}{\left(4\pi\right)^{2}}\frac{1}{\varepsilon}\right),\] (97) \[\Big{\langle}\Psi(t,x)\,\Psi^{\dagger}(s,x)\Big{\rangle}_{\text{ C}}\Big{|}_{\text{UV-div}} =\int_{p}\frac{e^{-(t+s)p^{2}}}{\left(p^{2}+m^{2}\right)^{2}} \left\{\frac{g^{4}\left[2\left(p^{2}+m^{2}\right)-3m^{2}\right]}{\left(4\pi \right)^{2}}\frac{1}{\varepsilon}\right\},\] (98) \[\Big{\langle}\Psi(t,x)\,\Psi^{\dagger}(s,x)\Big{\rangle}_{\text{ LO}}\Big{|}_{\text{UV-div}} =\left(g_{0}^{2}\mu^{2\varepsilon}\right)\int_{p}\frac{e^{-(t+s )p^{2}}}{p^{2}+m_{0}^{2}}\] (99) \[=\int_{p}\frac{e^{-(t+s)p^{2}}}{p^{2}+m^{2}}\left[\frac{g^{4}}{ 3\left(4\pi\right)^{2}}\frac{1}{\varepsilon}-\frac{1}{p^{2}+m^{2}}\frac{m^{2}} {\left(4\pi\right)^{2}}\left(4g^{2}\lambda-3g^{4}\right)\frac{1}{\varepsilon} \right],\] (100) \[\Big{\langle}\Psi(t,x)\,\Psi^{\dagger}(s,x)\Big{\rangle}_{\text{ CT}}\Big{|}_{\text{UV-div}} =\left(\delta\mathcal{Z}_{\Psi}g^{2}\right)\int_{p}\frac{e^{-(t+s) p^{2}}}{p^{2}+m^{2}}, \tag{101}\]
where, in each expression written in terms of the renormalized couplings, we drop higher-order terms arising in the multiplications.
The total of the UV divergent parts shown above takes the form,
\[\Big{\langle}\Psi(t,x)\,\Psi^{\dagger}(s,x)\Big{\rangle}_{\text{ total}}\Big{|}_{\text{UV-div}}=\left(-\frac{2}{3}\frac{1}{\varepsilon}\frac{g^{4}}{ (4\pi)^{2}}+\delta\mathcal{Z}_{\Psi}g^{2}\right)\,\int_{p}\frac{e^{-(t+s)p^{2} }}{p^{2}+m^{2}}, \tag{102}\]
where no divergence associated with \(m\) or \(\lambda\) remains, as expected. We can remove the remaining divergence associated with \(g\) by taking the bulk counterterm as
\[\delta\mathcal{Z}_{\Psi}=\frac{g^{2}}{(4\pi)^{2}}\frac{2}{3}\frac{1}{ \varepsilon}. \tag{103}\]
Note that no approximation is adopted here in identifying the divergences, and only UV divergences emerge. This confirms that the IR divergences appearing in the previous calculation with the saddle-point approximation are artifacts.
In other words, the two-point function of the flowed Higgs field can be made finite by choosing the extra wave-function renormalization of the flowed field as in Eq. (103). This result shows that the gradient flow indeed works in the \(U(1)\) Higgs model at the one-loop level, as shown for the non-Abelian gauge theory with fermions as matter fields in Ref. [14]. After subtracting the divergent part, we concentrate on the finite part of the Higgs two-point function.
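As a small arithmetic cross-check (our addition) of the cancellation pattern behind Eq. (102), one can collect the \(1/\varepsilon\) coefficients of Eqs. (91)-(100) by hand or with a few lines of Python:

```python
# Minimal bookkeeping sketch; the coefficients are read off from Eqs. (91)-(100).
from fractions import Fraction as F

# coefficient of [g^4/(4 pi)^2 eps] * 1/(p^2+m^2):  B, B', C, C', D, D', E, E', c, LO
prop1 = [F(-2), F(-2), F(1,2), F(1,2), F(1,4), F(1,4), F(-1,4), F(-1,4), F(2), F(1,3)]
print(sum(prop1))   # -2/3: the coefficient appearing in Eq. (102) before the counterterm (103)

# coefficient of [m^2/((4 pi)^2 eps)] * 1/(p^2+m^2)^2, as pairs (lambda*g^2 part, g^4 part):
prop2 = [(F(4), F(0)),    # diagram (b)
         (F(0), F(-3)),   # diagram (c)
         (F(-4), F(3))]   # LO term
print(sum(a for a, _ in prop2), sum(b for _, b in prop2))   # 0 0: no m- or lambda-divergence remains
```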
### Asymptotic form in large flow time
Here, we consider the series expansion around infinity for \(s=t\), after the coupling renormalizations and after removing the divergences by choosing the bulk counterterm \(\delta\mathcal{Z}_{\Psi}\) suitably, as shown in Eq. (90). For large \(s\), corresponding to the IR region,
\[\left\langle\Psi(s,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\rm LO}^{ \rm(s.p.)} =\frac{g^{2}}{64m^{2}\pi^{2}s^{2}}+\mathcal{O}\!\left(\frac{1}{s^{3}} \right), \tag{104}\] \[\left\langle\Psi(s,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\rm A}^{ \rm(s.p.)} =-\frac{g^{4}}{4096\pi^{4}m^{2}s^{2}}\Bigg{[}-251+36\log\!\left(4m ^{2}s\right)-288\log\!\left(6m^{2}s\right)\] \[\qquad\qquad-288\log\!\left(\frac{32\pi^{2}s\mu^{4}}{m^{2}}\right) +36\log\!\left(\frac{64\pi^{2}s\mu^{4}}{m^{2}}\right)\Bigg{]}+\mathcal{O}\! \left(\frac{1}{s^{3}}\right),\] (105) \[\left\langle\Psi(s,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\rm B}^{ \rm(s.p.)} =-\frac{g^{4}}{1024\pi^{4}m^{2}s^{2}}\Bigg{[}1+2\log\!\left(64\pi^{ 2}s^{2}\mu^{4}\right)\Bigg{]}+\mathcal{O}\!\left(\frac{1}{s^{3}}\right),\] (106) \[\left\langle\Psi(s,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\rm C}^{ \rm(s.p.)} =-\frac{g^{4}}{2048\pi^{4}m^{2}s^{2}}\Bigg{[}-1+\gamma-\log\!\left( \frac{32\pi^{2}s\mu^{4}}{m^{2}}\right)\Bigg{]}+\mathcal{O}\!\left(\frac{1}{s^ {3}}\right),\] (107) \[\left\langle\Psi(s,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\rm D}^{ \rm(s.p.)} =-\frac{g^{4}}{4096\pi^{4}m^{2}s^{2}}\Bigg{[}36+\gamma+72\log\! \left(\frac{3m^{2}s}{2}\right)-36\log\!\left(4m^{2}s\right)\] \[\qquad\qquad-36\log\!\left(\frac{16\pi^{2}s\mu^{4}}{m^{2}}\right) +72\log\!\left(\frac{32\pi^{2}s\mu^{4}}{m^{2}}\right)\Bigg{]}+\mathcal{O}\! \left(\frac{1}{s^{3}}\right),\] (108) \[\left\langle\Psi(s,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\rm E}^{ \rm(s.p.)} =\frac{g^{4}}{4096\pi^{4}m^{2}s^{2}}\Bigg{[}-1+\gamma-\log\!\left( \frac{32\pi^{2}s\mu^{4}}{m^{2}}\right)\Bigg{]}+\mathcal{O}\!\left(\frac{1}{s^ {3}}\right),\] (109) \[\left\langle\Psi(s,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\rm a} =0,\] (110) \[\left\langle\Psi(s,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\rm b} =-\frac{\lambda g^{2}}{256\pi^{4}m^{2}s^{2}}\Bigg{[}-1+\gamma-\log \!\left(\frac{32\pi^{2}s\mu^{4}}{m^{2}}\right)\Bigg{]}+\mathcal{O}\!\left( \frac{1}{s^{3}}\right),\] (111) \[\left\langle\Psi(s,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\rm c}^{ \rm(s.p.)} =\frac{g^{4}}{1024\pi^{4}m^{2}s^{2}}\Bigg{[}-1+\gamma-\log\!\left( \frac{32\pi^{2}s\mu^{4}}{m^{2}}\right)\Bigg{]}+\mathcal{O}\!\left(\frac{1}{s^ {3}}\right), \tag{112}\]
with
\[\left\langle\Psi(s,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\rm B^{ \prime}}^{\rm(s.p.)} =\left\langle\Psi(s,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\rm B}^{ \rm(s.p.)},\ \ \left\langle\Psi(s,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\rm C^{ \prime}}^{\rm(s.p.)}=\left\langle\Psi(s,x)\,\Psi^{\dagger}(s,x)\right\rangle_{ \rm C}^{\rm(s.p.)},\]
\[\left\langle\Psi(s,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\rm D^{\prime}}^{\rm(s.p.)}=\left\langle\Psi(s,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\rm D}^{\rm(s.p.)},\ \ \ \left\langle\Psi(s,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\rm E^{\prime}}^{\rm(s.p.)}=\left\langle\Psi(s,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\rm E}^{\rm(s.p.)}. \tag{113}\]
Therefore, at both the LO and the one-loop level, we can conclude that
\[\left\langle\Psi(s,x)\,\Psi^{\dagger}(s,x)\right\rangle^{\rm(s.p.)}\xrightarrow[ s\to\infty]{}0, \tag{114}\]
in the saddle-point approximation. This is just as expected for an order parameter of spontaneous gauge symmetry breaking in the symmetric phase.
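As a numerical illustration (our addition) of this decay, one can compare the closed LO form of Eq. (67) at \(t=s\) with its asymptotic value in Eq. (104) for the hypothetical parameter choice \(g=1\), \(m^{2}=1\):

```python
# Minimal mpmath sketch; g = 1 and m^2 = 1 are arbitrary sample values.
from mpmath import mp, gammainc, exp, pi

mp.dps = 25
g, m2 = 1.0, 1.0
for s in [2, 10, 50]:
    lo     = g**2/(4*pi)**2 * m2 * exp(2*m2*s) * gammainc(-1, 2*m2*s)   # Eq. (67) at t = s
    asympt = g**2/(64*pi**2*m2*s**2)                                     # Eq. (104)
    print(s, lo, asympt, lo/asympt)   # the ratio tends to 1 as s grows; both tend to zero
```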
## 5 Summary and discussions
We have proposed the flowed order parameter for spontaneous gauge symmetry breaking: the expectation value of the manifestly gauge-invariant composite operator at the same spacetime point, \(\widehat{\Phi}^{\dagger}(x)\,\widehat{\Phi}(x)\), smeared along the flow-time direction \(t\) under the gradient flow. The deformed configurations \(\Psi(t,x)\) and \(\Psi^{\dagger}(s,y)\), obtained after evolution to the flow times \(t\) and \(s\), have the desirable property that the limit \(y\to x\) can be taken without an extra UV divergence in the two-point function \(\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,y)\right\rangle\) as long as \(t,s>0\).
Here, \(\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,x)\right\rangle=\left\langle\Psi^{\dagger}(s,x)\,\Psi(t,x)\right\rangle\) is interpreted as a well-defined version of \(\left\langle\widehat{\Phi}^{\dagger}(x)\,\widehat{\Phi}(x)\right\rangle\). The diffusion in the \(t\) direction through the gradient flow can be interpreted as lowering the physical reference energy. Therefore, under the limit \(t=s\to\infty\), \(\left\langle\Psi(s,x)\,\Psi^{\dagger}(s,x)\right\rangle\) would be related to \(\left\langle\widehat{\Phi}^{\dagger}(x)\,\widehat{\Phi}(x)\right\rangle\) evaluated in the ground state. Hence, the binary information of whether \(\lim_{s\to\infty}\left\langle\Psi(s,x)\,\Psi^{\dagger}(s,x)\right\rangle\) takes zero or a nonzero value might be used for determining the phase.
As a first step, we compute the flowed order parameter in the Abelian Higgs model with a positive mass-squared parameter in the continuum theory at the one-loop level. With the help of the saddle-point approximation, we have derived the asymptotic analytic form of \(\left\langle\Psi(s,x)\,\Psi^{\dagger}(s,x)\right\rangle\) for large \(s\).
We have checked the following: (i) the limit \(\left\langle\Psi(s,x)\,\Psi^{\dagger}(s,x)\right\rangle\to 0\) for \(s\to\infty\), which consistently implies that the theory is still in the symmetric phase after taking into account one-loop radiative corrections for finite positive mass-squared; (ii) the UV finiteness of the \(U(1)\) gauge-boson two-point function at the one-loop level, which is a concrete confirmation of the remarkable property of gradient-flowed gauge fields first discussed in Refs. [12, 13]; and (iii) the UV finiteness of the Higgs boson two-point function under a wave-function renormalization that subtracts the UV poles originating from loop integrals [14].
This paper is a first step in a new direction: determining the phase of a physical system by the flowed order parameter, namely the flowed Higgs bilinear at the same spacetime point. It may shed new light on identifying the phase of a physical system. Here, we have relied on perturbation theory under gauge fixing and have adopted the saddle-point approximation to derive analytic forms. Generalizations to non-Abelian gauge theories and investigations in lattice gauge theory, which does not require gauge fixing, will be important next steps.
Even within the simplest Abelian Higgs scenario, various aspects await further clarification:
* The standard perturbative method for determining the phase of the Abelian Higgs model is to investigate the vacua of the Coleman-Weinberg effective potential [69, 70, 71], which
tells us that dynamical gauge symmetry breaking occurs if the squared Higgs mass is zero or takes a sufficiently small positive value. This region of the parameter space lies outside the validity of our current analysis, which relies on the saddle-point approximation under the assumption that the flow times \(t\) and \(s\) are much greater than the other dimensionful parameters, \(s,t\gg m^{-2}\); that is, our computation relies on the saddle-point method (67) for integrals of the form \(\int_{p}\frac{e^{-(t+s)p^{2}}}{p^{2}+m^{2}}g(p)\,,\) where \(g(p)\) is a rational function. It would be interesting to investigate the massless or nearly massless region, which might require a numerical calculation.
* As is widely known, if the squared Higgs mass parameter is negative, the gauge symmetry is spontaneously broken through the Higgs mechanism. Investigating the general properties of the gradient flow for the gauge theory in the broken phase is a theoretically important task.
* The two-point function of the smeared fields at the same spacetime point is well defined. However, the deformation through the gradient flow might modify some of the properties of the original fields. Our result in the parameter region considered here looks consistent, and further theoretical investigation would be worthwhile.
We expect that the theory space at \(t=0\) has a one-to-one correspondence with that at \(t>0\). The finite parts in Eqs. (69)-(73) and (76)-(78) are non-zero even for a large positive mass-squared at finite \(t\). However, we should remember that the physical value of the flowed order parameter at finite \(t\) can be defined only after we fix a renormalization condition for the theory at \(t\), which will be investigated in future work. This might lead to a new relationship between the gradient flow and the renormalization group, as suggested in a different context in Refs. [15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26].
## Acknowledgement
We thank Tetsuo Hatsuda, Etsuko Ito, Daisuke Kadoh, and Naoya Ukita for useful discussion. This work is in part supported by JSPS KAKENHI Grant Numbers 18K13546 (KK), 19H01899 (KO), and 21H01107 (KN, KO).
## Appendix A Details on Higgs two-point function at one loop
In this section, we provide details on the calculations of the Higgs two-point function at one loop, where both the divergent and finite parts are discussed.
We recall the following standard formulas for the \(\varepsilon\)-expansion,
\[\frac{1}{1-\varepsilon} =1+\varepsilon+\mathcal{O}\big{(}\varepsilon^{2}\big{)}\,,\] \[A^{\varepsilon} =1+\varepsilon\log A+\mathcal{O}\big{(}\varepsilon^{2}\big{)}\,,\] \[\Gamma(\varepsilon) =\frac{1}{\varepsilon}-\gamma+\mathcal{O}(\varepsilon)\,,\] \[\Gamma(\varepsilon-1) =-\frac{1}{\varepsilon}+\gamma-1+\mathcal{O}(\varepsilon)\,, \tag{115}\]
where the variable \(A\) is positive. Also, we skip showing explicit forms for the contributions designated by the diagrams B\({}^{\prime}\), C\({}^{\prime}\), D\({}^{\prime}\) and E\({}^{\prime}\) since each part can be easily obtained from the corresponding result given below and the replacement rule in Eq. (64).
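These expansions can be verified quickly with sympy (our addition); writing \(\Gamma(\varepsilon)=\Gamma(1+\varepsilon)/\varepsilon\) and \(\Gamma(\varepsilon-1)=\Gamma(1+\varepsilon)/[\varepsilon(\varepsilon-1)]\) keeps the expansions around an analytic point of the gamma function.

```python
# Minimal sympy sketch of the expansions listed above.
import sympy as sp

eps = sp.symbols('epsilon')
A = sp.symbols('A', positive=True)
print(sp.series(1/(1 - eps), eps, 0, 2))                       # 1 + epsilon + O(epsilon**2)
print(sp.series(A**eps, eps, 0, 2))                            # 1 + epsilon*log(A) + O(epsilon**2)
print(sp.series(sp.gamma(1 + eps)/eps, eps, 0, 1))             # 1/epsilon - EulerGamma + O(epsilon)
print(sp.series(sp.gamma(1 + eps)/(eps*(eps - 1)), eps, 0, 1)) # -1/epsilon + EulerGamma - 1 + O(epsilon)
```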
### Leading order
For Eq. (33), after taking the limit \(y\to x\) safely, no approximation is necessary to perform the \(p\)-integral,
\[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\text{LO}} =\big{(}g_{0}^{2}\mu^{2\varepsilon}\big{)}\int_{p}\frac{e^{-(t+s )p^{2}}}{p^{2}+m_{0}^{2}}\] \[=\big{(}g_{0}^{2}\mu^{2\varepsilon}\big{)}\left(\frac{2}{\left(4 \pi\right)^{d/2}\Gamma(d/2)}\right)\int_{0}^{\infty}\mathrm{d}pp^{d-1}\frac{e ^{-(t+s)p^{2}}}{p^{2}+m_{0}^{2}}\] \[=\big{(}g_{0}^{2}\mu^{2\varepsilon}\big{)}\left(\frac{2}{\left(4 \pi\right)^{d/2}\Gamma(d/2)}\right)\frac{1}{2}e^{m_{0}^{2}(s+t)}m_{0}^{d-2} \Gamma\!\left(\frac{d}{2}\right)\Gamma\!\left(1-\frac{d}{2},m_{0}^{2}\left(s+ t\right)\right), \tag{116}\]
where, in the second and third lines, we integrated the isotropic angular part and the radial part of the \(d\)-dimensional \(p\) integral, respectively.
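As a numerical cross-check, not in the original text, the closed form of the radial integral in Eq. (116) can be verified directly at \(d=4\); the sketch below uses arbitrarily chosen sample values of \(s\), \(t\) and \(m_{0}\).

```python
# Numerical check of the radial integral in Eq. (116) at d = 4 (sample parameters).
import mpmath as mp

d, s, t, m0 = 4, 0.7, 1.3, 0.9
c = s + t

lhs = mp.quad(lambda p: p**(d - 1) * mp.exp(-c*p**2) / (p**2 + m0**2), [0, mp.inf])
rhs = 0.5 * mp.exp(m0**2 * c) * m0**(d - 2) * mp.gamma(d/2) * mp.gammainc(1 - d/2, m0**2 * c)
print(lhs, rhs)   # the two values agree
```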
Based on the property,
\[\Gamma\!\left(1-\frac{d}{2},m_{0}^{2}\left(s+t\right)\right)=\Gamma\!\left(-1,m_{0}^{2}\left(s+t\right)\right)+\mathcal{O}(\varepsilon)\,, \tag{117}\]
it is straightforward to obtain the \(\varepsilon\)-expanded form,7
Footnote 7: The following form for large \(t\) and \(s\) is also obtained by applying the saddle-point method to the \(p\)-integral in the second-to-last line of Eq. (116).
\[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\text{LO }} =\frac{g_{0}^{2}}{\left(4\pi\right)^{2}}m_{0}^{2}e^{m_{0}^{2}(s+t)}\Gamma \!\left(-1,m_{0}^{2}\left(s+t\right)\right)+\mathcal{O}(\varepsilon)\] \[\xrightarrow[\text{large}\,s,t]{}\frac{g_{0}^{2}}{\left(4\pi \right)^{2}}\frac{1}{m_{0}^{2}\left(s+t\right)^{2}}+\mathcal{O}(\varepsilon)\,, \tag{118}\]
where we took the first term of the series expansion of \(e^{m_{0}^{2}\left(s+t\right)}\Gamma\!\left(-1,m_{0}^{2}\left(s+t\right)\right)\) around infinity,
\[e^{m_{0}^{2}\left(s+t\right)}\Gamma\!\left(-1,m_{0}^{2}\left(s+t\right)\right) =\left(\frac{1}{m_{0}^{2}\left(s+t\right)}\right)^{2}+\mathcal{O}\left[ \left(\frac{1}{m_{0}^{2}\left(s+t\right)}\right)^{3}\right]. \tag{119}\]
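The asymptotic behaviour used in Eqs. (118) and (119) can also be checked numerically; the following sketch, not part of the original text, confirms that \(e^{A}\Gamma(-1,A)\,A^{2}\to 1\) as \(A=m_{0}^{2}(s+t)\) grows.

```python
# Numerical check of the large-argument behaviour used in Eqs. (118)-(119).
import mpmath as mp

for A in [5, 20, 100]:
    val = mp.exp(A) * mp.gammainc(-1, A)   # e^A * Gamma(-1, A)
    print(A, val, 1/A**2, val * A**2)      # the last column tends to 1
```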
### One-loop order
#### a.2.1 Diagram A
In Eq. (56), after taking the limit \(y\to x\) safely, we can perform the integrals on the flow times exactly, where Eq. (56) leads to
\[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\text{A}} =\left(g_{0}^{4}\mu^{4\varepsilon}\right)\int_{p,\ell}\frac{(p+ \ell)^{2}}{\left(p-\ell\right)^{2}\left(\ell^{2}+m_{0}^{2}\right)}e^{-(t+s)p^ {2}}\] \[\quad\times\frac{1}{4\left(\ell^{2}-\ell\cdot p\right)^{2}}\left\{ e^{-2\left(s+t\right)\left(\ell^{2}-\ell\cdot p\right)}-e^{-2s\left(\ell^{2}- \ell\cdot p\right)}-e^{-2t\left(\ell^{2}-\ell\cdot p\right)}+1\right\}, \tag{120}\]
where we divide this form into the following four pieces,
\[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\text{A-(i)}} :=+\left(g_{0}^{4}\mu^{4\varepsilon}\right)\int_{p,\ell}\frac{\left(p+\ell\right)^{2}}{\left(p-\ell\right)^{2}\left(\ell^{2}+m_{0}^{2}\right)}e^{-\left(t+s\right)p^{2}}\times\frac{1}{4\left(\ell^{2}-\ell\cdot p\right)^{2}}e^{-2\left(s+t\right)\left(\ell^{2}-\ell\cdot p\right)}, \tag{121}\] \[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\text{A-(ii)}} :=-\left(g_{0}^{4}\mu^{4\varepsilon}\right)\int_{p,\ell}\frac{\left(p+\ell\right)^{2}}{\left(p-\ell\right)^{2}\left(\ell^{2}+m_{0}^{2}\right)}e^{-\left(t+s\right)p^{2}}\times\frac{1}{4\left(\ell^{2}-\ell\cdot p\right)^{2}}e^{-2s\left(\ell^{2}-\ell\cdot p\right)},\] (122) \[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\text{A-(iii)}} :=-\left(g_{0}^{4}\mu^{4\varepsilon}\right)\int_{p,\ell}\frac{\left(p+\ell\right)^{2}}{\left(p-\ell\right)^{2}\left(\ell^{2}+m_{0}^{2}\right)}e^{-\left(t+s\right)p^{2}}\times\frac{1}{4\left(\ell^{2}-\ell\cdot p\right)^{2}}e^{-2t\left(\ell^{2}-\ell\cdot p\right)},\] (123) \[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\text{A-(iv)}} :=+\left(g_{0}^{4}\mu^{4\varepsilon}\right)\int_{p,\ell}\frac{\left(p+\ell\right)^{2}}{\left(p-\ell\right)^{2}\left(\ell^{2}+m_{0}^{2}\right)}e^{-\left(t+s\right)p^{2}}\times\frac{1}{4\left(\ell^{2}-\ell\cdot p\right)^{2}}. \tag{124}\]
For Eq. (121), the square completion of the exponent,
\[-\left(t+s\right)p^{2}-2\left(s+t\right)\left(\ell^{2}-\ell\cdot p\right)=-2 \left(s+t\right)\left(\ell-\frac{p}{2}\right)^{2}-\frac{1}{2}\left(s+t\right)p ^{2}, \tag{125}\]
provides the information required to apply the saddle-point approximation to the \(\ell\)-integral. The formula in Eq. (66) with \(\ell_{*}=p/2\) applied to Eq. (121) brings us to
\[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\text{A-(i)}} \simeq\left(g_{0}^{4}\mu^{4\varepsilon}\right)\left(9\cdot 4^{2}\right)\left(\frac{1}{8\pi\left(s+t\right)}\right)^{d/2}\int_{p}\frac{1}{\left(p^{2}\right)^{2}\left(p^{2}+4m_{0}^{2}\right)}e^{-\frac{1}{2}\left(s+t\right)p^{2}}\] \[=\left(g_{0}^{4}\mu^{4\varepsilon}\right)\left(9\cdot 4^{2}\right)\left(\frac{1}{8\pi\left(s+t\right)}\right)^{d/2}\left(\frac{2}{\left(4\pi\right)^{d/2}\Gamma(d/2)}\right)\] \[\quad\times\int_{0}^{\infty}\mathrm{d}p\,p^{d-1}\frac{1}{\left(p^{2}\right)^{2}\left(p^{2}+4m_{0}^{2}\right)}e^{-\frac{1}{2}\left(s+t\right)p^{2}}\] \[=\left(g_{0}^{4}\mu^{4\varepsilon}\right)\left(9\cdot 4^{2}\right)\left(\frac{1}{8\pi\left(s+t\right)}\right)^{d/2}\left(\frac{2}{\left(4\pi\right)^{d/2}\Gamma(d/2)}\right)\] \[\quad\times 2^{-7+d}e^{2m_{0}^{2}\left(s+t\right)}m_{0}^{-6+d}\Gamma\!\left(-2+\frac{d}{2}\right)\Gamma\!\left(3-\frac{d}{2},2m_{0}^{2}\left(s+t\right)\right), \tag{126}\]
where, in the second line, we integrated the isotropic angular part of the \(d\)-dimensional momentum. The evaluations of Eqs. (122) and (123) follow the same lines, and their final forms are
\[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\text{A-(ii)}}\simeq -\left(g_{0}^{4}\mu^{4\varepsilon}\right)\left(9\cdot 4^{2}\right) \left(\frac{1}{8\pi s}\right)^{d/2}\left(\frac{2}{\left(4\pi\right)^{d/2}\Gamma (d/2)}\right) \tag{127}\] \[\times 2^{-7+d}e^{2m_{0}^{2}\left(s+2t\right)}m_{0}^{-6+d}\Gamma \!\left(-2+\frac{d}{2}\right)\Gamma\!\left(3-\frac{d}{2},2m_{0}^{2}\left(s+2t \right)\right),\]
\[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\text{A-(iii)}}\simeq -\left(g_{0}^{4}\mu^{4\varepsilon}\right)\left(9\cdot 4^{2}\right)\left(\frac{1}{8\pi t}\right)^{d/2}\left(\frac{2}{\left(4\pi\right)^{d/2}\Gamma(d/2)}\right) \tag{128}\] \[\times 2^{-7+d}e^{2m_{0}^{2}\left(2s+t\right)}m_{0}^{-6+d}\Gamma\!\left(-2+\frac{d}{2}\right)\Gamma\!\left(3-\frac{d}{2},2m_{0}^{2}\left(2s+t\right)\right).\]
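The radial master integral behind Eqs. (126)-(128), \(\int_{0}^{\infty}\mathrm{d}p\,p^{d-1}e^{-cp^{2}}/[(p^{2})^{2}(p^{2}+M^{2})]=\tfrac{1}{2}M^{d-6}e^{cM^{2}}\Gamma(d/2-2)\Gamma(3-d/2,cM^{2})\) with \(M=2m_{0}\) and \(c=(s+t)/2\), \(t+s/2\) or \(s+t/2\), can be checked numerically; the sketch below, not part of the original text, does so at \(d=5\), where the integral converges directly.

```python
# Numerical check of the master radial integral behind Eqs. (126)-(128), at d = 5.
import mpmath as mp

d, c, M = 5, 0.8, 1.7   # sample values; M plays the role of 2*m0
lhs = mp.quad(lambda p: p**(d - 1) * mp.exp(-c*p**2) / (p**4 * (p**2 + M**2)), [0, mp.inf])
rhs = 0.5 * M**(d - 6) * mp.exp(c*M**2) * mp.gamma(d/2 - 2) * mp.gammainc(3 - d/2, c*M**2)
print(lhs, rhs)   # the two values agree
```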
For Eq. (124), it is straightforward to apply the formula in Eq. (66) to the \(p\)-integral around the saddle point \(p_{*}=0\), and we obtain the result,
\[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\text{A-(iv)}} \simeq\frac{\left(g_{0}^{4}\mu^{4\varepsilon}\right)}{4}\left(\frac{1}{4\pi\left(s+t\right)}\right)^{d/2}\int_{\ell}\frac{1}{\left(\ell^{2}\right)^{2}\left(\ell^{2}+m_{0}^{2}\right)}\] \[=\frac{g_{0}^{4}}{\left(4\pi\right)^{4}\left(s+t\right)^{2}m_{0}^{2}}\frac{1}{4}\int_{0}^{1}\mathrm{d}x\int_{0}^{1-x}\mathrm{d}y\left[\frac{1}{1-x-y}\right]+\mathcal{O}(\varepsilon)\,, \tag{129}\]
where we performed the \(\ell\)-integration with the help of the Feynman parametrization in a standard method.
The first three pieces are \(\varepsilon\)-expanded straightforwardly, for which the following formulas are useful,
\[\frac{\Gamma(-\varepsilon)}{\Gamma(2-\varepsilon)} =-\frac{1}{\varepsilon}-1-\varepsilon+\mathcal{O}\!\left(\varepsilon^{2}\right), \tag{130}\] \[\Gamma(\varepsilon+1,A) =e^{-A}+\left[e^{-A}\log(A)+G_{1,2}^{2,0}\!\left({}_{0,0}^{1}\,\middle|\,A\right)\right]\varepsilon+\mathcal{O}\!\left(\varepsilon^{2}\right), \tag{131}\]
\[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\text{A-(iii)}}\simeq+\frac{9g_{0}^{4}}{\left(4\pi\right)^{4}t^{2}m_{0}^{2}}\Bigg{\{}\frac{1}{\varepsilon}+1+\log\left[2m_{0}^{2}(2s+t)\right]+\log\!\left(\frac{32\pi^{2}\mu^{4}t}{m_{0}^{2}}\right)+e^{2m_{0}^{2}\left(2s+t\right)}G_{1,2}^{2,0}\!\left({}_{0,0}^{1}\,\middle|\,2m_{0}^{2}(2s+t)\right)\Bigg{\}}+\mathcal{O}(\varepsilon)\,. \tag{134}\]
The form in Eq. (129) has no UV divergence in \(d\to 4\), but it contains an infrared divergence. To regularize this, we deform the upper bound of the integral range of \(y\) with a positive dimensionless parameter \(a\) as follows:
\[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\text{A-(iv)}} \simeq\frac{g_{0}^{4}}{\left(4\pi\right)^{4}\left(s+t\right)^{2}m_{0}^{2}}\frac{1}{4}\int_{0}^{1}\!\!\mathrm{d}x\int_{0}^{1-x}\!\!\mathrm{d}y\left[\frac{1}{1-x-y}\right]\] \[\hookrightarrow\frac{g_{0}^{4}}{\left(4\pi\right)^{4}\left(s+t\right)^{2}m_{0}^{2}}\frac{1}{4}\int_{0}^{1}\!\!\mathrm{d}x\int_{0}^{1-x-a}\!\!\mathrm{d}y\left[\frac{1}{1-x-y}\right]\] \[=\frac{g_{0}^{4}}{\left(4\pi\right)^{4}\left(s+t\right)^{2}m_{0}^{2}}\frac{1}{4}\left[-1-\log(a)\right], \tag{135}\]
where the step designated by the symbol \(\hookrightarrow\) corresponds to the regularization of an infrared divergence.
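The regularized Feynman-parameter integral in Eq. (135) can be checked numerically; the sketch below, not part of the original text, uses a small cut-off \(a\) and restricts \(x\) to \([0,1-a]\) so that the \(y\)-range stays non-empty, reproducing \(-1-\log(a)\) up to corrections of order \(a\).

```python
# Numerical check of the infrared-regularized double integral in Eq. (135).
import mpmath as mp

a = mp.mpf('1e-5')
inner = lambda x: mp.quad(lambda y: 1/(1 - x - y), [0, 1 - x - a])
val = mp.quad(inner, [0, 1 - a])
print(val, -1 - mp.log(a))   # agree up to O(a)
```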
#### a.2.2 Diagram B
In Eq. (57), after taking the limit \(y\to x\) safely, we obtain the form,
\[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\text{B}} =\left(-g_{0}^{4}\mu^{4\varepsilon}\right)\int_{p}\frac{e^{-(t+s) p^{2}}}{p^{2}+m_{0}^{2}}\int_{0}^{t}\mathrm{d}u\int_{\ell}\frac{d}{\ell^{2}}e^{ -2u\ell^{2}}\] \[=\left(-g_{0}^{4}\mu^{4\varepsilon}\right)\int_{p}\frac{e^{-(t+s) p^{2}}}{p^{2}+m_{0}^{2}}\int_{0}^{t}\mathrm{d}u\left(\frac{2d}{\left(4\pi\right)^{d /2}\Gamma(d/2)}\right)\int_{0}^{\infty}\mathrm{d}\ell\ell^{d-3}e^{-2u\ell^{2}}\] \[=\left(-g_{0}^{4}\mu^{4\varepsilon}\right)\int_{p}\frac{e^{-(t+s) p^{2}}}{p^{2}+m_{0}^{2}}\int_{0}^{t}\mathrm{d}u\left(\frac{2d}{\left(4\pi\right)^{d /2}\Gamma(d/2)}\right)2^{-d/2}u^{1-d/2}\Gamma\!\left(-1+\frac{d}{2}\right)\] \[=\left(-g_{0}^{4}\mu^{4\varepsilon}\right)\int_{p}\frac{e^{-(t+s) p^{2}}}{p^{2}+m_{0}^{2}}\,\frac{1}{\left(4\pi\right)^{d/2}}\int_{0}^{t}\mathrm{d}u \frac{d}{d/2-1}\left(2u\right)^{1-d/2}\] \[=\left(-g_{0}^{4}\mu^{4\varepsilon}\right)\int_{p}\frac{e^{-(t+s) p^{2}}}{p^{2}+m_{0}^{2}}\,\frac{1}{\left(4\pi\right)^{2-\varepsilon}}\frac{4-2 \varepsilon}{1-\varepsilon}\frac{\left(2t\right)^{\varepsilon}}{2\varepsilon}\] \[\simeq\left(-g_{0}^{4}\mu^{4\varepsilon}\right)\left(\frac{1}{4 \pi\left(s+t\right)}\right)^{d/2}\frac{1}{m_{0}^{2}}\frac{1}{\left(4\pi\right) ^{2-\varepsilon}}\frac{4-2\varepsilon}{1-\varepsilon}\frac{\left(2t\right)^{ \varepsilon}}{2\varepsilon}, \tag{136}\]
where, in the second and third lines, we integrated the isotropic angular part and the radial part of the \(d\)-dimensional \(\ell\) integral, respectively, while, in the last step, we adopted the formula in Eq. (66) to evaluate the \(p\)-integral around the saddle point \(p_{*}=0\).
It is very straightforward to obtain an \(\varepsilon\)-expanded form of Eq. (136):
\[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\text{B}}\simeq-\frac {g_{0}^{4}}{\left(4\pi\right)^{4}\left(s+t\right)^{2}m_{0}^{2}}\left\{\frac{2} {\varepsilon}+1+2\log\left[32\pi^{2}t\left(s+t\right)\mu^{4}\right]\right\}+ \mathcal{O}(\varepsilon)\,. \tag{137}\]
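As a consistency check, not in the original text, the \(\varepsilon\)-expansion in Eq. (137) can be compared numerically with the exact \(d=4-2\varepsilon\) expression in the last line of Eq. (136); the sketch below sets \(g_{0}=\mu=1\) and uses sample values of \(s\), \(t\), \(m_{0}\).

```python
# Numerical comparison of Eq. (136) (exact in epsilon) with its expansion Eq. (137).
import mpmath as mp

s, t, m0 = mp.mpf('0.7'), mp.mpf('1.3'), mp.mpf('0.9')   # sample values, g0 = mu = 1

def exact(eps):
    d = 4 - 2*eps
    return (-(1/(4*mp.pi*(s + t)))**(d/2) / m0**2
            * (4*mp.pi)**(-(2 - eps)) * (4 - 2*eps)/(1 - eps) * (2*t)**eps / (2*eps))

def expanded(eps):
    return (-1/((4*mp.pi)**4 * (s + t)**2 * m0**2)
            * (2/eps + 1 + 2*mp.log(32*mp.pi**2 * t*(s + t))))

for eps in [mp.mpf('1e-3'), mp.mpf('1e-4'), mp.mpf('1e-5')]:
    print(eps, exact(eps) - expanded(eps))   # the difference vanishes linearly in eps
```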
#### a.2.3 Diagram C
In Eq. (58), after taking the limit \(y\to x\) safely, we can perform the integrals on the flow time exactly, where Eq. (58) leads to
\[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\text{C}}=\left(g_{0}^{4 }\mu^{4\varepsilon}\right)\int_{p}\frac{e^{-(t+s)p^{2}}}{p^{2}+m_{0}^{2}}\int_{ \ell}\frac{\left(p+\ell\right)^{2}}{\left(p-\ell\right)^{2}\left(\ell^{2}+m_{0 }^{2}\right)}\times\frac{1}{2\left(\ell^{2}-\ell\cdot p\right)}\left[1-e^{-2t \left(\ell^{2}-\ell\cdot p\right)}\right], \tag{138}\]
where we divide this form into the following two pieces,
\[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\text{ C-(i)}} :=+\left(g_{0}^{4}\mu^{4\varepsilon}\right)\int_{p}\frac{e^{-(t+s )p^{2}}}{p^{2}+m_{0}^{2}}\int_{\ell}\frac{\left(p+\ell\right)^{2}}{\left(p- \ell\right)^{2}\left(\ell^{2}+m_{0}^{2}\right)}\times\frac{1}{2\left(\ell^{2} -\ell\cdot p\right)}, \tag{139}\] \[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\text{ C-(ii)}} :=-\left(g_{0}^{4}\mu^{4\varepsilon}\right)\int_{p}\frac{e^{-(t+s )p^{2}}}{p^{2}+m_{0}^{2}}\int_{\ell}\frac{\left(p+\ell\right)^{2}}{\left(p- \ell\right)^{2}\left(\ell^{2}+m_{0}^{2}\right)}\times\frac{1}{2\left(\ell^{2} -\ell\cdot p\right)}e^{-2t\left(\ell^{2}-\ell\cdot p\right)}. \tag{140}\]
For Eq. (139), it is straightforward to perform the saddle-point integral of \(p\) around the saddle point \(p_{*}=0\) as
\[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\text{ C-(i)}} \simeq\frac{\left(g_{0}^{4}\mu^{4\varepsilon}\right)}{2}\left( \frac{1}{4\pi\left(s+t\right)}\right)^{d/2}\frac{1}{m_{0}^{2}}\int_{\ell} \frac{1}{\left(\ell^{2}+m_{0}^{2}\right)\ell^{2}}\] \[=\frac{\left(g_{0}^{4}\mu^{4\varepsilon}\right)}{2}\left(\frac{1 }{4\pi\left(s+t\right)}\right)^{d/2}\frac{1}{m_{0}^{2}}\frac{1}{\left(4\pi \right)^{d/2}}\frac{\Gamma(2-d/2)}{\Gamma(2)}\int_{0}^{1}\mathrm{d}x\left( \frac{1}{xm_{0}^{2}}\right)^{2-d/2}, \tag{141}\]
where, in the second step, we integrated over \(\ell\) with the help of the Feynman parametrization in the standard way.
For Eq. (140), we can perform the saddle-point integral of \(\ell\) around the saddle point \(\ell_{*}=p/2\) as
\[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\text{ C-(ii)}} \simeq\frac{\left(9\cdot 4^{2}g_{0}^{4}\mu^{4\varepsilon}\right)}{2}\left( \frac{1}{8\pi t}\right)^{d/2}\int_{p}\frac{1}{p^{2}\left(p^{2}+m_{0}^{2} \right)\left(p^{2}+4m_{0}^{2}\right)}e^{-\left(\frac{t}{2}+s\right)p^{2}}\] \[=\frac{\left(9\cdot 4^{2}g_{0}^{4}\mu^{4\varepsilon}\right)}{2} \left(\frac{1}{8\pi t}\right)^{d/2}\left(\frac{2}{\left(4\pi\right)^{d/2} \Gamma(d/2)}\right)\] \[\quad\times\int_{0}^{\infty}\mathrm{d}pp^{d-1}\frac{1}{p^{2}\left( p^{2}+m_{0}^{2}\right)\left(p^{2}+4m_{0}^{2}\right)}e^{-\left(\frac{t}{2}+s \right)p^{2}}\] \[=\frac{\left(9\cdot 4^{2}g_{0}^{4}\mu^{4\varepsilon}\right)}{2} \left(\frac{1}{8\pi t}\right)^{d/2}\left(\frac{2}{\left(4\pi\right)^{d/2} \Gamma(d/2)}\right)\] \[\quad\times\frac{1}{96\Gamma(2-d/2)}e^{\frac{1}{2}\left(-id\pi+m_ {0}^{2}(2s+t)\right)}m_{0}^{-6+d}\pi\left[i+\cot\!\left(\frac{d\pi}{2}\right)\right]\] \[\quad\times\left[-16\Gamma\!\left(2-\frac{d}{2},\frac{1}{2}m_{0}^ {2}\left(2s+t\right)\right)+2^{d}e^{3m_{0}^{2}(2s+t)/2}\Gamma\!\left(2-\frac{d} {2},2m_{0}^{2}\left(2s+t\right)\right)\right], \tag{142}\]
where, in the second and third lines, we integrated the isotropic angular part and the radial part of the \(d\)-dimensional \(p\) integral, respectively.
The \(\varepsilon\)-expansion of Eq. (141) is easily obtained as
\[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\text{C-(i)}} \simeq\frac{g_{0}^{4}}{2}\left(\frac{1}{4\pi\left(s+t\right)} \right)^{2}\frac{1}{m_{0}^{2}}\frac{1}{\left(4\pi\right)^{2}}\left[\mu^{4 \varepsilon}\left(4\pi\left(t+s\right)\right)^{\varepsilon}\left(4\pi\right)^ {\varepsilon}\right]\Gamma(\varepsilon)\left[1+\varepsilon\left(1-\log m_{0}^{2 }\right)\right]\] \[=\frac{g_{0}^{4}}{\left(4\pi\right)^{4}\left(s+t\right)^{2}m_{0}^ {2}}\frac{1}{2}\left\{\frac{1}{\varepsilon}-\gamma+1+\log\left[\frac{16\pi^{2 }(s+t)\,\mu^{4}}{m_{0}^{2}}\right]\right\}+\mathcal{O}(\varepsilon)\,. \tag{143}\]
Meanwhile, the following expansion forms are useful for the \(\varepsilon\)-expansion of Eq. (142):
\[\frac{1}{\Gamma(\varepsilon)\,\Gamma(2-\varepsilon)} =\varepsilon+\mathcal{O}\!\left(\varepsilon^{2}\right), \tag{144}\] \[e^{\pi i\varepsilon} =1+i\pi\varepsilon+\mathcal{O}\!\left(\varepsilon^{2}\right), \tag{145}\] \[\cot\left[\pi\left(2-\varepsilon\right)\right] =-\frac{1}{\pi\varepsilon}+\frac{\pi}{3}\varepsilon+\mathcal{O}\!\left(\varepsilon^{2}\right), \tag{146}\] \[\Gamma(\varepsilon,A) =\Gamma(0,A)+\left[\Gamma(0,A)\log A+G_{2,3}^{3,0}\!\left({}_{1,0,0}^{1,1}\,\middle|\,A\right)\right]\varepsilon+\mathcal{O}\!\left(\varepsilon^{2}\right). \tag{147}\]
#### a.2.4 Diagram D
For Eq. (150), we evaluate the \(p\)-integral by use of Eq. (66) around the saddle point \(p_{*}=0\) as,
\[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\text{D-(i)}} \simeq\left(g_{0}^{4}\mu^{4\varepsilon}\right)\left(\frac{1}{4\pi \left(s+t\right)}\right)^{d/2}\frac{1}{4m_{0}^{2}}\int_{\ell}\frac{1}{\left( \ell^{2}\right)^{2}}\] \[\hookrightarrow\left(g_{0}^{4}\mu^{4\varepsilon}\right)\left( \frac{1}{4\pi\left(s+t\right)}\right)^{d/2}\frac{1}{4m_{0}^{2}}\int_{\ell} \frac{1}{\left(\ell^{2}+\mu_{\text{IR}}^{2}\right)^{2}}\] \[=\left(g_{0}^{4}\mu^{4\varepsilon}\right)\left(\frac{1}{4\pi \left(s+t\right)}\right)^{d/2}\frac{1}{4m_{0}^{2}}\frac{1}{\left(4\pi\right)^ {d/2}}\frac{\Gamma(2-d/2)}{\Gamma(2)}\left(\frac{1}{\mu_{\text{IR}}^{2}} \right)^{2-d/2}, \tag{153}\]
where, in the second line, we introduced a virtual mass \(\mu_{\text{IR}}\) to regularize the integral. The calculation in the third line was done by a standard technique.
For Eq. (151), the \(\ell\)-integral can be performed in the saddle point method in Eq. (66) around the saddle point \(\ell_{*}=p/2\) as
\[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\text{D- (ii)}} \simeq 18\left(g_{0}^{4}\mu^{4\varepsilon}\right)\left(\frac{1}{8\pi t }\right)^{d/2}\int_{p}\frac{1}{\left(p^{2}\right)^{2}\left(p^{2}+m_{0}^{2} \right)}e^{-\left(s+t/2\right)p^{2}}\] \[=18\left(g_{0}^{4}\mu^{4\varepsilon}\right)\left(\frac{1}{8\pi t }\right)^{d/2}\left(\frac{2}{\left(4\pi\right)^{d/2}\Gamma(d/2)}\right)\int_{0 }^{\infty}\text{d}p\,p^{d-1}\frac{1}{\left(p^{2}\right)^{2}\left(p^{2}+m_{0}^ {2}\right)}e^{-\left(s+t/2\right)p^{2}}\] \[=18\left(g_{0}^{4}\mu^{4\varepsilon}\right)\left(\frac{1}{8\pi t }\right)^{d/2}\left(\frac{2}{\left(4\pi\right)^{d/2}\Gamma(d/2)}\right)\] \[\quad\times\frac{1}{2}e^{m_{0}^{2}\left(2s+t\right)/2}m_{0}^{-6+ d}\Gamma\!\left(-2+\frac{d}{2}\right)\Gamma\!\left(3-\frac{d}{2},\frac{1}{2}m_{0}^ {2}\left(2s+t\right)\right), \tag{154}\]
where, in the second and third lines, we integrated the isotropic angular part and the radial part of the \(d\)-dimensional \(p\) integral, respectively.
For Eq. (152), the exponent is square completed as
\[-\left(s+t\right)p^{2}-2t\left(p-\ell\right)^{2}=-\left(3t+s\right)\left[p- \frac{2t}{3t+s}\ell\right]^{2}-\frac{2t\left(t+s\right)}{3t+s}\ell^{2}, \tag{155}\]
which tells us the saddle point of the \(p\)-integral as \(p_{*}=\left[2t/\left(3t+s\right)\right]\ell\). Through the formula in Eq. (66), we get
\[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\text{D- (iii)}} \simeq\frac{\left(g_{0}^{4}\mu^{4\varepsilon}\right)}{4}\left(\frac{1}{4 \pi\left(3t+s\right)}\right)^{d/2}\frac{\left(1+\frac{2t}{3t+s}\right)^{2} \left(-\frac{\left(3t+s\right)^{2}}{2t\left(t+s\right)}\right)}{\left(1-\frac {2t}{3t+s}\right)^{4}\left(\frac{2t}{3t+s}\right)^{2}}\] \[\quad\times\int_{\ell}\frac{1}{\left(\ell^{2}\right)^{2}}\frac{1}{ \ell^{2}+\left(\frac{3t+s}{2t}\right)^{2}m_{0}^{2}}e^{-\frac{2t\left(t+s \right)}{3t+s}\ell^{2}}. \tag{156}\]
The structure of the above \(\ell\)-integral is the same as that of Diagram D-(ii), and we immediately
obtain the final form,
\[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\text{D-(iii)}} \simeq\frac{\left(g_{0}^{4}\mu^{4\varepsilon}\right)}{4}\left( \frac{1}{4\pi\left(3t+s\right)}\right)^{d/2}\mathcal{A}\] \[\quad\times\left(\frac{2}{\left(4\pi\right)^{d/2}\Gamma(d/2)} \right)\frac{1}{2}e^{\mathcal{CM}^{2}}\mathcal{M}^{-6+d}\Gamma\!\left(-2+ \frac{d}{2}\right)\Gamma\!\left(3-\frac{d}{2},\mathcal{CM}^{2}\right), \tag{157}\]
where we define the variables for our convenience,
\[\mathcal{A}=-\frac{\left(s+3t\right)^{4}}{\left(2t\right)^{3}\left(s+t\right) }\frac{\left(1+\frac{2t}{s+3t}\right)^{2}}{\left(1-\frac{2t}{s+3t}\right)^{4} },\hskip 19.916929pt\mathcal{M}=\left(\frac{s+3t}{2t}\right)m_{0},\hskip 19.916929pt \mathcal{C}=\frac{2t\left(s+t\right)}{s+3t}. \tag{158}\]
The \(\varepsilon\)-expanded form of Eq. (153) is evaluated straightforwardly as
\[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\text{D-(i)}} \simeq\frac{g_{0}^{4}}{\left(4\pi\right)^{4}\left(s+t\right)^{2}m_{0}^{2}}\frac{1}{4}\left\{\frac{1}{\varepsilon}-\gamma+\log\left[\frac{16\pi^{2}\left(s+t\right)\mu^{4}}{\mu_{\text{IR}}^{2}}\right]\right\}+\mathcal{O}(\varepsilon)\,. \tag{159}\]
Calculations of the forms in Eqs. (154) and (157) are also straightforward with the help of the formulas in Eqs. (130) and (131) as
(160)
#### a.2.5 Diagram E
After taking the safe limit \(y\to x\) as in Eq. (60), we obtain
\[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\text{E}}=\left(-g_{ 0}^{4}\mu^{4\varepsilon}\right)\int_{p}\frac{e^{-\left(t+s\right)p^{2}}}{p^{2 }+m_{0}^{2}}\int_{0}^{t}\mathrm{d}u\int_{0}^{u}\mathrm{d}\widetilde{u}\int_{ \ell}\frac{\left(\ell+p\right)^{2}}{\ell^{2}+m_{0}^{2}}e^{-2u\ell^{2}+2(u- \widetilde{u})\ell\cdot p}. \tag{162}\]
We can perform the integrals on the flow times exactly, where Eq. (162) leads to
\[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\text{E}} =\left(-g_{0}^{4}\mu^{4\varepsilon}\right)\int_{p}\frac{e^{- \left(t+s\right)p^{2}}}{p^{2}+m_{0}^{2}}\int_{\ell}\frac{\left(\ell+p\right)^ {2}}{\ell^{2}+m_{0}^{2}}\] \[\quad\times\left[\frac{1}{4\ell^{2}\left(\ell^{2}-\ell\cdot p \right)}-\frac{1}{4\left(\ell\cdot p\right)\left(\ell^{2}-\ell\cdot p\right)} e^{-2t\left(\ell^{2}-\ell\cdot p\right)}+\frac{1}{4\left(\ell\cdot p\right) \ell^{2}}e^{-2t\ell^{2}}\right], \tag{163}\]
where we divide this form into the following three pieces,
\[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\text{E-(i)}} :=-\left(g_{0}^{4}\mu^{4\varepsilon}\right)\int_{p}\frac{e^{-(t+s )p^{2}}}{p^{2}+m_{0}^{2}}\int_{\ell}\frac{\left(\ell+p\right)^{2}}{\ell^{2}+m_ {0}^{2}}\times\frac{1}{4\ell^{2}\left(\ell^{2}-\ell\cdot p\right)}, \tag{164}\] \[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\text{E-( ii)}} :=+\left(g_{0}^{4}\mu^{4\varepsilon}\right)\int_{p}\frac{e^{-(t+s )p^{2}}}{p^{2}+m_{0}^{2}}\int_{\ell}\frac{\left(\ell+p\right)^{2}}{\ell^{2}+m_ {0}^{2}}\times\frac{1}{4\left(\ell\cdot p\right)\left(\ell^{2}-\ell\cdot p \right)}e^{-2t\left(\ell^{2}-\ell\cdot p\right)},\] (165) \[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\text{E-( iii)}} :=-\left(g_{0}^{4}\mu^{4\varepsilon}\right)\int_{p}\frac{e^{-(t+s )p^{2}}}{p^{2}+m_{0}^{2}}\int_{\ell}\frac{\left(\ell+p\right)^{2}}{\ell^{2}+m _{0}^{2}}\times\frac{1}{4\left(\ell\cdot p\right)\ell^{2}}e^{-2t\ell^{2}}. \tag{166}\]
For Eq. (164), we easily find the saddle point \(p_{*}=0\) and utilising the formula in Eq. (66) leads to
\[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\text{E-(i)}} \simeq-\frac{\left(g_{0}^{4}\mu^{4\varepsilon}\right)}{4}\left(\frac{1}{4\pi\left(t+s\right)}\right)^{d/2}\frac{1}{m_{0}^{2}}\int_{\ell}\frac{1}{\ell^{2}\left(\ell^{2}+m_{0}^{2}\right)}\] \[=-\frac{\left(g_{0}^{4}\mu^{4\varepsilon}\right)}{4}\left(\frac{1}{4\pi\left(t+s\right)}\right)^{d/2}\frac{1}{m_{0}^{2}}\frac{1}{\left(4\pi\right)^{d/2}}\frac{\Gamma(2-d/2)}{\Gamma(2)}\int_{0}^{1}\mathrm{d}x\left(\frac{1}{xm_{0}^{2}}\right)^{2-d/2}, \tag{167}\]
where the second step is an ordinary calculation with the help of the Feynman parametrization.
For Eq. (165), the saddle point of \(\ell\) is located at \(\ell_{*}=p/2\) and the formula in Eq. (66) brings us to
\[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\text{E- (ii)}} \simeq-\left(9\cdot 2g_{0}^{4}\mu^{4\varepsilon}\right)\left(\frac{1}{8\pi t }\right)^{d/2}\int_{p}\frac{1}{p^{2}\left(p^{2}+m_{0}^{2}\right)\left(p^{2}+4m _{0}^{2}\right)}e^{-(s+t/2)p^{2}}\] \[=-\left(18g_{0}^{4}\mu^{4\varepsilon}\right)\left(\frac{1}{8\pi t }\right)^{d/2}\left(\frac{2}{\left(4\pi\right)^{d/2}\Gamma(d/2)}\right)\] \[\quad\times\int_{0}^{\infty}\mathrm{d}pp^{d-1}\frac{1}{p^{2}\left( p^{2}+m_{0}^{2}\right)\left(p^{2}+4m_{0}^{2}\right)}e^{-\left(\frac{t}{2}+s \right)p^{2}}\] \[=-\left(18g_{0}^{4}\mu^{4\varepsilon}\right)\left(\frac{1}{8\pi t }\right)^{d/2}\left(\frac{2}{\left(4\pi\right)^{d/2}\Gamma(d/2)}\right)\] \[\quad\times\frac{1}{96\Gamma(2-d/2)}e^{\frac{1}{2}\left(-id\pi+m_ {0}^{2}(2s+t)\right)}m_{0}^{-6+d}\pi\left[i+\cot\!\left(\frac{d\pi}{2}\right)\right]\] \[\quad\times\left[-16\Gamma\!\left(2-\frac{d}{2},\frac{1}{2}m_{0}^{ 2}\left(2s+t\right)\right)+2^{d}e^{3m_{0}^{2}(2s+t)/2}\Gamma\!\left(2-\frac{d} {2},2m_{0}^{2}\left(2s+t\right)\right)\right], \tag{168}\]
where, in the second and third lines, we integrated the isotropic angular part and the radial part of the \(d\)-dimensional \(p\) integral, respectively.
For Eq. (166), we can make a deformation with a Feynman parameter integral,
\[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\text{E-(iii)}} =-\frac{\left(g_{0}^{4}\mu^{4\varepsilon}\right)}{4}\int_{p,\ell} \frac{\left(p+\ell\right)^{2}}{p^{2}+m_{0}^{2}}\int_{0}^{1}\!\!\mathrm{d}x\int _{0}^{1-x}\!\!\mathrm{d}y\frac{2!e^{-\left(t+s\right)p^{2}-2t\ell^{2}}}{\left[ \left(x+y\right)\ell^{2}+ym_{0}^{2}+\left(1-x-y\right)\left(p\cdot\ell\right) \right]^{3}}\] \[\simeq-\frac{\left(g_{0}^{4}\mu^{4\varepsilon}\right)}{4}\int_{ \ell}\frac{\ell^{2}}{m_{0}^{2}}\int_{0}^{1}\mathrm{d}x\int_{0}^{1-x}\mathrm{d} y\frac{2!e^{-2t\ell^{2}}}{\left[\left(x+y\right)\ell^{2}+ym_{0}^{2}\right]^{3}}\] \[\simeq 0, \tag{169}\]
where, in the second and the third steps, we performed the \(p\)- and \(\ell\)-integrals by use of the formula in Eq. (66) around the saddle points \(p_{*}=0\) and \(\ell_{*}=0\), respectively.
The \(\varepsilon\)-expansion of Eq. (167) is straightforward, and that of Eq. (168) can be done in the same way as for Eq. (142); the results are
\[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\text{E-(i)}} \simeq-\frac{g_{0}^{4}}{\left(4\pi\right)^{4}\left(s+t\right)^{2}m_{0}^{2}}\frac{1}{4}\left\{\frac{1}{\varepsilon}-\gamma+1+\log\left[\frac{16\pi^{2}\left(s+t\right)\mu^{4}}{m_{0}^{2}}\right]\right\}+\mathcal{O}(\varepsilon)\,, \tag{170}\] \[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\text{E-(ii)}} \simeq-\frac{g_{0}^{4}}{\left(4\pi\right)^{4}t^{2}m_{0}^{2}}\frac{3}{32}e^{\frac{1}{2}m_{0}^{2}\left(2s+t\right)}\] \[\times\left[16\Gamma\!\left(0,\frac{1}{2}m_{0}^{2}\left(2s+t\right)\right)-16\,e^{\frac{3}{2}m_{0}^{2}\left(2s+t\right)}\Gamma\!\left(0,2m_{0}^{2}\left(2s+t\right)\right)\right]+\mathcal{O}(\varepsilon)\,. \tag{171}\]
#### a.2.6 Diagram a
For Eq. (61), after taking the limit \(y\to x\) safely, we find that the \(p\)-integral part and \(\ell\)-integral part are separable,
\[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\text{a}} =\left(-dg_{0}^{4}\mu^{4\varepsilon}\right)\int_{p}\frac{e^{-\left(t+s \right)p^{2}}}{\left(p^{2}+m_{0}^{2}\right)^{2}}\int_{\ell}\frac{1}{\ell^{2}+ \mu_{A}^{2}}, \tag{172}\]
where the latter part is evaluated straightforwardly,
\[\int_{\ell}\frac{1}{\ell^{2}+\mu_{A}^{2}} =\frac{1}{\left(4\pi\right)^{d/2}}\left(\mu_{A}^{2}\right)^{1- \varepsilon}\Gamma(\varepsilon-1)\,, \tag{173}\]
which contains no infrared divergence, and we can take the limit \(\mu_{A}\to 0\) safely. Thereby, we conclude that
\[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\text{a}} \xrightarrow[\mu_{A}\to 0]{}0. \tag{174}\]
#### a.2.7 Diagram b
For Eq. (62), after taking the limit \(y\to x\) safely, we can perform the \(p\)-integral part and \(\ell\)-integral part separately,
\[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\text{b}} =\left(-4\lambda_{0}g_{0}^{2}\mu^{4\varepsilon}\right)\int_{p} \frac{e^{-\left(t+s\right)p^{2}}}{\left(p^{2}+m_{0}^{2}\right)^{2}}\int_{ \ell}\frac{1}{\ell^{2}+m_{0}^{2}}, \tag{175}\]
where the latter part can be evaluated as in Eq. (173). The former integral is also analytically feasible with the help of the exponential integral function \(E_{n}(z)\) as
\[\int_{p}\frac{e^{-(t+s)p^{2}}}{\left(p^{2}+m_{0}^{2}\right)^{2}} =\left(\frac{2}{\left(4\pi\right)^{d/2}\Gamma(d/2)}\right)\frac{1} {8}\left(d-4\right)\left(s+t\right)^{2-d/2}\Gamma\!\left(-2+\frac{d}{2}\right)\] \[\times\left\{-2+e^{m_{0}^{2}\left(s+t\right)}\left[-2+d+2m_{0}^{ 2}\left(s+t\right)\right]E_{\frac{d-2}{2}}\!\left(m_{0}^{2}\left(s+t\right) \right)\right\}. \tag{176}\]
The total form is given as
\[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\mathrm{b}} =\left(-4\lambda_{0}g_{0}^{2}\mu^{4\varepsilon}\right)\frac{ \left(m_{0}^{2}\right)^{1-\varepsilon}}{\left(4\pi\right)^{4-2\varepsilon}} \Gamma(\varepsilon-1)\,\Gamma(-\varepsilon)\,\frac{2}{\Gamma(2-\varepsilon)} \left(-\frac{\varepsilon}{4}\right)\left(s+t\right)^{\varepsilon}\] \[\times\left\{-2+e^{m_{0}^{2}\left(s+t\right)}\left[2-2\varepsilon +2m_{0}^{2}\left(s+t\right)\right]E_{1-\varepsilon}\!\left(m_{0}^{2}\left(s+t \right)\right)\right\}. \tag{177}\]
With the help of the following formulas,
\[\frac{\Gamma(\varepsilon-1)\,\Gamma(-\varepsilon)}{\Gamma(2- \varepsilon)} =\frac{1}{\varepsilon^{2}}+\frac{2-\gamma}{\varepsilon}+\frac{1}{12} \left(36-24\gamma+6\gamma^{2}+\pi^{2}\right)+\mathcal{O}(\varepsilon)\,, \tag{178}\] \[E_{1-\varepsilon}\!\left(m_{0}^{2}\left(s+t\right)\right) =E_{1}\!\left(m_{0}^{2}\left(s+t\right)\right)+G_{2,3}^{3,0}\!\left(\!{}_{1,0,0}^{1,1}\right|\!m_{0}^{2}\left(s+t\right)\!\right)\varepsilon+\mathcal{O} (\varepsilon)\,, \tag{179}\]
we reach the \(\varepsilon\)-expanded form,
\[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\mathrm{b}} =4\lambda_{0}g_{0}^{2}\frac{m_{0}^{2}}{\left(4\pi\right)^{4}} \Bigg{\{}\frac{1}{\varepsilon}\left[-1+e^{m_{0}^{2}\left(s+t\right)}\left(1+m _{0}^{2}\left(s+t\right)\right)E_{1}\!\left(m_{0}^{2}\left(s+t\right)\right) \right]\] \[\quad+\left(2-\gamma+\log\left[\frac{16\pi^{2}\left(s+t\right)\mu ^{4}}{m_{0}^{2}}\right]\right)\left[-1+e^{m_{0}^{2}\left(s+t\right)}\left(1+m _{0}^{2}\left(s+t\right)\right)E_{1}\!\left(m_{0}^{2}\left(s+t\right)\right) \right]\] \[\quad-e^{m_{0}^{2}\left(s+t\right)}E_{1}\!\left(m_{0}^{2}\left(s+ t\right)\right)+e^{m_{0}^{2}\left(s+t\right)}\left(1+m_{0}^{2}\left(s+t\right) \right)G_{2,3}^{3,0}\!\left(\!{}_{0,0,0}^{1,1}\right|\!m_{0}^{2}\left(s+t\right) \right)\Bigg{\}}+\mathcal{O}(\varepsilon)\,. \tag{180}\]
#### a.2.8 Diagram c
For Eq. (63), the form after taking the limit \(y\to x\) safely,
\[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\mathrm{c}} =\left(g_{0}^{4}\mu^{4\varepsilon}\right)\int_{p}\frac{e^{-(t+s)p^ {2}}}{\left(p^{2}+m_{0}^{2}\right)^{2}}\int_{\ell}\frac{\left(\ell+p\right)^{ 2}}{\left(\ell-p\right)^{2}\left(\ell^{2}+m_{0}^{2}\right)}\] \[\simeq\left(g_{0}^{4}\mu^{4\varepsilon}\right)\frac{1}{m_{0}^{4}} \left(\frac{1}{4\pi\left(s+t\right)}\right)^{d/2}\int_{\ell}\frac{1}{\left( \ell^{2}+m_{0}^{2}\right)}\] \[=\left(g_{0}^{4}\mu^{4\varepsilon}\right)\frac{1}{m_{0}^{4}} \left(\frac{1}{4\pi\left(s+t\right)}\right)^{d/2}\frac{1}{\left(4\pi\right)^{ d/2}}\left(m_{0}^{2}\right)^{1-\varepsilon}\Gamma(\varepsilon-1)\,, \tag{181}\]
where in the second line, we performed the saddle-point integration by use of Eq. (66) with \(p_{*}=0\), and in the third line, Eq. (173) was applied.
The evaluation of the \(\varepsilon\)-expanded form is straightforward, where the resultant form is
\[\left\langle\Psi(t,x)\,\Psi^{\dagger}(s,x)\right\rangle_{\mathrm{c}} \simeq\frac{g_{0}^{4}}{\left(4\pi\right)^{4}\left(s+t\right)^{2}m_{0}^{2}} \left\{-\frac{1}{\varepsilon}+\gamma-1+\log\left[\frac{m_{0}^{2}}{16\pi^{2} \left(s+t\right)\mu^{4}}\right]\right\}+\mathcal{O}(\varepsilon)\,. \tag{182}\]
|
2310.10979
|
A New Gauge-Theoretic Construction of 4-Dimensional Hyperkähler ALE
Spaces
|
Non-compact hyperk\"ahler spaces arise frequently in gauge theory. The
4-dimensional hyperk\"ahler ALE spaces are a special class of non-compact
hyperk\"ahler spaces. They are in one-to-one correspondence with the finite
subgroups of SU(2) and have interesting connections with representation theory
and singularity theory, captured by the McKay Correspondence.
The 4-dimensional hyperk\"ahler ALE spaces are first classified by Peter
Kronheimer via a finite-dimensional hyperk\"ahler reduction. In this paper, we
give a new gauge-theoretic construction of these spaces. More specifically, we
realize each 4-dimensional hyperk\"ahler ALE space as a moduli space of
solutions to a system of equations for a pair consisting of a connection and a
section of a vector bundle over an orbifold Riemann surface, modulo a gauge
group action. The construction given in this paper parallels Kronheimer's
original construction and hence can also be thought of as a gauge-theoretic
interpretation of Kronheimer's construction of these spaces.
|
Jiajun Yan
|
2023-10-17T03:54:54Z
|
http://arxiv.org/abs/2310.10979v1
|
# A New Gauge-Theoretic Construction of \(4\)-Dimensional Hyperkahler ALE Spaces
###### Abstract.
We give a new gauge-theoretic construction of all the \(4\)-dimensional hyperkahler ALE (asymptotically locally Euclidean) spaces. These spaces were originally constructed by Peter Kronheimer in his thesis [17]. They are in one-to-one correspondence with the finite subgroups of \(SU(2)\) and have deep connections with representation theory, singularity theory and low-dimensional topology. Topologically, these spaces are plumbings of the \(4\)-ball where the plumbing graph is described by the ADE-type Dynkin diagrams of semi-simple Lie algebras. Geometrically, they are the resolution of the singularity of \(\mathbb{C}^{2}/\Gamma\), where \(\Gamma\) is a finite subgroup of \(SU(2)\) and the blowup diagram naturally corresponds to the plumbing graph. The interesting connections these spaces share with representation theory, singularity theory and low-dimensional topology are captured by the McKay Correspondence. In Kronheimer's construction, each of them is realized through a hyperkahler reduction of a finite-dimensional vector space. We will review this construction in Section 2.
On the other hand, non-compact hyperkahler ALE spaces frequently arise in gauge theory as the moduli spaces of solutions to gauge-theoretic equations. Well-known examples include the Hitchin moduli spaces of solutions to self-duality equations on Riemann surfaces. There have been results realizing different types of gravitational instantons as moduli spaces of monopoles. Here we give a new construction of \(4\)-dimensional hyperkahler ALE spaces using a gauge-theoretic approach. More specifically, we realize each \(4\)-dimensional hyperkahler ALE space as a moduli space of solutions to a system of equations for a pair consisting of a connection and a section of a vector bundle over an orbifold Riemann surface, modulo a gauge group action; this is made precise in the following theorem. The precise definitions of the notations in the theorem below will be given in Section 2 and Section 3.
**Theorem 1.1** (cf. Theorem 3.8).: _Let \(\tilde{\zeta}=(\tilde{\zeta}_{1},\tilde{\zeta}_{2},\tilde{\zeta}_{3})\), where for all \(i\), \(\tilde{\zeta}_{i}\in Z\). Let_
\[\mathcal{X}_{\tilde{\zeta}}=\{(B,\Theta)\in\mathcal{A}_{\tau}^{F}\times C^{ \infty}(S^{2}/\Gamma,E(\Gamma))|(3.1)-(3.4)\}/\mathcal{G}_{\tau}^{F,\Gamma}.\]
_Then for a suitable choice of \(\tilde{\zeta}\), \(\mathcal{X}_{\tilde{\zeta}}\) is diffeomorphic to the resolution of singularity \(\widetilde{\mathbb{C}^{2}/\Gamma}\). Furthermore, for \(\zeta=\tilde{\zeta}^{*}=-\tilde{\zeta}\), there exists a map \(\Phi\) taking \(X_{\zeta}\) in [17] to \(\mathcal{X}_{\tilde{\zeta}}\) and a natural choice of metric on \(\mathcal{X}_{\tilde{\zeta}}\) such that \(\Phi\) is an isometry._
Below is the layout of the paper. In Section 2, we give a review of Kronheimer's original construction of ALE spaces [17] and introduce various notations. In Section 3, we give an overview of the new gauge-theoretic construction of ALE spaces where we write down the bundles and gauge
groups that we will be working with for the construction. We also sketch the procedures for constructing the moduli spaces. In Section 4, we construct hyperkahler structures on some infinite-dimensional spaces which will be important for the construction. Section 5 and Section 6 deal with the main technical steps for proving the main theorem (cf. Theorem 3.8), and we prove the main theorem in Section 7. This new construction also leads to several different directions for generalizations which we will not discuss in detail here.
## 2. Preliminaries
We begin the section with a review of Kronheimer's construction of ALE spaces in [17] which will be of great importance throughout the paper. Then, we will lay out the basic setups for the main gauge-theoretic construction, introducing definitions central to the construction as well as fixing notations and conventions.
### Kronheimer's construction of ALE spaces
We review Kronheimer's construction of ALE spaces via hyperkahler reduction in [17] in this subsection.
Let \(\Gamma\) be a finite subgroup of \(SU(2)\) and let \(R\) be its regular representation. Let \(Q\cong\mathbb{C}^{2}\) be the canonical \(2\)-dimensional representation of \(SU(2)\) and let \(P=Q\otimes_{\mathbb{C}}End(R)\), where \(End(R)\) denote the endomorphism space of \(R\). Let \(M=P^{\Gamma}\) be the space of \(\Gamma\)-invariant elements in \(P\). After fixing a \(\Gamma\)-invariant hermitian metric on \(R\), \(P\) and \(M\) can be regarded as right \(\mathbb{H}\)-modules. Now, choose an orthonormal basis on \(Q\), then we can write an element in \(P\) as a pair of matrices \((\alpha,\beta)\) with \(\alpha,\beta\in End(R)\), and the action of \(J\) on \(P\) is given by
\[J(\alpha,\beta)=(-\beta^{*},\alpha^{*}).\]
Since the action of \(\Gamma\) on \(P\) is \(\mathbb{H}\)-linear, the subspace \(M\) is then an \(\mathbb{H}\)-submodule, which can be regarded as a flat hyperkahler manifold. Explicitly, a pair \((\alpha,\beta)\) is in \(M\) if for each
\[\gamma=\begin{pmatrix}u&v\\ -v^{*}&u^{*}\end{pmatrix}\in\Gamma,\]
where \(v^{*}\) and \(u^{*}\) denote the complex conjugate of \(v\) and \(u\), respectively, we have
\[R(\gamma^{-1})\alpha R(\gamma)=u\alpha+v\beta, \tag{2.1}\]
\[R(\gamma^{-1})\beta R(\gamma)=-v^{*}\alpha+u^{*}\beta. \tag{2.2}\]
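As a concrete illustration, not part of the original text, the conditions (2.1)-(2.2) can be solved numerically for the cyclic group \(\Gamma=\mathbb{Z}_{k}\subset SU(2)\), with the generator \(\gamma=\mathrm{diag}(\omega,\omega^{*})\) (so \(u=\omega\), \(v=0\)) acting on the regular representation by the cyclic shift; since \(\Gamma\) is generated by a single element, imposing the conditions for the generator suffices. The complex dimension of the solution space comes out as \(2|\Gamma|\), i.e. \(\dim_{\mathbb{R}}M=4|\Gamma|\), in agreement with the count given later in this subsection.

```python
# Dimension count of the Gamma-invariant pairs (alpha, beta) for Gamma = Z_k.
import numpy as np

k = 5
w = np.exp(2j*np.pi/k)
R = np.roll(np.eye(k), 1, axis=0)   # regular representation of the generator (cyclic shift)
Rinv = R.T

def condition(vec):
    # linear conditions (2.1)-(2.2) for the generator: R^{-1} a R = w a, R^{-1} b R = w* b
    a = vec[:k*k].reshape(k, k)
    b = vec[k*k:].reshape(k, k)
    return np.concatenate([(Rinv @ a @ R - w*a).ravel(),
                           (Rinv @ b @ R - np.conj(w)*b).ravel()])

L = np.array([condition(v) for v in np.eye(2*k*k, dtype=complex)]).T
null_dim = 2*k*k - np.linalg.matrix_rank(L)
print(null_dim)               # complex dimension 2k = 2|Gamma|
print(2*null_dim == 4*k)      # real dimension equals 4|Gamma| = dim_R M
```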
Let \(U(R)\) denote the group of unitary transformations of \(R\) and let \(F\) be the subgroup formed by elements in \(U(R)\) that commute with the \(\Gamma\)-action on \(R\). The natural action of \(F\) on \(P\) is given by the following: for \(f\in F\),
\[(\alpha,\beta)\mapsto(f\alpha f^{-1},f\beta f^{-1}).\]
Again, the action of \(F\) on \(P\) is \(\mathbb{H}\)-linear and preserves \(M\). On the other hand, since \(F\) acts by conjugation, the scalar subgroup \(T\subset F\) acts trivially, and hence, we get an action of \(F/T\) on \(M\) that preserves \(I\), \(J\), \(K\).
Now, let \(\mathbf{f}/\mathbf{t}\) be the Lie algebra of \(F/T\) and identify \((\mathbf{f}/\mathbf{t})^{*}\) with the traceless elements of \(\mathbf{f}\subset End(R)\). As the action of \(F/T\) on \(M\) is Hamiltonian with respect to \(I\), \(J\), \(K\), we obtain the following moment maps:
\[\mu_{1}(\alpha,\beta)=\frac{i}{2}([\alpha,\alpha^{*}]+[\beta,\beta^{*}]),\]
\[\mu_{2}(\alpha,\beta)=\frac{1}{2}([\alpha,\beta]+[\alpha^{*},\beta^{*}]),\]
\[\mu_{3}(\alpha,\beta)=\frac{i}{2}(-[\alpha,\beta]+[\alpha^{*},\beta^{*}]).\]
Let \(\mu=(\mu_{1},\mu_{2},\mu_{3}):M\rightarrow\mathbb{R}^{3}\otimes(\mathbf{f}/\mathbf{t})^{*}\). Let \(Z\) denote the center of \((\mathbf{f}/\mathbf{t})^{*}\) and let \(\zeta=(\zeta_{1},\zeta_{2},\zeta_{3})\in\mathbb{R}^{3}\otimes Z\). For \(\zeta\) lying in the "good set", we get that \(X_{\zeta}=\mu^{-1}(\zeta)/F\) is a smooth \(4\)-manifold diffeomorphic to \(\widetilde{\mathbb{C}^{2}/\Gamma}\).
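Two elementary properties of these moment maps, tracelessness and equivariance under conjugation by a unitary matrix \(f\), can be verified numerically for arbitrary matrices \(\alpha,\beta\); the following sketch is purely illustrative and is not part of the original construction.

```python
# Numerical check: mu_i are traceless and conjugation-equivariant (random matrices).
import numpy as np

rng = np.random.default_rng(0)
n = 4

def comm(x, y):
    return x @ y - y @ x

def moments(a, b):
    ah, bh = a.conj().T, b.conj().T
    m1 = 0.5j*(comm(a, ah) + comm(b, bh))
    m2 = 0.5*(comm(a, b) + comm(ah, bh))
    m3 = 0.5j*(-comm(a, b) + comm(ah, bh))
    return m1, m2, m3

a = rng.normal(size=(n, n)) + 1j*rng.normal(size=(n, n))
b = rng.normal(size=(n, n)) + 1j*rng.normal(size=(n, n))
f = np.linalg.qr(rng.normal(size=(n, n)) + 1j*rng.normal(size=(n, n)))[0]   # unitary

print([abs(np.trace(m)) for m in moments(a, b)])   # ~ 0: traceless
lhs = moments(f @ a @ f.conj().T, f @ b @ f.conj().T)
rhs = [f @ m @ f.conj().T for m in moments(a, b)]
print(max(np.abs(l - r).max() for l, r in zip(lhs, rhs)))   # ~ 0: equivariance
```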
**Proposition 2.1** (cf. Proposition 2.1. in [17]).: _Suppose that \(F\) acts freely on \(\mu^{-1}(\zeta)\). Then_
1. \(d\mu\) _has full rank at all points of_ \(\mu^{-1}(\zeta)\)_, so that_ \(X_{\zeta}\) _is a nonsingular manifold of dimension_ \(\dim M-2\dim F\) _(resp._ \(\dim M-4\dim F\)_),_
2. _the metric_ \(g\) _and complex structures_ \(I\) _(resp._ \(I\)_,_ \(J\)_,_ \(K\)_) descend to_ \(X_{\zeta}\)_, and equipped with these,_ \(X_{\zeta}\) _is kahler (resp. hyperkahler)._
Now, we review some basic representation theory related to the McKay Correspondence mentioned in [17]. Let \(R_{0},...,R_{r}\) be the irreducible representations of \(\Gamma\) with \(R_{0}\) the trivial representation, and let
\[Q\otimes R_{i}=\bigoplus_{j}a_{ij}R_{j}\]
be the decomposition of \(Q\otimes R_{i}\) into irreducibles. The representations \(R_{1},...,R_{r}\) correspond to the set of simple roots \(\xi_{1},...,\xi_{r}\) for the associated root system of one of the ADE-type Dynkin diagrams. Furthermore, if \(\xi_{0}=-\sum_{1}^{r}n_{i}\xi_{i}\) is the negative of the highest root, then we have that for all \(i\),
\[n_{i}=\dim R_{i}.\]
Hence, the regular representation \(R\) decomposes as
\[R=\bigoplus_{i}\mathbb{C}^{n_{i}}\otimes R_{i},\]
and \(M\) decomposes as
\[M=\bigoplus_{i,j}a_{ij}Hom(\mathbb{C}^{n_{i}},\mathbb{C}^{n_{j}}),\]
and \(F\) can be written as
\[F=\times_{i}U(n_{i}).\]
Consequently, we get
\[\dim_{\mathbb{R}}M=\sum_{i,j}2a_{ij}n_{i}n_{j}=\sum_{i}4n_{i}^{2}=4|\Gamma|,\]
and
\[\dim_{\mathbb{R}}F=\sum_{i}n_{i}^{2}=|\Gamma|.\]
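These dimension counts can be illustrated numerically; the sketch below, not part of the original text, takes \(\Gamma=\mathbb{Z}_{k}\subset SU(2)\), computes the multiplicities \(a_{ij}\) from characters, and verifies \(\sum_{j}a_{ij}n_{j}=2n_{i}\), \(\dim_{\mathbb{R}}M=4|\Gamma|\) and \(\dim_{\mathbb{R}}F=|\Gamma|\).

```python
# McKay multiplicities and dimension counts for the cyclic subgroup Gamma = Z_k.
import numpy as np

k = 7
w = np.exp(2j*np.pi/k)
chi_Q = np.array([w**m + w**(-m) for m in range(k)])              # character of Q = C^2
chi = np.array([[w**(i*m) for m in range(k)] for i in range(k)])  # 1-dim irreps R_i
n = np.ones(k, dtype=int)                                         # dim R_i = 1

# a_ij = multiplicity of R_j in Q (x) R_i, from the character inner product
a = np.array([[np.mean(chi_Q * chi[i] * chi[j].conj()).real for j in range(k)]
              for i in range(k)]).round().astype(int)

print(np.allclose(a @ n, 2*n))                    # Q (x) R_i has dimension 2 n_i
print(int((n**2).sum()) == k)                     # dim_R F = |Gamma|
print(int((2*a*np.outer(n, n)).sum()) == 4*k)     # dim_R M = 4 |Gamma|
```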
The center of the Lie algebra \(\mathbf{f}\) is spanned by the elements \(\sqrt{-1}\pi_{i}\), where \(\pi_{i}\) is the projection \(\pi_{i}:R\to\mathbb{C}^{n_{i}}\otimes R_{i}\) (\(i=0,...,r\)). Let \(h\) be the real Cartan algebra associated to the Dynkin diagram, then there is a linear map \(l\) from the center of \(\mathbf{f}\) to \(h^{*}\) defined by the following:
\[l:\sqrt{-1}\pi_{i}\mapsto n_{i}\xi_{i}.\]
The kernel of \(l\) is the one-dimensional subalgebra \(\mathbf{t}\subset\mathbf{f}\), so on the dual space, we get an isomorphism
\[\iota:Z\to h.\]
For each root \(\xi\), we write
\[D_{\xi}=\ker(\xi\circ\iota).\]
**Proposition 2.2** (cf. Proposition 2.8. in [17]).: _If \(F/T\) does not act freely on \(\mu^{-1}(\zeta)\), then \(\zeta\) lies in one of the codimension-\(3\) subspaces \(\mathbb{R}^{3}\otimes D_{\xi}\subset\mathbb{R}^{3}\otimes Z\), where \(\xi\) is a root._
Hence, the "good set" mentioned earlier in the subsection refers to the following:
\[(\mathbb{R}^{3}\otimes Z)^{\circ}=(\mathbb{R}^{3}\otimes Z)\setminus\bigcup_{ \xi}(\mathbb{R}^{3}\otimes D_{\xi}).\]
The following theorems are also proven in [17] and [18], and together, they give a complete construction and classification of ALE spaces. For all the theorems below in this subsection, let \((X,g)\) be a \(4\)-dimensional hyperkahler manifold.
**Theorem 2.3** (cf. Theorem 1.1. in [17]).: _Let three cohomology classes \(\alpha_{1},\alpha_{2},\alpha_{3}\in H^{2}(X;\mathbb{R})\) be given which satisfy the nondegeneracy condition:_
\((*)\) _For each \(\Sigma\in H_{2}(X;\mathbb{Z})\) with \(\Sigma\cdot\Sigma=-2\), there exists \(i\in\{1,2,3\}\) with \(\alpha_{i}(\Sigma)\neq 0\)._
_Then there exists on \(X\) an ALE hyperkahler structure for which the cohomology classes of the kahler form \([\omega_{i}]\) are the given \(\alpha_{i}\)._
**Theorem 2.4** (cf. Theorem 1.2. in [17]).: _Every ALE hyperkahler \(4\)-manifold is diffeomorphic to the minimal resolution of \(\mathbb{C}^{2}/\Gamma\) for some \(\Gamma\subset SU(2)\), and the cohomology classes of the kahler forms on such a manifold must satisfy the nondegeneracy condition \((*)\)._
**Theorem 2.5** (cf. Theorem 1.3. in [17]).: _If \(X_{1}\) and \(X_{2}\) are two ALE hyperkahler \(4\)-manifolds, and there is a diffeomorphism \(X_{1}\to X_{2}\) under which the cohomology classes of the kahler forms agree, then \(X_{1}\) and \(X_{2}\) are isometric._
### Basic setups for the gauge-theoretic construction
We start off by considering \(S^{3}\) as a principal \(S^{1}\)-bundle over \(S^{2}\) via the dual Hopf fibration. The explicit construction is as follows:
\[S^{3}=\{(z_{1},z_{2})\in\mathbb{C}^{2}:|z_{1}|^{2}+|z_{2}|^{2}=1\}.\]
The \(S^{1}\)-action on \(S^{3}\) is given by the following: for \(g=e^{i\theta}\in S^{1}\),
\[(z_{1},z_{2})g=(z_{1}e^{-i\theta},z_{2}e^{-i\theta}).\]
If we think of the base \(S^{2}\) as a unit sphere sitting inside \(\mathbb{R}^{3}\), we can write down the projection map explicitly, which will be useful later on: let \(\pi:S^{3}\to S^{2}\) be the projection map where \(\pi(z_{1},z_{2})=(2z_{1}z_{2}^{*},|z_{1}|^{2}-|z_{2}|^{2})\). In terms of real coordinates, \(\pi(a,b,c,d)=(2(ac+bd),2(bc-ad),a^{2}+b^{2}-c^{2}-d^{2})\). Equivalently, we can think of \(\pi\) as a map from \(S^{3}\) to \(\mathbb{C}P^{1}\) given by \(\pi:S^{3}\to\mathbb{C}P^{1}\) with \(\pi(z_{1},z_{2})=[z_{1}:z_{2}]\).
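As a quick numerical check, not part of the original text, the projection \(\pi\) indeed lands on the unit sphere and depends only on the \(S^{1}\)-orbit of \((z_{1},z_{2})\):

```python
# Check that pi(z1, z2) = (2 z1 conj(z2), |z1|^2 - |z2|^2) lies on S^2 and is S^1-invariant.
import numpy as np

rng = np.random.default_rng(1)

def proj(z1, z2):
    w = 2*z1*np.conj(z2)
    return np.array([w.real, w.imag, abs(z1)**2 - abs(z2)**2])

v = rng.normal(size=4)
z1, z2 = complex(v[0], v[1]), complex(v[2], v[3])
r = np.sqrt(abs(z1)**2 + abs(z2)**2)
z1, z2 = z1/r, z2/r                                  # a point on S^3

g = np.exp(-1j*0.7)                                  # an S^1 element, acting diagonally
print(np.linalg.norm(proj(z1, z2)))                  # = 1: the image lies on S^2
print(np.allclose(proj(z1*g, z2*g), proj(z1, z2)))   # S^1-invariance
```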
Now, we turn to the associated bundles of \(S^{3}\). A complex vector space \(V\) with an \(S^{1}\)-action on \(V\) determines a vector bundle over \(S^{2}\) associated to \(S^{3}\) with fiber \(V\). Below, we consider the specific \(S^{1}\)-action on a complex vector space \(V\) given by the scalar multiplication.
**Definition 2.6**.: Let \(V\) be a complex vector space with \(S^{1}\)-action given by scalar multiplication. Then \(E(V)\) is defined as \(E(V)=S^{3}\times V/\sim\), where \([p,v]\sim[pg,g^{-1}v]\), for all \(g\in S^{1}\), and \(E(\overline{V})\) is defined as \(E(\overline{V})=S^{3}\times V/\sim\), where \([p,v]\sim[pg,gv]\), for all \(g\in S^{1}\).
There are three important examples that we will be working with closely, i.e., the hyperplane bundle, the tautological bundle over \(S^{2}\) and the associated bundle with fiber \(V=End(R)\), where \(R\) is the regular representation of a finite subgroup \(\Gamma\) of \(SU(2)\).
**Example 2.7** (The hyperplane bundle).: Let \(V=\mathbb{C}\). Then by the previous definition, \(E(\mathbb{C})\) is isomorphic to the hyperplane bundle \(H\) over \(\mathbb{C}P^{1}\).
**Example 2.8**.: Let \(\Gamma\) be a finite subgroup of \(SU(2)\), and let \(R\) be the regular representation of \(\Gamma\) with invariant hermitian metric. We see that \(E(V)\) splits orthogonally into a direct sum of hyperplane bundles, that is, \(E(V)=\oplus_{i}H_{i}\), where each \(H_{i}=E(\mathbb{C}\cdot e_{i})\) is isomorphic to the hyperplane bundle \(H\).
### The \(\Gamma\)-action and orbifold vector bundles
Let \(\Gamma\) be a finite subgroup of \(SU(2)\) as before, and let \(V\) be a representation of \(\Gamma\) given by \(r:\Gamma\to GL_{\mathbb{C}}(V)\). We want to build an orbifold vector bundle incorporating the \(\Gamma\)-representation. To this end, we introduce the following definition.
**Definition 2.9**.:
1. Suppose either \(\Gamma\) doesn't contain \(-1\in SU(2)\) or \(\Gamma\) contains \(-1\) and \(r(-1)=-1\in GL_{\mathbb{C}}(V)\). Let \(E(V)_{r}^{\Gamma}\) be defined as follows: \(E(V)_{r}^{\Gamma}=S^{3}\times V/\sim\), where \([p,v]\sim[pg,g^{-1}v]\sim[p\gamma,\gamma^{-1}v]\), for all \(g\in S^{1}\) and \(\gamma\in\Gamma\), where \(\gamma^{-1}v:=r(\gamma^{-1})v\).
2. Suppose \(\Gamma\) contains \(-1\in SU(2)\) and \(r(-1)\neq-1\in GL_{\mathbb{C}}(V)\), we decompose \(V\) into the eigenspaces of \(r(-1)\), and write \(V=V_{0}\oplus V_{1}\)
where \(r(-1)\) acts as \(1\) on \(V_{0}\) and acts as \(-1\) on \(V_{1}\). Define \(E(V)_{r}^{\Gamma}\) to be \(E(V)_{r}^{\Gamma}=S^{3}\times V_{1}/\sim\), where \([p,v]\sim[pg,g^{-1}v]\sim[p\gamma,\gamma^{-1}v]\), for all \(g\in S^{1}\) and \(\gamma\in\Gamma\), where \(\gamma^{-1}v:=r(\gamma^{-1})v\).
We will oftentimes abbreviate \(E(V)_{r}^{\Gamma}\) as \(E(\Gamma)\).
**Remark 2.10**.:
1. We can think of \(E(\Gamma)\) as an orbifold vector bundle over \(S^{2}/\Gamma\).
2. Let \(C^{\infty}(S^{2},E(V))\) denote the space of smooth sections of \(E(V)\) and let \(C^{\infty}(S^{2},E(V))^{\Gamma}\) denote the space of \(\Gamma\)-invariant sections of \(E(V)\). With the above definition, we always have that \(C^{\infty}(S^{2},E(V))^{\Gamma}\cong C^{\infty}_{orb}(S^{2}/\Gamma,E(V)_{r}^{ \Gamma})=C^{\infty}_{orb}(S^{2}/\Gamma,E(\Gamma))\). Note that we will begin to drop the subscript and simply use \(C^{\infty}(S^{2}/\Gamma,E(\Gamma))\) or \(C^{\infty}(E(\Gamma))\) to denote the space of orbifold sections of \(E(\Gamma)\) or equivalently the \(\Gamma\)-invariant sections of \(E(V)\) in the coming sections.
3. If we let \(V\) be equal to the endomorphism space \(End(R)\) of the regular representation \(R\) of \(\Gamma\). Then we have \(\gamma^{-1}v=R(\gamma^{-1})vR(\gamma)\). Recall that in Kronheimer's construction, when forming \(M=P^{\Gamma}=(Q\otimes V)^{\Gamma}\), the element \(-1\in\Gamma\) acts on \(Q\) by scalar multiplication and on \(V\) by the \(\Gamma\)-representation \(r(-1)\). Hence, if an element \(\sum q\otimes v\) is \(\Gamma\)-invariant, we must have \(\sum q\otimes v=\sum(-q)\otimes r(-1)v\), so \(r(-1)\) must act as \(-1\in GL_{\mathbb{C}}(V)\) on \(V\), so \(\sum q\otimes v\) lies in \(Q\otimes V_{1}\). In other words, \(P^{\Gamma}=(Q\otimes V_{1})^{\Gamma}\).
Now, we want to equip the bundles with a pointwise metric. Let \(E(V)\) and \(E(\Gamma)\) be defined as above.
**Definition 2.11**.:
1. Let \(h_{V}\) be a hermitian metric on \(V\). Then the pointwise hermitian metric \(h_{E(V)}\) on \(E(V)\) with respect to \(h_{V}\) is given by \(h_{E(V)}([p,v_{1}],[p,v_{2}])_{x}=h_{V}(v_{1},v_{2})\), where \(p\in S^{3}\) lies in the fiber over \(x\in S^{2}\).
2. Suppose \(h_{V}\) is also \(\Gamma\)-invariant, then \(h_{V}\) gives rise to a pointwise hermitian metric on \(E(\Gamma)\) again given by \(h_{E(V)}([p,v_{1}],[p,v_{2}])_{x}=h_{V}(v_{1},v_{2})\), where \(p\in S^{3}\) lies in the fiber over \(x\in S^{2}\).
**Remark 2.12**.: With the above definition, we can identify \(E(\overline{V})\) with the dual bundle \(E(V)^{*}\), where \([p,v]^{*}([p,v^{\prime}])=h_{V}([p,v^{\prime}],[p,v])=[p,v^{*}]([p,v^{\prime}])\), for \([p,v^{*}]\in E(\overline{V})\), and the metric on \(E(\overline{V})\) is given by taking
\[h_{E(\overline{V})}([p,v_{1}^{*}],[p,v_{2}^{*}])_{x}=h_{\overline{V}}(v_{1}^{* },v_{2}^{*})=h_{V}(v_{2},v_{1}).\]
On the other hand, we can also express \(h_{E(V)}\) in terms of the trace, that is, let
\[h_{E(V)}([p,v_{1}],[p,v_{2}])_{x}=\operatorname{Tr}([p,v_{1}],[p,v_{2}]^{*})_ {x}=\operatorname{Tr}(v_{1}v_{2}^{*}).\]
As a result, we also get \(E(\Gamma)^{*}\).
### The gauge-theoretic framework
We are ready to introduce the gauge-theoretic framework in this paper. We will mainly be working with the orbifold vector bundle \(E(\Gamma)\) that we have defined previously for the main construction. We fix a holomorphic structure on \(E(\mathbb{C})=H\), and denote it by \(\bar{\partial}\).
For the remainder of the section, we assume \(V=End(S)\) to be the endomorphism space of some \(\Gamma\)-representation \(S\). We fix a \(\Gamma\)-invariant hermitian structure \(h_{V}\) on \(V\) and hence get pointwise metrics on \(E(V)\) and \(E(\Gamma)\). We take \(\omega_{vol}\) to be the Fubini-Study form on \(\mathbb{C}P^{1}\).
Let \(A_{0}\) be the unique Chern connection on \(H\) compatible with the holomorphic structure \(\bar{\partial}\) and the hermitian structure descending from \(E(V)\). Note that \(A_{0}\) will be \(\Gamma\)-invariant as it is invariant under \(SU(2)\).
Let \(P\) be the bundle of automorphisms of \(E(V)\). Then \(P\) is in fact the trivial bundle \(S^{2}\times GL_{\mathbb{C}}(V)\). Now, let \(F\subset U(S)\) be the subgroup of unitary transformations of \(S\) that commute with the \(\Gamma\)-action, and let \(T\) be the scalar subgroup of \(F\). Then we can think of \(\tilde{P}\) defined such that \(\tilde{P}=S^{2}\times F/T\) as a subbundle of \(P\), as we can think of \(F/T\) as lying inside \(GL_{\mathbb{C}}(V)\) by acting on \(V\) by conjugation. As \(F\) is the subgroup of \(U(S)\) with elements that commute with the \(\Gamma\)-action, we also get that \(\tilde{P}^{\Gamma}\) defined as \(\tilde{P}^{\Gamma}=S^{2}/\Gamma\times F/T\) is a subbundle of the bundle automorphisms of \(E(\Gamma)\). This motivates the following definition.
**Definition 2.13**.: Let \(V=End(S)\) be the endomorphism space of some \(\Gamma\)-representation \(S\). Let \(F\subset U(S)\) be the unitary transformations of \(S\) that commute with the \(\Gamma\)-action, and let \(T\) be the scalar subgroup sitting inside \(F\).
1. Let the gauge group \(\mathcal{G}^{F,\Gamma}\) of \(E(\Gamma)\) be defined as \(\mathcal{G}^{F,\Gamma}=Map(S^{2}/\Gamma,F/T)\). Let \(\mathbf{g}^{F,\Gamma}\) denote the Lie algebra of \(\mathcal{G}^{F,\Gamma}\). We use \(\rho\) to denote an element in \(\mathcal{G}^{F,\Gamma}\), and we use \(Y\) to denote an element in \(\mathbf{g}^{F,\Gamma}\).
2. Let \(\mathcal{G}^{F,\Gamma}_{\mathbb{C}}\) denote the complexification of \(\mathcal{G}^{F,\Gamma}\), that is, \[\mathcal{G}^{F,\Gamma}_{\mathbb{C}}=Map(S^{2}/\Gamma,F^{c}/\mathbb{C}^{*}),\] where \(F^{c}=GL_{\mathbb{C}}(V)^{\Gamma}\) denotes the complex linear transformations of \(S\) that commute with the \(\Gamma\)-action. We use \(\kappa\) to denote an element in \(\mathcal{G}^{F,\Gamma}_{\mathbb{C}}\).
**Definition 2.14**.: We define the configuration space to be \(\mathcal{A}^{F}\times C^{\infty}(S^{2}/\Gamma,E(\Gamma))\) where \(\mathcal{A}^{F}\) and \(C^{\infty}(S^{2}/\Gamma,E(\Gamma))\) are defined as follows.
1. Let \(\mathcal{A}^{F}\) be the space of connections on \(E(\Gamma)\) given by \[\mathcal{A}^{F}=\{A_{0}+\kappa^{*}\partial\kappa^{*-1}+\kappa^{-1}\bar{ \partial}\kappa\mid\kappa\in\mathcal{G}^{F,\Gamma}_{\mathbb{C}}\},\] where \(A_{0}\) is the aforementioned Chern connection on \(H\) or equivalently \(S^{3}\) thought of as the induced connection on \(E(\Gamma)\). We will always denote \(\kappa^{*}\partial\kappa^{*-1}+\kappa^{-1}\bar{\partial}\kappa\) by \(B\), and sometimes we omit the base connection \(A_{0}\).
2. Let \(C^{\infty}(S^{2}/\Gamma,E(\Gamma))\) be an abbreviation for the space of orbifold vector bundle sections \(C^{\infty}_{orb}(S^{2}/\Gamma,E(\Gamma))\).
**Remark 2.15**.:
1. Notice that \(\mathcal{G}^{F,\Gamma}\) is the subgroup of the group of unitary gauge automorphisms of \(E(\Gamma)=E(V)^{\Gamma}=E(End(S))^{\Gamma}\) induced by the automorphisms of \(E(S)^{\Gamma}\). And the action of
\(\mathcal{G}^{F,\Gamma}\) on \(\mathcal{A}^{F}\times C^{\infty}(S^{2}/\Gamma,E(\Gamma))\) is given by the following: for a pair \((B,\Theta)\in\mathcal{A}^{F}\times C^{\infty}(S^{2}/\Gamma,E(\Gamma))\), \[\rho\cdot(B,\Theta)=(B+\rho d_{B}\rho^{-1},\rho\Theta\rho^{-1}).\] Note that here we omit the base connection \(A_{0}\) as \(\rho\) fixes \(A_{0}\).
2. The action of the connection form \(B\) on a section \(\Theta\) comes from the representation of the Lie algebra of \(F^{c}\) on \(V\) induced from the representation \(S\).
3. The key point of the definition of \(\mathcal{A}^{F}\) is that it can be thought of as the complex gauge orbit containing \(A_{0}\), which will become important in the later sections.
**Definition 2.16** (Symplectic structure on \(\mathcal{A}^{F}\times C^{\infty}(S^{2}/\Gamma,E(\Gamma))\)).: Let \((B_{1},\Theta_{1})\) and \((B_{2},\Theta_{2})\) be in \(\mathcal{A}^{F}\times C^{\infty}(S^{2}/\Gamma,E(\Gamma))\). Let a symplectic \(2\)-form \(\mathbf{\Omega}\) on \(\mathcal{A}^{F}\times C^{\infty}(S^{2}/\Gamma,E(\Gamma))\) be defined as follows:
\[\mathbf{\Omega}((B_{1},\Theta_{1}),(B_{2},\Theta_{2}))=\int_{S^{2}/\Gamma}B_{1}\wedge B_{2}+\int_{S^{2}/\Gamma}-Im\langle\Theta_{1},\Theta_{2}\rangle\omega_{vol}.\]
**Definition 2.17**.:
1. Let \(\mathcal{G}^{F,\Gamma}_{0}\) denote the based subgroup of \(\mathcal{G}^{F,\Gamma}\), that is \[\mathcal{G}^{F,\Gamma}_{0}=\{\rho\in\mathcal{G}^{F,\Gamma}|\rho(x)=1,\text{ for some fixed base point }x\in S^{2}/\Gamma\}.\] We also get the complexified version \(\mathcal{G}^{F,\Gamma}_{0,\mathbb{C}}\) for the above definition.
2. Let \(\mathcal{G}^{F,\Gamma}_{\tau}\) denote the antipodal-invariant subgroup of \(\mathcal{G}^{F,\Gamma}\) and let \(\Omega^{2}_{\tau}(S^{2}/\Gamma;\mathbf{f}/\mathbf{t})\) denote the antipodal-invariant subgroup of \(\Omega^{2}(S^{2}/\Gamma;\mathbf{f}/\mathbf{t})\), where \(\tau:S^{2}\to S^{2}\) is the antipodal map given by \(x=(a,b,c)\mapsto\tau(x)=(-a,-b,-c)\). We can also think of \(\tau\) as a map from \(\mathbb{C}P^{1}\) to \(\mathbb{C}P^{1}\) with \(\tau:\mathbb{C}P^{1}\to\mathbb{C}P^{1},[z_{1}:z_{2}]\mapsto[-\bar{z}_{2}:\bar{ z}_{1}]\). We remark here that \(\tau\) commutes with the \(\Gamma\)-action and hence descends to a map \(\tau:S^{2}/\Gamma\to S^{2}/\Gamma\).
Below, we define the \(L^{2}\) inner product on various spaces.
**Definition 2.18**.:
1. Let \(\Theta_{1}\), \(\Theta_{2}\) be two sections of \(E(\Gamma)\). We define the \(L^{2}\) inner product of \(\Theta_{1}\) and \(\Theta_{2}\) to be \[\langle\Theta_{1},\Theta_{2}\rangle_{L_{2}}=\int_{S^{2}/\Gamma}\langle\Theta_{ 1},\Theta_{2}\rangle\omega_{vol}=\int_{S^{2}/\Gamma}\operatorname{Tr}(\Theta_{ 1}\Theta_{2}^{*})\omega_{vol},\] where \(\langle\Theta_{1},\Theta_{2}\rangle_{x}=h_{E(V)}(\Theta_{1}(x),\Theta_{2}(x))_ {x}=\operatorname{Tr}(\Theta_{1}(x)\Theta_{2}^{*}(x))_{x}\).
2. We identify \(\Omega^{0}(S^{2}/\Gamma;\mathbf{f}/\mathbf{t})\) and \(\Omega^{2}(S^{2}/\Gamma;\mathbf{f}/\mathbf{t})\) as dual spaces through the following integration: let \(\phi_{1}\in\Omega^{0}(S^{2}/\Gamma;\mathbf{f}/\mathbf{t})\) and \(\phi_{2}\in\Omega^{2}(S^{2}/\Gamma;\mathbf{f}/\mathbf{t})\), then \(\phi_{2}(\phi_{1})=\int_{S^{2}/\Gamma}\langle\phi_{1},\phi_{2}\rangle\), where we think of \(\phi_{2}\) as an element in \(\Omega^{0}(S^{2}/\Gamma;\mathbf{f}/\mathbf{t})\) multiplied by the volume form \(\omega_{vol}\), and the inner product is pointwisely given by the inner product on \(\mathbf{f}/\mathbf{t}\).
## 3. An Overview of the Gauge-Theoretic Construction
In this section, we describe the main gauge-theoretic construction of the ALE spaces while leaving some details of the construction and most proofs to the following sections. We make an important remark that from this point on and throughout the rest of the paper, we take the \(\Gamma\)-representation \(S\) to be the regular representation \(R\) of \(\Gamma\) unless otherwise specified, and carry on with the same notations introduced in the previous sections. In particular, we have \(V=End(R)\).
### Symplectic reduction
Recall that in the previous section, we define the gauge group to be \(\mathcal{G}^{F,\Gamma}=Map(S^{2}/\Gamma,F/T)\) acting on the configuration space \(\mathcal{A}^{F}\times C^{\infty}(S^{2}/\Gamma,E(\Gamma))\) under the following action: for \(\rho\in\mathcal{G}^{F,\Gamma}\), and \((B,\Theta)\in\mathcal{A}^{F}\times C^{\infty}(S^{2}/\Gamma,E(\Gamma))\),
\[\rho\cdot(B,\Theta)=(B+\rho d_{B}\rho^{-1},\rho\Theta\rho^{-1}).\]
**Proposition 3.1**.: _The above gauge group action on \(\mathcal{A}^{F}\times C^{\infty}(S^{2}/\Gamma,E(\Gamma))\) is Hamiltonian and gives rise to the following moment map:_
\[\tilde{\mu}_{1}:\mathcal{A}^{F}\times C^{\infty}(S^{2}/\Gamma,E(\Gamma)) \rightarrow\Omega^{2}(S^{2}/\Gamma;\mathbf{f}/\mathbf{t}),\]
\[(B,\Theta)\mapsto F_{B}-\frac{i}{2}[\Theta,\Theta^{*}]\omega_{vol}.\]
**Remark 3.2**.:
1. Notice that \(B\) alone isn't a connection whereas \(A_{0}+B\) is a connection on \(E(\Gamma)\). Hence, we can write \(F_{A_{0}+B}=F_{A_{0}}+F_{B}\), and \(\bar{\partial}_{A_{0}+B}=\bar{\partial}_{A_{0}}+B^{0,1}\).
2. With the preceding proposition in place, we can write down the following equations: for \(\tilde{\zeta}_{1}\in Z\), where \(Z\) is the center of \((\mathbf{f}/\mathbf{t})^{*}\) thought of as traceless matrices in \(\mathbf{f}/\mathbf{t}\), we consider (3.1) \[\bar{\partial}_{A_{0}+B}\Theta=0\] (3.2) \[F_{B}-\frac{i}{2}[\Theta,\Theta^{*}]\omega_{vol}=\tilde{\zeta}_{1}\cdot\omega_ {vol}\]
The above equations motivate the following definition.
**Definition 3.3**.: For an element \(\tilde{\zeta}_{1}\in Z\), let \(\mathcal{M}(\Gamma,\tilde{\zeta}_{1})\) be the moduli space of solutions to 3.1 and 3.2 that lie in the configuration space \(\mathcal{A}^{F}\times C^{\infty}(S^{2}/\Gamma,E(\Gamma))\) modulo the gauge group action, that is,
\[\mathcal{M}(\Gamma,\tilde{\zeta}_{1})=\{(A_{0}+B,\Theta)\in\mathcal{A}^{F}\times C^{\infty}(S^{2}/\Gamma,E(\Gamma))\mid(3.1)-(3.2)\}/\mathcal{G}^{F,\Gamma}.\]
**Proposition 3.4**.: _For choices of \(\tilde{\zeta}_{1}\) such that \(\mathcal{G}^{F,\Gamma}\) acts freely on the space of solutions to 3.1 and 3.2 in \(\mathcal{A}^{F}\times C^{\infty}(S^{2}/\Gamma,E(\Gamma))\), \(\mathcal{M}(\Gamma,\tilde{\zeta}_{1})\) can be identified with \(\mu_{1}^{-1}(\tilde{\zeta}_{1})/F\) in [17]._
**Remark 3.5**.: We will discuss the conditions assumed in the above proposition in detail in the following sections and we will prove the proposition in Section 7.
### Further reduction
Everything regarding the hyperkahler structure on \(C^{\infty}(S^{2}/\Gamma,E(\Gamma))\) appearing in this subsection will be discussed in detail in Section 4. Here we give a brief overview. It turns out that \(C^{\infty}(S^{2}/\Gamma,E(\Gamma))\) can be given a hyperkahler structure.
Before we write down the kahler forms, we first introduce some notations. For a section \(\Theta\) of \(C^{\infty}(S^{2}/\Gamma,E(\Gamma))\), we identify \(\Theta\) with an \(S^{1}\)- and \(\Gamma\)-equivariant map \(\lambda:S^{3}\to End(R)\), and hence we can express \(\Theta\) as \(\Theta:x\mapsto[p,\lambda(p)]\), for \(x\in S^{2}\) and \(p\in\pi^{-1}(x)\subset S^{3}\).
There is a complex structure \(J\), in addition to the standard complex structure \(I\), on the space of sections \(C^{\infty}(S^{2}/\Gamma,E(\Gamma))\), which we can express as follows. Given \(\Theta:x\mapsto[p,\lambda(p)]\), the action of \(J\) on \(\Theta\) is given by \(J\Theta:x\mapsto[p,-\lambda(J(p))^{*}]\), where \(p\in S^{3}\) and \(J\) on \(S^{3}\) is just the usual quaternion action.
**Proposition 3.6**.: _There are three symplectic forms on \(C^{\infty}(S^{2}/\Gamma,E(\Gamma))\) compatible with complex structures \(I\), \(J\), \(K\), respectively:_
\[\omega_{1}(\Theta_{1},\Theta_{2})=\int_{S^{2}/\Gamma}-Im\langle\Theta_{1}, \Theta_{2}\rangle\omega_{vol},\]
\[\omega_{2}(\Theta_{1},\Theta_{2})=\int_{S^{2}/\Gamma}Re\langle J\Theta_{1}, \Theta_{2}\rangle\omega_{vol},\]
\[\omega_{3}(\Theta_{1},\Theta_{2})=\int_{S^{2}/\Gamma}-Im\langle J\Theta_{1}, \Theta_{2}\rangle\omega_{vol},\]
_and a hyperkahler metric \(g_{h}\) such that_
\[g_{h}(\Theta_{1},\Theta_{2})=\int_{S^{2}/\Gamma}Re\langle\Theta_{1},\Theta_{2} \rangle\omega_{vol},\]
_together giving rise to a hyperkahler structure on \(C^{\infty}(S^{2}/\Gamma,E(\Gamma))\)._
We will prove the above proposition in Section 4. It turns out that the action of the \(\tau\)-invariant gauge group \(\mathcal{G}_{\tau}^{F,\Gamma}\) on the space of sections of \(E(\Gamma)\) with respect to each one of the three symplectic forms is again Hamiltonian. Hence, we can write down the following additional moment maps and operate a further reduction on the configuration space:
* \(\tilde{\mu}_{2}:C^{\infty}(S^{2}/\Gamma,E(\Gamma))\to\Omega^{2}(S^{2}/\Gamma; \mathbf{f}/\mathbf{t}),\Theta\mapsto-\frac{1}{4}([J\Theta,\Theta^{*}]-[\Theta,J\Theta^{*}])\omega_{vol}\),
* \(\tilde{\mu}_{3}:C^{\infty}(S^{2}/\Gamma,E(\Gamma))\to\Omega^{2}(S^{2}/\Gamma; \mathbf{f}/\mathbf{t}),\Theta\mapsto-\frac{i}{4}([J\Theta,\Theta^{*}]+[\Theta,J\Theta^{*}])\omega_{vol}\).
We get two additional moment map equations: let \(\tilde{\zeta}_{2},\tilde{\zeta}_{3}\in Z\), consider
\[-\frac{1}{4}([J\Theta,\Theta^{*}]-[\Theta,J\Theta^{*}])\omega_{vol}=\tilde{ \zeta}_{2}\cdot\omega_{vol}, \tag{3.3}\]
\[-\frac{i}{4}([J\Theta,\Theta^{*}]+[\Theta,J\Theta^{*}])\omega_{vol}=\tilde{ \zeta}_{3}\cdot\omega_{vol}. \tag{3.4}\]
**Definition 3.7**.: Let \(\mathcal{A}_{\tau}^{F}\subset\mathcal{A}^{F}\) be the subspace of connections in \(\mathcal{A}^{F}\) on \(E(\Gamma)\) given by
\[\mathcal{A}_{\tau}^{F}=\{A_{0}+\kappa^{*}\partial\kappa^{*-1}+\kappa^{-1}\bar{ \partial}\kappa\mid\kappa\in\mathcal{G}_{\tau,\mathbb{C}}^{F,\Gamma}\},\]
where \(A_{0}\) is again the base Chern connection on \(E(\Gamma)\), and \(\mathcal{G}_{\tau,\mathbb{C}}^{F,\Gamma}\) is the complexification of the \(\tau\)-invariant subgroup \(\mathcal{G}_{\tau}^{F,\Gamma}\).
**Theorem 3.8**.: _Let \(\tilde{\zeta}=(\tilde{\zeta}_{1},\tilde{\zeta}_{2},\tilde{\zeta}_{3})\), where for all \(i\), \(\tilde{\zeta}_{i}\in Z\). Let_
\[\mathcal{X}_{\tilde{\zeta}}=\{(B,\Theta)\in\mathcal{A}_{\tau}^{F}\times C^{ \infty}(S^{2}/\Gamma,E(\Gamma))|(3.1)-(3.4)\}/\mathcal{G}_{\tau}^{F,\Gamma}.\]
_Then for a suitable choice of \(\tilde{\zeta}\), \(\mathcal{X}_{\tilde{\zeta}}\) is diffeomorphic to the resolution of singularity \(\widetilde{\mathbb{C}^{2}/\Gamma}\). Furthermore, for \(\zeta=\tilde{\zeta}^{*}=-\tilde{\zeta}\), there exists a map \(\Phi\) taking \(X_{\zeta}\) in [17] to \(\mathcal{X}_{\tilde{\zeta}}\) and a natural choice of metric on \(\mathcal{X}_{\tilde{\zeta}}\) such that \(\Phi\) is an isometry._
**Remark 3.9**.:
1. We will make the statement of "a suitable choice of \(\tilde{\zeta}\)" precise in Section 7, where we also prove the theorem.
2. We remark that equations 3.1 and 3.2 resemble Hitchin's equations, but they are not the same. First of all, the bundle on which these equations are written is different from that of Hitchin; furthermore, our pair consists of a connection and a section, whereas Hitchin considers a connection and a \((1,0)\)-form.
### Proof of Proposition 3.1
Here in this subsection, we give the proof of Proposition 3.1, which involves simply standard calculations.
Proof of Proposition 3.1.: We will show that \(F_{B}-\frac{i}{2}[\Theta,\Theta^{*}]\omega_{vol}\) is a moment map on \(\Omega^{1}(S^{2}/\Gamma;\mathbf{f}/\mathbf{t})\times C^{\infty}(S^{2}/\Gamma,E(\Gamma))\) induced by the action of \(\mathcal{G}^{F,\Gamma}\). We need to check the two properties of a moment map.
Let \(Y:S^{2}/\Gamma\to\mathbf{f}/\mathbf{t}\) be in \(\mathbf{g}^{F,\Gamma}\), and let \(Y^{\sharp}\) be the vector field on \(\Omega^{1}(S^{2}/\Gamma;\mathbf{f}/\mathbf{t})\times C^{\infty}(S^{2}/\Gamma,E(\Gamma))\) generated by \(Y\). Then \(Y^{\sharp}(B,\Theta)\) is given by
\[\frac{d}{dt}|_{t=0}(B+\exp(tY)d_{B}\exp(-tY),\exp(tY)\Theta\exp(-tY))=(-d_{B}Y,[Y,\Theta]).\]
Hence, we have
\[\iota_{Y^{\sharp}}\omega_{(B,\Theta)}(B^{\prime},\Theta^{\prime})=\]
\[\int_{S^{2}/\Gamma}\operatorname{Tr}(-d_{B}Y\wedge B^{\prime})-\int_{S^{2}/ \Gamma}Im\langle[Y,\Theta],\Theta^{\prime}\rangle\omega_{vol}=\]
\[\int_{S^{2}/\Gamma}\operatorname{Tr}([Y,B]\wedge B^{\prime}-dY\wedge B^{ \prime})-\int_{S^{2}/\Gamma}Im\langle[Y,\Theta],\Theta^{\prime}\rangle\omega _{vol}.\]
Meanwhile, let \((B_{t},\Theta_{t})_{t\in[0,1]}\) be a path in \(\Omega^{1}(S^{2}/\Gamma;\mathbf{f}/\mathbf{t})\times C^{\infty}(S^{2}/\Gamma,E(\Gamma))\) such that \((B_{0},\Theta_{0})=(B,\Theta)\) and \(\frac{d}{dt}|_{t=0}(B_{t},\Theta_{t})=(B^{\prime},\Theta^{\prime})\). Then we also have
\[d\tilde{\mu}_{1(B,\Theta)}^{Y}(B^{\prime},\Theta^{\prime})=\]
\[\frac{d}{dt}|_{t=0}\int_{S^{2}/\Gamma}\operatorname{Tr}(Y\wedge F_{B_{t}})- \frac{d}{dt}|_{t=0}\int_{S^{2}/\Gamma}\langle Y,\frac{i}{2}[\Theta_{t},\Theta_{ t}^{*}]\rangle\omega_{vol}=\]
\[\frac{d}{dt}|_{t=0}\int_{S^{2}/\Gamma}\operatorname{Tr}(Y\wedge F_{B_{t}})-\frac{d} {dt}|_{t=0}\int_{S^{2}/\Gamma}-\frac{i}{2}\langle Y,[\Theta_{t},\Theta^{*}_{t}] \rangle\omega_{vol}=\]
\[\int_{S^{2}/\Gamma}\operatorname{Tr}(Y\wedge(dB^{\prime}+B^{\prime}\wedge B+B \wedge B^{\prime}))-\int_{S^{2}/\Gamma}-\frac{i}{2}\langle Y,[\Theta^{\prime},\Theta^{*}]+[\Theta,\Theta^{\prime*}]\rangle\omega_{vol}.\]
Hence, we have \(\iota_{Y^{\sharp}}\omega_{(B,\Theta)}(B^{\prime},\Theta^{\prime})=d\tilde{\mu}^{Y}_{1(B,\Theta)}(B^{\prime},\Theta^{\prime})\).
We also need to check the equivariance condition, that is, \(\tilde{\mu}_{1}\circ\psi_{\rho}=Ad^{*}_{\rho}\circ\tilde{\mu}_{1}\). Let \(\rho\) be an element in the unitary gauge group \(\mathcal{G}^{F,\Gamma}\), and let
\[\psi_{\rho}:\Omega^{1}(S^{2}/\Gamma;\mathbf{f}/\mathbf{t})\times C^{\infty}( S^{2}/\Gamma,E(\Gamma))\rightarrow\Omega^{1}(S^{2}/\Gamma;\mathbf{f}/ \mathbf{t})\times C^{\infty}(S^{2}/\Gamma,E(\Gamma))\]
be the diffeomorphism on the configuration space induced by \(\rho\). We have
\[\tilde{\mu}_{1}\circ\psi_{\rho}(B,\Theta)=F(B+\rho d_{B}\rho^{-1})-\frac{i}{2} [\rho\Theta\rho^{-1},(\rho\Theta\rho^{-1})^{*}]\omega_{vol}.\]
Meanwhile,
\[Ad^{*}_{\rho}\circ\tilde{\mu}_{1}(B,\Theta)=\rho F_{B}\rho^{-1}-\frac{i}{2} \rho[\Theta,\Theta^{*}]\rho^{-1}\omega_{vol}.\]
Since the gauge action on curvature is conjugation and \(\rho^{-1}=\rho^{*}\), we have the desired equality
\[\tilde{\mu}_{1}\circ\psi_{\rho}(B,\Theta)=Ad^{*}_{\rho}\circ\tilde{\mu}_{1}(B,\Theta).\]
## 4. Hyperkahler structure on \(C^{\infty}(S^{2}/\Gamma,E(\Gamma))\)
### The quaternions
Let \(SU(2)\) denote the 2-dimensional special unitary group. Explicitly, \(SU(2)=\{\gamma\in Gl_{2}(\mathbb{C})|\gamma=\begin{pmatrix}u&v\\ -v^{*}&u^{*}\end{pmatrix},|u|^{2}+|v|^{2}=1\}\). Note that \(u^{*}\) refers to complex conjugation.
Below, we set up another convention, namely, we endow \(\mathbb{C}^{2}\) with a right \(\mathbb{H}\)-module structure. Write \((z_{1},z_{2})\) for a point in \(\mathbb{C}^{2}\), where \(I,J,K\) act on \(\mathbb{C}^{2}\) as follows: \(I(z_{1},z_{2})=(iz_{1},iz_{2})\), \(J(z_{1},z_{2})=(-z_{2}^{*},z_{1}^{*})\), \(K(z_{1},z_{2})=(-iz_{2}^{*},iz_{1}^{*})\).
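For concreteness, a direct computation confirms the quaternion relations with these conventions:

\[I(J(z_{1},z_{2}))=I(-z_{2}^{*},z_{1}^{*})=(-iz_{2}^{*},iz_{1}^{*})=K(z_{1},z_{2}),\]

\[J(J(z_{1},z_{2}))=J(-z_{2}^{*},z_{1}^{*})=(-z_{1},-z_{2}),\qquad J(I(z_{1},z_{2}))=J(iz_{1},iz_{2})=(iz_{2}^{*},-iz_{1}^{*})=-K(z_{1},z_{2}),\]

so that \(K=IJ=-JI\) and \(I^{2}=J^{2}=K^{2}=-1\).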
Note we also have \(SU(2)\) acting on the right on \(\mathbb{C}^{2}\): Let \(\gamma\in SU(2)\), \(\gamma=\begin{pmatrix}u&v\\ -v^{*}&u^{*}\end{pmatrix}\), then
\[J((z_{1},z_{2})\gamma)=J(uz_{1}-v^{*}z_{2},vz_{1}+u^{*}z_{2})=(-v^{*}z_{1}^{*}- uz_{2}^{*},u^{*}z_{1}^{*}-vz_{2}^{*}),\]
and
\[(J(z_{1},z_{2}))\gamma=(-z_{2}^{*},z_{1}^{*})\gamma=(-v^{*}z_{1}^{*}-uz_{2}^{* },u^{*}z_{1}^{*}-vz_{2}^{*}),\]
so the \(SU(2)\)-action commutes with the \(J\)-action. Since \(I\) acts by the complex scalar \(i\) and the right \(SU(2)\)-action is complex linear, the \(SU(2)\)-action also commutes with \(I\), and hence with \(K=IJ\). Therefore, the \(SU(2)\)-action on \(\mathbb{C}^{2}\) commutes with all the \(I\)-, \(J\)-, \(K\)-actions.
If we restrict the actions to \(S^{3}\) thought of as the unit quaternions, then we make the following observations:
**Lemma 4.1**.: _The \(S^{1}\)-action on \(S^{3}\) coming from the dual Hopf fibration commutes with \(I\), and for \(p\in S^{3}\), \(g\in S^{1}\), \(J(pg)=J(p)g^{*}\), \(K(pg)=K(p)g^{*}\)._
**Lemma 4.2**.: _Consider \(S^{3}\) as the principal \(S^{1}\)-bundle via the dual Hopf fibration. Let \(\pi:S^{3}\to\mathbb{C}P^{1}\) be the projection map where \(\pi(z_{1},z_{2})=[z_{1}:z_{2}]\). Then \(I\) acts as the identity and \(J\), \(K\) act as the natural involution on the base \(\mathbb{C}P^{1}\) given by \(\tau:\mathbb{C}P^{1}\to\mathbb{C}P^{1}\), \([z_{1}:z_{2}]\mapsto[-z_{2}^{*}:z_{1}^{*}]\)._
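Both lemmas follow from short computations. For instance, if we write the circle action of the dual Hopf fibration as scalar multiplication \((z_{1},z_{2})\cdot g=(z_{1}g,z_{2}g)\) for \(g\in S^{1}\subset\mathbb{C}\) (this is the convention we assume, and it is the one compatible with the statement of Lemma 4.1), then

\[J((z_{1},z_{2})g)=(-(z_{2}g)^{*},(z_{1}g)^{*})=(-z_{2}^{*}g^{*},z_{1}^{*}g^{*})=J(z_{1},z_{2})g^{*},\]

while \(I((z_{1},z_{2})g)=(iz_{1}g,iz_{2}g)=I(z_{1},z_{2})g\), and on the base \(\pi(J(z_{1},z_{2}))=[-z_{2}^{*}:z_{1}^{*}]=\tau([z_{1}:z_{2}])\) and \(\pi(I(z_{1},z_{2}))=[iz_{1}:iz_{2}]=[z_{1}:z_{2}]\).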
### Quaternionic structures on associated bundles and spaces of sections
We have previously introduced the bundle \(E(V)\) and \(E(\Gamma)\). In this subsection, we introduce quaternionic structures on these bundles and their spaces of sections.
We begin with \(E(V)\). Notice that we can define the quaternion actions on \(E(V)\) in the following way:
\[I[p,v]=[-I(p),v]=[p,iv],\]
\[J[p,v]=[J(p),v^{*}],\]
\[K[p,v]=[-K(p),v^{*}],\]
with \([p,v]\in E(V)\).
It's straightforward to check that the \(I\)-, \(J\)-, \(K\)-actions defined above satisfy the properties for quaternionic actions. Hence, we have equipped \(E(V)\) with a quaternionic structure.
Now, we move on to \(E(\Gamma)\). In the previous subsection, we have shown that the \(\Gamma\)-action and the \(J\)-action commute on \(\mathbb{C}^{2}\). Observe that the \(\Gamma\)-action commutes with the quaternion actions on the level of \(E(V)\) as well; more precisely, we have that
\[J(\gamma[p,v])=J[p\gamma,\gamma^{-1}v]=J[p\gamma,R(\gamma^{-1})vR(\gamma)]\]
\[=[J(p\gamma),(R(\gamma^{-1})vR(\gamma))^{*}]=[J(p)\gamma,R(\gamma)^{*}v^{*}R( \gamma^{*})^{*}]\]
\[=[J(p)\gamma,R(\gamma^{*})v^{*}R(\gamma)]=[J(p)\gamma,R(\gamma^{-1})v^{*}R( \gamma)]=\gamma(J[p,v]),\]
given that \(\gamma\in SU(2)\) and \(R:\Gamma\to U(R)\subset End(R)\) is the regular representation.
Hence, the quaternion actions descend to \(E(\Gamma)\). We remark that the \(J\)- and \(K\)-actions on \(E(V)\) and \(E(\Gamma)\) act on the base by \(\tau\) which we have introduced previously.
**Proposition 4.3**.: _The map \(I:E(V)\to E(V)\) is an isometry with respect to the hermitian metric on \(E(V)\), and \(J,K:E(V)\to E(V)\) are skew-isometries in the sense that \(\langle J[p,v_{1}],J[p,v_{2}]\rangle=\overline{\langle v_{1},v_{2}\rangle}\), \(\langle K[p,v_{1}],K[p,v_{2}]\rangle=\overline{\langle v_{1},v_{2}\rangle}\), for \([p,v_{1}],[p,v_{2}]\in E(V)_{x}\), for all \(x\in S^{2}\)._
From here, by pullbacks, we can make the spaces of sections \(C^{\infty}(E(V))\) and \(C^{\infty}(E(\Gamma))\) into right \(\mathbb{H}\)-modules. We will focus on \(C^{\infty}(E(\Gamma))\) here but the statements for \(C^{\infty}(E(V))\) are exactly the same.
**Proposition 4.4**.: _The space of sections \(C^{\infty}(E(\Gamma))\) of \(E(\Gamma)\) is an infinite-dimensional right \(\mathbb{H}\)-module with the following quaternionic actions: for \(\Theta\) a section of \(E(\Gamma)\), we identify \(\Theta\) with a map \(\lambda:S^{3}\to End(R)\) equivariant with respect to the \(S^{1}\)- and \(\Gamma\)-action, and we define that for \(\Theta:x\mapsto[p,\lambda(p)]\),_
\[I\Theta:x\mapsto[p,i\lambda(p)],\]
\[J\Theta:x\mapsto[p,-\lambda(J(p))^{*}],\] \[K\Theta:x\mapsto[p,\lambda(K(p))^{*}],\]
_where \(J(p)\) and \(K(p)\) are the usual \(J\)-, \(K\)- actions on \(S^{3}\)._
We leave out the proofs for the above propositions as they amount to checking the properties of the quaternionic actions. Also, Proposition 4.4 holds for the space of sections \(C^{\infty}(E(V))\) of \(E(V)\) with the obvious modifications.
### Hyperkahler structure on the space of sections \(C^{\infty}(E(\Gamma))\)
With the previous observations involving the quaternion actions, we are now ready to introduce the hyperkahler structure on the space of sections \(C^{\infty}(E(\Gamma))\) that will be relevant to the construction. We remark that the same analysis below will give rise to hyperkahler structures of \(C^{\infty}(E(V))\) as well; in fact, we can even replace the regular representation \(R\) with any complex \(\Gamma\)-representation \(S\) and obtain a hyperkahler structure on \(C^{\infty}(E(End(S)))\), as we use no specific properties of the regular representation \(R\) for defining the hyperkahler structure.
Recall that in the previous subsection, we have that for \(\Theta:x\mapsto[p,\lambda(p)]\), the action of \(J\) on \(\Theta\) is such that
\[J\Theta:x\mapsto[p,-\lambda(J(p))^{*}],\]
where \(J(p)\) is the usual \(J\)-action on \(S^{3}\).
We now give the proof of Proposition 3.6.
Proof of Proposition 3.6.: We focus on \(\omega_{3}\). First we make the observation that
\[\omega_{3}(\Theta_{1},\Theta_{2}) =\int_{S^{2}/\Gamma}-Im\langle J\Theta_{1},\Theta_{2}\rangle \omega_{vol}=\int_{S^{2}/\Gamma}-Im\langle-J\Theta_{2},\Theta_{1}\rangle\omega _{vol}\] \[=\int_{S^{2}/\Gamma}-Im\langle J\Theta_{2},\Theta_{1}\rangle\tau^ {*}\omega_{vol}=-\omega_{3}(\Theta_{2},\Theta_{1}).\]
Indeed, for \(\Theta_{1}:x\mapsto[p,\lambda_{1}(p)]\) and \(\Theta_{2}:x\mapsto[p,\lambda_{2}(p)]\), we have
\[\langle J\Theta_{1},\Theta_{2}\rangle_{x}=\operatorname{Tr}(-\lambda_{1}(J(p ))^{*}\lambda_{2}(p)^{*})\]
and
\[\langle-J\Theta_{2},\Theta_{1}\rangle_{x}=\operatorname{Tr}(\lambda_{2}(J(p) )^{*}\lambda_{1}(p)^{*})=\operatorname{Tr}(\lambda_{1}(p)^{*}\lambda_{2}(J(p) )^{*}).\]
Since \(J\) acts on \(S^{2}/\Gamma\) by \(\tau\) which has the property that \(\tau^{*}\omega_{vol}=-\omega_{vol}\), we have the desired equality after integration. This gives \(\omega_{3}\) the skew-symmetric property of a symplectic form. The same can be shown for \(\omega_{2}\). The properties of \(\omega_{2}\) and \(\omega_{3}\) being closed and non-degenerate are obvious. We hence can also write down the compatible hyperkahler metric \(g_{h}\) on \(C^{\infty}(E(\Gamma))\):
\[g_{h}(\Theta_{1},\Theta_{2})=\int_{S^{2}/\Gamma}Re\langle\Theta_{1},\Theta_{2 }\rangle\omega_{vol},\]
and it is evident that \(g_{h}\) is compatible with the complex structures and the symplectic forms.
Next, we want to justify the two additional moment map equations, 3.3 and 3.4. To start with, we make the observation that for \(\Theta:x\mapsto[p,\lambda(p)]\) and \(Y:S^{2}/\Gamma\to\mathbf{f}/\mathbf{t}\) an element in \(\mathbf{g}^{F,\Gamma}\), we have
\[Y\Theta-\Theta Y:x\mapsto[p,Y(x)\lambda(p)-\lambda(p)Y(x)]\]
and
\[J(Y\Theta-\Theta Y):x\mapsto[p,-\lambda(J(p))^{*}Y(\tau(x))^{*}+Y(\tau(x))^{*} \lambda(J(p))^{*}].\]
Thus, we can think of \(J(Y\Theta-\Theta Y)=[J\Theta,(\tau^{*}Y)^{*}]\), where \(\tau\) denotes the involution we have introduced previously. Meanwhile, for \(YJ\Theta-J\Theta Y\), we have
\[YJ\Theta-J\Theta Y:x\mapsto[p,-Y(x)\lambda(J(p))^{*}+\lambda(J(p))^{*}Y(x)].\]
Hence, for \(Y:S^{2}/\Gamma\to\mathbf{f}/\mathbf{t}\) invariant under \(\tau\), that is, \(Y(x)=Y(\tau(x)),\forall x\in S^{2}/\Gamma\), we have
\[J(Y\Theta-\Theta Y)=[J\Theta,(\tau^{*}Y)^{*}]=[J\Theta,-Y]=[Y,J\Theta]=YJ\Theta -J\Theta Y. \tag{4.1}\]
**Proposition 4.5**.: _The action of the \(\tau\)-invariant subgroup \(\mathcal{G}_{\tau}^{F,\Gamma}\) of \(\mathcal{G}^{F,\Gamma}\) on \(C^{\infty}(E(\Gamma))\) is Hamiltonian with respect to the symplectic forms \(\omega_{2}\) and \(\omega_{3}\) and gives rise to the following moment maps:_
\[\tilde{\mu}_{2}:C^{\infty}(S^{2}/\Gamma,E(\Gamma))\to\Omega^{2}(S^{2}/\Gamma; \mathbf{f}/\mathbf{t}),\]
\[\Theta\mapsto-\frac{1}{4}([J\Theta,\Theta^{*}]-[\Theta,J\Theta^{*}])\omega_{ vol},\]
_and_
\[\tilde{\mu}_{3}:C^{\infty}(S^{2}/\Gamma,E(\Gamma))\to\Omega^{2}(S^{2}/\Gamma; \mathbf{f}/\mathbf{t}),\]
\[\Theta\mapsto-\frac{i}{4}([J\Theta,\Theta^{*}]+[\Theta,J\Theta^{*}])\omega_{ vol}.\]
Proof.: Again, we first focus on \(\omega_{3}\). Similar to the proof of Proposition 3.1, we let \(Y:S^{2}/\Gamma\to\mathbf{f}/\mathbf{t}\) be a \(\tau\)-invariant element in \(\mathbf{g}^{F,\Gamma}\) and let \(Y^{\sharp}\) denote the vector field on \(C^{\infty}(E(\Gamma))\) induced by \(Y\).
Now, let's compute \(\iota_{Y^{\sharp}}\omega_{3\Theta}(\Theta^{\prime})\). We have
\[\iota_{Y^{\sharp}}\omega_{3\Theta}(\Theta^{\prime})=\int_{S^{2}/\Gamma}-Im \langle J[Y,\Theta],\Theta^{\prime}\rangle\omega_{vol}=\int_{S^{2}/\Gamma}-Im \langle J(Y\Theta-\Theta Y),\Theta^{\prime}\rangle\omega_{vol}.\]
Hence, by 4.1, we have
\[\int_{S^{2}/\Gamma}-Im\langle J(Y\Theta-\Theta Y),\Theta^{\prime}\rangle \omega_{vol}=\]
\[\int_{S^{2}/\Gamma}-Im\langle[J\Theta,Y^{*}],\Theta^{\prime}\rangle\omega_{ vol}=\int_{S^{2}/\Gamma}Im\mathrm{Tr}([Y^{*},J\Theta]\Theta^{\prime*})\omega_{vol}\]
\[=\int_{S^{2}/\Gamma}\frac{i}{2}\mathrm{Tr}([\Theta^{\prime},J\Theta^{*}]Y^{*}+[ J\Theta,\Theta^{\prime*}]Y^{*})\omega_{vol}\]
\[=\int_{S^{2}/\Gamma}\frac{i}{2}(\langle[\Theta^{\prime},J\Theta^{*}],Y\rangle+ \langle[J\Theta,\Theta^{\prime*}],Y\rangle)\omega_{vol}.\]
Meanwhile, by the skew-symmetric property of \(\omega_{3}\), we also have
\[\int_{S^{2}/\Gamma}-Im\langle J[Y,\Theta],\Theta^{\prime}\rangle\omega_{vol}=\int_ {S^{2}/\Gamma}-Im\langle-J\Theta^{\prime},[Y,\Theta]\rangle\omega_{vol}\]
\[=\int_{S^{2}/\Gamma}Im\mathrm{Tr}([Y,\Theta^{*}]J\Theta^{\prime})\omega_{vol}= \int_{S^{2}/\Gamma}\frac{i}{2}\mathrm{Tr}([J\Theta^{\prime*},\Theta]Y+[\Theta ^{*},J\Theta^{\prime}]Y)\omega_{vol}\]
\[=\int_{S^{2}/\Gamma}\frac{i}{2}(\langle[\Theta,J\Theta^{\prime*}],Y\rangle+ \langle[J\Theta^{\prime},\Theta^{*}],Y\rangle)\omega_{vol}.\]
Now, we obtain the following:
\[2\iota_{Y^{\sharp}}\omega_{3\Theta}(\Theta^{\prime})=\]
\[\int_{S^{2}/\Gamma}\frac{i}{2}\langle[\Theta^{\prime},J\Theta^{*}]+[\Theta,J \Theta^{\prime*}],Y\rangle\omega_{vol}+\int_{S^{2}/\Gamma}\frac{i}{2}\langle[J \Theta,\Theta^{\prime*}]+[J\Theta^{\prime},\Theta^{*}],Y\rangle\omega_{vol}.\]
On the other hand, let \(\Theta_{t}\) with \(t\in[0,1]\) be a path in \(C^{\infty}(S^{2}/\Gamma,E(\Gamma))\) such that \(\Theta_{0}=\Theta\) and \(\frac{d}{dt}|_{t=0}\Theta_{t}=\Theta^{\prime}\). Then we have
\[d\tilde{\mu}^{Y}_{3\Theta}(\Theta^{\prime})=\frac{d}{dt}|_{t=0}\int_{S^{2}/ \Gamma}-\langle Y,\frac{i}{4}([J\Theta_{t},\Theta_{t}^{*}]+[\Theta_{t},J\Theta _{t}^{*}])\rangle\omega_{vol}\]
\[=\int_{S^{2}/\Gamma}-\langle Y,\frac{i}{4}([J\Theta^{\prime},\Theta^{*}]+[J \Theta,\Theta^{\prime*}])\rangle\omega_{vol}+\int_{S^{2}/\Gamma}-\langle Y, \frac{i}{4}([\Theta^{\prime},J\Theta^{*}]+[\Theta,J\Theta^{\prime*}])\rangle \omega_{vol}.\]
The above computations verify that
\[\tilde{\mu}_{3}(\Theta)=-\frac{i}{4}([J\Theta,\Theta^{*}]+[\Theta,J\Theta^{*} ])\omega_{vol}.\]
By very similar computations, we also get that for
\[\omega_{2}(\Theta_{1},\Theta_{2})=\int_{S^{2}/\Gamma}Re\langle J\Theta_{1}, \Theta_{2}\rangle\omega_{vol},\]
we have
\[\tilde{\mu}_{2}(\Theta)=-\frac{1}{4}([J\Theta,\Theta^{*}]-[\Theta,J\Theta^{*} ])\omega_{vol}.\]
We leave out the proof for the equivariance condition as it is essentially the same as that of Proposition 3.1.
**Remark 4.6**.:
1. Note that here we need to restrict the gauge group action to the \(\tau\)-invariant subgroup \(\mathcal{G}_{\tau}^{F,\Gamma}\), which is different from the previous setup.
2. Observe that \(\frac{i}{4}([J\Theta,\Theta^{*}]+[\Theta,J\Theta^{*}])\) and \(\frac{1}{4}([J\Theta,\Theta^{*}]-[\Theta,J\Theta^{*}])\) are both \(\tau\)-invariant and hence the new moment maps map into the correct space.
**Lemma 4.7**.: _If \(\Theta\) is holomorphic with respect to a fixed holomorphic structure on \(E(\Gamma)\) and is identified with a pair of matrices \((\alpha,\beta)\), then \(J\Theta=J(\alpha,\beta)=(-\beta^{*},\alpha^{*})\)._
Proof.: As before, we express \(\Theta\) as \(\Theta:x\mapsto[p,\lambda(p)]\), where \(\lambda:S^{3}\to End(R)\) is \(S^{1}\)- and \(\Gamma\)-equivariant. Since \(\Theta\) is holomorphic, \(\lambda\) can be extended to a complex linear map \(\lambda:\mathbb{C}^{2}\to End(R)\). Hence, \(\lambda\) can be thought of as a pair of matrices \((\alpha,\beta)\) such that \(\lambda(z_{1},z_{2})=z_{1}\alpha+z_{2}\beta\), for \((z_{1},z_{2})\in\mathbb{C}^{2}\).
On the other hand, we have \(J\Theta:x\mapsto[p,-\lambda(J(p))^{*}]\). This gives us
\[-\lambda(J(z_{1},z_{2}))^{*}=-\lambda(-z_{2}^{*},z_{1}^{*})^{*}=-(-z_{2}^{*} \alpha+z_{1}^{*}\beta)^{*}=-z_{1}\beta^{*}+z_{2}\alpha^{*}.\]
This precisely says that \(J\Theta\) reduces to \((-\beta^{*},\alpha^{*})\).
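As a quick consistency check of this identification against the quaternionic structure on \(C^{\infty}(E(\Gamma))\), applying \(J\) twice in the matrix picture gives

\[J^{2}(\alpha,\beta)=J(-\beta^{*},\alpha^{*})=(-(\alpha^{*})^{*},(-\beta^{*})^{*})=(-\alpha,-\beta)=-(\alpha,\beta),\]

as expected from \(J^{2}=-1\).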
**Remark 4.8**.:
1. Provided with the previous lemma, we observe that for \(\tilde{\mu}_{3}\), we have \[\frac{i}{4}([J\Theta,\Theta^{*}]+[\Theta,J\Theta^{*}])\] \[=\frac{i}{4}([-\beta^{*},\alpha^{*}]+[\alpha^{*},\beta^{*}]+[\alpha,-\beta]+[\beta,\alpha])\] \[=\frac{i}{4}(2[\alpha^{*},\beta^{*}]-2[\alpha,\beta])=\frac{i}{2}([\alpha^{*},\beta^{*}]-[\alpha,\beta]),\] but this is precisely the third moment map \(\mu_{3}\) in Kronheimer's setup [17]; similar calculations show that \(\tilde{\mu}_{2}\) also reduces to \(\mu_{2}\) in Kronheimer's setup [17]. This observation will become a key element in the proof of Theorem 3.8.
2. We remark that the same analysis presented in this section gives rise to hyperkahler structures on \(C^{\infty}(E(End(S)))\) and \(C^{\infty}(E(End(S))_{r}^{\Gamma})\) if we replace the regular representation \(R\) with any \(\Gamma\)-representation \(r\) on \(S\) equipped with an appropriately chosen hermitian structure, as we use no specific properties of the regular representation \(R\) for defining the hyperkahler structure.
## 5. Uniqueness theorems
In this section, we analyze both the unitary gauge group action and the complex gauge group action on the configuration space \(\mathcal{A}^{F}\times C^{\infty}(E(\Gamma))\). In particular, we prove two uniqueness theorems: the first one states that any two solutions to 3.1 and 3.2 lying in \(\mathcal{A}^{F}\times C^{\infty}(E(\Gamma))\) that are \(\mathcal{G}^{F,\Gamma}_{\mathbb{C}}\)-equivalent are also \(\mathcal{G}^{F,\Gamma}\)-equivalent, which is a standard occurrence in gauge theory. The second uniqueness theorem can be thought of as a corollary of the first one; it states that any two solutions to 3.1 - 3.4 lying in \(\mathcal{A}^{F}_{\tau}\times C^{\infty}(E(\Gamma))\) that are \(\mathcal{G}^{F,\Gamma}_{\tau,\mathbb{C}}\)-equivalent must also be \(\mathcal{G}^{F,\Gamma}_{\tau}\)-equivalent.
**Lemma 5.1**.: _Up to automorphisms of \(E(\Gamma)\), the space \(\mathcal{A}^{F}\) defines a single holomorphic structure on \(E(\Gamma)\), identifying \(E(\Gamma)\) with the direct sum of hyperplane bundles holomorphically._
Proof.: By construction, \(A_{0}\) is taken to be the Chern connection giving rise to the holomorphic structure on \(E(\Gamma)\) such that \(E(\Gamma)\) splits holomorphically as a direct sum of hyperplane bundles. As \(\mathcal{A}^{F}\) is simply defined to
be the complex orbit containing \(A_{0}\), we must have that \(\mathcal{A}^{F}\) defines a single holomorphic structure identifying \(E(\Gamma)\) with the direct sum of hyperplane bundles holomorphically, as stated in the lemma.
**Lemma 5.2**.: _The based complex gauge group acts freely on \(\mathcal{A}^{F}\), and the stabilizer of \(B\) in the complex gauge group is isomorphic to the constant subgroup._
**Remark 5.3**.: The two preceding lemmas can both be formulated where we replace \(\mathcal{A}^{F}\) with \(\mathcal{A}^{F}_{\tau}\) and use the corresponding \(\tau\)-invariant gauge groups.
**Definition 5.4**.: Let \(Q\) be the canonical \(2\)-dimensional representation of \(SU(2)\). Let \(Hom(Q,V)^{\Gamma}\) denote the \(\Gamma\)-invariant subset of \(Hom(Q,V)\), consisting of all maps that commute with the \(\Gamma\)-actions on \(Q\) and \(V\), that is, for \(f\in Hom(Q,V)\),\(f(\gamma(z))=\gamma(f(z))\), where \(\gamma\in\Gamma\) and \(z\in Q\).
**Lemma 5.5**.: _The space \(Hom(Q,V)\) is isomorphic to the space of holomorphic sections of \(E(V)\) with respect to \(A_{0}\). The space \(Hom(Q,V)^{\Gamma}\) is isomorphic to the space of holomorphic sections of \(E(\Gamma)\) with respect to \(A_{0}\)._
**Remark 5.6**.:
1. It is easy to see that \(M\cong Hom(Q,V)^{\Gamma}\), and hence by the previous lemma, we can think of \(M\) as the space of holomorphic sections of \(E(\Gamma)\) with respect to the fixed connection \(A_{0}\).
2. The above lemma gives rise to a map \[\Psi:M\to\mathcal{A}^{F}\times C^{\infty}(E(\Gamma))\] \[\lambda\mapsto(A_{0},\Theta:x\mapsto[p,\lambda(p)]),\] with the property that \(\Psi\) is an isomorphism onto its image. In addition, \(\Psi\) can be naturally regarded as an isometry onto its image. To see this, we observe that the hyperkahler metric \(g_{h}\) given in Proposition 3.6 restricted to the set \(\{\Theta\in C^{\infty}(E(\Gamma))|\bar{\partial}_{A_{0}}\Theta=0\}\) agrees with the natural flat hyperkahler metric on \(M\). Hence, \(\Psi\) is an isometry onto its image.
3. A holomorphic section of \(E(\Gamma)\) with respect to the fixed connection \(A_{0}\) can be expressed as a pair of matrices \((\alpha,\beta)\) where \((\alpha,\beta)\) is \(\Gamma\)-invariant as in [17].
We omit the proofs for the two preceding lemmas as the proofs can be found in or follow from standard references such as [16], [15], and [10].
**Lemma 5.7**.: _There is a map_
\[\tilde{\Psi}:M\to\{(A_{0}+B,\Theta)\in\mathcal{A}^{F}\times C^{\infty}(E( \Gamma))|\bar{\partial}_{A_{0}+B}\Theta=0\}/\mathcal{G}^{F,\Gamma}_{0,\mathbb{ C}}\]
_such that \(\tilde{\Psi}\) is an isomorphism, where \(M\) comes from Kronheimer's construction in [17], and the residual \(F^{c}\)-actions on the two sides coincide._
Proof.: By Lemma 5.2, we know that \(\mathcal{G}^{F,\Gamma}_{0,\mathbb{C}}\) acts freely and transitively on the space of connections. Hence, we can take \(\tilde{\Psi}\) to be the following composition of maps: let \(\mathcal{C}\) denote \(\{(A_{0}+B,\Theta)\in\mathcal{A}^{F}\times C^{\infty}(E(\Gamma))|\bar{\partial} _{A_{0}+B}\Theta=0\}\), and consider
\[\tilde{\Psi}:M\rightarrow\mathcal{C}\rightarrow\mathcal{C}/\mathcal{G}^{F, \Gamma}_{0,\mathbb{C}},\]
\[(\alpha,\beta)=\lambda\mapsto(A_{0},\Theta:x\mapsto[p,\lambda(p)])\mapsto[( A_{0},\Theta:x\mapsto[p,\lambda(p)])],\]
where \([(A_{0},\Theta:x\mapsto[p,\lambda(p)])]\) denotes the gauge orbit containing the chosen representative. The preceding arguments show that \(\tilde{\Psi}\) is an isomorphism, and it follows that the residual \(F^{c}\)-actions on \(M\) and \(\mathcal{C}/\mathcal{G}^{F,\Gamma}_{0,\mathbb{C}}\) coincide.
**Remark 5.8**.: We let \(\tilde{\Psi}_{\tau}\) denote the map
\[\tilde{\Psi}_{\tau}:M\rightarrow\{(A_{0}+B,\Theta)\in\mathcal{A}^{F}_{\tau} \times C^{\infty}(E(\Gamma))|\bar{\partial}_{A_{0}+B}\Theta=0\}/\mathcal{G}^{ F,\Gamma}_{\tau,0,\mathbb{C}}.\]
We have that \(\tilde{\Psi}_{\tau}\) is again an isomorphism following the same arguments as in the previous lemma.
Before proceeding, we set up some linear algebra that will be of use later. Recall that \(E(V)\) is the vector bundle associated to \(S^{3}\) with fiber the \(\Gamma\)-representation \(V=End(R)\). We have the following two maps induced by left and right multiplication on \(V\):
\[c_{l}:V\to End(V),c_{l}(\phi)(\psi)=\phi\circ\psi\]
and
\[c_{r}:V\to End(V),c_{r}(\phi)(\psi)=\psi\circ\phi.\]
Since both \(c_{l}\) and \(c_{r}\) commute with the \(S^{1}\)-action, they give rise to bundle maps:
\[c_{l},c_{r}:E(V)\to E(End(V)).\]
Hence, given \(\phi,\psi\in E(V)_{x}\), we have the following composition:
\[E(V)_{x}\otimes E(V)^{*}_{x}\to E(End(V))_{x}\otimes E(End(V))^{*}_{x} \rightarrow\underline{End(V)}_{x},\]
\[\phi\otimes\psi^{*}\mapsto c_{l}(\phi)\otimes c_{l}(\psi^{*})\mapsto[c_{l}( \phi),c_{l}(\psi^{*})].\]
On the other hand, we also have
\[E(V)_{x}\otimes E(V)^{*}_{x}\rightarrow\underline{End(R)}_{x}\to \underline{End(End(R))}_{x}=\underline{End(V)}_{x},\]
\[\phi\otimes\psi^{*}\mapsto[\phi,\psi^{*}]\mapsto c_{l}([\phi,\psi^{*}]),\]
where we also have
\[[c_{l}(\phi),c_{l}(\psi^{*})]=c_{l}([\phi,\psi^{*}]).\]
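For completeness, this identity can be verified pointwise: for \(\chi\in V\),

\[[c_{l}(\phi),c_{l}(\psi^{*})](\chi)=\phi\circ(\psi^{*}\circ\chi)-\psi^{*}\circ(\phi\circ\chi)=(\phi\circ\psi^{*}-\psi^{*}\circ\phi)\circ\chi=c_{l}([\phi,\psi^{*}])(\chi).\]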
Similarly, there are maps such as
\[E(End(R))\otimes\underline{End(R)}\to E(End(R)),\]
\[\underline{End(R)}\otimes E(End(R))\to E(End(R)),\]
\[End(R)\otimes E(End(R))\otimes End(R)\to E(End(R)),\]
modeled locally on maps such as
\[End(R)\otimes End(R)\to End(R),\phi\otimes\psi\mapsto\phi\circ\psi.\]
**Lemma 5.9** (Uniqueness theorem 1).: _Let \((B_{1},\Theta_{1})\) and \((B_{2},\Theta_{2})\) be two solutions to 3.1 and 3.2 in \(\mathcal{A}^{F}\times C^{\infty}(E(\Gamma))\) that lie on the same complex orbit, that is, there exists a complex automorphism of \(E(\Gamma)\) taking \((B_{1},\Theta_{1})\) to \((B_{2},\Theta_{2})\). Then \((B_{1},\Theta_{1})\) and \((B_{2},\Theta_{2})\) are unitarily equivalent._
Proof.: This proof is modeled on Hitchin's proof of Theorem (2.7) in [11]. Let \(\kappa:E(\Gamma)\to E(\Gamma)\) be the complex automorphism satisfying \(\Theta_{1}\kappa=\kappa\Theta_{2}\) and \(\bar{\partial}_{B_{1}}\kappa=\kappa\bar{\partial}_{B_{2}}\). We also have
\[\bar{\partial}_{A_{0}+B_{1}}\Theta_{1}=\bar{\partial}_{A_{0}+B_{2}}\Theta_{2}=0\]
and
\[F_{B_{1}}-\frac{i}{2}[\Theta_{1},\Theta_{1}^{*}]\omega_{vol}=F_{B_{2}}-\frac{i }{2}[\Theta_{2},\Theta_{2}^{*}]\omega_{vol}=\sigma.\]
Now we define two bundles: let
\[W=End(E(\Gamma))\cong E(\Gamma)\otimes E(\Gamma)^{*},\]
and let
\[W^{\circ}=E(End(V))^{\Gamma}.\]
We remark that both \(W\) and \(W^{\circ}\) have the same fibers isomorphic to \(End(V)\), but \(W\) is a trivial bundle whereas \(W^{\circ}\) is again an associated bundle of \(S^{3}\). We can think of \(\kappa\) as a section of \(W\). We also have that \(\Theta_{1}\) and \(\Theta_{2}\) together define a section
\[\mathbf{\Theta}=c_{l}(\Theta_{1})-c_{r}(\Theta_{2})\]
of \(W^{\circ}\), and \(B_{1}\) and \(B_{2}\) together define a connection
\[\mathbf{B}=B_{1}\otimes id-id\otimes B_{2}^{*}\]
on both \(W\) and \(W^{\circ}\), as \(End(W)\) and \(End(W^{\circ})\) are both isomorphic to \(End(End(V))^{\Gamma}\). As we have
\[\Theta_{1}\kappa=\kappa\Theta_{2},\]
we must have that
\[\mathbf{\Theta}\kappa=(c_{l}(\Theta_{1})-c_{r}(\Theta_{2}))\kappa=0.\]
We observe that the pair \((\mathbf{B},\mathbf{\Theta})\) satisfies the equations
\[\bar{\partial}_{\mathbf{B}}\mathbf{\Theta}=0\]
and
\[F_{\mathbf{B}}-\frac{i}{2}[\mathbf{\Theta},\mathbf{\Theta}^{*}]\omega_{vol}= \operatorname{ad}(\sigma)\]
on \(W^{\circ}\), where \([\mathbf{\Theta},\mathbf{\Theta}^{*}]=c_{l}([\Theta_{1},\Theta_{1}^{*}])-c_{r} ([\Theta_{2},\Theta_{2}^{*}])\).
To proceed, we now think of \(\kappa\) as a holomorphic section of \(W\) with respect to \(\mathbf{B}\), that is, \(\bar{\partial}_{\mathbf{B}}\kappa=0\), as \(\bar{\partial}_{B_{1}}\kappa=\kappa\bar{\partial}_{B_{2}}\). Before we continue further, we first want to prove a useful identity. Consider
\[\bar{\partial}\langle\partial_{\mathbf{B}}\kappa,\kappa\rangle=\langle\bar{ \partial}_{\mathbf{B}}\partial_{\mathbf{B}}\kappa,\kappa\rangle-\langle \partial_{\mathbf{B}}\kappa,\partial_{\mathbf{B}}\kappa\rangle.\]
Since \(F_{\mathbf{B}}=\bar{\partial}_{\mathbf{B}}\partial_{\mathbf{B}}+\partial_{\mathbf{B}}\bar{ \partial}_{\mathbf{B}}\) and \(\bar{\partial}_{\mathbf{B}}\kappa=0\), we have
\[\bar{\partial}\langle\partial_{\mathbf{B}}\kappa,\kappa\rangle=\langle F_{\mathbf{B}} \kappa,\kappa\rangle-\langle\partial_{\mathbf{B}}\kappa,\partial_{\mathbf{B}}\kappa\rangle.\]
Now we integrate on both sides and get
\[\int_{S^{2}/\Gamma}\bar{\partial}\langle\partial_{\mathbf{B}}\kappa,\kappa\rangle+ \int_{S^{2}/\Gamma}\langle\partial_{\mathbf{B}}\kappa,\partial_{\mathbf{B}}\kappa \rangle=\int_{S^{2}/\Gamma}\langle F_{\mathbf{B}}\kappa,\kappa\rangle,\]
and by Stokes' theorem, we get
\[0\leq\int_{S^{2}/\Gamma}\langle\partial_{\mathbf{B}}\kappa,\partial_{\mathbf{B}}\kappa \rangle=\int_{S^{2}/\Gamma}\langle F_{\mathbf{B}}\kappa,\kappa\rangle.\]
Hence, we have
\[\int_{S^{2}/\Gamma}\langle\partial_{\mathbf{B}}\kappa,\partial_{\mathbf{B}}\kappa \rangle=\int_{S^{2}/\Gamma}\langle F_{\mathbf{B}}\kappa,\kappa\rangle=\]
\[\int_{S^{2}/\Gamma}\frac{i}{2}\langle[\mathbf{\Theta},\mathbf{\Theta}^{*}]\kappa, \kappa\rangle\omega_{vol}-\int_{S^{2}/\Gamma}\langle\mathrm{ad}(\sigma)\kappa,\kappa\rangle.\]
Since \(\sigma\) takes values in the center \(Z\), we have that \(\kappa\) commutes with \(\sigma\), i.e., \(\mathrm{ad}(\sigma)\kappa=0\), and hence the following equation
\[-\int_{S^{2}/\Gamma}\langle\mathrm{ad}(\sigma)\kappa,\kappa\rangle=0\]
holds as \(\sigma\otimes 1(\kappa)=1\otimes\sigma^{T}(\kappa)\), which can be shown using essentially the same arguments as in showing \(\mathbf{\Theta}\kappa=0\).
As we have shown that \(\mathbf{\Theta}\kappa=0\), we also obtain
\[\langle[\mathbf{\Theta},\mathbf{\Theta}^{*}]\kappa,\kappa\rangle=\langle\mathbf{\Theta} \mathbf{\Theta}^{*}\kappa,\kappa\rangle=\langle\mathbf{\Theta}^{*}\kappa,\mathbf{\Theta} ^{*}\kappa\rangle\geq 0\]
and hence must be purely real. Consequently, \(\frac{i}{2}\langle[\mathbf{\Theta},\mathbf{\Theta}^{*}]\kappa,\kappa\rangle\) must be purely imaginary, so it must be \(0\). This gives us that \(\partial_{\mathbf{B}}\kappa=0\).
Putting everything together, we have \(\partial_{\mathbf{B}}\kappa=\bar{\partial}_{\mathbf{B}}\kappa=0\) and \(\mathbf{\Theta}\kappa=\mathbf{\Theta}^{*}\kappa=0\). Let \(\rho=\kappa(\kappa^{*}\kappa)^{-\frac{1}{2}}\); then we must have \(d_{\mathbf{B}}\rho=0\). Since \(\mathbf{\Theta}\kappa=\mathbf{\Theta}^{*}\kappa=0\), we have \(\kappa^{*}\Theta_{1}=\Theta_{2}\kappa^{*}\) and \(\kappa\Theta_{2}=\Theta_{1}\kappa\), so \(\Theta_{2}\) commutes with \(\kappa^{*}\kappa\), which implies \(\rho\Theta_{2}=\Theta_{1}\rho\). Hence, we obtain the desired statement that \((B_{1},\Theta_{1})\) and \((B_{2},\Theta_{2})\) lie on the same unitary gauge orbit.
**Corollary 5.10** (Uniqueness theorem 2).: _Let \((B_{1},\Theta_{1})\) and \((B_{2},\Theta_{2})\) be two solutions to 3.1 - 3.4 in \(\mathcal{A}_{\tau}^{F}\times C^{\infty}(E(\Gamma))\) that lie on the same \(\mathcal{G}_{\tau,\mathbb{C}}^{F,\Gamma}\)-orbit, that is, there exists a complex automorphism of \(E(\Gamma)\) in \(\mathcal{G}_{\tau,\mathbb{C}}^{F,\Gamma}\) that takes \((B_{1},\Theta_{1})\) to \((B_{2},\Theta_{2})\). Then \((B_{1},\Theta_{1})\) and \((B_{2},\Theta_{2})\) lie on the same \(\mathcal{G}_{\tau}^{F,\Gamma}\)-orbit._
Proof.: Let \(\kappa\) be such a complex automorphism. By the same arguments as in the previous lemma, we can modify \(\kappa\) and obtain a unitary gauge element \(\rho=\kappa(\kappa^{*}\kappa)^{-\frac{1}{2}}\) that also sends \((B_{1},\Theta_{1})\) to \((B_{2},\Theta_{2})\). We must also have that \(\rho\) is \(\tau\)-invariant as \(\kappa\) is \(\tau\)-invariant. Hence, \(\rho\) lies in \(\mathcal{G}_{\tau}^{F,\Gamma}\).
Before we proceed to the next section, we prove the following proposition which analyzes the stabilizer group of a holomorphic section \(\Theta\).
**Proposition 5.11**.: _If \(\Theta\) has trivial stabilizer in \(Stab(B)\) with \(\bar{\partial}_{A_{0}+B}\Theta=0\), then \(\Theta\) has trivial stabilizer in \(\mathcal{G}^{F,\Gamma}\)._
Proof.: Let \(\kappa:E(\Gamma)\to E(\Gamma)\) be a complex automorphism on \(E(\Gamma)\) taking \(A_{0}\) to \(A_{0}+B\). In other words, we have \(B=\kappa^{-1}\bar{\partial}\kappa+\kappa^{*}\partial\kappa^{*-1}\). Consider \(\kappa^{-1}\Theta\kappa\); it is a holomorphic section of \(E(\Gamma)\) with respect to \(A_{0}\). Hence, we can rewrite \(\kappa^{-1}\Theta\kappa\) as a pair of matrices \((\alpha,\beta)\). The identification is as follows: for \(x\in S^{2}\), \(\kappa^{-1}\Theta\kappa:x\mapsto[p,\lambda(p)]\), where \(\lambda:S^{3}\to End(R)\) is given by \(\lambda(z_{1},z_{2})=z_{1}\alpha+z_{2}\beta\).
Since \((\alpha,\beta)\) is \(\Gamma\)-invariant, we have that for \(\gamma=\begin{pmatrix}u&v\\ -v^{*}&u^{*}\end{pmatrix}\), the pair \((\alpha,\beta)\) must satisfy
\[R(\gamma^{-1})\alpha R(\gamma)=u\alpha+v\beta \tag{5.1}\]
and
\[R(\gamma^{-1})\beta R(\gamma)=-v^{*}\alpha+u^{*}\beta \tag{5.2}\]
as in [17]. Notice that if \(v\neq 0\), then \(\beta\) is uniquely given by \(\beta=v^{-1}R(\gamma^{-1})\alpha R(\gamma)-v^{-1}u\alpha\). On the other hand, if \(v=0\) for all \(\gamma\in\Gamma\), then it implies that \(\Gamma\) is a cyclic subgroup. Hence, we break the proof into two cases.
_Case 1:_\(\Gamma\) is not cyclic.
In this case, we have that \(v\neq 0\) and \(\beta=v^{-1}R(\gamma^{-1})\alpha R(\gamma)-v^{-1}u\alpha\). First, we want to show that \((\alpha,\beta)\) has trivial stabilizer in \(F/T\) if and only if \(\alpha\) has trivial stabilizer in \(F/T\). We can assume that \(\alpha\) and \(\beta\) are both nonzero as by 5.1 and 5.2, it's easy to see that if either \(\alpha\) or \(\beta\) is \(0\), then both have to be \(0\).
We observe that \((\alpha,\beta)\) has trivial stabilizer in \(F/T\) if and only if \(\alpha\) has trivial stabilizer in \(F/T\): if \(\alpha\) has trivial stabilizer in \(F/T\), then clearly \((\alpha,\beta)\) has trivial stabilizer in \(F/T\); on the other hand, if some element \(f\) stabilizes \(\alpha\), then it stabilizes \(\beta\) as well by the equality \(\beta=v^{-1}R(\gamma^{-1})\alpha R(\gamma)-v^{-1}u\alpha\), so \(f\) stabilizes \((\alpha,\beta)\). With the preceding arguments, we can rephrase the assumption that \((\alpha,\beta)\) has trivial stabilizer in \(F/T\) as simply that \(\alpha\) has trivial stabilizer in \(F/T\).
Now, at a point \(p\) thought of as a pair \((z_{1},z_{2})\), we can use some \(\gamma\in\Gamma\) to get the following equality
\[f(z_{1}\alpha+z_{2}\beta)f^{-1}=f(z_{1}\alpha-z_{2}v^{-1}u\alpha+z_{2}v^{-1}R( \gamma^{-1})\alpha R(\gamma))f^{-1}\]
\[=z_{1}f\alpha f^{-1}-z_{2}v^{-1}uf\alpha f^{-1}+z_{2}v^{-1}R(\gamma^{-1})(f \alpha f^{-1})R(\gamma).\]
Assume that we are given \(f\alpha f^{-1}\neq\alpha\) for all non-identity \(f\in F/T\). We want to show that for any pair of points \((z_{1},z_{2})\) and for all non-identity \(f\in F/T\), we always have the following:
\[z_{1}\alpha-z_{2}v^{-1}u\alpha+z_{2}v^{-1}R(\gamma^{-1})\alpha R(\gamma)\neq z _{1}f\alpha f^{-1}-z_{2}v^{-1}uf\alpha f^{-1}+z_{2}v^{-1}R(\gamma^{-1})(f \alpha f^{-1})R(\gamma).\]
To achieve this end, let \(L_{\gamma}\) be the linear map defined as follows: for a pair \((c,d)\in M\), consider
\[L_{\gamma}:c\mapsto z_{1}c-z_{2}v^{-1}uc+z_{2}v^{-1}R(\gamma^{-1})cR(\gamma).\]
Then we need to show \(L_{\gamma}(\alpha)\neq L_{\gamma}(f\alpha f^{-1})\). As we know that \(\alpha\neq f\alpha f^{-1}\), it suffices to show that
\[\bigcap_{\gamma\in\Gamma}\ker(L_{\gamma})=0.\]
We can assume that \(z_{2}\neq 0\) as the inequality is clearly satisfied when \(z_{2}=0\). Hence, \(c\) lies in the kernel of \(L_{\gamma}\) if
\[\frac{z_{2}v^{-1}u-z_{1}}{z_{2}v^{-1}}c=R(\gamma^{-1})cR(\gamma).\]
This implies that \(c\) must be a scalar multiple of \(d\), that is, \(d=qc\); in particular, by applying 5.1 and 5.2 to the pair \((c,d)\), we must have \((qu+q^{2}v+v^{*}-u^{*}q)c=0\). Notice that this equality must be satisfied for any choice of \(\gamma\in\Gamma\) with \(v\neq 0\), and since \(q\) and \(c\) are fixed, we see that this equality can only hold when \(c=0\). As a result, \(z_{1}\alpha+z_{2}\beta\) has trivial stabilizer for all \((z_{1},z_{2})\), which gives us that \((\alpha,\beta)\) has trivial stabilizer in \(\mathcal{G}^{F,\Gamma}\).
_Case 2:_\(\Gamma\) is cyclic.
When \(\Gamma\) is a cyclic subgroup, we can write down \(\alpha\) and \(\beta\) explicitly and describe the action of \(\Gamma\) and \(F/T\) explicitly as well. We use the decomposition of \(M\) in terms of simply-laced Dynkin diagram given in [17] and reviewed in Section 2.1:
\[M=\bigoplus_{i,j}a_{ij}Hom(\mathbb{C}^{n_{i}},\mathbb{C}^{n_{j}}).\]
We also have that
\[F=\times_{i}U(n_{i}).\]
For the case where \(\Gamma\) is cyclic, \(n_{i}=1\) for all \(i\), and
\[M=(\bigoplus_{i}Hom(\mathbb{C}^{n_{i}},\mathbb{C}^{n_{i+1}}))\oplus(\bigoplus _{j}Hom(\mathbb{C}^{n_{j+1}},\mathbb{C}^{n_{j}})).\]
We can regard \(\alpha\in\bigoplus_{i}Hom(\mathbb{C}^{n_{i}},\mathbb{C}^{n_{i+1}})\) and \(\beta\in\bigoplus_{j}Hom(\mathbb{C}^{n_{j+1}},\mathbb{C}^{n_{j}})\). Hence, we can write \(\alpha=(a_{1},...,a_{n})\) and \(\beta=(b_{1},...,b_{n})\), and \(F\) acts on \(\mathbb{C}^{n_{i}}\) and \(\mathbb{C}^{n_{j}}\) by scalar multiplication.
For \((\alpha,\beta)\) to have trivial stabilizer in \(F/T\), we must have that for all \(i\in\{1,...,n\}\), at least one of \(a_{i}\) and \(b_{i}\) is not \(0\). For \(z_{1}\alpha+z_{2}\beta\) to have trivial stabilizer in \(F/T\) at \((z_{1},z_{2})\), we must have that for all \(i\in\{1,...,n\}\), at least one of \(z_{1}a_{i}\) and \(z_{2}b_{i}\) is not \(0\). But this condition can fail only when either \(z_{1}\) or \(z_{2}\) is \(0\). This means that the stabilizer of \((\alpha,\beta)\) in \(\mathcal{G}^{F,\Gamma}\) must be the identity away from \((0,z_{2})\) and \((z_{1},0)\), and hence it must be the identity by continuity.
Hence, we have shown that if \(\lambda(p)\) has trivial stabilizer at a single \(p\), then for any other \(p^{\prime}\), \(\lambda(p^{\prime})\) also has trivial stabilizer. This is equivalent to saying that if \(\kappa^{-1}\Theta\kappa\) has trivial stabilizer in \(F/T\), then it has trivial stabilizer
in \(\mathcal{G}^{F,\Gamma}\). By pushing forward using \(\kappa\), we get the desired statement of the proposition.
**Corollary 5.12**.: _If \(\Theta\) has trivial stabilizer in \(Stab(B)\) with \(\bar{\partial}_{A_{0}+B}\Theta=0\), then \(\Theta\) has trivial stabilizer in \(\mathcal{G}^{F,\Gamma}_{\tau}\)._
## 6. Smoothness and dimension calculations
In this section, we show that the moduli space is a smooth finite-dimensional manifold and calculate its dimension which will be useful for proving Theorem 3.8. To achieve this end, we first introduce the following lemma.
**Lemma 6.1**.: _If \((B,\Theta)\) and \((B^{\prime},\Theta^{\prime})\) are two solutions to 3.1 and 3.2 in \(\mathcal{A}^{F}\times C^{\infty}(E(\Gamma))\) with \(B\) and \(B^{\prime}\) not \(\mathcal{G}^{F,\Gamma}\)-equivalent, then \((B^{\prime},\Theta^{\prime})\) is separated from the subset of solutions such that the connection part is \(\mathcal{G}^{F,\Gamma}\)-equivalent to \(B\)._
Proof.: Suppose we have two solutions \((B,\Theta)\) and \((B^{\prime},\Theta^{\prime})\) such that \(B\) is not \(\mathcal{G}^{F,\Gamma}\)-equivalent to \(B^{\prime}\). We proceed by contradiction. Suppose that there exists a sequence of solutions \(\{(B_{n},\Theta_{n})\}_{n}\) such that \((B_{1},\Theta_{1})=(B,\Theta)\) and \(\{(B_{n},\Theta_{n})\}_{n}\) converges weakly in \(L^{2}_{1}\) to \((B^{\prime},\Theta^{\prime})\) with \(B_{n}\) lying on the same \(\mathcal{G}^{F,\Gamma}\)-orbit as \(B\), for all \(n\). Then we get a sequence of gauge elements lying in \(\mathcal{G}^{F,\Gamma}\), denoted by \(\{\rho_{n}\}\), such that \(\rho_{n}\cdot B=B_{n}\), for all \(n\). (Note that we don't assume \(\rho_{n}\cdot\Theta=\Theta_{n}\).) We want to show that \(\{\rho_{n}\}\) converges weakly to some \(\rho\). To this end, we follow Hitchin's proof of Theorem (2.7) in [11]. Consider the following:
\[\bar{\partial}_{B_{1}B_{n}}:\Omega^{0}(S^{2}/\Gamma;E(\Gamma)^{*}\otimes E( \Gamma))\to\Omega^{0,1}(S^{2}/\Gamma;E(\Gamma)^{*}\otimes E(\Gamma)),\]
where \(B_{n}\) acts on the \(E(\Gamma)^{*}\) factor, and \(B_{1}\) acts on the \(E(\Gamma)\) factor. Hence, \(\bar{\partial}_{B_{1}B^{\prime}}=\bar{\partial}_{B_{1}B_{n}}+t_{n}\) where \(t_{n}\to 0\) weakly in \(L^{2}_{1}\). As before, \(\rho_{n}\) is the sequence of unitary gauge elements taking \(B\) to \(B_{n}\), and \(\|\rho_{n}\|_{L^{2}}=1\).
We also have
\[\rho_{n}\cdot B_{1}=\rho_{n}^{*}\circ\partial_{B_{1}}\circ\rho_{n}^{*-1}+\rho _{n}^{-1}\circ\bar{\partial}_{B_{1}}\circ\rho_{n}=\partial_{B_{n}}+\bar{ \partial}_{B_{n}}.\]
Hence, \(\rho_{n}^{*}\circ\partial_{B_{1}}\circ\rho_{n}^{*-1}=\partial_{B_{n}}\) and \(\rho_{n}^{-1}\circ\bar{\partial}_{B_{1}}\circ\rho_{n}=\bar{\partial}_{B_{n}}\), so we have \(\bar{\partial}_{B_{1}}\circ\rho_{n}-\rho_{n}\circ\bar{\partial}_{B_{n}}=0\), but this is equivalent to \(\bar{\partial}_{B_{1}B_{n}}\rho_{n}=0\).
Now, the elliptic estimate for \(\bar{\partial}_{B_{1}B_{n}}\) gives us
\[\|\rho_{n}\|_{L^{2}_{1}}\leq C(\|[t_{n},\rho_{n}]\|_{L^{2}}+\|\rho_{n}\|_{L^{2} })=C(\|[t_{n},\rho_{n}]\|_{L^{2}}+1)\leq K_{1}\|t_{n}\|_{L^{4}}\|\rho_{n}\|_{L^ {4}}+K_{2}.\]
Since \(L^{2}_{1}\subset L^{4}\) compactly, we have that \(\|\rho_{n}\|_{L^{2}_{1}}\) is bounded and hence has a weakly convergent subsequence. Since \(L^{2}_{1}\subset L^{2}\) is compact and \(\|\rho_{n}\|_{L^{2}}=1\), the weak limit \(\rho\) is non-zero.
Hence, we have \(\rho\cdot B=B^{\prime}\). Since by construction, \(B\) and \(B^{\prime}\) lie on the same complex orbit, \(\rho\) must be a complex automorphism. Now since weak convergence implies pointwise convergence, that is, \(\rho_{n}(x)\to\rho(x)\), for all \(x\in S^{2}/\Gamma\), and \(F/T\) is compact, we must have \(\rho(x)\in F/T\), for all \(x\). Hence, \(\rho\) lies in \(\mathcal{G}^{F,\Gamma}\), but this is a contradiction.
**Corollary 6.2**.: _If \((B,\Theta)\) and \((B^{\prime},\Theta^{\prime})\) are two solutions to 3.1 - 3.4 in \(\mathcal{A}_{\tau}^{F}\times C^{\infty}(E(\Gamma))\) with \(B\) and \(B^{\prime}\) not \(\mathcal{G}_{\tau}^{F,\Gamma}\)-equivalent, then \((B^{\prime},\Theta^{\prime})\) is separated from the subset of solutions such that the connection part is \(\mathcal{G}_{\tau}^{F,\Gamma}\)-equivalent to \(B\)._
Proof.: We assume otherwise and again follow the same arguments as in the previous lemma, with the further assumption that all the gauge transformations are \(\tau\)-invariant, that is, they lie in \(\mathcal{G}_{\tau}^{F,\Gamma}\). Hence, we obtain a limit \(\rho\) that lies in \(\mathcal{G}_{\tau}^{F,\Gamma}\), which is a contradiction.
**Corollary 6.3**.:
1. _Solutions to 3.1 and 3.2 in_ \(\mathcal{A}^{F}\times C^{\infty}(E(\Gamma))\) _with the connection part_ \(B\) _not_ \(\mathcal{G}^{F,\Gamma}\)_-equivalent lie in different connected components of the moduli space_ \(\mathcal{M}(\Gamma,\tilde{\zeta}_{1})\)_._
2. _Solutions to 3.1 - 3.4 in_ \(\mathcal{A}_{\tau}^{F}\times C^{\infty}(E(\Gamma))\) _with the connection part_ \(B\) _not_ \(\mathcal{G}_{\tau}^{F,\Gamma}\)_-equivalent lie in different connected components of the moduli space_ \(\mathcal{X}_{\tilde{\zeta}}\)_._
**Proposition 6.4**.:
1. _Suppose_ \((B,\Theta)\) _is a solution to 3.1 and 3.2 in_ \(\mathcal{A}^{F}\times C^{\infty}(E(\Gamma))\) _with trivial stabilizer in_ \(\mathcal{G}^{F,\Gamma}\)_, the moduli space_ \(\mathcal{M}(\Gamma,\tilde{\zeta}_{1})\) _at the orbit of_ \((B,\Theta)\) _is smooth of dimension_ \(2|\Gamma|+2\)_._
2. _If_ \((B,\Theta)\) _is a solution to_ 3.1 - 3.4 in_ \(\mathcal{A}_{\tau}^{F}\times C^{\infty}(E(\Gamma))\) _with trivial stabilizer in_ \(\mathcal{G}_{\tau}^{F,\Gamma}\)_, the moduli space_ \(\mathcal{X}_{\tilde{\zeta}}\) _at the orbit of_ \((B,\Theta)\) _is smooth of dimension_ \(4\)_._
Proof.:
1. Consider the set of sections \(\mathcal{S}=\{\Theta\in C^{\infty}E(\Gamma)|\bar{\partial}_{A_{0}+B}\Theta=0\}\). The stabilizer group \(Stab(B)\) of \(B\) in \(\mathcal{G}^{F,\Gamma}\) acts on \(\mathcal{S}\). By Lemma 5.2 and Lemma 5.7 (with small adaptations of the proof), we have that \(\mathcal{S}\) is isomorphic to \(M=P^{\Gamma}\) and \(Stab(B)\) is isomorphic to \(F\). Hence, we can restrict the symplectic structure compatible with \(I\) on \(C^{\infty}E(\Gamma)\) to \(\mathcal{S}\) and obtain a Hamiltonian action of \(Stab(B)\) on \(\mathcal{S}\) with respect to the restrictions of \(I\) on \(\mathcal{S}\). We also know that \(Stab(B)\) acts freely at \(\Theta\in\mathcal{S}\) as \(\mathcal{G}^{F,\Gamma}\) acts freely at \((B,\Theta)\). On the other hand, by Lemma 6.1 and Corollary 6.3, every point in the connected component of \(\mathcal{M}(\Gamma,\tilde{\zeta}_{1})\) containing the orbit of \((B,\Theta)\) has a unique representative lying in \(\mathcal{S}\). Hence, the smoothness and the dimension of \(\mathcal{M}(\Gamma,\tilde{\zeta}_{1})\) at \([(B,\Theta)]\) follow from Proposition 2.1 in [17].
2. First, we observe that the action of \(J\) commutes with the action of \(\rho\) when \(\rho\) lies in \(\mathcal{G}_{\tau}^{F,\Gamma}\). Hence, we can restrict the hyperkahler structure on \(C^{\infty}E(\Gamma)\) to \(\mathcal{S}\) and obtain a Hamiltonian action of \(Stab(B)\) on \(\mathcal{S}\) with respect to the restrictions of \(I\), \(J\), and \(K\) on \(\mathcal{S}\). We also know that \(Stab(B)\) acts freely at \(\Theta\in\mathcal{S}\) as \(\mathcal{G}_{\tau}^{F,\Gamma}\) acts freely at \((B,\Theta)\). On the other hand, by Corollary 6.2 and Corollary 6.3, every point in the connected component of \(\mathcal{X}_{\tilde{\zeta}}\) containing the orbit of \((B,\Theta)\) has a unique representative lying in \(\mathcal{S}\). Hence, the smoothness and the
dimension of \(\mathcal{X}_{\tilde{\zeta}}\) at \([(B,\Theta)]\) again follow from Proposition 2.1 in [17].
## 7. Proof of Theorem 3.8
### A criterion for obtaining free \(\mathcal{G}_{\tau}^{F,\Gamma}\)-action
Now we want to give a criterion for when the \(\mathcal{G}_{\tau}^{F,\Gamma}\)-action is free on \(\tilde{\mu}^{-1}(\tilde{\zeta})\).
We adapt the notations introduced in [17] and in Section 2.1 to our setting. Consider projection maps
\[\pi_{i}:R\to\mathbb{C}^{n_{i}}\otimes R_{i}.\]
Now, let \(\hat{Z}\) denote the center of \(\mathbf{f}\). Then \(\Omega^{0}(S^{2}/\Gamma;\hat{Z})\) is spanned by elements \(\sqrt{-1}\pi_{i}\), that is, smooth sections such that at each point the endomorphism is a scalar multiple of the projection map. Let \(h\) denote the real Cartan algebra associated to the Dynkin diagram, we then get a linear map \(l\) from \(\Omega^{0}(S^{2}/\Gamma;\hat{Z})\) to \(\Omega^{0}(S^{2}/\Gamma;h^{*})\) given by
\[l:\sqrt{-1}\pi_{i}\mapsto n_{i}\xi_{i},\]
and hence \(l\) induces a map \(\tilde{l}\) from \(\Omega^{0}(S^{2}/\Gamma;Z)\) to \(\Omega^{0}(S^{2}/\Gamma;h)\) which is an isomorphism.
Let \(\xi\) be a root, not necessarily simple. We define \(\tilde{D}_{\xi}\) to be \(\ker(\xi\circ\tilde{l})\), where we regard \(\xi\) as a constant element in \(\Omega^{0}(S^{2}/\Gamma;h^{*})\).
**Lemma 7.1**.: _Let \((B,\Theta)\) be a solution to 3.1 - 3.4 in \(\mathcal{A}_{\tau}^{F}\times C^{\infty}(E(\Gamma))\). If \(\mathcal{G}_{\tau}^{F,\Gamma}\) does not act freely on \((B,\Theta)\), then \(\tilde{\zeta}\) lies in \(\mathbb{R}^{3}\otimes\tilde{D}_{\xi}\)._
Proof.: This proof is a reformulation of Kronheimer's original proof of Proposition 2.8 in [17] in our setting. Suppose \((B,\Theta)\in\tilde{\mu}^{-1}(\tilde{\zeta})\) is fixed by some \(\rho\in\mathcal{G}_{\tau}^{F,\Gamma}\). In particular, \(\rho\) lies in \(Stab(B)\) and fixes \(\Theta\). Then we can rewrite \(\rho\) as
\[\rho=\kappa\rho_{0}\kappa^{-1},\]
where \(\rho_{0}\) is a constant in the complexification of \(F/T\) and
\[\kappa:E(\Gamma)\to E(\Gamma)\]
is a complex automorphism with
\[\kappa^{-1}\bar{\partial}\kappa+\kappa^{*}\partial\kappa^{*-1}=B.\]
We can find a lift \(\tilde{\rho_{0}}\) of \(\rho_{0}\) in the complexification of \(F\) and decompose \(R\) into the eigenspaces of \(\tilde{\rho_{0}}\) and obtain at least two \(\Gamma\)-invariant parts
\[R=R^{\prime}\oplus R^{\prime\prime}.\]
We have that \(E(End(R^{\prime}))\) is naturally a holomorphic subbundle of \(E(\Gamma)\) with respect to \(A_{0}\). This gives rise to a holomorphic subbundle \(\tilde{E}\) of \(E(\Gamma)\) with respect to \(B\) where the fiber of \(\tilde{E}\) over each point \(x\) is isomorphic to \(End(R^{\prime})\). Explicitly, \(\tilde{E}\) is the image of \(E(End(R^{\prime}))\) under \(\kappa\).
Without loss of generality we assume that \(\Theta\) is a holomorphic section of \(\tilde{E}\) with a free action by \(Map(S^{2}/\Gamma,F^{\prime}/T^{\prime})\), where \(Map(S^{2}/\Gamma,F^{\prime}/T^{\prime})\) is the natural gauge group acting on \(\tilde{E}\). In other words, \(\tilde{E}\) is the smallest holomorphic subbundle of \(E(\Gamma)\) such that \(\Theta\) is a holomorphic section of \(\tilde{E}\) and there is no proper subbundle of \(\tilde{E}\) of which \(\Theta\) is a section. We observe that \(\tilde{E}\) is \(\Gamma\)-invariant. In particular, \(\tilde{E}\) is isomorphic to \(E(End(R^{\prime}))^{\Gamma}\).
By Proposition 6.4, we know that the condition that \(Map(S^{2}/\Gamma,F^{\prime}/T^{\prime})\) acts freely on \(\Theta\) means that the moduli space of the reduction by \(Map(S^{2}/\Gamma,F^{\prime}/T^{\prime})\) on pairs on \(\tilde{E}\) is a smooth manifold at at least one point, with dimension
\[\dim_{\mathbb{R}}(Hom(\mathbb{C}^{2},End(R^{\prime}))^{\Gamma})-4\dim_{ \mathbb{R}}(F^{\prime}/T^{\prime})\geq 0.\]
This translates to
\[\dim_{\mathbb{C}}(Hom(\mathbb{C}^{2},End(R^{\prime}))^{\Gamma})-2\dim_{ \mathbb{C}}(End(R^{\prime})^{\Gamma})+2\geq 0,\]
and hence we have
\[2\dim_{\mathbb{C}}(End(R^{\prime})^{\Gamma})-\dim_{\mathbb{C}}(Hom(\mathbb{C}^ {2},End(R^{\prime}))^{\Gamma})\leq 2.\]
Now further decompose \(R^{\prime}\) into irreducibles \(R^{\prime}=\oplus n^{\prime}_{i}R_{i}\), then the above inequality is the same as the following:
\[2\sum_{i}(n^{\prime}_{i})^{2}-\sum_{i,j}a_{i,j}n^{\prime}_{i}n^{\prime}_{j} \leq 2.\]
Equivalently,
\[\sum_{i,j}c_{i,j}n^{\prime}_{i}n^{\prime}_{j}\leq 2,\]
where \(\bar{C}=(c_{i,j})\) is the extended Cartan matrix. Now let \(\xi\) be defined by
\[\xi=\sum_{i=0}^{r}n^{\prime}_{i}\xi_{i}.\]
The inequalities suggest that
\[\|\xi\|^{2}\leq 2,\]
which implies that \(\xi\) is a root.
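As a concrete illustration (our own check, not part of the original argument), take \(\Gamma=\mathbb{Z}/2\), for which the extended Cartan matrix is the type \(\tilde{A}_{1}\) matrix with \(c_{0,0}=c_{1,1}=2\) and \(c_{0,1}=c_{1,0}=-2\). If \(R^{\prime}=R_{0}\) is the trivial summand, i.e. \((n^{\prime}_{0},n^{\prime}_{1})=(1,0)\), then
\[\sum_{i,j}c_{i,j}n^{\prime}_{i}n^{\prime}_{j}=c_{0,0}=2\leq 2,\]
so \(\|\xi\|^{2}=2\) and \(\xi=\xi_{0}\) is a root, exactly as the lemma concludes.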
Let \(\pi_{B}:E(\Gamma)\to\tilde{E}\) be the projection from \(E(\Gamma)\) to \(\tilde{E}\). We then have that \(\pi_{B}\) induces an element \(\tilde{\pi}\in\Omega^{0}(S^{2}/\Gamma;\mathbf{f})\) such that \(\tilde{\pi}(x)\in End(R)\) is given by
\[\tilde{\pi}(x):R_{x}\to R^{\prime}_{x},\]
where \(R_{x}\) is isomorphic to \(R\), and \(R^{\prime}_{x}\) is a subrepresentation of \(R_{x}\) which is also isomorphic to \(R^{\prime}\), for all \(x\). Notice that, \(\tilde{\pi}\) is identified with \(\kappa\cdot\xi=\kappa\xi\kappa^{-1}=\xi\) under \(l\), as \(\xi\) is in the center.
We have that \(\tilde{\pi}\) acts trivially on \(\Theta\), that is, \([\tilde{\pi},\Theta]=0\), as it is the identity on \(\tilde{E}\). Now consider \(\tilde{\zeta}(\tilde{\pi})\). We compute \(\tilde{\zeta}_{1}(\tilde{\pi})\) here:
\[\tilde{\zeta}_{1}(\tilde{\pi})=\int_{S^{2}/\Gamma}\operatorname{Tr}(\tilde{ \pi}F_{B})-\frac{i}{2}\int_{S^{2}/\Gamma}\operatorname{Tr}(\tilde{\pi}[\Theta,\Theta^{*}])\omega_{vol}.\]
We know that \(\int_{S^{2}/\Gamma}\operatorname{Tr}(F_{A_{0}+B})=\int_{S^{2}/\Gamma}\operatorname{Tr}(F_{A_{0}})+\int_{S^{2}/\Gamma}\operatorname{Tr}(F_{B})=\frac{i}{2\pi}\cdot c_{1}(E(\Gamma))\). By construction, the integral of \(c_{1}(E(\Gamma))\) concentrates on \(A_{0}\), that is, \(\int_{S^{2}/\Gamma}\operatorname{Tr}(F_{A_{0}})=\frac{i}{2\pi}\cdot c_{1}(E(\Gamma))\). Hence, we have that \(\int_{S^{2}/\Gamma}\operatorname{Tr}(F_{B})=0\). Since \(\tilde{E}\) is a holomorphic subbundle of \(E(\Gamma)\) and \(\tilde{\pi}F_{B}\) is the projection of \(F_{B}\) onto \(\tilde{E}\), we must have that on the subbundle \(\tilde{E}\),
\[\int_{S^{2}/\Gamma}\operatorname{Tr}(\tilde{\pi}F_{A_{0}+B})=\int_{S^{2}/\Gamma}\operatorname{Tr}(\tilde{\pi}F_{A_{0}})+\int_{S^{2}/\Gamma}\operatorname{Tr}(\tilde{\pi}F_{B})=\frac{i}{2\pi}\cdot c_{1}(\tilde{E})=\int_{S^{2}/\Gamma}\operatorname{Tr}(\tilde{\pi}F_{A_{0}}).\]
Hence, \(\int_{S^{2}/\Gamma}\operatorname{Tr}(\tilde{\pi}F_{B})=0\).
We have shown that the first integrand is 0. On the other hand, since \([\tilde{\pi},\Theta]=0\), we have
\[\operatorname{Tr}(\tilde{\pi}[\Theta,\Theta^{*}])=\operatorname{Tr}(\tilde{ \pi}\Theta\Theta^{*}-\tilde{\pi}\Theta^{*}\Theta)\]
\[=\operatorname{Tr}(\tilde{\pi}\Theta\Theta^{*}-\Theta\tilde{\pi}\Theta^{*})=0.\]
Hence, \(\tilde{\zeta}_{1}(\tilde{\pi})=0\), that is to say, \(\tilde{\zeta}_{1}\in\tilde{D}_{\xi}\). Similarly, \(\tilde{\zeta}_{2}(\tilde{\pi})=\tilde{\zeta}_{3}(\tilde{\pi})=0\). As a result, we have \(\tilde{\zeta}\in\mathbb{R}^{3}\otimes\tilde{D}_{\xi}\).
**Corollary 7.2**.: _For \(\zeta\) not lying in \(D_{\xi}\) as in [17] and \(\tilde{\zeta}=-\zeta\) thought of as a constant element in \(\Omega^{2}(S^{2}/\Gamma;Z)\), \(\mathcal{G}_{\tau}^{F,\Gamma}\) acts freely on \(\tilde{\mu}^{-1}(\tilde{\zeta})\)._
Proof.: If \(\zeta\) doesn't lie in \(D_{\xi}\) as in [17], then \(\tilde{\zeta}=-\zeta\) thought of as a constant element in \(\Omega^{2}(S^{2}/\Gamma;Z)\) doesn't lie in \(\tilde{D}_{\xi}\). Hence, by the previous lemma, \(\mathcal{G}_{\tau}^{F,\Gamma}\) acts freely on \(\tilde{\mu}^{-1}(\tilde{\zeta})\).
### Proof of Theorem 3.8 Part I
In this subsection, we prove one direction of Theorem 3.8, where we show that the moduli space obtained by the gauge-theoretic construction contains the 4-dimensional hyperkahler ALE space given by Kronheimer's construction. To do this, we first explicitly identify certain solutions to the equations given previously with solutions to the equations given in Kronheimer's work, hence showing that the moduli space contains the corresponding 4-dimensional hyperkahler ALE space. Then, by the uniqueness results, smoothness results and dimension calculations, we conclude that there cannot be any additional solutions other than the ones corresponding to the points of the 4-dimensional hyperkahler ALE space. Hence, we identify the moduli space with a 4-dimensional hyperkahler ALE space.
Proof of Theorem 3.8 Part I.:
**Lemma 7.3**.: _For \(\tilde{\zeta}=\zeta^{*}=-\zeta\), there is a map \(\Phi:X_{\zeta}\to\mathcal{X}_{\tilde{\zeta}}\) which is an embedding and there is a natural choice of metric on \(\mathcal{X}_{\tilde{\zeta}}\) such that \(\Phi\) is an isometry onto its image._
Proof.: We set \(B=0\), then the equations reduce to the following:
\[\bar{\partial}_{A_{0}}\Theta=0,\]
\[-\frac{i}{2}[\Theta,\Theta^{*}]\omega_{vol}=\tilde{\zeta}_{1}\cdot\omega_{vol}=- \zeta_{1}\cdot\omega_{vol},\]
\[-\frac{1}{4}([J\Theta,\Theta^{*}]-[\Theta,J\Theta^{*}])\omega_{vol}=\tilde{ \zeta}_{2}\cdot\omega_{vol}=-\zeta_{2}\cdot\omega_{vol},\]
\[-\frac{i}{4}([J\Theta,\Theta^{*}]+[\Theta,J\Theta^{*}])\omega_{vol}=\tilde{ \zeta}_{3}\cdot\omega_{vol}=-\zeta_{3}\cdot\omega_{vol}.\]
Now, since in this case we can think of \(\Theta\) as a pair of matrices \((\alpha,\beta)\), the equations can be further rewritten as the following (here we are implicitly dropping the volume \(2\)-form on both sides):
\[\frac{i}{2}([\alpha,\alpha^{*}]+[\beta,\beta^{*}])=\zeta_{1}\]
\[\frac{1}{2}([\alpha,\beta]+[\alpha^{*},\beta^{*}])=\zeta_{2}\]
\[\frac{i}{2}([\alpha,\beta]-[\alpha^{*},\beta^{*}])=\zeta_{3}.\]
These are precisely Kronheimer's moment map equations, and hence, by the results of Kronheimer, we get a solution to the equations. By Lemma 5.10, we know that if a \(\mathcal{G}^{F,\Gamma}_{\tau,\mathbb{C}}\)-orbit contains a solution coming from \(X_{\zeta}\), it is also the unique solution on that orbit. On the other hand, we also want to argue that two distinct solutions coming from \(X_{\zeta}\) will remain distinct in the new moduli space. Suppose there are two solutions coming from \(X_{\zeta}\) that become identified by \(\mathcal{G}^{F,\Gamma}_{\tau}\); then they must lie on the same \(\mathcal{G}^{F,\Gamma}_{\tau,\mathbb{C}}\)-orbit as well. Recall that we have
\[\{(A_{0}+B,\Theta)\in\mathcal{A}^{F}_{\tau}\times C^{\infty}(E(\Gamma))|\bar{ \partial}_{A_{0}+B}\Theta=0\}/\mathcal{G}^{F,\Gamma}_{\tau,0,\mathbb{C}} \cong M.\]
Hence, two solutions lie on the same \(\mathcal{G}^{F,\Gamma}_{\tau,\mathbb{C}}\)-orbit if and only if they also lie on the same \(F^{c}\)-orbit, which would imply that they are also on the same \(F\)-orbit. Hence, we define \(\Phi\) to be the bottom horizontal map that makes the following diagram commute:
\[\begin{CD}M@>{\Psi}>{}>\mathcal{A}^{F}_{\tau}\times C^{\infty}(E(\Gamma))\\ \mu^{-1}(\zeta)@>{\Psi|_{\mu^{-1}(\zeta)}}>{}>\tilde{\mu}^{-1}(\tilde{\zeta}) \\ @V{proj}V{}V@V{proj}V{proj}V\\ X_{\zeta}=\mu^{-1}(\zeta)/F@>{\Phi}>{}>\mathcal{X}_{\tilde{\zeta}}=\tilde{ \mu}^{-1}(\tilde{\zeta})/\mathcal{G}^{F,\Gamma}\end{CD}\]
That \(\Phi\) can be regarded as an isometry onto its image comes from the fact that \(\Psi|_{\mu^{-1}(\zeta)}\) is naturally an isometry onto its image, and we can define a metric on \(\mathcal{X}_{\bar{\zeta}}\) as follows: for \([(B_{1},\Theta_{1})],[(B_{2},\Theta_{2})]\in\mathrm{im}(\Phi)\), define
\[d([(B_{1},\Theta_{1})],[(B_{2},\Theta_{2})])=(\inf_{f\in F}\int_{S^{2}/\Gamma} Re\langle f\Theta_{1}^{\prime}f^{-1}-\Theta_{2}^{\prime},f\Theta_{1}^{\prime}f^{-1 }-\Theta_{2}^{\prime}\rangle\omega_{vol})^{\frac{1}{2}},\]
where \(\Theta_{1}^{\prime},\Theta_{2}^{\prime}\) are such that for some \(\rho_{1},\rho_{2}\in\mathcal{G}_{\tau}^{F,\Gamma}\), we have \(\rho_{1}\cdot(B_{1},\Theta_{1})=(0,\Theta_{1}^{\prime})\) as well as \(\rho_{2}\cdot(B_{2},\Theta_{2})=(0,\Theta_{2}^{\prime})\). We see that \(d\) is well-defined on the image of \(\Phi\), and that \(\Phi\) is an isometry onto its image.
### Proof of Theorem 3.8 Part II
In this subsection, we prove the other direction of Theorem 3.8, that is, we show that the moduli space \(\mathcal{X}_{\bar{\zeta}}\) obtained by the gauge-theoretic construction is indeed equal to the 4-dimensional hyperkahler ALE space \(X_{\zeta}\) given by Kronheimer's construction in [17]. To this end, we first prove the following lemma.
**Lemma 7.4**.: _The complement of \(X_{\zeta}\) contained in the gauge-theoretic moduli space \(\mathcal{X}_{\bar{\zeta}}\) is of higher codimension._
Proof.: First, in the setup of [17], by a result of Kirwan [14], as also cited in [17], a stable orbit (closed and of maximal dimension) of \(M\) under the action of \(F^{c}\) contains a solution to the equation \(\frac{i}{2}([\alpha,\alpha^{*}]+[\beta,\beta^{*}])=0\). Now, for any choice of \(\zeta_{1}\), since \(|\mu_{1}-\zeta_{1}|^{2}\) is proper on the \(F^{c}\)-orbit containing a solution to \(\frac{i}{2}([\alpha,\alpha^{*}]+[\beta,\beta^{*}])=0\), and \(F/T\) acts freely on a stable orbit, we have that the complex orbit also contains a solution to \(\frac{i}{2}([\alpha,\alpha^{*}]+[\beta,\beta^{*}])=\zeta_{1}\). As the stable orbits are open and dense, the \(F^{c}\)-orbits not containing a solution to \(\frac{i}{2}([\alpha,\alpha^{*}]+[\beta,\beta^{*}])=\zeta_{1}\) are of higher codimension.
On the other hand, a solution in \(\mathcal{X}_{\bar{\zeta}}\) that does not a priori come from a solution in \(X_{\zeta}\) must have the form \((B,\Theta)\) with \(B\) not \(\mathcal{G}_{\tau}^{F,\Gamma}\)-equivalent to \(0\). Hence, it lies in a different connected component from the one containing the solutions coming from \(X_{\zeta}\) and is contained in a non-stable orbit of \(M\) when we identify the \(F^{c}\)-orbits of \(M\) in [17] with the \(\mathcal{G}_{\tau,\mathbb{C}}^{F,\Gamma}\)-orbits of \(\mathcal{C}\) by Lemma 5.7 and Remark 5.8. This tells us that the \(\mathcal{G}_{\tau,\mathbb{C}}^{F,\Gamma}\)-orbits that do not a priori contain a solution coming from Kronheimer's construction must be of higher codimension in the moduli space.
Proof of main theorem Part II.: We want to argue that there are no additional solutions in the gauge-theoretic moduli space \(\mathcal{X}_{\bar{\zeta}}\) beyond the solutions coming from \(X_{\zeta}\) in [17]. We know that if the gauge group acts freely at a solution, then it must come from Kronheimer's construction, by the previous lemma and dimension calculations. But by Lemma 7.1, we know that the gauge group \(\mathcal{G}_{\tau}^{F,\Gamma}\) acts freely on the space of solutions when \(\zeta\) does not lie in \(D_{\xi}\), which is
precisely the assumption we have. Hence, all the solutions in \(\mathcal{X}_{\tilde{\zeta}}\) must come from \(X_{\zeta}\). Hence, they are equal, and \(\Phi:X_{\zeta}\to\mathcal{X}_{\tilde{\zeta}}\) is an isometry.
We have concluded the proof of the main theorem, and we will end this section by providing the proof of Proposition 3.4.
Proof of Proposition 3.4.: This proof follows essentially the same arguments as those of the proof of Theorem 3.8. First, observe that 3.1 and 3.2 reduce to \(\frac{i}{2}([\alpha,\alpha^{*}]+[\beta,\beta^{*}])=\zeta_{1}\) when we set \(B=0\). Hence, by Lemmas 5.7 and 5.9, we again have that the space of solutions satisfying \(\frac{i}{2}([\alpha,\alpha^{*}]+[\beta,\beta^{*}])=\zeta_{1}\) lies inside \(\mathcal{M}(\Gamma,\tilde{\zeta_{1}})\) as a subset. Since we assume that we are choosing \(\tilde{\zeta_{1}}\) such that the action of the gauge group \(\mathcal{G}^{F,\Gamma}\) on the space of solutions to 3.1 and 3.2 is free, we then know that \(\mathcal{M}(\Gamma,\tilde{\zeta_{1}})\) is smooth. Hence again, by Proposition 6.4, we know that there cannot be any additional solutions in \(\mathcal{M}(\Gamma,\tilde{\zeta_{1}})\), and we get the desired conclusion.
|
2303.01794
|
Hitachi at SemEval-2023 Task 3: Exploring Cross-lingual Multi-task
Strategies for Genre and Framing Detection in Online News
|
This paper explains the participation of team Hitachi to SemEval-2023 Task 3
"Detecting the genre, the framing, and the persuasion techniques in online news
in a multi-lingual setup.'' Based on the multilingual, multi-task nature of the
task and the low-resource setting, we investigated different cross-lingual and
multi-task strategies for training the pretrained language models. Through
extensive experiments, we found that (a) cross-lingual/multi-task training, and
(b) collecting an external balanced dataset, can benefit the genre and framing
detection. We constructed ensemble models from the results and achieved the
highest macro-averaged F1 scores in Italian and Russian genre categorization
subtasks.
|
Yuta Koreeda, Ken-ichi Yokote, Hiroaki Ozaki, Atsuki Yamaguchi, Masaya Tsunokake, Yasuhiro Sogawa
|
2023-03-03T09:12:55Z
|
http://arxiv.org/abs/2303.01794v2
|
Hitachi at SemEval-2023 Task 3: Exploring Cross-lingual Multi-task Strategies for Genre and Framing Detection in Online News
###### Abstract
This paper explains the participation of team _Hitachi_ to SemEval-2023 Task 3 "_Detecting the genre, the framing, and the persuasion techniques in online news in a multi-lingual setup._" Based on the multilingual, multi-task nature of the task and the low-resource setting, we investigated different cross-lingual and multi-task strategies for training the pre-trained language models. Through extensive experiments, we found that (a) cross-lingual/multi-task training, and (b) collecting an external balanced dataset, can benefit the genre and framing detection. We constructed ensemble models from the results and achieved the highest macro-averaged F1 scores in Italian and Russian genre categorization subtasks.
## 1 Introduction
As we pay more and more attention to socially impactful problems like COVID-19 and the Russo-Ukrainian war, there has been increasing concern about an _infodemic_ of false and misleading information (Piskorski et al., 2023). In particular, cross-lingual understanding of such information is becoming more important due to the polarization of political stances, economic decoupling and the echo chamber effect in social media. To that end, Piskorski et al. (2023) put together SemEval-2023 Task 3 "_Detecting the genre, the framing, and the persuasion techniques in online news in a multi-lingual setup._" The shared task aims to analyze several aspects of what makes a text persuasive and to foster development of building blocks for multilingual media analysis.
Creating annotated data for media analysis is time consuming; thus, we cannot assume that we can obtain training data of sufficient quality and quantity. To tackle the problem, we investigated and compared strategies for multilingual media analysis under a low-resource setting. Through extensive experiments, we found that (a) cross-lingual/multi-task training, and (b) collecting an external balanced dataset, can benefit genre and framing detection. We constructed ensemble models from the results and participated in genre categorization (subtask 1) and framing detection (subtask 2) in six languages, where we achieved the highest macro-averaged F1 scores in Italian and Russian subtask 1.
## 2 Task Definition and our Strategy
SemEval-2023 Task 3 aims to analyze several aspects of what makes a text persuasive. It offers three subtasks on news articles in six languages: German (de), English (en), French (fr), Italian (it), Polish (pl) and Russian (ru). There are three additional languages (Georgian, Greek and Spanish) without training datasets (i.e., participants need to perform zero-shot language transfer).
**Subtask 1: News genre categorization**: Given a news article, a system has to determine whether it is an opinion piece, it aims at objective news reporting, or it is a satire piece. This is multi-class document classification and the official evaluation measure is macro average F1 score (macro-F1) over the three classes.
**Subtask 2: Framing detection**: Given a news article, a system has to identify what key aspects (frames) are highlighted in the rhetoric from 14 frames (see (Card et al., 2015) for the taxonomy and definitions). This is multi-label document classification and the official evaluation measure is micro average F1 score (micro-F1) over the 14 frames.
**Subtask 3: Persuasion techniques detection**: Given a news article, a system has to identify the persuasion techniques in each paragraph from 23 persuasion techniques. This is multi-label paragraph classification.
The target articles are those identified to be
potentially spreading mis-/disinformation and are collected from 2020 to mid-2022. They revolve around widely discussed topics such as COVID-19, migration, the build-up leading to the Russo-Ukrainian war, and some country-specific local events such as elections.
We observed that the number of articles is limited for the relatively large label space and that there exist considerable overlaps of articles between subtasks 1 and 2 (see Appendix A.1). Hence, we decided to investigate if models trained on multiple languages or another subtask can benefit the target task in this low-resource setting (Section 5). Since subtasks 1 and 2 conveniently share the task format, we opted to participate in subtasks 1 and 2 in all the six languages.
We also noticed that the English training dataset exhibits a significantly different label distribution from the other languages and is unbalanced. Hence, we decided to collect an additional external dataset for English subtask 1 in the hope of improving task performance in English and helping with other languages through cross-lingual training (Section 3).
## 3 External Data for English Genre Categorization
In a preliminary analysis of the English subtask 1 dataset, we found that the label distribution is quite unbalanced and differs between the training and the development data. Therefore, we did not make any assumption about the distribution of the test data and decided to increase the number of articles with rare labels in order to create a new, balanced dataset for English genre categorization. First, we undersampled articles from the training dataset for subtask 1 such that the numbers of articles for each label are equal, i.e., ten articles for each label.
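A minimal sketch of this label-balanced undersampling step (our own illustration; the `articles` structure and helper names are assumptions, not the authors' actual preprocessing code):

```python
import random
from collections import defaultdict

def undersample_balanced(articles, per_label=10, seed=0):
    """Keep at most `per_label` articles per genre label.

    `articles` is assumed to be a list of (text, label) pairs; the real
    pipeline may store documents differently.
    """
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for text, label in articles:
        by_label[label].append((text, label))
    balanced = []
    for label, items in by_label.items():
        rng.shuffle(items)
        balanced.extend(items[:per_label])
    rng.shuffle(balanced)
    return balanced

# e.g. ten articles for each of "satire", "reporting" and "opinion":
# subset = undersample_balanced(train_articles, per_label=10)
```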
We referred to a survey on fake news detection datasets (D'Ulizia et al., 2021) and checked a total of 27 datasets to see if they can be converted to subtask 1 dataset format using the following criteria:
**Label similarity** We checked whether the labels defined by an external dataset are close to subtask 1. For example, we focused on whether they used identical label names, such as "satire".
**Text similarity** We checked if the text type of a dataset is similar to subtask 1, such as whether they use news articles.
**Task similarity** We checked whether the task setting employed by a dataset is a method of classifying them into different classes rather than, for example, scoring them with a scale of 1 to 5.
After these checks, we adopted the Random Political News Data (Horne and Adali, 2017), which contains 75 articles for each of the three labels. We added these 225 articles to the 30 sampled original articles and constructed the _Augmented (small)_ dataset, which contains 255 articles in total.
Since Horne and Adali (2017) disclose the news media from which the data was collected, we independently collected around 1,000 additional articles from the sources shown in Table 1. However, we found in a preliminary experiment that overloading the dataset with external sources did not improve the performance. Hence we sampled 31 satire articles from the collected data and sampled more articles from the original dataset. This resulted in _Augmented (large)_ with 348 articles altogether. The final compositions of the augmented datasets are summarized in Table 2.
Since Horne and Adali (2017) considered English articles only, we were only able to obtain external data for English subtask 1. Nevertheless, the augmented data might be able to benefit non-English and subtask 2 datasets through pretraining on the augmented English dataset.
## 4 Cross-lingual Multi-task Transformers
We utilized pretrained language models (PLMs) in a simple sequence classification setup (Devlin et al., 2019). We employed XLM-RoBERTa large (Conneau et al., 2020) and RemBERT (Chung et al., 2021). For English, we also utilized RoBERTa
\begin{table}
\begin{tabular}{l l}
\hline
Label & News media \\
\hline \hline
Satire & The Onion, Huffington Post Satire, Borowitz Report, \\
 & The Beaverton, Satire Wire, and Faking News \\
Reporting & Wall Street Journal, The Economist, BBC, NPR, ABC, CBS, USA Today, The Guardian, NBC, The Washington Post \\
Opinion & Ending The Fed, True Pundit, abcnews.com.co, DC Gazette, \\
 & Liberty Writers News, Before its News, InfoWars, Real News Right Now \\
\hline \hline
\end{tabular}
\end{table}
Table 1: The list of news media that we independently collected the data from
\begin{table}
\begin{tabular}{l c c c c c c c c c c}
\hline \hline
 & \multicolumn{3}{c}{Satire} & \multicolumn{3}{c}{Reporting} & \multicolumn{3}{c}{Opinion} & \\
\cline{2-10}
Dataset name & O & E & C & O & E & C & O & E & C & Total \\
\hline
Original & 10 & 0 & 0 & 41 & 0 & 0 & 382 & 0 & 0 & 433 \\
Augmented (small) & 10 & 75 & 0 & 10 & 75 & 0 & 10 & 75 & 0 & 255 \\
Augmented (large) & 10 & 75 & 31 & 41 & 75 & 0 & 41 & 75 & 0 & 348 \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Number of articles from different sources in the original and our augmented datasets
large (Liu et al., 2019) for single-language multi-task training. In order to allow multi-task training, we added a classifier head for each subtask2 on top of each model's [CLS] token followed by softmax for subtask 1 and sigmoid for subtask 2. Hence, each model shares most of the parameters for the two subtasks. We used the Transformers library (Wolf et al., 2020) for the implementations.
Footnote 2: dropout\(\rightarrow\)linear\(\rightarrow\)tanh\(\rightarrow\)dropout\(\rightarrow\)linear for (XLM-) RoBERTa, and dropout\(\rightarrow\)linear for RemBERT
In multi-task training we simply took the sum of the losses from the two tasks. Since there exist articles that have labels for only one of subtask 1 or 2, we exclude predictions for missing labels from the loss calculation. For cross-lingual training, we concatenate all articles. All parameters are shared for the same subtask in different languages. For preprocessing, we concatenated all sentences from each article and tokenized them using the default tokenizer for each PLM. We truncated articles whose tokens do not fit into each model's maximum context size.
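A minimal PyTorch-style sketch of this setup (our own reading of the description above; the single-linear heads, the missing-label conventions and all names are assumptions rather than the authors' actual code, and the real heads follow footnote 2):

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class MultiTaskClassifier(nn.Module):
    """Shared encoder with one classification head per subtask."""
    def __init__(self, model_name="xlm-roberta-large", n_genres=3, n_frames=14):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        self.genre_head = nn.Linear(hidden, n_genres)  # subtask 1: softmax/CE
        self.frame_head = nn.Linear(hidden, n_frames)  # subtask 2: sigmoid/BCE

    def forward(self, input_ids, attention_mask):
        cls = self.encoder(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state[:, 0]
        return self.genre_head(cls), self.frame_head(cls)

def multitask_loss(genre_logits, frame_logits, genre_labels, frame_labels):
    """Sum of the per-task losses; articles lacking a label for a task are
    masked out of that task's loss (genre label -1 / frame NaN mark "missing"
    in this sketch)."""
    loss = genre_logits.new_zeros(())
    has_genre = genre_labels >= 0
    if has_genre.any():
        loss = loss + nn.functional.cross_entropy(
            genre_logits[has_genre], genre_labels[has_genre])
    has_frame = ~torch.isnan(frame_labels).any(dim=1)
    if has_frame.any():
        loss = loss + nn.functional.binary_cross_entropy_with_logits(
            frame_logits[has_frame], frame_labels[has_frame])
    return loss
```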
## 5 Exploring Cross-lingual Multi-task Strategies
It is empirically known that further fine-tuning a model trained in a multi-task or cross-lingual setting on the target downstream task/language improves its performance (Koreeda et al., 2019). Also, models tend to require different hyperparameters for different training paradigms or languages. Hence, we decided to explore different multi-task and cross-lingual strategies through a series of random hyperparameter searches (Figure 1).
First, we ran a random hyperparameter search in cross-lingual and/or multi-task settings (Stage I). Regarding the resulting Stage I models as an additional hyperparameter, we ran another random hyperparameter search to optimize the choice of the pretraining paradigm along with other hyperparameters (Stage II). Finally, we construct an ensemble for each language-subtask pair from all models in Stage I and II using their performance in the development dataset.
Unlike more sophisticated hyperparameter search methods, this approach has an advantage that we can compare and evaluate different training paradigms post hoc.
The choice of the subtask 1 English datasets (Section 3) is also incorporated as an additional hyperparameter. The hyperparameter search spaces are shown in Appendix A.2.
We used the official development dataset for each language-subtask pair in order to calculate and compare the performance of each model.
### Stage I Training
In Stage I, we fine-tuned PLM in three settings.
1. Multi-task (30 hyperparameter sets for each language \(=180\) models)
2. Cross-lingual (50 hyperparameter sets for each subtask \(=100\) models)
3. Cross-lingual multi-task (50 hyperparameter sets \(=50\) models)
Hence, we trained 330 models in Stage I.
### Stage II Training
Stage I results in three groups of models that have been trained on each language-subtask pair. For example, "en subtask 1" in Figure 1 has incoming arrows from (1) multi-task ("en subtask 1+2"), (2) cross-lingual ("de+en+fr+it+pl+ru subtask 1"), and (3) cross-lingual multi-task ("de+en+fr+it+pl+ru subtask 1+2"). We also utilize vanilla PLMs for Stage II training (see the arrow from RoBERTa).
For each language-subtask pair, we picked four models from each group, resulting in 12 models for each language-subtask pair. The four models were
Figure 1: Series of random hyperparameter searches for exploring cross-lingual multi-task strategies. It only shows English subtask 1 and French subtask 2 but we did the same for all other languages in subtask 1 and 2.
chosen whose macro-F1, micro-F1, ROC-AUC or mAP was the best in the development dataset for the target language-subtask pair. This means that the same model can be chosen multiple times (e.g., a model which was the best in both macro-F1 and micro-F1). We did not remove the duplicates in that case -- such a model will be sampled twice as often as a model which was the best only in a single metric.
Regarding these Stage I models and vanilla PLMs as an additional hyperparameter, we carried out Stage II random hyperparameter search on each language. We sampled Stage I models three times more than PLMs, so that all groups (i.e., the four arrows entering "en subtask 1" in Figure 1) are sampled equally. We trained 50 models for each language-subtask pair (50 models \(\times\) 6 languages \(\times\) 2 subtasks \(=\) 600 models).
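A sketch of how the choice of initialization can be folded into the random search as just another hyperparameter (illustrative only; the equal sampling of the four groups follows the text, while the hyperparameter grids and names are assumptions):

```python
import random

def sample_stage2_config(stage1_groups, vanilla_plms, rng):
    """Pick one Stage II configuration.  `stage1_groups` is assumed to be a
    list of three lists of Stage I checkpoints (multi-task, cross-lingual,
    cross-lingual multi-task); the four groups (three Stage I groups plus the
    vanilla PLMs) are sampled with equal probability, so Stage I models are
    drawn three times as often as PLMs overall."""
    groups = list(stage1_groups) + [list(vanilla_plms)]
    init_pool = rng.choice(groups)
    return {
        "init_model": rng.choice(init_pool),
        "learning_rate": rng.choice([5e-6, 1e-5, 2e-5, 3e-5]),  # assumed grid
        "batch_size": rng.choice([8, 16, 32]),                   # assumed grid
        "num_epochs": rng.choice([5, 10, 20]),                   # assumed grid
    }

# e.g. the 50 configurations trained for one language-subtask pair:
# rng = random.Random(0)
# configs = [sample_stage2_config(stage1_groups, ["xlm-roberta-large"], rng)
#            for _ in range(50)]
```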
### Ensembling
Finally, we created an ensemble for each language-subtask pair from the results of the hyperparameter search. In rare cases, fine-tuning the model on the downstream task can degrade the performance. Hence, we also considered the Stage I models for the ensemble.
We implemented multiple ensemble methods. Due to the scarcity of the development data, the results tend to be unstable. Hence, we manually chose the best one for each language-subtask pair while monitoring multiple leave-one-out metrics on the development dataset. This allowed us to choose models that do not merely overfit to a single metric. The details of ensembling are described in Appendix A.3.
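A small sketch of one plausible reading of this evaluation loop (our own illustration; the actual ensemble methods are those of Appendix A.3, which we do not reproduce here):

```python
import numpy as np
from sklearn.metrics import f1_score

def ensemble_probs(member_probs):
    """Average the members' class-probability matrices (list of n x k arrays)."""
    return np.mean(np.stack(member_probs, axis=0), axis=0)

def leave_one_out_score(member_probs, labels, average="macro"):
    """Score the averaged ensemble once per left-out dev article and return
    the mean, to get a less noisy estimate on a tiny development set."""
    probs = ensemble_probs(member_probs)
    labels = np.asarray(labels)
    scores = []
    for i in range(len(labels)):
        keep = np.arange(len(labels)) != i
        scores.append(f1_score(labels[keep], probs[keep].argmax(axis=1),
                               average=average))
    return float(np.mean(scores))
```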
## 6 Results
### Subtask 1: News Genre Categorization
Excerpts from the official leaderboards (Piskorski et al., 2023) for subtask 1 are shown in Table 3. We placed first in Italian and Russian and within the top three in French and Polish. In an unofficial ranking by the mean macro-F1 over the six languages, we placed first (Table 4).
In Figure 2, we show macro-F1 on the development dataset for all the models considered for the ensemble construction. In all six languages, the models fine-tuned from cross-lingual and/or multi-task pretraining tend to perform better (i.e., have better median macro-F1) than the single language/task models trained from the PLM ("PLM \(\rightarrow\) Stage II"). This shows that cross-lingual multi-task training was overall useful for genre categorization. In most cases, fine-tuning Stage I models in Stage II yields better results than the vanilla Stage I models.
The breakdown of the performance based on how each model was pretrained in Stage I is also shown in Figure 2. The results are mixed as to which Stage I pretraining paradigms were useful to the Stage II downstream performance. In German, French and Italian, cross-lingual pretraining tends to be more beneficial than multi-task pretraining. In English, Polish and Russian, multi-task pretraining tends to be more beneficial. Interestingly, the combination of both was never the best option in any language.
We analyzed the effect of incorporating external, balanced datasets for English subtask 1 (Figure 3). When directly fine-tuning PLM in Stage II, we
\begin{table}
\begin{tabular}{l l c c}
\hline \hline
 & Team & macro F1 & micro F1 \\
\hline
**1** & **Hitachi** & **72.93** & **76.79** \\
2 & SheffieldVeraAI & 72.13 & 77.88 \\
3 & MELODI & 68.35 & 76.08 \\
4 & DSHacker & 67.58 & 73.52 \\
5 & UMTeam & 65.52 & 75.60 \\
6 & MLModeler5 & 61.63 & 62.96 \\
\hline \hline
\end{tabular}
\end{table}
Table 4: An unofficial subtask 1 leaderboard sorted by the mean macro-F1 over six languages (from Table 3)
\begin{table}
\begin{tabular}{l l c c c l c c}
\hline \hline
 & Team & macro & micro & & Team & macro & micro \\
\hline
1 & UMTeam & 81.95 & 82.00 & 1 & MELODI & 78.43 & 81.48 \\
1 & SheffieldVeraAI & 81.95 & 82.00 & 2 & MLModeler5 & 61.63 & 62.96 \\
5 & MELODI & 77.89 & 78.00 & 6 & Unisa & 58.62 & 61.11 \\
6 & **Hitachi** & **77.66** & **76.00** & 7 & **Hitachi** & **55.29** & **59.26** \\
7 & FTD & 71.27 & 72.00 & 8 & UnedMediaBiasTeam & 52.36 & 57.41 \\
\hline \hline
\multicolumn{4}{c}{(a) German (15 teams)} & \multicolumn{4}{c}{(b) English (22 teams)} \\
\hline \hline
 & Team & macro & micro & & Team & macro & micro \\
\hline
**1** & **Hitachi** & **76.83** & **85.25** & 1 & FTD & 78.55 & 93.62 \\
2 & QUST & 76.68 & 83.61 & 2 & **Hitachi** & **77.92** & **87.23** \\
3 & DSHacker & 72.04 & 83.61 & 3 & SheffieldVeraAI & 76.45 & 85.11 \\
3 & SheffieldVeraAI & 72.04 & 83.61 & 4 & MELODI & 70.86 & 85.11 \\
5 & MELODI & 77.65 & 75.41 & 5 & UMTeam & 66.43 & 80.85 \\
6 & UnedMediaBiasTeam & 58.41 & 62.30 & 6 & SinanaAI & 66.35 & 80.85 \\
\hline \hline
\end{tabular}
\end{table}
Table 3: An excerpt from the official leaderboard for subtask 1 showing the rank, macro-F1 and micro-F1 on the single official run on the test split in each language
see that models using either of the augmented datasets tend to be considerably better than ones trained on the original training data. For both conditions, we can see that Augmented (large) tends to perform better than Augmented (small). This shows that obtaining a balanced dataset is important for news genre categorization.
### Subtask 2: Framing Detection
Excerpts from the official leaderboards (Piskorski et al., 2023) for subtask 2 are shown in Table 5. We placed third to fifth in all languages except Russian, where we obtained ninth place.
In Figure 4, we show micro-F1 on the development dataset for all the models considered for the ensemble construction. As in subtask 1, the models fine-tuned from cross-lingual and/or multi-task pretraining tend to perform better than the single language/task models fine-tuned directly from the PLM. This suggests that cross-lingual multi-task training was also useful for framing detection.
In all languages in subtask 2, cross-lingual Stage I pretraining tends to result in better micro-F1 than multi-task pretraining (Figure 4). We suspect that this stems from the difference in the linguistic nature of the two subtasks; framing can be determined by lexical semantics to some extent, and hence transfers well across different languages with multilingual transformers. On the other hand, distinguishing the genre requires capturing language-specific pragmatics, which may be the reason why it did not transfer between languages as effectively as subtask 2.
## 7 Related Work
There exist several studies on multilingual and/or multi-task learning in the context of media analysis. Uyangodage et al. (2021) applied cross-lingual training to multilingual false information detection and showed that cross-lingual training results in performance on par with monolingual training. Alam et al. (2021, 2021) annotated Tweets in multiple languages with multiple-choice questions regarding COVID-19 disinformation. They fine-tuned the mBERT model (Devlin et al., 2019) in a cross-lingual setup and in a multi-task setup, and found that the results are mixed in terms of their benefits. The SemEval 2023 Task 3 dataset (Piskorski et al., 2023) exhibits significantly different properties from the datasets previous works used, and we found that cross-lingual and/or multi-task learning helps on this dataset. We also carried out more thorough experiments than the previous works, which might have been the key to the improvement.
There also exist different approaches for multilingual and low-resource settings in broader domains. Reimers and Gurevych (2020) proposed a method for training multilingual sentence embeddings and demonstrated its effectiveness in 50+ languages. In a low-resource setup, Heinisch et al. (2022) used a data augmentation approach that retrieves additional data from related datasets by automatically labeling them. We only explored BERT-based approach in this work, but we wish to explore other approaches for cross-lingual multi-task learning in the future.
Figure 3: The effect of different training datasets in the development dataset for English subtask 1
Figure 2: Comparison of subtask 1 macro-F1 (the development dataset) under different training paradigms (CL: cross-lingual/MT: multi-task)
## 8 Conclusion
In our participation in SemEval-2023 Task 3, we investigated different strategies for multilingual genre and framing detection. Through the extensive experiments, we found that collecting an external balanced dataset can help genre categorization. We also found that cross-lingual and multi-task training can help both genre and framing detection, and that cross-lingual training is more beneficial for framing detection. We constructed ensemble models from the results and achieved the highest macro-F1 in Italian and Russian genre detection.
For future work, we will investigate the effect of cross-lingual multi-task training on zero-shot language transfer (the Spanish and Georgian subtasks that we did not participate in), as well as the effect on and benefit from training models on persuasion techniques detection (subtask 3).
## Acknowledgements
We used computational resource of AI Bridging Cloud Infrastructure (ABCI) provided by the National Institute of Advanced Industrial Science and Technology (AIST) for the experiments.
We would like to thank Gaku Morio and Yuichi Sasazawa for their valuable feedbacks. We would also like to thank Dr. Masaaki Shimizu for helping us with the computational resources.
|
2307.05005
|
Independent sets of non-geometric lattices and the maximal adjoint
|
We present a construction of a family of independent sets for a finite,
atomic and graded lattice generalizing the known cryptomorphism between
geometric lattices and matroids. This family then gives rise to an embedding
theorem into geometric lattices preserving the set of atoms. Lastly we apply
these theorems to the concept of adjoint matroids obtaining a characterization
and proving a conjecture on the combinatorial derived madtroid for uniform and
co-rank 3 matroids.
|
Or Raz
|
2023-07-11T03:56:36Z
|
http://arxiv.org/abs/2307.05005v1
|
# Independent sets of non-geometric lattices and the maximal adjoint
###### Abstract.
We present a construction of a family of independent sets for a finite, atomic and graded lattice generalizing the known cryptomorphism between geometric lattices and matroids. This family then gives rise to an embedding theorem into geometric lattices preserving the set of atoms. Lastly we apply these theorems to the concept of adjoint matroids obtaining a characterization and proving a conjecture on the combinatorial derived matroid for uniform and co-rank 3 matroids.
**Disclaimer:** I am aware of the recently introduced work "Adjoints of Matroids" by Houshan Fu, Chunming Tang and Suijie Wang containing some of the same results. This document was researched and written before the author was aware of their work, except for part of the last section which is very different. The methods used in both works are different and contain partly different results.
## 1. Introduction
The construction of adjoint matroids was introduced in 1974 by Alan L. C. Cheung in [2] as an attempt to extend the duality principle for projective space, i.e., the duality between points and hyperplanes. In the same vein, the set of matroids with an adjoint can be viewed as a generalization of the set of linear matroids. In recent years there has been renewed interest in adjoints coming from linear codes and dependencies between circuits of a matroid. As shown in [2], [3], [4] and [5] there does not always exist an adjoint, and even if it does exist it may not be unique.
The case of representable matroids (over an arbitrary field) was developed simultaneously in two separate works [7] and [8], showing (in part) that every representable matroid has a representable adjoint corresponding to its representation. In [6] the combinatorial derived matroid was introduced in an attempt to produce a generalization of an adjoint to arbitrary matroids.
In [10] Andreas Blass and Bruce E. Sagan introduced the concept of \(NBB\) sets as a way to calculate the Möbius function of a finite and bounded lattice by ordering the atoms of the lattice. These sets can be thought of as a generalization of \(NBC\) sets for ordinary matroids. In [9] the family of \(NBB\) sets on all linear orders on the atoms of the lattice was used to form a family of independent sets and show that if it were a family of sets of a matroid then the lattice could be order embedded into the corresponding geometric lattice. We extend these results here and apply them to the construction of the adjoint matroid.
The structure of this paper is as follows. Section 1 is dedicated to preliminaries on lattices, matroids and the adjoint matroid. In section 2 we discuss the family
of independent sets of a finite, atomic and graded lattice, recalling results from [9] and presenting a new embedding theorem. In section 3 we show the effects of basic lattice operations on its family of independent sets and a class of special maximal independent sets. Section 4 shifts to working with adjoint matroids, centred around characterization theorems of adjoints as an application of the family of independent sets. In section 5 we present some partial results showing that the combinatorial derived matroid is an adjoint and the existence of a maximal adjoint.
Throughout the paper we use lower case letters when referring to elements, upper case letters when referring to sets, upper case letters in calligraphic font when referring to families of sets, and upper case letters in gothic font when referring to collections of families. For example we may have \(x\in X\in\mathcal{X}\in\mathfrak{X}\).
## 2. Preliminaries
We start with some basic lattice properties; for more extensive background on matroids and geometric lattices see [1]. A partially ordered set \(\mathcal{L}\) in which every pair of elements has a unique supremum (also called join and denoted \(\vee\)) and a unique infimum (also called meet and denoted \(\wedge\)) is called a lattice. We say that a lattice is bounded if it has a greatest element (denoted \(\hat{1}\)) and a least element (denoted \(\hat{0}\)).
### Definition:
The dual lattice \(\mathcal{L}^{opp}\) of \(\mathcal{L}\) is defined to be the lattice with the same underlying set and the reverse order.
We say \(B\) covers \(A\) in \(\mathcal{L}\) and denote \(A\lessdot B\) if \(A<B\) and for all \(C\) such that \(A\leq C\leq B\) we have \(C=B\) or \(C=A\). \(A\) is an atom if it covers \(\hat{0}\) and a co-atom if covered by \(\hat{1}\). A bounded lattice is called atomic if every element is a join of atoms and co-atomic if every element is a meet of co-atoms. We denote the set of atoms by \(A(\mathcal{L})\).
Finally, we say that a bounded lattice is graded if there exists a rank function \(rank:\mathcal{L}\rightarrow\mathbb{N}\) with the following properties:
1. \(rank(\hat{0})=0\).
2. Compatibility with the order of \(\mathcal{L}\): \(A\leq B\Rightarrow rank(A)\leq rank(B)\).
3. Compatibility with the covering relation of \(\mathcal{L}\): \(A\lessdot B\Rightarrow rank(B)=rank(A)+1\).
### Definition:
A geometric lattice is a finite, atomic and graded lattice such that \(rank(A)+rank(B)\geq rank(A\lor B)+rank(A\wedge B)\).
Moving on to some basic matroid properties (for more details, see [1]).
We will be following the standard conventions: a matroid \(\mathcal{M}\) will refer to the set of bases of the matroid, and we denote by \(I(\mathcal{M}),\mathcal{C}(\mathcal{M}),\mathcal{L}(\mathcal{M}),E(\mathcal{M})\) the family of independent sets, the collection of circuits, the corresponding geometric lattice and the ground set respectively. When it is obvious to which matroid we are referring, we will omit \(\mathcal{M}\), for example, \(\mathcal{L}\) instead of \(\mathcal{L}(\mathcal{M})\).
An element \(a\in E\) will be called a loop of \(\mathcal{M}\) if \(a\notin I\) for every \(I\in\mathcal{M}\). Two elements \(a,b\in E\) will be called parallel if both are not loops and \(\{a,b\}\not\subseteq I\) for
every \(I\in\mathcal{M}\). We will say that a matroid is simple if it has no loops or parallel elements.
### Definition:
Given two matroids \(\mathcal{M}_{1},\mathcal{M}_{2}\) on the ground set \(E\), we say \(\mathcal{M}_{1}\leq\mathcal{M}_{2}\) in the weak order of matroids if \(\mathcal{M}_{1}\subseteq\mathcal{M}_{2}\).
The dual matroid of \(\mathcal{M}\) denoted \(\mathcal{M}^{*}\) is the matroid on the same ground set, and in which a set is independent if and only if \(\mathcal{M}\) has a basis set disjoint from it. We denote the geometric lattice corresponding to \(\mathcal{M}^{*}\) by \(\mathcal{L}^{*}\) and a loop of \(\mathcal{M}^{*}\) will be called a co-loop.
A known property is that the complements of the hyperplanes (co-atoms of the geometric lattice) are precisely the circuits of the dual matroid. This property lets us view an extension of a dual lattice of a geometric lattice discussed in the following sections as a matroid with ground set \(\mathcal{C}(\mathcal{M})\). We formulate our definitions with this change in mind:
### Definition:
Let \(\mathcal{L}\) be a geometric lattice and \(\mathcal{L}^{opp}\) its dual poset. We say \(\mathcal{L}^{\triangle}\) is an adjoint of \(\mathcal{L}^{*}\) if there exists an extension of a bijection on the set of atoms \(f^{\prime}:A(\mathcal{L}^{opp})\to A(\mathcal{L}^{\triangle})\) to a rank preserving order embedding \(f:\mathcal{L}^{opp}\to\mathcal{L}^{\triangle}\). We say \(\mathcal{M}^{\triangle}\) is an adjoint matroid of \(\mathcal{M}\) if the corresponding geometric lattice of \(\mathcal{M}^{\triangle}\) is an adjoint of \(\mathcal{L}^{*}\).
The following two properties are known for adjoints (for example in [5])
1. \(A\lessdot B\in\mathcal{L}\Rightarrow f(A)\lessdot f(B)\)
2. \(f(A\wedge B)=f(A)\wedge f(B)\)
## 3. The family of independent sets and embedding theorems
We start with the definition of \(NBB\) sets as given in [10].
### Definition:
Let \((\mathcal{L},\omega)\) be a pair of a finite bounded lattice \(\mathcal{L}\) and a partial order \(\omega\) on its set of atoms \(A(\mathcal{L})\). A non-empty set \(D\subseteq A(\mathcal{L})\) of atoms is bounded below or \(BB\) if there exists \(a\in A(\mathcal{L})\) such that \(a\) is a strict lower bound for all \(d\in D\) in the order \(\omega\) and \(a\leq\lor D\) in \(\mathcal{L}\).
A set \(B\subseteq A(\mathcal{L})\) is called \(NBB\) if \(B\) does not contain any bounded below subset.
Given a bounded lattice \(\mathcal{L}\) and \(P\) the set of all linear orders on \(A(\mathcal{L})\) we define the family of independent sets \(I(\mathcal{L})\) of \(\mathcal{L}\) to be:
\[I\left(\mathcal{L}\right)=\left\{A\in P\left(A(\mathcal{L})\right)\mid\exists \omega\in P\text{ s.t. }A\text{ is an }NBB\text{ set in }\left(\mathcal{L},\omega\right)\right\}\]
We refer to elements in \(I(\mathcal{L})\) as independent sets of \(\mathcal{L}\); note that sets containing at most two atoms are always independent. The constructions in the section can be thought of as a generalization of the following observation:
### Theorem:
If \(\mathcal{L}\) is a geometric lattice then \(I(\mathcal{L})\) is the family of independent sets of the matroid corresponding to \(\mathcal{L}\).
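A brute-force sketch of the construction of \(I(\mathcal{L})\) above (our own illustration, not code from [9] or [10]; the lattice is assumed to be given by a join operation on sets of atoms and a comparison predicate, and the toy example recovers the independent sets of the rank \(2\) uniform matroid on the ground set \([3]\), consistent with Theorem 3.2):

```python
from itertools import combinations, permutations

def is_bounded_below(D, pos, join, leq):
    """D is BB if some atom lies strictly below every element of D in the
    chosen linear order and below the join of D in the lattice."""
    return any(all(pos[a] < pos[d] for d in D) and leq(a, join(D))
               for a in pos)

def is_nbb(B, pos, join, leq):
    """B is NBB if it contains no non-empty bounded-below subset."""
    return not any(is_bounded_below(D, pos, join, leq)
                   for r in range(1, len(B) + 1)
                   for D in combinations(B, r))

def independent_sets(atoms, join, leq):
    """I(L): sets of atoms that are NBB for at least one linear order."""
    indep = set()
    for order in permutations(atoms):
        pos = {a: i for i, a in enumerate(order)}
        for r in range(len(atoms) + 1):
            for B in combinations(atoms, r):
                if is_nbb(B, pos, join, leq):
                    indep.add(frozenset(B))
    return indep

# Toy example: the geometric lattice of the rank 2 uniform matroid on [3].
# Lattice elements are modelled as frozensets of atoms; any two atoms join
# to the top element {1, 2, 3}.
atoms = [1, 2, 3]
join = lambda D: frozenset(D) if len(set(D)) <= 1 else frozenset(atoms)
leq = lambda a, X: a in X
print(sorted(map(sorted, independent_sets(atoms, join, leq))))
# -> [[], [1], [1, 2], [1, 3], [2], [2, 3], [3]]  (all sets of size <= 2)
```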
For the remainder of this section we fix a finite, bounded, atomic and graded lattice \(\mathcal{L}\). If \(\lor A=X\in\mathcal{L}\) for a set of atoms \(A\subseteq A(\mathcal{L})\) we say \(A\) spans \(X\) and set \(rank(A)\coloneqq rank(X)\).
### Definition:
An independent set \(I\) of \(\mathcal{L}\) is called geometric if \(rank(I)=|I|\).
It was shown in [9] that the size of an independent set is always at most its rank and that every non-geometric independent set can be extended to a geometric independent set of the same rank; consequently, the rank of an element equals the size of a maximal independent set contained in it. The first embedding theorem was introduced in [9] and covers the case in which \(I(\mathcal{L})\) is a family of independent sets of a matroid.
### Theorem:
If \(I(\mathcal{L})\) is the family of independent sets of a matroid then there exists a rank preserving order embedding \(f:\mathcal{L}\to\mathcal{P}\) for which the restriction to the set of atoms \(f\mid_{A(\mathcal{L})}:A(\mathcal{L})\to A(\mathcal{P})\) is a bijection, where \(\mathcal{P}\) is the geometric lattice corresponding to \(I(\mathcal{L})\).
We continue with a formulation of an embedding theorem in the case that \(I(\mathcal{L})\) is not a family of independent sets of a matroid.
### Lemma:
Let \(f^{\prime}:A(\mathcal{L})\to A(\mathcal{P})\) be a bijection between the set of atoms of \(\mathcal{L}\) and the set of atoms of a geometric lattice \(\mathcal{P}\). If \(f^{\prime}\) can be extended to a rank preserving order embedding \(f:\mathcal{L}\to\mathcal{P}\) then \(f(I(\mathcal{L}))\subseteq I(\mathcal{P})\).
**Proof:** Let \(f:\mathcal{L}\to\mathcal{P}\) be such an embedding and choose a minimal \(I\in f(I(\mathcal{L}))\setminus I(\mathcal{P})\). Let \(a\in f^{-1}(I)\) with \(rank(f^{-1}(I)\setminus\{a\})<rank(f^{-1}(I))\), which is guaranteed by the definition of \(NBB\) sets. As \(I\setminus\{f(a)\}\) is an independent set of \(\mathcal{P}\) we must have
\[f(a)\leq_{\mathcal{P}}\vee_{\mathcal{P}}(I\setminus\{f(a)\})\leq_{\mathcal{P}}f(\vee_{\mathcal{L}}(f^{-1}(I)\setminus\{a\})),\]
which is a contradiction to the choice of \(a\).
### Corollary:
Let \(\mathcal{L}\) be a finite and graded lattice (not necessarily atomic) and \(\mathcal{P}\) a geometric lattice. If there exists a rank preserving order embedding \(f:\mathcal{L}\to\mathcal{P}\) with \(f\mid_{A(\mathcal{L})}\) an injection into \(A(\mathcal{P})\) then \(f(I(\mathcal{L}))\subseteq I(\mathcal{P})\).
### Theorem:
Let \(f^{\prime}:A(\mathcal{L})\to A(\mathcal{P})\) be a bijection between the set of atoms of \(\mathcal{L}\) and the set of atoms of a geometric lattice \(\mathcal{P}\) of the same rank. \(f^{\prime}\) can be extended to a rank preserving order embedding \(f:\mathcal{L}\to\mathcal{P}\) iff \(f(I(\mathcal{L}))\subseteq I(\mathcal{P})\) and \(rank_{\mathcal{L}}(\vee A)=rank_{\mathcal{P}}(\vee f(A))\) for every geometric independent set \(A\in I(\mathcal{L})\).
**Proof:** If there exists such a function \(f\) then Lemma [3.5] implies the first condition. The second condition holds as the rank of an element is the size of a geometric independent set spanning it and \(f\) is rank preserving.
In the other direction we define \(f(X)\coloneqq\vee_{\mathcal{P}}(f(A(X)))\). First notice that \(rank(X)\leq rank(f(X))\) for all \(X\in\mathcal{L}\), as \(f(X)\) contains the image of a spanning geometric set of \(X\). We can now deduce that \(f\) is order preserving, as otherwise we would have a chain longer than \(rank(\mathcal{L})\) in \(\mathcal{P}\). To see that \(f\) is an embedding, let \(X\in\mathcal{L}\), \(a\in A(\mathcal{L})\setminus A(X)\) and \(B\) a geometric independent set spanning \(X\), and suppose toward a contradiction that \(f(a)\leq f(X)\). Then \(f(B)\cup\{f(a)\}\in I(\mathcal{P})\) is an independent set of size \(rank(X)+1\) in \(f(X)\), which contradicts \(f\) being rank preserving.
## 4. Lattice operations
In this section we present the effects of basic lattice operations on its family of independent sets.
Let \(\mathcal{L}\) be a finite, atomic and graded lattice and \(A\in I(\mathcal{L})\) an independent set. By definition \(A\) is NBB with respect to some linear order; notice that we can always choose this linear order to start with the atoms in \(A\). We denote a linear order starting with \(A\) and for which \(A\) is \(NBB\) by \(\omega_{A}\). Let \(A=B\sqcup C\) be a partition of \(A\) for which \(A\) is NBB with respect to a linear order \(\omega_{A}\) starting with the atoms in \(B\). We denote such an order by \(\omega_{B,C}\).
### Definition:
Let \(\mathcal{L}\) be a finite, atomic and graded lattice. we define the following basic lattice operations:
* Let \(X\in\mathcal{L}\) be an element of rank \(\geq 1\). The restriction of \(\mathcal{L}\) to \(X\) is the finite, atomic and graded lattice \([\hat{0},X]\).
* Let \(X\in\mathcal{L}\) be an element of rank \(\leq rank(\mathcal{L})-1\). The contraction of \(\mathcal{L}\) to \(X\) is the finite and graded lattice \([X,\hat{1}]\).
* Let \(m\leq rank(\mathcal{L})\). The truncation of \(\mathcal{L}\) by \(m\) is the finite, atomic and graded lattice \(T(\mathcal{L},m)=\{X\in\mathcal{L}\mid rank(X)<m\}\cup\{A(\mathcal{L})\}\).
* The dual \(\mathcal{L}^{opp}\) of \(\mathcal{L}\) is the finite and graded lattice with the same underlying set and the reverse order.
### Theorem:
The basic lattice operations above affect the family of independent sets of \(\mathcal{L}\) in the following way:
1. \(I([\hat{0},X])=\{I\cap A(X)\mid I\in I(\mathcal{L})\}\) with \(A(X)\) denoting the set of atoms \(\leq X\).
2. \(I([X,\hat{1}])\cong(\{I\mid\exists\omega_{I,B_{X}}\}\cap(\cup_{X\preccurlyeq Y}A \left(Y\right)))/_{\sim_{X}}\) with \(B_{X}\) a maximal independent set spanning \(X\) and \(\sim_{X}\) being the equivalence relation having \(I\sim_{X}J\) if there exist \(X\lessdot Y\) such that \(I\setminus\{a\}=J\setminus\{b\}\) for some \(a,b\in A(Y)\setminus A(X)\).
3. \(I(T(\mathcal{L},m))=\{I\in I(\mathcal{L})\mid rank(I)\leq m\}\).
4. If \(\mathcal{L}\) is co-atomic then a set of coatoms \(\mathcal{X}\subset A(\mathcal{L}^{opp})\) is a geometric independent set iff there exist a linear order on \(\mathcal{X}=\{X_{1},...,X_{n}\}\) and a geometric independent set \(\{a_{1},...,a_{d-1}\}\) of \(\mathcal{L}\) with \(\omega_{\{a_{1},...,a_{n}\}}\) such that \(\wedge_{i=1}^{t}X_{i}=\vee_{i=1}^{d-t}a_{i}\) for all \(t\in[n]\).
**Proof:** (1) follows directly from the fact that any linear order of the restriction can be extended to a linear order of \(\mathcal{L}\) and vice versa.
(3) is also obvious as a rank \(k\) set of atoms being independent only depends on the rank \(\leq k\) elements of \(\mathcal{L}\).
For (2) we first observe that if \(\omega_{I,B_{X}}\) exist then \(I\) cannot contain two different atoms \(a,b\in A(Y)\setminus A(X)\) for \(X\lessdot Y\). Therefore every atom in \(I\) corresponds to a unique atom of \([X,\hat{1}]\). Consequently, by abuse of notation, we have \(\omega_{I}\) a linear order in which \(I\) is \(NBB\) in \([X,\hat{1}]\). In the other direction we again have every atom \(Y\) of \([X,\hat{1}]\) corresponding to a unique atom of \(\mathcal{L}/_{\sim_{X}}\) by choosing any \(a\in A(Y)\setminus A(X)\). Taking an order for which \(I\) is \(NBB\) in \([X,\hat{1}]\) we must have (again abusing notations) \(\omega_{I,B_{X}}\) in \(\mathcal{L}\) for any maximal independent set \(B_{X}\) spanning \(X\).
(4) holds as a result of the discussion under definition [3.3]. Take such an order of \(\mathcal{X}\) and choose any geometric independent set \(\{a_{1},...,a_{n}\}\) with \(\omega_{\{a_{1},...,a_{n}\}}\) spanning \(\wedge_{X\in\mathcal{X}}X\). As \(\mathcal{X}\) is geometric we have \(rank(\wedge_{i=1}^{t+1}X_{i})+1=rank(\wedge_{i=1}^{t}X_{i})\) for every
\([k-1]\) and so we can find atoms \(a_{n+1},...a_{d-1}\) as required by enlarging \(\{a_{1},...,a_{n+t}\}\) to a spanning set of \(\wedge_{i=1}^{t-1}X_{i}\) for each \(t\). The other direction follows from:
\[n=rank(\vee_{i=1}^{n}a_{i})=rank(\wedge_{i=1}^{n}X_{i})=|\mathcal{X}|\]
Comparing theorem [4.2] to the effect of these operations for geometric lattices (or simple matroids), we see that deletion and truncation work exactly the same. In contrast, contracting an element depends on the ordering of the independent sets, even if the contraction is atomic. One consequence of this difference is that the family of finite, atomic and graded lattices with \(I(\mathcal{L})\) a matroid is not closed under the operation of taking minors, which are lattices obtained from \(\mathcal{L}\) by a sequence of contractions and deletions. Consider for example the lattice \(\mathcal{L}=U_{6,4}\setminus\left\{\left\{123\right\},\left\{124\right\},\left\{126\right\},\left\{134\right\},\left\{136\right\}\right\}\), with \(U_{6,4}\) being the geometric lattice of the rank \(4\) uniform matroid on the ground set [6]. One can easily check that \(\mathcal{L}\) is an atomic and graded lattice with \(I(\mathcal{L})=I(U_{6,4})\) (obviously a matroid). However, the contraction \([1,\left[6\right]]\), which is still a finite, atomic and graded lattice, has a non-matroid family of independent sets \(I([1,\left[6\right]])=I(U_{5,3})\setminus\left\{\left\{234\right\},\left\{236\right\}\right\}\).
For the second part of this section we introduce the following special kind of a maximal independent set.
### Definition:
Let \(\mathcal{L}\) be a finite, atomic and graded lattice. A maximal independent set \(I\in I(\mathcal{L})\) will be called a basis of \(\mathcal{L}\) if every subset of \(I\) is geometric.
Our reason for calling these maximal independent sets bases is that, in the same way as bases of matroids, they admit the basis exchange property. That is, if \(B\) is a basis and \(I\) is a maximal independent set, then for every atom \(a\in B\) there exists an atom \(b\in I\) such that \((B\setminus\{a\})\cup\{b\}\) is a maximal independent set. This property stems from the fact that \(\vee(B\setminus\{a\})\neq\hat{1}\) for every \(a\in B\).
Another matroid-like property bases possess is the existence of a fundamental circuit.
### Lemma:
Let \(B\) be a basis of \(\mathcal{L}\). For every \(a\notin B\), \(B\cup\{a\}\) contains a unique minimal dependent set denoted \(\mathcal{C}_{B,a}\).
**Proof:** Assume \(C_{1},C_{2}\) are minimal dependent sets contained in \(B\cup\{a\}\) and let \(x\in C_{1}\setminus C_{2}\). As \(C_{1}\setminus\{x\}\) is independent we have \(\vee(C_{1}\setminus\{x\})=\vee C_{1}\). We then have \((B\cup\{a\})\setminus\{x\}\) independent as adding each atom in \(B\setminus\{x\}\) to \(C_{1}\setminus\{x\}\) (in any order) will raise its rank. We now have \(C_{2}\subseteq(B\cup\{a\})\setminus\{x\}\) which contradicts it being dependent.
We finish this section discussing the image of bases under basic lattice operations. From theorem [4.2] we can see that truncating or taking minors preserve the bases of the lattice. To see the same is true for the dual operation we observe that each subset \(I^{\prime}\) of size \(rank(\mathcal{L})-1\) of a basis \(\mathcal{I}\) corresponds to the unique co-atom \(\vee I^{\prime}\). We define \(I^{opp}=\left\{\vee(I\setminus\{a\})\mid a\in I\right\}\) to be the basis of \(\mathcal{L}^{opp}\) corresponding to \(I\). \(\mathcal{I}^{opp}\) is a basis as for every subset \(\mathcal{J}\subsetneq\mathcal{I}^{opp}\) and every \(I^{opp}\in\mathcal{I}^{opp}\setminus\mathcal{J}\) we have \(rank(\vee_{\mathcal{L}^{opp}}\mathcal{J})\leq rank(\vee_{\mathcal{L}^{opp}} (\mathcal{J}\cup\{I^{opp}\}))\).
### Theorem:
Let \(\mathcal{B}\) and \(\mathcal{B}^{opp}\) be the family of bases of \(\mathcal{L}\) and \(\mathcal{L}^{opp}\) respectively. If \(\mathcal{L}\) is co-atomic then the function \(opp:\mathcal{B}\rightarrow\mathcal{B}^{opp}\) with \(opp(I)=I^{opp}\) is a bijection.
**Proof:** Let \(I,J\) be bases of \(\mathcal{L}\) with \(opp(I)=opp(J)\). For all \(a\in I\) there exists a unique \(b\in J\) with \(\vee(I\setminus\{a\})=\vee(J\setminus\{b\})\). \(opp\) being injective now follows by induction on \(I\setminus\{a\}\) and \(J\setminus\{b\}\) as bases of \([\hat{0},\vee(I\setminus\{a\})]\).
Let \(\mathcal{J}\) be a basis of \(\mathcal{L}^{opp}\); for every \(J\in\mathcal{J}\) we have a unique atom \(a=\vee_{\mathcal{L}^{opp}}(\mathcal{J}\setminus\{J\})\in A(\mathcal{L})\) with \(a\leq J\) and \(a\leq I\) for every \(I\in\mathcal{J}\setminus\{J\}\). Therefore \(I=\{\vee_{\mathcal{L}^{opp}}(\mathcal{J}\setminus\{J\})\mid J\in\mathcal{J}\}\) is a basis of \(\mathcal{L}\) such that \(opp(I)=\mathcal{J}\).
## 5. Adjoint matroids
In this section we apply the construction of the previous sections to the subject of adjoint matroids. We start by defining an important type of dependent sets related to \((\mathcal{L}^{*})^{opp}\), the dual lattice of the dual matroid. We have already shown that the size of an independent set of a lattice is at most its rank; therefore every set larger than its rank must be dependent. We denote the complement of this family of sets by:
\[\mathcal{S}(\mathcal{M})=\{\mathcal{S}\mid|\mathcal{S}|\leq rank_{\mathcal{M}^ {*}}(E)-rank_{\mathcal{M}^{*}}\left(\cap_{C\in\mathcal{S}}C^{c}\right)\}\]
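As a quick illustration of this definition (a worked computation added here for concreteness, not taken from the original argument), consider a single circuit \(C\) of the uniform matroid \(U_{k,n}\) with \(0<k<n\): circuits are exactly the \((k+1)\)-subsets of \([n]\) and \(\mathcal{M}^{*}=U_{n-k,n}\), so
\[rank_{\mathcal{M}^{*}}(E)-rank_{\mathcal{M}^{*}}(C^{c})=(n-k)-\min(|C^{c}|,n-k)=(n-k)-(n-k-1)=1\geq|\{C\}|,\]
and every singleton \(\{C\}\) indeed belongs to \(\mathcal{S}(\mathcal{M})\), as expected for a set that is not forced to be dependent.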
It was proven (for example in "On Adjoints and Dual Matroids") that not all matroids have an adjoint, and even if they have one it may not be minimal. Theorem [3.7] already gives us a necessary and sufficient condition for a matroid to be an adjoint of another matroid:
### Theorem:
Let \(\mathcal{M}\) be a matroid of rank \(d\) on the ground set \([n]\) and \(\mathcal{M}^{\triangle}\) a matroid of rank \(n-d\) on the ground set \(\mathcal{C}(\mathcal{M})\). The following conditions are equivalent:
1. \(\mathcal{M}^{\triangle}\) is an adjoint of \(\mathcal{M}\).
2. \(I((\mathcal{L}^{*})^{opp})\subseteq I(\mathcal{M}^{\triangle})\subseteq \mathcal{S}(\mathcal{M})\).
3. \(I((\mathcal{L}^{*})^{opp})\subseteq I(\mathcal{M}^{\triangle})\) and \(rank_{\mathcal{M}^{\triangle}}\left\{C\mid i\notin C\in\mathcal{C}\left( \mathcal{M}\right)\right\}=n-d-1\), \(\forall i\in[n]\) with \(i\) not a co-loop.
**Proof:**
1. If \(\mathcal{M}^{\triangle}\) is an adjoint of \(\mathcal{M}\) then by lemma [3.5] we have \(I((\mathcal{L}^{*})^{opp})\subseteq I(\mathcal{M}^{\triangle})\). Let \(\mathcal{A}\notin\mathcal{S}(\mathcal{M})\) and \(I\) a geometric independent set of \((\mathcal{L}^{*})^{opp}\) spanning \(\cap_{C\in\mathcal{A}}C^{c}\). By theorem [3.7], we have \(|I|=rank_{\mathcal{M}^{\triangle}}(\overline{\mathcal{A}})<|\mathcal{A}|\), which means \(\mathcal{A}\notin I(\mathcal{M}^{\triangle})\).
2. If \(i\in E\) is not a co-loop then it spans an atom \(\overline{i}\) of \(\mathcal{L}^{*}\). As \(\cap_{i\notin C}C^{c}=\overline{i}\) and \(I((\mathcal{L}^{*})^{opp})\subseteq I(\mathcal{M}^{\triangle})\), we have \(rank_{\mathcal{M}^{\triangle}}\left\{C\mid i\notin C\in\mathcal{C}\left(\mathcal{M}\right)\right\}\geq n-d-1\). As \(I(\mathcal{M}^{\triangle})\subseteq\mathcal{S}(\mathcal{M})\), we have \(rank_{\mathcal{M}^{\triangle}}\left\{C\mid i\notin C\in\mathcal{C}\left(\mathcal{M}\right)\right\}\leq n-d-1\).
3. We will prove that the conditions of theorem [3.7] hold. To show that the rank of every geometric independent set is preserved it is enough to prove \(n-d-rank_{\mathcal{M}^{*}}(\mathcal{I})=rank_{\mathcal{M}^{\triangle}}(\mathcal{I})\) for every geometric independent set \(\mathcal{I}\in I((\mathcal{L}^{*})^{opp})\). As every co-atom of \((\mathcal{L}^{*})^{opp}\) is of the form \(\{C\mid i\notin C\in\mathcal{C}\left(\mathcal{M}\right)\}\) with \(i\) not a co-loop of \(\mathcal{M}\), the assumption guarantees that every geometric independent set of size \(n-d-1\) has the required rank. The remaining sizes then follow, as otherwise they would lengthen some chain of size \(n-d-1\).
### Corollary:
We can see that if either \(I((\mathcal{L}^{*})^{opp})\) or \(\mathcal{S}(\mathcal{M})\) is a matroid then \(\mathcal{M}\) has an adjoint. Moreover, \(I((\mathcal{L}^{*})^{opp})\) will be a minimal adjoint and \(\mathcal{S}(\mathcal{M})\) the maximal adjoint in the weak order on the class of matroids with ground set \(\mathcal{C}(\mathcal{M})\).
Two separate works, one by James Oxley and Suijie Wang and the second by Relinde Jurrius and Ruud Pellikaan, define an adjoint for linear matroids. We will use the notations of Oxley and Wang as Jurrius and Pellikaan used a dual definition.
Let \(\mathcal{M}\) be a representable matroid on the ground set \(E=(e_{1},...,e_{n})\), and let \(\varphi:E\rightarrow\mathbb{F}^{m}\) be a representation of \(\mathcal{M}\). To each circuit \(C\) of \(\mathcal{M}\) is associated, with respect to \(\varphi\), a unique (up to scalar multiplication) vector \(v_{C}=(c_{1},...,c_{n})\in\mathbb{F}^{n}\) such that \(\sum_{i=1}^{n}c_{i}\varphi\left(e_{i}\right)=0\), where \(c_{i}\neq 0\) iff \(i\in C\).
### Definition:
To a pair of representable matroid and representation \((\mathcal{M},\varphi)\) define the derived matroid \((\delta\mathcal{M},\delta\varphi)\) with ground set \(\mathcal{C}\left(\mathcal{M}\right)\) to be the representable matroid determined by \(\delta\varphi(C)=v_{C}\).
In the work of Jurrius and Pellikaan it was proven that derived matroids are adjoints; here we will give an alternative proof.
We first show that a collection of circuits \(\mathcal{A}\) of a linear matroid \(\mathcal{M}\) corresponding to an independent set of \(I((\mathcal{L}^{*})^{opp})\) is independent in the derived matroid for any representation. As \(\mathcal{A}\) corresponds to an independent set we can order the circuits \((C_{1},...,C_{k})\) such that:
\[C_{i}\setminus(\cup_{j<i}C_{j})\neq\emptyset,\forall i\in[k]\]
We then have \(\mathcal{A}\) an independent set in the derived matroid by simple linear algebra.
To see that we cannot have \(rank_{\mathcal{M}^{\triangle}}\left\{C\mid i\notin C\in\mathcal{C}\left(\mathcal{M}\right)\right\}\geq n-d\) for some \(i\) that is not a co-loop of \(\mathcal{M}\), observe that if \(\mathcal{B}\) is a basis of \(\mathcal{M}^{\triangle}\) contained in \(\left\{C\mid i\notin C\in\mathcal{C}\left(\mathcal{M}\right)\right\}\) then it can be extended to an independent set by a circuit containing \(i\). A circuit containing \(i\) exists: as \(i\) is not a co-loop there exists a basis \(B\) of \(\mathcal{M}\) with \(i\notin B\), and \(C_{B,i}\) is then such a circuit. As we have already proved \(I((\mathcal{L}^{*})^{opp})\subseteq\delta\mathcal{M}\), we have \(rank_{\mathcal{M}^{\triangle}}\left\{C\mid i\notin C\in\mathcal{C}\left(\mathcal{M}\right)\right\}=n-d-1\), and so the third condition of theorem [5.1] is satisfied.
Another old result we can rediscover is that every rank \(n-3\) matroid \(\mathcal{M}\) has an adjoint. To see this let \(\mathcal{M}^{\prime}=\left\{\left\{C_{1},C_{2},C_{3}\right\}\mid\cup_{i=1}^{3} C_{i}=[n]\right\}\). It is easy to check that \(\mathcal{M}^{\prime}=\mathcal{S}(\mathcal{M})\) and so is an adjoint.
Our next goal is to find an easier way to compute requirements for being an adjoint, as computing \(I((\mathcal{L}^{*})^{opp})\) can be computationally expensive. We start by observing that the fundamental circuit of any basis of \((\mathcal{L}^{*})^{opp}\) must be a circuit of every adjoint of \(\mathcal{M}\), as it does not belong to \(\mathcal{S}(\mathcal{M})\). We also observe that, as \(\mathcal{L}^{*}\) is a geometric lattice, every basis of \((\mathcal{L}^{*})^{opp}\) is of the form \((E\setminus B)^{opp}=\left\{C_{B,i}\mid i\notin B\right\}=\mathcal{C}_{B}\) for a basis \(B\) of \(\mathcal{M}\). The following gives an explicit form to the "fundamental circuits" of \((\mathcal{L}^{*})^{opp}\):
### Lemma:
Let \(B\) be a basis of \(\mathcal{L}\) and \(C\in\mathcal{C}(\mathcal{M})\), then:
\[\mathcal{C}_{\mathcal{C}_{\mathcal{B}},C}=\left\{C_{B,e}\mid e\in C\setminus B \right\}\cup\left\{C\right\}\]
**Proof:** If \(e\in C\setminus B\) then, as \(e\notin\cup_{a\in(E\setminus(B\cup C))}C_{B,a}\), we must have \((\mathcal{C}_{B}\setminus\{C_{B,e}\})\cup\{C\}\) independent in \((\mathcal{L}^{*})^{opp}\) and so \(C_{B,e}\in\mathcal{C}_{\mathcal{C}_{B},C}\). On the other hand, let \(e\notin C\cup B\) and assume \(C_{B,e}\in\mathcal{C}_{\mathcal{C}_{B},C}\). We then have \(\mathcal{C}_{\mathcal{C}_{B},C}\setminus\{C_{B,e}\}\) independent with \(e\notin\cup_{C^{\prime}\in(\mathcal{C}_{\mathcal{C}_{B},C}\setminus\{C_{B,e}\})}C^{\prime}\); consequently \(\mathcal{C}_{\mathcal{C}_{B},C}\) is independent, a contradiction.
We continue with an algorithm for computing the maximal independent sets of \((\mathcal{L}^{*})^{opp}\) from its bases. Starting with a basis \(\mathcal{C}_{B}\) of \((\mathcal{L}^{*})^{opp}\), we have \(rank(\mathcal{C}_{B}\setminus\{C_{B,i_{1}}\})=n-d-1\) for every \(i_{1}\notin B\). Therefore \((\mathcal{C}_{B}\setminus\{C_{B,i_{1}}\})\cup\{C_{1}\}\in I((\mathcal{L}^{*})^{opp})\) for every circuit \(C_{1}\) of \(\mathcal{M}\) containing \(i_{1}\). We can then repeat this process, in the \(k\)'th iteration replacing \(C_{B,i_{k}}\) with \(C_{k}\) for some \(i_{k}\in C_{k}\setminus(B\cup(\cup_{j=1}^{k-1}C_{j}))\), and obtain \((\mathcal{C}_{B}\setminus\{C_{B,i_{1}},...,C_{B,i_{k}}\})\cup\{C_{1},..,C_{k}\}\in I((\mathcal{L}^{*})^{opp})\).
### Lemma:
Every maximal independent set of \(I((\mathcal{L}^{*})^{opp})\) is obtained by the algorithm above.
**Proof:** Let \(\mathcal{I}\) be a maximal independent set and \(\{C_{1},...,C_{n-d}\}\) an order on its elements for which \(\omega_{\mathcal{I}}\) exists. By theorem [4.2] we have a basis \(B\) of \(\mathcal{M}\) with \(E\setminus B=\{a_{1},...,a_{n-d}\}\) such that \(\cap_{i=0}^{k}C_{i}^{c}=\vee_{i=1}^{n-d-k}a_{i}\) for all \(k\in\{0,..,n-d\}\), with the join of the \(a_{i}\)'s taken in \(\mathcal{L}^{*}\) and where we define \(\cap_{i=0}^{0}C_{i}^{c}=E\). We must then have \(a_{n-d-k+1}\in C_{k}\) and \(\{a_{1},...,a_{n-d-k}\}\cap C_{k}=\emptyset\) for all \(k\in[n-d]\). Therefore the algorithm process starting with \(\mathcal{C}_{B}\) and replacing the circuits in the defined order \((a_{n-d},...,a_{1})\) with the circuits \(\{C_{1},...,C_{n-d}\}\) will result in \(\mathcal{I}\).
We can now formulate the following equivalent condition to being an adjoint:
### Theorem:
\(\mathcal{M}^{\triangle}\) is an adjoint of \(\mathcal{M}\) iff \(\mathcal{C}_{B}\) is a basis of \(\mathcal{M}^{\triangle}\) for every basis \(B\) of \(\mathcal{M}\) and \(\mathcal{C}_{\mathcal{C}_{B},C}\) is a circuit of \(\mathcal{M}^{\triangle}\) for every \(C\notin\mathcal{C}_{B}\).
**Proof:** We have already seen that if \(\mathcal{M}^{\triangle}\) is an adjoint then the conditions hold. For the other direction, we first note that \(\mathcal{M}^{\triangle}\) has the required rank as \(\mathcal{C}_{B}\) is a basis; we will now prove that condition (3) of theorem [5.1] holds. Let \(i\) be an element that is not a co-loop of \(\mathcal{M}\) and \(B\) a basis of \(\mathcal{M}\) not containing it; we will show that the closure of \(\mathcal{C}_{B}\setminus\{C_{B,i}\}\) in \(\mathcal{M}^{\triangle}\) is \(\{C\mid i\notin C\in\mathcal{C}\left(\mathcal{M}\right)\}\). Assume that there exists \(C\in\{C\mid i\notin C\in\mathcal{C}\left(\mathcal{M}\right)\}\) such that \((\mathcal{C}_{B}\setminus\{C_{B,i}\})\cup\{C\}\) is independent in \(\mathcal{M}^{\triangle}\). As \(\mathcal{C}_{\mathcal{C}_{B},C}\) is a circuit of \(\mathcal{M}^{\triangle}\) we must have \(C_{B,i}\in\mathcal{C}_{\mathcal{C}_{B},C}\), which contradicts \(\mathcal{C}_{\mathcal{C}_{B},C}\) being a minimal dependent set of \((\mathcal{L}^{*})^{opp}\), as no other element in \(\mathcal{C}_{\mathcal{C}_{B},C}\) contains \(i\).
It remains to prove that if \(i\in C\) then \((\mathcal{C}_{B}\setminus\{C_{B,i}\})\cup\{C\}\) is independent in \(\mathcal{M}^{\triangle}\). To show that we will prove \(I((\mathcal{L}^{*})^{opp})\subseteq I(\mathcal{M}^{\triangle})\). Let \(\{C_{1},...,C_{n-d}\}\) be a maximal independent set of \((\mathcal{L}^{*})^{opp}\) and \(\mathcal{C}_{B}\) the basis of \((\mathcal{L}^{*})^{opp}\) corresponding to it by the algorithm above; we will show that in each step of the algorithm we obtain an independent set of \(\mathcal{M}^{\triangle}\). Assume that we get a dependent set in the \(k\)'th step, and let \(\mathcal{S}\) be the fundamental circuit of \((\mathcal{C}_{B}\setminus\{C_{B,i_{1}},...,C_{B,i_{k-1}}\})\cup\{C_{1},..,C_{k}\}\) in \(\mathcal{M}^{\triangle}\). As \(\mathcal{M}^{\triangle}\) is a matroid, we have for each two distinct circuits \(\mathcal{C},\mathcal{C}^{\prime}\) that \((\mathcal{C}\cup\mathcal{C}^{\prime})\setminus\{C\}\) contains a circuit for every \(C\in\mathcal{C}\cap\mathcal{C}^{\prime}\). We can use this property to eliminate every \(C_{i}\in\mathcal{S}\) for \(i\in[k-1]\). Let \(\{C_{i_{1}},...,C_{i_{t}}\}\subset\mathcal{S}\setminus\{C_{k}\}\), define \(\mathcal{C}^{1}\) to be a circuit of \(\mathcal{M}^{\triangle}\) contained in \((\mathcal{S}\cup\mathcal{C}_{\mathcal{C}_{B},C_{i_{1}}})\setminus\{C_{i_{1}}\}\) and \(\mathcal{C}^{j}\) to be a circuit of \(\mathcal{M}^{\triangle}\) contained in \((\mathcal{C}^{j-1}\cup\mathcal{C}_{\mathcal{C}_{B},C_{i_{j}}})\setminus\{C_{i_{j}}\}\). In the end we must have \(\mathcal{C}^{t}=\mathcal{C}_{\mathcal{C}_{B},C_{k}}\), as there is a unique circuit contained in \(\mathcal{C}_{B}\cup\{C_{k}\}\); therefore we must have \(C_{B,i_{k}}\in\mathcal{C}^{t}\). We have \(C_{B,i_{k}}\notin\mathcal{C}_{\mathcal{C}_{B},C_{j}}\) for any \(j\in[k-1]\), as \(i_{k}\notin C_{j}\) for any \(j\in[k-1]\) by the algorithm process. We then must have \(C_{B,i_{k}}\in\mathcal{S}\), contradicting the assumption.
### Corollary:
If \(\mathcal{M}\) is a matroid corresponding to a modular geometric lattice \(\mathcal{L}\) then \(I((\mathcal{L}^{*})^{opp})\) is its only adjoint matroid.
**Proof:** As \(\mathcal{L}\) is modular we have every maximal independent set of \(I((\mathcal{L}^{*})^{opp})\) of the form \(\mathcal{C}_{B}\) for a basis \(B\) of \(\mathcal{M}\). If \(\mathcal{M}^{\triangle}\) is an adjoint of \(\mathcal{M}\) with a basis \(\mathcal{B}\notin I((\mathcal{L}^{*})^{opp})\) we must have some basis \(B\) of \(\mathcal{M}\) with \(C_{1}\in\mathcal{C}_{B}\) such that for every \(C_{2}\in\mathcal{B}\), \((\mathcal{C}_{B}\setminus\{C_{1}\})\cup\{C_{2}\}\notin I((\mathcal{L}^{*})^{ opp})\). Therefore \((\mathcal{C}_{B}\setminus\{C_{1}\})\cup\{C_{2}\}\in(\mathcal{S}(\mathcal{M}))^ {c}\) contradicting \(\mathcal{M}^{\triangle}\) being an adjoint.
## 6. The Combinatorial derived matroid and maximal adjoints
Applying the characterizations of adjoints developed in section 5, we can show that the combinatorial derived matroid defined in [6] is an adjoint in two relatively simple cases. We repeat some of the definitions from [6] for the sake of completeness.
### Definition:
Let \(\mathcal{M}\) be a matroid, \(\mathfrak{A}\subset 2^{C(\mathcal{M})}\) a collection of sets of circuits and \(k\in\mathbb{N}\). We define the following operations:
1. \[\mathfrak{A}_{k}=\left\{\mathcal{A}\in\mathfrak{A}\mid\left|\mathcal{A}\right| =k\right\},\mathfrak{A}_{\leq k}=\left\{\mathcal{A}\in\mathfrak{A}\mid\left| \mathcal{A}\right|\leq k\right\}\]
2. \[\epsilon\left(\mathfrak{A}\right)=\mathfrak{A}\cup\left\{\left(\mathcal{A}_{1}\cup\mathcal{A}_{2}\right)\setminus\{C\}\mid\mathcal{A}_{1},\mathcal{A}_{2}\in\mathfrak{A},\,\mathcal{A}_{1}\cap\mathcal{A}_{2}\notin\mathfrak{A},\,C\in\mathcal{A}_{1}\cap\mathcal{A}_{2}\right\}\]
3. \[\uparrow\mathfrak{A}=\left\{\mathcal{A}\in P\left(C\left(\mathcal{M}\right) \right)\mid\exists\mathcal{A}^{\prime}\in\mathfrak{A},\,\mathcal{A}^{\prime} \subseteq\mathcal{A}\right\}\]
In [6] the second and third operations were used to construct a matroid on the set of circuits \(C(\mathcal{M})\). Given some initial collection \(\mathfrak{A}\subseteq 2^{C(\mathcal{M})}\), define \(\mathfrak{A}_{0}=\mathfrak{A}\) and construct inductively the collections \(\mathfrak{A}_{i+1}=\uparrow\epsilon\mathfrak{A}_{i}\). As this sequence is increasing and contained in the finite set \(2^{C(\mathcal{M})}\), we have the well defined limit:
\[D(\mathfrak{A})=\cup_{i\geq 0}\mathfrak{A}_{i}\]
The next theorem explains the abuse of notation in writing \(D(\mathfrak{A})\), showing it is indeed the family of dependent set of the unique matroid constructed from \(\mathfrak{A}\).
### Theorem
(Proposition 4.7. of [6]) If \(\emptyset\notin\mathfrak{A}\) then \(D(\mathfrak{A})\) is the family of dependent sets of a matroid on \(C(\mathcal{M})\).
### Definition:
The combinatorial derived matroid \(\delta\mathcal{M}\) of \(\mathcal{M}\) is defined to be the matroid on the set of circuits \(C(\mathcal{M})\) with dependent sets \(D(2^{C(\mathcal{M})}\setminus\mathcal{S}(\mathcal{M}))\).
In [6] no criteria were given for \(\delta\mathcal{M}\) to be an adjoint of \(\mathcal{M}\). Using theorem [5.1] we know that it is enough for \(I((\mathcal{L}^{*})^{opp})\) to be independent in \(\delta\mathcal{M}\), as every set in \(2^{C(\mathcal{M})}\setminus\mathcal{S}(\mathcal{M})\) is already dependent. In particular, if \(D(2^{C(\mathcal{M})}\setminus\mathcal{S}(\mathcal{M}))=\uparrow(2^{C(\mathcal{M})}\setminus\mathcal{S}(\mathcal{M}))\) then \(\delta\mathcal{M}\) is an adjoint. Two simple examples follow:
### Theorem
If \(\mathcal{M}\) is of co-rank \(3\) then \(\delta\mathcal{M}\) is an adjoint of \(\mathcal{M}\).
**Proof:** The only elements in \(2^{C(\mathcal{M})}\setminus\mathcal{S}(\mathcal{M})\) of size three are sets of the form \(\{C_{1},C_{2},C_{3}\}\) with:
\[1=rank_{\mathcal{M}^{*}}(C_{1}^{c}\cap C_{2}^{c}\cap C_{3}^{c})=rank_{\mathcal{ M}^{*}}(C_{1}^{c}\cap C_{2}^{c})=rank_{\mathcal{M}^{*}}(C_{1}^{c}\cap C_{3}^{c})= rank_{\mathcal{M}^{*}}(C_{2}^{c}\cap C_{3}^{c})\]
A set of size three in \(\epsilon(2^{C(\mathcal{M})}\setminus\mathcal{S}(\mathcal{M}))\) is of the form \(\{C_{2},C_{3},C_{4}\}\) with some \(C_{1}\in\mathcal{C}(\mathcal{M})\) such that \(\{C_{1},C_{2},C_{3}\},\{C_{1},C_{2},C_{4}\}\in 2^{C(\mathcal{M})}\setminus\mathcal{S}(\mathcal{M})\), and such a set is again in \(2^{C(\mathcal{M})}\setminus\mathcal{S}(\mathcal{M})\). As we also have \((\uparrow(2^{C(\mathcal{M})}\setminus\mathcal{S}(\mathcal{M})))_{3}=(2^{C(\mathcal{M})}\setminus\mathcal{S}(\mathcal{M}))_{3}\) and the maximal \(rank_{\mathcal{M}^{*}}\) of a set is \(3\), we get \(D(\delta\mathcal{M})=2^{C(\mathcal{M})}\setminus\mathcal{S}(\mathcal{M})\).
### Theorem
\(\delta U(k,n)\) is an adjoint of \(U(k,n)\), the uniform matroid of rank \(k\) on the ground set \([n]\).
**Proof:** Let \(\mathcal{A}_{1},\mathcal{A}_{2}\in 2^{C(U(k,n))}\setminus\mathcal{S}(U(k,n))\) with \(C\in\mathcal{A}_{1}\cap\mathcal{A}_{2}\). As \(U(k,n)^{*}=U(n-k,n)\) we have:
\[n-k-rank_{\mathcal{M}^{*}}(\cap_{A\in\mathcal{A}_{i}}A^{c})=|(\cup_{A\in\mathcal{A}_{i}}A)\setminus C|+1<|\mathcal{A}_{i}|\]
Therefore, we must also have:
\[n-k-rank_{\mathcal{M}^{*}}(\cap_{A\in(\mathcal{A}_{1}\cup\mathcal{A}_{2})\setminus\{C\}}A^{c})\leq n-k-rank_{\mathcal{M}^{*}}(\cap_{A\in\mathcal{A}_{1}\cup\mathcal{A}_{2}}A^{c})\]
\[=|(\cup_{A\in\mathcal{A}_{1}\cup\mathcal{A}_{2}}A)\setminus C|+1\leq|(\cup_{ A\in\mathcal{A}_{1}}A)\setminus C|+1+|(\cup_{A\in\mathcal{A}_{2}}A)\setminus C|+1-1\]
\[<|\mathcal{A}_{1}|+|\mathcal{A}_{2}|-1=|(\mathcal{A}_{1}\cup\mathcal{A}_{2}) \setminus\{C\}|\]
To complete the proof we observe that if \(\mathcal{A}_{1},\mathcal{A}_{2}\in\uparrow(2^{C(U(k,n))}\setminus\mathcal{S}(U(k,n)))\) then \((\mathcal{A}_{1}\cup\mathcal{A}_{2})\setminus\{C\}\) will either contain a subset in \(2^{C(U(k,n))}\setminus\mathcal{S}(U(k,n))\) already contained in one of the \(\mathcal{A}_{i}\)'s or contain a collection of the form \((\mathcal{B}_{1}\cup\mathcal{B}_{2})\setminus\{C\}\) with \(\mathcal{B}_{1},\mathcal{B}_{2}\in 2^{C(U(k,n))}\setminus\mathcal{S}(U(k,n))\), which is again in \(2^{C(U(k,n))}\setminus\mathcal{S}(U(k,n))\). Consequently, \(D(\delta U(k,n))=\uparrow(2^{C(U(k,n))}\setminus\mathcal{S}(U(k,n)))\).
We notice that in all of the observed cases, as is also suggested in [6], \(\delta\mathcal{M}\) is a maximal adjoint of \(\mathcal{M}\) in the weak order of matroids. Using the tools developed in [11] we can construct a maximal matroid structure on a set of circuits, provided some conditions hold. We start by introducing the construction. Let \(\mathcal{X}\subseteq P(E)\) for a ground set \(E\). An \(\mathcal{X}\)-matroid is a matroid \(\mathcal{M}\) on the ground set \(E\) such that \(\mathcal{X}\subseteq C(\mathcal{M})\). In [11] the following upper bound on the rank function of an \(\mathcal{X}\)-matroid was introduced.
### Definition
A sequence \(\mathcal{A}=(X_{1},...,X_{k})\) will be called a proper \(\mathcal{X}\)-sequence if \(X_{i}\in\mathcal{X}\) for all \(i\in[k]\) and \(X_{i}\nsubseteq\cup_{j=1}^{i-1}X_{j}\) for all \(2\leq i\leq k\). For \(F\subseteq E\) we define \(val(F,\mathcal{A})=\left|F\cup(\cup_{i=1}^{k}X_{i})\right|-k\).
### Lemma
Let \(\mathcal{M}\) be an \(\mathcal{X}\)-matroid and \(F\subseteq E\). Then \(rank_{\mathcal{M}}(F)\leq val(F,\mathcal{A})\) for any proper \(\mathcal{X}\)-sequence \(\mathcal{A}\). Furthermore, if equality holds, then \(rank_{\mathcal{M}}(F\setminus\{e\})=rank_{\mathcal{M}}(F)-1\) for all \(e\in F\setminus\cup_{X\in\mathcal{A}}X\) and \(rank_{\mathcal{M}}(F\cup\{e\})=rank_{\mathcal{M}}(F)\) for all \(e\in\cup_{X\in\mathcal{A}}X\).
Using this upper bound, the following function was introduced and shown to be a candidate for a maximal matroid on the set of circuits.
### Theorem:
Let \(\mathcal{X}\) be a family of subsets of a ground set \(E\) such that the set of \(\mathcal{X}\)-matroids is not empty. If the following function \(val_{\mathcal{X}}:2^{E}\rightarrow\mathbb{Z}\) is sub-modular then it is the rank function of the unique maximal \(\mathcal{X}\)-matroid on the ground set \(E\).
\[val_{\mathcal{X}}\left(F\right)=min\left\{val\left(F,\mathcal{A}\right)\mid \mathcal{A}\text{ is a proper $\mathcal{X}$-sequence}\right\}\]
In our case we saw in theorem [5.5] that every collection of circuits of the form \(\mathcal{C}_{\mathcal{C}_{B},C}\) is a circuit of every adjoint matroid; therefore the natural choice is \(\mathcal{X}=\cup_{B}\left\{\mathcal{C}_{\mathcal{C}_{B},C}\mid C\notin\mathcal{C}_{B}\right\}\), where the union runs over the bases \(B\) of \(\mathcal{M}\), making every adjoint of \(\mathcal{M}\) an \(\mathcal{X}\)-matroid. Moreover, if \(B\) is a basis of \(\mathcal{M}\) and \(\mathcal{A}=\left\{\mathcal{C}_{\mathcal{C}_{B},C}\mid C\notin\mathcal{C}_{B}\right\}\), we have:
\[val_{\mathcal{X}}\left(C(\mathcal{M})\right)\leq val(C(\mathcal{M}),\mathcal{ A})=|C(\mathcal{M})|-|C(\mathcal{M})\setminus\mathcal{C}_{B}|=|\mathcal{C}_{B}|= rank(\mathcal{M}^{*})\]
Therefore \(val_{\mathcal{X}}\left(\mathcal{F}\right)\leq rank(\mathcal{M}^{*})\) for every \(\mathcal{F}\subseteq C(\mathcal{M})\).
### Lemma:
\(val_{\mathcal{X}}\left(\mathcal{D}\right)<|\mathcal{D}|\) for every \(\mathcal{D}\in\uparrow(2^{C(\mathcal{M})}\setminus\mathcal{S}(\mathcal{M}))\).
**Proof:** As \(val_{\mathcal{X}}(\mathcal{A})\leq val_{\mathcal{X}}(\mathcal{A}\cup\left\{C\right\})\leq val_{\mathcal{X}}(\mathcal{A})+1\) for every \(\mathcal{A}\in 2^{C(\mathcal{M})}\) and \(C\in C(\mathcal{M})\), it is enough to prove the inequality for \(\mathcal{D}\in(2^{C(\mathcal{M})}\setminus\mathcal{S}(\mathcal{M}))\). If \(rank_{\mathcal{M}^{*}}(\cap_{C\in\mathcal{D}}C^{c})=0\) we must have \(|\mathcal{D}|>rank(\mathcal{M}^{*})\) and the lemma follows from the above discussion. Otherwise, there exists an independent set \(I\) of \(\mathcal{M}^{*}\) of size \(rank_{\mathcal{M}^{*}}(\cap_{C\in\mathcal{D}}C^{c})\) such that \(I\cap C=\emptyset\) for all \(C\in\mathcal{D}\). Let \(B\) be a basis of \(\mathcal{M}\) contained in the complement of \(I\) and \(\mathcal{A}=\left\{\mathcal{C}_{\mathcal{C}_{B},C}\mid C\in\mathcal{D}\right\}\). As \(\mathcal{A}\) is a proper \(\mathcal{X}\)-sequence (in any order) we have:
\[val_{\mathcal{X}}(\mathcal{D})\leq val(\mathcal{D},\mathcal{A})=|\mathcal{D}\cup(\cup_{C\in\mathcal{D}}\mathcal{C}_{\mathcal{C}_{B},C})|-|\mathcal{D}|=|\cup_{C\in\mathcal{D}}\mathcal{C}_{\mathcal{C}_{B},C}|-|\mathcal{D}|\leq|\{C_{B,e}\mid e\in C\setminus B,\,C\in\mathcal{D}\}|\leq|B^{c}|-|I|=rank_{\mathcal{M}^{*}}(E)-rank_{\mathcal{M}^{*}}(\cap_{C\in\mathcal{D}}C^{c})\]
The last quantity is strictly smaller than \(|\mathcal{D}|\) since \(\mathcal{D}\notin\mathcal{S}(\mathcal{M})\), which proves the claim.
### Corollary:
If \(\delta\mathcal{M}=\uparrow(2^{C(\mathcal{M})}\setminus\mathcal{S}(\mathcal{M}))\) then \(val_{\mathcal{X}}\) is its rank function. In particular, we have seen that this is the case for co-rank \(3\) and uniform matroids.
Finally, we show that, under a small change to its construction, \(\delta\mathcal{M}\) is not smaller, in the weak order of matroids, than any adjoint of \(\mathcal{M}\). Consequently, if it is an adjoint then it is a maximal adjoint. Furthermore, using theorem [6.8], we have that if \(val_{\mathcal{X}}\) is the rank function of a matroid then it is the rank function of the modified matroid \(\delta^{\prime}\mathcal{M}\) defined below.
### Definition:
The small change to the derived matroid, \(\delta^{\prime}\mathcal{M}\), is defined to be the matroid on the set of circuits \(C(\mathcal{M})\) with dependent sets \(D^{(rank_{\mathcal{M}^{*}}(E))}(2^{C(\mathcal{M})}\setminus\mathcal{S}(\mathcal{M}))\), where \(D^{(k+1)}(\mathcal{A})=D(\mathcal{A}\cup(D^{(k)}(\mathcal{A}))_{\leq k})\) and \(D^{(0)}(\mathcal{A})=\mathcal{A}\).
We can see that the only difference from \(\delta\mathcal{M}\) is that in \(\delta\mathcal{M}\) two dependent sets may have an independent intersection which only becomes dependent in a later iteration. Let \(\mathcal{D}\) be a minimal (in size and in the number of iterations required to obtain it) dependent set of \(\delta^{\prime}\mathcal{M}\) which is independent in an adjoint \(\mathcal{M}^{\triangle}\). It must be the case that \(\mathcal{D}=(\mathcal{D}_{1}\cup\mathcal{D}_{2})\setminus\{C\}\) with \(\mathcal{D}_{1},\mathcal{D}_{2}\) dependent in \(\mathcal{M}^{\triangle}\). Therefore \(\mathcal{D}_{1}\cap\mathcal{D}_{2}\) is dependent in \(\mathcal{M}^{\triangle}\) and \(\delta^{\prime}\mathcal{M}\nleq\mathcal{M}^{\triangle}\), resulting in \(\delta^{\prime}\mathcal{M}\) not being smaller than any adjoint of \(\mathcal{M}\). We finish with a conjecture that would follow from conjecture 1.3 in [11].
### Conjecture:
Let \(\mathcal{M}\) be a matroid. If there exists a maximal adjoint \(\mathcal{M}^{\triangle}\) then \(\mathcal{M}^{\triangle}=\delta^{\prime}\mathcal{M}\) with rank function \(val_{\mathcal{X}}\).
|
2307.12579
|
Prototyping a ROOT-based distributed analysis workflow for HL-LHC: the
CMS use case
|
The challenges expected for the next era of the Large Hadron Collider (LHC),
both in terms of storage and computing resources, provide LHC experiments with
a strong motivation for evaluating ways of rethinking their computing models at
many levels. Great efforts have been put into optimizing the computing resource
utilization for the data analysis, which leads both to lower hardware
requirements and faster turnaround for physics analyses. In this scenario, the
Compact Muon Solenoid (CMS) collaboration is involved in several activities
aimed at benchmarking different solutions for running High Energy Physics (HEP)
analysis workflows. A promising solution is evolving software towards more
user-friendly approaches featuring a declarative programming model and
interactive workflows. The computing infrastructure should keep up with this
trend by offering on the one side modern interfaces, and on the other side
hiding the complexity of the underlying environment, while efficiently
leveraging the already deployed grid infrastructure and scaling toward
opportunistic resources like public cloud or HPC centers. This article presents
the first example of using the ROOT RDataFrame technology to exploit such
next-generation approaches for a production-grade CMS physics analysis. A new
analysis facility is created to offer users a modern interactive web interface
based on JupyterLab that can leverage HTCondor-based grid resources on
different geographical sites. The physics analysis is converted from a legacy
iterative approach to the modern declarative approach offered by RDataFrame and
distributed over multiple computing nodes. The new scenario offers not only an
overall improved programming experience, but also an order of magnitude speedup
increase with respect to the previous approach.
|
Tommaso Tedeschi, Vincenzo Eduardo Padulano, Daniele Spiga, Diego Ciangottini, Mirco Tracolli, Enric Tejedor Saavedra, Enrico Guiraud, Massimo Biasotto
|
2023-07-24T07:49:22Z
|
http://arxiv.org/abs/2307.12579v1
|
# Prototyping a ROOT-based distributed analysis workflow for HL-LHC: the CMS use case
###### Abstract
The challenges expected for the next era of the Large Hadron Collider (LHC), both in terms of storage and computing resources, provide LHC experiments with a strong motivation for evaluating ways of rethinking their computing models at many levels. Great efforts have been put into optimizing the computing resource utilization for the data analysis, which leads both to lower hardware requirements and faster turnaround for physics analyses. In this scenario, the Compact Muon Solenoid (CMS) collaboration is involved in several activities aimed at benchmarking different solutions for running High Energy Physics (HEP) analysis workflows. A promising solution is evolving software towards more user-friendly approaches featuring a declarative programming model and interactive workflows. The computing infrastructure should keep up with this trend by offering on the one side modern interfaces,
and on the other side hiding the complexity of the underlying environment, while efficiently leveraging the already deployed grid infrastructure and scaling toward opportunistic resources like public cloud or HPC centers. This article presents the first example of using the ROOT RDataFrame technology to exploit such next-generation approaches for a production-grade CMS physics analysis. A new analysis facility is created to offer users a modern interactive web interface based on JupyterLab that can leverage HTCondor-based grid resources on different geographical sites. The physics analysis is converted from a legacy iterative approach to the modern declarative approach offered by RDataFrame and distributed over multiple computing nodes. The new scenario offers not only an overall improved programming experience, but also an order of magnitude speedup increase with respect to the previous approach.
keywords: High Energy Physics, Distributed Computing, Analysis Facility, ROOT, Dask, HTCondor
## 1 Introduction
Research in High Energy Physics (HEP) is characterized by complex computational challenges raised by the need for processing huge amounts of data regarding particle collisions. The largest source of such data is the Large Hadron Collider (LHC), hosted at CERN in Switzerland, which since its start has reached peaks of 1 PB/s of data generated from physics events. The machine follows a cycle of on and off periods, also called _runs_. The current run, Run 3, began in 2022 and will last until 2025. The next run will see an upgraded hardware configuration of the machine, named High Luminosity LHC (HL-LHC) [1], which will start operations in 2029 and is estimated to require between 50 and 100 times more computational resources than those currently used [2].
Such an estimate reinforces the need for developing performant software tailored to the HEP use case, something which has always been addressed in the field. Traditionally, distributed computing has been one of the main strategies for processing the large physics datasets. In particular, the Worldwide LHC Computing Grid (WLCG) [3] was developed in cooperation between CERN and other research institutes as a shared computing infrastructure serving all interested scientists around the world. Alongside this main distributed facility, it is common to have smaller computing clusters at
the level of the single research institution.
In this context, the workflow of analysts involves developing applications that are submitted to the grid through thousands of jobs, each processing a different set of physics events. This is enabled by the fact that the events are statistically independent: even though they are stored in large datasets with billions of entries, they can be processed independently, thus allowing an embarrassingly parallel execution of the analysis. A single large-scale analysis in production run by an LHC experiment collaboration can process multiple TBs of data, involving thousands of jobs sent to the grid and spanning multiple hours or days. It also comprises two main stages, which closely resemble the MapReduce paradigm [4]: in the first stage all jobs are submitted and process different portions of the input dataset; in the second stage all the partial results need to be merged in order to produce the final desired result, which usually comes in the form of a high number of relevant statistics and plots such as histograms. This workflow becomes particularly tedious since the two stages are independently developed and deployed: users need to write separate applications to submit the initial jobs and then retrieve and merge their results.
Tackling the previously mentioned computing challenges is a matter of developing faster software as well as improving the productivity of the final users. On the one hand, increasing data processing throughput is crucial to cope with the future increasing data rates. On the other hand, ergonomic interfaces should be added to remove the lower-level programming burden from analysts. Other industries have faced similar issues, and a few solutions have emerged in the data science community at large to streamline data processing pipelines with higher-level interfaces. The most widely used implementation of the MapReduce paradigm comes from the Apache software suite, and in particular Apache Spark [5] has gained wide popularity as a distributed execution engine for many types of workloads. A similar example comes from the Dask Python library [6], which also supports arbitrary computation graphs not strictly expressible with MapReduce.
In the same scope, users should not have to deal with building and packaging the entire software stack needed for their analysis. At the same time, each research group may need slight adjustments to their analysis algorithms and applications. It is quite often seen that many small software frameworks are developed, based on larger utility libraries used in the field. If people interested in using a specific set of libraries do not have access to exactly the same machine, creating a coherent software stack over different nodes can
become a burden quite quickly. In recent years, the efforts towards improving user productivity have also started focusing on streamlining the creation of a software stack that can be easily set up and reproduced over different nodes. On the one hand, services like CVMFS [7] make it easy to ship centrally produced environments to user machines or even computing clusters. On the other hand, various institutions have begun proposing a combination of coherent and easily accessible software and hardware resources with the general label of _analysis facilities_.
Although each research group may need to tweak their analysis to use specific libraries, most software environments in HEP use ROOT [8] as the tool for storing, analyzing and visualizing physics data coming from the accelerator. This library defines an I/O layer and a data format through which more than 1 Exabyte of data is stored. It also offers a high-level interface to data analysis called RDataFrame [9], which is more user-friendly with respect to other ROOT facilities and has already seen wide usage in the community.
This article highlights the recent efforts in building an infrastructure for HEP analysts that provides solutions to the aforementioned challenges. A new analysis facility is engineered on INFN (Italian national institute for nuclear physics research) resources, accessible through a web-based interface where users can develop their analysis in Python, a language that is gaining increasing popularity in the HEP community. The code can be written in a Jupyter notebook [10], through which a set of distributed resources can be accessed. As a benchmark of this new facility, a full-scale CMS analysis is ported to RDataFrame, which can take full advantage of the remote computing resources transparently while running the full application within the same notebook. Furthermore, using RDataFrame provides tangible performance improvements over the previous approach.
The document is structured as follows. Section 2 highlights relevant work that can be connected to some of the issues brought up so far. The main software building blocks for this work are described in Section 3. Section 4 provides more details about the concept of _analysis facility_ in HEP. Sections 5 and 6 describe more concretely the proposed new developments of this work. Section 7 shows the results obtained in comparison with the old approach. Finally, Section 8 closes the discussion and gives some perspective for future work.
## 2 Related work
The traditional programming model for HEP data analysis applications is usually based on custom loops over the events of a ROOT dataset. Each event is processed as needed, for example by filtering it out if not interesting or using it to compute new observables. A more simplified syntax was offered by TTreeFormula [11], a DSL within ROOT to access and process the events in a dataset. At a broader scope, different utility libraries have been developed in the field to abstract from the lower-level syntaxes, usually only of practical use in small use cases. Some examples include the NanoAOD-tools [12] by CMS, coffea [13], the Latinos framework [14], Bamboo [15] or CMGTools [16]. It is worth noting that, although providing a more modern and abstract interface than what was previously available, some facilities such as RDataFrame or coffea still represent a low-level approach when compared to more experiment-specific tools such as the others cited above. This usually boils down to the fact that the latter kind of libraries usually feature functions in their API which directly refer to computations or calibrations that physicists may need to use in their daily analysis routines.
In the last few decades, the programming models applied inside user code were mostly decoupled from the job distribution, which happened by manually submitting job description files to the scheduler of a batch system such as HTCondor [17] or Slurm [18]. In parallel, other scientific and industrial communities investigated more interactive approaches, where the programming interface could describe both the computation and the connection to the distributed resources. Apache Spark became a popular tool in a wide range of use cases, thanks to the efficient usage of resources through the MapReduce paradigm coupled with a declarative approach [19]. Dask has gained traction in more recent years, with examples from Earth and Climate sciences [20; 21] and from molecular dynamics [22]. Although streamlining distributed HEP analysis is not a novel idea, since ROOT has been offering a parallel system for analysis called PROOF [23] for many years now, the issue of making this process truly flexible and smooth for final users is yet to be solved.
The recent trend to steer towards more interactive data analysis and access to distributed resources is being picked up in the HEP community. In particular, the ingredients of HEP _analysis facilities_ should include modern programming models, a coherent software stack, abstracting the infrastructure away from the final user and giving a fully interactive analysis experience end-to-end.
An example of an analysis facility developed at CERN is SWAN [24]. This is a web-based platform that all CERN users can access, giving them storage, software and computing power in the same web page. Analysts write their applications in Jupyter notebooks and also have a filesystem with a storage quota at their disposal. If their notebook needs to distribute computations, a Spark cluster at CERN is made available and the user can connect to it via a GUI. Another example is given by a prototype analysis facility in the USA called coffea-casa (University of Nebraska-Lincoln) [25]. This facility leverages Dask to distribute computations. It is built on top of a local Kubernetes cluster and integrates dedicated resources allocated via fairshare through an HTCondor scheduler. Similarly, another analysis facility prototype was developed at Fermilab with the label "Elastic Analysis Facility" [26]. The analysis facility implementation described in this work follows the same trend as the prototypes just mentioned, while aiming to involve the different geographical clusters at INFN with a novel scheduler-client connection system.
Many of the available studies of LHC experiment analysis workflows were carried out with the aim of evaluating the processes in use at the time (e.g. [27]). This work instead offers a one-to-one comparison between the more modern, declarative and interactive approach offered by distributed RDataFrame and the traditional approach used in CMS data analysis.
## 3 Tools
### ROOT
ROOT is the most widely used software framework for storing, analysing, processing, and displaying HEP data. It has seen wide adoption at CERN and several other institutions worldwide connected with it, such as those participating in WLCG.
The framework defines a common data structure and data layout to store HEP datasets, called TTree [28]. Its layout is columnar on the disk, so that different columns can be treated independently. The ROOT I/O subsystem is able to read just a portion of a dataset, to minimize read requests to the filesystem. The minimal amount of information that can be read independently from other parts of the file is called a cluster, which corresponds to a range of entries that can belong to one or more columns. ROOT datasets can be stored and read within the local filesystem of the user machine, but very often are located in remote, distributed storage systems and can be accessed through remote protocols like HTTP or XRootD [29].
The main interface for analysing a TTree (and other data formats) within ROOT is called RDataFrame. With RDataFrame, users can focus on their analysis as a sequence of operations to be performed on the dataset, while the framework takes care of the management of the loop over the entries as well as low-level details such as I/O operations and parallelization, effectively creating a computation graph, that is, a directed graph where each node corresponds to one of the operations to be performed on the data. RDataFrame provides methods to perform the most common operations required by HEP analyses, such as Define to create a new column in the dataset or Histo1D to create a histogram out of a set of values. Other than TTree, the interface supports processing datasets stored in formats like CSV, Apache Arrow or NumPy arrays [30]. Users can also create an empty dataset with a certain amount of rows that can be filled through the operations in the API. This is particularly useful for benchmark and simulation scenarios.
RDataFrame has been built with parallelism in mind. In fact, it is natively able to exploit all cores of a single machine through the implicit multi-threading interface available in ROOT. Moreover, the scalability of this tool is ensured by its distributed version. Indeed, RDataFrame can natively distribute physics computations over multiple nodes by splitting the analysis into tasks and executing them in a MapReduce pattern [31].
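As an illustration of this programming model, the following minimal sketch (with invented tree, file and column names) builds a small computation graph and runs it with implicit multi-threading enabled:

```python
import ROOT

# Use all cores of the local machine through ROOT's implicit multi-threading.
ROOT.ROOT.EnableImplicitMT()

# Build a data frame from a TTree called "Events" stored in a local file
# (tree, file and column names here are placeholders).
df = ROOT.RDataFrame("Events", "data.root")

# Declarative computation graph: new columns, selections and results.
df_sel = (
    df.Define("pt2", "pt * pt")                    # create a new column
      .Filter("nMuon >= 2", "At least two muons")  # keep only interesting events
)
h = df_sel.Histo1D(("pt2", "pt squared", 100, 0.0, 1.0e4), "pt2")

# The event loop is lazy: it runs only when a result is actually accessed.
print("Selected entries:", h.GetEntries())
```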
### Dask
Dask [6] is a Python library that makes it easy to parallelise existing workflows. It is mainly targeted at supporting other common Python analysis tools like NumPy [30] or Pandas [32], but is flexible enough to accommodate any type of computation. Thus, it offers many interfaces for data processing, including machine learning and real-time analysis. In the context of this work, Dask is employed as a distributed scheduler, offering a wide set of configurations thanks to which an application can be scaled to different cluster setups like:
1. Start all the remote nodes from a single machine through SSH.
2. Leverage existing cluster deployments with Kubernetes or YARN.
3. Connect to high performance computing resource managers that implement batch submission systems, like HTCondor, Slurm or PBS.
Two ingredients are necessary in order to distribute computations in a Dask application. The first is the object representing the remote cluster itself, including how many resources will be assigned to it for the duration of the
application. The second is an object representing the connection between the local machine and the remote cluster. This is simply called Client and can be used with any of the different implementations of resource managers available in Dask described above. The Client API allows users to asynchronously launch tasks to the remote cluster.
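A minimal sketch of these two ingredients (using a purely local cluster and invented resource numbers) looks as follows; the same Client code works unchanged when the cluster object is replaced by one of the resource-manager implementations listed above:

```python
from dask.distributed import LocalCluster, Client

# First ingredient: the object representing the cluster and its resources
# (a cluster on the local machine here, with example worker counts).
cluster = LocalCluster(n_workers=4, threads_per_worker=1)

# Second ingredient: the connection between the user session and the cluster.
client = Client(cluster)

# Tasks can now be launched asynchronously on the cluster.
def square(x):
    return x * x

futures = client.map(square, range(10))
print(client.gather(futures))
```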
### XRootD
The XRootD [29] framework is a C++-based suite targeting fast, low latency and scalable data access. Generically it can serve any kind of data that can fit in a hierarchical filesystem-like approach, abstracting away from the particular implementation of the data format. The core functionalities are greatly extended by a rich plugin system. It is widely used in High Energy Physics both for its remote I/O protocol and for the suite of data access tools that allow to expose the presence of large physics datasets from the storage facilities to other nodes of the Grid. The library supports caching data on one node or on a federated system of nodes, a technology that is also referred to as XCache by the community.
ROOT natively supports reading/writing files from/to remote servers via the XRootD protocol, thanks to a plugin of the TFile class. Whenever a user specifies a path that contains the root:// prefix, all I/O transactions for that file are redirected through the XRootD API.
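For example (a sketch with a placeholder server and file path), the same RDataFrame code shown earlier works transparently on a remote file once the root:// prefix is used:

```python
import ROOT

# A path with the root:// prefix makes ROOT route all I/O through XRootD.
# The server and file names below are placeholders.
remote_file = "root://xrootd.example.org//store/user/sample.root"

df = ROOT.RDataFrame("Events", remote_file)
print("Entries read remotely:", df.Count().GetValue())
```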
## 4 The analysis facility paradigm
### Enhancing analysis turnaround
As highlighted in the introduction of this work, the challenges that LHC experiments are expected to deal with are forcing the corresponding communities to rethink their computing models, moving towards more efficient approaches. More specifically, considering the case of CMS, the introduction of the NanoAOD format five years ago (an extremely reduced columnar data format, which still contains all the information on high-level physics objects necessary to run an analysis [33]) pushed towards a shift in the analysis paradigm, allowing for the adoption of a quasi-interactive approach which at the same time delivers a lower time-to-insight. Since LHC experiments share performance needs that can be tackled together, and since their software needs can be made generic enough, recent R&D has aimed at investigating industrial data-science-like approaches for data exploration that are capable of efficiently exploiting the existing resources.
### Federated, distributed, heterogeneous resources
For all its computing operations, CMS exploits the previously introduced WLCG, built up from the different resources provided by the collaboration members. The reason for this is two-fold: on the one hand, it would be difficult to concentrate all CMS operations in a single place due to the high resource demands of the experiment; on the other hand, in this way the experiment can exploit facilities outside of CERN, provided by member institutions from all over the world. Computing centers were originally arranged in a strict tiered structure, with well-defined tasks for each tier: only one Tier-0, around ten Tier-1s, and a large number of Tier-2s. The latter are responsible for providing resources for analysis, even though the differences between Tier-1s and Tier-2s have been blurring in recent years. In addition, CMS members can also exploit available opportunistic (cloud) and specialized (HPC) resources [34].
## 5 Developing a facility model: the strategy
### Implementation pillars
The _analysis facility_ solution we propose is founded on a few pillars: a single central JupyterHub [35] for the data analysis, to which users get access by interacting with a single entrypoint via CMS INDIGO-IAM, deploying their own JupyterLab [36] instance; containerized solutions (Singularity [37]) to allow users to bring their own computational environment, both in the hub and on distributed resources; the possibility to scale computations leveraging distributed computing resources from Italian Tier-2 centers, HPC centers or even opportunistic resources accessible via an SSH connection; and access to experiment data, obtained via the XRootD protocol from the grid or from local XCache instances.
### A declarative and scalable software framework
The usage of a declarative approach is crucial since it allows the analyzer to focus on the physics itself, removing from the user scope all the boilerplate code necessary to access data, loop on events, distribute computation, and aggregate results: the time needed for the user to set up, test, tune and debug each of these steps is non-negligible, and distracts the analyzer from the physics goals. As explained in section 3.1, ROOT's RDataFrame interface offers a declarative solution to efficiently augment and filter data stored in NanoAOD-like file types. The users only need to specify the operations they
want to run on the data, then start the analysis. The execution of the computation graph together with the retrieval of the results happens in the very same Jupyter notebook, as opposed to the various scripts that have to be run asynchronously in the legacy approach. Thus, the user can profit from the interactivity of this approach, running cells multiple times and drawing histograms directly as output of the cells.
### A flexible and distributed computing infrastructure
In this context, the infrastructure of the facility should satisfy two main requirements: on the one hand, it should be easily extensible, allowing it to accommodate multiple users and different analysis needs; on the other hand, it should be capable of exploiting the very same legacy infrastructure of present Tier-2s with no additional hardware requirements. This can be achieved by leveraging Dask and its compatibility with HTCondor. In fact, the Dask-jobqueue [38] library makes it possible to deploy a Dask cluster on top of an HTCondor pool. In such a way, the same resources can be used in a legacy fashion (with a batch computing approach) or quasi-interactively from a notebook after having deployed the necessary Dask cluster.
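A minimal sketch of this mechanism with plain Dask-jobqueue is shown below (the resource values are placeholders; the facility described in the next section adds a custom plugin on top of this to handle its specific HTCondor setup):

```python
from dask_jobqueue import HTCondorCluster
from dask.distributed import Client

# Each HTCondor job hosts one Dask worker; the resource values are examples.
cluster = HTCondorCluster(cores=1, memory="2 GB", disk="1 GB")
cluster.scale(jobs=10)   # submit ten worker jobs to the HTCondor pool

client = Client(cluster)
# From here on, the batch slots are used exactly like any other Dask cluster.
```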
## 6 Exploring distributed RDataFrame on geographic cluster at INFN
### Infrastructure of the analysis facility
The ideas and practices highlighted in Sections 4 and 5 are concretely applied in the creation of an analysis facility, whose infrastructure is depicted in Figure 1. First of all, a JupyterHub [35] instance is deployed on the central Kubernetes cluster hosted at INFN-CNAF (Italy), as well as all the HTCondor central components (Collector, Negotiator, CCB and Schedd). The Dask cluster deployment model presents some peculiarities enabling the shipping of a whole self-contained Dask cluster (meaning both the scheduling and executing parts) to any remote set of resources with "egress" capabilities. In fact, in the presented use case we were able to spawn a Dask cluster on a Tier-2 grid site with minimal changes from the site perspective. A full Dask cluster offloading capability is the key concept of this R&D: it allows for the implementation of an overlay federation mechanism where a heterogeneous set of resource providers can be made available to the users in a seamless fashion, with minimal operational requirements. From a technical perspective, the deployment of the Dask cluster happens on top of HTCondor via the Dask-jobqueue library, enriched with a custom-derived plugin [38]
(integrated with a dedicated Dask Labextension [39]) developed to support an HTCondor pool with specific requirements and to allow the submission of the Dask Scheduler job and the interaction with it. A forwarder service is used to make the Dask HTCondor jobs reachable from the JupyterLab instance via SSH connections. Finally, an HTTP controller service manages the interaction between the Dask Labextension in the JupyterLab instance and the Dask Scheduler. The overlay system implemented through HTCondor plays a key role in ensuring a fair comparison of the presented results: it allows using the very same infrastructural setup and the very same configuration, changing only the software that runs on top.
### Prototype usage of the infrastructure
Figure 1: Simple schema of the INFN Analysis Facility prototype.

The proposed infrastructure can support analysis workflows implemented in various ways: in particular, it still supports the legacy batch-like approach while at the same time enabling more modern distributed workflows with RDataFrame. For the latter case, users can deploy a Dask cluster autonomously through a GUI (the Dask Labextension previously mentioned). This provides a few options, for example selecting the desired computing site or the container image for the distributed workers. Once selected, the system will submit a Dask scheduler job to the selected HTCondor pool (which in turn can exploit available Tier-2 sites, opportunistic resources, HPC facilities, etc.). Once the job is running, via the same extension, the user can scale up the cluster by submitting Dask worker jobs. As for data access, a VOMS [40] proxy file needs to be uploaded to the workers via a Dask plugin. The user can also replicate data on the grid to a desired site via Rucio [41; 42].
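A minimal sketch of such a proxy-distribution plugin is shown below (the class, paths and environment handling are illustrative assumptions, not the actual plugin used in the facility):

```python
import os
from dask.distributed import WorkerPlugin

class VomsProxyPlugin(WorkerPlugin):
    """Ship a VOMS proxy file to every worker and point the environment at it.

    Illustrative sketch only: the real facility uses its own plugin and the
    paths below are placeholders.
    """

    def __init__(self, local_proxy_path):
        # Read the proxy on the client side; the bytes travel with the plugin.
        with open(local_proxy_path, "rb") as f:
            self.proxy_bytes = f.read()

    def setup(self, worker):
        proxy_path = os.path.join(worker.local_directory, "x509_proxy")
        with open(proxy_path, "wb") as f:
            f.write(self.proxy_bytes)
        os.chmod(proxy_path, 0o600)              # the proxy must stay private
        os.environ["X509_USER_PROXY"] = proxy_path

# client.register_worker_plugin(VomsProxyPlugin("/tmp/x509up_u1000"))
```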
## 7 Commissioning and first benchmarks with a real use case
In order to obtain a first benchmark of this infrastructure and approach, a real use case has been chosen. More specifically, the very same CMS analysis has been implemented using a legacy batch-like approach and an RDataFrame-based one, and both workflows have been run on the presented facility and compared. In this section, the details of the use case are presented and the metrics used for the comparison benchmark are detailed. Finally, the results of the comparison are presented and discussed.
### The analysis use case
This study takes into account a production-grade analysis with the CMS detector which runs, for one data taking year, over nearly 700 million Monte Carlo (MC) events (produced by the CMS Collaboration), of which more than 300k populate the final histograms considered in the statistical analysis.
### Legacy approach
The legacy approach of this analysis is based on a two-step procedure (see the left column of Figure 2): a preselection step, where the original files are skimmed (using a selection based on triggers and loose requirements on objects) and corrections are computed, producing reduced flat ROOT files; and a postselection step, in which the proper analysis is run, with the production of histograms, for each systematic variation, to be used for the final statistical analysis. The preselection part exploits the NanoAOD-tools [12] suite, a collection of Python-based analysis modules orchestrated by a post-processor, developed by the CMS Collaboration. The postselection part is run using simple PyROOT scripts, with some helper functions from NanoAOD-tools. Both are parallelized using a simple HTCondor submission procedure. The postselection step also needs a subsequent local merging procedure to aggregate the output from different jobs, as well as a local histogramming step.
### RDataFrame approach
The new RDataFrame-based approach keeps the same workflow, in order to achieve a one-to-one mapping to the legacy approach (as depicted in Figure 2): the legacy Python-based modules and functions have been translated to C++ functions that manipulate ROOT's RVec objects, also exploiting an existing solution developed inside the CMS Collaboration for jet and MET corrections [43]. This made it possible to use a distributed RDataFrame approach, with Dask as backend, for both steps of the analysis. The MapReduce nature of distributed RDataFrame computations makes any merging step unnecessary, allowing the final histograms, including all systematic variations, to be produced directly in a single event loop. In order to use Dask as a backend for the distributed execution of ROOT's RDataFrame computations from a notebook, the user needs to instantiate a Dask client and use it in the definition of a distributed RDataFrame.
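In code, this boils down to something like the following sketch (scheduler address, dataset names and partition count are placeholders; the distributed RDataFrame module is provided by recent ROOT versions as an experimental feature):

```python
import ROOT
from dask.distributed import Client

# Connection to the Dask cluster previously deployed on the HTCondor pool
# (the scheduler address below is a placeholder).
client = Client("tcp://dask-scheduler.example.org:8786")

# Dask-backed distributed RDataFrame: same declarative API, remote execution.
DaskRDataFrame = ROOT.RDF.Experimental.Distributed.Dask.RDataFrame
files = ["root://xrootd.example.org//store/mc/sample_%d.root" % i for i in range(4)]
df = DaskRDataFrame("Events", files, daskclient=client, npartitions=12)

h = df.Define("pt2", "pt * pt").Histo1D(("pt2", "pt squared", 100, 0.0, 1.0e4), "pt2")
print("Histogram entries:", h.GetEntries())  # triggers the distributed event loop
```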
### Benchmark procedure and metrics
Figure 2: One-to-one mapping between legacy and new approaches.

The testbed used for this benchmark measurement is a 3-node HTCondor pool deployed at the Tier-2 data center of the LNL laboratories in Legnaro, Italy [44]. Each node is a Dell R430 server with the following properties: two Intel Xeon E5-2640 v3 @2.60 GHz with 8 physical cores (16 logical) each, 128 GB of RAM, 1 TB of spinning disk storage and one Broadcom BCM5720 ethernet controller at 1 Gb/s. Each node features Telegraf [45] sensors that inject metric time-series into a dedicated InfluxDB [46] instance (with 1-minute granularity). In this way, the metric values are accessible via web through an interactive dashboard. These metrics include: CPU usage percentage; amount of occupied memory; cumulated amount of data read from the network and its first derivative (network read throughput). Moreover, monitoring scripts were also added to the single jobs in order to have complete information about the execution. More specifically, for each HTCondor job and Dask task, the overall and event-loop time durations are retrieved, and the CPU usage and memory occupancy (of the specific process) as functions of time are obtained leveraging the psutil Python library [47] and saved in .csv files (a minimal sketch of such a per-process monitor is given after the list of metrics below). Combining the information from the dashboard and from the single jobs, the comparison can be made on the basis of the following metrics.
* Overall execution time: time elapsed from the start of the distributed analysis execution to the end of it. This quantifies the actual analysis time experienced by the user. A derived quantity is the overall rate (events analyzed per second), which is the ratio between the total number of events analyzed and the overall execution time.
* Network read: per-node information about the total amount of bytes read from the network during the execution, taken from the dashboard. This information is then summed up across all the nodes, making it possible to monitor whether the tool efficiently reads only what is actually needed for the analysis. This measurement is crucial since the future CMS data management model (the so-called Data Lake) will strongly rely on a cache layer to distribute data to the computing centers: a minimal data read directly translates into lower demands on cache performance, as well as into higher CPU efficiency.
* Absolute memory occupancy: this value is directly taken from the dashboard for each node, and then it is averaged across the execution time and across all the available nodes. This metric is monitored to ensure that new approaches do not introduce any unsustainable increase in resource usage.
* Job rate, which indicates the actual throughput of this approach: this value is obtained as \[\mathrm{rate}=\frac{\sum_{i}\#\mathrm{events}_{i}}{\sum_{i}t_{i}}\] (1)
where \(i\) is the index of the \(i\)-th job, \(t_{i}\) its time duration, and \(\#\mathrm{events}_{i}\) the number of events read by it. Depending on the way \(t_{i}\) is computed, one can obtain the rate quantity either including (job rate) or excluding (job event-loop rate) the script initialization time. This metric is chosen since it measures the throughput of the approach with minimal dependence on the job-splitting pattern or the cluster size.
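To make this concrete, the following minimal sketch shows one possible way to implement the per-job monitoring and the job-rate aggregation of Eq. (1); the CSV layout, column names and helper functions are illustrative assumptions, not the actual monitoring scripts.

```python
# Minimal sketch of per-job monitoring (psutil) and job-rate aggregation (Eq. 1).
# File layout and the stop_flag callable are illustrative placeholders.
import csv
import time
import psutil

def monitor(outfile, stop_flag, interval=1.0):
    """Periodically sample CPU usage and memory of the current process into a CSV file."""
    proc = psutil.Process()
    with open(outfile, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "cpu_percent", "rss_bytes"])
        while not stop_flag():
            writer.writerow([time.time(),
                             proc.cpu_percent(interval=interval),   # blocks for `interval` seconds
                             proc.memory_info().rss])

def job_rate(records):
    """records: iterable of (n_events, duration_seconds) per job/task, as in Eq. (1)."""
    total_events = sum(n for n, _ in records)
    total_time = sum(t for _, t in records)
    return total_events / total_time
```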
The benchmark comparison considers only the distributed steps, preselection and postselection, each evaluated separately: more specifically, the latter considers 3 different kinematic variables and 30 different systematic variations (of which 8 modify the topology of the event, and thus require additional event loops in the legacy approach).
### Results
The target of the benchmark is the analysis of MC samples, simulating 2017 data-taking operating conditions, for a total of 656978035 events, divided into 1274 nanoAOD files, summing up to around 1.1 TB. The legacy approach implements, for both preselection and postselection, one job per file, while the RDataFrame-based approach is applied using a number of partitions (i.e. Dask tasks) approximately equal to three times the number of workers in the Dask cluster. HTCondor legacy jobs require 1 CPU, while the CPU resources taken up by the Dask scheduler job and by each Dask worker job are, respectively, 4 CPUs and 1 CPU. Input and output data, for all analysis steps, are stored at the LNL laboratories Tier-2 and are accessed via the XRootD protocol.
First of all, for each scenario, the per-job (per-task) CPU usage and memory consumption were checked in order to detect pathological or wrong behaviors, like memory leaks.
Figure 3 shows CPU usage and memory occupancy, as functions of time, of one example job or task for the legacy and RDataFrame-based preselection, as retrieved by the psutil [47] Python library. As one can see, in both cases 100% CPU usage is reached, but the RDataFrame-based preselection task presents a noticeable oscillation in the second part of the execution, which is related to the saturation of the bandwidth (as will be highlighted in the following), while such oscillations are smaller in the legacy case. As for the occupied memory, an ascending behaviour in the first seconds of execution is detected, which can be explained by the loading of all necessary functions and libraries, as well as by the reading of the first chunk of data. After this initial phase, the memory value remains approximately stable during the execution.
Figure 4 shows the same quantities for the postselection scenario: here too, for both approaches, 100% CPU usage is reached during the proper event-loop execution, with no significant oscillating behaviour, in contrast with the preselection case. Correspondingly, the memory usage of both approaches stabilizes at lower values than in the preselection case.
Figure 3: CPU usage and memory consumption for one job or task of legacy (left column) and RDataFrame-based (right column) preselection scenario.
Figure 4: CPU usage and memory consumption for one job or task of legacy (left column) and RDataFrame-based (right column) main postselection scenario.
Afterwards, the overall behaviour of the execution was checked by looking at the dashboard values. More specifically, figure 5 shows the CPU usage and the network throughput, as functions of the time of execution, reported for each of the 3 nodes (represented by lines of different colours), in the case of legacy and RDataFrame-based preselection. As one can see, in both cases the overall execution does not reach 100% of CPU usage: in the case of RDataFrame, this is explained by the network read throughput, which clearly reaches a plateau at around 120 MB/s, corresponding to the nominal throughput of the network interface on the node, namely 1 Gb/s; in the case of legacy, no saturation in bandwidth is detected.
Figure 6 shows the same quantities for the postselection scenario. In this case, for both approaches, the CPU usage is nearly 100% for most of the execution and no saturation in the bandwidth is detected.
A set of 3 measurements for each scenario and approach was performed and the metric values were recorded: then, for each metric, the average was taken as the estimated value, with the maximum semi-dispersion as its error. Results, presented separately for each scenario, are shown in table 1.
Figure 5: Per-node (differently-coloured lines) CPU usage and network read throughput for legacy (left column) and RDataFrame-based (right column) preselection scenario, when using 96 logical CPUs (48 physical).
**PRESELECTION - 96 logical CPUs (48 physical)**

| Metrics | Legacy | New |
| --- | --- | --- |
| Overall time [min] | \(164.18\pm 0.08\) | \(21.9\pm 0.8\) |
| Overall rate [Hz] | \(66.69\mathrm{k}\pm 0.03\mathrm{k}\) | \(500\mathrm{k}\pm 18\mathrm{k}\) |
| Job rate [Hz] | \(859\pm 1\) | \(7371\pm 79\) |
| Job event-loop rate [Hz] | \(951\pm 2\) | \(8148\pm 92\) |
| Network read [GB] | \(484.6\pm 0.6\) | \(353.37\pm 0.08\) |
| Average per-node memory occupancy [GB] | \(18.35\pm 0.08\) | \(29.6\pm 0.2\) |

**POSTSELECTION - 96 logical CPUs (48 physical)**

| Metrics | Legacy | New |
| --- | --- | --- |
| Overall time [min] | \(46.7\pm 0.1\) | \(11.8\pm 0.4\) |
| Overall rate [Hz] | \(4.72\mathrm{k}\pm 0.01\mathrm{k}\) | \(18.8\mathrm{k}\pm 0.6\mathrm{k}\) |
| Job rate [Hz] | \(63.04\pm 0.06\) | \(303\pm 9\) |
| Job event-loop rate [Hz] | \(65.82\pm 0.08\) | \(366\pm 13\) |
| Network read [GB] | \(84.4\pm 0.1\) | \(17.56\pm 0.06\) |
| Average per-node memory occupancy [GB] | \(8.86\pm 0.07\) | \(28.3\pm 0.3\) |

Table 1: Benchmark results for legacy and new approaches, for both preselection and postselection scenarios.
Figure 6: Per-node (differently-coloured lines) CPU usage and network read throughput for legacy (left column) and RDataFrame-based (right column) postselection scenario, when using 96 logical CPUs (48 physical).
Additionally, a check has been made in order to test whether the preselection RDataFrame performance is actually limited by the bandwidth, and thus whether the related results represent a lower limit. Indeed, when manually lowering the number of CPUs (and thus Dask workers) per node from 32 logical (16 physical) to 8 logical and physical CPUs, one obtains the results shown in figure 7. As expected, keeping the same number of tasks, the RDataFrame-based CPU usage rises, reaching a plateau at over 80% (the node with lower CPU usage is affected by the presence of the Dask scheduler), while no saturation effect is present: the network read throughput averages at 60-70 MB/s.
Figure 7: Per-node (differently-coloured lines) CPU usage and network read throughput for RDataFrame-based (right column) preselection scenario, when using 24 logical CPUs (24 physical).
### Discussion
Considering the results shown in table 1, one can conclude that the RDataFrame-based approach outperforms the legacy one in terms of time and job rate in every scenario. More specifically, considering the preselection and postselection scenarios as a whole, one can see that, moving to the new approach, a factor-six speedup is achieved, corresponding to a net 84% reduction of overall execution time. This can be justified by both the pure increase of the average job rate (around 8.6 times and 4.8 times higher for RDataFrame-based preselection and postselection, respectively) and the
higher efficiency in task distribution and data read of the method itself. This means that, for an analysis similar to the one that is discussed here, a physicist can analyze, in a given time window, a factor of 6 more simulated events with respect to the old method. Furthermore, there is a reduction in network read of about 35%. This is expected for two main reasons: on one hand, the legacy preselection step needs each job to download the full necessary nanoAOD-tools repository branch, which is worth 101 MB (summing up to 129 GB, and therefore accounting for most of the difference in network read for preselection); on the other hand, data is read 9 times in the case of legacy postselection (once for nominal values and once for each non-event-weight systematic variation), whereas, in the case of RDataFrame, just one event loop is performed. Moreover, the memory occupancy of the new approach remains bearable for such a system, since the overall node memory is 128 GB. Finally, the bandwidth-saturating behavior of RDataFrame-based preselection shows that this approach pushes the I/O capabilities of the node to the limit, as confirmed by the aforementioned additional check.
## 8 Conclusions and future outlook
The future computational challenges that the High Energy Physics communities have to deal with are pushing towards intensive R&D activities, which also include the search for new, efficient data analysis approaches and ways of accessing resources: this translates into the adoption of declarative tools and flexible infrastructures. In this work we present the _analysis facility_ model implemented and deployed on INFN resources, which offers a way of running CMS data analyses by accessing a single customizable JupyterLab environment and scaling up the computation on Italian geographically distributed Tier-2 resources, with the possibility to implement both batch-like approaches and distributed RDataFrame-based workflows, taking advantage of Dask and custom plugins. This infrastructure has been tested and benchmarked on a real CMS analysis, implemented using both a legacy and an RDataFrame-based approach. The two were run on the very same resources and compared on the basis of several metrics. More specifically, the modern RDataFrame approach proves to be 6 times faster, while reducing the network read by more than 30%. Projecting these numbers onto a hypothetical HL-LHC scenario with the same analysis, the new approach could therefore bring a CPU resource saving of around an order of magnitude with respect to the legacy one for the same time-to-insight, also opening up the possibility of running the analysis in a single step, with a different impact on resource scheduling with respect to a pure batch system (spikes of utilization of many resources instead of long-lasting jobs on few resources) and on the end-user experience. If further studies confirm this order-of-magnitude gain on a broad spectrum of different analyses (and the adoption of NanoAODs becomes even wider throughout the Collaboration), the effort of changing the CMS analysis model could be justified. More specifically, the effort would reside only in the adoption of declarative data analysis tools, since the _analysis facility_ concept makes it possible to seamlessly exploit the currently available WLCG infrastructure: only the software would need to be rewritten. On the one hand, this demonstrates the strategic importance of R&D activities for CMS, given their actual impact; on the other hand, it motivates further studies that will be funded by CMS in the future, possibly including the test of different tools and the integration of legacy interfaces with the modern backends.
## 9 Acknowledgments
The authors thank the CMS Collaboration: in particular, the CMS Offline Software and Computing groups for the technical discussions and the CMS Physics Coordination groups for the physics-related discussions, which both helped the development of this work, as well as all the other members of the Collaboration, who contributed to preparing, producing, and curating the simulated data and part of the code considered in this work.
The authors of this work are funded by their respective affiliations, and this research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.
|
2304.13337
|
Nominal Topology for Data Languages
|
We propose a novel topological perspective on data languages recognizable by
orbit-finite nominal monoids. For this purpose, we introduce pro-orbit-finite
nominal topological spaces. Assuming globally bounded support sizes, they
coincide with nominal Stone spaces and are shown to be dually equivalent to a
subcategory of nominal boolean algebras. Recognizable data languages are
characterized as topologically clopen sets of pro-orbit-finite words. In
addition, we explore the expressive power of pro-orbit-finite equations by
establishing a nominal version of Reiterman's pseudovariety theorem.
|
Fabian Birkmann, Stefan Milius, Henning Urbat
|
2023-04-26T07:11:44Z
|
http://arxiv.org/abs/2304.13337v2
|
# Nominal Topology for Data Languages
###### Abstract
We propose a novel topological perspective on data languages recognizable by orbit-finite nominal monoids. For this purpose, we introduce pro-orbit-finite nominal topological spaces. Assuming globally bounded support sizes, they coincide with nominal Stone spaces and are shown to be dually equivalent to a subcategory of nominal boolean algebras. Recognizable data languages are characterized as topologically clopen sets of pro-orbit-finite words. In addition, we explore the expressive power of pro-orbit-finite equations by establishing a nominal version of Reiterman's pseudovariety theorem.
Keywords: Nominal sets, Stone duality, Profinite space, Data languages
of monadic second-order logic with equality tests [13], _single-use register automata_[10] (both one-way and two-way), and _orbit-finite regular list functions_[10]. In addition, several landmark results from the algebraic theory of regular languages, namely the McNaughton-Papert-Schutzenberger theorem [29, 41], the Krohn-Rhodes theorem [25], and Eilenberg's variety theorem [14] have been extended to recognizable data languages [7, 10, 13, 45].
In the present paper, we investigate recognizable data languages through the lens of _topology_, thereby providing a further bridge to classical regular languages. The topological approach to the latter is closely tied to the algebraic one, which regards regular languages as the languages recognizable by finite monoids. Its starting point is the construction of the topological space \(\widehat{\Sigma^{*}}\) of _profinite words_. Informally, this space casts all information represented by regular languages over \(\Sigma\) and their recognizing monoids into a single mathematical object. Regular languages can then be characterized by purely topological means: they may be interpreted as precisely the clopen subsets of \(\widehat{\Sigma^{*}}\), in such a way that algebraic recognition by finite monoids becomes a continuous process. Properties of regular languages are often most conveniently classified in terms of the topological concept of _profinite equations_, that is, equations between profinite words; see [3, 4, 33] for a survey of profinite methods in automata theory. Moreover, since \(\widehat{\Sigma^{*}}\) forms a Stone space, the power of _Stone duality_ - the dual equivalence between Stone spaces and boolean algebras - becomes available. This allows for the use of duality-theoretic methods for the study of regular languages and their connection to logic and model theory, which in part even extend to _non-regular_ languages [18, 19, 20, 21, 35].
On a conceptual level, the topological view of regular languages rests on a single category-theoretic fact: Stone spaces admit a universal property. In fact, they arise from the category of finite sets as the free completion under codirected limits, a.k.a. its _Pro-completion_:
\[\mathbf{Stone}\simeq\mathrm{Pro}(\mathbf{Set}_{\mathrm{f}}). \tag{1.1}\]
In the world of data languages, the role of finite sets is taken over by orbit-finite nominal sets. This strongly suggests basing a topological approach on their free completion \(\mathrm{Pro}(\mathbf{Nom}_{\mathrm{of}})\). However, this turns out to be infeasible: the category \(\mathrm{Pro}(\mathbf{Nom}_{\mathrm{of}})\) is not concrete over nominal sets (Proposition 3.6), hence it cannot be described via any kind of nominal topological spaces. This is ultimately unsurprising given that the description (1.1) of Stone spaces as a free completion depends on the axiom of choice, which is well-known to fail in the topos of nominal sets. As a remedy, we impose _global bounds_ on the support sizes of nominal sets, that is, we consider the categories \(\mathbf{Nom}_{k}\) and \(\mathbf{Nom}_{\mathrm{of},k}\) of (orbit-finite) nominal sets where every element has a support of size at most \(k\), for some fixed natural number \(k\). This restriction is natural from an automata-theoretic perspective, as it corresponds to imposing a bound \(k\) on the number of registers of automata, and it fixes exactly the issue making unrestricted nominal sets non-amenable (Lemma 3.9). Let us emphasize, however, that the category \(\mathbf{Nom}_{k}\) is not proposed as a new foundation for names and variable binding; for instance, it generally fails to be a topos.
The first main contribution of our paper is a generalization of (1.1) to \(k\)-bounded nominal sets. For this purpose we introduce _nominal Stone spaces_, a suitable nominalization of the classical concept, and prove that \(k\)-bounded nominal Stone spaces form the Pro-completion of the category of \(k\)-bounded orbit-finite sets. We also derive a nominal version of Stone duality, which relates \(k\)-bounded nominal Stone spaces to _locally \(k\)-atomic orbit-finitely complete nominal boolean algebras_. Hence we establish the following equivalences of categories:
\[\mathbf{nC}_{\mathrm{of}}\mathbf{A}_{lk}\mathbf{BA}\simeq^{\mathrm{op}}\mathbf{nStone}_{k}\simeq\mathrm{Pro}(\mathbf{Nom}_{\mathrm{of},k}).\]
The above equivalences are somewhat remarkable since even the category of \(k\)-bounded nominal sets does not feature choice. They hold because the presence of bounds allows us to
reduce topological properties of nominal Stone spaces, most notably compactness, to their classical counterparts.
Building on the above topological foundations, which we regard to be of independent interest, we subsequently develop first steps of a topological theory of data languages. Specifically, we introduce nominal Stone spaces of (bounded) _pro-orbit-finite words_ and prove their clopen subsets to correspond to data languages recognizable by bounded equivariant monoid morphisms, generalizing the topological characterization of classical regular languages (Theorem 5.14). Moreover, we investigate the expressivity of _pro-orbit-finite equations_ and show that they model precisely classes of orbit-finite monoids closed under finite products, submonoids, and _multiplicatively support-reflecting quotients_ (Theorem 6.8). This provides a nominal version of Reiterman's celebrated pseudovariety theorem [37] for finite monoids.
Related work.The perspective taken in our paper draws much of its inspiration from the recent categorical approach to algebraic recognition based on monads [8, 39, 43]. The importance of Pro-completions in algebraic language theory has been isolated in the work of Chen et al. [12] and Urbat et al. [43]. In the latter work the authors introduce _profinite monads_ and present a general version of Eilenberg's variety theorem parametric in a given Stone-type duality. The theory developed there applies to algebraic base categories, but not to the category of nominal sets.
Our version of nominal Stone duality builds on the orbit-finite restriction of the duality between nominal sets and complete atomic nominal boolean algebras due to Petrisan [17]. It is fundamentally different from the nominal Stone duality proposed by Gabbay, Litak, and Petrisan [16], which relates _nominal Stone spaces with \(\mathsf{M}\)_ to _nominal boolean algebras with \(\mathsf{W}\)_. The latter duality is not amenable for the theory of data languages; see Remark 3.17.
Reiterman's pseudovariety theorem has recently been generalized to the level of finite algebras for a monad [1, 12] and, in a more abstract disguise, finite objects in a category [30]. For nominal sets, varieties of algebras over binding signatures have been studied by Gabbay [16] and by Kurz and Petrisan [27], resulting in nominal Birkhoff-type theorems [6]. Urbat and Milius [45] characterize classes of orbit-finite monoids called _weak_ pseudovarieties by sequences of nominal word equations. This gives a nominal generalization of the classical Eilenberg-Schutzenberger theorem [15], which in fact is a special case of the general HSP theorem in [30]. Nominal pro-orbit-finite equations as introduced in the present paper are strictly more expressive than sequences of nominal word equations (Example 6.11), hence our nominal Reiterman theorem is not equivalent to the nominal Eilenberg-Schutzenberger theorem. Moreover, we note that the nominal Reiterman theorem does not appear to be an instance of any of the abstract categorical frameworks mentioned above.
## 2 Preliminaries
We assume that readers are familiar with basic notions from category theory, e.g. functors, natural transformations, and (co)limits, and from point-set topology, e.g. metric and topological spaces, continuous maps, and compactness. In the following we recall some facts about Pro-completions, the key categorical concept underlying our topological approach to data languages. Moreover, we give a brief introduction to the theory of nominal sets [36].
Pro-completions.A small category \(I\) is _cofiltered_ if (i) \(I\) is non-empty, (ii) for every pair of objects \(i,j\in I\) there exists a span \(i\gets k\to j\), and (iii) for every pair of parallel arrows \(f,g\colon j\to k\), there exists a morphism \(h\colon i\to j\) such that \(f\cdot h=g\cdot h\). Cofiltered preorders are called _codirected_; thus a preorder \(I\) is codirected if \(I\neq\emptyset\) and every pair \(i,j\in I\) has a lower bound \(k\leq i,j\). For instance, every meet-semilattice with bottom is codirected. A diagram
\(D\colon I\to\mathbf{C}\) in a category \(\mathbf{C}\) is _cofiltered_ if its index category \(I\) is cofiltered. A _cofiltered limit_ is a limit of a cofiltered diagram. _Codirected limits_ are defined analogously. The two concepts are closely related: a category has cofiltered limits iff it has codirected limits, and a functor preserves cofiltered limits iff it preserves codirected limits [2, Cor. 1.5]. The dual concept is that of a _filtered colimit_ or a _directed colimit_, respectively.
1. In the category \(\mathbf{Set}\) of sets and functions, every filtered diagram \(D\colon I\to\mathbf{Set}\) has a colimit cocone \(c_{i}\colon D_{i}\to\operatorname{colim}D\) (\(i\in I\)) given by \(\operatorname{colim}D=\big{(}\coprod_{i\in I}D_{i}\big{)}/\sim\) and \(c_{i}(x)=[x]_{\sim}\), where the equivalence relation \(\sim\) on the coproduct (i.e. disjoint union) \(\coprod_{i\in I}D_{i}\) relates \(x\in D_{i}\) and \(y\in D_{j}\) iff there exist morphisms \(f\colon i\to k\) and \(g\colon j\to k\) in \(I\) such that \(Df(x)=Dg(y)\).
2. Every cofiltered diagram \(D\colon I\to\mathbf{Set}\) has a limit whose cone \(p_{i}\colon\,\lim D\to D_{i}\) (\(i\in I\)) is given by the _compatible families_ of \(D\) and projection maps: \(\lim D=\{(x_{i})_{i\in I}\mid x_{i}\in D_{i}\text{ and }Df(x_{i})=x_{j}\text{ for all }f \colon i\to j\text{ in }I\}\quad\text{and}\quad p_{j}((x_{i})_{i\in I})=x_{j}\).
3. In the category \(\mathbf{Top}\) of topological spaces and continuous maps, the limit cone of a cofiltered diagram \(D\colon I\to\mathbf{Top}\) is formed by taking the limit in \(\mathbf{Set}\) and equipping \(\lim D\) with the _initial topology_, viz. the topology generated by the basic open sets \(p_{i}^{-1}[U_{i}]\) for \(i\in I\) and \(U_{i}\subseteq D_{i}\) open.
An object \(C\) of a category \(\mathbf{C}\) is _finitely copresentable_ if the contravariant hom-functor \(\mathbf{C}(-,C)\colon\mathbf{C}^{\mathrm{op}}\to\mathbf{Set}\) preserves directed colimits. In more elementary terms, this means that for every codirected diagram \(D\colon I\to\mathbf{C}\) with limit cone \(p_{i}\colon L\to D_{i}\) (\(i\in I\)),
1. every morphism \(f\colon L\to C\) factorizes as \(f=g\circ p_{i}\) for some \(i\in I\) and \(g\colon D_{i}\to C\), and
2. the factorization is essentially unique: given another factorization \(f=h\cdot p_{i}\), there exists \(j\leq i\) such that \(g\cdot D_{j,i}=h\cdot D_{j,i}\).
A _Pro-completion_ of a small category \(\mathbf{C}\) is a free completion under codirected (equivalently cofiltered) limits. It is given by a category \(\operatorname{Pro}(\mathbf{C})\) with codirected limits together with a full embedding \(E\colon\mathbf{C}\hookrightarrow\operatorname{Pro}(\mathbf{C})\) satisfying the following universal property:
1. every functor \(F\colon\mathbf{C}\to\mathbf{D}\), where \(\mathbf{D}\) has codirected limits, extends to a functor \(\vec{F}\colon\operatorname{Pro}(\mathbf{C})\to\mathbf{D}\) that preserves codirected limits and satisfies \(F=\vec{F}\circ E\);
2. \(\vec{F}\) is essentially unique: For every functor \(G\) that preserves codirected limits and satisfies \(F=G\circ E\), there exists a natural isomorphism \(\alpha\colon\vec{F}\cong G\) such that \(\alpha E=\operatorname{id}_{F}\).
The universal property determines \(\operatorname{Pro}(\mathbf{C})\) uniquely up to equivalence of categories. We note that every object \(EC\) (\(C\in\mathbf{C}\)) is finitely copresentable in \(\operatorname{Pro}(\mathbf{C})\), see e.g. [1, Thm A.4]. The dual of Pro-completions are _Ind-completions_: free completions under _directed colimits_.
The Pro-completion \(\operatorname{Pro}(\mathbf{Set}_{\mathrm{f}})\) of the category of finite sets is the full subcategory of \(\mathbf{Top}\) given by _profinite spaces_ (topological spaces that are codirected limits of finite discrete spaces). Profinite spaces are also known as _Stone spaces_ or _boolean spaces_ and can be characterized by topological properties: they are precisely compact Hausdorff spaces with a basis of clopen sets. This equivalent characterization depends on the axiom of choice (or rather the ultrafilter theorem, a weak form of choice), as does _Stone duality_, the dual equivalence between the categories of Stone spaces and boolean algebras. The duality maps a Stone space to its boolean algebra of clopen sets, equipped with the set-theoretic boolean operations. Its inverse maps a boolean algebra to the set of ultrafilters (equivalently, prime filters) on it, equipped with a suitable profinite topology.
Profinite words. The topological approach to classical regular languages is based on the space \(\widehat{\Sigma^{*}}\) of _profinite words_ over the alphabet \(\Sigma\). This space is constructed as the codirected limit of all finite quotient monoids of \(\Sigma^{*}\), the free monoid of finite words generated by \(\Sigma\). Formally, let \(\Sigma^{*}\!\downarrow\mathbf{Mon}_{\mathrm{f}}\) denote the codirected poset of all surjective monoid morphisms \(e\colon\Sigma^{*}\twoheadrightarrow M\) with finite codomain (identified up to isomorphism), ordered by \(e^{\prime}\leq e\) iff \(e\) factorizes through \(e^{\prime}\). The space \(\widehat{\Sigma^{*}}\) is the limit in \(\mathbf{Stone}\) of the codirected diagram sending \(e\colon\Sigma^{*}\twoheadrightarrow M\) to the finite discrete space \(M\). Equivalently, \(\widehat{\Sigma^{*}}\) is the metric completion of \(\Sigma^{*}\) with respect to the profinite metric \(d(v,w)=\sup\{\,2^{-|M|}\mid\text{the finite monoid }M\text{ separates }v,w\,\}\), where \(M\) _separates_ \(v,w\) if \(h(v)\neq h(w)\) for some monoid morphism \(h\colon\Sigma^{*}\to M\).
Nominal sets. Fix a countably infinite set \(\mathbb{A}\) of _names_, and let \(\operatorname{Perm}\mathbb{A}\) denote the group of finite permutations of \(\mathbb{A}\); for a finite set \(S\subseteq\mathbb{A}\), let \(\operatorname{Perm}_{S}\mathbb{A}\) be the subgroup of permutations fixing \(S\) pointwise. A _nominal set_ is a set \(X\) equipped with a group action of \(\operatorname{Perm}\mathbb{A}\) such that every element \(x\in X\) has a finite _support_, i.e. a finite set \(S\subseteq\mathbb{A}\) with \(\pi\cdot x=x\) for all \(\pi\in\operatorname{Perm}_{S}\mathbb{A}\); every \(x\) then has a least support \(\operatorname{supp}x\). The _orbit_ of \(x\) is \(\operatorname{orb}x=\{\pi\cdot x\mid\pi\in\operatorname{Perm}\mathbb{A}\}\), and the _\(S\)-orbit_ \(\operatorname{orb}_{S}x\) is defined analogously with respect to \(\operatorname{Perm}_{S}\mathbb{A}\); a nominal set is _orbit-finite_ if it has finitely many orbits. We write \(\mathcal{P}_{\mathrm{fs}}X\) for the nominal set of finitely supported subsets of \(X\).
A map \(f\colon X\to Y\) between nominal sets is _finitely supported_ if there exists a finite set \(S\subseteq\mathbb{A}\) such that \(f(\pi\cdot x)=\pi\cdot f(x)\) for all \(x\in X\) and \(\pi\in\operatorname{Perm}_{S}\mathbb{A}\), and _equivariant_ if it is supported by \(S=\emptyset\). Equivariant maps satisfy \(\operatorname{supp}f(x)\subseteq\operatorname{supp}x\) for all \(x\in X\). Nominal sets and equivariant maps form a category \(\operatorname{\mathbf{Nom}}\), with the full subcategory \(\operatorname{\mathbf{Nom}}_{\text{of}}\) of orbit-finite nominal sets. The category \(\operatorname{\mathbf{Nom}}\) is complete and cocomplete. Colimits and finite limits are formed like in \(\operatorname{\mathbf{Set}}\); general limits are formed by taking the limit in \(\operatorname{\mathbf{Set}}\) and restricting to finitely supported elements. The category \(\operatorname{\mathbf{Nom}}_{\text{of}}\) is closed under finite limits and finite colimits in \(\operatorname{\mathbf{Nom}}\). _Quotients_ and _subobjects_ in \(\operatorname{\mathbf{Nom}}\) are represented by surjective and injective equivariant maps. Every equivariant map \(f\) has an image factorization \(f=m\cdot e\) with \(m\) injective and \(e\) surjective; we call \(e\) the _coimage_ of \(f\).
A nominal set is _strong_ if for all \(x\in X\) and \(\pi\in\operatorname{Perm}\mathbb{A}\) one has \(\pi\cdot x=x\) iff \(\pi\in\operatorname{Perm}_{S}\mathbb{A}\), where \(S=\operatorname{supp}x\). (Note that the "if" direction holds in every nominal set.) For example, the nominal set \(\mathbb{A}^{\#n}=\{f\colon n\to\mathbb{A}\mid f\text{ injective}\}\) with pointwise action is strong and has a single orbit. Up to isomorphism, (orbit-finite) strong nominal sets are precisely (finite) coproducts of such sets.
## 3 Nominal Stone Spaces
In this section, we establish the topological foundations for our pro-orbit-finite approach to data languages. We start by recalling the basic definitions of nominal topology [17, 32].
**Definition 3.1**.:
1. A _nominal topology_ on a nominal set \(X\) is an equivariant subset \(\mathcal{O}_{X}\subseteq\mathcal{P}_{\text{fs}}X\) closed under finitely supported union (that is, if \(\mathcal{U}\subseteq\mathcal{O}_{X}\) is finitely supported then \(\bigcup\mathcal{U}\in\mathcal{O}_{X}\)) and finite intersection. Sets \(U\in\mathcal{O}_{X}\) are called _open_ and their complements _closed_; sets that are both open and closed are _clopen_. A nominal set \(X\) together with a nominal topology \(\mathcal{O}_{X}\) is a _nominal topological space_. An equivariant map \(f\colon X\to Y\) between nominal topological spaces is _continuous_ if for every open set \(U\) of \(Y\) its preimage \(f^{-1}[U]\) is an open set of \(X\). Nominal topological spaces and continuous maps form the category \(\operatorname{\mathbf{nTop}}\).
2. A _subbasis_ of a nominal topological space \((X,\mathcal{O}_{X})\) is an equivariant subset \(\mathcal{B}\subseteq\mathcal{O}_{X}\) such that every open set of \(X\) is a finitely supported union of finite intersections of sets in \(\mathcal{B}\). If additionally every finite intersection of sets in \(\mathcal{B}\) is a finitely supported union of sets in \(\mathcal{B}\), then \(\mathcal{B}\) is called a _basis_. In this case, every open set of \(X\) is a finitely supported union of elements of \(\mathcal{B}\).
**Example 3.2**.:
1. A topological space may be viewed as a nominal topological space equipped with the trivial group action. Then every (open) subset has empty support and every union is finitely supported, so we recover the axioms of classical topology.
2. Every nominal set \(X\) equipped with the _discrete topology_, where all finitely supported subsets are open, is a nominal topological space. It has a basis given by all singleton sets.
3. A _nominal (pseudo-)metric space_ is given by a nominal set \(X\) with a (pseudo-)metric \(d\colon X\times X\to\mathbb{R}\) which is equivariant as a function into the set \(\mathbb{R}\), regarded as a nominal set with the trivial group action. As usual, the open ball around \(x\in X\) with radius \(r>0\) is given by \(B_{r}x=\{y\in X\mid d(x,y)<r\}\). Since \(\pi\cdot B_{r}(x)=B_{r}(\pi\cdot x)\) for all \(\pi\in\operatorname{Perm}\mathbb{A}\) and \(x\in X\), every nominal (pseudo-)metric space carries a nominal topology whose basic opens are the open balls.
**Remark 3.3**.: Every nominal topological space induces two families of ordinary topological spaces, one by taking only opens with a certain support and the other by forming orbits. In more detail, let \(S\subseteq\mathbb{A}\) be a finite set of names and let \(X\) be a nominal topological space with topology \(\mathcal{O}\).
1. The underlying set of the nominal space \(X\) carries a classical topology \(\mathcal{O}_{S}\) consisting of all \(S\)-supported open sets of \(\mathcal{O}\). We denote the resulting topological space by \(|X|_{S}\).
2. The set \(\operatorname{orb}_{S}X\) of \(S\)-orbits can be equipped with the quotient topology \(\mathcal{O}_{\operatorname{orb}_{S}}\) induced by the projection \(X\twoheadrightarrow\operatorname{orb}_{S}X\) mapping each \(x\in X\) to its \(S\)-orbit \(\operatorname{orb}_{S}x\). In this topology, a set \(O\subseteq\operatorname{orb}_{S}X\) of \(S\)-orbits is open iff its union \(\bigcup O\) is open in \(X\).
These constructions give rise to functors \(|-|_{S},\operatorname{orb}_{S}\colon\mathbf{nTop}\to\mathbf{Top}\). They allow us to switch between nominal and classical topology.
As noted in Example 2.2, the Pro-completion of the category \(\mathbf{Set}_{\mathrm{f}}\) is the category of profinite spaces. One may expect that the Pro-completion of \(\mathbf{Nom}_{\operatorname{of}}\) analogously consists of all _pro-orbit-finite_ spaces, that is, nominal topological spaces that are codirected limits of orbit-finite discrete spaces. However, this fails due to a simple fact: while codirected limits of non-empty finite sets are always non-empty (which is a consequence of Tychonoff's theorem, thus the axiom of choice), codirected limits of non-empty orbit-finite nominal sets may be empty.
**Remark 3.4**.: Similar to \(\mathbf{Top}\), codirected limits in \(\mathbf{nTop}\) are formed by taking the limit in \(\mathbf{Nom}\) and equipping it with the initial topology.
**Example 3.5**.: Consider the \(\omega^{\operatorname{op}}\)-chain \(1\leftarrow\mathbb{A}\leftarrow\mathbb{A}^{\#2}\leftarrow\mathbb{A}^{\#3} \leftarrow\cdots\) in \(\mathbf{Nom}_{\operatorname{of}}\) with connecting maps omitting the last component. Its limit in \(\mathbf{Set}\) (see Example 2.1) is given by \(\mathbb{A}^{\#\omega}\), the set of all injective functions from \(\omega\) to \(\mathbb{A}\). Clearly no such function has finite support, thus the limit in \(\mathbf{Nom}\) (and therefore also in \(\mathbf{nTop}\)) is empty.
This entails that it is in fact impossible to characterize \(\operatorname{Pro}(\mathbf{Nom}_{\operatorname{of}})\) by any sort of spaces. By definition of the free completion \(\operatorname{Pro}(\mathbf{Nom}_{\operatorname{of}})\), the inclusion functor \(I\colon\mathbf{Nom}_{\operatorname{of}}\hookrightarrow\mathbf{Nom}\) extends uniquely to a functor \(\bar{I}\colon\operatorname{Pro}(\mathbf{Nom}_{\operatorname{of}})\to \mathbf{Nom}\) preserving codirected limits. The analogous functor \(\bar{I}\colon\operatorname{Pro}(\mathbf{Set}_{\mathrm{f}})\to\mathbf{Set}\) is the forgetful functor of the category of profinite spaces. In contrast, we have
**Proposition 3.6**.: _The category \(\operatorname{Pro}(\mathbf{Nom}_{\operatorname{of}})\) is not concrete: the functor \(\bar{I}\) is not faithful._
Proof.: Consider the chain \(1\leftarrow\mathbb{A}\leftarrow\mathbb{A}^{\#2}\leftarrow\cdots\) of Example 3.5. Let \(D\colon\omega^{\operatorname{op}}\to\mathbf{Nom}_{\operatorname{of}}\) denote the corresponding diagram, and let \(E\colon\mathbf{Nom}_{\operatorname{of}}\hookrightarrow\operatorname{Pro}( \mathbf{Nom}_{\operatorname{of}})\) be the embedding. To prove that \(\bar{I}\) is not faithful, we show that \(|\operatorname{Pro}(\mathbf{Nom}_{\operatorname{of}})(\lim ED,E2)|>|\mathbf{Nom }(\bar{I}(\lim ED),\bar{I}E2)|\), where \(2\) is the two-element nominal set. Indeed, we have
\[\operatorname{Pro}(\mathbf{Nom}_{\operatorname{of}})(\lim_{n<\omega}ED_{n},E2)\;\cong\;\operatorname{colim}_{n<\omega}\operatorname{Pro}(\mathbf{Nom}_{\operatorname{of}})(ED_{n},E2)\;\cong\;\operatorname{colim}_{n<\omega}\mathbf{Nom}_{\operatorname{of}}(D_{n},2)\;\cong\;2,\]
where the first step holds since \(E2\) is finitely copresentable, the second since \(E\) is a full embedding, and the last
because \(\mathbf{Nom}_{\operatorname{of}}(D_{0},2)\cong 2\) and the two elements are not merged by the colimit injection. However,
\[\mathbf{Nom}(\bar{I}(\lim_{n<\omega}ED_{n}),\bar{I}E2)\;\cong\;\mathbf{Nom}(\lim_{n<\omega}\bar{I}ED_{n},\bar{I}E2)\;\cong\;\mathbf{Nom}(\lim_{n<\omega}ID_{n},2)\;\cong\;\mathbf{Nom}(\emptyset,2)\;\cong\;1,\]
using, in turn, that \(\bar{I}\) preserves codirected limits, that \(I=\bar{I}E\), and Example 3.5. \(\qed\)
We thus restrict our focus to well-behaved subcategories of \(\mathbf{Nom}_{\mathrm{of}}\). We choose these subcategories in such a way that situations like the one in Example 3.5, where unrestricted accumulation of supports results in empty codirected limits, are avoided.
A nominal set \(X\) is _\(k\)-bounded_, for \(k\in\mathbb{N}\), if \(|\mathrm{supp}\,x|\leq k\) for every \(x\in X\).
For concrete categories \(\mathbf{C}\) over \(\mathbf{Nom}\) (or \(\mathbf{Nom}_{\mathrm{of}}\)) we denote by \(\mathbf{C}_{k}\) the full subcategory of \(\mathbf{C}\) whose underlying objects are \(k\)-bounded. For instance, \(\mathbf{Nom}_{k}\) is the category of \(k\)-bounded nominal sets, and \(\mathbf{nTop}_{k}\) is the category of \(k\)-bounded nominal topological spaces.
**Remark 3.8**.:
1. The full subcategories \(\mathbf{Nom}_{k}\hookrightarrow\mathbf{Nom}\) and \(\mathbf{Nom}_{\mathrm{of},k}\hookrightarrow\mathbf{Nom}_{\mathrm{of}}\) are coreflective [28, Section IV.3]: the coreflector (viz. the right adjoint of the inclusion functor) sends a nominal set \(X\) to its subset \(X_{k}=\{x\in X\mid|\mathrm{supp}\,x|\leq k\}\). Hence \(\mathbf{Nom}_{k}\) is complete: limits are formed by taking the limit in \(\mathbf{Nom}\) and applying the coreflector. Analogously, \(\mathbf{Nom}_{\mathrm{of},k}\) is finitely complete.
2. In contrast to \(\mathbf{Nom}\), the category \(\mathbf{Nom}_{k}\) generally fails to be a topos because it is not cartesian closed. For instance, the functor \(\mathbb{A}^{\#2}\times(-)\) on \(\mathbf{Nom}_{2}\) does not preserve coequalizers, hence it is not a left adjoint.
3. The category \(\mathbf{Nom}\) is known to be equivalent to the category of pullback-preserving presheaves \(\mathbb{I}\to\mathbf{Set}\), where \(\mathbb{I}\) is the category of finite sets and injective functions [36, Theorem 6.8]. By inspecting the proof it is easy to see that this restricts to an equivalence between \(\mathbf{Nom}_{k}\) and the category of _\(k\)-generated_ pullback-preserving presheaves \(\mathbb{I}\to\mathbf{Set}\). Here a presheaf \(F\colon\mathbb{I}\to\mathbf{Set}\) is \(k\)-generated if for every finite set \(S\) and every \(x\in FS\) there exists a set \(S^{\prime}\) of cardinality at most \(k\) and an injective map \(f\colon S^{\prime}\to S\) such that \(x\in Ff[FS^{\prime}]\).
With regard to codirected limits, the restriction to bounded nominal sets fixes the issue arising in Example 3.5:
**Lemma 3.9**.: _Codirected limits in \(\mathbf{Nom}_{k}\) are formed at the level of \(\mathbf{Set}\)._
We proceed to give a topological characterization of \(\mathrm{Pro}(\mathbf{Nom}_{\mathrm{of},k})\) in terms of nominal Stone spaces, generalizing the corresponding result (1.1) for \(\mathrm{Pro}(\mathbf{Set}_{\mathrm{f}})\). To this end, we introduce suitable nominalizations of the three characteristic properties of Stone spaces: compactness, Hausdorffness, and existence of a basis of clopens. The nominal version of compactness comes natural and is compatible with the functors \(|-|_{S}\) and \(\mathrm{orb}_{S}\) of Remark 3.3.
**Definition 3.10**.: An _open cover_ of a nominal topological space \((X,\mathcal{O})\) is a finitely supported set \(\mathcal{C}\subseteq\mathcal{O}\) that covers \(X\), i.e. \(\bigcup\mathcal{C}=X\). A _subcover_ of \(\mathcal{C}\) is a finitely supported subset of \(\mathcal{C}\) that also covers \(X\). A nominal topological space \(X\) is _compact_ if every open cover \(\mathcal{C}\) of \(X\) has an orbit-finite subcover: there exist \(U_{1},\dots,U_{n}\in\mathcal{C}\) such that \(X=\bigcup_{i=1}^{n}\bigcup\mathrm{orb}\,U_{i}\).
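For instance, the orbit-finite discrete space \(\mathbb{A}\) is compact in this sense: the equivariant open cover \(\{\{a\}\mid a\in\mathbb{A}\}\) admits an orbit-finite subcover consisting of a single set \(\{a\}\), since \(\bigcup\operatorname{orb}\{a\}=\mathbb{A}\). More generally, every orbit-finite discrete space is compact, as it suffices to pick one covering open set per orbit.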
**Lemma 3.11**.: _For every nominal topological space \(X\) the following conditions are equivalent:_
1. _The space_ \(X\) _is compact._
2. _Every uniformly finitely supported open cover of_ \(X\) _has a finite subcover._
3. _For every finite set_ \(S\subseteq\mathbb{A}\) _the topological space_ \(|X|_{S}\) _is compact._
4. _For every finite set_ \(S\subseteq\mathbb{A}\) _the topological space_ \(\mathrm{orb}_{S}\,X\) _is compact._
The Hausdorff property is more subtle: rather than just separation of points, we require separation of \(S\)-orbits ("thick points") by disjoint \(S\)-supported open neighbourhoods.
**Definition 3.12**.: A nominal topological space \(X\) is _(nominal) Hausdorff_ if for every finite set \(S\subseteq\mathbb{A}\) and every pair \(x_{1},x_{2}\in X\) of points lying in different \(S\)-orbits, there exist disjoint \(S\)-supported open sets \(U_{1},U_{2}\subseteq X\) such that \(x_{i}\in U_{i}\) for \(i=1,2\).
Note that the nominal Hausdorff condition is clearly equivalent to being able to separate disjoint _\(S\)-orbits_: If \(\operatorname{orb}_{S}x_{1}\neq\operatorname{orb}_{S}x_{2}\), then any two disjoint open \(S\)-supported neighbourhoods \(U_{1},U_{2}\) of \(x_{1},x_{2}\) satisfy \(\operatorname{orb}_{S}x_{i}\subseteq U_{i}\) for \(i=1,2\). Note also that \(\operatorname{orb}_{S}x=\{x\}\) whenever \(\operatorname{supp}x\subseteq S\), hence the nominal Hausdorff condition implies the ordinary one. For bounded nominal compact Hausdorff spaces, we have a codirected Tychonoff theorem:
For every codirected diagram of non-empty \(k\)-bounded nominal compact Hausdorff spaces, the limit in \(\mathbf{nTop}\) is a non-empty \(k\)-bounded nominal compact Hausdorff space.
Finally, having a basis of clopen sets is not sufficient in our setting. To see this, note that in an ordinary topological space \(X\) every clopen subset \(C\subseteq X\) can be represented as \(C=f^{-1}[A]\) for some continuous map \(f\colon X\to Y\) into a finite discrete space \(Y\) and some subset \(A\subseteq Y\). (In fact, one may always take \(Y=2\) and \(A=\{1\}\).) This is no longer true in the nominal setting, see the remark below. Therefore, in lieu of clopens we work with _representable_ subsets: A subset \(R\subseteq X\) of a nominal space \(X\) is _representable_ if there exists a continuous map \(f\colon X\to Y\) into an orbit-finite discrete space \(Y\) such that \(R=f^{-1}[A]\) for some \(A\in\mathcal{P}_{\operatorname{fs}}Y\).
1. Every representable set is clopen, but the converse generally fails. To see this, consider the discrete space \(X=\coprod_{n<\omega}\mathbb{A}^{\#n}\). We show that for fixed \(a\in\mathbb{A}\) the (clopen) subset \(R=\{x\mid a\in\operatorname{supp}x\}\subseteq X\) is not representable. Towards a contradiction suppose that \(R\) is represented by \(f\colon X\to Y\) as \(R=f^{-1}[A]\) for some \(A\in\mathcal{P}_{\operatorname{fs}}Y\). Since \(Y\) is orbit-finite, we can choose \(m\) large enough such that there exists some \(x\in\mathbb{A}^{\#m}\backslash R\subseteq X\) for which \(\operatorname{supp}f(x)\subsetneq\operatorname{supp}x\). Choose a name \(b\in\operatorname{supp}x\setminus\operatorname{supp}f(x)\). Then \(a,b\not\in\operatorname{supp}f(x)\), and so we have \[f((a\ b)\cdot x)=(a\ b)\cdot f(x)=f(x).\] Since \((a\ b)\cdot x\in R\), this shows \(f(x)\in A\) and thus \(x\in R\). This contradicts the above choice of \(x\).
2. If a nominal space \(X\) has a basis of representable sets, then we may assume without loss of generality that the basic open sets are of the form \(f^{-1}[y]\) for some \(f\colon X\to Y\) and \(y\in Y\), where \(Y\) is orbit-finite and discrete. Indeed, if \(R=f^{-1}[A]\) for \(A\in\mathcal{P}_{\operatorname{fs}}Y\), then \(R=\bigcup_{y\in A}f^{-1}[y]\). Moreover, given representable sets \(R_{i}=f_{i}^{-1}[y_{i}]\), \(i=1,2\), the set \(R_{1}\cap R_{2}\) is equal to \(\langle f_{1},f_{2}\rangle^{-1}(y_{1},y_{2})\) and therefore representable as well. Hence, to show that representable subsets form a basis it suffices to check whether every open set is a finitely supported union of subsets of the form \(f^{-1}[y]\).
A nominal Stone space is a nominal compact Hausdorff space with a basis of representables. We let \(\mathbf{nStone}\) denote the full subcategory of \(\mathbf{nTop}\) given by nominal Stone spaces.
Nominal Stone spaces as defined above are conceptually very different from _nominal Stone spaces with \(\mathsf{M}\)_, introduced by Gabbay et al. [17] as the dual of _nominal boolean algebras with \(\mathsf{M}\)_. The latter are equipped with a _restriction operator_ \(\mathbf{n}\) tightly related to the freshness quantifier \(\mathsf{M}\) of nominal sets, which enables a nominal version of the ultrafilter theorem and thus a representation of boolean algebras with \(\mathsf{M}\) via spaces of ultrafilters. In nominal Stone spaces with \(\mathsf{M}\), the Hausdorff property is implicit (but would be analogous to that in standard topology), the basis is given by clopen rather than representable sets, and the notion of compactness (called _\(\mathbf{n}\)-compactness_) considers open covers closed under the operator \(\mathbf{n}\), which are required to have a _finite_ subcover. By this definition, the orbit-finite discrete space \(\mathbb{A}\) fails to be compact (the \(\mathbf{n}\)-cover \(\{\{a\}\mid a\in\mathbb{A}\}\cup\{\emptyset\}\) has no finite subcover). Hence, given that algebraic recognition is based on orbit-finite sets, nominal Stone spaces with \(\mathsf{M}\) are not amenable for the theory of data languages.
2. Every \(k\)-atomic orbit-finitely complete nominal boolean algebra is complete: For every finitely supported subset \(X\subseteq B\) we have \(\bigvee X=\bigvee\{b\in\operatorname{At}(B)\mid\exists(x\in X).\ b\leq x\}\), which is a supremum of an orbit-finite subset.
For each \(k\in\mathbb{N}\), the category of locally \(k\)-atomic orbit-finitely complete nominal boolean algebras is the _Ind-completion of the category of \(k\)-atomic complete nominal boolean algebras_:
\[\mathbf{nC}_{\operatorname{of}}\mathbf{A}_{lk}\mathbf{BA}\simeq\operatorname{ Ind}(\mathbf{nCA}_{k}\mathbf{BA}).\]
[Nominal Stone Duality] For each \(k\in\mathbb{N}\), the category of locally \(k\)-atomic orbit-finitely complete nominal boolean algebras is dual to the category of \(k\)-bounded nominal Stone spaces:
\[\mathbf{nC}_{\operatorname{of}}\mathbf{A}_{lk}\mathbf{BA}\simeq^{\operatorname {op}}\mathbf{nStone}_{k}.\]
Proof.: The category \(\mathbf{Nom}\) of nominal sets is dually equivalent to the category \(\mathbf{nCABA}\) of complete atomic nominal boolean algebras [32]. The duality sends a nominal set \(X\) to the boolean algebra \(\mathcal{P}_{\operatorname{fs}}X\), equipped with the set-theoretic boolean structure. Conversely, a complete atomic nominal boolean algebra \(B\) is mapped to the nominal set \(\operatorname{At}(B)\) of its atoms, and an \(\mathbf{nCABA}\)-morphism \(h\colon C\to B\) to the equivariant map \(\operatorname{At}(B)\to\operatorname{At}(C)\) sending \(b\in\operatorname{At}(B)\) to the unique \(c\in\operatorname{At}(C)\) such that \(b\leq h(c)\). For every \(k\in\mathbb{N}\) the duality clearly restricts to one between \(k\)-bounded orbit-finite nominal sets and \(k\)-atomic complete nominal boolean algebras. Thus Theorem 4.1 and Theorem 3.1 yield
\[\mathbf{nC}_{\operatorname{of}}\mathbf{A}_{lk}\mathbf{BA}\simeq\operatorname{Ind}(\mathbf{nCA}_{k}\mathbf{BA})\simeq^{\operatorname{op}}\operatorname{Pro}(\mathbf{nCA}_{k}\mathbf{BA}^{\operatorname{op}})\simeq\operatorname{Pro}(\mathbf{Nom}_{\operatorname{of},k})\simeq\mathbf{nStone}_{k}.\qed\]
We give an explicit description of the dual equivalence of Theorem 4.1.
1. In the direction \(\mathbf{nStone}_{k}\to\mathbf{nC}_{\operatorname{of}}\mathbf{A}_{lk}\mathbf{BA}\) it maps a \(k\)-bounded nominal Stone space \(X\) to the nominal boolean algebra \(\operatorname{Cl}(X)\) of clopens (or representables, see Lemma 3.19). A continuous map \(f\colon X\to Y\) is mapped to the homomorphism \(f^{-1}\colon\operatorname{Cl}(Y)\to\operatorname{Cl}(X)\), \(U\mapsto f^{-1}[U]\).
## 5 Pro-Orbit-Finite Words
In this section, we generalize the topological characterization of regular languages to data languages recognizable by orbit-finite nominal monoids [7, 10, 13].
A _nominal monoid_ \(M\) is a monoid object in \(\mathbf{Nom}\), that is, it is given by a nominal set \(M\) equipped with an equivariant associative multiplication \(M\times M\to M\) and an equivariant unit \(1\in M\). Nominal monoids and equivariant monoid homomorphisms form a category \(\mathbf{nMon}\).
As for ordinary monoids, the _free monoid_ generated by \(\Sigma\in\mathbf{Nom}\) is the nominal set \(\Sigma^{*}\) of finite words (with pointwise group action); its multiplication is concatenation and its unit is the empty word.
We emphasize the difference between \(k\)-bounded nominal monoids - nominal monoids whose carrier is \(k\)-bounded - and monoid objects in \(\mathbf{Nom}_{k}\), which are partial nominal monoids where the product \(x\cdot y\) is defined iff \(|\mathrm{supp}\,x\cup\mathrm{supp}\,y|\leq k\).
A _data language_ over \(\Sigma\in\mathbf{Nom}_{\mathrm{of}}\) is a finitely supported subset \(L\subseteq\Sigma^{*}\). It is _recognizable_ if there exists an equivariant monoid morphism \(h\colon\Sigma^{*}\to M\) with \(M\) orbit-finite and a finitely supported subset \(P\subseteq M\) such that \(L=h^{-1}[P]\). In this case, we say that the morphism \(h\)_recognizes_\(L\).
For example, the equivariant language \(L_{0}\) from the Introduction is recognizable, while the language \(L_{1}\) is not recognizable.
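To illustrate the definition with a concrete instance: over the alphabet \(\Sigma=\mathbb{A}\), the equivariant language of all non-empty words whose first and last letters coincide is recognizable. It is recognized by the equivariant monoid morphism \(h\colon\mathbb{A}^{*}\to M\) into the orbit-finite nominal monoid \(M=\{1\}\uplus(\mathbb{A}\times\mathbb{A})\), with multiplication \((a,b)\cdot(c,d)=(a,d)\), given by \(h(\varepsilon)=1\) and \(h(a_{1}\cdots a_{n})=(a_{1},a_{n})\), together with the equivariant subset \(P=\{(a,a)\mid a\in\mathbb{A}\}\).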
1. The morphism \(h\) can be taken to be surjective; otherwise, take its coimage.
2. Via characteristic functions, data languages correspond precisely to finitely supported maps \(L\colon\Sigma^{*}\to 2\), where \(2\) is the two-element nominal set. Recognizability then means that \(L\) factorizes through some equivariant monoid morphism with orbit-finite codomain.
Recall from Section 2 that the Stone space \(\widehat{\Sigma^{*}}\) of profinite words over a finite alphabet \(\Sigma\) is constructed as the limit in \(\mathbf{Stone}\simeq\mathrm{Pro}(\mathbf{Set}_{\mathrm{f}})\) of all finite quotient monoids of \(\Sigma^{*}\). The obvious generalization to a _nominal_ alphabet \(\Sigma\in\mathbf{Nom}_{\mathrm{of}}\), which constructs the limit of all orbit-finite quotient monoids in \(\mathrm{Pro}(\mathbf{Nom}_{\mathrm{of}})\), is unlikely to yield a useful object since this category is not concrete (Proposition 3.6); in fact, it is futile from a language-theoretic perspective, cf. Remark 5.2. Instead, our results of Section 3 suggest restricting the diagram scheme to quotients of bounded support. To this end, fix \(k\in\mathbb{N}\) and a _support bound_, that is, a map \(s\colon\Sigma^{*}\to\mathcal{P}_{k}\mathbb{A}\) assigning to every word a set of at most \(k\) names. An equivariant monoid morphism \(h\colon\Sigma^{*}\to M\) is _\(s\)-bounded_, written \(h\colon\Sigma^{*}\to_{s}M\), if \(\operatorname{supp}h(w)\subseteq s(w)\) for every \(w\in\Sigma^{*}\). We write \(\Sigma^{*}\mathord{\downarrow}_{s}\mathbf{nMon}_{\mathrm{of},k}\) for the poset of all surjective \(s\)-bounded equivariant monoid morphisms \(e\colon\Sigma^{*}\twoheadrightarrow_{s}M\) with \(k\)-bounded orbit-finite codomain, ordered by \(e^{\prime}\leq e\) iff \(e\) factorizes through \(e^{\prime}\). This poset is codirected: given \(h\colon\Sigma^{*}\twoheadrightarrow_{s}M\) and \(h^{\prime}\colon\Sigma^{*}\twoheadrightarrow_{s}M^{\prime}\), the coimage \(k\colon\Sigma^{*}\twoheadrightarrow N\) of the pairing \(\langle h,h^{\prime}\rangle\colon\Sigma^{*}\to M\times M^{\prime}\) is again surjective, \(s\)-bounded and has a \(k\)-bounded orbit-finite codomain, and both \(h\) and \(h^{\prime}\) factorize through it. Hence, \(k\) is a lower bound for \(h,h^{\prime}\) in the poset \(\Sigma^{*}\mathord{\downarrow}_{s}\mathbf{nMon}_{\mathrm{of},k}\).
For an orbit-finite nominal set \(\Sigma\) and a support bound \(s\colon\Sigma^{*}\to\mathcal{P}_{k}\mathbb{A}\) we define the nominal Stone space \(\widehat{\Sigma^{*}_{s}}\) to be the limit of the codirected diagram
\[D\colon\Sigma^{*}\mathord{\downarrow}_{s}\mathbf{nMon}_{\mathrm{of},k}\to \mathbf{nStone}_{k},\qquad(e\colon\Sigma^{*}\twoheadrightarrow_{s}M)\;\mapsto \;|M|,\]
where \(|M|\) is the nominal set underlying \(M\), regarded as a discrete nominal topological space. The elements of \(\widehat{\Sigma^{*}_{s}}\) are called the _(s-bounded) pro-orbit-finite words over \(\Sigma\)_. We denote by \(\hat{e}\colon\widehat{\Sigma^{*}_{s}}\to M\) the limit projection associated to \(e\colon\Sigma^{*}\twoheadrightarrow_{s}M\) in \(\Sigma^{*}\mathord{\downarrow}_{s}\mathbf{nMon}_{\mathrm{of},k}\).
1. One may equivalently define \(\widehat{\Sigma^{*}_{s}}\) as the limit of a larger cofiltered diagram \(D^{\prime}\) in \(\mathbf{nStone}_{k}\), defined on the category of _all_ equivariant \(s\)-bounded monoid morphisms \(h\colon\Sigma^{*}\to_{s}M\) with \(k\)-bounded orbit-finite codomain (a morphism from \(h\) to \(h^{\prime}\colon\Sigma^{*}\to_{s}M^{\prime}\) being an equivariant monoid morphism \(k\colon M\to M^{\prime}\) such that \(h^{\prime}=k\cdot h\)) and given on objects by \[D^{\prime}\colon\;(h\colon\Sigma^{*}\to_{s}M)\;\mapsto\;|M|.\] In fact, the inclusion of \(\Sigma^{*}\mathord{\downarrow}_{s}\mathbf{nMon}_{\mathrm{of},k}\) into this larger category is an initial functor, hence the limits of \(D\) and \(D^{\prime}\) coincide. Since the limit of \(D^{\prime}\) is formed as in **Set** (Lemma 3.9), the space \(\widehat{\Sigma^{*}_{s}}\) is carried by the nominal set of compatible families \((x_{h})_{h}\) of \(D^{\prime}\), and the limit projection \(\hat{h}\) associated to \(h\colon\Sigma^{*}\to_{s}M\) is given by \((x_{h})_{h}\mapsto x_{h}\).
2. The forgetful functor \(V\colon\mathbf{nStone}_{k}\to\mathbf{Nom}_{k}\) and the inclusion \(I\colon\mathbf{Nom}_{k}\to\mathbf{Nom}\) both preserve codirected limits. The morphisms \(h\colon\Sigma^{*}\to_{s}M\), viewed as equivariant functions, form a cone for the diagram \(IVD^{\prime}\), so there exists a unique equivariant map \(\eta\colon\Sigma^{*}\to IV\widehat{\Sigma^{*}_{s}}\) such that \[h=\left(\Sigma^{*}\xrightarrow{\ \eta\ }IV\widehat{\Sigma^{*}_{s}}\xrightarrow{\ IV\hat{h}\ }IVM\right)\qquad\text{for all }h\in\Sigma^{*}\mathord{\downarrow}_{s}\mathbf{nMon}_{\mathrm{of},k}.\] In more explicit terms, the map \(\eta\) is given by \(\eta(w)=(h(w))_{h}\) for \(w\in\Sigma^{*}\). For simplicity we omit \(I\) and \(V\) and write \(\eta\colon\Sigma^{*}\to\widehat{\Sigma^{*}_{s}}\). The image of \(\eta\) forms a dense subset of \(\widehat{\Sigma^{*}_{s}}\). We note that \(\eta\) is generally not injective, since we restrict to a subdiagram of the diagram \(\Sigma^{*}\mathord{\downarrow}\mathbf{nMon}_{\mathrm{of},k}\) of all (not necessarily \(s\)-bounded) morphisms.
3. The space \(\widehat{\Sigma^{*}_{s}}\) is a nominal monoid, with the product determined by \(\hat{h}(x\cdot y)=\hat{h}(x)\cdot\hat{h}(y)\) for all limit projections \(\hat{h}\), and with unit \(\eta(\varepsilon)\), where \(\varepsilon\) is the empty word. Since the multiplication is readily seen to be continuous, \(\widehat{\Sigma^{*}_{s}}\) can be regarded as an object of \(\mathbf{Mon}(\mathbf{nStone})\), the category of nominal Stone spaces equipped with a continuous monoid structure and continuous equivariant monoid morphisms.
Now recall from Section 2 that the space \(\widehat{\Sigma^{*}}\) can be constructed as the metric completion of \(\Sigma^{*}\), where the metric measures the size of separating monoids. We now investigate to what extent the metric approach applies to the nominal setting, using nominal (pseudo-)metrics; see Example 3.2.
Let \(s\) be a support bound on \(\Sigma^{*}\). We say that a nominal monoid \(M\)_s-separates_\(v,w\in\Sigma^{*}\) if there exists an \(s\)-bounded equivariant monoid morphism \(h\colon\Sigma^{*}\to_{s}M\) such that \(h(v)\neq h(w)\). We define a nominal pseudometric \(d_{s}\) on \(\Sigma^{*}\) by setting
\[d_{s}(v,w)=\sup\{\,2^{-|\operatorname{orb}M|}\mid\text{the orbit-finite nominal monoid $M$ $s$-separates $v,w$}\,\}.\]
We let \(\Sigma^{*}/d_{s}\) denote the corresponding nominal metric space, obtained as a quotient space of the pseudometric space \((\Sigma^{*},d_{s})\) by identifying \(v,w\) if \(d_{s}(v,w)=0\).
**Remark 5.10**.: In contrast to the classical case, \(d_{s}\) is generally not a metric: there may exist words \(v\neq w\) which are not \(s\)-separated by any orbit-finite nominal monoid. For example, if \(\Sigma=\mathbb{A}\) and \(s(a_{1}\cdots a_{n})=a_{1}\) for \(a_{1},\ldots,a_{n}\in\Sigma\), then for every \(s\)-bounded \(h\) and distinct names \(a,b,c\in\mathbb{A}\) we have \(h(ab)=h((b\;c)\cdot ac)=(b\;c)\cdot h(ac)=h(ac)\) since \(b,c\not\in s(ac)\supseteq\operatorname{supp}h(ac)\). Therefore, the additional step of identifying words at distance zero is required.
For the next lemma we need some terminology. A nominal metric space is _complete_ if every finitely supported Cauchy sequence has a limit. A nominal topological space is _completely metrizable_ if its topology is induced by a complete metric. A subset \(D\subseteq X\) of a nominal metric space is _(topologically) dense_ if every open neighbourhood of a point \(x\in X\) contains an element of \(D\).
**Remark 5.11**.: In contrast to classical metric spaces, density is not equivalent to _sequential density_ (every point \(x\in X\) is a limit of a finitely supported sequence in \(D\)). To see this, consider the space \(\mathbb{A}^{\omega}\) of finitely supported infinite words with the prefix metric, that is, \(d(v,w)=2^{-n}\) if \(n\) is the length of the longest common prefix of \(v,w\). Let \(D\subseteq\mathbb{A}^{\omega}\) be the equivariant subset given by
\[D=\{\,x\in\mathbb{A}^{\omega}\mid|\operatorname{supp}x|\geq 2\text{ and }| \operatorname{supp}x|\geq|\text{initialblock}(x)|\,\},\]
where \(\text{initialblock}(x)\) is the longest prefix of \(x\) of the form \(a^{n}\) (\(a\in\mathbb{A}\)). The set \(D\) is dense, but not sequentially dense: \(a^{\omega}\in\mathbb{A}^{\omega}\) is not the limit of any finitely supported sequence in \(D\).
**Lemma 5.12**.:
1. _The space_ \(\widehat{\Sigma}^{*}_{s}\) _is completely metrizable via the complete nominal metric_ \[\hat{d}_{s}(x,y)=\sup\{\,2^{-|\operatorname{orb}M|}\mid\exists(h\colon\Sigma^ {*}\to_{s}M)\colon\hat{h}(x)\neq\hat{h}(y)\,\}.\] (5.1)
2. _The canonical map_ \(\eta\) _(Remark_ 5.8_) yields a dense isometry_ \(\eta\colon(\Sigma^{*},d_{s})\to(\widehat{\Sigma}^{*}_{s},\hat{d}_{s})\)_._
**Remark 5.13**.: In classical topology, it would now be clear that \(\widehat{\Sigma}^{*}_{s}\) is the metric completion of the metric space \(\Sigma^{*}/d_{s}\), i.e. it satisfies the universal property that every uniformly continuous map from \(\Sigma^{*}/d_{s}\) to a complete metric space has a unique uniformly continuous extension to \(\widehat{\Sigma}^{*}_{s}\). However, this rests on the coincidence of topological and sequential density, which fails over nominal sets as seen in Remark 5.11. We therefore conjecture that \(\widehat{\Sigma}^{*}_{s}\) is not the nominal metric completion of \(\Sigma^{*}/d_{s}\).
By using support bounds, we obtain a topological perspective on recognizable data languages. Let \(\operatorname{Rec}_{s}\Sigma\) denote the set of data languages recognized by \(s\)-bounded equivariant monoid morphisms.
**Theorem 5.14**.: _For every support bound \(s\colon\Sigma^{*}\to\mathcal{P}_{k}\mathbb{A}\), the \(k\)-bounded nominal Stone space \(\widehat{\Sigma}^{*}_{s}\) of \(s\)-bounded pro-orbit-finite words is dual to the locally \(k\)-atomic orbit-finitely complete boolean algebra \(\operatorname{Rec}_{s}(\Sigma^{*})\) of \(s\)-recognizable languages. In particular, we have the isomorphism_
\[\operatorname{Rec}_{s}(\Sigma^{*})\;\cong\;\operatorname{Clo}(\widehat{ \Sigma}^{*}_{s})\qquad\text{ in }\qquad\mathbf{n}\mathbf{C}_{\operatorname{of}}\mathbf{A}_{lk}\mathbf{B} \mathbf{A}.\]
Proof (Sketch).: The isomorphism is illustrated by two commuting diagrams relating recognizing morphisms and their limit projections, which we do not reproduce here.
In more detail, if \(L\subseteq\Sigma^{*}\) is \(s\)-recognizable, say \(L=h^{-1}[P]\) for an \(s\)-bounded morphism \(h\), then its corresponding clopen is the topological closure \(\overline{\eta[L]}=\hat{h}^{-1}[P]\) represented by the continuous extension \(\hat{h}\). Conversely, every clopen \(C\subseteq\widehat{\Sigma^{*}_{s}}\) restricts to an \(s\)-recognizable language \(\eta^{-1}[C]\subseteq\Sigma^{*}\). We get \(s\)-recognizability of \(\eta^{-1}[C]\) by factorizing a representation \(f\colon\widehat{\Sigma^{*}_{s}}\to Y\) of \(C\) through a limit projection \(\hat{h}\) as \(f=p\cdot\hat{h}\), using that \(Y\) is finitely copresentable. Thus \(h\) recognizes \(\eta^{-1}[C]\).
In the proof of Theorem 5.14, finite copresentability of orbit-finite sets is crucial to recover recognizable languages from representable subsets, highlighting the importance of working in the Pro-completion \(\operatorname{Pro}(\mathbf{Nom}_{\mathrm{of},k})=\mathbf{nStone}_{k}\). In a naive approach one might instead want to consider the limit, formed in \(\mathbf{nTop}\), of the diagram of _all_ equivariant morphisms from \(\Sigma^{*}\) to orbit-finite monoids. The resulting space \(\widehat{\Sigma^{*}}\) is still a nominal Hausdorff space with a basis of representables, but it generally fails to be compact, and its representable subsets do not correspond to recognizable data languages. To see this, consider the space \(\widehat{\mathbb{A}^{*}}\) and the orbit-finite nominal monoids \(\mathbb{A}^{\leq n}\) (words of length at most \(n\)) with multiplication cutting off after \(n\) letters. We denote by \(h_{n}\colon\mathbb{A}^{*}\to\mathbb{A}^{\leq n}\) and \(p_{k,n}\colon\mathbb{A}^{\leq k}\twoheadrightarrow\mathbb{A}^{\leq n}\), \(n\leq k\), the equivariant monoid morphisms given by projection to the first \(n\) letters. For every compatible family \(x=(x_{h})\in\widehat{\mathbb{A}^{*}}\) its subfamily \((x_{h_{n}})_{n\in\mathbb{N}}\) corresponds to a (possibly infinite) word over \(\mathbb{A}\) with finite support. Hence there exists a largest natural number \(N=N(x)\) such that \(|\operatorname{supp}x_{h_{N}}|=N\). The subsets \(C_{n}=\{x\in\widehat{\mathbb{A}^{*}}\mid N(x)=n\}\), \(n\in\mathbb{N}\), are equivariant clopens since \(C_{n}=\hat{h}_{n}^{-1}[\mathbb{A}^{\#n}]\cap\hat{h}_{n+1}^{-1}[\mathbb{A}^{\leq n+1}\setminus\mathbb{A}^{\#(n+1)}]\). Thus each \(C_{n}\) is representable (by a continuous map into the two-element discrete space), non-empty (since \(\eta(w)=(h(w))_{h}\in C_{n}\) for every word \(w\in\mathbb{A}^{\#n}\subseteq\mathbb{A}^{*}\) of pairwise distinct letters), and the \(C_{n}\) are pairwise disjoint. Hence they form a cover of \(\widehat{\mathbb{A}^{*}}\) that admits no orbit-finite (equivalently, finite) subcover, showing that \(\widehat{\mathbb{A}^{*}}\) is not compact. Moreover, the sets \(C_{M}=\bigcup_{m\in M}C_{m}\), where \(M\subseteq\mathbb{N}\), are equivariant clopens (hence representable) and pairwise distinct. Thus \(\widehat{\mathbb{A}^{*}}\) has uncountably many clopens. On the other hand, there exist only countably many recognizable languages over \(\mathbb{A}\) (using that, up to isomorphism, there exist only countably many orbit-finite sets [36, Thm. 5.13] and thus countably many orbit-finite nominal monoids), showing that there is no bijective correspondence between representable sets in \(\widehat{\mathbb{A}^{*}}\) and recognizable data languages over \(\mathbb{A}\).
## 6 A Nominal Reiterman Theorem
As an application of pro-orbit-finite methods, we present a nominal extension of Reiterman's classical pseudovariety theorem [37]. The latter characterizes classes of finite algebras presentable by profinite equations as precisely those closed under finite products, subalgebras, and homomorphic images. This result has been generalized to first-order structures [34] and, recently, to abstract categories [1, 30]. A key insight for the categorical perspective is that equations should be formed over projective objects. (Recall that an object \(X\) in a category is projective w.r.t. a class \(\mathcal{E}\) of morphisms if for all cospans \(X\xrightarrow{f}Y\xleftarrow{e}Z\) with \(e\in\mathcal{E}\) there exists a factorization of \(f\) through \(e\).) In **Nom**, one takes strong nominal sets, which are projective with respect to support-reflecting quotients (see Definition 6.1.2). For spaces of pro-orbit-finite words we have the support bound as an additional constraint, which makes the situation more complex: in a cospan \(\widehat{\Sigma^{*}_{s}}\xrightarrow{\hat{h}}N\xleftarrow{e}M\) with \(e\) support-reflecting, no \(s\)-bounded factorization of \(\hat{h}\) through \(e\) may exist (Example A.20). Surprisingly, there nonetheless exists a suitable type of quotients for nominal monoids, called _MSR quotients_, which is _independent_ of the support bound \(s\).
**Definition 6.1**.: A surjective equivariant morphism \(e\colon M\twoheadrightarrow N\) of nominal monoids is
1. _support-preserving_ if \(\operatorname{supp}e(x)=\operatorname{supp}x\) for every \(x\in M\);
2. _support-reflecting_ if for every \(y\in N\) there exists \(x\in e^{-1}[y]\) such that \(\operatorname{supp}x=\operatorname{supp}y\);
3. _multiplicatively support-reflecting_ (_MSR_ for short) if there exists a nominal submonoid \(M^{\prime}\subseteq M\) such that the domain restriction \(e|_{M^{\prime}}\colon M^{\prime}\to N\) of \(e\) is surjective and support-preserving.
**Remark 6.2**.: Note that a surjective morphism \(e\) is support-reflecting iff it restricts to a support-preserving surjection \(e|_{M^{\prime}}\) for some equivariant subset \(M^{\prime}\subseteq M\). For MSR morphisms one additionally requires that \(M^{\prime}\) may be chosen to form a submonoid. We thus have
\[\text{support-preserving}\quad\Longrightarrow\quad\text{multiplicatively support-reflecting}\quad\Longrightarrow\quad\text{support-reflecting}.\]
Neither of the two converses holds in general; for the first one consider the morphism \(\mathbb{A}^{*}\to 1\) into the trivial monoid, and for the second one see Example 6.11.
**Proposition 6.3**.: _A surjective equivariant morphism \(e\colon M\twoheadrightarrow N\) between orbit-finite nominal monoids is MSR iff all the monoids \(\widehat{\Sigma_{s}^{*}}\) (where \(\Sigma\in\mathbf{Nom}_{\mathrm{of}}\) is strong and \(s\colon\Sigma^{*}\to\mathcal{P}\mathbb{A}\) is a support bound) are projective with respect to \(e\) in \(\mathbf{Mon}(\mathbf{nStone})\), with \(M\) and \(N\) regarded as discrete spaces._
**Definition 6.4**.: An _MSR-pseudovariety of nominal monoids_ is a class \(\mathcal{V}\subseteq\mathbf{nMon}_{\mathrm{of}}\) of orbit-finite nominal monoids closed under
1. finite products: if \(M_{1},\ldots,M_{n}\in\mathcal{V}\), \(n\in\mathbb{N}\), then \(M_{1}\times\cdots\times M_{n}\in\mathcal{V}\);
2. submonoids: if \(M\in\mathcal{V}\) and \(N\subseteq M\) is a nominal submonoid, then \(N\in\mathcal{V}\);
3. MSR quotients: if \(M\in\mathcal{V}\) and \(e\colon M\twoheadrightarrow N\) is an MSR quotient, then \(N\in\mathcal{V}\).
**Definition 6.5**.: Let \(s\colon\Sigma^{*}\to\mathcal{P}\mathbb{A}\) be a support bound. A _morphic pro-orbit-finite equation_, or _morphic proequation_ for short, is a surjective \(\mathbf{nStone}\)-morphism \(\varphi\colon\widehat{\Sigma_{s}^{*}}\twoheadrightarrow E\). An orbit-finite monoid \(M\)_satisfies_\(\varphi\) if for every \(s\)-bounded morphism \(h\colon\Sigma^{*}\to M\), the limit projection \(\hat{h}\colon\widehat{\Sigma_{s}^{*}}\to M\) factorizes through \(\varphi\) in \(\mathbf{nStone}_{k}\), for some \(k\in\mathbb{N}\) such that \(M\in\mathbf{Nom}_{\mathrm{of},k}\) and \(s\) corestricts to \(\mathcal{P}_{k}\mathbb{A}\):
\[\hat{h}=\big(\widehat{\Sigma_{s}^{*}}\overset{\varphi}{\longrightarrow}E\overset{\exists}{\dashrightarrow}M\big).\]
For a set \(\mathcal{T}\) of morphic proequations, taken over possibly different \(\widehat{\Sigma_{s}^{*}}\), we denote by \(\mathcal{V}(\mathcal{T})\) the class of orbit-finite monoids satisfying all proequations in \(\mathcal{T}\). A class \(\mathcal{V}\) of orbit-finite monoids is _presentable by morphic proequations_ if \(\mathcal{V}=\mathcal{V}(\mathcal{T})\) for some set \(\mathcal{T}\) of morphic proequations.
Note that proequations use support bounds, while the definition of an MSR-pseudovariety does not.
**Theorem 6.6** (Nominal Reiterman).: _A class of orbit-finite nominal monoids is an MSR-pseudovariety iff it is presentable by morphic proequations._
The main technical observations for the proof are that (i) every orbit-finite set is \(k\)-bounded for some \(k\), hence finitely representable in \(\mathbf{nStone}_{k}\), and (ii) there are "enough" proequations in the sense that every orbit-finite nominal monoid is a quotient of some \(\widehat{\Sigma_{s}^{*}}\). The quotient is not necessarily MSR, which entails that abstract pseudovariety theorems [30, 1] do not apply to our present setting.
We also give a syntactic version of our nominal Reiterman theorem, which uses explicit proequations in lieu of morphic proequations.
**Definition 6.7**.: An _explicit proequation_ is a pair \((x,y)\in\widehat{\Sigma}_{s}^{*}\times\widehat{\Sigma}_{s}^{*}\) for some strong \(\Sigma\in\mathbf{Nom}_{\mathrm{of}}\) and some support bound \(s\), denoted by \(x=y\). An orbit-finite monoid \(M\)_satisfies_ the explicit proequation \(x=y\) if
\[\hat{h}(x)=\hat{h}(y)\qquad\text{for every $s$-bounded equivariant monoid morphism $h\colon\Sigma^{*}\to M$.}\]
(Here choose a common support size bound \(k\) for \(M\) and \(s\), so that \(\hat{h}\) lies in \(\mathbf{nStone}_{k}\).)
**Theorem 6.8** (Explicit Nominal Reiterman).: _A class of orbit-finite nominal monoids is an MSR-pseudovariety iff it is presentable by explicit proequations._
**Example 6.9**.: Recall that in a finite monoid \(M\) every element \(m\) has a unique idempotent power, denoted by \(m^{\omega}\). This holds analogously for orbit-finite nominal monoids \(M\)[7, Theorem 5.1]: one has \(m^{\omega}=m^{(n\cdot k)!}\) where \(n\) is the number of orbits of \(M\) and \(k\) is the maximum support size. (The number \(n\cdot k!\) is an upper bound on the number of elements of \(M\) with any given finite support [36, Thm. 5.13], hence on the cardinality of the set \(\{m^{i}\colon i\in\mathbb{N}\}\).) The nominal monoid \(M\) is _aperiodic_ if \(m^{\omega}\cdot m=m^{\omega}\) for all \(m\in M\). Languages recognizable by aperiodic orbit-finite monoids are captured precisely by first-order logic on data words [7, 13]. One readily verifies that the class of aperiodic orbit-finite monoids forms an MSR-pseudovariety; in fact, it is closed under all quotients. To present it by pro-orbit-finite equations, note that for every \(x\in\widehat{\Sigma}_{s}^{*}\) the family \(x^{\omega}=(\hat{h}(x)^{\omega})_{h}\) is again compatible, hence \(x^{\omega}\in\widehat{\Sigma}_{s}^{*}\). If \(s\colon\Sigma^{*}\to\mathcal{P}_{k}\mathbb{A}\) and \(h\colon\Sigma^{*}\to_{s}M\) is an \(s\)-bounded equivariant monoid morphism such that \(M\) has at most \(n\) orbits, then \(\hat{h}(x^{\omega})=\hat{h}(x)^{\omega}=\hat{h}(x)^{(n\cdot k)!}=\hat{h}(x^{(n\cdot k)!})\), hence \(\hat{d}_{s}(x^{\omega},x^{(n\cdot k)!})<2^{-n}\) in the metric (5.1) on \(\widehat{\Sigma}_{s}^{*}\). This shows that \(x^{\omega}\) is the limit of the sequence \((x^{(n\cdot k)!})_{n\in\mathbb{N}}\) in \(\widehat{\Sigma}_{s}^{*}\), and moreover that the pseudovariety of aperiodic orbit-finite monoids is presented by the explicit proequations \(x^{\omega}\cdot x=x^{\omega}\), where \(x\in\widehat{\Sigma}_{s}^{*}\) and \(s\colon\Sigma^{*}\to\mathcal{P}_{k}\mathbb{A}\) ranges over all support bounds on strong orbit-finite alphabets. Restricting to \(k=0\), we recover the well-known description of aperiodic finite monoids by the (single) profinite equation \(x^{\omega}\cdot x=x^{\omega}\).
**Remark 6.10**.:
1. Pseudovarieties of finite monoids admit an alternative equational characterization based on sequences of word equations rather than profinite equations. A _word equation_ is a pair \((v,w)\in\Sigma^{*}\times\Sigma^{*}\) of words over some finite alphabet \(\Sigma\), denoted \(v=w\); it is _satisfied_ by a monoid \(M\) if \(h(v)=h(w)\) for every monoid morphism \(h\colon\Sigma^{*}\to M\). More generally, a sequence \((v_{0}=w_{0},v_{1}=w_{1},\ldots)\) of word equations, taken over possibly different finite alphabets, is _eventually satisfied_ by \(M\) if \(M\) satisfies all but finitely many of the equations. As shown by Eilenberg and Schutzenberger [15], a class of finite monoids forms a pseudovariety iff it is presentable by a (single) sequence of word equations.
2. Urbat and Milius [45] recently established a nominal version of the Eilenberg-Schutzenberger theorem. They consider nominal word equations (defined as above, where \(\Sigma\) is now a strong orbit-finite nominal set) and show that sequences of nominal word equations present precisely _weak pseudovarieties_, i.e. classes of orbit-finite nominal monoids closed under finite products, submonoids, and support-reflecting quotients. Clearly every weak pseudovariety is an MSR-pseudovariety, but the converse does not hold; hence over nominal sets, sequences of word equations and pro-orbit-finite equations are of different expressivity. The example below illustrates one source of additional expressivity of pro-orbit-finite equations: the support bound \(s\) can control how the support changes during multiplication, which is not expressible by sequences of word equations.
**Example 6.11**.: An example of an MSR-pseudovariety that is not a weak pseudovariety is given by the class \(\mathcal{V}\) of all orbit-finite nominal monoids \(M\) such that
\[\forall(m,n\in M)\colon\quad\operatorname{supp}(mn)=\emptyset\quad \Longleftrightarrow\quad\operatorname{supp}(m,n)=\emptyset. \tag{6.1}\]
(Note that \(\operatorname{supp}(m,n)=\operatorname{supp}m\cup\operatorname{supp}n\) and that "\(\Leftarrow\)" always holds by equivariance of the monoid multiplication.) It is not difficult to prove that \(\mathcal{V}\) is an MSR-pseudovariety. To show that \(\mathcal{V}\) is not a weak pseudovariety, we construct a support-reflecting quotient under which \(\mathcal{V}\) is not closed. The nominal set \(\overline{1}+\overline{\mathbb{A}}=\{\overline{1}\}+\{\overline{a}\mid a\in \mathbb{A}\}\) forms a nominal monoid with multiplication given by projection on the first component and unit \(\overline{1}\). We extend the multiplication to the nominal set \(M=1+\mathbb{A}+\overline{1}+\overline{\mathbb{A}}\) by letting \(1\) be the unit and setting \(x\cdot y=\overline{x}\cdot\overline{y}\) whenever \(x,y\neq 1\); here overlining is idempotent (\(\overline{\overline{x}}:=\overline{x}\)). This makes the multiplication associative and equivariant. Thus, \(M\) is a nominal monoid. Now let \(N=1+\mathbb{A}+0=\{1\}+\mathbb{A}+\{0\}\) be the nominal monoid with multiplication \(x\cdot y=0\) for \(x,y\neq 1\). Thus \(0\) is an absorbing element. Letting \(\operatorname{const}_{0}\colon\overline{1}+\overline{\mathbb{A}}\to 0\) denote the constant map, we have the equivariant surjective map
\[e=\operatorname{id}_{1+\mathbb{A}}+\operatorname{const}_{0}\colon M=(1+ \mathbb{A})+(\overline{1}+\overline{\mathbb{A}})\twoheadrightarrow(1+ \mathbb{A})+0=N.\]
Note that \(e\) is a monoid morphism: it maps \(1\) to \(1\) and if \(x,y\neq 1\) then \(e(x),e(y)\neq 1\) and hence \(e(x\cdot y)=e(\overline{x}\cdot\overline{y})=0=e(x)\cdot e(y)\). The quotient \(e\) is support-reflecting, but it is not MSR: the subset \(1+\mathbb{A}+\overline{1}\subseteq M\) of support-preserving elements does not form a submonoid of \(M\). Finally, clearly \(M\) satisfies (6.1) while \(N\) does not.
## 7 Conclusion and Future Work
We have introduced topological methods to the theory of data languages, and also explored some of their subtleties and limitations. Following the spirit of Marshall Stone's slogan _"always topologize"_, the core insight of our paper may be summarized as:
_Data languages topologize for bounded supports._
In fact, by restricting to support-bounded orbit-finite nominal sets and analyzing their Pro-completion, we have shown that fundamental results from profinite topology (notably Stone duality and the equivalence between profinite spaces and Stone spaces) generalize to the pro-orbit-finite world. These results are of independent interest; in particular, they are potentially applicable to data languages recognizable by all kinds of orbit-finite structures. For the case of monoids, we derived a topological interpretation of recognizable data languages via clopen sets of pro-orbit-finite words, as well as a nominal version of Reiterman's pseudovariety theorem characterizing the expressive power of pro-orbit-finite equations.
The foundations laid in the present paper open up a number of promising directions for future research. One first goal is to develop a fully fledged duality theory for data languages along the lines of the work of Gehrke et al. [18] on classical regular languages, based on an _extended nominal Stone duality_ between pro-orbit-finite monoids and nominal boolean algebras with operators.
Regarding specific applications, we aim to analyze further classes of orbit-finite monoids in terms of pro-orbit-finite equations, following the lines of Example 6.9, in order to classify the corresponding data languages. One natural candidate is the class of \(\mathcal{J}\)-trivial monoids, with the vision of a nominal version of Simon's theorem [42] relating \(\mathcal{J}\)-triviality to existential first-order logic on data words.
Finally, we aim to extend our topological theory of recognizable data languages, and the corresponding nominal Reiterman theorem, to algebraic structures beyond orbit-finite monoids. Potential instances include algebras for a signature \(\Sigma\), which serve as recognizers for data tree languages, infinitary structures such as nominal \(\omega\)-semigroups [46], modeling languages of infinite data words, and algebraic structures with binders, which we expect to bear interesting connections to data languages with binders and their automata models [40, 44].
|
2306.02443
|
ESTISR: Adapting Efficient Scene Text Image Super-resolution for
Real-Scenes
|
While scene text image super-resolution (STISR) has yielded remarkable
improvements in accurately recognizing scene text, prior methodologies have
placed excessive emphasis on optimizing performance, rather than paying due
attention to efficiency - a crucial factor in ensuring deployment of the
STISR-STR pipeline. In this work, we propose a novel Efficient Scene Text Image
Super-resolution (ESTISR) Network for resource-limited deployment platform.
ESTISR's functionality primarily depends on two critical components: a
CNN-based feature extractor and an efficient self-attention mechanism used for
decoding low-resolution images. We designed a re-parameterized inverted
residual block specifically suited for resource-limited circumstances as the
feature extractor. Meanwhile, we proposed a novel self-attention mechanism,
softmax shrinking, based on a kernel-based approach. This innovative technique
offers linear complexity while also naturally incorporating discriminating
low-level features into the self-attention structure. Extensive experiments on
TextZoom show that ESTISR retains a high image restoration quality and improved
STR accuracy of low-resolution images. Furthermore, ESTISR consistently
outperforms current methods in terms of actual running time and peak memory
consumption, while achieving a better trade-off between performance and
efficiency.
|
Minghao Fu, Xin Man, Yihan Xu, Jie Shao
|
2023-06-04T19:14:44Z
|
http://arxiv.org/abs/2306.02443v1
|
# ESTISR: Adapting Efficient Scene Text Image Super-resolution for Real-Scenes
###### Abstract
While scene text image super-resolution (STISR) has yielded remarkable improvements in accurately recognizing scene text, prior methodologies have placed excessive emphasis on optimizing performance, rather than paying due attention to efficiency - a crucial factor in ensuring deployment of the STISR-STR pipeline. In this work, we propose a novel Efficient Scene Text Image Super-resolution (ESTISR) Network for resource-limited deployment platform. ESTISR's functionality primarily depends on two critical components: a CNN-based feature extractor and an efficient self-attention mechanism used for decoding low-resolution images. We designed a re-parameterized inverted residual block specifically suited for resource-limited circumstances as the feature extractor. Meanwhile, we proposed a novel self-attention mechanism, softmax shrinking, based on a kernel-based approach. This innovative technique offers linear complexity while also naturally incorporating discriminating low-level features into the self-attention structure.
Extensive experiments on TextZoom show that ESTISR retains a high image restoration quality and improved STR accuracy of low-resolution images. Furthermore, ESTISR consistently outperforms current methods in terms of actual running time and peak memory consumption, while achieving a better trade-off between performance and efficiency.
## 1 Introduction
Scene text recognition (STR) aims at recognizing text characters in images and has revolutionized various downstream tasks such as license plate recognition [33], handwriting recognition [39], and scene text visual question answering [2]. However, imperfect image acquisition conditions often impede the development of these fields. For example, an image captured under weak light is hard to understand in its dark regions, and a long focal length leads to an indistinct image with blurry areas. Therefore, scene text recognition remains a challenge in the wild.
Image super-resolution (SR) is a popular research topic in computer vision, which aims at restoring low-resolution (LR) images to their high-resolution (HR) counterparts. Since the pioneering work SRCNN [9] was proposed, deep learning based methods have made breakthroughs in image restoration. Many studies [10, 43, 16] have achieved compelling restoration quality, and recent work [1, 34, 21, 13] improves model efficiency to support a wider range of SR deployments. However, existing SR methods focus on generic image restoration and do not perform well on text images. To address this issue, TSRN [37] first leverages SR as a pre-processor for STR: SR restores LR text images to their HR counterparts, ultimately alleviating the difficulty of text recognition. Experiments show that SR-processed text images have cleaner and sharper text features than the original ones, which makes them more suitable for neural network recognition.
After that, several methods were proposed to improve the recognition accuracy and super-resolution quality of STISR. TBSRN [5] highlights position and content information in the text-level layout to establish a text-focused framework. TPGSR [26] introduces text priors into STISR to provide categorical guidance for image recovery. TATT [27] proposes a CNN-based text attention network to reformulate deformed text in processed images. Despite the great advances in SR effectiveness, these methods use complicated networks and sometimes require prior knowledge from a large pre-trained model, which makes them hard to apply on resource-limited devices. Moreover, STISR was essentially born as a pre-processor to strengthen the STR model, yet it takes a large share of time away from the main task, as shown in Table 1. The results show that each STISR model takes up most of the inference time, because these models have large parameter counts and high computational complexity in the forward pass. Hence, it is imperative to build an accurate and efficient STISR model as a qualified plug-in pre-processor for STR.
A handicap of current STISR methods is the lack of efficient and powerful feature extractors. Most representative single image super-resolution (SISR) models are essentially composed of variants of convolutional layers. Meanwhile, STISR methods tend to adopt CNN-based modules as the feature extractors before sequence-to-sequence models. However, traditional CNN-based modules are complex and inefficient for processing feature maps with many channels, and they do not consider model simplicity when interacting locally across channels and spatial dimensions. To make the process of feeding feature maps into a sequential model more lightweight, we propose a re-parameterized inverted residual block (RIRB) as our feature extractor backbone. Due to the preservation of the manifold of interest and the benefits of over-parameterization, RIRB exhibits a powerful feature extracting capacity. During the inference stage, RIRB maintains a running speed as fast as a \(3\times 3\) convolution via the structural re-parameterization strategy [8].
After reviewing the architectures of current STISR methods, we found that Transformer-based modules play a significant role as components of [5, 26, 27]. The Transformer has achieved great success in processing sequential information and capturing cross-domain dependencies, but it is extremely computationally expensive for long input sequences as a result of the quadratic space and time complexity of its multi-head self-attention. Based on the kernel-based method, we propose a novel self-attention structure with linear complexity. Besides, in order to adapt the efficient Transformer to STISR, we introduce low-level information into the self-attention matrix as a third similarity factor between \(Q\) and \(K\). We utilize a lightweight CNN-based module to produce a matrix that injects low-level prior knowledge into the similarity computation of the scaled dot product. This low-level feature generator offers robust prior knowledge to improve locality discrimination and feature interaction in spatially embedded sequences.
Our main contributions can be summarized as follows:
* A re-parameterized inverted residual block (RIRB) is proposed to effectively extract significant features to be fed into the sequence model. Moreover, RIRB keeps a practical running speed as fast as a \(3\times 3\) convolution.
* Our proposal involves an innovative and efficient self-attention, termed softmax shrinking, which reformulates the scaled dot product to reduce the complexity of self-attention from quadratic to linear.
* Building upon this, we have developed an efficient scene text image super-resolution (ESTISR) network that achieves a superior trade-off between performance and efficiency. Compared with existing methods, ESTISR reduces the running time by up to 60%.
## 2 Related Work
In this section, we provide an overview of how previous SR models achieved excellent performance and improved efficiency. Then, we turn to the most relevant works on STISR, which address the STR accuracy degradation on low-resolution images. Finally, we introduce recent efficient Transformers, including sparse methods and kernel methods.
**Single image super-resolution.** In recent years, deep learning based methods for single image super-resolution (SISR) [9, 16, 20, 43, 22] have seen a variety of advancements since the pioneering work of SRCNN [9]. By introducing residual learning, attention mechanisms, and CNNs into low-level vision tasks, numerous SR methods have made progress in image quality measured by PSNR/SSIM. However, due to the dilemma of huge memory costs and limited computing resources on mobile devices, it is imperative to develop efficient SR [1, 10, 13, 19, 23] for real-world usage. Previous efficient SISR models tend to reduce FLOPs, until Zhang et al. [42] pointed out that FLOPs may not be an accurate proxy for running speed. They propose a straightforward convolutional network with structural re-parameterization [8] and achieve the fastest inference speed on mobile devices. Nevertheless, existing SISR methods do not perform satisfactorily on scene text images.
**Scene text image super-resolution.** There has been great progress in the field of scene text recognition (STR) in recent years. CRNN [31] first adopts a recurrent neural network to capture scene text semantics and decode character content. After that, ASTER [32] and MORAN [25] introduce attention mechanisms to provide rectification across text content. However, scene text recognition in low-resolution (LR) images remains a challenging task due to their blurry texture and indistinct contours. Hence, Wang et al. [37] build TextZoom, a text-focused SR dataset collected with different focal lengths, which aims at using image super-resolution as the pre-processor before the STR process to alleviate the recognition difficulty.
\begin{table}
\begin{tabular}{l c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c}{CRNN [31] (6.43ms)} \\ \cline{2-3} & SR time (ms) & SR time occupation \\ \hline TSRN [37] & 39.41 & 85.97\% \\ \hline TBSRN [5] & 64.58 & 90.94\% \\ \hline TPGSR [26] & 59.72 & 90.28\% \\ \hline TATT [27] & 68.49 & 91.42\% \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison of average running time in different STISR networks. We choose the classic text recognition model CRNN [31] as the subsequent recognizer. SR time is calculated from STISR model inference on TextZoom [37]. SR time occupation denotes the percentage of SR processing time during the entire recognition.
TBSRN [5] concentrates on character regions and text identification via a position-aware module and a context-aware module. TPGSR [26] injects text priors into the SR module to provide categorical guidance, and furthermore, TATT [27] proposes a CNN-based text attention to address the variation caused by text deformations. Nevertheless, the methods above consume large amounts of computational resources and memory, which is unfriendly for practical deployment. Therefore, our ESTISR aims to modify STISR from an efficiency perspective, while retaining its effectiveness in both recognition accuracy and image quality.
**Efficient self-attention.** After years of development, the Transformer architecture [36] has been successfully applied to computer vision tasks [4, 11, 24]. Owing to its power in capturing contextual information across character sequences, all recent STISR methods [26, 5, 27] utilize multi-head self-attention as a fundamental component to improve the learning of global texture. However, the quadratic space and time complexity of the scaled dot product impedes their practical application in STR. An intuitive solution is to modify self-attention as the new architecture backbone. Recent methods can be broadly divided into two categories, sparse attention and linear attention. Sparse attention [18, 35, 40] aims to reformulate the attention matrix, and kernel-based methods [28, 6, 29] have pushed the acceleration of Transformers in various fields. However, an in-depth examination of the literature shows that they either lack theoretical validity or are empirically ineffective in low-level vision. Our work addresses this issue by incorporating low-level knowledge into our proposed efficient self-attention mechanism.
## 3 Methodology
### Network Architecture
The architecture of the proposed ESTISR is depicted in Figure 1. ESTISR applies a sequential backbone to minimize memory consumption. We first apply a spatial Transformer network (STN) [14] to the low-resolution (LR) images to tackle the misalignment problem. After encoding the input images through 2 RIRBs as feature extractors, cascaded decoder layers process the flattened feature maps to capture global self-attention. We perform an upsampling operation at the end via a convolution layer and a pixel-shuffle layer, ultimately producing \(\times 2\) output images.
### Re-parameterized Inverted Residual Block
#### 3.2.1 Potential of Sequential Linear Layer Merging
Structural re-parameterization is a prevalent methodology in the domain of Super Resolution (SR), serving as a lightweight strategy for employing a CNN backbone. Nevertheless, existing approaches [38, 42] primarily focus on multi-branch merging while neglecting the merging of sequential linear layers. In our investigation, we have identified that each structural re-parameterization technique proposed in [38, 42] possesses an essential property of invertibility. This key characteristic enables the merging of sequential linear layers, thereby augmenting the potential of the SR framework.
Figure 1: Overall network architecture of the proposed efficient scene text image super-resolution (ESTISR). LMHSA represents multi-head self-attention with linear complexity, LN denotes layer normalization, BN denotes batch normalization, and FFN is the feed-forward network [36].
Derived from the original residual block proposed by He et al. [12], depicted in Figure 2(a), the inverted residual block introduced by Sandler et al. [30], shown in Figure 2(b), has demonstrated remarkable efficacy in making models lightweight. However, it comes with inherent limitations that constrain the model's representational capacity. To strike a balance between model complexity and runtime efficiency, we propose a novel re-parameterized inverted residual block (RIRB) that capitalizes on the intrinsic associativity and linearity of sequential linear layers by removing the activations in the inverted residual, as illustrated in Figure 2(c). Despite the inherent limitation posed by the absence of non-linear activation, our re-parameterization approach facilitates a reduction in network depth through the fusion of sequential linear layers. This consolidation allows us to take advantage of the computational efficiency of standard \(3\times 3\) convolutions, particularly on mobile devices [42]. Notably, to mitigate the information loss caused by the activation reduction, we decrease the expanding ratio in the bottleneck to 2, in contrast to the original ratio of 6.
Our RIRB differs from existing methods such as the RepSR block and edge-oriented convolution block (ECB) [38, 42] in several ways. While these techniques focus on lightening multi-branch structures, they do not fully exploit the potential of sequential linear layers. In contrast, RIRB leverages the flexibility of channel transformation to preserve the desired manifold with a simple structure. Moreover, compared to image classification techniques like AC-Net [7] and RepVGG [8], which are designed for high-level vision, RIRB is more adaptive to low-level vision tasks.
#### 3.2.2 Structural Re-parameterization Details
We will illustrate the underlying principles behind the indispensability of sequential linear layers merging in order to unleash the potential of structural re-parameterization. By training a more intricate formulation, we are able to enhance the representational capacity of the standard convolution.
We now describe how to re-parameterize RIRB into a standard \(3\times 3\) convolution. After re-parameterization, output feature \(F\) can be calculated by the final weight and bias \(\{W_{rep},B_{rep}\}\) of convolution:
\[F=W_{rep}\ast X+B_{rep}. \tag{1}\]
Below we show the calculation of \(\{W_{rep},B_{rep}\}\):
\[W_{0,1}=perm(W_{0})\ast W_{1}, \tag{2}\] \[B_{0,1}=W_{1}\ast(B_{0}\:pad\:B_{0})+B_{1}, \tag{3}\]
where \(\{W_{0},B_{0}\}\) and \(\{W_{1},B_{1}\}\) represent the weights and biases of the first Conv\(1\times 1\) and the second Conv\(3\times 3\), and \(\{W_{0,1},B_{0,1}\}\) denotes the weight and bias of their combination after re-parameterization. \(perm\) means exchanging the first and second dimensions of the convolution kernel. Then, the problem is converted to a Conv\(3\times 3\)-Conv\(1\times 1\) combination.
\[W_{rep}=res(bmm(res(W_{0,1}),res(W_{2})))+W_{I}, \tag{4}\] \[B_{rep}=mm(res(W_{0,1}),B_{0,1})+B_{2}+B_{I}, \tag{5}\]
where \(\{W_{2},B_{2}\}\) and \(\{W_{I},B_{I}\}\) represent the third Conv\(1\times 1\) and the identity branch. \(res\) represents reshaping the kernel, and \(mm(bmm)\) is (batch) matrix multiplication. The side-way identity connection (a channel-wise operation) is supposed to conduct a direct addition; however, a method to re-parameterize it had not been developed. We regard the identity connection as a standard \(3\times 3\) convolution with a sparse kernel. In this way, RIRB is re-parameterized into a standard Conv\(3\times 3\). Although RIRB is more expensive to train than a standard Conv\(3\times 3\), efficient inference is what drives this design. The whole pseudo code can be viewed in the appendix.
Figure 2: Residual block (a), inverted residual block (b) and our re-parameterized inverted residual block (c).
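To make the merging steps of Eqs. (2)–(5) concrete, the following is a minimal PyTorch sketch of folding a Conv\(1\times 1\)–Conv\(3\times 3\)–Conv\(1\times 1\) chain plus an identity branch into a single \(3\times 3\) kernel. It is our own illustration rather than the released implementation: the function names are ours, the third layer is assumed to be a \(1\times 1\) projection, and the border correction of Eq. (3) is ignored (zero padding is assumed).

```python
import torch
import torch.nn.functional as F

def merge_conv1x1_conv3x3(w0, b0, w1, b1):
    """Fold Conv1x1 (w0, b0) followed by Conv3x3 (w1, b1) into one 3x3 kernel (cf. Eqs. 2-3).
    w0: (C_mid, C_in, 1, 1), w1: (C_out, C_mid, 3, 3)."""
    w = F.conv2d(w1, w0.permute(1, 0, 2, 3))          # merged weight: (C_out, C_in, 3, 3)
    b = w1.sum(dim=(2, 3)) @ b0 + b1                  # push b0 through w1 (border term ignored)
    return w, b

def merge_conv3x3_conv1x1(w01, b01, w2, b2):
    """Fold Conv3x3 (w01, b01) followed by Conv1x1 (w2, b2) into one 3x3 kernel (cf. Eqs. 4-5).
    w01: (C_mid, C_in, 3, 3), w2: (C_out, C_mid, 1, 1)."""
    w2_mat = w2.flatten(1)                            # (C_out, C_mid)
    w = torch.einsum('om,mikl->oikl', w2_mat, w01)    # channel mixing folded into the 3x3 kernel
    b = w2_mat @ b01 + b2
    return w, b

def identity_as_conv3x3(channels):
    """The skip connection expressed as a sparse 3x3 kernel (center tap = 1)."""
    w = torch.zeros(channels, channels, 3, 3)
    w[torch.arange(channels), torch.arange(channels), 1, 1] = 1.0
    return w

def reparameterize_rirb(w0, b0, w1, b1, w2, b2):
    """Collapse the whole chain into a single Conv3x3 {W_rep, B_rep}.
    Assumes equal input/output channel counts so that the identity branch is well defined."""
    w01, b01 = merge_conv1x1_conv3x3(w0, b0, w1, b1)
    w_rep, b_rep = merge_conv3x3_conv1x1(w01, b01, w2, b2)
    w_rep = w_rep + identity_as_conv3x3(w_rep.shape[0])
    return w_rep, b_rep
```

At inference time one would then run a single `F.conv2d(x, w_rep, b_rep, padding=1)` in place of the whole block.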
### Self-attention with linear complexity
In this section, we empirically analyze the choice of patch embedding and spatial embedding in image Transformer. We then introduce softmax shrinking, a novel linear self-attention mechanism that incorporates a low-level feature weight matrix. We provide detailed explanations of each component to enhance comprehension of their respective roles and functionalities.
#### 3.3.1 Patch-wise or Spatial-wise Embedding
We first explore the right way of embedding images in the Transformer. Since TSRN [37] emerged, recent STISR methods have adopted spatial-wise embedding for the subsequent dot-product self-attention calculation. This means each embedded token represents the channels at one position. Patch-wise embedding is a method of representing an image as a sequence of patches, as in the vision transformer [11], which has shown good performance in various high-level vision tasks. We select TBSRN [5], a Transformer-based super-resolution network, as the baseline. As shown in Figure 3, "ViT" denotes using patch embedding before the query, key and value are fed into self-attention, and the suffix number represents the patch size. The other configurations follow the original TBSRN.
Regarding Figure 3, our analysis revealed that using patch-wise embedding in ViT has a detrimental effect on the STISR task. Spatial-wise embedding appears to be better suited for extracting low-level information, due to its ability to adapt to small dimensionality. In contrast, patch-wise embedding tends to focus more on long-range dependencies. Furthermore, the Transformer model heavily relies on local information, and if the sequence length is reduced, the positional embedding's impact is significantly diminished, which affects the Transformer's effectiveness on the sequence. In summary, our results indicate that spatial-wise embedding outperforms patch-wise embedding, and that low-level information is more critical than high-level information for achieving effective STISR.
#### 3.3.2 Linear Self-attention
The vanilla multi-head self-attention [36] scales quadratically with the sequence length, resulting in a large computation burden when \(n\) is large. To address this issue and improve the performance of ViT on STISR tasks, we propose a novel linear self-attention method, named softmax shrinking, that reduces the complexity from \(O(n^{2})\) to \(O(n)\). Specifically, we introduce a low-level feature generator as an external factor in the scaled dot product, which naturally incorporates low-level features into the Transformer-based modules and improves the efficiency of the model.
**Kernel-based method.** First we introduce the kernel-based method in previous work. Given queries \(Q\), keys \(K\) and values \(V\in\mathbb{R}^{n\times d}\), the general form of self-attention is
\[Attention(Q,K,V)=softmax(\frac{QK^{T}}{\sqrt{d}})V. \tag{6}\]
The vanilla Transformer uses a linear projection kernel to extract latent features. We also apply a linear transformation to process the incoming data. As shown in Figure 4, the attention map is an \(n\times n\) matrix and the computation complexity is \(O(n^{2})\). Various linear self-attention mechanisms [5, 6, 29] attempt to tackle the quadratic complexity of this operation by a kernel decomposition. Katharopoulos _et al_. [15] interpret \(softmax(\cdot)\) as a general similarity matrix between \(Q\) and \(K\). Thus, we reconstitute self-attention as a similarity matrix of \(Q\) and \(K\) as follows:
\[Attention(Q,K,V)=similar(Q,K)V. \tag{7}\]
Then, similarity matrix can be described as the multiplication of representations of \(Q\) and \(K\) after feature extraction:
\[Attention(Q,K,V;\phi)=(\phi(Q)\phi(K)^{T})V, \tag{8}\]
where \(\phi(\cdot)\) denotes the latent representation of input data. After that, dot-product computation order can be rearranged due to associativity of matrix multiplication:
\[Attention(Q,K,V;\phi)=\phi(Q)(\phi(K)^{T}V). \tag{9}\]
Figure 3: Accuracy comparison of patch embedding and spatial embedding.
As visually illustrated in Figure 4, instead of explicitly computing the attention matrix \(A=QK^{T}\in\mathbb{R}^{n\times n}\), in Eq. (9) we first calculate \(\phi(K)^{T}V\in\mathbb{R}^{d\times d}\), and then multiply by \(\phi(Q)\in\mathbb{R}^{n\times d}\). By using this trick, we only incur a computation complexity of \(O(nd^{2})\). Note that, in spatial-wise embedded images, the feature dimension of one head \(d\) is always much smaller than the input sequence length \(n\), so we can safely omit \(d\) and achieve a computation complexity of \(O(n)\).
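As a point of reference for Eq. (9), a minimal PyTorch sketch of this reordering is given below; the feature map \(\phi\) is assumed to be \(\mathrm{elu}(x)+1\) (a common non-negative choice in kernel-based linear attention), and the tensor and function names are ours.

```python
import torch
import torch.nn.functional as F

def phi(x):
    # Non-negative feature map; elu(x) + 1 is a common choice in kernel-based linear attention.
    return F.elu(x) + 1.0

def linear_attention(q, k, v):
    """Eq. (9): phi(Q) (phi(K)^T V); cost is O(n d^2) instead of O(n^2 d).
    q, k, v: tensors of shape (batch, n, d)."""
    q, k = phi(q), phi(k)
    kv = torch.einsum('bnd,bne->bde', k, v)      # phi(K)^T V, a d x d matrix per batch element
    return torch.einsum('bnd,bde->bne', q, kv)   # phi(Q) (phi(K)^T V), shape (batch, n, d)
```

Practical implementations such as [15] additionally divide each row by \(\phi(Q)\big(\phi(K)^{T}\mathbf{1}\big)\) to normalize the attention weights; Eq. (9) omits this term.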
**Softmax shrinking.** Although kernel-based methods largely decrease the computation of self-attention, such lightweight schemes were originally designed for natural language processing and do not perform satisfactorily on the STISR task, which involves sequential text and low-level feature information. Hence, we propose a novel \(softmax(\cdot)\) shrinking based on the kernel-based method, which adds low-level feature information into self-attention to capture more localized information.
In [29], the \(softmax(\cdot)\) operator is considered as the key property to maintain approximation capacity of self-attention. Reviewing vanilla self-attention formula in Eq. (6), we found that it uses the full part of \(Q\) and \(K\) to calculate a similarity matrix and normalize it. Kernel-based methods decompose this process as in Eq. (8) to approximate \(softmax(\cdot)\) such that:
\[similar(Q,K;\phi)=\phi(Q)\phi(K)^{T}, \tag{10}\]
and we assume that \(Q\) and \(K\) can continue to be decomposed as a multiplication of generated feature matrix and themselves:
\[\phi(Q) =\psi(q)\phi(Q^{\prime}), \tag{11}\] \[\phi(K)^{T} =\psi(k)\phi(K^{\prime})^{T}, \tag{12}\]
where \(\psi\) is a kernel function that maps processed data to the latent low-level feature matrix. After that, the similarity function can be denoted as secondary decomposed format as:
\[similar(Q^{\prime},K^{\prime},q,k;\phi,\psi)=(\psi(q)\phi(Q^{\prime}))\cdot( \psi(k)\phi(K^{\prime})^{T}). \tag{13}\]
We could merge \(\psi(q)\) and \(\psi(k)\) via associativity in matrix product:
\[similar(Q^{\prime},K^{\prime},q,k;\phi,\psi)=\phi(Q^{\prime})\cdot(\psi(q)\psi (k))\cdot\phi(K^{\prime})^{T}. \tag{14}\]
Finally, we denote \(\psi(q)\psi(k)\) as the normalization of low-level similarity matrix \(S\) between \(Q\) and \(K\). Eventually, it can be seen as
\[softmax(S)=\psi(q)\psi(k). \tag{15}\]
We rearrange the multiplying order to retain the linear complexity. The ultimate result of self-attention is calculated as:
\[Attention(Q,K,V,S;\phi)=\phi(Q)((softmax(S)\phi(K)^{T})V). \tag{16}\]
In the vanilla multi-head self-attention, \(softmax(\cdot)\) is applied to normalize each token along the rows of the attention matrix. Compared with Eq. (6), Eq. (16) shrinks the domain of \(softmax(\cdot)\) to the low-level feature similarity matrix \(S\). The whole transformation keeps the complexity of self-attention at \(O(n)\), as shown in Figure 4. We employ the ELU activation function as \(\phi(\cdot)\) to retain the non-negative property, as [29] does.
**Low-level feature generator.** As aforementioned, the attention structure needs to extract representative low-level feature information to reformulate the dot-product self-attention. In this paper, we propose a low-level feature generator, a CNN-based module that preserves low-level prior knowledge in a square matrix. As shown in Figure 1, we adopt a basic convolution block as the locality feature extractor and a maximum pooling layer to aggregate representative features from local areas.
Due to practical considerations, we avoid concatenation, splitting and attention branches during this process, applying only pooling and convolution to maintain module efficiency within self-attention.
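To fix ideas, here is a rough PyTorch sketch of Eq. (16) together with a hypothetical low-level feature generator producing the \(d\times d\) matrix \(S\). The generator's exact architecture is our assumption (the text only specifies convolution and maximum pooling), the feature map is assumed to be \(\mathrm{elu}(x)+1\), and all module and variable names are ours.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LowLevelFeatureGenerator(nn.Module):
    """Hypothetical sketch: a small convolution + max-pooling stack that squeezes
    the input feature map into a d x d low-level similarity matrix S."""
    def __init__(self, channels, d):
        super().__init__()
        self.conv = nn.Conv2d(channels, 1, kernel_size=3, padding=1)  # locality feature extractor
        self.pool = nn.AdaptiveMaxPool2d((d, d))                      # aggregate local responses

    def forward(self, feat):                          # feat: (batch, channels, H, W)
        return self.pool(self.conv(feat)).squeeze(1)  # S: (batch, d, d)

def softmax_shrinking_attention(q, k, v, s):
    """Eq. (16): phi(Q) ((softmax(S) phi(K)^T) V), linear in the sequence length n.
    q, k, v: (batch, n, d);  s: (batch, d, d) from the low-level feature generator."""
    phi_q, phi_k = F.elu(q) + 1.0, F.elu(k) + 1.0     # non-negative feature map (assumed elu + 1)
    m = torch.softmax(s, dim=-1)                      # softmax shrunk to the d x d matrix S
    kv = torch.einsum('bde,bne,bnf->bdf', m, phi_k, v)  # (softmax(S) phi(K)^T) V : (batch, d, d)
    return torch.einsum('bnd,bdf->bnf', phi_q, kv)
```

The only overhead relative to plain kernel-based attention is the extra \(d\times d\) matrix product, which is independent of the sequence length.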
Figure 4: Illustration of the computations for vanilla self-attention with quadratic complexity (left), linear self-attention via kernel decomposition (middle), and our linear self-attention via softmax shrinking (right).
**Denoising process before self-attention.** We first employ a denoising process on the sequence \(X\) after positional embedding, aiming to make the model approximate the original sequence by countering the corrupted distribution. Firstly, we sample the number of corrupted tokens from a continuous uniform distribution. Specifically, we sample \(n\) from \(\mathcal{U}([p*l,l])\), where \(p\sim\mathcal{U}([0,1])\) represents the minimum ratio of corrupted tokens and \(l\) denotes the sequence length. Once these values are obtained, we randomly select \(n\) pixels to apply the corruption function described below,
\[X_{c}=(1-n)*X+n*N, \tag{17}\]
Here \(X\) denotes the original pixel sequence before self-attention and \(X_{c}\) the sequence after corruption. The noise \(N\) is a sequence of random values drawn from a discrete uniform distribution, \(N=(r_{1},r_{2},\ldots,r_{l})\), \(r\sim\mathcal{U}(0,1,\ldots,255)\).
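A small sketch of this corruption step follows, under our reading of Eq. (17) in which \(n\) acts as a binary mask over the randomly selected positions; the function and variable names are ours.

```python
import torch

def corrupt_sequence(x, p_min=None):
    """x: (batch, l, c) pixel sequence with values in [0, 255].
    Corrupts between p*l and l randomly chosen positions with uniform noise (Eq. 17),
    reading n as a 0/1 mask over the selected positions; the mask is shared across the batch."""
    batch, l, c = x.shape
    p = torch.rand(()).item() if p_min is None else p_min        # p ~ U([0, 1])
    num = int(torch.empty(()).uniform_(p * l, l).item())         # n ~ U([p*l, l])
    mask = torch.zeros(1, l, 1, dtype=x.dtype)
    mask[:, torch.randperm(l)[:num]] = 1.0                       # selected positions
    noise = torch.randint(0, 256, x.shape).to(x.dtype)           # N: discrete uniform over 0..255
    return (1 - mask) * x + mask * noise                         # Eq. (17)
```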
## 4 Experiments
In this section, we first verify the efficiency of ESTISR in real-world applications. Then we conduct experiments on the dataset to evaluate ESTISR's performance in terms of recognition accuracy improvement and image quality.
### Dataset
We select TextZoom [37] as our evaluation benchmark. TextZoom contains 21740 LR-HR image pairs, which are collected from two image super-resolution datasets, Real-SR [3] and SR-RAW [41]. The LR-HR pairs are captured by cameras with different focal lengths to approximate real scenes. LR and HR images are resized to \(16\times 64\) and \(32\times 128\), respectively.
### Implementation details
Our ESTISR is implemented in PyTorch. We set the mini-batch size to \(16\) for each training iteration and use the ADAM optimizer [17] for \(500\) epochs. The initial learning rate is set to \(4\times 10^{-4}\) and is halved every \(2\times 10^{6}\) iterations for a total of \(2\times 10^{7}\) iterations. Training is conducted on one NVIDIA Titan X GPU with 2.5GB of memory. Specifically, we adopt the text-focused loss [5] as the loss function to yield better performance. The text-focused loss is mainly composed of the L2 loss with the assistance of a pre-trained text recognition module to highlight position and content. In order to get precise efficiency measurements, we gauge the running time of the STISR model with CUDA events (\(torch.cuda.Event\)), and record the peak memory consumption with \(torch.cuda.max\_memory\_allocated\) across the STISR-STR process.
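For reference, the following is a minimal sketch of this measurement protocol with CUDA events and the peak-memory counter; the model and input are placeholders and the function name is ours.

```python
import torch

def profile_sr_model(model, lr_batch, device='cuda'):
    """Measure forward-pass latency (ms) via CUDA events and peak memory (MB)."""
    model = model.to(device).eval()
    lr_batch = lr_batch.to(device)
    torch.cuda.reset_peak_memory_stats(device)
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    with torch.no_grad():
        start.record()
        _ = model(lr_batch)
        end.record()
    torch.cuda.synchronize(device)
    elapsed_ms = start.elapsed_time(end)
    peak_mb = torch.cuda.max_memory_allocated(device) / 2**20
    return elapsed_ms, peak_mb
```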
### Comparison with STISR Methods in Efficiency
We evaluate ESTISR and other STISR methods on peak memory consumption and running time (speed), which are the most significant efficiency metrics. Other metrics, including parameter counts and MACs, reflect computation and memory occupation but do not equal practical results. We select CRNN [31] as the subsequent STR model. To approximate model inference in real scenes, the full STISR-STR process is conducted across the evaluation set of TextZoom [37]. The average running time is calculated by recording the forwarding time of each image during evaluation. The results can be seen in Table 2, which shows that ESTISR achieves the lowest peak memory consumption and average running time. In particular, ESTISR reduces the average running time by 35% compared with the primal method TSRN, and by 60% compared with TBSRN, which has a network architecture similar to ESTISR. These results indicate that our RIRB and softmax shrinking benefit lightweight STISR deployment in terms of both resource cost and running time.
### Comparisons with SISR and STISR Methods in Performance
We compare our ESTISR with current scene text image super-resolution methods including TSRN [37], TBSRN [5], TPGSR [26], and TATT [27]. Besides, we also compare our method with single image super-resolution methods including SRCNN [9] and SRResNet [20].
**Recognition accuracy improvement.** To measure ESTISR's effectiveness in improving scene text recognition, we select three text recognition models, including CRNN [31], ASTER [32], and MORAN [25], as the benchmark. As shown in Table 3, ESTISR achieves competitive performance with each recognizer. Compared with the baseline TSRN, ESTISR obtains a large improvement in recognition accuracy. Compared with recent state-of-the-art methods, ESTISR sacrifices only a little performance, but achieves a better trade-off between efficiency and performance.
**Image restoration quality.** We also compare ESTISR with these methods on the conventional PSNR/SSIM metrics in Table 4. ESTISR delivers competitive super-resolution image quality.
### Visual comparison
We visualize several super-resolution (SR) samples from TSRN [37], TBSRN [5], TPGSR [26], TATT [27], and our ESTISR in Figure 5. Compared with the lightweight model TSRN, ESTISR generates higher-quality images and achieves better recognition accuracy. Compared with the other methods, ESTISR is competitive across difficulty levels. Moreover, ESTISR is robust on hard images (see the last column) and incomplete images (see the third column).
### Ablation Study
In this section, we evaluate the effectiveness of each component, including the RIRB and softmax shrinking. Experiments are conducted on TextZoom, and recognition accuracy is computed with the pre-trained CRNN [31]. All ablation experiments use the same settings as above, except that results are recorded at 200 epochs.
#### 4.6.1 Effect of different structural re-parameterization blocks
We set a standard convolution layer as the baseline and compare our RIRB with other re-parameterization blocks, including RepVGG [8], the RepSR block [38], and the edge-oriented convolution block (ECB) [42], to explore its effectiveness in feature extraction. Additionally, we vary the expansion ratio of the inverted residual from 2 to 6. Table 5 shows that RIRB outperforms the existing re-parameterization methods. RIRB performs better than RepVGG, a re-parameterization block designed for high-level tasks. RepSR and ECB perform well as ConvNet backbones for SISR, but for STISR our RIRB generalizes better across different levels of low-resolution images. As the expansion ratio increases, RIRB gradually degrades in accuracy. Above all, the comparison between the plain convolution and RIRB indicates that structural re-parameterization can substantially enhance the CNN without sacrificing efficiency.
#### 4.6.2 Effect of softmax shrinking
In ESTISR and TBSRN [5], we replace the linear self-attention with vanilla scaled dot-product attention to isolate the effect of linear self-attention; in particular, the low-level feature generator is removed in the vanilla variant. As shown in Table 6, vanilla self-attention is computationally expensive because of its quadratic computational and spatial complexity, yet using linear self-attention instead causes no significant drop in performance. Linear self-attention with softmax shrinking even outperforms vanilla self-attention in ESTISR, underlining the importance of low-level information in image Transformers. Although softmax shrinking requires additional computation, the performance improvement justifies the added \(d\times d\) matrix computation cost.
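To make the complexity contrast concrete, the sketch below compares vanilla scaled dot-product attention with a generic kernelized linear attention. It uses a simple ReLU feature map purely as a stand-in; the exact softmax-shrinking feature map defined earlier in the paper is not reproduced here.

```python
import torch
import torch.nn.functional as F

def vanilla_attention(q, k, v):
    """Scaled dot-product attention: builds an (n, n) matrix, so O(n^2) in length."""
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    return F.softmax(scores, dim=-1) @ v

def linear_attention(q, k, v, phi=lambda x: F.relu(x) + 1e-6):
    """Kernelized linear attention: associativity keeps cost O(n) in length.
    phi is a stand-in non-negative feature map, not the paper's softmax shrinking."""
    q, k = phi(q), phi(k)
    kv = k.transpose(-2, -1) @ v                               # (d, d) summary, independent of n
    z = q @ k.sum(dim=-2, keepdim=True).transpose(-2, -1)      # per-query normalization, (n, 1)
    return (q @ kv) / (z + 1e-6)

n, d = 1024, 64
q, k, v = (torch.randn(n, d) for _ in range(3))
print(vanilla_attention(q, k, v).shape, linear_attention(q, k, v).shape)  # both (1024, 64)
```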
### Different linear self-attention methods.
We select efficient self-attention methods with linear complexity as contrasts, including performer [6], cosformer [29] and RFA [28].
**Cosformer [29]**. To maintain the non-negativity of the attention matrix, cosformer adopts the ReLU activation as the representational function \(\phi(\cdot)\). Specifically, we replace positional encoding with the cos-based re-weighting mechanism used in cosformer. The resulting similarity has the following form, where \(i,j\) denote positions in the sequences \(Q\) and \(K\):
\[similar(q_{i},k_{j})=ReLU(q_{i})ReLU(k_{j}^{T})cos(\frac{\pi}{2}\times\frac{i- j}{M}), \tag{18}\]
Cosformer thus provides a kernel decomposition that relies on the non-negative property of \(softmax(\cdot)\) attention.
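For clarity, the following sketch evaluates the similarity of Eq. (18) naively in \(O(n^{2})\); the actual cosformer decomposes the cosine re-weighting into sine/cosine terms so that the same quantity can be accumulated in linear time.

```python
import torch
import torch.nn.functional as F

def cosformer_similarity(q, k, M=None):
    """Naive O(n^2) evaluation of Eq. (18): ReLU(q_i) ReLU(k_j)^T * cos(pi/2 * (i - j) / M)."""
    n = q.shape[0]
    M = M if M is not None else n
    sim = F.relu(q) @ F.relu(k).transpose(-2, -1)              # non-negative (n, n) scores
    idx = torch.arange(n, dtype=q.dtype)
    reweight = torch.cos(torch.pi / 2 * (idx[:, None] - idx[None, :]) / M)
    return sim * reweight

q, k = torch.randn(128, 64), torch.randn(128, 64)
print(cosformer_similarity(q, k).shape)                        # (128, 128)
```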
**Performer [6]**. Performer adopts the fast attention via positive orthogonal random features (FAVOR+) algorithm to approximate the \(softmax(\cdot)\) kernel. In our experiments we select its bidirectional attention mechanism as the Transformer backbone, and use the same kernel-based treatment of \(softmax(\cdot)\) as in our softmax shrinking.
**Random feature attention (RFA) [28]**. Based on a random feature map \(\phi(\cdot)\), RFA builds an unbiased estimate of \(similar(Q,K)\). Specifically, to overcome the excessive de
\begin{table}
\begin{tabular}{l c c|c c} \hline \hline Method & Params (M) & MACs (G) & Runtime (ms) & Memory (G) \\ \hline \hline TSRN [37] & 2.67 & 0.88 & 39.41 & 1.02 \\ \hline TBSRN [5] & 2.98 & 1.19 & 64.58 & 4.21 \\ \hline TPGSR [26] & 11.88 & 1.72 & 59.72 & 2.68 \\ \hline TATT [27] & 15.94 & 1.99 & 68.49 & 2.15 \\ \hline ESTISR (ours) & 2.16 & 1.21 & **25.52** & **1.02** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Efficiency comparison of STISR models.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Loss} & \multicolumn{4}{c}{ASTER} & \multicolumn{4}{c}{MORAN} & \multicolumn{4}{c}{CRNN} \\ \cline{2-13} & & Easy & Medium & Hard & Average & Easy & Medium & Hard & Average & Easy & Medium & Hard & Average \\ \hline Bicubic & - & 42.4\% & 31.26 & 47.26 & 60.6\% & 37.9\% & 30.8\% & 44.1\% & 36.4\% & 21.1\% & 21.1\% & 26.8\% \\ \hline SRCNN & \(L_{2}\) & 69.4\% & 43.4\% & 32.2\% & 49.5\% & 63.2\% & 39.0\% & 30.2\% & 45.3\% & 38.7\% & 21.6\% & 20.9\% & 27.7\% \\ \hline SRResNet & \(L_{2}+L_{tr}+L_{p}\) & 69.4\% & 47.3\% & 34.3\% & 51.3\% & 60.7\% & 42.9\% & 32.6\% & 46.3\% & 39.7\% & 27.6\% & 22.7\% & 30.6\% \\ \hline TSRN & \(L_{2}+L_{GP}\) & 75.1\% & 56.3\% & 40.1\% & 58.3\% & 70.1\% & 53.3\% & 37.9\% & 54.8\% & 52.5\% & 38.2\% & 31.4\% & 41.4\% \\ \hline TBSRN & \(L_{POS}+L_{CON}\) & 75.7\% & 59.9\% & 41.6\% & 60.0\% & 74.1\% & 57.0\% & 40.8\% & 58.4\% & 59.6\% & 47.1\% & 35.3\% & 48.1\% \\ \hline TPGSR & \(L_{2}+L_{TT}\) & 77.0\% & 60.9\% & 42.4\% & 60.9\% & 72.2\% & 57.8\% & 41.3\% & 57.8\% & 61.0\% & 49.9\% & 36.7\% & 49.8\% \\ \hline TATT & \(L_{2}+L_{TP}+L_{TSC}\) & 78.9\% & 63.4\% & 45.4\% & 63.6\% & 72.5\% & 60.2\% & 43.1\% & 59.5\% & 62.6\% & 53.4\% & 39.8\% & 52.6\% \\ \hline \hline ESTISR (ours) & \(L_{POS}+L_{CON}\) & 75.8\% & 82.4\% & 60.0\% & 71.2\% & 56.2\% & 40.9\% & 57.1\% & 59.7\% & 47.9\% & 37.2\% & 49.0\% \\ \hline HR & - & 94.2\% & 87.7\% & 76.2\% & 86.6\% & 91.2\% & 85.3\% & 74.2\% & 84.1\% & 76.4\% & 75.1\% & 64.6\% & 72.4\% \\ \hline \hline \end{tabular}
\end{table}
Table 3: Scene text recognition accuracy comparison.
pendence on positional encoding, RFA augments the history in sequential models with a learned gating mechanism. In effect, RFA attenuates part of the sequence information by re-weighting the factors of the similarity matrix, thereby improving the model's complexity.
We conducted a performance comparison of different self-attention formats in our ESTISR model, as summarized in Table 7. Among the evaluated methods, self-attention with softmax shrinking achieved the highest recognition accuracy and performed well on the PSNR/SSIM metrics. Cosformer, which utilizes a similar kernel-based approach, achieved the second best results. However, Performer and RFA exhibited a significant performance degradation, indicating that aggregating sparse information for the \(softmax(\cdot)\) kernel is not conducive to low-level tasks. The underlying reasons behind the performance disparities in the low-level field remain unclear. In our future work, we plan to investigate the fundamental principles of efficient transformers in low-level tasks and explore their applicability in diverse domains.
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Loss} & \multicolumn{4}{c}{PSNR} & \multicolumn{4}{c}{SSIM} \\ \cline{3-10} & & Easy & Medium & Hard & Average & Easy & Medium & Hard & Average \\ \hline Bicubic & - & 22.35 & 18.98 & 19.39 & 20.35 & 0.7884 & 0.6254 & 0.6592 & 0.6961 \\ \hline SRCNN & \(L_{2}\) & 23.48 & 19.06 & 19.34 & 20.78 & 0.8379 & 0.6323 & 0.6791 & 0.7227 \\ \hline SRResNet & \(L_{2}+L_{tv}+L_{p}\) & 24.36 & 18.88 & 19.29 & 21.03 & 0.8681 & 0.6406 & 0.6911 & 0.7403 \\ \hline TSRN & \(L_{2}+L_{GP}\) & 25.07 & 18.86 & 19.71 & 21.42 & 0.8897 & 0.6676 & 0.7302 & 0.7690 \\ \hline TBSRN & \(L_{POS}+L_{CON}\) & 23.46 & 19.17 & 19.68 & 20.91 & 0.8729 & 0.6455 & 0.7452 & 0.7603 \\ \hline TPGSR & \(L_{2}+L_{TP}\) & 23.73 & 18.68 & 20.06 & 20.97 & 0.8805 & 0.6738 & 0.7440 & 0.7719 \\ \hline TATT & \(L_{2}+L_{TP}+L_{TSC}\) & 24.72 & 19.02 & 20.31 & 21.52 & 0.9006 & 0.6911 & 0.7703 & 0.7930 \\ \hline \hline ESTISR (ours) & \(L_{POS}+L_{CON}\) & 23.68 & 19.08 & 19.91 & 21.03 & 0.8836 & 0.6530 & 0.7487 & 0.7678 \\ \hline \hline \end{tabular}
\end{table}
Table 4: PSNR and SSIM comparisons.
Figure 5: Different STISR models’ super-resolution images of samples in Textzoom [37]. LR_bicubic means low-resolution images after bicubic upsampling.
## 5 Conclusion
This paper introduces an efficient scene text image super-resolution (ESTISR) network designed with model efficiency in mind. We propose a re-parameterized inverted bottleneck (RIRB) that strengthens the feature extraction module through a structural re-parameterization strategy. Furthermore, we address the quadratic complexity of the Transformer widely used in STISR and propose a softmax shrinking method that reduces the self-attention complexity from \(O(n^{2})\) to \(O(n)\) while injecting low-level feature information to improve its representational capacity. Extensive experiments show that ESTISR substantially reduces the running time of STISR while maintaining image quality and recognition accuracy, achieving a better trade-off between performance and efficiency.
|
2304.03124
|
FeFET-based MirrorBit cell for High-density NVM storage
|
HfO2-based Ferroelectric field-effect transistor (FeFET) has become a center
of attraction for non-volatile memory applications because of their low power,
fast switching speed, high scalability, and CMOS compatibility. In this work,
we show an n-channel FeFET-based Multibit memory, termed MirrorBit, which
effectively doubles the chip density via programming the gradient ferroelectric
polarizations in the gate using an appropriate biasing scheme. We have
experimentally demonstrated MirrorBit on GlobalFoundries HfO2-based FeFET
devices fabricated at 28 nm bulk HKMG CMOS technology. Retention of MirrorBit
states has been shown up to $10^5$ s at different temperatures. Also, the
endurance is found to be more than $10^3$ cycles. A TCAD simulation is also
presented to explain the origin and working of MirrorBit states based on the
FeFET model calibrated using the GlobalFoundries FeFET device. We have also
proposed the array-level implementation and sensing methodology of the
MirrorBit memory. Thus, we have converted 1-bit FeFET into 2-bit FeFET using a
particular programming scheme in existing FeFET, without needing any notable
fabrication process alteration, to double the chip density for high-density
non-volatile memory storage.
|
Paritosh Meihar, Rowtu Srinu, Vivek Saraswat, Sandip Lashkare, Halid Mulaosmanovic, Ajay Kumar Singh, Stefan Dünkel, Sven Beyer, Udayan Ganguly
|
2023-04-06T14:52:08Z
|
http://arxiv.org/abs/2304.03124v3
|
# FeFET-based MirrorBit cell for High-density NVM storage
###### Abstract
Ferroelectric field-effect transistor (FeFET) has become a center of attraction for non-volatile memory application because of their low power, fast switching speed, and high scalability. In this work, we show an n-channel FeFET-based Multibit memory, termed "MirrorBit", which effectively doubles the chip density via programming the gradient ferroelectric polarizations in the gate, using an appropriate biasing scheme. We have experimentally demonstrated MirrorBit on GlobalFoundries' HfO\({}_{2}\)-based FeFET devices fabricated at 28 nm bulk HKMG CMOS technology, with retention up to 10\({}^{4}\) s. A TCAD simulation of the MirrorBit operation is also presented, based on the FeFET model calibrated using the GlobalFoundries FeFET device. The simulation results reveal that the MirrorBit has uniform and non-uniform (gradient) polarization variations in the ferroelectric layer from source to drain, consisting of a total of 4 states. The spatially programmed polarization can be distinguished based on its effect on channel current in two different read directions, namely, source read and drain read. The threshold voltages, V\({}_{\rm T}\), are symmetric for uniform polarization bit and asymmetric when MirrorBit is programmed for source and drain reads. Thus, we have converted 1-bit FeFET into 2-bit FeFET using programming and reading schemes in existing FeFET, without the need for any special fabrication process, to double the chip density for high-density non-volatile memory storage.
Band diagram, Drain/Source write/read, Ferroelectricity, Field effect transistor (FeFET), Polarization
## 1 Introduction
Moore's law has been predicting the physical transistor scaling for nearly 6 decades [1]. It states that the chip density doubles every two years reducing the cost by half. The requirement for high-density and high-performance memory cells has become essential due to big data, neural network training, IoT, etc [2]. Emerging memories [3, 4] like FeRAM (Ferroelectric-RAM) [5], ReRAM (Resistive-RAM) [6], STT-MRAM (Spin-transfer torque Magnetic-RAM) [7], have shown the potential to bridge the gap between memory and storage, shifting the Von-Neumann computer architecture paradigm toward in-memory compute and biomimetic network architectures [8].
Transistor scaling has been a primary motivator toward increasing memory capacity, manufacturing low-power and high-performance devices [9][10]. There have been numerous methods developed to increase the memory capacity, such as dimensional scaling, Multi-Level-Cell (Strata flash)/Triple-Level-Cell and 3D-stacking of flash transistors [11], developing new geometric designs like FinFET [12], and engineering material/gate-stack of emerging memories [13, 14, 15]. In 2002, the Joint venture between AMD and Fujitsu Ltd. "Spansion" commercialized a new flash memory technology, called "MirrorBit", which effectively doubles the memory capacity [16].
The ferroelectric memory, after the discovery of HfO\({}_{2}\)-based ferroelectric devices in 2011 [17], has become a promising candidate and a competitor to the existing and other emerging memories [18]. The HfO\({}_{2}\)-based ferroelectric memory offers fast switching speed, low operational voltage, high scalability, and CMOS compatibility [19]. We show MirrorBit operation in Ferroelectric-FET (FeFET). Unlike the charge storage phenomena in Charge-Trap Flash devices, the FeFET works on the direction of polarization of the ferroelectric layer. The conventional FeFET has two states or 1-bit of
Figure 1: **MirrorBit Concept:** a) Spansion MirrorBit cell, b) FeFET based MirrorBit, and c) different polarization configurations and their corresponding drain (\(\mathsf{I_{D}}\)) and source (\(\mathsf{I_{S}}\)) currents.
information, which correspond to saturated UP and DOWN polarizations [19]. A suitable biasing scheme creates a gradient in polarization, which gives rise to another bit of information, effectively doubling the chip density.
## 2 Methods
Fig. 2 (a) represents a device schematic of FeFET fabricated at GlobalFoundries' 28-nm bulk HKMG CMOS technology [18]. We have performed the electrical characterization, using Agilent B1530A 4-channel Waveform generator/Fast measurement unit (WGFMU), on FeFET devices of dimensions \(L=240\) nm and \(W=240\) nm. The pulse scheme involves a MirrorBit write and two read events. Also, a TCAD model of the FeFET is developed and calibrated using experimental threshold voltages (V\({}_{T}\) s) matching to explain the origin and working of the MirrorBit.
## 3 Results and discussion
### GF-FeFET electrical characterization and TCAD model validation
The measured transfer characteristics of the FeFET (Fig. 2(c)) show two threshold voltages (V\({}_{T}\)), which correspond to the two uniform polarization states. The uniform-write high-V\({}_{T}\) (UWH) or Bit-00 state is written by applying a -4.5 V, 10 \(\mu\)s voltage pulse at the gate, and the uniform-write low-V\({}_{T}\) (UWL) or Bit-11 state by a 4.5 V, 10 \(\mu\)s voltage pulse.
The programming pulse width can be reduced to a few tens of nanoseconds, as observed in [20]. Recently, it has been demonstrated that this kind of FeFET device can even switch in the sub-nanosecond range, down to 300 ps [21], which, however, requires special probing equipment. For simplicity, and because of the limitations of our measurement set-up, we adopt pulse widths of 10 \(\mu\)s. We chose this duration to ensure a fully saturated switched V\({}_{T}\), because the initial state for MirrorBit programming requires a fully programmed low-V\({}_{T}\) state.
To read the V\({}_{T}\) of the states, a ramp voltage (-0.5 V to 1.5 V) is applied at the gate, keeping the drain at 0.1 V and the source grounded. The V\({}_{T}\) values of the device are extracted by the constant-current method at I\({}_{0}=1\)\(\mu\)A. The TCAD model of the FeFET is built by adding a 10 nm ferroelectric HfO\({}_{2}\) layer to the standard MOSFET model, so that the gate stack becomes Metal-Ferroelectric-Insulator-Semiconductor (MFIS). The model parameters are calibrated such that the simulated V\({}_{T}\) shift matches the measured V\({}_{T}\) shift between the two polarization states, as shown in Fig. 2(c). The two threshold voltages are V\({}_{T,H}=1.25\) V and V\({}_{T,L}=0.1\) V. Since the present study focuses on demonstrating the concept and working of the MirrorBit, only the V\({}_{T}\) values are matched, without loss of generality; one could calibrate the model further by exact fitting (for example, by adding non-idealities such as D\({}_{IT}\)).
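As a minimal illustration of the constant-current read-out described above, the threshold voltage can be obtained by interpolating the gate voltage at which the drain current crosses \(I_{0}=1\) \(\mu\)A. The sweep data below are synthetic and purely illustrative, not measurements.

```python
import numpy as np

def extract_vt(vg, id_current, i0=1e-6):
    """Constant-current V_T: gate voltage at which I_D crosses I_0 (default 1 uA).
    Interpolates in log(I_D) since the subthreshold current is roughly exponential in V_G."""
    vg, id_current = np.asarray(vg), np.asarray(id_current)
    order = np.argsort(vg)
    vg, id_current = vg[order], id_current[order]
    return float(np.interp(np.log10(i0), np.log10(id_current), vg))

# Illustrative ramp from -0.5 V to 1.5 V with a toy exponential curve crossing 1 uA at 0.1 V
vg = np.linspace(-0.5, 1.5, 201)
id_toy = 1e-6 * 10 ** ((vg - 0.1) / 0.1)
print(f"V_T = {extract_vt(vg, id_toy):.2f} V")   # 0.10 V for this toy curve
```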
### MirrorBit write
The ferroelectric layer is divided into two parts (Fig. 3, device schematic) for ease of understanding and of comparing the states. The programming of the UWL and UWH states is explained in the previous section. To program the Drain Write (DW) or Bit-10 and Source Write (SW) or Bit-01 states, we start from an initial UWL state. We apply a 4 V, 100 \(\mu\)s pulse at the drain to obtain the DW state. The direction of the electric field near the drain is then from drain to gate, which is opposite to
Figure 3: **MirrorBit write**: a), b), c), and d) Band diagram and lateral polarization variation in the channel and the ferroelectric HfO\({}_{2}\) layer for the UWH, UWL, SW, and DW polarization configurations, respectively. For the uniform-write cases the energy band is uniformly shifted up or down; for the DW/SW cases the band is shifted up near the terminal where the positive write voltage is applied, taking a triangular (Schottky-barrier-like) shape.
Figure 2: **Experiment and Model calibration**: a) GF-FeFET device schematic, b) 2D model schematic of the FeFET in TCAD, and c) Experimental and simulated I\({}_{D}\)-V\({}_{G}\) used for model validation, for which the threshold voltage positions (V\({}_{T}\)) are matched for the uniform-write low (UWL) and uniform-write high (UWH) states.
the polarization, leading to a polarization switching near the drain. Similarly, we write the SW state by initializing with UWL state followed by a 4 V pulse at the source.
In TCAD, to understand the origin of the MirrorBit DW and SW states, we apply the same pulse scheme. Starting from the initial UWL state, DW (Bit-10) is obtained by applying \(V_{D}=4\) V while keeping the other terminals grounded (in the measurement, \(V_{G}=0\) V, \(V_{D}=4\) V, \(V_{S}=0\) V, and \(V_{sub}=0\) V), which produces a gradient in the electric field in the ferroelectric layer (Fig. 3 (b) and (c)). This causes the polarization to switch in a gradient fashion, so that the conduction-band energy takes a triangular, Schottky-like barrier shape (Fig. 3 (c), blue curve). The peak of this barrier is towards the terminal where the write pulse was applied. Similarly, we obtain the SW (Bit-01) state (Fig. 3 (d)).
Although not explicitly considered in this work, with the lateral extension of the electric field and the formation of a polarization gradient within the ferroelectric layer, domain-to-domain interactions may be expected. These could be even more pronounced for other device geometries (for example, different channel lengths or widths), where percolation effects of specific domain alignments might play a role. These effects still require deeper theoretical and experimental understanding.
### MirrorBit read
Reading the MirrorBit states involves comparing the channel currents in the two directions. When a read voltage of 1.5 V is applied at the drain and V\({}_{T}\) is obtained from I\({}_{D}\)-V\({}_{G}\), we call it Source Read (SR). Similarly, it is a Drain Read (DR) if the read voltage is applied at the source. For the conventional bits, UWL (Bit-11) and UWH (Bit-00), the SR and DR currents are both high (low V\({}_{T}\)) and both low (high V\({}_{T}\)), respectively. For the DW (Bit-10) state, the output characteristics become asymmetric (Schottky-like current) between SR and DR.
When V\({}_{G}\) is kept fixed and the source voltage (DR) is swept (Fig. 4 (a)), we observe an off-state current similar to a Schottky-barrier off-current. However, when the drain voltage (SR) is swept, we see a higher current that increases with the read voltage. The reason for this asymmetry lies in the shape of the conduction-band energy: when the read voltage is applied at the source, the barrier for electrons (at the drain side) hardly changes (Fig. 4 (b)), whereas the barrier decreases when the drain voltage is applied, allowing a higher channel current.
In MOSFET terminology, UWL shows a low V\({}_{T}\) for both SR and DR (Fig. 5 (a) and (c)), and UWH shows a high V\({}_{T}\). For DW, however, the I\({}_{D}\)-V\({}_{G}\) curve is shifted for DR but not for SR, i.e., low V\({}_{T}\) for SR and high V\({}_{T}\) for DR, resulting in an asymmetric V\({}_{T}\) shift, and vice versa for SW. Fig. 5 summarizes the experimental and simulated transfer curves and the corresponding V\({}_{T}\) distributions. We
Figure 4: **MirrorBit read:** a) V\({}_{S}\) and V\({}_{D}\) sweep for DW (Bit-10) showing the characteristics similar to a Schottky junction, b) and c) Band-diagram for DR and SR, showing the change in the peak heights, giving rise to different currents in two directions
Figure 5: **MirrorBit Memory:** a) and c) I\({}_{D}\)-V\({}_{G}\) for the UWL, UWH, DW, and SW cases, showing that only one of the V\({}_{T}\) values shifts (asymmetrically) for DW/SW, while both V\({}_{T}\) values shift symmetrically for the UW cases; b) and d) corresponding V\({}_{T}\) distributions of all the states.
Figure 6: **MirrorBit Device-to-device variation:** MirrorBit transfer curves and V\({}_{T}\) distributions measured for 5 devices.
have also measured 5 devices to capture the device-to-device (D2D) variation of the MirrorBit operation (Fig. 6).
### Retention: DW and SW states
We have measured the room-temperature retention of the DW and SW states. After the DW state is programmed, the SR and DR V\({}_{T}\) values are measured. Both V\({}_{T}\) values increase initially and then saturate, maintaining the memory window for \(>10^{4}\) s (Fig. 7).
### MirrorBit array
Although this work focuses on the device-level demonstration of the FeFET-based MirrorBit, we also briefly propose an array implementation and biasing scheme, which require further study. Historically, MirrorBit flash technology has preferred the NOR array type for demonstrating functionality, scaling, and high-volume production [20]. Nevertheless, FeFETs appear to show the best functionality in an AND-array type [21].
In the AND-array implementation of the MirrorBit, there are three voltage-controlled lines: the wordline (WL), the bitline (BL), and the sourceline (SL). The so-called V-V/2 biasing scheme could be an appropriate programming scheme in this structure, as exemplified in Fig. 8 and sketched below. Suppose DW is to be written in the highlighted cell. The programming voltage \(V\) (here, V = 4 V @ 100 \(\mu\)s or 3.5 V @ 400 \(\mu\)s) is applied at the BL, 0 V is applied at the WL and SL, and _V/2_ is applied to the remaining lines. In this way, the voltage drop across the ferroelectric layer of the unselected cells is insufficient to switch the polarization within the programming time.
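The following short sketch, which is illustrative only and not a device model, tabulates the nominal voltage seen by selected, half-selected, and unselected cells under the V-V/2 scheme described above.

```python
# Schematic check of the V-V/2 write scheme: the selected cell sees the full
# programming voltage V between its bitline and wordline, while every other
# cell sees at most V/2, which is assumed to stay below the switching threshold
# within the programming time.
V = 4.0  # programming voltage in volts, e.g. 4 V @ 100 us

def cell_drop(on_selected_bl, on_selected_wl, v=V):
    v_bl = v if on_selected_bl else v / 2      # selected BL at V, others at V/2
    v_wl = 0.0 if on_selected_wl else v / 2    # selected WL/SL at 0, others at V/2
    return v_bl - v_wl

for bl, wl in [(True, True), (True, False), (False, True), (False, False)]:
    label = "selected" if bl and wl else ("half-selected" if bl or wl else "unselected")
    print(f"{label:14s}: {cell_drop(bl, wl):.1f} V across the cell")
# selected: 4.0 V, half-selected: 2.0 V, unselected: 0.0 V
```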
## 4 Conclusions
We demonstrated the operation of a FeFET-based MirrorBit, both experimentally and through simulation. The device-to-device variation shows a tight and distinguishable distribution of V\({}_{T}\) values. The simulation reveals the formation of a Schottky-like triangular barrier, which explains the asymmetry of the channel current in the two directions. The asymmetric DW and SW states retain their values for \(>10^{4}\) s. Lastly, an array implementation along with a memory biasing scheme is proposed. Hence, without any process alteration and purely through biasing, the bit density has been doubled. This opens up avenues for using fundamental device characteristics beyond their defined functionality.
|
2301.06682
|
The structure of heavily doped impurity band in crystalline host
|
We study the properties of the impurity band in heavily-doped non-magnetic
semiconductors using the Jacobi-Davidson algorithm and the supervised deep
learning method. The disorder averaged inverse participation ratio (IPR) and
thouless number calculation show us the rich structure inside the impurity
band. A Convolutional Neural Network(CNN) model, which is trained to
distinguish the extended/localized phase of the Anderson model with high
accuracy, shows us the results in good agreement with the conventional
approach. Together, we find that there are three mobility edges in the impurity
band for a specific on-site impurity potential, which means the presence of the
extended states while filling the impurity band.
|
Hongwei Chen, Zi-Xiang Hu
|
2023-01-17T03:50:37Z
|
http://arxiv.org/abs/2301.06682v1
|
# The structure of heavily doped impurity band in crystalline host
###### Abstract
We study the properties of the impurity band in heavily-doped non-magnetic semiconductors using the Jacobi-Davidson algorithm and the supervised deep learning method. The disorder averaged inverse participation ratio (IPR) and thouless number calculation show us the rich structure inside the impurity band. A Convolutional Neural Network(CNN) model, which is trained to distinguish the extended/localized phase of the Anderson model with high accuracy, shows us the results in good agreement with the conventional approach. Together, we find that there are three mobility edges in the impurity band for a specific on-site impurity potential, which means the presence of the extended states while filling the impurity band.
pacs: 71.23.-k, 71.55.-i, 02.60.Cb
## I Introduction
The effect of disorder has been extensively studied since Anderson's seminal paper[1]. Diluted magnetic semiconductors (DMS) doped with a small concentration of charged impurities constitute an interesting magnetic system that has a number of novel features for study by numerical simulation[2]. Much of the research has focused on II-VI (such as CdTe or ZnSe) and III-V (such as GaAs) compound semiconductors doped with a low concentration (\(x\sim 1-8\%\)) of Manganese (Mn) impurities. Of particular interest in this field is Ga\({}_{1-x}\)Mn\({}_{x}\)As, which has been shown to exhibit ferromagnetic behavior above 100K[3]. In these samples, the Manganese is substitutional for the Gallium and acts as an acceptor (donating one hole to the crystal), so that the material is p-type. The holes bind to the impurities with an energy of around 130 meV at \(x\sim 10\%\)[4]. Since \(x\ll 1\), the overlap between different impurity states can be ignored, and thus the interaction between the charge carriers can be neglected. The system can be simply described by a non-interacting tight-binding model. When the system contains only one impurity, and the binding energy is large enough, an impurity state appears below the conduction band (we assume the impurity potential is attractive). It is locally distributed in space near the impurity potential within a localization length \(\zeta\). As the concentration \(x\) increases, the overlap between different impurity states broadens the single impurity energy into an impurity band in the density of states (DOS), which eventually merges with the conduction band. Simultaneously, the states in the impurity band are expected to become more and more extended and ultimately regain their bandlike character [5]. However, the details inside the impurity band are rarely studied.
One reason for lacking such a study is the computation difficulty even in the non-interacting case. Generally, the percentage of the state in the impurity band in the total number of states is about 10% at the concentration we are interested in. Taking a 3-dimensional Anderson model with lattice size \(30\times 30\times 30\) as an example, the number of states which we need to know in the impurity band is about 3000. The exact diagonalization[6] for such a system is very difficult due to the large dimension. On the other hand, we have to do a large number of sample averages. The sparse matrix diagonalization, such as the Lanczos method[7], can be adapted to obtain a few lowest-lying states or a few states nearby special energy (the simplest way is diagonalizing \((H-\epsilon I)^{2}\) by using the original Lanczos diagonalization method).
Machine learning methods have recently emerged as a valuable tool to study quantum many-body physics problems[8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22]. Their ability to process high-dimensional data and recognize complex patterns has been utilized to determine phase diagrams and phase transitions[23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34]. In particular, the Convolutional Neural Network (CNN)[35] model, originally designed for image recognition, has been widely used to study different kinds of phase transition problems, including the Bose-Hubbard model[36], the spin-1/2 Heisenberg model[37], and the quantum transverse-field Ising model[32]. The power of using machine learning to recognize quantum states lies in its ability to finish tasks without knowledge of the physics background or the Hamiltonian of the system. Even if the neural network is trained in a small energy region of the system, it can be used to obtain the whole phase diagram[24; 26]. It can also discriminate quantum states with high accuracy even if it is trained on a totally different Hamiltonian. This special feature of machine learning inspires us to try to identify the delocalized states in the "impurity band".
In this paper, we develop a method to obtain the correct density of states (DOS) and other localization properties, such as the inverse participation ratio (IPR)[38] and the Thouless number[39], by using Jacobi-Davidson sparse matrix diagonalization[40] with an importance-sampling statistics method. Meanwhile, we train a 3-dimensional CNN model using data generated from the Anderson model, and the trained model is then used to identify the existence of extended states in the impurity band. This manuscript is organized as follows: In sec.II we describe the tight-binding model on the cubic lattice and the numerical methods; Sec. III demonstrates the effect of heavy doping through the IPR and Thouless number; Sec.IV describes the implementation of the deep learning approach and the results from the trained neural network model; finally, we close with a conclusion.
## II Model and Methods
We consider a tight-binding model on a D-dimensional hypercubic lattice with the nearest neighbor hopping t, and on-site energies \(\epsilon_{i}\):
\[H=-t\sum_{\langle i,j\rangle}(\hat{c}_{i}^{\dagger}\hat{c}_{j}+h.c.)+\sum_{i} \epsilon_{i}\hat{c}_{i}^{\dagger}\hat{c}_{i} \tag{1}\]
The hopping term describes the itinerant electrons, and the on-site energy has a bimodal distribution: \(\epsilon_{i}=-W\) with probability \(x\), and \(\epsilon_{i}=0\) with probability \((1-x)\). This models a host lattice with a single relevant band and a fraction \(x\) of substitutional impurities. For one-dimensional (d = 1) free electrons, the energy-momentum dispersion relation is \(E(k)=2t\cos(k)\), and it is easy to get the DOS from the formula
\[\rho(E)=(\frac{1}{2\pi})^{d}\int\frac{dS}{\nabla_{k}E}. \tag{2}\]
The result for 1D is:
\[\rho_{1d}(E)=\frac{1}{\pi\sqrt{4t^{2}-E^{2}}}. \tag{3}\]
There is no analytic solution for higher-dimensional systems; however, an approximation accurate to roughly 2% was given by Andres et al. [41]. Instead, the DOS can be calculated numerically by exact diagonalization, as shown in Fig. 1, where \(t\) has been set to unity. After introducing the impurities, all states become localized in 1D and 2D according to the scaling theory of localization [42]. In three dimensions, part of the states become localized and develop into an impurity band at the edge of the conduction band. To determine whether a state is localized or extended, i.e., to locate the mobility edge, we calculate the inverse participation ratio (IPR)[38]
\[\text{IPR}=\frac{\sum_{i}|\psi_{i}|^{4}}{\left(\sum_{i}|\psi_{i}|^{2}\right)^{2}} \tag{4}\]
for each state, where \(\psi_{i}\) is the weight of an eigen-wavefunction on the \(i\)'th site. Heuristically, consider two trivial states of an \(N\)-site system:
\[\Psi_{extended}:\quad\psi_{i}=\frac{1}{\sqrt{N}}\ \ \text{for all }i, \tag{5}\]
and
\[\Psi_{localized}(j):\quad\psi_{i}=\delta_{ij}, \tag{6}\]
where \(\Psi_{extended}\) is an extended state with equal weight on each site and \(\Psi_{localized}(j)\) is a localized state with weight only on the \(j\)'th site. It is easy to see that the IPR of \(\Psi_{extended}\) decreases as \(\frac{1}{N}\), while it is a constant for \(\Psi_{localized}(j)\). On the other hand, the Thouless number[39] is defined as:
\[g(E)=\frac{\langle|\Delta E|\rangle}{\langle\delta E\rangle}, \tag{7}\]
where \(\delta E\) is the energy difference when the boundary condition changes from a periodic boundary condition (PBC) to an anti-periodic boundary condition (APBC), and \(|\Delta E|\) is the average energy spacing around \(E\). Since only the extended states are sensitive to the change of boundary condition, \(g(E)\) grows linearly with the system size for an extended state and, conversely, decreases for a localized state. In this work, we determine the localization properties by systematically studying the IPR and the Thouless number for different system sizes; the crossover points of the Thouless number give us a hint of the mobility edge.
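As a quick illustration of Eq. (4), the IPR of the two trivial states above can be computed directly. The following minimal NumPy sketch uses an arbitrarily chosen system size.

```python
import numpy as np

def ipr(psi):
    """Inverse participation ratio of a wave function, Eq. (4)."""
    p2 = np.abs(psi) ** 2
    return np.sum(p2 ** 2) / np.sum(p2) ** 2

N = 30 ** 3                                   # number of lattice sites
psi_ext = np.full(N, 1 / np.sqrt(N))          # equal weight on every site
psi_loc = np.zeros(N)
psi_loc[0] = 1.0                              # weight on a single site
print(ipr(psi_ext), ipr(psi_loc))             # ~1/N for the extended state, 1 for the localized one
```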
For three dimensional cubic lattice of size \(L\), the Hamiltonian matrix has a dimension of \(L^{3}\). General full exact diagonalization methods, such as Lapack library [43], can only deal with small system sizes. The computation time of diagonalizing one matrix with size \(L^{3}\) grows dramatically as a function of the system size. As shown in Fig. 2,
Figure 1: The density of states for free electrons in the tight-binding model in one, two, and three dimensions. Here \(t\) has been set to be unit.
we consider a system of size \(19\times 20\times 21\) with doping concentration \(x=5\%\); after averaging over thousands of samples, we obtain the DOS for different doping energies. A peak gradually emerges near the band edge as the doping energy \(W\) increases. This peak becomes prominent around \(W\sim 4.5\), at which point a clear depletion develops at the junction between the impurity band and the conduction band.
Since the developed impurity band is the part we focus on, and the number of states in the impurity band is roughly the lowest \(10\%\) of all states, we do not have to fully diagonalize the Hamiltonian. Instead, we only need the DOS, IPR, and Thouless number for these lowest \(10\%\) of states, averaged over thousands of samples. For this purpose, we use sparse matrix diagonalization with the Jacobi-Davidson (JADA) method [40], which can efficiently find a few (10 to 20) states near specified reference points. For a given sample at fixed doping strength, we randomly distribute the reference points (30-50 points) in the impurity band; taking \(W=-4.5\) as an example, the reference points are picked randomly in the region \([-8:-4]\), and about 10-20 states are obtained by JADA around each reference point. The reference points could also be chosen by importance sampling based on the DOS of a small system obtained from full diagonalization. We collect all these energies for each reference point in one sample. After averaging over thousands of samples, we obtain the same DOS as that from full diagonalization of a small system. The JADA method can thus easily go beyond the limit of full exact diagonalization: at the same computational cost, we can nearly double the system size compared to the Lapack method. In this work, we calculate the properties for system sizes up to \(40^{3}\) sites using the JADA method.
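This workflow can be sketched as follows. The paper uses the Jacobi-Davidson (JADA) solver; the snippet below substitutes SciPy's shift-invert Lanczos (`eigsh` with a target energy `sigma`) purely for illustration. It builds the Hamiltonian of Eq. (1) with open boundaries and targets a handful of states near a reference energy inside the impurity band; lattice size, doping, and the sign of the on-site energy follow the \(W=-4.5\), \(x=5\%\) example quoted in the text.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def doped_hamiltonian(L=20, x=0.05, W=-4.5, t=1.0, seed=0):
    """Sparse tight-binding Hamiltonian of Eq. (1) on an L^3 cubic lattice (open boundaries)."""
    rng = np.random.default_rng(seed)
    n = L ** 3
    onsite = np.where(rng.random(n) < x, W, 0.0)   # bimodal on-site impurity energy
    idx = np.arange(n).reshape(L, L, L)
    rows, cols = [], []
    for axis in range(3):                          # nearest-neighbour bonds along x, y, z
        a = idx.take(range(L - 1), axis=axis)
        b = idx.take(range(1, L), axis=axis)
        rows.append(a.ravel())
        cols.append(b.ravel())
    rows, cols = np.concatenate(rows), np.concatenate(cols)
    hop = sp.coo_matrix((-t * np.ones(rows.size), (rows, cols)), shape=(n, n))
    return (hop + hop.T + sp.diags(onsite)).tocsr()

H = doped_hamiltonian(L=20)
# ~10 eigenpairs closest to a reference energy chosen inside the impurity band
vals, vecs = eigsh(H, k=10, sigma=-6.0, which='LM')
print(np.sort(vals))
```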
## III The effect of heavy doping
As analyzed in the previous section, with a typical doping concentration \(x=5\%\) we find that a clear impurity band develops in the DOS at about \(W=-4.5\). We plot the DOS and IPR together for different system sizes in Fig. 3. The DOS curves for different system sizes collapse onto a single curve, identical to the ED result in Fig. 2, which tells us that we have captured the essential features of the impurity band. As the system size increases, the IPR at the edge of the band does not change, meaning that the states at the edge of the whole band are localized. The IPR in the bulk decreases as the system is enlarged; in particular, at the center of the impurity band (\(E\sim-6.7\)) the IPR drops to zero, just as in the system bulk (\(E\sim-4.0\)). However, there is a small peak near \(E\sim-5.5\), at the right edge of the impurity band, where the IPR tends to saturate to a fixed value as the system size increases. The nonzero saturation of the IPR at this energy points to another possible mobility edge near the junction between the conduction band and the impurity band.
In order to justify our conjecture, we systematically study the value of IPR for several system sizes. As shown in Fig. 4(a), we choose four points from the knowledge of the DOS and IPR. (1) \(E=-4.2\) is in the bulk of the conduction band, at which the state is extended. (2) \(E=-5.4\) is at the right edge of the impurity band. The state here is localized according to our conjecture. (3) \(E=-6.2\) is in the bulk of the impurity band, which is extended according to its zero IPR value in large \(L\) limit. (4) \(E=-6.8\) is on the left edge of the impurity band and thus at the edge of the whole energy band. The state at the band edge is supposed to be localized. In Fig. 4(b) we again compare the DOS from JADA with that from
Figure 3: The DOS and IPR for \(5\%\) doped system with \(W=-4.5\). The results are obtained from exact diagonalization. The number of configurations ranges from \(1000\) for system \(14\times 15\times 16\) to \(50\) for \(29\times 30\times 31\). The DOS is almost system-size independent. The IPR drops in the center of the impurity band.
Figure 2: The evolution of the DOS at the band edge with different doping strengths. The system has size \(19\times 20\times 21\) and can be fully diagonalized.
Lapack which shows a convergence in large system size. According to the way of choosing these four points, (1) and (3) should have similar behavior as increasing the system size, and vice versa for (2) and (4). Fig. 4(c) shows the IPR for these four energies in different system sizes. We plot the data in log scale and fit it by function
\[\log(\text{IPR})=A+B\log(L)+C\log(L)^{2}. \tag{8}\]
The sign of the curvature \(C\) tells us whether the state is localized or not. For (1) and (3), \(C<0\) means they are extended states, and oppositely \(C>0\) for localized states at points (2) and (4).
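The quadratic fit of Eq. (8) is straightforward to reproduce. In the sketch below the IPR series is a placeholder (an IPR saturating to a finite value, mimicking a localized state), not measured data.

```python
import numpy as np

def ipr_curvature(sizes, ipr_values):
    """Fit log(IPR) = A + B*log(L) + C*log(L)^2 and return the curvature C of Eq. (8)."""
    C, B, A = np.polyfit(np.log(sizes), np.log(ipr_values), deg=2)  # highest power first
    return C

L = np.array([14, 19, 24, 29], dtype=float)
# Placeholder series: an IPR that saturates to a finite value as L grows bends upward
# in log-log coordinates, giving C > 0, the signature of a localized state.
ipr_saturating = 0.02 + 0.5 / L
print(f"C = {ipr_curvature(L, ipr_saturating):+.3f}")
```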
As another criterion, we calculate the Thouless number \(g(E)\) for different system sizes. The results are shown in Fig. 5, where the DOS is plotted on the same horizontal axis. The impurity band is divided into several regions by the crossovers of \(g(E)\) between different sizes. We label these regions "L" (localized) and "E" (extended) to indicate the different behaviors of \(g(E)\): as the system size increases, \(g(E)\) clearly increases in the "E" regions and decreases in the "L" regions. The energies marked by vertical lines are the locations of the mobility edges, i.e., the boundaries between localized and extended states.
## IV Deep learning approach
A convolutional neural network (CNN), originally designed for 2D image recognition, has been widely adopted for studying phase transitions and achieves high recognition accuracy. A standard image recognition model can be used for a 3D electron system by integrating the 3D electron density along one direction, but this loses the information along the integrated direction. We therefore design a 3D CNN model for our 3D lattice model. To distinguish localized from delocalized states, the CNN returns two real numbers representing the probability of the extended state, \(P\), and of the localized state, \((1-P)\), for the given wave function. If the probability of the extended state is larger than 0.5, we classify the eigenstate as delocalized, and otherwise as localized. Due to the graphics memory limitation (8 GB) of our graphics card (NVIDIA GTX 1080), we consider a 3D \(20\times 20\times 20\) lattice. The hidden layers of the CNN consist of convolutional layers, max-pooling layers, and fully connected layers. The loss function is the cross entropy \(H(p,q)=-\sum_{x}p(x)\log q(x)\). During training, we use the RMSProp optimizer in TensorFlow [44] as the stochastic gradient descent solver to minimize the loss function. The details of the neural network model are given in Appendix A.
The training data for different phases are sampled from the 3-dimensional Anderson model using different disorder parameters. It's well known that the critical disorder at \(E=0\) for the 3D Anderson model is 16.54 \(\pm 0.01\)[45; 46; 47]. When the disorder strength \(W\) is larger than the critical value, the wave functions are exponentially localized and the system behaves as an insulator. Otherwise, the wave functions are delocalized and the system behaves as a metal. This phenomenon is known as Metal
Figure 4: The IPR/DOS as a function of system size for fixed energies. In fig.(c), we fit the data by using a function \(\log(IPR)=A+B\log(L)+C\log(L)^{2}\). The curvature \(C\) is labeled in the figure.
Figure 5: The Thouless number \(g(E)\) as a function of energy for finite systems using exact diagonalization.
Insulator Transition(MIT)[1]. We get 4000 eigenstates from \(W\in[14.0,16.0)\) as the delocalized phase and 4000 eigenstates from \(W\in[17.0,19.0)\) as the localized phase by steps of 0.1. For each W, we prepare 40 different realizations of randomness and for each realization, we take five eigenstates around \(E=0\). For the validation data set, we get another 600 eigenstates from \(W\in[10.0,16.0)\) and 600 eigenstates from \(W\in[17.0,23.0)\) in steps of 0.1. During each step of the training, we randomly select 256 eigenstates from the training data set as the input and calculate the gradient of the loss function with respect to the parameters in the CNN model and update them. After every 50 steps, we test the prediction accuracy on the validation data set and save the model with the highest prediction accuracy.
To show the prediction accuracy for different disorder parameters \(W\), we generate another 16000 eigenstates sampled from the Anderson model using \(W\in[0.1,16.0]\) and \(W\in[17.0,33.0)\). The prediction accuracy for different disorder strengths \(W\) is shown in Fig.6(a), and the overall accuracy is 99.0%. The lowest prediction accuracy around the critical disorder \(0.8W_{c}<W<1.2W_{c}\) is about 83%. We also test our trained model by producing the phase transition diagram of the 3D Anderson model. The testing data are sampled from \(W\in[8.0,25.0]\) by steps of 0.1. In each realization of the same disorder parameter \(W\), we pick 5 eigenstates around the band center(\(E=0\)) as input data and use the averaged delocalized probability of the five eigenstates as the delocalized probability of this realization. We prepare 5 random realizations for each \(W\) and average the delocalized probability. The phase diagram calculated using our trained CNN model is shown in Fig. 6(b). From Fig. 6(b), we see that the trained CNN model successfully captures the Metal-Insulator Transition(MIT).
Owing to its excellent classification accuracy, the trained neural network model is ready to find the extended state in the impurity band. We generate 1000 random realizations for the Hamiltonian in Eq.1 with doping probability \(x=5\%\) and disorder parameter \(W=-4.5\), and obtain all eigenstates using the exact diagonalization method in Lapack. These quantum states are used as the input data for our trained CNN model to calculate the delocalized probability. We average the probability over 1000 realizations and the result is shown in Fig.7. We can see that the CNN model confirms that delocalized states exist in the impurity band, which is in good agreement with the results obtained by IPR or Thouless number.
## V Conclusions
In this work, we numerically investigate the properties of the states in the "impurity band" of heavily-doped non-magnetic semiconductors. Using general full exact diagonalization and sparse matrix diagonalization with the Jacobi-Davidson (JADA) method, we find that with a typical doping probability \(x=5\%\), the impurity band in the DOS develops at about \(W=-4.5\). We calculate the IPR, Thouless number, and DOS together for different system sizes and study the relationship between them. The fit of the IPR versus system size at four representative energies suggests the existence of extended states in the impurity band. The Thouless number calculation supports the same conclusion and gives the locations of the mobility edges.
Besides, we also utilize the supervised deep learning method, which is the state-of-the-art method in pattern recognition, to distinguish the extended and localized states in the impurity band. We train a 3D CNN model using the data generated from the Anderson model and then apply the trained neural network model to classify the states in the "impurity band". Our trained neural network model achieves high accuracy (99.0%) in classifying different states in the Anderson model. The prediction of our trained model on "impurity band" also supports the finding from the relationship between IPR, Thouless number and system size though the predicted locations of mobility edges have small discrepancies. Our calculation gives direct evidence that there are three mo
Figure 6: The performance of the trained neural network on Anderson model with different disorder parameters \(W\). \(W_{c}=16.54\) is the critical disorder for \(E=0\). (a) The classification accuracy of the trained neural network model. (b) The probability that the wave function is considered as an extended state by the trained neural network model.
Figure 7: The probability that the corresponding wave function for different eigenenergies is considered as an extended state by the trained neural network model. The input wave functions are generated from Hamiltonian in Eq.1 using exact diagonalization. Averages over 1000 realizations are taken.
bility edges in the impurity band for a specific on-site impurity potential in heavily-doped non-magnetic semiconductors.
###### Acknowledgements.
Z-X. Hu is supported by the National Natural Science Foundation of China Grant No. 11974064 and 12147102, the Chongqing Research Program of Basic Research, and Frontier Technology Grant No. cstc2021jcyjmscmX0081, Chongqing Talents: Exceptional Young Talents Project No. cstc2021ycjh-bgzxm0147, and the Fundamental Research Funds for the Central Universities Grant No. 2020CDJQY-Z003. HC acknowledges the U.S. Department of Energy, Office of Science, Basic Energy Sciences under Award No. DE-SC0022216.
## Appendix A Neural network model architecture and hyperparameters
The 3D CNN model used in this paper has a similar architecture to the "AlexNet"[35] and "VGGNet"[48], but with a smaller number of convolutional, max pooling, and fully connected layers. This is because we are dealing with a 3D lattice and the edges in the lattice have a much smaller length compared to the images. The architecture of our model is shown in Fig. 8, and the input and output dimension of each layer is also listed in the figure.
The convolution kernels applied in the first and second convolutional layers have sizes \(5\times 5\times 5\) and \(3\times 3\times 3\), respectively. The ReLU (rectified linear unit) activation [49] is applied after each convolutional and fully connected layer except the last one, which is activated by the softmax function [50]. Bias parameters are included for the artificial neurons. Dropout [51] with probability \(p=0.5\) is applied after the first fully connected layer to avoid over-fitting and improve evaluation accuracy.
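The model itself is implemented by the authors in TensorFlow, and the exact channel and neuron counts are not stated here. The following PyTorch sketch is only an illustrative stand-in that mirrors the layer types and kernel sizes listed above (two 3D convolutions with \(5\times 5\times 5\) and \(3\times 3\times 3\) kernels, ReLU, max pooling, two fully connected layers with dropout \(p=0.5\), and a two-way output); the channel widths are placeholders.

```python
import torch
import torch.nn as nn

class Anderson3DCNN(nn.Module):
    """3D CNN for localized/extended classification on a 20x20x20 lattice.
    Layer types and kernel sizes follow Appendix A; channel widths are placeholders."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool3d(2),                      # 20^3 -> 10^3
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),                      # 10^3 -> 5^3
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 5 * 5 * 5, 128), nn.ReLU(),
            nn.Dropout(p=0.5),
            nn.Linear(128, n_classes),            # softmax is applied implicitly by the loss
        )

    def forward(self, psi):                       # psi: (batch, 1, 20, 20, 20), e.g. |psi_i|^2
        return self.classifier(self.features(psi))

model = Anderson3DCNN()
logits = model(torch.randn(4, 1, 20, 20, 20))
print(logits.shape)                               # (4, 2): extended vs localized scores
```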
|
2307.01263
|
Yang-Baxter deformed wedge holography
|
In this paper, we construct the wedge holography in the presence of
homogeneous Yang-Baxter deformation. We observe that the DGP term is the reason
for the existence of non-zero tension of the Karch-Randall branes in
Yang-Baxter deformed wedge holography. The homogeneous Yang-Baxter deformation
introduce non-trivial island surfaces inside the black hole horizon whose
entanglement entropy is lower than the twice of thermal entropy of the black
hole. Therefore, we obtain the Page curve even without the DGP term on the
Karch-Randall branes due to the homogeneous Yang-Baxter deformation in the
context of wedge holography. Finally, we compute the the holographic complexity
in homogeneous Yang-Baxter deformed $AdS_2$ background.
|
Gopal Yadav, Hemant Rathi
|
2023-07-03T18:00:04Z
|
http://arxiv.org/abs/2307.01263v3
|
# Yang-Baxter Deformed Wedge Holography
###### Abstract
In this paper, we construct wedge holography in the presence of Yang-Baxter deformation. By doing so, we propose a co-dimension two holography for the deformed SYK model. We observe that the DGP term plays a crucial role in obtaining a non-zero tension of the Karch-Randall branes in Yang-Baxter deformed wedge holography. _The Yang-Baxter deformation introduces non-trivial island surfaces inside the black hole horizon whose entanglement entropy is lower than twice the thermal entropy of the black hole. Therefore, we obtain the Page curve even without the DGP term on the Karch-Randall branes due to the Yang-Baxter deformation in the context of wedge holography._ Finally, we compute the correction to the holographic complexity in Yang-Baxter deformed JT gravity.
###### Contents
* 1 Introduction
* 2 Wedge Holography in Yang-Baxter Deformed AdS Background
* 2.1 Vacuum Solution
* 2.1.1 Vacuum Solution without DGP Term
* 2.1.2 Vacuum Solution in the presence of DGP Term
* 2.2 Black Hole Solution
* 2.3 Wedge Holography in the Presence of Yang-Baxter deformation
* 3 Information Paradox in Yang-Baxter Deformed Wedge Holography
* 3.1 Page Curve Due to the Presence of Yang-Baxter Deformation
* 3.1.1 Hartman-Maldacena Surface
* 3.1.2 Non-Trivial Island Surface in the Presence of Yang-Baxter Deformation
* 3.1.3 Page Curve
* 3.2 DGP Term and Swampland Criteria in the Yang-Baxter Deformed Wedge Holography
* 4 Holographic Complexity in \(AdS_{2}^{\eta}\) Background
* 5 Conclusion and Discussion
## 1 Introduction
Duality is a powerful idea in physics. In string theory, the most famous example is the gauge-gravity duality, which allows one to explore strongly coupled gauge theories through weakly coupled gravitational theories. J. Maldacena proposed the duality between type IIB string theory defined on \(AdS_{5}\times S^{5}\) and \(\mathcal{N}=4\) super Yang-Mills theory [1]. The gauge-gravity duality has found applications in many branches of physics, e.g., condensed matter physics, cosmology, quantum chromodynamics (QCD), and the black hole information paradox. The current paper is
focused on the application of an extended version of this duality (wedge holography) to the black hole information paradox.
A long time ago, Hawking found that black holes do not follow the unitary evolution of quantum mechanics [2, 3], which gives rise to the information paradox. To recover the unitary evolution of black holes, the entanglement entropy of the Hawking radiation must follow the Page curve [4]. The information paradox has been studied extensively using holography, and several proposals have been made (e.g., the island proposal, double holography, and wedge holography). The island proposal is based on including the contribution to the entanglement entropy from the interior of the black hole at late times [5]. At early times, islands do not contribute to the entanglement entropy of the Hawking radiation, and one obtains a divergent entanglement entropy. These two phases together (no-island and island) reproduce the Page curve 1. The doubly holographic setup is constructed from two copies of the usual AdS/CFT duality. In doubly holographic setups, the black hole lives on the end-of-the-world brane, and the external CFT bath acts as a sink that collects the Hawking radiation; see [15] and references therein. Another proposal is based on wedge holography, where the bath is gravitating [16, 17, 18]. In doubly holographic setups and wedge holography, one obtains the Page curve from the entanglement entropies of two extremal surfaces: the Hartman-Maldacena [19] and island surfaces. Wedge holography states the duality between classical gravity in the \((d+1)\)-dimensional bulk and a \((d-1)\)-dimensional CFT at the defect, i.e., wedge holography can be called a co-dimension two holography.
Footnote 1: See [6, 7, 8, 9, 10, 11, 12, 13, 14] and references therein where island proposal has been used.
In doubly holographic setups and wedge holography, some authors found that gravity is massive on the end-of-the-world brane [20, 21, 22, 23], while others showed that one can obtain the Page curve with massless gravity in these models [24, 25, 15]. The authors of [24, 26, 27, 28] showed that the Page curve can be obtained in wedge holography by including the Dvali-Gabadadze-Porrati (DGP) term [29] on the Karch-Randall branes. However, theories that include a DGP term on the Karch-Randall branes belong to the swampland [30] and are not physical [31, 32]. The results of [15] were obtained from a top-down approach for a non-conformal bath (a thermal QCD bath) whose gravity dual is \(\mathcal{M}\)-theory inclusive of \(\mathcal{O}(R^{4})\) corrections [33] and do not include a DGP term on the end-of-the-world brane. Recently, one of the co-authors (GY) showed how to describe the Multiverse within wedge holography [34] and discussed the application of [34] to the information paradox of black holes with multiple horizons and to the grandfather paradox.
The purpose of the present paper is to explore the effects of the Yang-Baxter deformation in the context of information paradox and holographic complexity. The importance of the
YB deformations stems from the fact that they preserve the integrability of the sigma model [35, 36, 37, 38, 39]2. Recently, the authors of [44, 45, 46] explored the effects of these novel YB deformations in the 2D dilaton-gravity system with a quadratic potential known as the Almheiri-Polchinski (AP) model [47]. Interestingly, they found that the YB deformed \(AdS_{2}\) background [44] can be a consistent solution of the AP model if one deforms the quadratic potential into a hyperbolic function. The YB deformed \(AdS_{2}\) metric plays a crucial role in the construction of YB deformed wedge holography. More concretely, in this paper
Footnote 2: See also [40, 41, 42, 43] for the study on \(\eta\) deformation.
* We propose the co-dimension two holography for the deformed SYK model where the gravity dual is Yang-Baxter deformed \(AdS_{3}\).
* We explore the effect of the Yang-Baxter deformation on the Page curve when the gravity dual is three-dimensional.
* Further, we investigate the effect of the Yang-Baxter deformation on holographic complexity.
The paper is organized as follows. Section 2 is divided into three subsections, 2.1, 2.2, and 2.3. Subsections 2.1 and 2.2 discuss the vacuum and black hole solutions for the three-dimensional bulk, whereas subsection 2.3 contains the explicit construction of Yang-Baxter deformed wedge holography. Section 3 comprises two subsections, 3.1 and 3.2. In 3.1, we show that it is possible to obtain the Page curve in the presence of Yang-Baxter deformation even without the DGP term. In subsection 3.2, we discuss the DGP term and the swampland criteria in Yang-Baxter deformed wedge holography. We discuss the effect of the Yang-Baxter deformation on holographic complexity in section 4 and summarise our results in section 5.
## 2 Wedge Holography in Yang-Baxter Deformed AdS Background
In this section, we construct Yang-Baxter (YB) deformed vacuum and black hole solutions in subsections 2.1 and 2.2 respectively. We further construct Yang-Baxter deformed wedge holography in subsection 2.3.
Working action for the wedge holography with \(AdS_{3}\) bulk is written as follows:
\[S=\frac{1}{16\pi G_{N}^{(3)}}\Bigg{[}\int d^{3}x\sqrt{-g^{(3)}}(R-2\Lambda)+2 \int_{\partial M}d^{2}x\sqrt{-h}K+2\int d^{2}x\sqrt{-h_{\gamma}}\left(\mathcal{ K}_{\gamma}-T_{\gamma}\right)\Bigg{]}, \tag{1}\]
where the first term of the above equation (1) is the Einstein-Hilbert term with a negative cosmological constant [48], the second term is the Gibbons-Hawking-York boundary term, and the third term is defined on the Karch-Randall branes with \(\gamma=1,2\). To discuss wedge holography in the YB-deformed background, we consider two Karch-Randall branes with induced metrics \(h_{\gamma}\) and tensions \(T_{\gamma}\). On varying the action with respect to the bulk metric \(g_{\mu\nu}\), we obtain the Einstein equation:
\[R_{\mu\nu}-\frac{R}{2}g_{\mu\nu}+\Lambda g_{\mu\nu}=0. \tag{2}\]
### Vacuum Solution
In this subsection, we discuss vacuum solution without the DGP term in 2.1.1 and with the DGP term in 2.1.2, respectively.
#### 2.1.1 Vacuum Solution without DGP Term
Vacuum solution of the bulk Einstein's equation (2) of the action (1) is given by [48]3
Footnote 3: This solution has been constructed from two-dimensional Yang-Baxter deformed JT gravity by uplifting two-dimensional solution to three dimensions, for more details, see [48].
\[ds_{\eta}^{2}=\mathcal{F}_{\eta}(z)\Bigg{(}\frac{-dt^{2}+dz^{2} }{z^{2}}\Bigg{)}+\mathcal{G}_{\theta\theta}^{\eta}d\theta^{2}, \tag{3}\]
where
\[\mathcal{F}_{\eta}(z)=\frac{1}{1-\frac{\eta^{2}\alpha^{2}}{z^{2}}},\] \[\mathcal{G}_{\theta\theta}^{\eta}=\Bigg{[}\frac{1}{2\eta}\log \!\Bigg{(}\frac{1+\frac{\eta\alpha}{z}}{1-\frac{\eta\alpha}{z}}\Bigg{)}+1 \Bigg{]}^{2}, \tag{4}\]
where \(\eta\) is the Yang-Baxter deformation parameter. Notice that the above solution satisfies the equations of motion provided the following constraint holds:
\[2\eta+\log\Bigg{[}\frac{z+\alpha\eta}{z-\alpha\eta}\Bigg{]}- \frac{2z\alpha\eta}{z^{2}-\alpha^{2}\eta^{2}}=0 \tag{5}\]
and we stay infinitesimally away from the boundary i.e. \(z_{B}=\eta\alpha+\epsilon\), where \(\epsilon\sim O(\alpha)\).
Next, we compute the Neumann boundary condition (NBC), which is obtained by varying (1) with respect to the induced metric \(h_{ij}\):
\[\mathcal{K}_{ij}^{\gamma}-(\mathcal{K}^{\gamma}-T^{\gamma})h_{ij }^{\gamma}=0, \tag{6}\]
where \(\mathcal{K}_{ij}^{\gamma}=\frac{1}{2}n^{\theta}\partial_{\theta}g_{ij}^{\gamma}|_{ \theta=\theta_{1,2}}\)4
Footnote 4: Here \(\theta\) is the location of the KR brane and \(n^{\theta}\) is the unit normal to the KR brane. The unit normal is defined through the following relation
\[g_{MM}n^{M}n^{N}=1. \tag{7}\]
Using the above relation (7), we obtain \[n^{\theta}=\pm\frac{1}{\sqrt{\mathcal{G}_{\theta\theta}^{\eta}}},\] (8)
where \(\pm\) denotes the increasing or decreasing direction of \(\theta\). Here \(\theta=\theta_{1,2}\) denote the locations of the two Karch-Randall (KR) branes, see Fig. 1. For a general metric, the NBC is satisfied provided \(T^{\gamma}=\mathcal{K}^{\gamma}-\frac{\mathcal{K}_{ij}^{\gamma}}{h_{ij}^{\gamma}}\). For the metric (3), \(\mathcal{K}_{ij}^{\gamma}=0\), and hence \(\mathcal{K}^{\gamma}=0\), therefore (6) implies
\[T^{\gamma}h_{ij}^{\gamma}=0. \tag{9}\]
Notice that \(h_{ij}^{\gamma}\neq 0\), and hence \(T_{\gamma}=0\), i.e., the KR branes are tensionless without the DGP term. Moreover, the situation remains unaltered even when we consider the black hole solution (16) instead of the vacuum solution (3).
#### 2.1.2 Vacuum Solution in the presence of DGP Term
In the presence of the DGP term on the KR-branes, gravitational action (1) is modified as [24]:
\[S=\frac{1}{16\pi G_{N}^{(3)}}\bigg{[}\int_{M}d^{3}x\sqrt{-g^{(3)}}\left(R[g]- 2\Lambda\right)+2\int_{\partial M}d^{2}x\sqrt{-h}K+2\int_{Q_{\gamma}}d^{2}x \sqrt{-h_{\gamma}}\left(\mathcal{K}_{\gamma}-T_{\gamma}+\lambda_{\gamma}R_{h_ {\gamma}}\right)\bigg{]}, \tag{10}\]
where all the terms in (10) are the same as defined in (1) except \(R_{h_{\gamma}}\), which are the intrinsic curvature scalars on the two Karch-Randall branes. In this case, the bulk metric satisfies the following Neumann boundary condition at \(\theta=\theta_{1,2}\):
\[\mathcal{K}_{\gamma,ij}-(\mathcal{K}_{\gamma}-T_{\gamma}+\lambda_{\gamma}R_{h _{\gamma}})h_{\gamma,ij}+2\lambda_{\gamma}R_{\gamma,ij}=0. \tag{11}\]
The bulk metric (3) satisfies the Neumann boundary condition (11) provided the tensions of the KR branes in the presence of the DGP term are given as follows:
\[T_{\gamma}=\mathcal{K}_{\gamma}-\frac{\mathcal{K}_{\gamma,ij}}{h_{\gamma,ij}} +\lambda_{\gamma}R_{h_{\gamma}}-\frac{2\lambda_{\gamma}R_{\gamma,ij}}{h_{ \gamma,ij}}. \tag{12}\]
For the bulk metric (3), equation (12) simplifies to the following form:
\[T_{\gamma}=\lambda_{\gamma}\left(R_{h_{\gamma}}-\frac{2R_{\gamma,ij}}{h_{ \gamma,ij}}\right). \tag{13}\]
For the given metric (3), the Ricci tensor and Ricci scalar on the KR-branes are obtained as:
\[R_{11} =\frac{-\alpha^{4}\eta^{4}+4\alpha^{2}\eta^{2}z^{2}+z^{4}}{\left(z^{ 3}-\alpha^{2}\eta^{2}z\right)^{2}},\] \[R_{22} =\frac{\alpha^{4}\eta^{4}-4\alpha^{2}\eta^{2}z^{2}-z^{4}}{\left(z^ {3}-\alpha^{2}\eta^{2}z\right)^{2}},\] \[R_{h_{1}} =R_{h_{2}}=-2-\frac{8\alpha^{2}\eta^{2}}{z^{2}}+\frac{2\alpha^{4} \eta^{4}}{z^{4}}. \tag{14}\]
Now we define \(\kappa=\eta\alpha\), and using (14), tensions of the branes (13) turn out to be of the following form:
\[T_{1} =2\lambda_{1}+\frac{8\kappa^{2}\lambda_{1}}{z^{2}}-\frac{2\kappa^ {4}\lambda_{1}}{z^{4}}+O\left(\kappa^{6}\right),\] \[T_{2} =2\lambda_{2}+\frac{8\kappa^{2}\lambda_{2}}{z^{2}}-\frac{2 \kappa^{4}\lambda_{2}}{z^{4}}+O\left(\kappa^{6}\right). \tag{15}\]
Notice that when \(\lambda_{1}=\lambda_{2}\), both KR branes have the same tension. From (15), we conclude that _the presence of the DGP term generates tensions on the Karch-Randall branes in the Yang-Baxter deformed wedge holography5_.
Footnote 5: This discussion also applies to the black hole solution as well.
### Black Hole Solution
In this subsection, we compute the consistent Yang-Baxter deformed black hole solution associated with (1). On solving (2), we obtain
\[ds_{\eta}^{2}=\mathcal{F}_{\eta}(z)\Bigg{(}\frac{-f(z)dt^{2}+ \frac{dz^{2}}{f(z)}}{z^{2}}\Bigg{)}+\mathcal{G}_{\theta\theta}^{\eta}d\theta^ {2}, \tag{16}\]
where
\[f(z)=1-\frac{z^{2}}{z_{h}^{2}}\;. \tag{17}\]
Here, \(z_{h}\) denotes the location of the black hole horizon. It should be noted that the above black hole solution (16) satisfies the equations of motion (2) up to the following constraint:
\[2\eta+\log\left[\frac{z+\alpha\eta}{z-\alpha\eta}\right]-\frac{2 z\alpha\eta f(z)}{z^{2}-\alpha^{2}\eta^{2}}+\alpha\eta f^{\prime}(z)=0, \tag{18}\]
and the black hole horizon must be located far away from the \(\alpha\eta\), i.e., \(z_{h}>>\alpha\eta\).
The black hole solution defined in (16) has the following thermal entropy:
\[S_{\rm thermal}=\frac{A_{z=z_{h}}}{4G_{N}^{(3)}}=\frac{\int_{\theta_{1}}^{\theta_ {2}}d\theta\sqrt{\mathcal{G}_{\theta\theta}^{\eta}}}{4G_{N}^{(3)}}=\frac{( \theta_{2}-\theta_{1})\sqrt{\mathcal{G}_{\theta\theta}^{\eta}}|_{z=z_{h}}}{4G _{N}^{(3)}}. \tag{19}\]
Alternatively, one can compute the thermal entropy of \(3D\) black holes using the Wald method [49, 50]. Wald entropy or the Noether charge entropy for stationary black holes is defined as:
\[S_{W}=-2\pi\oint_{\Sigma}\Bigg{(}\frac{\delta\mathcal{L}}{\delta R_{abcd}} \Bigg{)}\epsilon_{ab}\epsilon_{cd}\overline{\epsilon}, \tag{20}\]
where \(\epsilon_{ab}\) is the anti-symmetric tensor with the normalisation condition \(\epsilon_{ab}\epsilon^{ab}=-2\) and the integral is evaluated on the \(D-2\) dimensional bifurcation surface \(\Sigma\). For our case, the bifurcation surface is at \(z=z_{h}\) and \(t=t_{0}\), where \(z_{h}\) is the location of horizon and \(t_{0}\) is a constant. Furthermore, the derivative in (20) is evaluated using the on-shell condition.
Using (1) and (20), we obtain
\[S_{W}=-\frac{1}{8\pi G_{N}^{(3)}}\oint_{\Sigma}g^{ac}g^{bd}\epsilon_{ab} \epsilon_{cd}\sqrt{\mathcal{G}_{\theta\theta}^{\eta}}d\theta=\frac{(\theta_{2 }-\theta_{1})}{4G_{N}^{(3)}}\sqrt{\mathcal{G}_{\theta\theta}^{\eta}}\Big{|}_{ z=z_{h}}, \tag{21}\]
which is identical to thermal entropy obtained in (19).
Using (4), (18) and (19), the thermal entropy of the black hole phase can be estimated as
\[S_{\rm thermal}=-\frac{(\theta_{2}-\theta_{1})\alpha f^{\prime}(z_{h})}{8G_{N }^{(3)}}=\frac{(\theta_{2}-\theta_{1})\alpha}{4G_{N}^{(3)}}, \tag{22}\]
where we set \(z_{h}=1\) for simplicity.
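As a quick consistency check of (22), one can use the constraint (18) at the horizon, where \(f(z_{h})=0\), to eliminate the logarithm in \(\mathcal{G}^{\eta}_{\theta\theta}\). A minimal sympy sketch of this check (our own verification, not part of the original computation) is given below.

```python
import sympy as sp

z, zh, alpha, eta = sp.symbols('z z_h alpha eta', positive=True)
f = 1 - z**2/zh**2
log_term = sp.log((z + alpha*eta)/(z - alpha*eta))

# constraint (18): 2*eta + log(...) - 2*z*alpha*eta*f/(z^2 - alpha^2*eta^2) + alpha*eta*f'(z) = 0
constraint = 2*eta + log_term - 2*z*alpha*eta*f/(z**2 - alpha**2*eta**2) + alpha*eta*sp.diff(f, z)

# at z = z_h the blackening factor vanishes, so the constraint fixes the logarithm
L = sp.Symbol('L')
constraint_h = constraint.subs(z, zh).subs(sp.log((zh + alpha*eta)/(zh - alpha*eta)), L)
log_at_horizon = sp.solve(sp.Eq(constraint_h, 0), L)[0]

# sqrt(G^eta_{theta theta}) = log(...)/(2*eta) + 1, evaluated on the horizon
sqrtG_h = sp.simplify(log_at_horizon/(2*eta) + 1)
f_prime_h = sp.diff(f, z).subs(z, zh)
print(sqrtG_h)                                   # -> alpha/z_h
print(sp.simplify(sqrtG_h + alpha*f_prime_h/2))  # -> 0, i.e. sqrt(G)|_{z_h} = -alpha*f'(z_h)/2
```

For \(z_{h}=1\) this reduces to \(\sqrt{\mathcal{G}^{\eta}_{\theta\theta}}|_{z_{h}}=\alpha\), reproducing (22).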
### Wedge Holography in the Presence of Yang-Baxter deformation
Wedge holography in the presence of Yang-Baxter deformation can be constructed as follows. We have two Karch-Randall branes located at \(\theta=\theta_{1,2}\), and these branes are joined at the one-dimensional defect (\(P\)) as shown in Fig. 1. In this setup, the non-conformal boundary6 of the \(3D\) bulk is located at the holographic screen placed at \(z=z_{B}\)7. The YB deformed wedge holography has the following three descriptions.
Footnote 6: \(\eta\) deformation breaks the scale invariance therefore the boundary theory is not a CFT, see [51].
Footnote 7: See [52] where the holographic screen has been used in the literature.
* **Boundary description**: Two-dimensional non-conformal field theory (NCFT) at the holographic screen (\(z=z_{B}\)) with one-dimensional boundary. We call the NCFT located at the holographic screen as "holographic screen non-conformal field theory (HSNCFT)".
* **Intermediate description**: Gravitating systems with Yang-Baxter deformed \(AdS_{2}\) geometries (KR-branes) are connected to each other via transparent boundary conditions at the defect8. Footnote 8: For the metric (3) there is no black hole, whereas for the metric (16) there exists a black hole on the Karch-Randall brane. The latter is useful for the information paradox.
* **Bulk description**: HSNCFT located at the holographic screen (\(z=z_{B}\)) has three-dimensional gravity dual.
In our setup, the dictionary of wedge holography in the presence of Yang-Baxter deformation can be schematically expressed as:
_Classical gravity in three-dimensional Yang-Baxter deformed \(AdS_{3}\) bulk_
\(\equiv\) _(Quantum) gravity on two Karch-Randall branes with metric Yang-Baxter deformed \(AdS_{2}\)_
\(\equiv\) _deformed SYK model living at the one-dimensional defect._
The relationship between the first and second lines is because of the braneworld holography [53, 54]. The third line is related to the second line due to JT/SYK duality [55, 56, 57, 58, 59, 60, 61, 62, 63, 64]. Therefore, _classical gravity in Yang-Baxter deformed \(AdS_{3}\) can be dual to deformed SYK model at the one-dimensional defect_ based on wedge holography [16, 17, 18].
Figure 1: Pictorial realization of Yang-Baxter deformed wedge holography. \(z_{B}\) is the holographic screen and \(P\) is the defect.
## 3 Information Paradox in Yang-Baxter Deformed Wedge Holography
In this section, we use the wedge holography constructed in section 2 to discuss the Page curve of black holes in the Yang-Baxter deformed wedge holography in 3.1, and in 3.2 we discuss the DGP term and swampland criteria in the YB deformed wedge holography.
### Page Curve Due to the Presence of Yang-Baxter Deformation
In this subsection, we explore the Page curve of the eternal black hole in the absence of the DGP term on the Karch-Randall branes. For this purpose, we consider the two extremal surfaces: Hartman-Maldacena, and island surfaces, and calculate their respective entanglement entropies in 3.1.1 and 3.1.2.
#### 3.1.1 Hartman-Maldacena Surface
Let us write the bulk metric (16) in terms of the infalling Eddington-Finkelstein coordinate, \(dv=dt-\frac{dz}{f(z)}\) as below:
\[ds^{2}_{(2+1)}=g_{\mu\nu}dx^{\mu}dx^{\nu}=\mathcal{G}^{\eta}_{ \theta\theta}d\theta^{2}+\mathcal{F}_{\eta}(z)\Bigg{(}\frac{-f(z)dv^{2}-2 dvdz}{z^{2}}\Bigg{)}. \tag{23}\]
The Hartman-Maldacena surface is parametrized by \(\theta\equiv\theta(z)\) and \(v\equiv v(z)\); its induced metric, which can be obtained from (23), is given below:
\[ds^{2}=\Bigg{(}\mathcal{G}^{\eta}_{\theta\theta}\theta^{\prime}( z)^{2}-\frac{\mathcal{F}_{\eta}(z)v^{\prime}(z)}{z^{2}}\left(2+f(z)v^{\prime}(z) \right)\Bigg{)}dz^{2}, \tag{24}\]
where \(\theta^{\prime}(z)=\frac{d\theta}{dz}\) and \(v^{\prime}(z)=\frac{dv}{dz}\).
Using (24), the area of the Hartman-Maldacena surface can be computed as:
\[A_{\text{HM}}=\int_{z_{1}}^{z_{\text{max}}}dz\Bigg{(}\sqrt{ \mathcal{G}^{\eta}_{\theta\theta}\theta^{\prime}(z)^{2}-\frac{\mathcal{F}_{ \eta}(z)v^{\prime}(z)}{z^{2}}\left(2+f(z)v^{\prime}(z)\right)}\Bigg{)}, \tag{25}\]
where \(z_{1}\) and \(z_{\text{max}}\) are the point on the gravitating bath and the turning point of the Hartman-Maldacena surface, respectively. From (25), we obtain the equation of motion for \(\theta(z)\) as
\[\frac{\mathcal{G}^{\eta}_{\theta\theta}\theta^{\prime}(z)}{\sqrt {\mathcal{G}^{\eta}_{\theta\theta}\theta^{\prime}(z)^{2}-\frac{\mathcal{F}_{ \eta}(z)v^{\prime}(z)(f(z)v^{\prime}(z)+2)}{z^{2}}}}=C_{1}, \tag{26}\]
where \(C_{1}\) is a constant. Solving the above equation for \(\theta^{\prime}(z)\), we obtained:
\[\theta^{\prime}(z)=\frac{C_{1}\sqrt{\mathcal{F}_{\eta}(z)}\sqrt{v^{ \prime}(z)}\sqrt{f(z)v^{\prime}(z)+2}}{\sqrt{\mathcal{G}_{\theta\theta}^{ \eta}}z\sqrt{C_{1}^{2}-\mathcal{G}_{\theta\theta}^{\eta}}}, \tag{27}\]
Substituting \(\theta^{\prime}(z)\) from (27) into (25), we obtained:
\[A_{\rm HM}=\int_{z_{1}}^{z_{\rm max}}dz\mathcal{L}_{\rm HM}=\int _{z_{1}}^{z_{\rm max}}dz\Bigg{(}\sqrt{\frac{\mathcal{F}_{\eta}(z)v^{\prime}(z )\left(f(z)v^{\prime}(z)\left(C_{1}^{2}\mathcal{F}_{\eta}(z)+z^{2}\left( \mathcal{G}_{\theta\theta}^{\eta}-C_{1}^{2}\right)\right)+2\mathcal{G}_{ \theta\theta}^{\eta}z^{2}\right)}{z^{4}\left(C_{1}^{2}-\mathcal{G}_{\theta \theta}^{\eta}\right)}}\Bigg{)}. \tag{28}\]
The equation of motion for the embedding \(v(z)\) from (28) is calculated as
\[\frac{d}{dz}\left(\frac{\partial\mathcal{L}_{\rm HM}}{\partial v ^{\prime}(z)}\right)=0,\] \[\implies\frac{\partial\mathcal{L}_{\rm HM}}{\partial v^{\prime}(z )}=E. \tag{29}\]
Using (28) and (29), we obtained:
\[E=\frac{\mathcal{F}_{\eta}(z)\left(f(z)v^{\prime}(z)\left(C_{1} ^{2}\mathcal{F}_{\eta}(z)+z^{2}\left(\mathcal{G}_{\theta\theta}^{\eta}-C_{1}^ {2}\right)\right)+\mathcal{G}_{\theta\theta}^{\eta}z^{2}\right)}{z^{4}\left(C _{1}^{2}-\mathcal{G}_{\theta\theta}^{\eta}\right)\sqrt{\frac{\mathcal{F}_{ \eta}(z)v^{\prime}(z)\left(f(z)v^{\prime}(z)\left(C_{1}^{2}\mathcal{F}_{\eta}( z)+z^{2}\left(\mathcal{G}_{\theta\theta}^{\eta}-C_{1}^{2}\right)\right)+2 \mathcal{G}_{\theta\theta}^{\eta}z^{2}\right)}{z^{4}\left(C_{1}^{2}-\mathcal{G }_{\theta\theta}^{\eta}\right)}}}. \tag{30}\]
Using the condition \(v^{\prime}(z_{\rm max})\rightarrow-\infty\)[65], equations (4) and (30), we obtained:
\[E=\sqrt{\frac{\left(z_{\rm max}^{2}-1\right)\left(C_{1}^{2}z_{ \rm max}^{2}-C_{1}^{2}-z_{\rm max}^{2}\right)}{\left(C_{1}^{2}-1\right)z_{\rm max }^{4}}}-\frac{C_{1}^{2}\kappa\sqrt{\frac{\left(z_{\rm max}^{2}-1\right)\left( C_{1}^{2}z_{\rm max}^{2}-C_{1}^{2}-z_{\rm max}^{2}\right)}{\left(C_{1}^{2}-1 \right)z_{\rm max}^{4}}}}{\left(C_{1}^{2}-1\right)\eta z_{\rm max}\left(C_{1} ^{2}z_{\rm max}^{2}-C_{1}^{2}-z_{\rm max}^{2}\right)}. \tag{31}\]
where we retain the terms up to \(\mathcal{O}(\kappa)\).
At the turning point, \(\frac{dE}{dz_{\rm max}}=0\), which implies \(z_{\rm max}=\frac{\sqrt{2}C_{1}}{\sqrt{2C_{1}^{2}-1}}-\left(\sqrt{\frac{-3C_{1}^{2}+\sqrt{4-3C_{1}^{2}}+2}{3-3C_{1}^{2}}}\right)\kappa\). To make sure that the turning point of the Hartman-Maldacena surface is outside the horizon and positive, we need to consider \(C_{1}>\frac{1}{\sqrt{2}}\) (e.g., for \(C_{1}=\frac{1}{\sqrt{1.6}}\), \(z_{\rm max}\approx 2.2-1.2\kappa\)).
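The leading-order value of the turning point can also be checked numerically by extremizing the \(\kappa^{0}\) part of (31); a small sketch (our own cross-check, using scipy) is given below.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def E0(zmax, C1):
    # kappa^0 part of Eq. (31)
    return np.sqrt((zmax**2 - 1)*(C1**2*zmax**2 - C1**2 - zmax**2)/((C1**2 - 1)*zmax**4))

C1 = 1/np.sqrt(1.6)
res = minimize_scalar(lambda zm: -E0(zm, C1), bounds=(1.01, 10.0), method='bounded')
print(res.x)                               # ~ 2.236
print(np.sqrt(2)*C1/np.sqrt(2*C1**2 - 1))  # closed-form value sqrt(2) C_1 / sqrt(2 C_1^2 - 1) ~ 2.236
```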
Time on the gravitating bath can be obtained after simplifying \(dv=dt-\frac{dz}{f(z)}\) and the same is written below:
\[t_{1}=t(z_{1})=-\int_{z_{1}}^{z_{\rm max}}\left(v^{\prime}(z)+ \frac{1}{f(z)}\right)dz. \tag{32}\]
Now we explore the late-time behavior (\(t\rightarrow\infty\)) of the area of the Hartman-Maldacena surface as follows:
\[\lim_{t\rightarrow\infty}\frac{dA_{\rm HM}}{dt}=\lim_{t \rightarrow\infty}\Biggl{(}\frac{\frac{dA_{\rm HM}}{dz_{\rm max}}}{\frac{dt}{dz_ {\rm max}}}\Biggr{)}=\frac{\mathcal{L}_{\rm HM}(z_{\rm max},v^{\prime}(z_{\rm max }))+\int_{z_{1}}^{z_{\rm max}}\frac{\partial\mathcal{L}_{\rm HM}(z_{\rm max},v^ {\prime}(z_{\rm max}))}{\partial z_{\rm max}}dz}{-v^{\prime}(z_{\rm max})- \frac{1}{f(z_{\rm max})}-\int_{z_{1}}^{z_{\rm max}}\frac{\partial v^{\prime}(z) }{\partial z_{\rm max}}}. \tag{33}\]
Since,
\[\lim_{t\rightarrow\infty}\frac{\partial v^{\prime}(z)}{\partial z _{\rm max}}=\lim_{t\rightarrow\infty}\frac{\partial v^{\prime}(z)}{\partial E }\frac{\partial E}{\partial z_{\rm max}}=0,\] \[\lim_{t\rightarrow\infty}\frac{\partial L(z,v^{\prime}(z))}{ \partial z_{\rm max}}=\frac{\partial\mathcal{L}_{\rm HM}(z,v^{\prime}(z))}{ \partial v^{\prime}(z)}\frac{\partial v^{\prime}(z)}{\partial z_{\rm max}}=0. \tag{34}\]
Therefore,
\[\lim_{t\rightarrow\infty}\frac{dA_{\rm HM}}{dt}=\frac{\mathcal{ L}_{\rm HM}(z_{\rm max},v^{\prime}(z_{\rm max}))}{-v^{\prime}(z_{\rm max})- \frac{1}{f(z_{\rm max})}}=Constant. \tag{35}\]
Hence, the Hartman-Maldacena surface has the following entanglement entropy in the Yang-Baxter deformed model:
\[S_{\rm HM}=\frac{A_{\rm HM}}{4G_{N}^{(3)}}\propto t. \tag{36}\]
#### 3.1.2 Non-Trivial Island Surface in the Presence of Yang-Baxter Deformation
The island surface is parametrized by \(t=constant\) and \(z=z(\theta)\); hence its induced metric can be obtained from (16) as written below:
\[ds_{\rm Island}^{2}=\Biggl{(}\frac{\mathcal{F}_{\eta}(z)z^{\prime} (\theta)^{2}}{z^{2}f(z)}+\mathcal{G}_{\theta\theta}^{\eta}\Biggr{)}d\theta^{2}. \tag{37}\]
The area of the island surface using the induced metric (37) is obtained as follows:
\[\mathcal{A}_{\rm Island}=\int_{\theta_{1}}^{\theta_{2}}d\theta \Biggl{(}\sqrt{\frac{\mathcal{F}_{\eta}(z)z^{\prime}(\theta)^{2}}{z^{2}f(z)}+ \mathcal{G}_{\theta\theta}^{\eta}}\Biggr{)}. \tag{38}\]
Using (4), (17) (where we set \(z_{h}=1\) for simplicity of the calculations) and (38), we obtained:
\[\mathcal{A}_{\rm Island}=\int_{\theta_{1}}^{\theta_{2}}d\theta \Biggl{(}\sqrt{\frac{1}{4\eta^{2}}\left(2\eta+\log\left(-\frac{\kappa+z( \theta)}{\kappa-z(\theta)}\right)\right)^{2}+\frac{z^{\prime}(\theta)^{2}}{(z( \theta)^{2}-1)\left(\kappa^{2}-z(\theta)^{2}\right)}}\Biggr{)}. \tag{39}\]
The Euler-Lagrange equation of motion (EOM) for the island surface's embedding \(z(\theta)\) up to \(\mathcal{O}(\kappa^{2})\) can be schematically expressed as
\[\text{EOM}=\text{EOM}^{\kappa^{0}}-\left(\text{EOM}^{\kappa^{1}}\right)\kappa- \left(\text{EOM}^{\kappa^{2}}\right)\kappa^{2}, \tag{40}\]
where we defined EOM\({}^{\kappa^{0,1,2}}\) as below:
* EOM\({}^{\kappa^{0}}\): \[8\Bigg{(}\eta^{2}z(\theta)^{10}z^{\prime\prime}(\theta)-2\eta^{2 }z(\theta)^{8}z^{\prime\prime}(\theta)+\eta^{2}z(\theta)^{6}z^{\prime\prime}( \theta)-2\eta^{2}z(\theta)^{9}z^{\prime}(\theta)^{2}+3\eta^{2}z(\theta)^{7}z^ {\prime}(\theta)^{2}\] \[-\eta^{2}z(\theta)^{5}z^{\prime}(\theta)^{2}\Bigg{)}=0.\] (41)
* EOM\({}^{\kappa^{1}}\): \[8\Bigg{(}-2\eta z(\theta)^{9}z^{\prime\prime}(\theta)+4\eta z( \theta)^{7}z^{\prime\prime}(\theta)-2\eta z(\theta)^{5}z^{\prime\prime}( \theta)+2\eta z(\theta)^{8}z^{\prime}(\theta)^{2}-2\eta z(\theta)^{6}z^{ \prime}(\theta)^{2}\] \[+\eta z(\theta)^{12}-3\eta z(\theta)^{10}+3\eta z(\theta)^{8}- \eta z(\theta)^{6}\Bigg{)}=0.\] (42)
* EOM\({}^{\kappa^{2}}\): \[8\Bigg{(}3\eta^{2}z(\theta)^{8}z^{\prime\prime}(\theta)-6\eta^{2 }z(\theta)^{6}z^{\prime\prime}(\theta)+3\eta^{2}z(\theta)^{4}z^{\prime\prime}( \theta)-5\eta^{2}z(\theta)^{7}z^{\prime}(\theta)^{2}+7\eta^{2}z(\theta)^{5}z^ {\prime}(\theta)^{2}\] \[-2\eta^{2}z(\theta)^{3}z^{\prime}(\theta)^{2}-z(\theta)^{8}z^{ \prime\prime}(\theta)+2z(\theta)^{6}z^{\prime\prime}(\theta)-z(\theta)^{4}z^{ \prime\prime}(\theta)+z(\theta)^{5}z^{\prime}(\theta)^{2}-z(\theta)^{3}z^{ \prime}(\theta)^{2}\] \[+3z(\theta)^{11}-9z(\theta)^{9}+9z(\theta)^{7}-3z(\theta)^{5} \Bigg{)}=0.\] (43)
EOMs (41), (42), and (43) have a common physical solution which is given below.
\[z(\theta)=1 \tag{44}\]
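One can verify this directly: for a constant embedding all \(z^{\prime}\) and \(z^{\prime\prime}\) terms drop out of (41)-(43), and the remaining polynomials factor through \((z^{2}-1)^{3}\), which vanishes at \(z=1\). A short sympy sketch of this check (our own illustration) follows.

```python
import sympy as sp

z, eta = sp.symbols('z eta', positive=True)

# derivative-free parts of (41), (42), (43): for a constant embedding z' = z'' = 0
eom_k0 = sp.Integer(0)                                          # every term in (41) carries z' or z''
eom_k1 = 8*(eta*z**12 - 3*eta*z**10 + 3*eta*z**8 - eta*z**6)    # remaining terms of (42)
eom_k2 = 8*(3*z**11 - 9*z**9 + 9*z**7 - 3*z**5)                 # remaining terms of (43)

for eom in (eom_k0, eom_k1, eom_k2):
    print(sp.factor(eom), '->', eom.subs(z, 1))   # each factors through (z**2 - 1)**3 and vanishes at z = 1
```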
As discussed earlier, \(z_{h}=1\) is the black hole horizon. Therefore, the solution to the island surface's embedding \(z(\theta)\) up to \(\mathcal{O}(\kappa^{2})\) is given by:
\[z(\theta)=1-\kappa-\kappa^{2}=z^{\text{YB deformed}} \tag{45}\]
We can see very easily that (45) satisfies the NBC on the branes, i.e., \(z^{\prime}(\theta)|_{\theta=\theta_{1,2}}=0\). For
\(0<\kappa<1\), equation (45) implies \(z(\theta)<1\), i.e., \(z^{\text{YB deformed}}<z_{h}\). Since \(z(\theta)\) is constant, substituting \(z(\theta)\) from (45) into (38), we get the minimal area of the island surface as:
\[\mathcal{A}_{\text{Island}}=\int_{\theta_{1}}^{\theta_{2}}d\theta\sqrt{ \mathcal{G}_{\theta\theta}^{\eta}|_{z=z^{\text{YB deformed}}}}. \tag{46}\]
According to the Ryu-Takayanagi prescription, the entanglement entropy of the island surfaces is obtained as [66]:
\[S_{\text{Island}}=\frac{2\mathcal{A}_{\text{Island}}}{4G_{N}^{(3 )}}=\frac{2\int_{\theta_{1}}^{\theta_{2}}d\theta\sqrt{\mathcal{G}_{\theta \theta}^{\eta}|_{z^{\text{YB deformed}}}}}{4G_{N}^{(3)}}\] \[=-\frac{2(\theta_{2}-\theta_{1})\alpha f^{\prime}(z^{\text{YB deformed }})}{8G_{N}^{(3)}}=2\left(\frac{(\theta_{2}-\theta_{1})\alpha z^{\text{YB deformed}}}{4G_{N}^{(3)}} \right)<2S_{\text{thermal}}. \tag{47}\]
The factor "2" in the above equation is due to the second island surface contribution from the thermofield double partner. Therefore, due to the presence of Yang-Baxter deformation, we get a _non-trivial island surface_ inside the horizon whose entanglement entropy is lower than twice the thermal entropy of the black hole.
Figure 2: Description of the Yang-Baxter deformed wedge holography. The presence of Yang-Baxter deformation makes it possible to have a non-trivial island surface (solid brown curve) inside the horizon. The green dotted line in the figure corresponds to the black hole horizon. In this figure, we have shown just one part of the wedge holography; the Hartman-Maldacena surface will join the defect located on the other side of wedge holography.
#### 3.1.3 Page Curve
From subsections 3.1.1 and 3.1.2, we see that the Hartman-Maldacena surface has the linear time dependence (36) and the island surface's entanglement entropy is less than twice the thermal entropy of the black hole (47); hence the Page curve without the DGP term is obtained as shown in Fig. 3. Fig. 3 has been drawn by dropping the overall factor \(\left(\frac{(\theta_{2}-\theta_{1})\alpha}{4G_{N}^{(3)}}\right)\). This factor is also common to the thermal entropy of the black hole (22). If we consider the numerical value of the aforementioned factor, then this will just rescale the Page curves, but the qualitative results will remain the same. To draw the figure, we have taken \(\kappa=0,0.2,0.4\), implying \(z^{\text{YB un-deformed}}=z_{h}=1\) and \(z^{\text{YB deformed}}=0.76,0.44\) respectively. The entanglement entropies for \(z^{\text{YB deformed}}=0.76,0.44\) are \(S_{\text{Island}}=1.52,0.88\), which are less than \(2S_{\text{thermal}}=2\).
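The shape of these curves can be reproduced with a few lines of code: the entropy follows the linear Hartman-Maldacena growth until it saturates at \(S_{\text{Island}}=2(1-\kappa-\kappa^{2})\). A minimal sketch is given below; the slope of the linear phase is an illustrative value, not taken from the paper.

```python
import numpy as np
import matplotlib.pyplot as plt

rate = 0.1                                # slope of the Hartman-Maldacena growth (illustrative)
t = np.linspace(0.0, 40.0, 400)

for kappa in (0.0, 0.2, 0.4):
    S_island = 2.0*(1.0 - kappa - kappa**2)   # -> 2.0, 1.52, 0.88, cf. (45) and (47)
    S = np.minimum(rate*t, S_island)          # Page curve: minimum over the two extremal surfaces
    plt.plot(t, S, label=f'kappa = {kappa}')

plt.xlabel('t')
plt.ylabel('S in units of (theta_2 - theta_1) alpha / (4 G_N)')
plt.legend()
plt.show()
```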
Let us understand the physical meaning of these results. Since \(z^{\text{YB deformed}}<z_{h}\), the extremal surfaces in the presence of Yang-Baxter deformation are behind the black hole horizon. See [12, 67], where the island surface was found inside the horizon in non-holographic models. We found this result for the holographic model in this work. Using the entanglement entropies of the island surface and the Hartman-Maldacena surface, we obtained the Page curve in wedge holography in the presence of Yang-Baxter deformation.
Based on [68, 69, 70], we can say that as soon as the island comes into the picture, one starts recovering the information from the black hole. In Fig. 3, we see that we get different Page curves for different values of \(\kappa\), and hence the Yang-Baxter deformation shifts the Page curves. More explicitly, when \(\kappa\) increases, the Page curves shift towards earlier times. Therefore, variation of
Figure 3: Page curves for various values of \(\kappa\).
\(\kappa\) affects the emergence of islands and information recovery of the Hawking radiation9.
Footnote 9: See [13, 71] where the higher derivative terms affect the Page curves.
### DGP Term and Swampland Criteria in the Yang-Baxter Deformed Wedge Holography
In this subsection, we will discuss the effect of including the DGP term on the Karch-Randall branes. As discussed in [24], the Hartman-Maldacena surface entanglement entropy remains the same. In the presence of the DGP term, the island surface's entanglement entropy receives an extra contribution from the boundary of the island surface [24]. In the \(AdS_{d+1}/CFT_{d}\) correspondence, the same can be expressed as follows:
\[S_{EE}\sim\min\Biggl{[}\text{ext}\biggl{(}\int_{\Gamma}d^{d-1}x\sqrt{\gamma}+ \int_{\partial\Gamma}d^{d-2}x\sqrt{\sigma}\lambda_{a}\biggr{)}\Biggr{]}, \tag{48}\]
where \(\Gamma\) is the RT surface with induced metric \(\gamma\), and the boundary of \(\Gamma\) (\(\partial\Gamma\)) has the induced metric \(\sigma\).
Recently, it has been pointed out that including the DGP term on the Karch-Randall brane is non-physical [31, 32]. The DGP term can lead to negative entanglement entropy because of the negative effective coupling constant of one brane. To have a positive entanglement entropy in the Yang-Baxter deformed wedge holography, we need to consider \(\theta_{2}>\theta_{1}\). Another swampland criterion mentioned by the authors in [31] is that if \(S_{\text{ext}}<S_{\text{thermal}}\) for any extremal surface, then the theory belongs to the swampland.
We obtain the non-trivial island surface in Yang-Baxter deformed wedge holography due to the Yang-Baxter deformation. We can have positive entanglement entropy for the island surface provided \(\theta_{2}>\theta_{1}\), and hence we can avoid one of the swampland criteria given in [31]. But in our setup \(S_{\text{island}}<S_{\text{thermal}}\), therefore the second swampland criterion is unavoidable. Based on the results of [31], we can say that Yang-Baxter deformed wedge holography is also part of the swampland.
## 4 Holographic Complexity in \(AdS_{2}^{\eta}\) Background
In this section, we compute the holographic complexity of Yang-Baxter deformed \(AdS_{2}\) using the complexity equals volume proposal [72]. This proposal states that the complexity of the holographic dual is given by the volume of a co-dimension one surface in the bulk. For our case, we
have \(AdS_{2}^{\eta}\)/deformed SYK duality, and hence we can write the expression for the holographic complexity as:
\[\mathcal{C}=\frac{\mathcal{V}}{G_{N}^{(2)}l}, \tag{49}\]
where \(\mathcal{V}\), \(G_{N}^{(2)}\), and \(l\) are the volume of a one-dimensional surface in two-dimensional bulk, Newton constant in two dimensions, and length scale associated with Yang-Baxter deformed \(AdS_{2}\) background respectively.
In order to proceed, we parametrize the volume slice by \(t\equiv t(z)\). The induced metric associated with co-dimension one surface in YB deformed \(AdS_{2}\) has the following form:
\[ds^{2}|_{\gamma}=\frac{\mathcal{F}_{\eta}(z)}{z^{2}}\Bigg{(}-f(z)t^{\prime}(z )^{2}+\frac{1}{f(z)}\Bigg{)}dz^{2}. \tag{50}\]
Using (50), the volume of co-dimension one surface is obtained as:
\[\mathcal{V}=\int dz\mathcal{L}=\int dz\left(\frac{\sqrt{\mathcal{F}_{\eta}(z) }\sqrt{\frac{1}{f(z)}-f(z)t^{\prime}(z)^{2}}}{z}\right). \tag{51}\]
Since there is no explicit \(t(z)\) dependence in (51), the conjugate momentum of \(t(z)\) is constant (say \(C\)) and is given below:
\[p=\frac{\partial\mathcal{L}}{\partial t^{\prime}(z)}=-\frac{f(z)\sqrt{ \mathcal{F}_{\eta}(z)}t^{\prime}(z)}{z\sqrt{\frac{1}{f(z)}-f(z)t^{\prime}(z)^ {2}}}=C. \tag{52}\]
On solving the above equation for \(t^{\prime}(z)\), we obtained:
\[t^{\prime}(z)=\frac{Cz}{f(z)\sqrt{C^{2}z^{2}+f(z)\mathcal{F}_{ \eta}(z)}}. \tag{53}\]
At the turning point \(dz/dt|_{z=z_{t}}=0\), and using \(f(z)=1-z^{2}\) and \(\mathcal{F}_{\eta}(z)\) from (4), the turning point from (53) is obtained as:
\[z_{t}=\frac{1}{\sqrt{1-C^{2}}}-\frac{C^{2}\kappa^{2}}{2\sqrt{1-C ^{2}}}. \tag{54}\]
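This expansion can be cross-checked symbolically: the turning-point condition from (53) is \(C^{2}z^{2}+f(z)\mathcal{F}_{\eta}(z)=0\), whose positive root is \(z_{t}^{2}=(1-C^{2}\kappa^{2})/(1-C^{2})\) (our reconstruction); its small-\(\kappa\) expansion reproduces (54). A minimal sympy sketch is given below.

```python
import sympy as sp

z, C, kappa = sp.symbols('z C kappa', positive=True)
f = 1 - z**2
F_eta = 1/(1 - kappa**2/z**2)

z_t = sp.sqrt((1 - C**2*kappa**2)/(1 - C**2))             # candidate turning point (our reconstruction)
print(sp.simplify((C**2*z**2 + f*F_eta).subs(z, z_t)))    # -> 0, so the condition is satisfied
print(sp.series(z_t, kappa, 0, 3))                        # -> 1/sqrt(1-C^2) - C^2 kappa^2/(2 sqrt(1-C^2)) + ...
```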
Now substituting \(t^{\prime}(z)\) from (53) into the action (51), on-shell volume is obtained as:
\[\mathcal{V}=\int_{z_{B}}^{z_{t}}dz\mathcal{L}=\int_{z_{B}}^{z_{t} }dz\left(\frac{\sqrt{\frac{1}{C^{2}z^{2}-z^{2}+1}}}{z}+\frac{\kappa^{2}\left( \frac{1}{C^{2}z^{2}-z^{2}+1}\right)^{3/2}\left(2C^{2}z^{2}-z^{2}+1\right)}{2z ^{3}}\right). \tag{55}\]
Evaluating the radial integral in (55), we obtained the following simplified expression for the volume of co-dimension one surface in \(AdS_{2}^{\eta}\) background:
\[\mathcal{V}=\tanh^{-1}\left(\sqrt{\left(C^{2}-1\right)z_{B}^{2}+1} \right)-\tanh^{-1}\left(\sqrt{\left(C^{2}-1\right)z_{t}^{2}+1}\right)\] \[-\kappa^{2}\Bigg{(}\frac{\sqrt{\frac{1}{\left(C^{2}-1\right)z_{B} ^{2}+1}}\left(\left(C^{2}+1\right)z_{B}^{2}\,{}_{2}F_{1}\left(-\frac{1}{2},1; \frac{1}{2};\left(C^{2}-1\right)z_{B}^{2}+1\right)-1\right)}{4z_{B}^{2}}\] \[-\frac{\sqrt{\frac{1}{\left(C^{2}-1\right)z_{t}^{2}+1}}\left( \left(C^{2}+1\right)z_{t}^{2}\,{}_{2}F_{1}\left(-\frac{1}{2},1;\frac{1}{2}; \left(C^{2}-1\right)z_{t}^{2}+1\right)-1\right)}{4z_{t}^{2}}\Bigg{)}, \tag{56}\]
where \({}_{2}F_{1}(....)\) is the hypergeometric function. Utilizing (4) and (53), we obtained the boundary time as written below:
\[t(z_{B})=t_{b}=-\int_{{}_{z_{B}}}^{z_{t}}dz\left(\frac{Cz}{\left(1-z^{2}\right) \sqrt{C^{2}z^{2}-z^{2}+1}}-\frac{C\kappa^{2}}{2z\left(C^{2}z^{2}-z^{2}+1 \right)^{3/2}}\right). \tag{57}\]
After integrating (57), the simplified form of the boundary time is obtained as:
\[t_{b}=\tanh^{-1}\left(\frac{\sqrt{\left(C^{2}-1\right)z_{B}^{2}+1 }}{C}\right)-\tanh^{-1}\left(\frac{\sqrt{\left(C^{2}-1\right)z_{t}^{2}+1}}{C}\right)\] \[+\kappa^{2}\left(\frac{C\,{}_{2}F_{1}\left(-\frac{1}{2},1;\frac{1 }{2};\left(C^{2}-1\right)z_{B}^{2}+1\right)}{2\sqrt{\left(C^{2}-1\right)z_{B}^ {2}+1}}-\frac{C\,{}_{2}F_{1}\left(-\frac{1}{2},1;\frac{1}{2};\left(C^{2}-1 \right)z_{t}^{2}+1\right)}{2\sqrt{\left(C^{2}-1\right)z_{t}^{2}+1}}\right). \tag{58}\]
Now the ratio of the volume (56) and the boundary time (58) up to \(\mathcal{O}(\kappa^{2})\) can be schematically expressed as:
\[\frac{\mathcal{V}}{t_{b}}=a_{1}\left(1+a_{2}\kappa^{2}\right), \tag{59}\]
where \(a_{1}\) and \(a_{2}\) are defined as:
\[a_{1}=\frac{\tanh^{-1}\left(\sqrt{\left(C^{2}-1\right)z_{B}^{2}+ 1}\right)-\tanh^{-1}\left(\sqrt{\left(C^{2}-1\right)z_{t}^{2}+1}\right)}{\tanh^ {-1}\left(\frac{\sqrt{\left(C^{2}-1\right)z_{B}^{2}+1}}{C}\right)-\tanh^{-1} \left(\frac{\sqrt{\left(C^{2}-1\right)z_{t}^{2}+1}}{C}\right)},\] \[a_{2}=\Bigg{[}\frac{C\left(\sqrt{\left(C^{2}-1\right)z_{B}^{2}+1 }\,{}_{2}F_{1}\left(-\frac{1}{2},1;\frac{1}{2};\left(C^{2}-1\right)z_{t}^{2}+ 1\right)-\sqrt{\left(C^{2}-1\right)z_{t}^{2}+1}\,{}_{2}F_{1}\left(-\frac{1}{2},1;\frac{1}{2};\left(C^{2}-1\right)z_{B}^{2}+1\right)\right)}{2\sqrt{\left(C^ {2}-1\right)z_{B}^{2}+1}\sqrt{\left(C^{2}-1\right)z_{t}^{2}+1}\left(\tanh^{-1 }\left(\frac{\sqrt{\left(C^{2}-1\right)z_{B}^{2}+1}}{C}\right)-\tanh^{-1} \left(\frac{\sqrt{\left(C^{2}-1\right)z_{t}^{2}+1}}{C}\right)\right)}\] \[+\frac{\frac{\sqrt{\left(C^{2}-1\right)z_{t}^{2}+1}\left(\left(C^ {2}+1\right)z_{t}^{2}\,{}_{2}F_{1}\left(-\frac{1}{2},1;\frac{1}{2}\left(C^{2}- 1\right)z_{t}^{2}+1\right)-1\right)}{4z_{t}^{2}}-\frac{\sqrt{\left(C^{2}-1 \right)z_{B}^{2}+1}\left(\left(C^{2}+1\right)z_{B}^{2}\,{}_{2}F_{1}\left(- \frac{1}{2},1;\frac{1}{2};\left(C^{2}-1\right)z_{B}^{2}+1\right)-1\right)}{4z_ {B}^{2}}}\Bigg{]}. \tag{60}\]
Using (49) and (59), we obtained the complexity growth in the presence of Yang-Baxter deformation as given below:
\[\frac{d\mathcal{C}}{dt_{b}}=\frac{a_{1}}{G_{N}^{(2)}l}\left(1+a_{2} \kappa^{2}\right). \tag{61}\]
The above equation implies that
\[\mathcal{C}\propto a_{1}\left(1+a_{2}\kappa^{2}\right)t_{b}, \tag{62}\]
where \(a_{1}\) and \(a_{2}\) are constants.
As a consistency check of our results, we see that at the leading order, i.e., \(\mathcal{O}(\kappa^{0})\), our results reduce to the ones obtained in [73]. In particular, the leading order terms of (54), (57) and (55) match with [73] for \(z_{h}=1\). In [73], the boundary of \(AdS_{2}\) is located at \(z=0\), whereas in our case the boundary of Yang-Baxter deformed \(AdS_{2}\) is located at \(z=z_{B}\). The boundary, \(z=0\), of \(AdS_{2}\) leads to divergent complexity growth because \(a_{1}\) and \(a_{2}\) are divergent when \(z=0\). However, the _holographic screen located at \(z=z_{B}\) makes the right-hand side of (61) finite, and we obtain a finite constant contribution to the complexity growth._ The effect of Yang-Baxter deformation on the complexity growth depends upon the sign of \(a_{2}\): when \(a_{2}>0\), it increases the rate of complexity growth, and when \(a_{2}<0\), it decreases it. One can obtain \(a_{2}\) from the appropriate choice of the constant "\(C\)" and the location of the holographic screen "\(z_{B}\)".
## 5 Conclusion and Discussion
In this paper, we have constructed the wedge holography in the Yang-Baxter deformed \(AdS_{3}\) background. This has been done by considering the two Karch-Randall branes located at \(\theta=\theta_{1,2}\) and the Yang-Baxter deformed bulk \(AdS_{3}\) metric that satisfies the Neumann boundary condition on the branes. In our case, the defect that connects the two Karch-Randall branes can be identified as a "deformed SYK model". In this way, we proposed the co-dimension two holography for the "deformed SYK model". In other words, this is the "_duality between a Yang-Baxter deformed \(AdS_{3}\) bulk and a deformed SYK model living at the defect_"10.
Footnote 10: We are expecting this kind of duality because of wedge holography. We will explore more about this duality in our future work.
Let us summarise the key results of the paper.
* In this setup, the non-conformal boundary of the Yang-Baxter deformed \(AdS_{3}\) bulk is located at the holographic screen placed at \(z=z_{B}\). Hence, the defect is also located
at the holographic screen in this setup, as shown in Fig. 2. In Yang-Baxter deformed wedge holography, the two-dimensional field theory located at the holographic screen, which we termed the "Holographic Screen Non-Conformal Field Theory (HSNCFT)", is dual to the three-dimensional bulk. Since we have Yang-Baxter deformed \(AdS_{2}\) on the KR-branes, the deformed SYK model lives at the defect situated at the holographic screen because of the "\(AdS_{2}^{\eta}\)/deformed SYK" duality. Overall, the deformed SYK model living at the defect is dual to the three-dimensional bulk.
* According to the wedge holography, the only possible extremal surface is the black hole horizon [22]. The authors in [24, 26, 27] obtained a non-trivial island surface with entanglement entropy lower than the thermal entropy of the black hole by including the DGP term on the KR-branes. This possibility has been ruled out by the swampland criteria discussed in [31, 32].
* We used the Yang-Baxter deformed wedge holography to obtain the Page curve of the black hole and obtained the usual Page curve without the DGP term in the presence of Yang-Baxter deformation, in contrast to the flat Page curve obtained in [22, 24, 26]. The Yang-Baxter deformation is the reason for the existence of a _non-trivial island surface_ inside the black hole horizon.
Let us compare our results with the literature. In [22], the authors argued that one cannot get the Page curve in the usual wedge holography because the only possible extremal surface is the black hole horizon. In this paper, _we showed that there exists a non-trivial island surface inside the horizon in our case if we include the Yang-Baxter deformation in our setup. If we switch off the Yang-Baxter deformation, then the extremal surface is the black hole horizon, similar to [22]_.
We also discussed the effect of Yang-Baxter deformation on the complexity growth and computed the correction to it using the complexity equals volume proposal [72]. We found that the Yang-Baxter deformation enhances or decreases the complexity growth, controlled by the parameter \(a_{2}\). Without the Yang-Baxter deformation, our results reduce to the ones obtained in [73].
## Acknowledgements
GY is supported by a Senior Research Fellowship (SRF) from the Council of Scientific and Industrial Research (CSIR), Govt. of India. HR would like to thank the authorities of the Indian Institute of Technology Roorkee for their unconditional support towards research in basic sciences. We are grateful to Dibakar Roychowdhury for the fruitful comments and discussions. We would like to thank Rong-Xin Miao, Herman Verlinde and Andreas Karch for the helpful clarifications. GY
thanks the organizers of "_Holography@25_" for organizing such a nice event where some part of this work has been done. GY is very grateful to the Science and Engineering Research Board (SERB), Govt. of India (and StAC IIT Roorkee) for providing the partial financial support under the scheme "_International Travel Scheme (ITS)_" to attend "_Holography@25_".
|
2305.00704
|
Multi-Species Prey-Predator Dynamics During a Multi-Strain Pandemic
|
Small and large scale pandemics are a natural phenomenon repeatably appearing
throughout history, causing ecological and biological shifts in ecosystems and
a wide range of their habitats. These pandemics usually start with a single
strain but shortly become multi-strain due to a mutation process of the
pathogen causing the epidemic. In this study, we propose a novel
eco-epidemiological model that captures multi-species prey-predator dynamics
with a multi-strain pandemic. The proposed model extends and combines the
Lotka-Volterra prey-predator model and the Susceptible-Infectious-Recovered
(SIR) epidemiological model. We investigate the ecosystem's sensitivity and
stability during such a multi-strain pandemic through extensive simulation
relying on both synthetic cases as well as two real-world configurations. Our
results are aligned with known ecological and epidemiological findings, thus
supporting the adequacy of the proposed model in realistically capturing the
complex eco-epidemiological properties of the multi-species multi-strain
pandemic dynamics.
|
Ariel Alexi, Ariel Rosenfeld, Teddy Lazebnik
|
2023-05-01T08:08:10Z
|
http://arxiv.org/abs/2305.00704v1
|
# Multi-Species Prey-Predator Dynamics During a Multi-Strain Pandemic
###### Abstract
Small and large scale pandemics are a natural phenomenon repeatably appearing throughout history, causing ecological and biological shifts in ecosystems and a wide range of their habitats. These pandemics usually start with a single strain but shortly become multi-strain due to a mutation process of the pathogen causing the epidemic. In this study, we propose a novel eco-epidemiological model that captures multi-species prey-predator dynamics with a multi-strain pandemic. The proposed model extends and combines the Lotka-Volterra prey-predator model and the Susceptible-Infectious-Recovered (SIR) epidemiological model. We investigate the ecosystem's sensitivity and stability during such a multi-strain pandemic through extensive simulation relying on both synthetic cases as well as two real-world configurations.
Our results are aligned with known ecological and epidemiological findings, thus supporting the adequacy of the proposed model in realistically capturing the complex eco-epidemiological properties of the multi-species multi-strain pandemic dynamics.
**keywords:** Lotka-Volterra model, ODE on a graph, numerical simulation, extended SIR model.
## 1 Introduction
In nature, a gentle biological and ecological balance is kept in a complex system of plants, animal species, and the environment [1, 2, 3, 4, 5, 6]. At the micro-level of a small spatial location, the ecological system's dynamics is the sum of interactions between a number of animal (and plant) species with their environment and each other. Thus, one can divide these biological interactions into two: animal-environment and animal-animal interactions [7]. The first type is mostly stable over time, as changes in such interactions result from a long-term evolution process [8]. As such, a good approximation to these dynamics can be associated with the environment's ability to support its inhabitants [9]. The latter type is much more complex, with multiple ways and strategies animals apply to survive and produce offspring [10, 11, 12].
This system is highly sensitive and even a small-size event can break this gentle balance and put the ecological system in a long-term course of re-stabilization [13, 14]. A large catastrophic event can result in fatal outcomes such as species extinction [15], partially ruined food-chains [16, 17, 18], and large-scale economic damages for the human population supported by this ecosystem [19, 20]. In history, experts recorded many types and occasions of such events, ranging from large-scale fires to extreme weather changes [21, 22, 23]. A dominant type of event that repeats itself over time and locations is pandemics [24, 25, 26]. For example, the influenza virus, a member of the Orthomyxoviridae family, infects multiple species worldwide, including poultry, swine, humans, horses, seals, and other animals [27, 28, 29].
Currently, our understanding of multi-species pandemics is limited due to the complexity of detecting them in time, gathering relevant data, and influencing their course [30, 31, 32, 33]. However, the study of interacting species has gained popularity in the last decades, constantly revealing new insights about the biological dynamics around us and providing a cornerstone for a broad spectrum of technological developments [34, 35, 36, 37]. Indeed, particular interest is devoted to the study of epidemiology to understand the spread of infectious diseases, with the goal of determining pandemic intervention policies to possibly eradicate them [38, 39, 40, 41, 42, 43, 44]. In a complementary manner, the research on prey-predator dynamics has been widely extended with models
increasing in complexity and scope which are, presumably, capable of better representation of the dynamics found in nature [45, 46]. As such, mathematical models and computer simulations are shown to be powerful tools to understand the biological and ecological dynamics of pandemic spread [47, 48, 49, 50, 51, 52, 53].
A large body of work aims to extend the simple Susceptible-Infectious-Recovered (SIR) model that takes the form [54]:
\[\tfrac{dS}{dt}=-\beta S(t)I(t),\;\tfrac{dI}{dt}=\beta S(t)I(t)-\gamma I(t),\; \tfrac{dR}{dt}=\gamma I(t), \tag{1}\]
where \(S,I,\) and \(R\) are the groups of susceptible, infected, and recovered individuals, respectively, and the average infection rate and the average recovery rate are denoted by \(\beta\in\mathbb{R}^{+}\) and \(\gamma\in\mathbb{R}^{+}\), respectively. The SIR model assumes the population is well-mixed (i.e., the probability two individuals interact at any point in time is uniformly distributed) and that \(S+I+R=N\in\mathbb{N}\) such that \(N\) is the population's size, which is constant over time. As the SIR model is shown to be too simplistic to capture realistic pandemic scenarios [55, 56, 57], multiple extensions were proposed to improve it [58, 59, 60, 61, 62]. For instance, [63] used the SIR model to capture the pandemic spread in a fish population, showing the model is able to capture and predict the pandemic spread dynamics well. [64] analyzed an SEIR (E stands for the Exposed status) epidemiological model with a healthcare treatment pandemic intervention policy for Ebola in humans. [65] reviewed several mathematical modeling attempts for spatial-temporal transmission dynamics of influenza. In particular, they show that spatio-temporal stochastic SIR models are suitable to approximate the average reproduction number of the swine flu well based on historical data. More advanced epidemiological models take into consideration multi-strain dynamics where there is more than one pandemic in parallel [66]. [67] have studied a class of multi-strain deterministic epidemic models that extend the SIR model, in which cross-immunity varies with the genetic distance between strains. The authors show that for low maximal values of cross-immunity, all strains play a critical role in the course of the dynamics and tend to chaos. However, for the complementary case, the system has both chaotic and stable phases during the dynamics. [68] studied a multi-scale immuno-epidemiological model of influenza viruses including direct and environmental transmission, extending the SIR model to allow two time-since-infection structural variables. [60] examine a spatio-temporal model for the disease, extending the SIR model by taking into consideration a population living on two or more patches between any pair of which migration is allowed. They analyzed the influence of a pulse vaccination strategy, concluding conditions for eradicating the pandemic.
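For reference, the baseline SIR system (1) can be integrated numerically in a few lines; the following sketch uses illustrative parameter values, not taken from the paper.

```python
import numpy as np
from scipy.integrate import odeint

def sir(y, t, beta, gamma):
    S, I, R = y
    dS = -beta*S*I             # susceptible individuals become infected
    dI = beta*S*I - gamma*I    # infected individuals recover at rate gamma
    dR = gamma*I
    return [dS, dI, dR]

y0 = [0.99, 0.01, 0.0]                     # initial fractions, S + I + R = N = 1
t = np.linspace(0, 100, 1000)
S, I, R = odeint(sir, y0, t, args=(0.3, 0.1)).T
print(S[-1] + I[-1] + R[-1])               # the total population stays constant over time
```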
In a similar manner, researchers investigated the prey-predator dynamics from a bio-mathematical perspective. Most of the prey-predator models are based on the Lotka-Volterra model which takes the form [69]:
\[\tfrac{dx(t)}{dt}=ax(t)-by(t)x(t),\;\tfrac{dy(t)}{dt}=cx(t)y(t)-dy(t), \tag{2}\]
where \(x(t)\) and \(y(t)\) are the prey and predator population sizes over time, respectively. \(a\in\mathbb{R}^{+}\) is the natural growth rate of the prey population supported by the environment, \(b\in\mathbb{R}^{+}\) is the portion of the prey population that is consumed by the predator population, \(c\in\mathbb{R}^{+}\) is the rate of resources available for the predator population to grow due to consumption of the prey population, and \(d\in\mathbb{R}^{+}\) is the natural decay rate of the predator population. These models are based on two assumptions: a) the habitat for the prey is assumed to be unlimited so that in the absence of predators the prey will reproduce exponentially, and b) the predators survive only on the prey, and in the absence of food, their number will decrease exponentially. This model and its extensions are well studied [70, 71, 72, 73, 74].
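Similarly, the Lotka-Volterra system (2) can be integrated directly; the sketch below uses illustrative parameter and initial values (not taken from the paper) and exhibits the familiar periodic oscillations.

```python
import numpy as np
from scipy.integrate import odeint

def lotka_volterra(u, t, a, b, c, d):
    x, y = u                    # prey and predator population sizes
    dx = a*x - b*y*x            # prey growth minus predation losses
    dy = c*x*y - d*y            # predator growth from predation minus natural decay
    return [dx, dy]

t = np.linspace(0, 50, 2000)
x, y = odeint(lotka_volterra, [10.0, 5.0], t, args=(1.0, 0.1, 0.02, 0.5)).T
print(x.min(), x.max(), y.min(), y.max())   # both populations oscillate within fixed bounds
```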
Several attempts at merging these two models have been investigated [75, 76, 77]. [78] developed and analyzed a predator-prey model where both species are subjected to parasitism. They show that in the case where the uninfected predator cannot survive only on uninfected prey, the parasitization could lead to the persistence of the predator provided a certain threshold of transmission is surpassed. [79] analyzed a two-species prey-predator model with the SIS epidemiological model where predators have an alternative food source other than the prey. [80] also investigate a two-species prey-predator model with the SIS epidemiological model, proposing a stochastic version of it and a numerical scheme to solve the model efficiently. Common to these works is the focus on both single-strain pandemics and only two species. To the best of our knowledge, no model that combines multi-strain epidemiological dynamics with a multi-species prey-predator network has been proposed yet.
In this work, we propose a novel multi-strain, multi-species (MSMS) model for studying the spread of infectious diseases in a more realistic, complex ecosystem. A schematic view of the two possible extensions of two species with a single strain and their merger into an MSMS model is provided in Fig. 1.
The remainder of the paper is organized as follows: Section 2 describes the proposed model's mathematical formalization followed by a computer simulation implementation. Section 3 outlines the implementation of the
proposed model as a computer simulator. Section 4 provides a comprehensive evaluation of the proposed model using synthetic and real-world setups. Finally, Section 5 provides a discussion on the model's benefits and limitations followed by conclusion remarks and suggestions for future work.
## 2 Model Definition
In order to capture the ecological-epidemiological dynamics, we use a system of ordinary differential equations (ODEs). Intuitively, we combine the multi-strain pandemic model proposed by [66] with a multi-species Lotka-Volterra model proposed by [79]. On top of that, we further extend the multi-strain pandemic model to include cross-species infection and an exposure phase of the infection.
### Single Specie-level dynamics
For each specie in the set of all species in the system, the multi-strain epidemiological model considers a population \(\mathbb{P}_{i}\) for the \(i_{th}\) specie. We assume a pandemic for specie \(i\) has \(M_{i}:=\{1,\ldots,m_{i}\}\) strains. Each individual in the population, \(p_{i}^{j}\in\mathbb{P}_{i}\), is associated with one of five epidemiological states with respect to each strain: susceptible (\(S\)), exposed (\(E\)), infected (\(I\)), recovered (\(R\)), and dead (\(D\)). Thus, the epidemiological state of an individual can be represented by a vector \(\eta_{i}\in\mathbb{R}^{|M_{i}|\times 5}\). Moreover, as it is assumed that an individual cannot be infected by or exposed to more than one strain at the same time, and since once an individual is dead due to one strain it is dead, the individual's epidemiological state can be reduced to the set of strains one has recovered from, \(j\in P(M_{i})\), and the current infectious strain, \(k\in M_{i}\).
Therefore, each individual belongs to one of the following groups: 1) Infectious with strain \(k\in M_{i}\) and a history of recoveries \(J\in P(M_{i})\) (i.e., an element of the power set of the strain set) represented by \(R_{J}I_{k}^{i}\); 2) Exposed with strain \(k\in M_{i}\) and a history of recoveries \(J\in P(M_{i})\) represented by \(R_{J}E_{k}^{i}\); 3) Recovered with a history \(J\in P(M_{i})\) represented by \(R_{J}^{i}\); and 4) Dead (\(D^{i}\)). Of note, for \(J=\emptyset\), \(R_{J}\equiv S\) is the susceptible epidemiological state. A schematic view of the transition between the stages of the disease for an individual for two strains (i.e.,
Figure 1: A schematic view of the model’s structure and the connections between them.
\(\|M\|=2\)) is shown in Fig. 2.
Individuals in the Recovered (\(R_{J}\)) group have immunity to strains \(k\in J\) and are susceptible to infection by strains \(M_{i}\backslash J\). When an individual in this group is exposed to strain \(k\in M_{i}\backslash J\), the individual is transferred to the Exposed group with strain \(k\) and a history of recoveries \(J\) (\(R_{J}E_{k}\)) at a rate \(\beta_{J,k}\). The individual stays in this group for \(\psi_{J,k}\) time steps, after which the individual is transferred to the Infected group of the same strain \(k\) with the same recovery history \(J\), marked by (\(R_{J}I_{k}\)). The individuals stay in this group for \(\gamma_{J,k}\) time steps, after which the individuals are transferred to the Recovered group (\(R_{J\cup\{k\}}\)) or the Dead group (\(D\)) at rates \(1-\xi_{J,k}\) and \(\xi_{J,k}\), respectively. The recovered individuals are again healthy, no longer contagious, and immune from future infection by the same strain \(k\).
Formally, for the \(i_{th}\) specie, the multi-strain epidemiological model takes the following form: First, in Eq. (3), \(\frac{dR_{J}^{i}(t)}{dt}\) is the dynamic number of individuals who have recovered from a group of strains \(J\in P(M_{i})\) over time. It is affected by the following two terms. First, for each strain \(k\in J\), an individual who has recovered from the group \(J\backslash\{k\}\) of strains and is infected with strain \(k\) recovers at rate \(\gamma_{J\backslash\{k\},k}^{i}\) with a probability of \(1-\xi_{J\backslash\{k\},k}^{i}\). Second, individuals leave the group when they are infected by strain \(k\) at rate \(\beta_{J,k}^{i}\). These individuals can be infected by any individual carrying strain \(k\) who has recovered from any group \(L\) of strains such that \(k\not\in L\).
\[\frac{dR_{J}^{i}(t)}{dt}=\sum_{l\in J}\gamma_{J\backslash\{l\},l}^{i}(1-\xi_{ J\backslash\{i\},i}^{i})R_{J\backslash\{i\}}^{i}I_{i}(t)-\sum_{l\in M_{i} \backslash J}\beta_{JJ}^{i}R_{J}^{i}(t)\sum_{L\in P(M),c\not\in L}R_{L}I_{c}^{i }(t). \tag{3}\]
Second, in Eq. (4), \(\frac{dR_{J}E_{k}(t)}{dt}\) is the dynamic number of individuals who have recovered from a group of strains \(J\) and are exposed to a strain \(k\) over time. It is affected by the following two terms. First, individuals become exposed after being infected by strain \(k\) at rate \(\beta_{J,k}^{i}\). These individuals can be infected by any individual carrying strain \(k\) who has recovered from any group \(L\) of strains such that \(k\not\in L\). Second, individuals exposed to strain \(k\) become infected at rate \(\psi_{J,k}^{i}\).
\[\frac{dR_{J}E_{k}^{i}(t)}{dt}=\sum_{k\in M_{i}\backslash J}\beta_{J,i}^{i}R_{ J}^{i}(t)\sum_{L\in P(M_{i}),k\not\in L}R_{L}I_{k}^{i}(t)-\psi_{J,k}^{i}R_{J}E_{k}^ {i}(t). \tag{4}\]
Third, in Eq. (5), \(\frac{dR_{J}I_{k}^{i}(t)}{dt}\) is the dynamic number of individuals who have recovered from a group of strains \(J\) and are infected with strain \(k\) over time. It is affected by the following two terms. First, individuals exposed to strain \(k\) with a history of \(J\) become infected with strain \(k\) at rate \(\psi_{J,k}^{i}\). Second, individuals infected with strain \(k\) either die or recover at rate \(\gamma_{J,k}^{i}\).
Figure 2: Schematic view of transition between disease stages, shown for \(\|M\|=2\).
\[\frac{dR_{J}I_{k}^{i}(t)}{dt}=\psi_{J,k}^{i}R_{J}E_{k}^{i}(t)-\gamma_{J,k}^{i}R_{J }I_{k}^{i}(t). \tag{5}\]
Fourth, in Eq. (6), \(\frac{dD^{i}(t)}{dt}\) is the dynamic number of dead individuals over time. For each strain \(k\), and for each group \(J\backslash\{k\}\), infected individuals who do not recover die at rate \(\gamma_{J\backslash\{k\},k}^{i}\) with probability \(\xi_{J\backslash\{k\},k}\).
\[\frac{dD^{i}(t)}{dt}=\sum_{k\in M,J\in P(M)}\gamma_{J\backslash\{k\},k}^{i} \xi_{J\backslash\{k\},k}^{i}R_{J\backslash\{k\}}I_{k}^{i}(t). \tag{6}\]
In summary, the single specie-level epidemiological dynamics take the form:
\[\begin{array}{l}\frac{dR_{J}^{i}(t)}{dt}=\sum_{l\in J}\gamma_{J \backslash\{l\},l}^{i}(1-\xi_{J\backslash\{i\},i}^{i})R_{J\backslash\{i\}}^{i }I_{i}(t)-\sum_{l\in M_{i}\backslash J}\beta_{J,l}^{i}R_{J}^{i}(t)\sum_{L\in P (M),c\not\in L}R_{L}I_{c}^{i}(t),\\ \\ \frac{dR_{J}E_{k}^{i}(t)}{dt}=\sum_{k\in M_{i}\backslash J}\beta_{J,i}^{i}R_{J }^{i}(t)\sum_{L\in P(M_{i}),k\not\in L}R_{L}I_{k}^{i}(t)-\psi_{J,k}^{i}R_{J}E_ {k}^{i}(t),\\ \\ \frac{dR_{J}I_{k}^{i}(t)}{dt}=\psi_{J,k}^{i}R_{J}E_{k}^{i}(t)-\gamma_{J,k}^{i}R _{J}I_{k}^{i}(t),\\ \\ \frac{dD^{i}(t)}{dt}=\sum_{k\in M,J\in P(M)}\gamma_{J\backslash\{k\},k}^{i}\xi_ {J\backslash\{k\},k}^{i}R_{J\backslash\{k\}}I_{k}^{i}(t)..\end{array} \tag{7}\]
One can notice that the last equation, which captures the number of individuals in the population that die due to the pandemic, does not influence the rest of the dynamics. As such, for our subsequent implementation, we will omit this equation from consideration.
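To make the bookkeeping of the system (7) concrete, the compartments can be enumerated programmatically: one \(R_{J}\) group per recovery history \(J\in P(M_{i})\), and one \(R_{J}E_{k}\) and one \(R_{J}I_{k}\) group per history \(J\) and strain \(k\notin J\). The following sketch (our own illustration, not the authors' code) shows such an enumeration.

```python
from itertools import chain, combinations

def powerset(strains):
    s = list(strains)
    return [frozenset(c) for c in chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

def compartments(num_strains):
    strains = list(range(1, num_strains + 1))
    comps = []
    for J in powerset(strains):
        comps.append(('R', J))                 # recovered with history J (J = {} is the susceptible group S)
        for k in strains:
            if k not in J:
                comps.append(('E', J, k))      # exposed to strain k with recovery history J
                comps.append(('I', J, k))      # infected with strain k and recovery history J
    return comps

print(len(compartments(2)))   # 2 strains: 4 R_J groups + 8 exposed/infected groups = 12 compartments
```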
### Cross-Species dynamics
In our model, the cross-species dynamics include two main components: cross-infection and prey-predator interactions. However, not all species interact with all other species and, even if they do interact, they need not necessarily interact in the same way. As such, one can represent the interactions between a set of species \(\mathbf{P}:=[\mathbb{P}_{1},\ldots,\mathbb{P}_{N}]\) using a directed, non-empty graph \(G:=(V,E_{1},E_{2})\), where \(V\subset\mathbf{P}\times\mathbb{R}^{2}\) is a set of nodes corresponding to the species populations, \(E_{1}\subset V\times V\times\mathbb{R}^{2}\) is a set of directed edges representing the prey-predator interactions, and \(E_{2}\subset V\times V\times\mathbb{R}^{|M_{x}||M_{y}|}\) is a set of directed edges representing the cross-infection interactions. Formally, \(v\in V\) represents the entire population of a specie with two parameters: the natural growth rate due to free resources \(a^{i}\in\mathbb{R}\) and the natural population decay \(d^{i}\in\mathbb{R}\). The prey-predator interaction between species \(\mathbb{P}_{x}\) and \(\mathbb{P}_{y}\), \(e_{1}^{x,y}\in E_{1}\), defines two parameters: the average portion of population \(\mathbb{P}_{y}\) that population \(\mathbb{P}_{x}\) consumes, \(C_{x,y}\in\mathbb{R}\), and the growth rate population \(\mathbb{P}_{x}\) obtains from consuming population \(\mathbb{P}_{y}\), \(B_{x,y}\in\mathbb{R}\). The cross-infection interaction between species \(\mathbb{P}_{x}\) and \(\mathbb{P}_{y}\), \(e_{2}^{x,y}\in E_{2}\), defines the average infection rate \(\beta_{k_{x},k_{y}}^{x,y}\in\mathbb{R}\) at which an individual of population \(x\) infected with strain \(k_{x}\) causes an individual in population \(y\) to become exposed to strain \(k_{y}\).
Accordingly, the prey-predator dynamics follow the Lotka-Volterra model for two species at a time. As such, in Eq. (8), \(\frac{d|\mathbb{P}_{x}(t)|}{dt}\) is the dynamic number of individuals in population \(x\) over time. It is influenced by the following four terms. First, the non-infected population reproduces naturally at rate \(a_{x}\). Second, population \(x\) consumes a set of other populations, \(\{y|(x,y)\in E_{1}\}\), each of which contributes growth at rate \(B_{x,y}\), proportional to the sizes of the two populations. Third, in a symmetric way, population \(x\) is itself consumed by other species at rate \(C_{y,x}\). Finally, the population size decays naturally and exponentially at rate \(d_{x}\).
\[\frac{d|\mathbb{P}_{x}(t)|}{dt}=a_{x}\sum_{J\in P(M_{x})}R_{J}^{x}(t)+\sum_{\{y|(x,y)\in E_{1}\}}B_{x,y}|\mathbb{P}_{x}(t)||\mathbb{P}_{y}(t)|-\sum_{\{y|(y,x)\in E_{1}\}}C_{y,x}|\mathbb{P}_{y}(t)||\mathbb{P}_{x}(t)|-d_{x}|\mathbb{P}_{x}(t)|. \tag{8}\]
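As a small illustration of the population-level terms in Eq. (8), the sketch below integrates a single prey-predator pair, replacing \(\sum_{J}R_{J}^{x}(t)\) by the whole population (i.e., ignoring the epidemiological compartments). All parameter values and variable names are assumptions made for this sketch.

```python
# Sketch: the prey-predator coupling of Eq. (8) for one pair of species,
# with illustrative (assumed) parameter values.
from scipy.integrate import solve_ivp

a = {"prey": 0.8, "pred": 0.1}       # natural growth rates
d = {"prey": 0.1, "pred": 0.6}       # natural decay rates
B_pred_prey = 0.02                    # growth the predator gains from the prey
C_pred_prey = 0.05                    # portion of the prey consumed by the predator

def rhs(t, y):
    prey, pred = y
    d_prey = a["prey"] * prey - C_pred_prey * pred * prey - d["prey"] * prey
    d_pred = a["pred"] * pred + B_pred_prey * pred * prey - d["pred"] * pred
    return [d_prey, d_pred]

sol = solve_ivp(rhs, (0, 100), [40.0, 9.0])
print(sol.y[:, -1].round(2))          # population sizes at the final time step
```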
In addition, in order to capture the cross-infection dynamics, let us focus on two species, \(x\) and \(y\). For any two such species, a matrix \(A\in\mathbb{R}^{|M_{x}|\times|M_{y}|}\) is defined to represent the infection rates between individuals of the \(x\) population carrying strain \(k_{1}\in M_{x}\) and individuals of the \(y\) population carrying
strain \(k_{2}\in M_{y}\). Once \(x,y,k_{1}\), and \(k_{2}\) are chosen, the dynamic change of the individuals of population \(x\) exposed to strain \(k_{1}\) is governed by the infection rate \(\beta_{k_{1},k_{2}}^{x,y}\), applied to all individuals in the \(x\) population that are susceptible to strain \(k_{1}\) and can be infected by an individual from population \(y\) that is infected by strain \(k_{2}\) with any recovery history, as formally described in Eq. (9):
\[\frac{dR_{J}E_{k_{1}}^{x}(t)}{dt}=\beta_{k_{1},k_{2}}^{x,y}\sum_{J\in P(M_{x}\setminus\{k_{1}\})}R_{J}^{x}(t)\sum_{L\in P(M_{y}\setminus\{k_{2}\})}R_{L}I_{k_{2}}^{y}(t). \tag{9}\]
Hence, these dynamics can be summarized by the following system of ODEs:
\[\forall x\in V:\frac{d|\mathbb{P}_{x}(t)|}{dt}=a_{x}\sum_{J\in P(M_{x})}R_{J}^{x}(t)+\sum_{\{y|(x,y)\in E_{1}\}}B_{x,y}|\mathbb{P}_{x}(t)||\mathbb{P}_{y}(t)|-\sum_{\{y|(y,x)\in E_{1}\}}C_{y,x}|\mathbb{P}_{y}(t)||\mathbb{P}_{x}(t)|-d_{x}|\mathbb{P}_{x}(t)|,\]
\[\forall(x,y)\in E_{2},(k_{1},k_{2})\in M_{x}\times M_{y}:\frac{dR_{J}E_{k_{1}}^{x}(t)}{dt}=\beta_{k_{1},k_{2}}^{x,y}\sum_{J\in P(M_{x}\setminus\{k_{1}\})}R_{J}^{x}(t)\sum_{L\in P(M_{y}\setminus\{k_{2}\})}R_{L}I_{k_{2}}^{y}(t) \tag{10}\]
where \(|\mathbb{P}_{i}(t)|:=\sum_{k\in M}\sum_{J\in P(M\setminus\{k\})}\left(R_{J}I_{k}^{i}(t)+R_{J}E_{k}^{i}(t)\right)+\sum_{J\in P(M)}R_{J}(t)\) is the size of the \(i\)-th population at time \(t\).
A schematic example of the prey-predator and cross-infection interactions in a multi-species case is provided in Fig. 3, where six species participate in the dynamics: species 1 eats species 2 and 3, species 2 eats species 4 and 5, and species 3 eats species 6. In addition, species 3 infects species 2 and is infected by species 5. Species 5 and 6 infect each other.
## 3 Computer Simulation
To simulate the model, we used an agent-based simulation approach [81, 82, 83, 84, 85] where each individual in the multi-population (i.e., the set of all species populations) is a timed finite state machine [86] defined by a tuple \(p_{i}\in\mathbb{P}_{i}:p_{i}:=(s,k,J,\tau)\), where \(s\in[1,\ldots,N]\) is the index of the individual's species, \(k\in M_{i}\cup\{0\}\) is the strain the individual is currently infected with (if any), \(J\in P(M_{i})\) is the recovery history, and \(\tau\in\mathbb{N}\) is the number of time steps that have passed since the last change in the epidemiological state.
At the beginning of the simulation, the user is responsible for generating a set of populations, such that all agents in the same population have an identical \(s\) value. Moreover, for each population \(i\), the user declares the set of strains \(M_{i}\) and, as a result, provides the following parameters: infection rates \(\beta_{J,k}^{i}\), durations from exposure to becoming infectious \(\psi_{J,k}^{i}\), recovery durations \(\gamma_{J,k}^{i}\), and recovery rates \(\xi_{J,k}^{i}\), for each \(k\in M_{i}\) and \(J\in P(M_{i})\). In addition, for each population \(i\), the user provides natural growth and decay rates \(a_{i}\) and \(d_{i}\), respectively. Once all the populations are generated, the user is required to construct the prey-predator graph \(G\) by introducing the two sets of edges \(E_{1}\) and \(E_{2}\) as follows. First, for each pair of populations such that population \(x\) is the prey and population \(y\) is the predator, the edge \((x,y)\in E_{1}\) is added with the consumption
Figure 3: An example for prey-predator and cross-infection graph of multi-species with six species.
portion of the prey population \(B_{x,y}\) and the growth rate of the predator population \(C_{x,y}\). Second, for each pair of populations \(x\) and \(y\) (not necessarily prey and predator), the edge \((x,y)\in E_{2}\) is added, specifying the strain \(k_{1}\in M_{x}\) of the \(x\) individual, the strain \(k_{2}\in M_{y}\) of the \(y\) individual with recovery history \(J\in P(M_{y})\), and a cross-infection rate \(\beta_{J,k_{1}}^{x,y}\in\mathbb{R}^{+}\).
Afterward, the simulation proceeds in rounds \(r\in[1,\ldots,T]\) with \(T<\infty\). At each round, individuals in each population may interact, thus triggering the epidemiological and prey-predator dynamics mathematically detailed in Sections 2.1 and 2.2. However, since the order in which the individual dynamics are executed might influence the outcome, we tackle this challenge by computing all the changes in the meta-population and executing them at once after canceling out opposite changes, as commonly performed in particle simulations [87, 88, 89]. A schematic view of the simulator's process is shown in Fig. 4.
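The following is a minimal, simplified sketch of such a synchronous round update: all proposed transitions are collected first and only then applied, so the execution order within a round cannot bias the outcome. The agent fields mirror the tuple \((s,k,J,\tau)\); the helper `step_fn` is a hypothetical placeholder for the epidemiological rules, not the study's implementation.

```python
# Sketch: synchronous agent-based round update (collect all changes, then apply).
import random
from dataclasses import dataclass
from typing import Optional

@dataclass
class Agent:
    s: int               # species index
    k: Optional[int]     # current strain, or None if not infected
    J: frozenset         # recovery history
    tau: int             # time steps since the last epidemiological change

def simulation_round(agents, step_fn):
    """One round: collect all proposed state changes first, then apply them at once."""
    proposals = [(agent, step_fn(agent, agents)) for agent in agents]
    for agent, new_state in proposals:
        if new_state is None:
            agent.tau += 1
        else:
            agent.k, agent.J = new_state
            agent.tau = 0

# tiny usage example with a hypothetical infection rule
agents = [Agent(s=0, k=None, J=frozenset(), tau=0) for _ in range(5)]
def step_fn(agent, population):
    if agent.k is None and random.random() < 0.1:
        return (0, agent.J)                  # become infected with strain 0
    return None
simulation_round(agents, step_fn)
print(sum(a.k is not None for a in agents), "agents infected after one round")
```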
### Ecosystem instability metric
There are multiple metrics used to evaluate the course of a pandemic, such as total mortality, the maximum number of individuals infected at the same time, and the basic reproduction number [90, 91, 66, 92, 93]. Each of these metrics captures a different aspect of the pandemic spread. However, they are all designed for the case where there is a single population, or where the evaluation is agnostic to all sub-populations and aims to measure the overall pandemic spread.
Thus, for our analysis, we propose a novel metric to measure the pandemic spread in a multi-species scenario, where the focus is on the ecosystem's ability to return to a stable state. Intuitively, if no pandemic is present, the ecosystem reaches an equilibrium state [94, 95, 96], \(s^{*}\), that can be treated as a baseline and therefore \(d(s^{*})=1\). The other extreme case, \(s^{**}\), is that all species go extinct due to the pandemic. As this is arguably the worst-case scenario, we define \(d(s^{**})=0\). Notice that both \(s^{*}\) and \(s^{**}\) are equilibrium states. Thus, it is natural to require the metric \(d\) to evaluate the condition of the equilibrium state the system converges to. Following this rationale, the metric \(d\) assesses an equilibrium state \(s\) through the number of extinct species. Moreover, as we allow the equilibrium state to contain any value in \(\mathbb{R}\cup\{-\infty,\infty\}\), the proposed metric is also an indicator of the instability caused to the ecosystem by the pandemic. Therefore, the metric \(d\) is formally defined as follows.
**Definition 3.1** (Ecosystem's stability metric).: Given an MSMS dynamic system \(M\) with \(N\) species, the ecosystem's stability metric, \(d\), measures the level of the ecosystem's stability long after the pandemic has been eliminated or has stabilized, as follows:
\[d(M):=1-\frac{|\{v\in\lim_{t\rightarrow\infty}M(t):v\in\{0,\infty\}\}|}{N}\]
Notably, one can define \(d\) slightly differently while following the same motivation and constraints. Thus, the proposed definition is one feasible choice for \(d\) and not the only possible one.
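A minimal sketch of how Definition 3.1 could be evaluated from simulated limiting population sizes is given below; the blow-up threshold used to approximate \(\infty\) numerically is an assumption of this sketch, not part of the definition.

```python
# Sketch: ecosystem stability metric d from final (limiting) population sizes.
import numpy as np

def ecosystem_stability(final_pop_sizes, blow_up_threshold=1e9):
    """Fraction of species that neither go extinct (0) nor blow up (treated as infinity)."""
    v = np.asarray(final_pop_sizes, dtype=float)
    unstable = (v <= 0.0) | (v >= blow_up_threshold) | ~np.isfinite(v)
    return 1.0 - unstable.mean()

print(ecosystem_stability([120.0, 0.0, 3.4e12, 56.0]))   # -> 0.5 (two of four species unstable)
```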
Figure 4: A schematic view of the simulator’s process in both process diagram (black) and objects diagram (blue). (a) the initial phase of constructing the simulator, (b) the computation of the simulation itself, and (c) the analysis of the simulation’s results.
## 4 Evaluation
In order to study the behavior of the MSMS model under various conditions and scenarios, we divide the analysis into two main parts: synthetic and real-world setups. Using the synthetic data, we are able to numerically study the damage and instability that a multi-strain pandemic causes to an ecosystem in different cases. In particular, as ecosystems are widely diverse and complex, in order to study the sensitivity of the pandemic's damage, we randomly generate a large set of cases and measure the average and standard deviation of the pandemic's damage on this set while changing a subset of parameters and initial conditions each time. In a complementary manner, real-world cases are considered in order to test the influence of global pandemic properties on the overall ecosystem's stability, given a specific topology of the species interactions.
### Synthetic Setup
To realize this simulation, several parameters of the MSMS model have to be set. We discuss the main parameter values below and provide a summary in Table 1. The parameters are chosen to represent relatively small, yet diverse, ecosystems that require a reasonable computational budget to simulate. The pandemic's properties as well as the prey-predator dynamics are randomly sampled, unless stated otherwise.
In order to obtain a wide variety of cases, \(n=1000\) randomly generated graphs are considered. For these \(1000\) sampled graphs, we examine the ecosystem's stability metric as a function of different properties of the MSMS model; Fig. 5 summarizes the main results. As one can see from Fig. 5a, the mean ecosystem's stability decreases monotonically with the number of species (\(|V|\)), while the standard deviation increases. In a similar manner, Fig. 5b shows that the cross-infection density (\(|E_{2}|/|V|\)) induces a monotonically decreasing behavior of the ecosystem's stability. Moreover, as the cross-infection density increases, the rate of decrease intensifies, as indicated by the negative value of the second-order numerical derivative of the curve. The ecosystem's stability also decreases monotonically with the prey-predator density (\(|E_{1}|/|V|\)), as presented in Fig. 5c. Specifically, an inverse relationship of the form \(0.95-0.39Z/(Z+0.11)\), where \(Z:=|E_{1}|/|V|\), is found with a coefficient of determination \(R^{2}=0.958\), using the least mean squares method [97]. Similar dynamics are encountered when computing the sensitivity of the ecosystem's stability to the number of strains (\(|M|\)), as shown in Fig. 5d.
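For illustration, the sketch below shows how an inverse curve of the form \(a-bZ/(Z+c)\) can be fitted by least squares, as done for Fig. 5c; the data here are synthetic, not the study's measurements, and the helper names are ours.

```python
# Sketch: least-squares fit of an inverse curve a - b*Z/(Z + c) to synthetic data.
import numpy as np
from scipy.optimize import curve_fit

def model(Z, a, b, c):
    return a - b * Z / (Z + c)

Z = np.linspace(0.05, 0.5, 20)
stability = model(Z, 0.95, 0.39, 0.11) + np.random.default_rng(0).normal(0, 0.01, Z.size)
params, _ = curve_fit(model, Z, stability, p0=(1.0, 0.5, 0.1))
residuals = stability - model(Z, *params)
r2 = 1 - residuals.var() / stability.var()    # coefficient of determination
print(params.round(3), round(r2, 3))
```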
### Real-World Setup
In order to evaluate the proposed MSMS model's ability to capture and predict the ecosystem's state during a pandemic in a more realistic setup, we consider two real-world cases: First, a wild farm case in Asia where farm animals have interacted with nearby wild animals; Second, a near-shore ocean case. For both cases, we consider the Avian Influenza virus to be the pathogen at the root of the pandemic [98, 99, 100, 101, 102, 103].
Because each case involves many species and their interactions, a lot of the required information is unknown. In order to balance the available data and computational resources against the accuracy of the case's representation, we have selected a subset of species that play a key role in the dynamics. For both cases we consider the following: initially, wild birds infected farm chickens, and then chickens infected wild birds. Furthermore, wild birds and chickens infect horses, which in turn infect dogs. In addition, chickens also directly
\begin{table}
\begin{tabular}{|l|l|l|} \hline
**Parameter** & **Description** & **Default value** \\ \hline \(|V|\) & Number of species [1] & \([5,\ldots,20]\) \\ \hline \(|P_{i}|\) & Number of individuals in the \(i\)-th species’ population [1] & \([500,\ldots,5000]\) \\ \hline \(|E_{1}|\) & Number of prey-predator connections [1] & \([0.05|V|,\ldots,0.5|V|]\) \\ \hline \(|E_{2}|\) & Number of cross-infection connections [1] & \([0.05|V|,\ldots,0.5|V|]\) \\ \hline \(|M_{i}|\) & Number of strains for the \(i\)-th species [1] & \([0,1,2,3,4]\) \\ \hline \(T\) & Number of time steps in days \([t]\) & \(365\) \\ \hline \(\beta^{i}_{k,J}\) & The infection rate of species \(i\) for strain \(k\) for an individual with recovery history \(J\) [1] & \([0.05,\ldots,0.15]\cdot|J|/|M_{i}|\) \\ \hline \(X^{i}_{k,J}\) & The rate of the strain property \(X\in\{\gamma,\phi,\xi,\psi\}\) of species \(i\) for strain \(k\) for an individual with recovery history \(J\) [1] & \([0.01,\ldots,0.5]\) \\ \hline \end{tabular}
\end{table}
Table 1: The model’s parameters description and default values.
infect dogs and pigs [104]. Among the prey-predator interactions, coyotes eat wild birds, birds of prey, and horses [105]. In addition, birds of prey eat smaller wild birds [106]. In the second case, wild birds infect bats and vice versa. In addition, they infect sea otters [104]. As a prey-predator interaction, coyotes eat wild birds and sea otters [105]. In turn, sea otters eat small fish and crabs [107]. A schematic view of the two cases is shown in Fig. 6, where the solid and dashed lines indicate the cross-infection and prey-predator interactions, respectively.
In order to use the proposed MSMS model in these cases, one first needs to determine the model's parameter values and initial conditions. Unfortunately, these data are largely unavailable [108] and even partial observations of the dynamics differ greatly between locations and timeframes [109, 110, 111]. To overcome this challenge, we generate a large number of samples under the constraint that, for the same initial condition and prey-predator interactions, the ecosystem's stability after 365 time steps is 1. The motivation behind this constraint is to sample cases that are relatively stable if no pandemic is present, as is believed to be the case for short durations in most setups [71, 72, 73].
Fig. 7 presents a two-dimensional sensitivity analysis of the ecosystem's stability for the real-world cases, as a function of the average cross-infection rate (x-axis) and the average inner-species infection rate (y-axis). The results are shown as the average of \(n=25\) samples for each configuration, computed after \(T=365\) steps. A paired t-test between the resulting metrics of the "wild farm" and "near-shore ocean" cases shows that the two cases differ statistically significantly with \(p<0.0001\). One can see in both cases that, on
Figure 5: A sensitivity analysis of the ecosystem’s stability metric. The results are shown as mean \(\pm\) standard deviation of \(n=100\) repetitions.
average, a larger average cross-infection rate and a larger inner-species infection rate cause more instability in the ecosystem. However, this connection is non-linear, as linear fits on the two cases resulted in coefficients of determination of \(R^{2}=0.312\) and \(R^{2}=0.185\), respectively.
## 5 Discussion
In this study, we proposed a novel ecological-epidemiological model of multi-species, multi-strain dynamics that accounts for prey-predator interactions and cross-infection between an arbitrary number of species, using extended Lotka-Volterra and SEIRD models represented by ordinary differential equations operating on a graph. Considering the Avian Influenza A pathogen and its various strains for different species as a representative example, we evaluated the proposed model using an extensive agent-based simulation based on both synthetic and real-world graphs and data.
Starting with synthetic graphs and data, we examined the sensitivity of the ecosystem's long-term stability to the multi-strain pandemic using a wide range of cases. On average, as the number of species increases, the ecosystem's stability decreases, as shown in Fig. 5a. Moreover, the entropy of the system also increases,
Figure 6: A schematic view of the two real-world cases. The solid and dashed lines indicate the cross-infection and prey-predator interactions between species, respectively.
Figure 7: A two-dimensional sensitivity analysis for the real-world cases between the ecosystem’s stability and the combined influence of the average cross-infection rate and the average inner-species infection rate.
as indicated by the growing standard deviation. These results agree with biological observations in nature [112]. A similar outcome was obtained for the cross-infection density and the prey-predator density, as shown in Figs. 5b and 5c, respectively, aligning with the existing view that cross-infections in the predator population result in community instability among predators and their prey [113, 112, 114, 115]. Furthermore, as the number of strains in the environmental dynamics increases, the ecosystem's stability decreases. This outcome again agrees with prior literature examining other multi-strain pandemics applied to a single species [66, 67].
In addition, using real-world data, we examined two realistic cases, one of a wild farm and another of a near-shore ocean, presented in Fig. 6. We examined the influence of the average cross-infection rate and the average inner-species infection rate on the ecosystem's stability, as presented in Fig. 7. As one would expect, as these quantities increase, the ecosystem's stability decreases in a non-linear fashion. Comparing the cross-infection rates, the ecosystem appears more stable among the wild farm animals than among those found near the shore. This observation also holds when we consider the rate of inner infection among these animals. Furthermore, both ecosystems appear to have a higher rate of cross-infection than within-species infection, which makes the ecosystem less stable.
Taken jointly, the results indicate that the proposed MSMS model with its agent-based simulation could adequately represent multi-species multi-strain pandemic dynamics. Researchers can utilize the proposed model to conduct _in silico_ experiments, exploring different pandemic intervention policies for a wide range of configurations.
This study has several limitations that should be addressed in future work to further improve the biological and ecological accuracy of the proposed MSMS model. First, as no spatial component is taken into consideration, the current infection rates operate as an upper bound on the realistic infection rate [66], which results in over-pessimistic outcomes. Moreover, by considering spatial dynamics, one is able to capture the movement patterns of various species [116, 117, 118]. Thus, introducing a spatial component to the model would significantly increase its accuracy [119, 120, 121]. Second, many species alter their behavior over time, for example due to changes in water availability, thus directly influencing other species' behaviors [122, 123, 124]; examples include bird migration [125], bears' hibernation [126], and plants' blossoming [127]. Third, the proposed model uses constant epidemiological and ecological values. In practice, however, these values are dynamic and influenced by the time of year, changes in the strains through mutation, and other factors. As such, time-dependent or even stochastic values would make the model, presumably, even more realistic and accurate, at the cost of the feasibility of analytical analysis. Finally, as the strains in a pandemic are not static and new strains can appear through a mutation process in hosts, one can further extend the multi-strain pandemic model into a multi-mutation pandemic model [128].
## Declarations
### Funding
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
### Conflicts of interest/Competing interests
The authors have no financial or proprietary interests in any material discussed in this article.
### Data and materials availability
The data that has been used is presented in the manuscript with the relevant sources.
### Author Contributions
Ariel Alexi: Conceptualization, investigation, data curation, and writing - original draft.
Ariel Rosenfeld: Conceptualization, supervision, validation, and writing - review & editing.
Teddy Lazebnik: Conceptualization, formal analysis, methodology, software, data curation, methodology, project administration, visualization, supervision, and writing - original draft.
### Acknowledgements
The authors wish to thank Ziv Zemah Shamir for his ecological-related consulting.
|
2301.03922
|
Trajectorial dissipation of $Φ$-entropies for interacting particle
systems
|
A classical approach for the analysis of the longtime behavior of Markov
processes is to consider suitable Lyapunov functionals like the variance or
more generally $\Phi$-entropies. Via purely analytic arguments it can be shown
that these functionals are indeed non-increasing in time under quite general
assumptions on the process. In this paper,we complement these classical results
via a more probabilistic approach and show that dissipation is already present
on the level of individual trajectories for spatially-extended systems of
infinitely many interacting particles with arbitrary underlying geometry and
compact local spin spaces. This extends previous results from the setting of
finite-state Markov chains or diffusions in $\mathbb{R}^n$ to an
infinite-dimensional setting with weak assumptions on the dynamics.
|
Benedikt Jahnel, Jonas Köppl
|
2023-01-10T11:55:24Z
|
http://arxiv.org/abs/2301.03922v1
|
# Trajectorial dissipation of \(\Phi\)-entropies for interacting particle systems
###### Abstract.
A classical approach for the analysis of the longtime behavior of Markov processes is to consider suitable Lyapunov functionals like the variance or more generally \(\Phi\)-entropies. Via purely analytic arguments it can be shown that these functionals are indeed non-increasing in time under quite general assumptions on the process. In this paper, we complement these classical results via a more probabilistic approach and show that dissipation is already present on the level of individual trajectories for spatially-extended systems of infinitely many interacting particles with arbitrary underlying geometry and compact local spin spaces. This extends previous results from the setting of finite-state Markov chains or diffusions in \(\mathbb{R}^{n}\) to an infinite-dimensional setting with weak assumptions on the dynamics.
Key words and phrases: Interacting particle systems, phi-entropy, time-reversal, martingale representation. 2020 Mathematics Subject Classification: Primary 82C20; Secondary 60K35.
## 1. Introduction
There are many different techniques to study the long-time behavior of Markov processes that excel in different situations. One very common and powerful technique is the use of Lyapunov functionals, i.e., functionals that are monotone in time. An example of such a functional is the variance
\[\mathbf{Var}_{\mu}(f):=\mathbb{E}_{\mu}[f^{2}]-\mathbb{E}_{\mu}[f]^{2},\quad f \in L^{2}(\mu),\]
where \(\mu\) is an invariant measure for some Markov process with semigroup \((P_{t})_{t\geq 0}\). If we now fix an observable \(f\) and consider the function
\[[0,\infty)\ni t\mapsto\mathbf{Var}_{\mu}(P_{t}f)\in[0,\infty),\]
then it is easy to see that this is non-increasing and under some further assumptions one can even show that it is strictly decreasing for all non-constant observables \(f\). This whole viewpoint, however, is purely based on functional analytic arguments and one does not even need to speak about the underlying Markov process itself to carry out the corresponding calculations. From a probabilistic point of view, this is somewhat unsatisfactory and we therefore want to specify this coarse and non-probabilistic approach with a finer, more probabilistic technique that allows us to obtain trajectorial versions of these results. Here, by trajectorial we mean results on the level of _single realizations_ of a stochastic process, as opposed to averaged quantities. Thereby our goal is to uncover more of the underlying probabilistic mechanisms behind the decay of variance, or more generally the decay of \(\Phi\)-entropies. For this, we will first briefly recall the notion of \(\Phi\)-entropies and then explain our main results and ideas with the help of the simple example of a continuous-time Markov chain on a finite state space. The rest of the article is then devoted to extending these ideas to the setting of spatially extended systems of infinitely many interacting particles as e.g. considered in [10].
### \(\Phi\)-entropies and their decay under Markovian dynamics
Let \(\Phi:I\to\mathbb{R}\) be a smooth and _convex_ function defined on a not necessarily bounded interval \(I\subset\mathbb{R}\). Let \((E,\mathcal{B}(E))\) be a Polish space equipped with its Borel \(\sigma\)-algebra and assume that \(\mu\) is a probability measure on \((E,\mathcal{B}(E))\). The \(\Phi\)-entropy functional is then defined on the set of \(\mu\)-integrable functions \(f:E\to I\) by
\[\mathbf{Ent}_{\mu}^{\Phi}(f):=\int_{E}\Phi(f)d\mu-\Phi\left(\int_{E}fd\mu \right)=\mathbb{E}_{\mu}\left[\Phi(f)\right]-\Phi\left(\mathbb{E}_{\mu}\left[ f\right]\right).\]
By Jensen's inequality one can immediately deduce that the \(\Phi\)-entropy functional takes its values in \(\mathbb{R}_{+}\cup\{+\infty\}\). Moreover, \(\mathbf{Ent}_{\mu}^{\Phi}(f)\) vanishes if its argument is constant and if \(\Phi\) is _strictly_ convex, then the converse is also true. For special choices of \(\Phi\) one can recover the classical variance and relative entropy functionals since we have
\[\mathbf{Ent}_{\mu}^{u\mapsto u^{2}}=\mathbf{Var}_{\mu},\quad\mathbf{Ent}_{\mu}^{u\mapsto u\log u}=h(\cdot|\mu).\]
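As a small illustration, the following sketch evaluates \(\mathbf{Ent}_{\mu}^{\Phi}(f)\) on a finite state space and recovers the variance (for \(\Phi(u)=u^{2}\)) and a relative-entropy-type quantity (for \(\Phi(u)=u\log u\), with \(f\) a probability density w.r.t. \(\mu\)); the numerical values are arbitrary examples of ours.

```python
# Sketch: the Phi-entropy functional on a finite state space.
import numpy as np

def phi_entropy(phi, f, mu):
    """Ent^Phi_mu(f) = E_mu[Phi(f)] - Phi(E_mu[f])."""
    f, mu = np.asarray(f, float), np.asarray(mu, float)
    return np.dot(mu, phi(f)) - phi(np.dot(mu, f))

mu = np.array([0.2, 0.3, 0.5])          # an arbitrary probability vector
f = np.array([0.5, 1.0, 1.2])           # a density w.r.t. mu: E_mu[f] = 1
print(phi_entropy(lambda u: u**2, f, mu))            # variance of f under mu
print(phi_entropy(lambda u: u * np.log(u), f, mu))   # relative entropy h(f*mu | mu)
```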
Now let \((X(t))_{t\geq 0}\) be a Markov process on our Polish space \(E\) with associated semigroup \((P_{t})_{t\geq 0}\) acting on \(C_{b}(E;\mathbb{R})\), the space of continuous and bounded real-valued functions on \(E\). Let us assume that there
exists an invariant probability measure \(\mu\) and denote by \(\mathscr{L}\) the generator of the semigroup \((P_{t})_{t\geq 0}\) with domain \(\mathrm{dom}(\mathscr{L})\subset C_{b}(E;\mathbb{R})\).
By invariance of \(\mu\) and Jensen's inequality one can now deduce that for all \(f\in C_{b}(E;\mathbb{R})\)
\[\mathbf{Ent}_{\mu}^{\Phi}(P_{t}f)=\mathbb{E}_{\mu}\left[\Phi(P_{t}f)\right]- \Phi\left(\mathbb{E}_{\mu}\left[P_{t}f\right]\right)\leq\mathbb{E}_{\mu}\left[ P_{t}\Phi(f)\right]-\Phi\left(\mathbb{E}_{\mu}\left[f\right]\right)=\mathbf{Ent}_{ \mu}^{\Phi}(f).\]
This tells us that the \(\Phi\)-entropy is non-increasing as a function of \(t\) and can be used as a Lyapunov function. More precisely, with purely analytic arguments, one can even deduce the following general result about the decay of \(\Phi\)-entropies.
**Proposition 1.1** (DeBruijn like property for Markov semigroups, [10]).: _Let \((X(t))_{t\geq 0}\) be a Markov process on a Polish space \(E\) equipped with its Borel \(\sigma\)-algebra \(\mathcal{B}(E)\) and let \((P_{t})_{t\geq 0}\) be the associated Markov semigroup with generator \(\mathscr{L}\). Assume that \(\mu\) is an invariant probability measure. Then, for any continuous and bounded function \(f:E\to I\) and any \(t>0\), it holds that_
\[\partial_{t}\ \mathbf{Ent}_{\mu}^{\Phi}(P_{t}f)=\mathbb{E}_{\mu}\left[\Phi^{ \prime}(P_{t}f)\mathscr{L}(P_{t}f)\right]\leq 0.\]
This result is classical, but we nevertheless recall its short analytic proof. We will later also provide a more probabilistic argument to obtain the same result in the context of interacting particle systems.
Proof.: The chain rule and the definition of the generator \(\mathscr{L}\) directly imply that
\[\partial_{t}\ \mathbf{Ent}_{\mu}^{\Phi}(P_{t}f)=\mathbb{E}_{\mu}\left[\Phi^{ \prime}(P_{t}f)\frac{d}{dt}(P_{t}f)\right]=\mathbb{E}_{\mu}\left[\Phi^{\prime }(P_{t}f)\mathscr{L}(P_{t}f)\right].\]
To see that the left-hand side is actually non-positive, it suffices to observe that the convexity of \(\Phi\) implies via Jensen's inequality for conditional expectations
\[\Phi(P_{t+s}g)\leq P_{t}(\Phi(P_{s}g))\]
for any \(s,t\geq 0\) and hence for all \(f\) we have
\[\mathbf{Ent}_{\mu}^{\Phi}(P_{t+s}f)\leq\mathbf{Ent}_{\mu}^{\Phi}(P_{s}f),\]
by invariance of \(\mu\).
By integrating one obtains the following classical corollary, which links exponential decay of \(\Phi\)-entropies and functional inequalities involving \(\Phi\)-entropies.
**Corollary 1.2**.: _In the setting of Proposition 1.1, the following two statements are equivalent._
1. _There exists a constant_ \(c>0\) _such that for all_ \(f\in\mathrm{dom}(\mathscr{L})\)__ \[\mathbf{Ent}_{\mu}^{\Phi}(f)\leq-c\mathbb{E}_{\mu}\left[\Phi^{\prime}(f) \mathscr{L}f\right].\]
2. _There exists a constant_ \(c>0\) _such that for all continuous and bounded_ \(f:E\to I\)__ \[\mathbf{Ent}_{\mu}^{\Phi}(P_{t}f)\leq e^{-\frac{t}{c}}\mathbf{Ent}_{\mu}^{\Phi} (f).\]
Note that, in the special case \(\Phi:u\mapsto u^{2}\), one recovers the Poincaré inequality
\[\mathbf{Var}_{\mu}(f)\leq-\frac{c}{2}\left\langle f,\mathscr{L}f\right\rangle_ {L^{2}(\mu)},\]
which is well-known to be equivalent to exponential \(L^{2}\) ergodicity, see e.g. [10]. For a more detailed review of \(\Phi\)-entropies and further results we refer the interested reader to [10].
### A finite state-space example for the trajectorial approach
As one can see, the results above can be obtained without even mentioning the underlying stochastic process and just dealing with the semigroup and its generator. We want to supplement this viewpoint with a more probabilistic approach that allows us to obtain a somewhat finer result on a trajectorial level, which also implies the classical results by taking expectations.
For simplicity, we will discuss the main ideas for the example of a continuous-time Markov chain on a finite state space. More precisely, let \((X_{t})_{t\geq 0}\) be a Markov chain on a finite set \(E\) with irreducible generator \(\mathscr{L}\) and strictly positive invariant measure \(\mu\). Hence, the corresponding Markov semigroup is given by the matrix exponential \((e^{t\mathscr{L}})_{t\geq 0}\). Denote the underlying probability space by \((\Omega,\mathcal{A},\mathbb{P})\) and assume that \(X_{0}\sim\mu\) under \(\mathbb{P}\).
It is easy to check that for all bounded \(f:[0,\infty)\times E\to\mathbb{R}\) such that for all \(x\in E\) the partial derivatives \(\partial_{t}f(\cdot,x)\) are continuous and bounded, the process defined by
\[f(t,X_{t})-\int_{0}^{t}(\partial_{s}+\mathscr{L})f(s,X_{s})ds,\quad t\geq 0, \tag{1.1}\]
is a martingale w.r.t. the canonical filtration generated by \((X_{t})_{t\geq 0}\), see e.g. [11, Lemma IV.4.20].
If we now fix a finite time horizon \(T>0\) and consider the time-reversal \((\hat{X}_{t})_{0\leq t\leq T}\) of \((X_{t})_{t\geq 0}\), where \(\hat{X}_{t}=X_{T-t}\), then under \(\mathbb{P}\) the time-reversed process is again a time-homogeneous Markov process with generator \(\hat{\mathscr{L}}\), where
\[\hat{\mathscr{L}}(x,y)=\frac{\mu(y)}{\mu(x)}\mathscr{L}(y,x).\]
A short calculation now shows that, for each bounded \(g:E\to\mathbb{R}\) and \(T>0\), the process \((P_{T-s}g(\hat{X}_{s}))_{0\leq s\leq T}\) is a \(((\hat{\mathcal{F}}_{t})_{0\leq t\leq T},\mathbb{P})\)-martingale, where \(\hat{\mathcal{F}}_{t}=\sigma(X_{T-s}:\ 0\leq s\leq t)\). Indeed, we can use the chain rule to calculate
\[\partial_{t}P_{T-t}g(x)=-\hat{\mathscr{L}}P_{T-t}g(x),\]
so the correction term in (1.1) vanishes. Note that it is crucial to use the time-reversed process here, since the correction term does not cancel out if one uses the forward process.
By convexity this directly implies that the time-reversed trajectorial \(\Phi\)-entropy, i.e., the process defined by
\[\Phi(P_{T-s}g(X(T-s))),\quad 0\leq s\leq T,\]
is a submartingale. This stochastic monotonicity can be seen as a trajectorial version of the classical \(\Phi\)-entropy decay and after taking expectations with respect to \(\mathbb{P}\), one obtains the classical results as in Proposition 1.1. Therefore, one can interpret these trajectorial results as the probabilistic version of the decay of \(\Phi\)-entropies.
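The finite state-space picture above can also be checked numerically. The sketch below builds a small generator \(\mathscr{L}\), its stationary measure \(\mu\), the time-reversal \(\hat{\mathscr{L}}(x,y)=\frac{\mu(y)}{\mu(x)}\mathscr{L}(y,x)\), and verifies that \(\mathbf{Var}_{\mu}(P_{t}f)\) is non-increasing along \(P_{t}=e^{t\mathscr{L}}\); the generator entries and the observable are arbitrary illustrative values, not taken from the text.

```python
# Sketch: 3-state chain, time-reversal generator, and monotone variance decay.
import numpy as np
from scipy.linalg import expm

L = np.array([[-1.0, 0.7, 0.3],
              [0.2, -0.5, 0.3],
              [0.4, 0.6, -1.0]])            # irreducible generator: rows sum to zero

# stationary measure: normalized left null vector of L
w, V = np.linalg.eig(L.T)
mu = np.real(V[:, np.argmin(np.abs(w))])
mu = mu / mu.sum()

L_hat = (mu[None, :] / mu[:, None]) * L.T    # time-reversal: L_hat(x,y) = mu(y) L(y,x) / mu(x)
assert np.allclose(L_hat.sum(axis=1), 0.0)   # L_hat is again a generator

f = np.array([0.0, 1.0, 3.0])
def var_mu(g):                               # Phi(u) = u^2, i.e. the variance functional
    return mu @ (g**2) - (mu @ g) ** 2

ents = [var_mu(expm(t * L) @ f) for t in np.linspace(0.0, 5.0, 11)]
assert all(a >= b - 1e-12 for a, b in zip(ents, ents[1:]))   # non-increasing in t
print(np.round(ents, 4))
```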
The main work is now to establish that a similar argument can also be used to treat infinite-dimensional systems like the interacting particle systems we consider. To the best of our knowledge, the first results of this kind, in the context of diffusions in \(\mathbb{R}^{n}\), were obtained in [11]. More recently, starting with [10], these results have been extended to more and more classes of Markov processes, including continuous-time Markov chains on countable state spaces, see [13]. The works [10] and [14] are also in a similar spirit.
The setting will be made precise in Section 2, but roughly speaking, we consider continuous-time Markov jump processes on general configuration spaces \(\Omega=\Omega_{0}^{S}\), where \(S\) is an arbitrary countable set and \(\Omega_{0}\) is a compact Polish space. We will refer to the elements of \(S\) as _sites_ and call \(\Omega_{0}\) the _local state-space_. In most examples considered in the literature, \(S\) is the vertex set of some graph like the \(d\)-dimensional hypercubic lattice \(\mathbb{Z}^{d}\), a tree or the Cayley graph of a group. This underlying spatial geometry dictates which particles can interact with each other and we are therefore not in the setting of mean-field systems but in an infinite-dimensional setting. This of course brings with it its own set of technical difficulties which need to be dealt with for making the time-reversal arguments work.
The main technical difficulties come from making sure that the time-reversal is again a well-defined interacting particle system and from obtaining a description of its generator. This is made possible by assuming some local regularity of the local conditional distributions of the time-stationary measure \(\mu\). Namely, by the assumption that \(\mu\) is actually a Gibbs measure with respect to a quasilocal specification that additionally satisfies a certain smoothness condition. This condition is e.g. satisfied if the specification is given in terms of a potential \(\Phi=(\Phi_{B})_{B\Subset S}\) such that
\[\sup_{x\in S}\sum_{B\Subset S:\ B\ni x}\left|B\right|\left\|\Phi_{B}\right\|_{\infty}<\infty.\]
Note that this condition is for example satisfied for any translation-invariant finite-range potential, so our theory applies to a fairly large class of models.
### Organization of the manuscript
The rest of this article is organized as follows. We will first collect the necessary notation and formulate our main results in Section 2. Then, as a first step, we investigate the time-reversal of interacting particle systems in equilibrium in Section 3 with the main goal of obtaining an explicit representation of the (formal) generator of the time-reversed dynamics. In Section 4, we will then apply these results to establish pathwise properties of general \(\Phi\)-entropy functionals and derive the classical DeBrujin-like decay property as a corollary.
## 2. Setting and main results
Let \((\Omega_{0},\mathcal{B}_{0})\) be a compact Polish space equipped with its Borel \(\sigma\)-algebra and \(\lambda_{0}\) a probability measure on \((\Omega_{0},\mathcal{B}_{0})\), which will serve as our reference measure. We will consider Markovian dynamics on the configuration space \(\Omega=\Omega_{0}^{S}\), where \(S\) is some countable set whose elements we will refer to as _sites_. In most applications this will be the set of vertices of some graph, e.g. \(\mathbb{Z}^{d}\) or a tree. We equip \(\Omega\) with the product topology and corresponding Borel \(\sigma\)-algebra \(\mathcal{F}\). Note that \(\mathcal{F}\) coincides with the product \(\sigma\)-algebra \(\otimes_{x\in S}\mathcal{B}_{0}\). For \(\Delta\subset S\) we will also write \(\Omega_{\Delta}:=\Omega_{0}^{\Delta}\) for the set of partial configurations. We will also equip
\(\Omega_{\Delta}\) with the product \(\sigma\)-algebra and the probability measure \(\lambda_{\Delta}=\otimes_{x\in\Delta}\lambda_{0}\). For \(\Lambda\subset S\), let \(\mathcal{F}_{\Lambda}\) be the sub-\(\sigma\)-algebra of \(\mathcal{F}\) that is generated by the projections \(\omega\mapsto\omega_{\Delta}\in\Omega_{\Delta}\) for \(\Delta\Subset\Lambda\), where we write \(\Subset\) to signify that a set is a _finite_ subset of another set. For \(\Delta\subset S\) and (partial) configurations \(\eta_{\Delta^{c}}\in\Omega_{\Delta^{c}}\) and \(\xi_{\Delta}\in\Omega_{\Delta}\), we will write \(\xi_{\Delta}\eta_{\Delta^{c}}\) for the configuration that is defined on all of \(S\) and agrees with \(\eta_{\Delta^{c}}\) on \(\Delta^{c}\) and with \(\xi_{\Delta}\) on \(\Delta\). For a topological space \(E\), we will denote its Borel \(\sigma\)-algebra by \(\mathcal{B}(E)\) and the space of continuous real-valued functions on \(E\) by \(C(E)\). The space of non-negative measures on \(E\), or more precisely on \(\mathcal{B}(E)\), will be denoted by \(\mathcal{M}(E)\) and is equipped with the topology of weak-convergence. The total variation distance on \(\mathcal{M}(E)\) will be denoted by \(\left\|\cdot\right\|_{\mathrm{TV}}\).
### Interacting particle systems and Gibbs measures
#### 2.1.1. Interacting particle systems
We will consider time-continuous Markovian dynamics on \(\Omega\), namely interacting particle systems characterized by time-homogeneous generators \(\mathscr{L}\) with domain \(\mathrm{dom}(\mathscr{L})\subset C(\Omega)\) and the associated Markovian semigroup \((P_{t})_{t\geq 0}\) on \(C(\Omega)\). For interacting particle systems we adopt the notation and exposition of the standard reference [13, Chapter I].
In our setting, the generator \(\mathscr{L}\) is given by a collection of transition measures \((c_{\Delta}(\cdot,d\xi))_{\Delta\Subset S}\) in finite volumes \(\Delta\Subset S\), i.e., mappings
\[\Omega\ni\eta\mapsto c_{\Delta}(\eta,d\xi_{\Delta})\in\mathcal{M}(\Omega_{ \Delta}).\]
These transition measures can be interpreted as the infinitesimal rates at which the particles inside \(\Delta\) switch from the configuration \(\eta_{\Delta}\) to \(\xi_{\Delta}\), given that the rest of the system is currently in state \(\eta_{\Delta^{c}}\). The full dynamics of the interacting particle system is then given as the superposition of these local dynamics,
\[\mathscr{L}f(\eta)=\sum_{\Delta\Subset S}\int_{\Omega_{\Delta}}\left[f(\xi_{ \Delta}\eta_{\Delta^{c}})-f(\eta)\right]c_{\Delta}(\eta,d\xi_{\Delta}). \tag{2.1}\]
In [13, Chapter I] it is shown that the following conditions are sufficient to guarantee well-definedness.
**(L1)**: For each \(\Delta\Subset S\) the mapping \[\Omega\ni\eta\mapsto c_{\Delta}(\eta,d\xi_{\Delta})\in\mathcal{M}(\Omega_{ \Delta})\] is continuous.
**(L2)**: The total rate at which a single particle switches its state is uniformly bounded, i.e., \[\sup_{x\in S}\sum_{\Delta\ni x}\sup_{\eta\in\Omega}c_{\Delta}(\eta,\Omega_{ \Delta})<\infty.\]
**(L3)**: The total influence of all other particles on the transition rates of a single particle is uniformly bounded, i.e., \[M:=\sup_{x\in S}\sum_{\Delta\ni x}\sum_{y\neq x}\delta_{y}c_{\Delta}<\infty,\] where \[\delta_{y}c_{\Delta}:=\sup\left\{\left\|c_{\Delta}(\eta,\cdot)-c_{\Delta}(\xi,\cdot)\right\|_{\mathrm{TV}}:\ \eta_{y^{c}}=\xi_{y^{c}}\right\}.\]

Under these conditions, the core of the operator \(\mathscr{L}\) is given by \[D(\Omega):=\left\{f\in C(\Omega):\ \left\|\!\left|f\right\|\!\right\|:=\sum_{x\in S}\delta_{x}(f)<\infty\right\},\] where for \(x\in S\) \[\delta_{x}(f):=\sup_{\eta,\xi:\ \eta_{x^{c}}=\xi_{x^{c}}}\left|f(\eta)-f(\xi)\right|\] is the oscillation of a function \(f:\Omega\to\mathbb{R}\) at the site \(x\). In addition, one can show the following estimates for \(\mathscr{L}\) and the action of the semigroup \((P_{t})_{t\geq 0}\) generated by \(\mathscr{L}\). We will need these later on.
**Lemma 2.1**.: _Assume that the generator \(\mathscr{L}\) satisfies \(\mathbf{(L1)}-\mathbf{(L3)}\) and denote by \((P_{t})_{t\geq 0}\) the semigroup generated by \(\mathscr{L}\)._
1. _For_ \(f\in D(\Omega)\) _we have_ \(P_{t}f\in D(\Omega)\) _for all_ \(t\geq 0\) _and_ \[\left\|\!\left|P_{t}f\right\|\!\right\|\leq\exp\left((M-\varepsilon)t\right) \left\|\!\left|f\right\|\!\right\|.\]
2. _For all_ \(f\in D(\Omega)\) _it holds that_ \[\left\|\mathscr{L}f\right\|_{\infty}\leq\left(\sup_{x\in S}\sum_{\Delta\ni x} \sup_{\eta\in\Omega}c_{\Delta}(\eta,\Omega_{\Delta})\right)\left\|\!\left|f \right\|\!\right\|.\]
_The constants are explicitly given by_
\[M =\sup_{x\in S}\sum_{\Delta\ni x}\sum_{y\neq x}\delta_{y}c_{\Delta}<\infty,\] \[\varepsilon =\inf_{x\in S}\inf_{\eta,\zeta:\ \eta_{x^{c}}=\zeta_{x^{c}},\eta_{x} \neq\zeta_{x}}\sum_{\Delta\ni x}\left(\sum_{\xi_{\Delta}:\xi_{x}=\zeta_{x}}c_{ \Delta}(\eta,\xi_{\Delta})+\sum_{\xi_{\Delta}:\xi_{x}=\eta_{x}}c_{\Delta}(\zeta,\xi_{\Delta})\right).\]
Proof.: Combine the results from Proposition 3.2(a) and Theorem 3.9.(d) in [19, Chapter I].
For our purposes, the mere well-definedness of an interacting particle system is not sufficient and we need to assume some more regularity. All the additional assumptions we put will be used to make the generator of the time-reversal well-defined.
**(R1)**: For each \(\Delta\Subset S\) and \(\eta\in\Omega\) the measure \(c_{\Delta}(\eta,d\xi_{\Delta})\) is absolutely continuous w.r.t. the reference measure \(\lambda_{\Delta}(d\xi_{\Delta})\) with density \(c_{\Delta}(\eta,\cdot)\).
**(R2)**: For each \(\Delta\Subset S\) the map \[\Omega\times\Omega_{\Delta}\ni(\eta,\xi_{\Delta})\mapsto c_{\Delta}(\eta,\xi_ {\Delta})\in\mathbb{R}\] is continuous w.r.t. the product topology.
**(R3)**: The total rate of transition for a single site is uniformly bounded from above, \[\sup_{x\in S}\sum_{\Delta\ni x}\sup_{\eta\in\Omega}\left\|c_{\Delta}(\eta, \cdot)\right\|_{\infty}<\infty.\]
**(R4)**: The condition **(L3)** is satisfied, i.e., \[\sup_{x\in S}\sum_{\Delta\ni x}\sum_{y\neq x}\delta_{y}c_{\Delta}<\infty.\]
**(R5)**: There exists an \(R>0\) such that for all \(\Delta\Subset S\) with \(|\Delta|>R\) we have \[\sup_{\eta\in\Omega,\xi_{\Delta}\in\Omega_{\Delta}}c_{\Delta}(\eta,\xi_{\Delta })=0.\]
We will comment on where and why we need these assumptions and their connection to the classical conditions \(\mathbf{(L1)}-\mathbf{(L3)}\) at the end of Section 2.1.2, after we have stated our assumptions on the local conditional distribution of the time-stationary measure \(\mu\).
#### 2.1.2. Gibbs measures and the DLR formalism
We will mainly be interested in the situation where the process generated by \(\mathscr{L}\) admits a time-stationary measure \(\mu\) with a well-behaved local representation, namely that \(\mu\) is a Gibbs measure w.r.t. to a sufficiently nice specification \(\gamma\). Let us therefore first recall the general definition of a specification.
**Definition 2.2**.: _A specification \(\gamma=(\gamma_{\Lambda})_{\Lambda\Subset S}\) is a family of probability kernels \(\gamma_{\Lambda}\) from \(\Omega_{\Lambda^{c}}\) to \(\mathcal{M}_{1}(\Omega)\) that additionally satisfies the following properties._
1. _Each_ \(\gamma_{\Lambda}\) _is_ proper_, i.e., for all_ \(B\in\mathcal{F}_{\Lambda^{c}}\) _it holds that_ \[\gamma_{\Lambda}(B|\cdot)=\mathbf{1}_{B}(\cdot).\]
2. _The probability kernels are_ consistent _in the sense that if_ \(\Delta\subset\Lambda\Subset S\)_, then for all_ \(A\in\mathcal{F}\)__ \[\gamma_{\Lambda}\gamma_{\Delta}(A|\cdot)=\gamma_{\Lambda}(A|\cdot),\] _where the concatenation of two probability kernels is defined as usual via_ \[\gamma_{\Lambda}\gamma_{\Delta}(A|\eta)=\int_{\Omega}\gamma_{\Delta}(A|\omega )\gamma_{\Lambda}(d\omega|\eta).\]
For the existence and further properties of Gibbs measures with specification \(\gamma\) one needs to impose some conditions on the specification \(\gamma\). One sufficient condition for the existence of a Gibbs measure for a specification \(\gamma\) is _quasilocality_, see e.g. [10] or [11]. For the following sections we will need to assume some more regularity for the specification \(\gamma\). In particular, these assumptions will guarantee that \(\gamma\) is quasilocal.
**(S1)**: For each \(\Delta\Subset S\) and \(\eta\in\Omega\), the probability measure \(\gamma_{\Delta}(d\xi_{\Delta}|\eta)\) is absolutely continuous w.r.t. the reference measure \(\lambda_{\Delta}(d\xi_{\Delta})\) with density \(\gamma_{\Delta}(\cdot|\eta)\).
**(S2)**: For all \(\Delta\Subset S\), the map \[\Omega\times\Omega_{\Delta}\ni(\eta,\xi_{\Delta})\mapsto\gamma_{\Delta}(\xi_{ \Delta}|\eta_{\Delta^{c}})\in[0,\infty)\] is continuous (w.r.t. the product topology).
**(S3)**: The conditional densities on the single spin spaces are uniformly bounded away from zero and infinity, i.e.,
\[0<\delta\leq\inf_{x\in S}\inf_{\eta\in\Omega}\gamma_{x}(\eta_{x}|\eta_{x^{c}}) \leq\sup_{x\in S}\sup_{\eta\in\Omega}\gamma_{x}(\eta_{x}|\eta_{x^{c}})\leq \delta^{-1}<\infty.\]
**(S4)**: We have
\[\sup_{x\in S}\sum_{\Delta\ni x:\ c_{\Delta}\succ 0}\sum_{y\neq x}\delta_{y} \gamma_{\Delta}<\infty,\]
where
\[\delta_{y}\gamma_{\Delta}=\sup\left\{\left\|\gamma_{\Delta}(d\xi_{\Delta}| \eta)-\gamma_{\Delta}(d\xi_{\Delta}|\zeta)\right\|_{\mathrm{TV}}:\ \eta_{y^{c}}=\zeta_{y^{c}}\right\}.\]
**Remark 2.3**.: _Now that we have stated all of the conditions that we need, let us briefly comment on why and where we need them._
1. _Assumption_ (**R3**) _clearly implies_ (**L2**) _and together with_ (**R4**) _ensures that the interacting particle system is well-defined._
2. _Assumption_ (**R1**) _and_ (**S1**) _allow us to write down the local transition density of the time-reversal and_ (**S3**) _makes sure that we are not performing a division by zero._
3. _The further regularity assumptions_ (**R3**)_,_ (**R5**)_,_ (**S3**)_,_ (**S4**) _and the continuity assumptions_ (**R2**) _and_ (**S2**) _make sure that the local transition densities of the time-reversal also give rise to a well-defined interacting particle system._
4. _The quantity in_ (**S4**) _is similar to the classical Dobrushin uniqueness condition, see [6]. However, we only need it to be finite and not strictly smaller than one._
**Example 2.4**.: One particular class of models to which our theory can be applied are spin systems for which the specification \(\gamma\) is defined via a potential \(\Phi=(\Phi_{B})_{B\Subset S}\) that satisfies
\[\sup_{x\in S}\sum_{B\ni x}\left|B\right|\left\|\Phi_{B}\right\|_{\infty}<\infty,\]
and where the rates are of the form
\[c_{\Delta}(\eta,\xi_{\Delta})=\begin{cases}\exp\left(-\beta\sum_{B:B\cap \Delta\neq\emptyset}\Phi_{B}(\xi_{\Delta}\eta_{\Delta^{c}})\right),&\text{if }\left| \Delta\right|=1,\\ 0,&\text{otherwise}.\end{cases}\]
Instead of these single-site updates one could also consider updates in larger regions with a bounded diameter. Then the rates satisfy \(\mathbf{(R1)}-\mathbf{(R5)}\) and the specification satisfies \(\mathbf{(S1)}-\mathbf{(S4)}\), as one can see by using similar arguments as in the proof of [17, Lemma 6.28].
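To make Example 2.4 concrete, the following sketch computes single-site rates for an assumed nearest-neighbour Ising potential \(\Phi_{\{x,x+1\}}(\sigma)=-\sigma_{x}\sigma_{x+1}\) on a one-dimensional torus; the potential choice and the value of \(\beta\) are illustrative assumptions of this sketch, not prescribed by the example.

```python
# Sketch: single-site Glauber-type rates for an assumed 1d nearest-neighbour Ising potential.
import numpy as np

beta = 0.4  # inverse temperature (assumed)

def single_site_rate(eta, x, xi_x):
    """Rate of updating site x of the +/-1 configuration eta (1d torus) to the value xi_x:
    exp(-beta * sum of the potential terms touching x, evaluated in the updated configuration)."""
    n = len(eta)
    local_energy = -xi_x * (eta[(x - 1) % n] + eta[(x + 1) % n])   # the two bonds at x
    return np.exp(-beta * local_energy)

eta = np.array([1, 1, -1, 1, -1])
print(single_site_rate(eta, 2, +1), single_site_rate(eta, 2, -1))
```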
### The time-reversal of an interacting particle system
In the notation from above, assume that \(\mu\in\mathscr{G}(\gamma)\) is a Gibbs measure for a quasilocal specification \(\gamma\), i.e., assume that \(\mu\) satisfies the DLR equations
\[\mu(f)=\mu(\gamma_{\Lambda}(f|\cdot))\]
for all \(\Lambda\Subset S\) and bounded measurable functions \(f\). Further assume that \(\mu\) is time-stationary with respect to the Markovian dynamics with generator \(\mathscr{L}\). Denote the semigroup generated by \(\mathscr{L}\) by \((P_{t})_{t\geq 0}\) and the corresponding process on \(\Omega\) by \((\eta(t))_{t\geq 0}\). As discussed in Section 1.2, for each fixed \(T>0\) the process \((\eta(T-t))_{0\leq t\leq T}\) is again a time-homogeneous Markov process and under some suitable assumptions its associated semigroup has a generator \(\hat{\mathscr{L}}\). But what does this generator look like? For general Markov processes it is not possible to give a closed form expression, but in our case we can use the special structure of \(\mathscr{L}\) as the superposition of local dynamics in finite volumes. In each of these finite volumes, it is clear how the time-reversal w.r.t. \(\mu\) should look and we can hope that we can again write \(\hat{\mathscr{L}}\) as the superposition of finite volume processes. With this Ansatz, the probabilistic intuition dictates the educated guess
\[\hat{c}_{\Delta}(\eta,\xi_{\Delta})=c_{\Delta}(\xi_{\Delta}\eta_{\Delta^{c}}, \eta_{\Delta})\frac{\gamma_{\Delta}(\xi_{\Delta}|\eta_{\Delta^{c}})}{\gamma_ {\Delta}(\eta_{\Delta}|\eta_{\Delta^{c}})} \tag{2.2}\]
for the transition densities appearing in the generator of the time-reversed interacting particle system. However, at this stage, it is not obvious that the generator of the time-reversed system is again of the form (2.1) and has precisely these rates. For Markov processes on finite state spaces this is an easy calculation but we have to put in some more work, which will be carried out in Section 3. The main result there is the following.
**Proposition 2.5** (Time-reversal generator).: _Let the rates of an interacting particle system with generator \(\mathscr{L}\) satisfy \(\mathbf{(R1)}-\mathbf{(R5)}\) and assume that \(\mu\) is a time-stationary measure for the corresponding Markov semigroup \((P(t))_{t\geq 0}\) on \(C(\Omega)\) that is generated by \(\mathscr{L}\) such that \(\mu\) is a Gibbs measure w.r.t. a specification
\(\gamma\) that satisfies \(\mathbf{(S1)}-\mathbf{(S4)}\). Then, the time-reversed process has a generator \(\hat{\mathscr{L}}\) whose transition densities (w.r.t. the reference measure \(\lambda_{o}\)) are given by_
\[\hat{c}_{\Delta}(\eta,\xi_{\Delta})=c_{\Delta}(\xi_{\Delta}\eta_{\Delta^{c}}, \eta_{\Delta})\frac{\gamma_{\Delta}(\xi_{\Delta}|\eta_{\Delta^{c}})}{\gamma_{ \Delta}(\eta_{\Delta}|\eta_{\Delta^{c}})}.\]
The proof of this can be found at the end of Section 3.
### Trajectorial decay of \(\Phi\)-entropies
With this auxiliary result at hand, we can then obtain the following result, which describes the dissipation of general \(\Phi\)-entropies on a trajectorial level. Before we state the theorem, let us introduce some further notation to express the main equation in a cleaner way. The Bregman \(\Phi\)-divergence associated with \(\Phi:I\to\mathbb{R}\) is defined as
\[\mathrm{div}^{\Phi}(p|q):=\Phi(p)-\Phi(q)-(p-q)\Phi^{\prime}(q),\quad p,q\in I.\]
This is precisely the difference between the value of \(\Phi\) at the point \(p\) and the value of the first-order Taylor expansion of \(\Phi\) around \(q\), evaluated at \(p\); it is non-negative since we assumed that \(\Phi\) is convex. Bregman divergences are sometimes also referred to as Bregman distances, despite not being metrics, since they are in general not symmetric and do not satisfy the triangle inequality. They are nevertheless useful for applying techniques from optimization theory in more general contexts, e.g. in statistical learning theory [1].
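A minimal sketch of the Bregman \(\Phi\)-divergence for a scalar argument is given below, instantiated for \(\Phi(u)=u\log u\), where it reduces to \(p\log(p/q)-(p-q)\); the function names are ours.

```python
# Sketch: Bregman Phi-divergence div^Phi(p|q) = Phi(p) - Phi(q) - (p - q) Phi'(q).
import math

def bregman_div(phi, dphi, p, q):
    return phi(p) - phi(q) - (p - q) * dphi(q)

phi = lambda u: u * math.log(u)
dphi = lambda u: math.log(u) + 1.0
print(bregman_div(phi, dphi, 2.0, 1.0))   # = 2*log(2) - 1, approx. 0.386
```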
Note that we now have to be careful with the probability space and filtration we are working with, since we are talking about results on a trajectorial level.
**Theorem 2.6** (Trajectorial decay of \(\Phi\)-entropies).: _Let \((\Omega,\mathcal{A},\mathbb{P})\) be a probability space on which the interacting particle system \((\eta(s))_{s\geq 0}\) is defined. Denote the generator of the interacting particle system by \(\mathscr{L}\) and assume that its rates satisfy \(\mathbf{(R1)}\) - \(\mathbf{(R5)}\) and assume that under \(\mathbb{P}\) we have \(\eta_{0}\sim\mu\), where \(\mu\) is a time-stationary measure for the corresponding Markov semigroup \((P_{t})_{t\geq 0}\) on \(C(\Omega)\) that is generated by \(\mathscr{L}\) such that \(\mu\) is a Gibbs measure w.r.t. a specification \(\gamma\) that satisfies \(\mathbf{(S1)}-\mathbf{(S4)}\). Then, for any \(f\in D(\Omega)\) and \(T>0\), the process defined by_
\[L^{\Phi,f}(s):=\Phi(P_{T-s}f(\eta_{T-s})),\quad 0\leq s\leq T, \tag{2.3}\]
_is a \(((\hat{\mathcal{G}}_{t})_{0\leq t\leq T},\mathbb{P})\)-submartingale, where \(\hat{\mathcal{G}}_{t}=\sigma(\eta(T-s):\ 0\leq s\leq t)\). Its Doob-Meyer decomposition is given by_
\[L^{\Phi,f}(t)=M^{\Phi,f}(t)\\ +\int_{0}^{t}\sum_{\Delta\Subset S}\int_{\Omega_{\Delta}}\hat{c}_{\Delta}(\eta(T-s),\xi_{\Delta})\mathrm{div}^{\Phi}\left(P_{T-s}f(\xi_{\Delta}\eta_{\Delta^{c}}(T-s))\,|\,P_{T-s}f(\eta(T-s))\right)\lambda_{\Delta}(d\xi_{\Delta})ds. \tag{2.4}\]
The proof of this theorem can be found at the end of Section 4. As mentioned before, by taking expectations one recovers the classical DeBrujin-like decay of \(\Phi\)-entropies as stated in Proposition 1.1.
For the sake of concreteness, let us write out the result explicitly for one of the simplest cases, namely the trajectorial decay of variance, corresponding to \(\Phi:u\mapsto u^{2}\).
**Corollary 2.7** (Trajectorial decay of variance).: _In the setting of Theorem 2.6, we have that, for any \(f\in D(\Omega)\) and \(T>0\), the process defined by \((P_{T-s}f(\eta_{T-s}))^{2}\), \(0\leq s\leq T\), is a \(((\hat{\mathcal{G}}_{t})_{0\leq t\leq T},\mathbb{P})\)-submartingale, where \(\hat{\mathcal{G}}_{t}=\sigma(\eta(T-s):\ 0\leq s\leq t)\). Its Doob-Meyer decomposition is given by_
\[(P_{T-t}f(\eta(T-t)))^{2}=M^{f}(t)+\int_{0}^{t}\sum_{\Delta\Subset S}\int_{\Omega_{\Delta}}\hat{c}_{\Delta}(\eta(T-s),\xi_{\Delta})\left[P_{T-s}f(\xi_{\Delta}\eta_{\Delta^{c}}(T-s))-P_{T-s}f(\eta(T-s))\right]^{2}\lambda_{\Delta}(d\xi_{\Delta})ds.\]
### Outlook
Even though we were able to show the trajectorial decay for the relative entropy under quite general assumptions on the dynamics, these results are not fully satisfactory in the context of statistical mechanics. The usually more interesting Lyapunov functional in this setting is the so-called _relative entropy density_, as e.g. considered in [16], which is not only defined for measures \(\nu\) that are absolutely continuous w.r.t. \(\mu\). Therefore, it would be much more natural to work with this functional \(h(\cdot|\mu):\mathcal{M}^{inv}_{1}(\Omega)\to\mathbb{R}\) instead, and one can show that it is also a Lyapunov functional for interacting particle systems under quite general assumptions, see [11]. However, it is somewhat unclear how to even formulate conjectures about the trajectorial properties of this functional, since one cannot naively evaluate it pointwise.
As we already saw in the case of a continuous-time Markov chain on a finite state space, the main ingredient for this type of result is to obtain an explicit description of the generator of the time-reversed process. Another class of processes that could be of interest and is not covered by our results are systems which evolve continuously on their single spin spaces, as opposed to our pure-jump processes. The first example that comes to mind is of course a system of (infinitely many) interacting diffusions, e.g. indexed by \(\mathbb{Z}^{d}\). We expect that, if a given system of interacting diffusions admits a Gibbs measure
as an invariant probability measure, then a combination of the techniques in [13] and this article should yield analogous results - of course under some suitable regularity conditions on the coefficients.
## 3. The time-reversed interacting particle system and its generator
The main goal of this section is to prove Proposition 2.5, thereby establishing that the generator of the time-reversal is indeed given by \(\hat{\mathscr{L}}\). For this we will need to establish some regularity properties for the transition densities as defined in (2.2).
### Upper and lower bounds for the conditional densities
Since we will have to deal with quotients involving the conditional densities \(\gamma_{\Delta}\) on arbitrary finite subsets \(\Delta\Subset S\), we need to lift the upper and lower bounds from (**S3**) to this more general case. This is essentially the content of the following lemma.
**Lemma 3.1**.: _Let \(\gamma\) be a specification that satisfies_ (**S1**) _and_ (**S3**)_. Then, there exists a constant \(C\in(0,\infty)\) such that for all \(\Delta\Subset S\) we have the estimate_
\[e^{-C|\Delta|}\leq\inf_{\eta\in\Omega}\gamma_{\Delta}(\eta_{\Delta}|\eta_{ \Delta^{c}})\leq\sup_{\eta\in\Omega}\gamma_{\Delta}(\eta_{\Delta}|\eta_{ \Delta^{c}})\leq e^{C|\Delta|}.\]
_This constant is precisely given by \(C=|\log\delta|\)._
Proof.: For this, fix an enumeration \(i_{1},\ldots,i_{k}\) of the elements of \(\Delta\) and introduce the notation
\[[i_{j},i_{k}]:=\left\{i_{j},i_{j+1},\ldots,i_{k}\right\},\quad 1\leq j\leq k.\]
With this notation at hand, we can use the chain rule for conditional probability densities to write
\[\gamma_{\Delta}(\eta_{\Delta}|\eta_{\Delta^{c}})=\prod_{j=1}^{k}\gamma_{[i_{1},i_{j}]}(\eta_{i_{j}}|\eta_{[i_{j+1},i_{k}]}\eta_{\Delta^{c}}), \tag{3.1}\]
where \(\gamma_{[i_{1},i_{j}]}(\eta_{i_{j}}|\eta_{[i_{j+1},i_{k}]}\eta_{\Delta^{c}})\) is the marginal conditional density of the measure \(\gamma_{[i_{1},i_{j}]}(d\eta_{[i_{1},i_{j}]}|\eta_{[i_{j+1},i_{k}]}\eta_{\Delta^{c}})\) w.r.t. the site \(i_{j}\). But, using consistency of the specification \(\gamma\), we have
\[\gamma_{[i_{1},i_{j}]}(\eta_{i_{j}}|\eta_{[i_{j+1},i_{k}]}\eta_{\Delta^{c}})=\int\gamma_{[i_{1},i_{j}]}(d\xi_{[i_{1},i_{j}]}|\eta_{[i_{j+1},i_{k}]}\eta_{\Delta^{c}})\gamma_{i_{j}}(\eta_{i_{j}}|\xi_{[i_{1},i_{j-1}]}\eta_{[i_{j+1},i_{k}]}\eta_{\Delta^{c}}),\]
which is, by assumption, upper bounded by \(\delta^{-1}\) and lower bounded by \(\delta\). In conjunction with the representation (3.1) this implies the desired upper and lower bound where the constant \(C\) is explicitly given by \(C=|\log(\delta)|\).
As a corollary we now get the following estimate for the quotients that appear in the definition of transition density of the time-reversal (2.2).
**Lemma 3.2**.: _Let \(\Delta\Subset S\) and \(\gamma\) be a specification that satisfies_ (**S1**) _and_ (**S3**)_. Then, for all \(\Delta\Subset S\), \(\eta\in\Omega\) and \(\xi_{\Delta}\in\Omega_{\Delta}\), we have_
\[0<e^{-2C|\Delta|}\leq\frac{\gamma_{\Delta}(\xi_{\Delta}|\eta_{\Delta^{c}})}{ \gamma_{\Delta}(\eta_{\Delta}|\eta_{\Delta^{c}})}\leq e^{2C|\Delta|}.\]
### The switching lemma
Now that we can be sure that the densities as in (2.2) are actually well-defined and we are not performing a division by zero, we can start showing that \(\hat{\mathscr{L}}\) is indeed the generator of the time-reversed process. The main technical tool will be the following lemma.
**Lemma 3.3**.: _Let the rates of a well-defined interacting particle system with generator \(\mathscr{L}\) satisfy_ (**R1**) _and assume that \(\mu\) is a time-stationary measure for \(\mathscr{L}\) and \(\mu\) is a Gibbs measure w.r.t. a specification \(\gamma\) that satisfies_ (**S1**) _and_ (**S3**)_. Then, we have for all bounded and measurable \(f,g:\Omega\to\mathbb{R}\) and \(\Delta\Subset S\) that_
\[\int_{\Omega_{\Delta}}\int_{\Omega}c_{\Delta}(\omega,\xi_{\Delta})f(\omega)g( \xi_{\Delta}\omega_{\Delta^{c}})\mu(d\omega)\lambda_{\Delta}(d\xi_{\Delta})= \int_{\Omega_{\Delta}}\int_{\Omega}\hat{c}_{\Delta}(\omega,\xi_{\Delta})f(\xi_ {\Delta}\omega_{\Delta^{c}})g(\omega)\mu(d\omega)\lambda_{\Delta}(d\xi_{\Delta}), \tag{3.2}\]
_where_
\[\hat{c}_{\Delta}(\eta,\xi_{\Delta})=c_{\Delta}(\xi_{\Delta}\eta_{\Delta^{c}}, \eta_{\Delta})\frac{\gamma_{\Delta}(\xi_{\Delta}|\eta_{\Delta^{c}})}{\gamma_{ \Delta}(\eta_{\Delta}|\eta_{\Delta^{c}})}.\]
To keep the notation for conditional expectations in the upcoming proof simple, we will denote integration with respect to \(\mu\) by \(\mathbb{E}[\cdot]\).
Proof.: As a first step, note that for fixed \(\Delta\Subset S\) and \(\xi_{\Delta}\in\Omega_{\Delta}\) the maps
\[\Omega\ni\omega\mapsto g(\xi_{\Delta}\omega_{\Delta^{c}})\in\mathbb{R},\quad \Omega\ni\omega\mapsto f(\xi_{\Delta}\omega_{\Delta^{c}})\in\mathbb{R},\]
are \(\mathcal{F}_{\Delta^{c}}\)-measurable. Therefore, we can use that \(\gamma\) is the local conditional distribution of \(\mu\) and the definition of the rates \(\hat{c}\) to obtain the \(\mu\)-almost-sure identity
\[\mathbb{E}\left[c_{\Delta}(\cdot,\xi_{\Delta})f(\cdot)g(\xi_{\Delta}\cdot_{\Delta^{c}})\big{|}\mathcal{F}_{\Delta^{c}}\right](\omega) =g(\xi_{\Delta}\omega_{\Delta^{c}})\mathbb{E}\left[c_{\Delta}(\cdot,\xi_{\Delta})f(\cdot)\big{|}\mathcal{F}_{\Delta^{c}}\right](\omega)\] \[=g(\xi_{\Delta}\omega_{\Delta^{c}})\int_{\Omega_{\Delta}}\gamma_{\Delta}(\zeta_{\Delta}|\omega_{\Delta^{c}})c_{\Delta}(\zeta_{\Delta}\omega_{\Delta^{c}},\xi_{\Delta})f(\zeta_{\Delta}\omega_{\Delta^{c}})\lambda_{\Delta}(d\zeta_{\Delta})\] \[=g(\xi_{\Delta}\omega_{\Delta^{c}})\int_{\Omega_{\Delta}}\gamma_{\Delta}(\xi_{\Delta}|\omega_{\Delta^{c}})\hat{c}_{\Delta}(\xi_{\Delta}\omega_{\Delta^{c}},\zeta_{\Delta})f(\zeta_{\Delta}\omega_{\Delta^{c}})\lambda_{\Delta}(d\zeta_{\Delta}).\]
If we now integrate over \(\xi_{\Delta}\), exchange the order of integration (via Fubini) and apply the same arguments in reverse - with \(f\) taking the role of \(g\) and vice versa - we get
\[\int_{\Omega_{\Delta}}\mathbb{E}\left[c_{\Delta}(\cdot,\xi_{\Delta})f(\cdot)g(\xi_{\Delta}\cdot_{\Delta^{c}})\big{|}\mathcal{F}_{\Delta^{c}}\right](\eta)\lambda_{\Delta}(d\xi_{\Delta})=\int_{\Omega_{\Delta}}\mathbb{E}\left[\hat{c}_{\Delta}(\cdot,\zeta_{\Delta})f(\zeta_{\Delta}\cdot_{\Delta^{c}})g(\cdot)\big{|}\mathcal{F}_{\Delta^{c}}\right](\eta)\lambda_{\Delta}(d\zeta_{\Delta}).\]
By integrating both sides with respect to \(\mu\), exchanging the order of integration, and applying the law of total expectation we obtain
\[\int_{\Omega_{\Delta}}\int_{\Omega}c_{\Delta}(\omega,\xi_{\Delta})f(\omega)g(\xi_{\Delta}\omega_{\Delta^{c}})\mu(d\omega)\lambda_{\Delta}(d\xi_{\Delta})=\int_{\Omega_{\Delta}}\int_{\Omega}\hat{c}_{\Delta}(\omega,\zeta_{\Delta})f(\zeta_{\Delta}\omega_{\Delta^{c}})g(\omega)\mu(d\omega)\lambda_{\Delta}(d\zeta_{\Delta}),\]
as desired.
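The finite-state analogue of this computation is easy to check numerically. The following toy sketch (not part of the argument above; all names are illustrative) builds a random continuous-time Markov chain with generator matrix `Q`, computes the reversed rates `Q_hat(x, y) = Q(y, x) * mu(y) / mu(x)`, the finite-state counterpart of \(\hat{c}_{\Delta}\), and verifies the switching identity in this setting.

```python
import numpy as np

# Toy finite-state analogue (illustrative only): time-reversed rates
# Q_hat(x, y) = Q(y, x) * mu(y) / mu(x) and the switching identity
# sum_x mu(x) f(x) (Q g)(x) = sum_x mu(x) g(x) (Q_hat f)(x).
rng = np.random.default_rng(0)
n = 5
Q = rng.random((n, n))
np.fill_diagonal(Q, 0.0)
np.fill_diagonal(Q, -Q.sum(axis=1))            # generator matrix: rows sum to zero

# stationary distribution: normalised left null vector of Q
w, v = np.linalg.eig(Q.T)
mu = np.real(v[:, np.argmin(np.abs(w))])
mu /= mu.sum()

Q_hat = (Q.T * mu[None, :]) / mu[:, None]       # Q_hat[x, y] = Q[y, x] * mu[y] / mu[x]

f, g = rng.random(n), rng.random(n)
lhs = np.sum(mu * f * (Q @ g))
rhs = np.sum(mu * g * (Q_hat @ f))
print(np.isclose(lhs, rhs))                     # True
```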
### Regularity of the time-reversal rates
To make sure that \(\hat{\mathscr{L}}\) is actually the generator of a well-defined interacting particle system we now show that the collection of transition measures \((\hat{c}_{\Delta}(\cdot,\cdot))_{\Delta\Subset S}\) satisfies the three conditions \(\mathbf{(L1)}-\mathbf{(L3)}\).
**Proposition 3.4**.: _Let the rates of an interacting particle system with generator \(\mathscr{L}\) satisfy \(\mathbf{(R1)}-\mathbf{(R5)}\) and assume that \(\mu\) is a time-stationary measure for \(\mathscr{L}\) and such that \(\mu\) is a Gibbs measure w.r.t. a specification \(\gamma\) that satisfies \(\mathbf{(S1)}-\mathbf{(S4)}\). Then, the transition measures \(\left(\hat{c}_{\Delta}(\cdot,d\xi_{\Delta})\right)_{\Delta\Subset\mathbb{Z}^{d}}\) with \(\lambda_{\Delta}\)-densities given by_
\[\hat{c}_{\Delta}(\eta,\xi_{\Delta})=c_{\Delta}(\xi_{\Delta}\eta_{\Delta^{c}}, \eta_{\Delta})\frac{\gamma_{\Delta}(\xi_{\Delta}|\eta_{\Delta^{c}})}{\gamma_{ \Delta}(\eta_{\Delta}|\eta_{\Delta^{c}})}\]
_satisfy the conditions \(\mathbf{(L1)}-\mathbf{(L3)}\)._
Proof.: _Ad \(\mathbf{(L1)}\):_ This follows from the continuity assumptions \(\mathbf{(R2)}\) and \(\mathbf{(S2)}\), together with assumption \(\mathbf{(S3)}\) and Lemma 3.2.
_Ad \(\mathbf{(L2)}\):_ Note that for fixed \(\Delta\Subset S\), \(\xi_{\Delta}\in\Omega_{\Delta}\) and \(\eta\in\Omega\) we have by Lemma 3.2 and assumption \(\mathbf{(R5)}\)
\[|\hat{c}_{\Delta}(\eta,\xi_{\Delta})|=\left|c_{\Delta}(\xi_{\Delta}\eta_{\Delta^{c}},\eta_{\Delta})\frac{\gamma_{\Delta}(\xi_{\Delta}|\eta_{\Delta^{c}})}{\gamma_{\Delta}(\eta_{\Delta}|\eta_{\Delta^{c}})}\right|\leq\frac{1}{\delta}e^{R}c_{\Delta}(\xi_{\Delta}\eta_{\Delta^{c}},\eta_{\Delta}).\]
So we get
\[\sup_{\eta\in\Omega}\hat{c}_{\Delta}(\eta,\Omega_{\Delta})=\sup_{ \eta\in\Omega}\int_{\Omega_{\Delta}}\hat{c}_{\Delta}(\eta,\xi_{\Delta})\lambda _{\Delta}(d\xi_{\Delta}) \leq\frac{1}{\delta}e^{R}\sup_{\eta\in\Omega}\int_{\Omega_{\Delta}}c_{ \Delta}(\xi_{\Delta}\eta_{\Delta^{c}},\eta_{\Delta})\lambda_{\Delta}(d\xi_{ \Delta})\] \[\leq\frac{1}{\delta}e^{R}\sup_{\eta\in\Omega}\left\|c_{\Delta}(\eta, \cdot)\right\|_{\infty}.\]
Therefore, assumption \(\mathbf{(R3)}\) implies that \(\mathbf{(L2)}\) is also satisfied.
_Ad \(\mathbf{(L3)}\):_ Fix \(\Delta\Subset S\), \(y\in S\) and two configurations \(\eta^{1},\eta^{2}\) that only disagree at \(y\). Then, for any \(\xi_{\Delta}\in\Omega_{\Delta}\) we have
\[\left|\hat{c}_{\Delta}(\eta^{1},\xi_{\Delta})-\hat{c}_{\Delta}( \eta^{2},\xi_{\Delta})\right| =\left|c_{\Delta}(\xi_{\Delta}\eta_{\Delta^{c}}^{1},\eta_{\Delta}^{ 1})\frac{\gamma_{\Delta}(\xi_{\Delta}|\eta_{\Delta^{c}}^{1})}{\gamma_{\Delta}( \eta_{\Delta}^{2}|\eta_{\Delta^{c}}^{1})}-c_{\Delta}(\xi_{\Delta}\eta_{\Delta^{c }}^{2},\eta_{\Delta}^{2})\frac{\gamma_{\Delta}(\xi_{\Delta}|\eta_{\Delta^{c}}^{2})}{ \gamma_{\Delta}(\eta_{\Delta}^{2}|\eta_{\Delta^{c}}^{2})}\right|\] \[\leq\left|c_{\Delta}(\xi_{\Delta}\eta_{\Delta^{c}}^{1},\eta_{ \Delta}^{1})\right|\left|\frac{\gamma_{\Delta}(\xi_{\Delta}|\eta_{\Delta^{c}}^{1})}{ \gamma_{\Delta}(\eta_{\Delta}^{1}|\eta_{\Delta^{c}}^{1})}-\frac{\gamma_{\Delta}( \xi_{\Delta}|\eta_{\Delta^{c}}^{2})}{\gamma_{\Delta}(\eta_{\Delta}^{2}|\eta_{ \Delta^{c}}^{2})}\right|\] \[\quad+\left|\frac{\gamma_{\Delta}(\xi_{\Delta}|\eta_{\Delta^{c}}^{2 })}{\gamma_{\Delta}(\eta_{\Delta}^{2}|\eta_{\Delta^{c}}^{2})}\right|\left|c_{ \Delta}(\xi_{\Delta}\eta_{\Delta^{c}}^{1},\eta_{\Delta}^{1})-c_{\Delta}(\xi_{ \Delta}\eta_{\Delta^{c}}^{2},\eta_{\Delta}^{2})\right|.\]
To estimate this further, we will have to make a case distinction over whether the site \(y\) is contained in \(\Delta\) or not. If \(y\) is contained in \(\Delta\), then we can naively use Lemma 3.2 and assumption \(\mathbf{(R5)}\) to obtain the rough estimate
\[\left|\hat{c}_{\Delta}(\eta^{1},\xi_{\Delta})-\hat{c}_{\Delta}(\eta^{2},\xi_{ \Delta})\right|\leq 4\frac{1}{\delta}e^{R}\sup_{\eta\in\Omega,\xi_{\Delta}\in\Omega_{\Delta}}|c_{ \Delta}(\eta,\xi_{\Delta})|\leq\frac{4e^{R}K(c)}{\delta}.\]
In the case where \(y\) is not contained in \(\Delta\), we can (and have to) be a bit more precise. Via the elementary algebraic rule
\[ac-bd=\frac{1}{2}\left[(a-b)(c+d)+(a+b)(c-d)\right],\]
and Lemma 3.2 plus assumption (**R5**) one obtains
\[\left|c_{\Delta}(\xi_{\Delta}\eta_{\Delta^{c}}^{1},\eta_{\Delta}^{1})\right|\left|\frac{\gamma_{\Delta}(\xi_{\Delta}|\eta_{\Delta^{c}}^{1})}{\gamma_{\Delta}(\eta_{\Delta}^{1}|\eta_{\Delta^{c}}^{1})}-\frac{\gamma_{\Delta}(\xi_{\Delta}|\eta_{\Delta^{c}}^{2})}{\gamma_{\Delta}(\eta_{\Delta}^{2}|\eta_{\Delta^{c}}^{2})}\right|+\left|\frac{\gamma_{\Delta}(\xi_{\Delta}|\eta_{\Delta^{c}}^{2})}{\gamma_{\Delta}(\eta_{\Delta}^{2}|\eta_{\Delta^{c}}^{2})}\right|\left|c_{\Delta}(\xi_{\Delta}\eta_{\Delta^{c}}^{1},\eta_{\Delta}^{1})-c_{\Delta}(\xi_{\Delta}\eta_{\Delta^{c}}^{2},\eta_{\Delta}^{2})\right|\] \[=\frac{1}{2}\left|c_{\Delta}(\xi_{\Delta}\eta_{\Delta^{c}}^{1},\eta_{\Delta}^{1})\right|\frac{1}{\gamma_{\Delta}(\eta_{\Delta}^{1}|\eta_{\Delta^{c}}^{1})\gamma_{\Delta}(\eta_{\Delta}^{2}|\eta_{\Delta^{c}}^{2})}\Big{|}\left(\gamma_{\Delta}(\xi_{\Delta}|\eta_{\Delta^{c}}^{1})-\gamma_{\Delta}(\xi_{\Delta}|\eta_{\Delta^{c}}^{2})\right)\left(\gamma_{\Delta}(\eta_{\Delta}^{1}|\eta_{\Delta^{c}}^{1})+\gamma_{\Delta}(\eta_{\Delta}^{2}|\eta_{\Delta^{c}}^{2})\right)\] \[\qquad+\left(\gamma_{\Delta}(\xi_{\Delta}|\eta_{\Delta^{c}}^{1})+\gamma_{\Delta}(\xi_{\Delta}|\eta_{\Delta^{c}}^{2})\right)\left(\gamma_{\Delta}(\eta_{\Delta}^{2}|\eta_{\Delta^{c}}^{2})-\gamma_{\Delta}(\eta_{\Delta}^{1}|\eta_{\Delta^{c}}^{1})\right)\Big{|}+\left|\frac{\gamma_{\Delta}(\xi_{\Delta}|\eta_{\Delta^{c}}^{2})}{\gamma_{\Delta}(\eta_{\Delta}^{2}|\eta_{\Delta^{c}}^{2})}\right|\left|c_{\Delta}(\xi_{\Delta}\eta_{\Delta^{c}}^{1},\eta_{\Delta}^{1})-c_{\Delta}(\xi_{\Delta}\eta_{\Delta^{c}}^{2},\eta_{\Delta}^{2})\right|\] \[\leq\frac{1}{2\delta^{2}}e^{2R}K(c)K(\gamma)\left(\left|\gamma_{\Delta}(\xi_{\Delta}|\eta_{\Delta^{c}}^{1})-\gamma_{\Delta}(\xi_{\Delta}|\eta_{\Delta^{c}}^{2})\right|+\left|\gamma_{\Delta}(\eta_{\Delta}^{1}|\eta_{\Delta^{c}}^{1})-\gamma_{\Delta}(\eta_{\Delta}^{2}|\eta_{\Delta^{c}}^{2})\right|\right)+\frac{1}{\delta}e^{R}\left|c_{\Delta}(\xi_{\Delta}\eta_{\Delta^{c}}^{1},\eta_{\Delta}^{1})-c_{\Delta}(\xi_{\Delta}\eta_{\Delta^{c}}^{2},\eta_{\Delta}^{2})\right|.\]
Now, by integrating this pointwise difference of the densities over \(\xi_{\Delta}\), we obtain via all of the other assumptions that
\[\sup_{x\in S}\sum_{\Delta\ni x}\sum_{y\neq x}\delta_{y}\hat{c}_{\Delta}<\infty.\]
But this is precisely (**L3**) and the proof is finished.
With these two intermediate results at hand, we can now show that \(\hat{\mathscr{L}}\) is indeed the generator of the time-reversal of \((\eta_{t})_{t\geq 0}\) (w.r.t. the time-stationary measure \(\mu\)).
Proof of Proposition 2.5.: It only remains to show that for all \(f,g\in D(\Omega)\) we have
\[\int_{\Omega}f(\omega)\mathscr{L}g(\omega)\mu(d\omega)=\int_{\Omega}\left( \hat{\mathscr{L}}f\right)(\omega)g(\omega)\mu(d\omega),\]
since then the claimed time-reversal duality follows from Lemma A.4.
For this, we first note that it suffices to show that the duality relation for the generators holds for all local functions \(f,g:\Omega\to\mathbb{R}\). Indeed, if it holds for all pairs of local functions, we can then extend it to functions with bounded total oscillation by using the estimates from Lemma 2.1 and dominated convergence. Therefore, let \(f,g\) be two local functions and let \(\Lambda\Subset S\) be sufficiently large such that both \(f\) and \(g\) only depend on coordinates in \(\Lambda\). By first applying Lemma 3.3 and then using that \(\mu\) is time-stationary with respect to the Markovian dynamics generated by \(\mathscr{L}\), we see that
\[\int_{\Omega}f(\omega)\mathscr{L}g(\omega)\mu(d\omega)-\int_{\Omega}\left(\hat{\mathscr{L}}f(\omega)\right)g(\omega)\mu(d\omega)\] \[=\sum_{\Delta\cap\Lambda\neq\emptyset}\int_{\Omega_{\Delta}}\int_{\Omega}c_{\Delta}(\omega,\xi_{\Delta})[f\cdot g(\xi_{\Delta}\omega_{\Delta^{c}})-f\cdot g(\omega)]\mu(d\omega)\lambda_{\Delta}(d\xi_{\Delta})=\int_{\Omega}\mathscr{L}(f\cdot g)(\omega)\mu(d\omega)=0,\]
which finishes the proof.
## 4. Trajectorial decay of \(\Phi\)-entropies
In this section we use the time-reversed process and a martingale argument to prove Theorem 2.6.
### The time-dependent martingale property
The main technical tool will be the following lemma which can be seen as an extension of [10, Lemma IV.20.12] to our setting.
**Lemma 4.1**.: _Let \(\mathscr{L}\) be the generator of an interacting particle system \((\eta(s))_{s\geq 0}\) such that its transition rates satisfy \((\textbf{L1})-(\textbf{L3})\) and let \(\mu\) be a time-stationary measure w.r.t. \(\mathscr{L}\). Then, for all \(f:[0,\infty)\times\Omega\to\mathbb{R}\) such that_
1. \(f(\cdot,\eta)\in C^{1}([0,\infty))\) _for all_ \(\eta\in\Omega\) _and_
2. _for all_ \(T>0\) _it holds that_ \[\sup_{0\leq t\leq T}\left|\!\left|\!\left|f(t,\cdot)\right|\!\right|\!\right|<\infty,\]
_the process defined by_
\[f(t,\eta(t))-\int_{0}^{t}(\partial_{s}+\mathscr{L})f(s,\eta(s))ds\]
_is a martingale w.r.t. the filtration \(\mathcal{G}_{t}:=\sigma(\eta(u):0\leq u\leq t)\)._
The proof of this lemma is not difficult, but it is hard to find in the existing literature, so we give it in some detail.
Proof.: For functions \(f\) as above, we define
\[M(s):=f(s,\eta(s))-\int_{0}^{s}(\partial_{u}+\mathscr{L})f(u,\eta(u))du,\quad s\geq 0\]
and denote by \((P_{t})_{t\geq 0}\) the Markov semigroup generated by \((\partial_{s}+\mathscr{L})\). Then, for \(s<t\), the Markov property and the elementary identity
\[\frac{d}{dt}P_{t}=P_{t}(\partial_{t}+\mathscr{L})=(\partial_{t}+\mathscr{L})P _{t},\]
give us
\[\mathbb{E}\left[f(t,\eta(t))-\int_{0}^{t}(\partial_{u}+\mathscr{ L})f(u,\eta(u))du\Big{|}\mathcal{G}_{s}\right]\] \[=P_{t-s}f(s,\eta(s))-\int_{0}^{s}(\partial_{u}+\mathscr{L})f(u, \eta(u))du-\int_{s}^{t}P_{u-s}(\partial_{u}+\mathscr{L})f(s,\eta(s))du\] \[=P_{t-s}f(s,\eta(s))-\int_{0}^{s}(\partial_{u}+\mathscr{L})f(u, \eta(u))du-\int_{s}^{t}\frac{d}{du}P_{u-s}f(s,\eta(s))du\] \[=f(s,\eta(s))-\int_{0}^{s}(\partial_{u}+\mathscr{L})f(u,\eta(u))du.\]
This shows that the process \((M(s))_{s\geq 0}\) is indeed a martingale.
This abstract tool now lets us establish the analogue of the first step in the case of a finite state space considered in Section 1.2.
**Proposition 4.2**.: _Let \((\Omega,\mathcal{A},\mathbb{P})\) be a probability space on which the interacting particle system \((\eta(s))_{s\geq 0}\) is defined. Denote the generator of \((\eta(s))_{s\geq 0}\) by \(\mathscr{L}\), assume that the rates satisfy \((\mathbf{R1})-(\mathbf{R5})\), and assume that under \(\mathbb{P}\) we have \(\eta(0)\sim\mu\), where \(\mu\) is a time-stationary measure for the corresponding Markov semigroup \((P_{t})_{t\geq 0}\) on \(C(\Omega)\) generated by \(\mathscr{L}\) and at the same time a Gibbs measure w.r.t. a specification \(\gamma\) that satisfies \((\mathbf{S1})-(\mathbf{S4})\). Then, for all \(f\in D(\Omega)\) and \(T>0\), the process defined by_
\[P_{T-s}f(\eta(T-s)),\quad 0\leq s\leq T,\]
_is a \(((\hat{\mathcal{G}}_{t})_{0\leq t\leq T},\mathbb{P})\)-martingale, where \(\hat{\mathcal{G}}_{t}=\sigma(\eta(T-s):\ 0\leq s\leq t)\)._
Proof.: Note that by Lemma 2.1 we can apply Lemma 4.1 to the function
\[[0,T]\times\Omega\ni(s,\eta)\mapsto P_{T-s}f(\eta).\]
But since we have by the chain rule
\[\partial_{s}P_{T-s}f=-\mathscr{L}P_{T-s}f,\]
the correction term cancels out and we obtain the claimed martingale property.
### Trajectorial decay of \(\Phi\)-entropies
With this preliminary result in place, we can now come to the proof of our main result.
Proof of Theorem 2.6.: _Submartingale property:_ By Jensen's inequality and Proposition 4.2 we immediately see that the process \((L^{\Phi,f}(t))_{t\geq 0}\), as defined in (2.3), is a submartingale.
_Doob-Meyer decomposition:_ Here we want to apply Lemma 4.1 to the function
\[g:[0,T]\times\Omega\ni(s,\eta)\mapsto\Phi(P_{T-s}f)\in\mathbb{R}.\]
Via the chain rule we see that
\[\partial_{s}g(s,\cdot)=\partial_{s}\Phi(P_{T-s}f(\cdot))=-\Phi^{\prime}(P_{T- s}f(\cdot))\hat{\mathscr{L}}P_{T-s}f(\cdot).\]
Applying the generator \(\hat{\mathscr{L}}\) for fixed \(s\in[0,T]\) yields
\[\hat{\mathscr{L}}g(s,\eta)=\sum_{\Delta\Subset S}\int_{\Omega_{\Delta}}\hat{c}_{\Delta}(\eta,\xi_{\Delta})\left[\Phi(P_{T-s}f(\xi_{\Delta}\eta_{\Delta^{c}}))-\Phi(P_{T-s}f(\eta))\right]\lambda_{\Delta}(d\xi_{\Delta}).\]
By putting these two ingredients together and using the previously introduced notation for the Bregman \(\Phi\)-divergence we obtain
\[(\partial_{s}+\hat{\mathscr{L}})g(s,\eta)=\sum_{\Delta\Subset S}\int_{\Omega_{\Delta}}\hat{c}_{\Delta}(\eta,\xi_{\Delta})\mathrm{div}^{\Phi}(P_{T-s}f(\xi_{\Delta}\eta_{\Delta^{c}})|P_{T-s}f(\eta))\lambda_{\Delta}(d\xi_{\Delta}).\]
The claimed Doob-Meyer decomposition (2.4) of the submartingale \(L^{\Phi,f}\) now follows from Lemma 4.1.
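As a sanity check on the statement, the decay implied by the submartingale property can be verified explicitly in the finite-state setting, where \(P_{t}=e^{tQ}\) is a matrix exponential. The following sketch is purely illustrative and not part of the proof above.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative finite-state check: the submartingale property of
# (P_{T-s} f(eta_{T-s}))^2 implies that t -> int (P_t f)^2 dmu is non-increasing.
rng = np.random.default_rng(1)
n = 6
Q = rng.random((n, n))
np.fill_diagonal(Q, 0.0)
np.fill_diagonal(Q, -Q.sum(axis=1))             # generator: rows sum to zero

w, v = np.linalg.eig(Q.T)
mu = np.real(v[:, np.argmin(np.abs(w))])        # stationary distribution
mu /= mu.sum()

f = rng.random(n)
second_moments = []
for t in np.linspace(0.0, 5.0, 21):
    Ptf = expm(t * Q) @ f                       # P_t f as a matrix exponential
    second_moments.append(float(np.sum(mu * Ptf**2)))

print(all(a >= b - 1e-10 for a, b in zip(second_moments, second_moments[1:])))  # True
```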
## Acknowledgements
The authors acknowledge the financial support of the Leibniz Association within the Leibniz Junior Research Group on _Probabilistic Methods for Dynamic Communication Networks_ as part of the Leibniz Competition.
## Appendix A The time-reversal of Markov processes in equilibrium
In this section, we briefly summarize some properties of the time-reversal of a Markov process w.r.t. a time-stationary measure. These results are classical but not particularly easy to find in the literature, at least in this formulation.
We start by making precise what we mean by time-reversal of a stochastic process. Recall that any time-stationary stochastic process \((X_{t})_{t\geq 0}\) can be extended (in law) to a process \((X_{t})_{-\infty<t<\infty}\) via Kolmogorov's extension theorem.
**Definition A.1**.: _Let \((X_{t})_{t\geq 0}\) be a time-stationary stochastic process. We call a process \((Y_{t})_{t\geq 0}\) the time-reversal of \(X\) if_
\[\operatorname{Law}((X_{-t})_{t\geq 0})=\operatorname{Law}((Y_{t})_{t\geq 0}).\]
The intuition behind this definition is that forward in time the process \(Y\) looks like the process \(X\) run backwards. For Markov processes this notion can be characterized in terms of the semigroups and generators as follows.
**Proposition A.2**.: _Let \(X=(X_{t})_{t\geq 0}\) and \(Y=(Y_{t})_{t\geq 0}\) be Markov processes on a compact topological space \(E\) with associated semigroups \((T_{t})_{t\geq 0}\) and \((S_{t})_{t\geq 0}\). Assume that \(X\) has a time-stationary measure \(\nu\) and for all \(f,g\in C(E)\) we have_
(A.1) \[\int_{E}(T_{t}f)gd\nu=\int_{E}f(S_{t}g)d\nu.\]
_Then \(\nu\) is also time-stationary for \(Y\) and \(Y\) is the time-reversal of \(X\) (w.r.t. \(\nu\))._
Proof.: By the duality relation (A.1), \(\nu\) is also time-stationary for \(Y\) and we can extend it to a process \((Y_{t})_{-\infty<t<\infty}\) with \(Y_{0}\sim\nu\). To show that \(Y\) is the time-reversal of \(X\) (w.r.t. \(\nu\)) it suffices to show that for arbitrary \(n\in\mathbb{N}\), times \(t_{0}<t_{1}<\cdots<t_{n}<t_{n+1}\) and functions \(f_{1},\ldots,f_{n}\in C(E)\) it holds that
\[\mathbb{E}_{Y_{0}\sim\nu}\left[f_{1}(Y_{t_{1}})\cdots f_{n}(Y_{t_{n}})\right] =\mathbb{E}_{X_{0}\sim\nu}\left[f_{1}(X_{-t_{1}})\cdots f_{n}(X_{-t_{n}}) \right].\]
We will do this as follows. First, we introduce the notation \(f_{0}=f_{n+1}\equiv 1\) and define functions \(g_{0},\ldots,g_{n+1}\) and \(h_{0},\ldots,h_{n+1}\) by \(g_{0}=h_{n+1}\equiv 1\),
\[g_{l}(x):=\mathbb{E}_{x}\left[\prod_{k=0}^{l-1}f_{k}(X_{t_{l}-t_{k}})\right], \quad 1\leq l\leq n+1\]
and
\[h_{l}(y):=\mathbb{E}_{y}\left[\prod_{k=l+1}^{n+1}f_{k}(Y_{t_{k}-t_{l}})\right],\quad 0\leq l\leq n.\]
With this notation it suffices to show that the quantity
\[\alpha_{l}:=\int_{E}g_{l}(x)f_{l}(x)h_{l}(x)\nu(dx),\quad 0\leq l\leq n+1,\]
does not depend on \(l\). Indeed, by stationarity of \(X\) and \(Y\), this will then imply that
\[\mathbb{E}_{Y_{0}\sim\nu}\left[f_{1}(Y_{t_{1}})\cdots f_{n}(Y_{t_{n}})\right] =\int_{E}h_{0}d\nu=\alpha_{n+1}=\int_{E}g_{n+1}d\nu=\mathbb{E}_{X_{0}\sim\nu} \left[f_{1}(X_{-t_{1}})\cdots f_{n}(X_{-t_{n}})\right],\]
exactly as we wanted. In order to show that \(\alpha_{l}\) does not depend on \(l\), we use the duality relation as follows. For \(0\leq l\leq n\) it holds that
\[\alpha_{l} =\int_{E}g_{l}f_{l}h_{l}d\nu\] \[=\int_{E}g_{l}(x)f_{l}(x)S_{t_{l+1}-t_{l}}[f_{l+1}h_{l+1}](x)\nu(dx)\] \[=\int_{E}T_{t_{l+1}-t_{l}}[g_{l}f_{l}](x)f_{l+1}(x)h_{l+1}(x)\nu(dx)\] \[=\int_{E}g_{l+1}f_{l+1}h_{l+1}d\nu=\alpha_{l+1}.\]
Therefore, \(\alpha_{l}\) does not depend on \(l\) and the claim follows.
The duality relation (A.1) for the semigroups can also be verified by checking a similar property on the level of generators. The main technical tool for establishing this relation between the generator of a Markov process and its semigroup will unsurprisingly be the celebrated Hille-Yosida theorem, which we recall here.
**Theorem A.3** (Hille-Yosida).: _There is a one-to-one correspondence between Markov generators on \(C(E)\) and Markov semigroups on \(C(E)\). This correspondence is explicitly given by_
1. \[\operatorname{dom}(\mathscr{L})=\left\{f\in C(E):\ \lim_{t\downarrow 0} \frac{S_{t}f-f}{t}\text{ exists}\right\},\quad\text{and}\] \[\mathscr{L}f:=\lim_{t\downarrow 0}\frac{S_{t}f-f}{t},\quad f\in \operatorname{dom}(\mathscr{L}).\]
2. \[S_{t}f=\lim_{n\to\infty}\left(Id-\frac{t}{n}\mathscr{L}\right)^{-n}f,\quad f \in C(E),t\geq 0.\]
_Moreover,_
1. _if_ \(f\in\operatorname{dom}(\mathscr{L})\)_, then_ \(S_{t}f\in\operatorname{dom}(\mathscr{L})\) _for all_ \(t\geq 0\) _and_ \(\frac{d}{dt}S_{t}f=\mathscr{L}S_{t}f=S_{t}\mathscr{L}f\) _and_
2. _for_ \(g\in C(E)\) _and_ \(\lambda\geq 0\)_, the solution to the resolvent equation_ \(f-\lambda\mathscr{L}f=g\) _is given by_ \[f=\int_{0}^{\infty}e^{-t}S_{\lambda t}gdt.\]
Proof.: See Theorem 1.2.6 and Theorem 4.2.2 in [1].
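For a finite state space, part ii. of Theorem A.3 can be checked directly, since generators are matrices and the semigroup is the matrix exponential. The following sketch is our own toy example and not part of the theorem.

```python
import numpy as np
from scipy.linalg import expm

# Toy example: resolvent approximation S_t f = lim_n (Id - (t/n) L)^{-n} f
# for a finite Markov generator matrix L.
rng = np.random.default_rng(2)
d = 4
L = rng.random((d, d))
np.fill_diagonal(L, 0.0)
np.fill_diagonal(L, -L.sum(axis=1))              # rows sum to zero

f = rng.random(d)
t = 1.5
exact = expm(t * L) @ f                          # S_t f via the matrix exponential

for n in (1, 10, 100, 1000):
    approx = np.linalg.matrix_power(np.linalg.inv(np.eye(d) - (t / n) * L), n) @ f
    print(n, np.max(np.abs(approx - exact)))     # error decreases as n grows
```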
With this at hand, we can now formulate the time-reversal duality for generators.
**Lemma A.4**.: _Let \((T_{t})_{t\geq 0}\) and \((S_{t})_{t\geq 0}\) be two Markov semigroups on \(C(E)\) where \(E\) is a compact topological space with time-stationary measure \(\nu\). Let \((\mathscr{A},\operatorname{dom}(\mathscr{A}))\) and \((\mathscr{B},\operatorname{dom}(\mathscr{B}))\) be their generators. If for all \(f\in\operatorname{dom}(\mathscr{A})\) and \(g\in\operatorname{dom}(\mathscr{B})\) it holds that_
(A.2) \[\int_{E}\left(\mathscr{A}f\right)g\ d\nu=\int_{E}f\left(\mathscr{B}g\right)\ d\nu,\]
_then \((\ref{eq:A.1})\) holds. It suffices if \((\ref{eq:A.2})\) holds for \(f,g\) in a core of the respective generators._
Proof.: The duality relation implies that for all \(f\in\operatorname{dom}(\mathscr{A})\), \(g\in\operatorname{dom}(\mathscr{B})\) and \(\lambda\geq 0\) it holds that
\[\int_{E}f\big{(}g-\lambda\mathscr{L}g\big{)}\ d\nu=\int_{E}g\big{(}f-\lambda \hat{\mathscr{L}}f\big{)}\ d\nu.\]
Now we can replace \(f\) by \((\operatorname{Id}-\lambda\mathscr{L})^{-1}f\) and \(g\) by \((\operatorname{Id}-\lambda\hat{\mathscr{L}})^{-1}g\) to see that for all \(f,g\in C(\Omega)\)
\[\int_{E}[(\operatorname{Id}-\lambda\mathscr{A})^{-1}f]g\ d\nu=\int_{E}[( \operatorname{Id}-\lambda\mathscr{B})^{-1}g]f\ d\nu.\]
By iteratively applying this equality we obtain that for all \(n\in\mathbb{N}\) and \(\lambda\geq 0\) it holds that
\[\int_{E}[(\operatorname{Id}-\lambda\mathscr{A})^{-n}f]g\ d\nu=\int_{E}[( \operatorname{Id}-\lambda\mathscr{B})^{-n}g]f\ d\nu.\]
For fixed \(t\geq 0\) we can replace \(\lambda\) by \(t/n\) and use Part \(ii.\) of Theorem A.3 to see that
\[\lim_{n\to\infty}\int_{E}\left[(\operatorname{Id}-\frac{t}{n}\mathscr{A})^{-n }f\right]g\ d\nu=\int_{E}gT_{t}f\ d\nu\quad\text{ and }\quad\lim_{n\to\infty}\int_{E}\left[( \operatorname{Id}-\frac{t}{n}\mathscr{B})^{-n}g\right]f\ d\nu=\int_{E}fS_{t}g \ d\nu,\]
as desired.
To sum this up, in order to show that a stationary Markov process \(Y\) is the time-reversal of a stationary Markov process \(X\), it suffices to check that their generators satisfy the duality relation (A.2).
|
2303.15939
|
Generating artificial digital image correlation data using
physics-guided adversarial networks
|
Digital image correlation (DIC) has become a valuable tool to monitor and
evaluate mechanical experiments of cracked specimen, but the automatic
detection of cracks is often difficult due to inherent noise and artefacts.
Machine learning models have been extremely successful in detecting crack paths
and crack tips using DIC-measured, interpolated full-field displacements as
input to a convolution-based segmentation model. Still, big data is needed to
train such models. However, scientific data is often scarce as experiments are
expensive and time-consuming. In this work, we present a method to directly
generate large amounts of artificial displacement data of cracked specimen
resembling real interpolated DIC displacements. The approach is based on
generative adversarial networks (GANs). During training, the discriminator
receives physical domain knowledge in the form of the derived von Mises
equivalent strain. We show that this physics-guided approach leads to improved
results in terms of visual quality of samples, sliced Wasserstein distance, and
geometry score when compared to a classical unguided GAN approach.
|
David Melching, Erik Schultheis, Eric Breitbarth
|
2023-03-28T12:52:40Z
|
http://arxiv.org/abs/2303.15939v3
|
# Physics-guided adversarial networks for artificial digital image correlation data generation
###### Abstract
Digital image correlation (DIC) has become a valuable tool in the evaluation of mechanical experiments, particularly fatigue crack growth experiments. The evaluation requires accurate information of the crack path and crack tip position, which is difficult to obtain due to inherent noise and artefacts. Machine learning models have been extremely successful in recognizing this relevant information given labelled DIC displacement data. For the training of robust models, which generalize well, big data is needed. However, data is typically scarce in the field of material science and engineering because experiments are expensive and time-consuming. We present a method to generate synthetic DIC displacement data using generative adversarial networks with a physics-guided discriminator. To decide whether data samples are real or fake, this discriminator additionally receives the derived von Mises equivalent strain. We show that this physics-guided approach leads to improved results in terms of visual quality of samples, sliced Wasserstein distance, and geometry score.
_Keywords--_ physics-guided neural networks, generative adversarial networks, digital image correlation, fatigue crack growth
## 1 Introduction
Digital image correlation (DIC) is a contact-less optical measurement technique [1]. It computes full-field surface displacements by tracking a previously applied stochastic random pattern and comparing current images with a reference image. However, DIC data is subject to inherent noise and artefacts due to influences such as pattern quality, sensor noise, air movement, etc. [2].
In recent years, DIC has become an important tool to accompany and evaluate mechanical experiments [3], especially fatigue crack growth (FCG) experiments. FCG experiments are of significant importance to determine the lifetime and damage tolerance of critical engineering structures and components that are subjected to non-constant loads [4]. The DIC data serves as the basis for the subsequent mechanical evaluation of fracture mechanical quantities like the stress intensity factors [5] and \(J\)-integral [6]. For the evaluation, the spatial location of the crack and especially the exact crack tip position is crucial. However, due to the inherent noise of DIC, this information can be difficult to obtain. To solve this problem in an automated way, specially designed convolutional neural networks have been successfully applied [7, 8, 9].
For these powerful data-driven models to work reliably, they need a diverse set of labelled training data. However, data is sparse in our domain, experiments are expensive, and manual labelling is extremely tedious and time-consuming. To address the problem of insufficient data, Strohman et al. [8] added artificial training data in the form of finite element simulations. However, simulations are idealized and lack the characteristic DIC noise. In this case, it is difficult to imitate the DIC characteristics by simply adding noise to the data.
In the field of data-driven modeling, generative adversarial networks (GANs) have proven to be powerful data generators. GANs are a generative, unsupervised approach to machine learning based on deep neural networks trained using an adversarial process. Recently, deep convolutional GANs (DC-GANs) produced state-of-the-art results in many generative and translative computer vision tasks such as image generation [10, 11, 12], style transfer [13], image-to-image translation [14, 15], text-to-image translation [16], or semantic image synthesis [17].
However, training deep neural networks typically requires large amounts of data, which are often not available in the scientific domain. For example, it is not possible to test an entire aircraft just to generate
training data. Without sufficient data, deep learning models are often unreliable and poorly generalize to domains not covered by the training data. To overcome this problem, efforts have been made to integrate fundamental physical laws and domain knowledge into machine learning models. Karpatne et al. [18] describe theory-guided data science as an emerging paradigm aiming to use scientific knowledge to improve the effectiveness of data science models. They presented several approaches for integrating domain knowledge in data science. Daw et al. [19] proposed a physics-guided neural network (PGNN) by adding a physics-based term to the loss function of a neural network to encourage physically consistent results. Karniadakis et al.[20, 21] coined the term physics-informed neural networks (PINNs) - a deep learning framework which enables the combination of data-driven and mathematical models described by partial differential equations. Yang et al. [22] combined this approach with GANs by adding a physics-based loss to the generator and, recently, Daw et al. [23] introduced a physics-informed discriminator GAN, which physically supervises the discriminator instead.
In this work, we generate synthetic DIC displacement data using GANs. Our GAN framework is based on the classical convolutional DC-GAN architecture [24]. We incorporate mechanical knowledge by using a physics-guided discriminator. In addition to the generated displacement data, this discriminator receives the derived equivalent strain according to von Mises [25] as an additional input feature. This mechanically motivated physical feature guides the adversarial training process to physically consistent generated data. The synthetically generated data samples can be used to increase data variation of a given DIC dataset. Although this synthetic data is not labelled, it has the potential to also improve supervised machine learning tasks, e.g. by using it for unsupervised pre-training [26] or label propagation [27]. We use DIC data from FCG experiments of a middle tension specimen manufactured from an aluminium-based alloy (see Section 2.2) to train several GANs. To demonstrate the merits of the physics-guided method, we compare the results of the physics-guided approach to a classical one in terms of visual quality of generated samples (see Section 3.2) and distance of fake data distributions to the real training data. The latter is quantified using the sliced Wasserstein distance (SWD) [28] and the geometry score (GS) [29] (see Section 2.4).
## 2 Methodology
Generative Adversarial Networks (GANs) are generative machine learning models learned using an adversarial training process [30]. In this framework, two neural networks - the generator \(G\) and the discriminator \(D\) - contest against each other in a zero-sum game. Given a training dataset characterized by a distribution \(p_{\text{data}}\), the generator aims to produce new data following \(p_{\text{data}}\) while the task of the discriminator is to distinguish generated data samples from actual training data samples.
Given a noise vector \(z\) sampled from a prior, e.g. the standard normal distribution, the generator outputs data samples \(G(z)\), called _fake samples_, trying to follow the training data distribution \(p_{\text{data}}\). Given a real or fake sample, the discriminator is supposed to decide whether it is real or fake by predicting the probability of it belonging to the training dataset.
Both models \(G\) and \(D\) are trained simultaneously contesting against each other in a two-player zero-sum minimax game with the value function \(V(G,D)\):
\[\min_{G}\max_{D}V(G,D):=\mathbb{E}_{x\sim p_{\text{data}}}\left[\log(D(x))\right]+\mathbb{E}_{z}\left[\log(1-D(G(z)))\right] \tag{1}\]
This means \(D\) is trained to minimize the loss
\[L_{D}=-\log(D(x))-\log(1-D(G(z))), \tag{2}\]
whereas \(G\) is trained to minimize the loss
\[L_{G}=-\log(D(G(z))). \tag{3}\]
As the discriminator gets better at identifying fake samples \(G(z)\), the generator has to improve on generating samples which are more similar to the real training samples \(x\sim p_{\text{data}}\). We refer to [30] for further details of the training algorithm.
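For concreteness, the following is a minimal PyTorch sketch of one adversarial update implementing the losses (2) and (3); the objects `generator`, `discriminator`, `opt_g`, `opt_d`, and `real_batch` are placeholders, and the discriminator is assumed to return one probability per sample.

```python
import torch
import torch.nn.functional as F

def gan_step(generator, discriminator, opt_g, opt_d, real_batch, latent_dim=5):
    """One adversarial update implementing the losses (2) and (3); illustrative only."""
    n = real_batch.size(0)
    device = real_batch.device
    ones, zeros = torch.ones(n, 1, device=device), torch.zeros(n, 1, device=device)

    z = torch.randn(n, latent_dim, device=device)
    fake_batch = generator(z)

    # discriminator update: minimise L_D = -log D(x) - log(1 - D(G(z)))
    opt_d.zero_grad()
    loss_d = (F.binary_cross_entropy(discriminator(real_batch), ones)
              + F.binary_cross_entropy(discriminator(fake_batch.detach()), zeros))
    loss_d.backward()
    opt_d.step()

    # generator update: minimise L_G = -log D(G(z)) (non-saturating form)
    opt_g.zero_grad()
    loss_g = F.binary_cross_entropy(discriminator(fake_batch), ones)
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()
```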
### Digital image correlation
DIC is a contact-less, optical method to obtain full-field displacements and strains. It is widely applied in science and engineering to quantify deformation processes. In experimental mechanics, it is used to monitor and evaluate fatigue crack growth (FCG) experiments [31] by determining fracture mechanical parameters like stress intensity factors (SIFs) [5] or J-integral [6]. Essentially, DIC measurements are based on the comparison of a current image with a reference image using tracking and image registration techniques. The cross correlation method requires a random speckle pattern on the sample surface. Various external and internal influences such as illumination, air movement, vibrations, facet size and spacing, camera settings, sensor noise, pattern quality, etc. lead to inherent noise in the DIC data. Our goal is to generate realistic DIC data synthetically with GANs that incorporate these types of uncertainties.
### Training data
As training dataset, we use planar displacement fields \(u=(u_{x},u_{y})\) obtained during FCG experiments of the aluminium alloy AA2024-T3 using a commercial GOM Aramis 12M 3D-DIC system. Details on the general experimental setup can be found in [8]. For the dataset, we use _one_ FCG experiment performed on a middle tension (MT) specimen (width \(w=160\,\mathrm{mm}\), thickness \(t=2\,\mathrm{mm}\)) at a maximal stress of \(\sigma_{\mathrm{max}}=47\,\mathrm{MPa}\) and \(R=0.3\) with 20 load cycles per second. Every 0.5 mm of crack growth (measured by direct current potential drop), 3 images (at minimal load \(F_{\mathrm{min}}=R\cdot F_{\mathrm{max}}\), mean load \(F_{\mathrm{min}}+(F_{\mathrm{max}}-F_{\mathrm{min}})/2\) and maximum load \(F_{\mathrm{max}}\)) were acquired. From the resulting DIC dataset, we take the planar displacements \(u_{x}\) and \(u_{y}\) of the specimen and linearly interpolate the displacements from an area of \(70\times 70\,\mathrm{mm}^{2}\) of the right-hand side of the specimen on an equidistant \(256\times 256\) pixel grid. This procedure results in 838 training data samples of shape \(2\times 256\times 256\), where the first dimension stands for the \(x\)- and \(y\)- displacements. Each of the two channels is normalized to \([-1,1]\) by the min-max-scaling and shift
\[u_{\mathrm{scaled}}=2\cdot\frac{u-u_{\mathrm{min}}}{u_{\mathrm{max}}-u_{ \mathrm{min}}}-1 \tag{4}\]
such that the minimum and maximum are mapped to \(-1\) and \(1\), respectively.
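A minimal sketch of this preparation step (function and variable names are illustrative and not taken from our code base) could look as follows:

```python
import numpy as np
from scipy.interpolate import griddata

def make_sample(points, u_x, u_y, size=256, extent=70.0):
    """Interpolate scattered DIC displacements onto a regular grid and scale to [-1, 1].

    points: (n, 2) array of measurement positions; u_x, u_y: (n,) displacement values.
    """
    xs = np.linspace(0.0, extent, size)
    grid_x, grid_y = np.meshgrid(xs, xs)
    sample = np.stack([
        griddata(points, u_x, (grid_x, grid_y), method="linear"),
        griddata(points, u_y, (grid_x, grid_y), method="linear"),
    ])                                        # shape (2, 256, 256)
    for c in range(2):                        # channel-wise min-max scaling, Eq. (4)
        u_min, u_max = np.nanmin(sample[c]), np.nanmax(sample[c])
        sample[c] = 2.0 * (sample[c] - u_min) / (u_max - u_min) - 1.0
    return sample
```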
### Physics-guided DIC-GAN
We aim to generate artificial DIC displacement data using deep convolutional GANs. For this, we mainly follow the architectural guidelines from [24]. However, in order to reduce checkerboard artefacts, we choose nearest-neighbor upsampling instead of transposed convolutions in the generator [32].
_Generator._ The input of the generator network is an \(n\)-dimensional vector \(z\) randomly sampled from a standard normal distribution. For definiteness, we choose \(n=5\) throughout all our training experiments. First, the random vector \(z\) passes a fully-connected layer with \(8\cdot 8\cdot 512=32768\) neurons, batch-normalization [33], and ReLU activation [34]. The output of this layer is then reshaped into 512 features of size \(8\times 8\). After that, these features are successively doubled in size using the base block (upsampling \(\rightarrow\) batch normalization \(\rightarrow\) ReLU \(\rightarrow\) convolution) five times, from \(8\times 8\) up to \(256\times 256\). The final block ends with a tanh activation instead of ReLU. Therefore, in accordance with the training data, the generator outputs _fake_ samples with pixel values between \(-1\) and \(1\).
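A PyTorch sketch of such a generator is given below; the layer sizes follow the description above, while details such as kernel sizes and the channel counts of the intermediate blocks are our own assumptions.

```python
import torch.nn as nn

class Generator(nn.Module):
    """Sketch of the DIC-GAN generator; kernel sizes and intermediate channel
    counts are illustrative assumptions."""

    def __init__(self, latent_dim=5):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(latent_dim, 8 * 8 * 512),
                                nn.BatchNorm1d(8 * 8 * 512), nn.ReLU())

        def up_block(c_in, c_out, last=False):
            # upsampling -> batch norm -> ReLU -> convolution
            return nn.Sequential(
                nn.Upsample(scale_factor=2, mode="nearest"),
                nn.BatchNorm2d(c_in), nn.ReLU(),
                nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
                nn.Tanh() if last else nn.Identity())

        self.blocks = nn.Sequential(up_block(512, 256), up_block(256, 128),
                                    up_block(128, 64), up_block(64, 32),
                                    up_block(32, 2, last=True))

    def forward(self, z):
        x = self.fc(z).view(-1, 512, 8, 8)        # 512 feature maps of size 8 x 8
        return self.blocks(x)                      # (N, 2, 256, 256), values in [-1, 1]
```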
_Discriminator._ For the discriminator, we implemented the following two approaches:
1. **Classical:** The discriminator gets real and fake pairs of \(x\)- and \(y\)- displacement fields \((u_{x},u_{y})\) and predicts a (pseudo-)probability of the sample being real. We refer to this approach as _classical_ DIC-GAN.
2. **Physics-guided:** In addition to the displacement fields, the corresponding von Mises equivalent strain \(\varepsilon_{\mathrm{vm}}\) is calculated based on the generated and real displacement fields and the discriminator gets the triple \((u_{x},u_{y},\varepsilon_{\mathrm{vm}})\) as input in order to decide whether it is fake or real. For small strains, the von Mises equivalent strain is defined as the scalar quantity \[\varepsilon_{\mathrm{vm}}=\sqrt{\frac{2}{3}\varepsilon_{\mathrm{dev}}: \varepsilon_{\mathrm{dev}}},\qquad\text{where }\varepsilon_{\mathrm{dev}}= \varepsilon-\frac{1}{3}\text{tr}(\varepsilon)\] (5)
Figure 1: Physics-guided DIC-GAN framework: A deep convolutional generator creates fake DIC displacement data from noise. The discriminator gets the derived corresponding von Mises equivalent strain as an additional feature to decide whether samples are real or fake.
denotes the deviatoric part of the three-dimensional strain tensor
\[\varepsilon=\begin{pmatrix}\varepsilon_{xx}&\varepsilon_{xy}&\varepsilon_{xz}\\ \varepsilon_{xy}&\varepsilon_{yy}&\varepsilon_{yz}\\ \varepsilon_{xz}&\varepsilon_{yz}&\varepsilon_{zz}\end{pmatrix} \tag{6}\]
In case of plane stress, \(\varepsilon_{xz}=\varepsilon_{yz}=0\) and \(\varepsilon_{zz}=-\frac{\nu}{1-\nu}(\varepsilon_{xx}+\varepsilon_{yy})\). Assuming volume constancy with a Poisson ratio of \(\nu=1/2\), Formula (5) simplifies to
\[\varepsilon_{\text{vm}}=\frac{2}{\sqrt{3}}\sqrt{\varepsilon_{xx}^{2}+\varepsilon_{yy}^{2}+\varepsilon_{xy}^{2}+\varepsilon_{xx}\varepsilon_{yy}}. \tag{7}\]
We use Formula (7) for the physics-guided discriminator. Therefore, we numerically approximate the strains using finite differences, e.g.
\[\varepsilon_{xy}(x,y)=\frac{\partial}{\partial y}u_{x}(x,y)=\frac{u_{x}(x,y+h) -u_{x}(x,y)}{h}+\mathcal{O}(h). \tag{8}\]
To guarantee differentiability, the square-root function is smoothed by using \(\sqrt{\cdot+\delta}\) with \(\delta\ll 1\). We refer to this approach as _physics-guided_ DIC-GAN (see Figure 1).
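The following sketch shows how this guidance feature can be computed from a batch of displacement fields with PyTorch; the grid spacing \(h\), the smoothing constant \(\delta\), and the use of the symmetric shear strain are illustrative choices rather than the exact implementation.

```python
import torch

def von_mises_strain(u, h=1.0, delta=1e-8):
    """Von Mises equivalent strain (Eq. (7)) from displacements u of shape (N, 2, H, W),
    with strains approximated by forward finite differences (cf. Eq. (8))."""
    u_x, u_y = u[:, 0], u[:, 1]
    du_x_dx = (u_x[:, :, 1:] - u_x[:, :, :-1]) / h   # d u_x / d x
    du_y_dy = (u_y[:, 1:, :] - u_y[:, :-1, :]) / h   # d u_y / d y
    du_x_dy = (u_x[:, 1:, :] - u_x[:, :-1, :]) / h
    du_y_dx = (u_y[:, :, 1:] - u_y[:, :, :-1]) / h

    # crop all components to a common (H-1, W-1) grid
    eps_xx = du_x_dx[:, :-1, :]
    eps_yy = du_y_dy[:, :, :-1]
    eps_xy = 0.5 * (du_x_dy[:, :, :-1] + du_y_dx[:, :-1, :])

    # smoothed square root keeps the feature differentiable everywhere
    return (2.0 / 3.0**0.5) * torch.sqrt(
        eps_xx**2 + eps_yy**2 + eps_xy**2 + eps_xx * eps_yy + delta)
```

The resulting strain map can be padded back to the input resolution and concatenated with the displacements as a third channel before being passed to the discriminator.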
In both cases, we choose the same model architecture for the discriminator. The input of size \(2\times 256\times 256\), or \(3\times 256\times 256\) in case of the physics-guided discriminator, is successively downsampled to the size \(1\times 32\times 32\) using three blocks of strided convolutions, batch normalization, and LeakyReLU activation [35], where LeakyReLU\((t)=\max(\alpha t,t)\) with \(\alpha=0.2\). The extracted features are then flattened and pass the last fully-connected layer with one output neuron and sigmoid activation. The output is a number between 0 and 1 and is interpreted as the probability of the sample being real.
### Evaluation of GANs
In classical supervised learning, a model is trained by minimizing a specific loss (e.g. mean squared error), which quantitatively compares model predictions with the expected target. After training, models can be evaluated and compared by calculating the loss (and accuracy) for independent labeled test data. GAN generators, however, are trained in an adversarial fashion using a second model (the discriminator) to classify generated data as real or fake. Both models are trained simultaneously to maintain an equilibrium. Therefore, there is no natural objective function to evaluate GAN generators quantitatively. Instead GANs are evaluated by assessing the quality and variation of generated data. This is typically achieved by visual inspection of generated samples or by calculating the inception score (IS) [36] and Frechet inception distance (FID) [37]. However, in case of DIC data, several domain experts would be needed to objectively grade the visual quality of generated samples. Moreover, quantitative metrics like IS or FID can only be employed for natural images since they use image classification networks like Inception [38], which are pre-trained on ImageNet [39]. Therefore, in addition to a visual examination of generated samples in Section 3.2, we use metrics which are independent of the data type and do not use any pre-trained models. More precisely, we use the following two metrics:
_Sliced Wasserstein distance._ In mathematics, the Wasserstein distance is a natural distance function between two distributions. Intuitively, it can be viewed as the minimal cost of transforming one of the distributions into the other. In case of image-like datasets \(X=\{X_{n}\}_{n=1,\ldots N}\) and \(Y=\{Y_{n}\}_{n=1,\ldots N}\) with same number of samples \(N\) and image sizes \(c\times h\times w\), where \(c\) is the number of channels and \(h\) and \(w\) denote the height and width of images, respectively, the (quadratic) Wasserstein distance is given by,
\[W(X,Y)^{2}=\min_{\pi}\sum_{i,j,k,n}|X_{n}(i,j,k)-Y_{\pi(n)}(i,j,k)|^{2}, \tag{9}\]
where the minimum is taken over all permutations \(\pi\) of the set \(\{1,\ldots N\}\)[28]. Due to the high dimensionality of images and the large number of samples, the exact computation of the Wasserstein distance is computationally infeasible. This is because the number of permutations scales exponentially with the number of samples \(N\). Therefore, instead of (9), we use the sliced Wasserstein distance (SWD) introduced in [28] as an approximation, which is amendable for efficient numerical computation. The main idea of slicing is to map the high dimensional image data from \(\mathbb{R}^{c\times h\times w}\) onto one-dimensional slices. On these slices, the Wasserstein distance can be calculated in loglinear time by using the ordered structure of one-dimensional Euclidean space. The sliced Wasserstein distance is defined as,
\[\tilde{W}(X,Y)^{2}=\int_{\theta\in\Omega}\min_{\pi_{\theta}}\sum_{n=1}^{N}| \langle X_{n}-Y_{\pi_{\theta}(n)},\theta\rangle|^{2}d\theta, \tag{10}\]
where \(\Omega=\{\theta\in\mathbb{R}^{c\times h\times w}:\|\theta\|=1\}\) denotes the unit sphere. We refer to [10, 28] and Section 3.3 below for further details.
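A minimal Monte Carlo sketch of (10) for two equally sized sets of flattened patches reads as follows; it is illustrative and not the exact implementation used for the experiments:

```python
import torch

def sliced_wasserstein(x, y, n_slices=512):
    """Monte Carlo sketch of the (normalised) sliced Wasserstein distance (10)
    between two patch sets x, y of shape (N, D)."""
    theta = torch.randn(x.shape[1], n_slices, device=x.device)
    theta /= theta.norm(dim=0, keepdim=True)      # random unit directions on the sphere
    proj_x, _ = torch.sort(x @ theta, dim=0)      # 1D projections, sorted per slice
    proj_y, _ = torch.sort(y @ theta, dim=0)
    # the optimal one-dimensional coupling matches sorted samples
    return ((proj_x - proj_y) ** 2).mean().sqrt()
```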
_Geometry score._ Introduced by Khrulkov & Oseledets [29], the geometry score (GS) allows one to quantify the performance of GANs trained on datasets of arbitrary nature. It measures the similarity between the real dataset \(X_{\text{real}}\) and a generated one \(X_{\text{fake}}\) by comparing topological properties of the underlying low-dimensional manifolds [40]. The detailed quantitative characterization of the underlying manifold of a given dataset \(X\) is usually very hard. The core idea of [29] is to choose random subsets \(L\subset X\) called _landmarks_ and to build a family of _simplicial complexes_, parametrized by a non-negative, time-like _persistence parameter_ \(\alpha\). For small \(\alpha\), the complexes consist of a disjoint union of points. Increasing \(\alpha\) adds more and more simplices, finally leading to one single connected blob. For each value of \(\alpha\), topological properties of the corresponding simplicial complex, namely the number of _one-dimensional holes_ in terms of homology, \(\beta_{1}(\alpha)\), are calculated (see, e.g., [41]). From this, the authors propose to compute Relative Living Times (RLTs) for every number of holes that was observed [29]. For each non-negative number \(i\), the RLT is the amount of the time when exactly \(i\) holes were present relative to the overall time \(\alpha_{\text{max}}\) after which everything is connected. More precisely,
\[\text{RLT}(i,X,L)=\frac{\mu\left(\{\alpha\in[0,\alpha_{\text{max}}]:\beta_{1 }(\alpha)=i\}\right)}{\alpha_{\text{max}}}, \tag{11}\]
where \(\mu\) denotes the standard Lebesgue measure. Since the RLTs depend on the choice of landmarks \(L\), we choose a collection of \(n\) random sets of landmarks \(L_{j}\) and define the Mean Relative Living Times (MRLTs) as
\[\text{MRLT}(i,X)=\frac{1}{n}\sum_{j=1}^{n}\text{RLT}(i,X,L_{j}). \tag{12}\]
The MRLT is a discrete probability distribution over the non-negative integers. It can be interpreted as the probability of the manifold having exactly \(i\) one-dimensional holes (on average). The \(L^{2}\)-distance between the MRLT distributions of \(X_{\text{real}}\) and \(X_{\text{fake}}\) defines a measure of topological similarity between the real dataset and the generated one, called geometry score (GS):
\[\text{GS}(X_{\text{fake}},X_{\text{real}})=\sum_{i=0}^{i_{\text{max}}-1}\left| \text{MRLT}(i,X_{\text{fake}})-\text{MRLT}(i,X_{\text{real}})\right|^{2}, \tag{13}\]
where \(i_{\text{max}}\) is an upper bound on the number of holes. We refer to [29] for further theoretical details and to Section 3.4 for the choice of hyperparameters and results in our case.
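Once the RLTs have been obtained, e.g. with the reference implementation of [29], the remaining steps (12) and (13) amount to a few lines; the sketch below assumes the RLTs are given as arrays of shape (number of landmark sets, \(i_{\text{max}}\)).

```python
import numpy as np

def geometry_score(rlts_fake, rlts_real):
    """Illustrative sketch of Eqs. (12) and (13): average RLTs over landmark sets
    and take the squared L2 distance between the resulting MRLT distributions."""
    mrlt_fake = np.asarray(rlts_fake).mean(axis=0)       # MRLT of the fake dataset, Eq. (12)
    mrlt_real = np.asarray(rlts_real).mean(axis=0)       # MRLT of the real dataset
    return float(np.sum((mrlt_fake - mrlt_real) ** 2))   # geometry score, Eq. (13)
```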
## 3 Results and discussion
In order to demonstrate the effectiveness of the method and to compare the classical and the physics-guided discriminator approach, we trained 10 randomly initialized classical and physics-guided DIC-GANs each for 100 epochs. Moreover, we trained two classical and physics-guided DIC-GANs each for 1000 epochs in order to compare both architectures after long training runs. The training setup is described in Section 3.1 below. The trained models are evaluated qualitatively and quantitatively by using the following criteria:
* Visual inspection of samples (Section 3.2)
* Sliced Wasserstein distances (Section 3.3)
* Geometry scores (Section 3.4)
A summary of the results can be seen in Table 1. In short, the physics-guided DIC-GAN approach leads to visually better results after 100 epochs and overall to measurably better results. For a detailed discussion, we refer to the sections below.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Model & Epoch & Visual quality & GS \(\times 10^{3}\) & SWD \(\times 10^{3}\) \\ \hline \hline Classical & 100 & low & 183.49 & 151.70 \\ Physics-guided & 100 & medium & **157.58** & **82.96** \\ Classical & 1000 & high* & 23.83 & 83.19 \\ Physics-guided & 1000 & high* & **1.93** & **61.61** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Subjective visual quality, calculated geometry score (GS), and sliced Wasserstein distance (SWD) (lower is better) for different DIC-GAN model architectures and training lengths. (*) apart from _garbage_ samples (see Figure 4)
### Training procedure
Before training, the filters of the convolution layers in both generator and discriminator network are initialised randomly from a normal distribution with zero mean and a standard deviation of \(0.02\). In contrast, the weights of the batch normalization layers are initialised from a normal distribution with mean \(1\) and standard deviation \(0.02\), whereas the biases are initialised with zeros.
For training, we choose the Adam optimizer [42] with a learning rate of \(0.002\), momentum parameters of \(\beta_{1}=0.5,\,\beta_{2}=0.999\), and a batch size of \(8\). We noticed that occasionally models suffer from mode collapse during training. This means that the generator always outputs the same or visibly similar fake data samples and stops learning. This problem is well-known and still part of active research. Popular strategies to overcome convergence issues of GANs regularize or perturb the discriminator [43, 44] or use a more sophisticated loss function [45]. In our case, if mode collapse happened, we restarted the training and discarded the collapsed model. All neural networks and training loops were implemented using PyTorch [46]. The hardware for the training was an NVIDIA RTX8000 graphics card.
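A sketch of the initialisation described above (applied once via `model.apply` before training; fully-connected layers are not mentioned in the text and are therefore left untouched here):

```python
import torch.nn as nn

def weights_init(module):
    """Initialisation as described in the text: N(0, 0.02) for convolution filters,
    N(1, 0.02) for batch-norm weights, zeros for batch-norm biases."""
    if isinstance(module, nn.Conv2d):
        nn.init.normal_(module.weight, mean=0.0, std=0.02)
    elif isinstance(module, (nn.BatchNorm1d, nn.BatchNorm2d)):
        nn.init.normal_(module.weight, mean=1.0, std=0.02)
        nn.init.zeros_(module.bias)

# usage (illustrative): generator.apply(weights_init); discriminator.apply(weights_init)
```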
### Visual evaluation
We begin with a visual inspection of the generated data and compare real training data to representative samples generated by the classical DIC-GAN and the physics-guided DIC-GAN. We refer to fake samples generated by the classical or physics-guided DIC-GAN generators as classical or physics-guided DIC-GAN samples, respectively.
Figure 2 shows real DIC data obtained during FCG experiments as described in Section 2.2. The figure contains planar displacements and von Mises equivalent strains of \(9\) data samples. Images belong together in the sense that the \(x-\) displacement of the first sample is located at the top left of the left column. The corresponding \(y-\) displacement is located at the same position in the middle column, and the corresponding calculated equivalent strain is located at the same position in the right column. Here, the crack path as well as the characteristic crack tip field is clearly visible.
In Figure 3, we see fake samples after \(100\) epochs of GAN training. We can often identify the initial crack on the left edge and the crack path. Whereas most generated displacements are visually close to real displacements, significant differences are revealed in the von Mises strains, which are calculated afterwards. Especially classical DIC-GAN samples contain inconsistencies between \(x-\) and \(y-\) displacements and visual artefacts. This leads to large-scale vortices and small-scale noise in the von Mises strains 1. Although far from being perfect, physics-guided DIC-GAN samples contain significantly fewer of these artefacts and inconsistencies and visually capture the inherent noise of the DIC system much better than classical DIC-GAN samples. Nevertheless, most fake samples are still visually distinguishable from real samples. In order to make sure the models are fully converged, we also performed some longer training runs.
Figure 4 shows fake samples after \(1000\) epochs. At this stage, the models are well converged and the visual difference between classical and physics-guided DIC-GAN samples has largely disappeared. In general, the fake samples of both models show much better visual quality and fewer artefacts and inconsistencies compared to fake samples of generators trained for only \(100\) epochs 3. However, a few samples suffer from severe inconsistencies and are qualitatively inferior 2. We refer to these failures as _garbage_ samples. Apart from these outliers, the vast majority of samples (of both models) are visually indistinguishable from real samples. Nevertheless, domain experts may notice that the characteristic crack tip field still seems unphysical in the fake samples, especially when compared to real samples with long cracks.
Figure 2: Real samples of DIC displacement data (left and middle) and corresponding von Mises equivalent strains (right).
Figure 4: Visual comparison of generated classical and physics-guided DIC-GAN samples after 1000 epochs of training. Both models seem to produce mostly good samples 3 but also few garbage samples 2.
Figure 3: Visual comparison of generated classical and physics-guided DIC-GAN samples after 100 epochs of training. Classical DIC-GAN samples show a larger noise level in the von Mises strains compared to phyiscs-guided DIC-GAN samples 1.
### Sliced Wasserstein distances
For a thorough comparison of GANs, one needs to inspect a large number of fake samples. Doing this manually, would be very tedious and subjective. Instead, one should compare the results using meaningful, quantitative metrics.
For this, we follow [10] and calculate the sliced Wasserstein distances (SWD) introduced in Section 2.4 between fake data samples and real data samples on various scales. These scales are introduced by building a 5-level Laplacian pyramid [47] with resolutions \(16\times 16\), \(32\times 32\), \(64\times 64\), \(128\times 128\), \(256\times 256\). Each pyramid level corresponds to a specific spatial resolution. For each level, we compute the SWD between the training dataset and a generated fake dataset of the same size. More precisely, the SWDs are calculated between datasets of random \(7\times 7\) patches of the pyramid samples. The patches are pre-processed by normalizing each channel (i.e. \(x\) and \(y\) displacement) to mean 0 and standard deviation 1. To reduce uncertainty, we average the SWDs of ten runs with randomly sampled fake data. Since there are fewer unique patches for low resolutions, we adapt the number of random patches depending on the pyramid level. For the five resolutions, \(16\times 16\), \(32\times 32\), \(64\times 64\), \(128\times 128\), and \(256\times 256\), we use 128, 256, 512, 1024, 2048 patches, respectively. The integral in Equation (10) is approximated by choosing 512 random slices and averaging the results. We implemented a GPU-enabled version of the code from [10] using the PyTorch [46] framework.
At least intuitively, a small SWD shows that the fake and real samples are similar. At low resolution (e.g. \(16\times 16\)) only large-scale features like the crack length are visible and a small SWD would indicate that the variation of crack lengths in the fake dataset is similar to the training dataset. At high resolution (e.g. \(256\times 256\)) very fine-grained structures like the inherent DIC noise is encoded in the patches.
Figure 5 shows the calculated SWDs of classical and physics-guided DIC-GAN samples after 100 epochs of training. In order to estimate uncertainty, we trained 10 randomly initialized models each with the classical and the physics-guided DIC-GAN architecture. The main observation is that for all resolutions, physics-guided samples are closer to the training data than classical DIC-GAN samples. This indicates that physics-guided DIC-GAN samples are better in quality and variation. Especially for the high resolution \(256\times 256\), the SWDs show a large gap and confirm our visual observation of artefacts and unphysical noise as seen in the classical DIC-GAN samples in Figure 3. Nevertheless, the results can be significantly different for each trained generator. This fact is reflected in the large error bars of the SWDs.
The results after 1000 training epochs are displayed in Figure 6. Here, we used 2 training runs for each GAN architecture. As expected, the distances are all smaller than after 100 epochs. In contrast to the results after 100 epochs (cf. Figure 5), both GAN architectures are closer together. However, the physics-guided DIC-GAN samples have significantly smaller SWDs for the fine resolution \(256\times 256\) and the low resolutions \(16\times 16\) and \(32\times 32\). This suggests that after 1000 epochs of training the physics-guided samples are still closer to the real samples in terms of quality and variation. Nevertheless, the few garbage samples seen in Figure 4 2 could influence the SWDs especially at smaller pyramid levels.
Figure 5: Comparison of SWDs between classical DIC-GAN (left) and physics-guided DIC-GAN (right) trained for 100 epochs. The boxplot intervals range from the minimal to the maximal SWDs. The box includes ranges from the 25% to the 75% quantile and shows the median.
### Geometry scores
To compare the geometry score (GS), introduced in Section 2.4, across different trained GANs, we generated fake datasets with the same number of samples \(N=838\) as the training dataset. To calculate the MRLTs of the real and fake datasets, we mainly follow the recommendations in [29]. We set \(i_{\text{max}}=100\) and use \(n=1000\) random landmarks. The number of samples in each landmark is \(64\). The maximal resistance time \(\alpha_{\text{max}}\) is proportional to the maximal pairwise Euclidean distance between samples in each landmark, i.e. for \(j=1,\dots,n\):
\[\alpha_{\text{max}}^{j}=\gamma\max(\text{dist}(L_{j},L_{j})),\quad\gamma=\frac {1}{128}/\frac{N}{5000}. \tag{14}\]
We used the implementation from [29] to calculate the MRLTs.
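The per-landmark scaling of Eq. (14) can be sketched as follows; the feature dimension of the toy landmark below is illustrative rather than the full \(256\times 256\times 2\) displacement fields, the \(\gamma\) factor is taken exactly as written in Eq. (14), and the MRLT computation itself is left to the implementation of [29].

```python
import numpy as np
from scipy.spatial.distance import pdist

def alpha_max(landmark_samples, n_total, gamma_scale=1.0 / 128.0):
    """Maximal resistance time of one landmark, Eq. (14): proportional to the
    largest pairwise Euclidean distance within the landmark's samples."""
    gamma = gamma_scale / (n_total / 5000.0)   # gamma exactly as written in Eq. (14)
    return gamma * pdist(landmark_samples).max()

# toy landmark: 64 samples with an illustrative (reduced) feature dimension
landmark = np.random.randn(64, 2048)
print(alpha_max(landmark, n_total=838))
```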
Figure 7 shows the distributions of MRLTs after \(100\) epochs of training. The error band originates from the uncertainty induced by the random landmarks and, even more so, from the \(10\) different models trained for each GAN architecture. This results in large variations of the calculated MRLTs. Nevertheless, on average the physics-guided DIC-GAN distribution is closer to the MRLTs of the real data distribution than the classical DIC-GAN distribution. This observation is quantitatively reflected in a smaller mean GS of the physics-guided models (see Table 1). However, both fake data distributions are still far away from the real data distribution and the GSs are large.
In Figure 8, we see the MRLT distributions after \(1000\) epochs of training. Both GAN results are much closer to the real data than after \(100\) epochs and the physics-guided DIC-GAN MRLTs almost coincide with the real data MRLTs. This agreement is reflected in the calculated GSs in Table 1 as well.
## 4 Conclusion
We introduced a machine learning framework to generate synthetic full-field DIC displacements by learning the underlying data distribution from a sufficiently large experimental dataset. The training data was obtained during fatigue crack growth experiments of the aluminium alloy AA2024-T3. In contrast to finite element simulations, our method is able to produce realistic data with inherent DIC noise. However, data-driven methods typically lack an understanding of physics and tend to overfit on the known training data.
Our approach is based on deep convolutional generative adversarial networks. The main novelty compared to the classical approach is a physics-guided discriminator. This discriminator, in addition to the generated \(x\)- and \(y\)- displacement fields, gets also the derived von Mises equivalent strain as input. This enables the discriminator to detect physical inconsistencies in the generated fake samples more easily, thus enhancing the training process.
In order to evaluate trained generator models on an objective basis, we used two quantitative metrics. First, the sliced Wasserstein distance (SWD) between real and fake samples and, secondly, the geometry
Figure 6: Comparison of SWDs between classical DIC-GAN (left) and physics-guided DIC-GAN (right) trained for \(1000\) epochs. The boxplot intervals range from the minimal to the maximal SWDs. The box covers the range from the \(25\%\) to the \(75\%\) quantile and shows the median.
Figure 8: Comparison of MRLT distributions between the real dataset and fake datasets generated by classical and physics-guided DIC-GAN after 1000 epochs of training.
Figure 7: Comparison of MRLT distributions between the real dataset and fake datasets generated by classical and physics-guided DIC-GAN after 100 epochs of training.
score (GS) approximating the topological distance between a generated data manifold and the training data manifold.
We observed superior performance of the physics-guided DIC-GAN compared to the classical DIC-GAN approach. This result was observed by visual evaluation of generated samples and confirmed by lower SWDs and GSs of the physics-guided models. Both, SWD and GS, proved themselves to be valuable evaluation metrics. They are useful to identify mode collapse and to select the best trained models. Our findings confirm that hybrid models, which combine data-driven methods with physical domain knowledge, can lead to more powerful generative models and faster training.
The visual inspection revealed a varying sample quality. Especially the converged models after 1000 epochs of training, apart from mostly good samples, produce a few garbage samples. Although the number of these garbage samples is model-dependent, we were not able to avoid their occurrence completely. Moreover, we still face the issue of (local) non-convergence and mode collapse. To overcome these issues, one could try to stabilize training using suitable regularization techniques [44, 48].
Another open problem concerns the control of boundary conditions like the crack path and external force. In contrast to finite element simulations, with our approach it is not possible to control them. This challenge could be tackled by using a conditional GAN framework instead [49] and is part of current research.
## 5 Acknowledgements
We acknowledge the financial support of the DLR-Directorate Aeronautics.
## 6 Data availability
The code and training data will be publicly available on Github ([https://github.com/dlr-wf](https://github.com/dlr-wf)) and Zenodo ([https://doi.org/10.5281/zenodo.7737880](https://doi.org/10.5281/zenodo.7737880)).
|
2308.06389
|
Matrix element corrections in the Pythia8 parton shower in the context
of matched simulations at next-to-leading order
|
We discuss the role of matrix element corrections (MEC) to parton showers in
the context of MC@NLO-type matchings for processes that feature unstable
resonances, where MEC are liable to result in double-counting issues, and are
thus generally not employed. By working with Pythia8, we show that disabling
all MEC is actually unnecessary in computations based on the narrow-width
approximation, and we propose alternative MEC settings which, while still
avoiding double counting, allow one to include hard-recoil effects in the
simulations of resonance decays. We illustrate our findings by considering
top-antitop production at the LHC, and by comparing MadGraph_aMC@NLO
predictions with those of POWHEG-BOX and standalone Pythia8.
|
Stefano Frixione, Simone Amoroso, Stephen Mrenna
|
2023-08-11T21:11:13Z
|
http://arxiv.org/abs/2308.06389v1
|
Matrix element corrections in the Pythia8 parton shower in the context of matched simulations at next-to-leading order
###### Abstract
We discuss the role of matrix element corrections (MEC) to parton showers in the context of MC@NLO-type matchings for processes that feature unstable resonances, where MEC are liable to result in double-counting issues, and are thus generally not employed. By working with Pythia8, we show that disabling all MEC is actually unnecessary in computations based on the narrow-width approximation, and we propose alternative MEC settings which, while still avoiding double counting, allow one to include hard-recoil effects in the simulations of resonance decays. We illustrate our findings by considering \(t\bar{t}\) production at the LHC, and by comparing MadGraph5_aMC@NLO predictions with those of POWHEG-BOX and standalone Pythia8.
###### Contents
* 1 Introduction
* 2 Matrix Element Corrections in the Pythia8 dipole shower
* 3 Monte Carlo event samples
* 4 Phenomenological results
* 5 Conclusions
* A Pythia8 settings
* A.1 Pythia8 standalone
* A.2 MG5_aMC
* A.3 POWHEG-BOX
* B Comparison with LO MG5_aMC results
* C Control of Matrix Element Corrections
Introduction
The exclusive simulation of processes that feature heavy unstable particles, such as the top, \(W\), \(Z\), and Higgs in the Standard Model, is complicated for a variety of reasons. Conceptually, the most straightforward approach is to choose a (set of) decay channel(s) for the unstable particle(s) and compute the process where the initial and final states only include light partons and the products of such decay channels. For example, in the case of \(t\bar{t}\) production, one may focus on a dilepton channel, and thus simulate (in hadronic collisions)1
Footnote 1: We shall use the case of \(t\bar{t}\) production in all of our examples; it should be clear that the arguments are general and apply to any process.
\[pp\,\longrightarrow\,b\bar{\ell}_{i}\nu_{i}\bar{b}\ell_{j}\,\bar{\nu}_{j}\,, \tag{1}\]
rather than
\[pp\,\longrightarrow\,t\bar{t}\,. \tag{2}\]
From the viewpoint of computing resources, the process of eq. (1) is much more expensive than that of eq. (2). This does not change when one takes into account the fact that eq. (2) must be supplemented by the simulation of the leptonic decays of the tops, namely:
\[t\,\longrightarrow\,b\bar{\ell}_{i}\,\nu_{i}\,,\qquad\bar{t}\,\longrightarrow\, \bar{b}\ell_{j}\,\bar{\nu}_{j}\,, \tag{3}\]
since the complexity of such a simulation is generally smaller than that of either eq. (1) or (2)2. Having said that, one remarks that the combination of eqs. (2) and (3) is equivalent to eq. (1) only in the limit of a vanishing top-quark width (the so-called narrow-width approximation); away from that limit, the agreement between the results of the two approaches might be degraded by the presence of non-resonant (i.e., non-\(t\bar{t}\) mediated) contributions. Even when only resonant contributions are present, it is generally the case that the incoherent simulation of the production process (eq. (2)) and of the decay processes (eq. (3)) constitutes a much poorer physics description than that emerging from eq. (1), and this because of two mechanisms. Firstly, the kinematics of a decay product of a given unstable particle may be affected by that of partons/leptons in the event which are themselves not decay products of that unstable particle. These effects are called (production) spin correlations, and cannot be accounted for by the separate simulations of the individual decays, while they are correctly included by the matrix elements that underpin eq. (1). Secondly, the kinematics of the decay products is affected by the emissions of the extra partons that appear in the hard process when perturbative corrections are considered, as in the case of NLO+PS simulations. By construction, a parton shower mimics the effects of such extra emissions, however with an accuracy which decreases with the hardness of the recoils they induce. Therefore, while perturbative corrections to eq. (1) will automatically include hard-recoil effects, such effects are poorly described when the simulation of the decays of eq. (3) is followed by an ordinary parton shower.
Footnote 2: A more extended discussion of the arguments that follow can be found e.g. in sect. 2.5 of ref. [1].
As far as the first mechanism is concerned, spin correlations are nowadays routinely taken into account in the context of production+decay simulations (eqs. (2) and (3)) by means of the procedures of ref. [2] (especially in NLO+PS approaches) or ref. [3]; we shall not discuss them any further in this work. Conversely, hard recoils, absent computations of the complete processes such as that of eq. (1), are approximated by Monte Carlo event generators (MC henceforth) by means of the so-called Matrix Element Corrections (MEC henceforth). The
manner in which MEC are simulated depends on the specific MC one employs, but they all rely on (exact or approximated) tree-level matrix elements that describe the emission(s) of interest.
Before the widespread adoption of matching and merging techniques, MCs (and specifically Pythia) used to add MEC to both the production (eq. (2)) and the decay (eq. (3)) processes. These mechanisms will henceforth be referred to as production and decay MEC, respectively; sample Feynman graphs involved in their computations, in the specific case of top-quark production and decay, are depicted in fig. 1. It must be stressed that MEC graphs for production and decay are _not_ allowed to interfere; in other words, they are employed incoherently.
In the context of an NLO+PS simulation3, MEC can potentially spoil the accuracy of the computation by double counting (essentially, by generating again an emission already accounted for at the hard matrix element level - this is the case for the contribution shown on the left-hand panel of fig. 1). In fact, this is not an issue for decay MEC, even in the presence of spin-correlation corrections, since the latter do not feature (at present) any extra emissions. There is therefore a good motivation to always include decay MEC in one's simulation, since they may induce, and accurately account for, large shifts in observables sensitive to emissions off decay products4. In contrast, the case of production MEC must be considered more carefully. In MC@NLO-type [5] simulations, the MC counterterms that guarantee the absence of double counting are determined by considering the showers _without_ any additional MEC; this implies that _production_ MEC are certain to spoil the accuracy of an MC@NLO simulation. This is not a simple formal problem; since emissions in MC@NLO are not ordered in hardness, it is not unlikely that the effects of such double counting can be visible in phenomenologically-relevant quantities. On the other hand, in a POWHEG-type simulation the hardest emission is always generated before the MC shower is added; potential NLO effects driven by production MEC are thus restricted to the region where the POWHEG-generated emission did not occur, i.e. a small-\(p_{\mathrm{T}}\) region where the cross section is Sudakov suppressed.
Footnote 3: Or of a multijet-merged one; for what we are concerned with here, the two are identical, and we shall refer to the former while generally understanding also the latter.
Footnote 4: For an example relevant to top-mass determinations, see e.g. [4].
The bottom line is that both MC@NLO- and POWHEG-based simulations are compatible with, and phenomenologically benefit from, decay MEC. Conversely, while production-MEC effects are essentially negligible in POWHEG and can be either included or discarded, they are responsible for double counting in MC@NLO, and must therefore not be employed there. Thus, the procedure to adopt in MC@NLO and POWHEG is in principle clear. In practice,
Figure 1: Sample graphs relevant to MEC in top production (left panel) and top decay (right panel). The top quark is depicted by means of a thicker fermion line. The hard production process \(\mathcal{H}\) is indicated by the blue circle.
unfortunately, the MC@NLO-type MC interfaces to date have disabled _all_ MEC instead of just those necessary to avoid double counting. We have already mentioned that the phenomenological implications of such choices should be restricted to a certain narrow class of observables. However, observed differences between MC@NLO- and POWHEG-based \(t\bar{t}\) predictions at the LHC are commonly attributed to the different underlying matching mechanisms, whereas it might be the case, and certainly for the observables mentioned above, that they stem from the absence as opposed to the presence, respectively, of decay-MEC effects.
The aim of this note is to document how the separation of production and decay MEC can be achieved5 in Pythia8[7], thus paving the way for MC@NLO-based simulations which are phenomenologically more complete than those carried out thus far. We shall present examples for \(t\bar{t}\) production, where Pythia8 standalone runs are compared with POWHEG-based simulations performed with POWHEG-BOX[8], and MC@NLO-based simulations performed with MadGraph5_aMC@NLO[1] (shortened as MG5_aMC henceforth).
Footnote 5: To the best of our understanding, this separation is also possible in Herwig7[6].
## 2 Matrix Element Corrections in the Pythia8 dipole shower
An introduction to the basics of MEC has been given in sect. 1; here, we focus on their implementations within Pythia8, and explicitly consider the various settings that must be chosen in order to preserve the accuracy and precision of the underlying perturbative predictions. The Pythia8 standalone parton shower algorithm is based on a leading-logarithm DGLAP evolution, which is by default corrected by means of MEC to reproduce the appropriate tree-level matrix element behaviour, including in the cases where Pythia8 is interfaced with external calculations. Technically, this is done by populating all of the phase space by candidate emissions whose number is overestimated, and which are then possibly vetoed with a rejection rate given by the ratio of the matrix elements over their shower approximations. While the MEC provide continuity between the exact matrix-element based calculation and the parton shower in the hard emission limit, they also provide the soft emission limit necessary for particles with mass on the order of the soft emission scale. Various options are available that allow for all, some, or none of the MEC to be included in a given simulation. It is therefore important to understand which of these options need to be employed in the context of NLO + PS simulations, so as to avoid the double counting issue discussed in sect. 1.
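The accept/reject step behind these corrections can be illustrated with a short toy sketch of the veto algorithm (this is not Pythia8 code; the kernels, the ordering variable, and all numbers are invented for illustration): trial emissions are generated from an overestimating kernel, and each trial is kept with probability given by the ratio of the exact kernel to the overestimate, so that the accepted emissions follow the exact density.

```python
import random

def next_emission(t_start, t_min, kernel_over, kernel_exact, rnd=random.random):
    """Toy veto algorithm with a constant overestimate `kernel_over` of the exact
    kernel `kernel_exact(t)` (which must satisfy kernel_exact(t) <= kernel_over)."""
    t = t_start
    while t > t_min:
        t = t * rnd() ** (1.0 / kernel_over)   # next trial scale from the overestimate
        if t <= t_min:
            return None                        # no emission above the shower cutoff
        if rnd() < kernel_exact(t) / kernel_over:
            return t                           # trial accepted: exact density recovered
    return None

# toy usage: an "exact" kernel that lies everywhere below the overestimate
print(next_emission(100.0, 1.0, kernel_over=0.5, kernel_exact=lambda t: 0.3))
```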
We start by pointing out that the separation between MEC in production and decay, exemplified by the two graphs in fig. 1, is conceptually well defined in the zero-width limit (the width being that of the particle that decays), since in such a limit the interference between the two types of tree-level graphs vanishes; this is important, because MEC are effected at the level of amplitude squared. However, this is not the way in which MCs, and specifically Pythia8, operate, since the primary distinction there is that between emissions off initial-state and off final-state legs (conventionally referred to as ISR and FSR, respectively). Thus, production MEC are relevant to both ISR and FSR, whereas decay MEC are solely relevant to FSR. This implies that production MEC constitute a difficult problem, since in general at the level of amplitude squared one cannot unambiguously separate the effects due to the graphs where the extra parton is emitted by an initial-state leg from those due to emissions by a final-state leg6.
The Pythia8 solution to the problem posed by production MEC is rather draconian. Namely, MEC are applied to ISR only for colour-singlet resonant production [9] (i.e. to \(q\bar{q}\to V\), with \(V\) a heavy EW boson, and to \(gg/\gamma\gamma\to H\)), since in such a case the ambiguity mentioned before is absent. As far as FSR is concerned, one applies MEC by considering, for each possible emitting colour dipole, a matrix element deemed equivalent to it, according to table 1 of ref. [10] (for example, for \(t\bar{t}\) production, the relevant matrix element is \(\gamma^{*}\to t\bar{t}g\)).
According to what was said previously, the latter FSR mechanism applies to decay MEC as well. However, before resorting to that, priority is given to matrix elements which _exactly_ match the decay one is considering7; if those exist, they are used for the MEC. In the Standard Model, these include the \(t\to Wbg\) and the \(V\to q\bar{q}\) matrix elements; in particular, this implies that a top that decays hadronically is never seen as such, but rather as a decay chain of a top decaying to a \(Wb\) pair, followed by a hadronic \(W\) decay.
Footnote 7: Note that the same strategy would apply to production MEC if exactly-matching matrix elements could be found with an unambiguous separation between ISR and FSR, of which there is none that cannot also be seen as a production\(+\,\)decay mechanism (e.g. \(q\bar{q}\to Z^{*}\to q\bar{q}\); see later).
Regardless of whether equivalent or exactly-matching matrix elements are employed for MEC, their effects can be iterated by Pythia8 for subsequent emissions as well (as opposed to limiting them to the first emission). This is done by applying MEC on the system obtained by discarding all radiated partons except that whose kinematical configuration one seeks to correct presently; this may require a momentum reshuffling (for example: if a \(t\to Wbg\) configuration is obtained by means of two subsequent emissions off the \(b\), such emissions can be both corrected, one after the other, by using the \(t\to Wbg\) matrix element; \(g\to gg\) branchings are not corrected). For a decay chain (such as \(t\to W(\to q\bar{q})b\)), the iteration of MEC is applied individually to each resonance that decays (here, the top and the \(W\)).
The various strategies described above are controlled in Pythia8 by means of the following parameters, separately for ISR (X=SpaceShower) and FSR (X=TimeShower), which are to be set equal to either on (which is the default for all of them) or off:
X:MEcorrections : If =on, use MEC whenever available, regardless of whether they entail the use of equivalent or exactly-matching matrix elements.
X:MEextended : If =on, use equivalent matrix elements for MEC; ignored if X:MEcorrections=off.
X:MEafterFirst : If =on, apply MEC also to all emissions after the first; ignored if X:MEcorrections=off.
Among other things, the above implies that if X:MEcorrections=on, then the MEC stemming from exactly-matching matrix elements are always applied, irrespective of the value of X:MEextended. Thus, presently one can generally switch production MEC off while keeping decay MEC on, but not the other way around. This is just as well, since, as discussed in sect. 1, MEC in production have to be employed only when neither NLO matching nor multi-jet merged simulations are available for the process of interest (which nowadays is a virtual impossibility), whereas MEC in decay are important for better phenomenological predictions in the context of relatively cheap (CPU-wise) generations. Having said that, this discussion clarifies that a more flexible approach to MEC in Pythia8 with respect to that given by the three (per shower type) parameters above would be desirable, namely one that allows the MEC to be applied or disabled until a certain condition is met8.
Footnote 8: See the discussion on this point in appendix C.
In summary, the necessity of avoiding double counting in MC@NLO-type simulations led to the recommendation that all MEC be switched off, by means of:
SpaceShower:MEcorrections = off
TimeShower:MEcorrections = off
An undesirable by-product of these settings is the absence of MEC for decays. However, we have seen that even with the public versions of Pythia8 (starting with Pythia8.219), this can be avoided by setting:
SpaceShower:MEcorrections = off
TimeShower:MEcorrections = on
TimeShower:MEextended = off
in the majority of the cases of interest including, but not limited to, the processes that feature a \(t\bar{t}\) final state; both values of TimeShower:MEafterFirst are acceptable.
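For users steering the shower programmatically, the recommended settings can be applied as in the following sketch; it assumes the Pythia8 Python bindings (the `pythia8` module) are available, and any other interface that forwards the same strings to Pythia::readString works identically. The matching-specific settings listed in appendix A are omitted here.

```python
import pythia8

pythia = pythia8.Pythia()
for setting in [
    "SpaceShower:MEcorrections = off",  # no production MEC: avoids MC@NLO double counting
    "TimeShower:MEcorrections = on",    # keep MEC from exactly-matching matrix elements (decays)
    "TimeShower:MEextended = off",      # no equivalent-matrix-element MEC for production FSR
]:
    pythia.readString(setting)
```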
We stress again that this is not a clean separation between production and decay MEC in the context of a generic process, and therefore the above setting recommendations must not be seen as universal. As a cautionary tale, consider \(Z\)-mediated dijet production, \(q\bar{q}\to Z^{*}\to q\bar{q}\). This process is seen by Pythia8 as the production of a colour singlet followed by its decay, rather than as a unique \(2\to 2\) production process. As such, exactly-matching matrix elements exist for both production and decay (\(Z\leftrightarrow q\bar{q}g\)), which would not be the case had the (\(2\to 2\))-process picture been adopted. It follows that the value of MEextended is irrelevant in this case, and MEC are applied or not depending solely on the value of MEcorrections. Thus, if the latter is equal to on, one has (does not have) double counting if the process is generated by MG5_aMC as a \(2\to 2\) (\(2\to 1\) followed by tree-level \(Z\) decay). The bottom line is that, since avoiding any double counting is of paramount importance, in case of doubt the user is encouraged to contact the Pythia8 authors.
## 3 Monte Carlo event samples
We now proceed to evaluate the impact of MEC in resonance decays in realistic NLO+PS-accurate MC simulations of relevance for LHC phenomenology. For this, we consider different MC samples for the production of stable top-quark pairs in proton-proton collisions at \(\sqrt{s}=13\) TeV.
The first set of samples is generated using the MG5_aMC [1] program at the leading- and the next-to-leading-order (the latter with MC@NLO matching) in the strong coupling constant. The renormalization and factorization scales are set equal to the sum of the transverse energies of the final-state particles at the hard-process level, namely the top, the antitop, and when present a light quark or a gluon. The top quarks are subsequently decayed either leptonically or semileptonically (by including only electrons and muons), and preserving the tree-level spin correlations, with MadSpin [11]. A second sample is generated using standalone Pythia8.309 (thus, at the LO), with scales set equal to the geometric mean of the squared transverse masses of the two outgoing particles. The last sample is generated at the NLO with the HVQ [12] program in POWHEG-BOX-V2 [8, 13]. In POWHEG-BOX the scales are set equal to \(\sqrt{m_{t}^{2}+p_{\mathrm{T}}^{2}}\), with \(m_{t}\) and \(p_{\mathrm{T}}\) the top quark mass and transverse momentum, respectively, evaluated by employing the underlying Born configuration; the \(h_{\mathrm{damp}}\) parameter is set equal to \(m_{t}\). The top
decay is performed at the tree-level, and including the effect of spin correlations, using the approach of ref. [2] as is implemented in the POWHEG-BOX internal routines. In all samples the NNPDF31_nnlo_as_0118 [14] parton distributions are used, and the value of the top quark mass is set equal to \(m_{t}=172.5\) GeV. Both the MG5_aMC LO sample and the Pythia8 standalone one are normalized, prior to any final-state cuts, to the NLO total cross section as is predicted by MG5_aMC.
All of the predictions are then interfaced to Pythia8.309 (employed with the Monash tune [15]) to include the effects of parton showering, multiple parton interactions and underlying event. For the standalone Pythia8 and for the POWHEG-BOX samples, MEC are included wherever available, for both production and decay. For each of the MG5_aMC LO and NLO event sets, we shower the events twice. Namely, we use the current recommended MEC settings, in which matrix-element corrections are completely switched off, as well as the newer settings presented in sect. 2, in which MEC are included only in the decays. Finally, the showered particle-level events are passed through Rivet [16]-based analyses that implement the observables of interest.
## 4 Phenomenological results
The inclusion of MEC to resonance decays is in general expected to affect the kinematics of the reconstructed decay products. We shall show in this section a few illustrative observables that document the extent of these effects. In particular, the Pythia8 LO+PS and POWHEG-BOX +Pythia8 NLO+PS predictions that include MEC to production and decays are compared with MG5_aMC NLO+PS predictions where MEC are either switched off (the current default), or included only for the decay (the settings proposed here). Since the focus of this work is the impact of MEC, no effort is made to simulate hard radiation _in production_ in a manner which is as mutually consistent as possible across the various programs we employ. We note that the corresponding phase-space region is in any case better described by merged approaches, which also feature smaller systematics w.r.t. those of matched simulations. In order to facilitate the disentangling of the effects due to decay MEC from those due to hard radiation in the context of simulations stemming from exactly the same assumptions, in appendix B we present comparisons between the LO- and NLO-accurate predictions of MG5_aMC, for the same observables as those considered in this section.
All of the figures of this section have the same layout, namely consist of a main frame and a lower inset. In the main frame the four predictions (and possibly the experimental data) are displayed in absolute value, as blue (MG5_aMC with decay MEC), green (MG5_aMC without MEC), orange (POWHEG-BOX +Pythia8), and red (Pythia8) histograms. The lower inset shows the ratios of the latter three predictions (and possibly of the data) over the one for MG5_aMC with decay MEC, using the same colour patterns as in the main frame. For ease of reading the plots, the labels indicate, where relevant, which kind of MEC are applied (MEC in decay versus no MEC).
Figure 2 displays the reconstructed top-quark and \(W\)-boson masses in semileptonic \(t\bar{t}\) events. All of the predictions considered are in a better than 5% agreement with each other around the resonance peaks, where the validity of the narrow-width approximation is good, and the impact of production hard radiation is negligible. As one can see from fig. 6, hard radiation does have a visible (but still moderate) impact at large invariant masses, more pronounced in the case of the reconstructed top mass than for the \(W\) mass, which explains the small residual differences among our three benchmark results in that region.
Figure 3: The distribution in the invariant mass of the (lepton,\(b\)-jet) pair (left panel) and the leading \(b\)-jet transverse momentum (right panel) in dileptonic \(t\bar{t}\) events.
Figure 2: The distribution of the reconstructed top quark (left panel) and \(W\)-boson (right panel) mass in semileptonic \(t\bar{t}\) events.
The invariant mass of the (lepton,\(b\)-tagged jet) pair, \(m(l,b_{\rm jet})\) and the leading \(b\)-tagged jet transverse momentum are shown in fig. 3 for dileptonic \(t\bar{t}\) events. The \(m(l,b_{\rm jet})\) distribution exhibits a kinematic edge around \(\sqrt{m_{t}^{2}-m_{W}^{2}}\sim 150\) GeV, sensitive to the value of the top mass. We observe that decay MEC shift the position of the peak, and after their inclusion in MG5_AMC this simulation and the POWHEG-BOX one (and to a good extent that of Pythia8 as well) agree fairly well with each other around and below this peak. At larger \(m(l,b_{\rm jet})\) values the detailed description of the production mechanism, which is treated differently in the three codes, becomes important, and we observe relative differences of up to 30% in this region. Inspection of the left panel of fig. 7 confirms that the vast majority of these discrepancies are indeed due to hard radiation, with a residual \(\mathcal{O}(5\%)\) effect stemming from decay MEC. The impact of decay MEC is also evident at small values of the \(b\)-jet transverse momentum (for \(p_{\rm T}\) smaller than about 50 GeV); thus, in this region decay MEC improve the agreement between the three generators. However, at variance with \(m(l,b_{\rm jet})\), for the present observable the separation between hard-radiation and decay-MEC effects is less clear-cut; this can be best understood by looking at the right panel of fig. 7. Having said that, it is at large \(p_{\rm T}\) that the impact of hard radiation is larger than that of decay MEC; hence, the inclusion of decay MEC in MG5_AMC does not help reduce significantly the \(\mathcal{O}(10\%)\) discrepancies between MG5_AMC, POWHEG-BOX, and Pythia8.
Since decay MEC modify the kinematic properties of the \(b\)-quark and of the \(B\)-hadron resulting from the hadronization of the former, they also have an impact on the radiation pattern inside its corresponding \(b\)-tagged jet. In order to illustrate this, in fig. 4 we consider the distribution of the \(b\)-jet profile, and of the scaled \(B\)-hadron energy spectrum. The \(b\)-jet profile \(r\), a.k.a. \(b\)-jet shape, is defined as the average fraction of the jet transverse energy that lies inside an inner cone of radius \(r<R\), with \(R\) the jet-radius parameter; jets are defined according to the anti-\(k_{\rm T}\) algorithm [17], with parameters that depend on the specific analysis one considers. The scaled \(B\)-hadron energy, defined as the ratio of the \(B\)-hadron energy over the energy of the \(b\)-jet containing it (with energies defined in the lab frame), is a proxy for the longitudinal \(b\)-quark fragmentation function. The decay MEC are found to narrow the distribution of energy around the \(b\)-jet axis (i.e. there are comparatively more events at small \(r\) values), and to shift the peak of the scaled \(B\)-hadron energy spectrum to higher values. The effect of decay MEC is significant in the whole \(r\) range considered, while that of hard radiation is negligible (see the left panel of fig. 8): thus, after their inclusion, the agreement among the MG5_AMC, POWHEG-BOX, and Pythia8 simulations is of \(\mathcal{O}(2\%)\). The radiation pattern is more complicated in the case of the scaled \(B\)-hadron energy (see the right panel of fig. 8), with residual non-negligible hard-radiation effects at small values; still, in the bulk of the distribution the agreement among our three benchmark results is at the same level as for the jet energy profile.
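As an aside, the integrated jet-profile observable used above can be sketched as follows for a single jet, given its constituent transverse momenta and distances to the jet axis; this toy code only fixes the definition and is not part of the Rivet analyses used for our plots (which also perform the averaging over jets and the differential binning).

```python
import numpy as np

def jet_profile(constituent_pt, constituent_dr, r_values):
    """Fraction of the jet transverse energy contained within an inner cone of
    radius r around the jet axis, for one jet."""
    pt = np.asarray(constituent_pt)
    dr = np.asarray(constituent_dr)
    return np.array([pt[dr < r].sum() / pt.sum() for r in r_values])

# toy jet: (pT, DeltaR-to-axis) of three constituents
print(jet_profile([30.0, 10.0, 5.0], [0.02, 0.15, 0.35], r_values=[0.1, 0.2, 0.4]))
```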
We finally compare our predictions to experimental data on jet substructure in semileptonic \(t\bar{t}\) events, based on charged-particle tracks, from the CMS collaboration [18]. In fig. 5 we consider two observables that are sensitive to the radiation pattern inside \(b\)-tagged jets: the groomed subjet distance, \(\Delta R_{g}\), and the jet width, \(\lambda_{1}^{1}\). As is clearly demonstrated by fig. 9, these observables are essentially insensitive to hard radiation, and decay MEC dominate their behaviour. Given what has been observed so far, it is therefore not particularly surprising that the inclusion of decay MEC in MG5_aMC vastly improves its agreement with both POWHEG-BOX and Pythia8 results, whereas the previous MEC settings in MG5_aMC led to discrepancies of \(\mathcal{O}(\pm 15\%)\) with these two codes. The description of the data is also significantly improved.
## 5 Conclusions
We have illustrated how matrix element corrections to resonance decays as implemented in the Pythia8 parton shower can be consistently included in MC@NLO-type NLO + PS simulations in order to improve their phenomenological accuracy, without resulting in any double counting.
Figure 4: The differential \(b\)-jet shape distribution (left panel) and the scaled energy fraction of the \(B\)-hadron (right panel) in semileptonic \(t\bar{t}\) events.
Figure 5: The groomed subjet distance (left panel) and the jet width (right panel) distribution as reconstructed from charged particle tracks in semileptonic \(t\bar{t}\) events, compared to measured data by the CMS Collaboration [18].
We have discussed the impact of these corrections in a process of particular relevance for LHC physics, namely the production of top quark pairs. We have also verified that the same conclusions apply to the associated production of top-quark pairs and a Higgs boson - this must be expected, since this kind of effects are thought to largely factorize w.r.t. the hard process; we thus regard our findings as to be universally valid. We have found that decay MEC can have a relative impact on the shape of distributions of up to 20%. By comparing NLO + PS predictions stemming from MC@NLO- and POWHEG-type matching with standalone Pythia8 ones we find that a significant reduction in the spread among the results occurs if the MEC are included whenever possible in the various simulations. We thus encourage the usage of the MEC settings proposed in this paper for any practical applications, in order to improve the phenomenological accuracy of MC@NLO-type simulations, and to reduce systematic uncertainties.
## Acknowledgments
The authors thank Stefan Prestel for initiating a modification to the Pythia8 code to allow control of matrix element corrections at different stages of the event evolution, as well as Paolo Nason, Simon Plazter, and Silvia Ferrario Ravasio for useful information. SF thanks the CERN TH division for the kind hospitality during the course of this work.
Funding informationSM is supported by the Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the U.S. Department of Energy, Office of Science, Office of High Energy Physics. SA is supported by the Helmholtz Association under the contract W2/W3-123.
## Appendix A Pythia8 settings
We report in this appendix the Pythia8 settings used to obtain the various predictions presented in this paper, and in particular the settings used to enable MEC to top decays in MC@NLO-type simulations. Our numerical results are based on Pythia8.306. No updates relevant to this study have occurred up to the current release Pythia8.310.
All settings not listed take their default values, corresponding to the Monash tune based on the NNPDF2.3 QCD+QED LO parton distributions functions.
### Pythia8 standalone
Top:gg2ttbar = on
Top:qqbar2ttbar = on
### Mg5_aMC
SpaceShower:pTmaxMatch = 1
SpaceShower:pTmaxFudge = 1
TimeShower:pTmaxMatch = 1
TimeShower:pTmaxFudge = 1
TimeShower:globalRecoil = on
TimeShower:limitPTmaxGlobal = on
TimeShower:nMaxGlobalRecoil = 1
TimeShower:globalRecoilMode = 2
TimeShower:nMaxGlobalBranch = 1
TimeShower:weightGluonToQuark = 1
SpaceShower:MEcorrections = off
TimeShower:MEcorrections = on
TimeShower:MEextended = off
### POWHEG-BOX
POWHEG:nFinal = 2
POWHEG:veto = 1
POWHEG:pTdef = 1
POWHEG:emitted = 0
POWHEG:pTemt = 0
POWHEG:pThard = 0
POWHEG:vetoCount = 100
SpaceShower:pTmaxMatch = 2
TimeShower:pTmaxMatch = 2
## Appendix B Comparison with LO MG5_aMC results
In this appendix we consider the same observables as in sect. 4, and compare the results obtained with MG5_AMC +Pythia8 by turning off and on decay MEC. We do so both at the LO and the NLO accuracy; this allows one to disentangle the effects of the MEC from those of hard radiation in production in a more transparent manner w.r.t. the comparison between MG5_AMC and either POWHEG-BOX or Pythia8, in view of the fact that all of the simulations that appear here have identical settings for the short-distance cross sections.
All of the figures have the same layout, with a main frame and a lower inset. In the main frame we show both NLO results (blue: with decay MEC; green: without decay MEC) and LO results (orange: with decay MEC; red: without decay MEC). Thus, the blue and green histograms here are the same as those in sect. 4. The insets present the ratios of the various predictions over the NLO-with-MEC one.
Each of the figures in this appendix is a companion of a figure in sect. 4; in particular, the observable displayed in figs. 6, 7, 8, and 9 are the same as those in 2, 3, 4, and 5, respectively. The reader is encouraged to compare systematically the figures of this appendix with the corresponding ones in the main text.
In essence, in the plots shown here the differences between the blue and the green histograms, and those between the orange and red histograms, indicate that decay MEC have a non-negligible impact. Conversely, the differences between the blue and the orange histograms, and those between the green and red histograms, indicate that hard radiation has a non-negligible impact.
## Appendix C Control of Matrix Element Corrections
The Pythia8 setting TimeShower:MEextended=off disables the equivalent-matrix-element FSR MEC for all parton emissions in the production process. As discussed in the text, consistency with MC@NLO-type matching requires only that the MEC be disabled for the first emission. However, in the Pythia8 shower, the MEC also provide the soft emission limit necessary
Figure 6: The distribution of the reconstructed top quark (left panel) and \(W\)-boson (right panel) mass in semileptonic \(t\bar{t}\) events.
Figure 7: The distribution in the invariant mass of the (lepton,\(b\)-jet) pair (left panel) and the leading \(b\)-jet transverse momentum (right panel) in dileptonic \(t\bar{t}\) events.
for particles with mass larger than or of the order of the soft emission scale. These corrections lead to the dead-cone effect.
For the case of a heavy particle system radiating a gluon, such as \(t\bar{t}\to t\bar{t}g\), it can be argued that the MEC to heavy particle radiators beyond the first emission, e.g. \(t\bar{t}g\to t\bar{t}gg\), are unimportant - dipoles with the \(g\) as the radiator will dominate over the mass suppressed
Figure 8: The differential \(b\)-jet shape distribution (left panel) and the scaled energy fraction of the \(B\)-hadron (right panel) in semileptonic \(t\bar{t}\) events.
Figure 9: The groomed subjet distance (left panel) and the jet width (right panel) distribution as reconstructed from charged particle tracks in semileptonic \(t\bar{t}\) events, compared to measured data by the CMS Collaboration [18].
\(t\) dipoles. We have tested this explicitly by modifying the Pythia8 shower to disable the MEC for only a user-determined number of (QCD) emissions. We observe little or no impact of this choice on any of our numerical results. To simplify the discussion and allow users to experiment with the impact of the MEextended setting in a public version of Pythia8, we have not used this new capability in our results.
This option will be publicly available in an upcoming Pythia8 release.
|
2306.11359
|
Aberration-driven tilted emission in degenerate cavities
|
The compensation of chromatic dispersion opened new avenues and extended the
level of control upon pattern formation in the \textit{temporal domain}. In
this manuscript, we propose the use of a nearly-degenerate laser cavity as a
general framework allowing for the exploration of higher contributions to
diffraction in the \textit{spatial} domain. Our approach leverages the
interplay between optical aberrations and the proximity to the self-imaging
condition which allows to cancel or reverse paraxial diffraction. As an
example, we show how spherical aberrations materialize into a transverse
bilaplacian operator and, thereby, explain the stabilization of temporal
solitons travelling off-axis in an unstable mode-locked broad-area
surface-emitting laser. We disclose an analogy between these regimes and the
dynamics of a quantum particle in a double well potential.
|
S. V. Gurevich, F. Maucher, J. Javaloyes
|
2023-06-20T07:57:09Z
|
http://arxiv.org/abs/2306.11359v2
|
# Quartic beams of temporal solitons in a nearly-degenerate laser cavity
###### Abstract
The engineering of chromatic dispersion opened new avenues of research and extended the level of control upon pattern formation in the _temporal domain_. In this manuscript, we propose the use of a nearly-degenerate laser cavity as a general framework allowing for the exploration of higher contributions to diffraction in the _spatial_ domain. Our approach leverages the interplay between optical aberrations and the proximity to the self-imaging condition, which allows cancelling, or reversing, paraxial diffraction. As an example, we show how spherical aberrations materialize into a transverse bilaplacian operator and fully explain the stabilization of off-axis temporal solitons in an unstable cavity. We disclose a clear analogy between these quartic beams of temporal solitons and the dynamics of a quantum particle in a double well potential. Our predictions are in good agreement with the experimental results from a mode-locked broad-area Vertical-Cavity Surface-Emitting laser.
The understanding of self-organized spatio-temporal patterns is key in photonics and the formation of shocks, vortices, tilted waves, cross-roll patterns, weak optical turbulence, and localized structures was observed experimentally and studied theoretically in large-aspect-ratio lasers, see e.g., [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11]. Another recent example of multi-dimensional spatio-temporal self-organization is the experimental observation of spatio-temporal mode-locking in multimode optical fibers [12; 13], see [14] for a review.
Dispersion engineering consists in combining elements with opposed chromatic properties to achieve an overall partial or total cancellation of the second order dispersion. This simple yet powerful idea permitted exploring the influence of higher order contributions. While optical temporally localized structures often result from the balance between self-phase modulation and anomalous second-order dispersion [15; 16; 17], it was recently proven that third and fourth-order dispersion lead to unforeseen effects such as the stabilization of solitons for positive quadratic and negative quartic dispersion [18; 19] or the realization of purely quartic solitons [20; 21].
The paraxial diffraction emerging as a beam propagates is mathematically equivalent to that of second-order chromatic dispersion but, at variance with the latter, it can never be negative. Notwithstanding, optical cavities in which the path of light is folded onto itself may contain a transverse plane that is its own image after a round-trip [22]. This so-called stigmatic condition is equivalent to an effective cancellation of paraxial diffraction. Imposing additionally the nullification of the round-trip wavefront curvature achieves the self-imaging condition not only for the field intensity but also for its _amplitude_.
In this letter, we show that by combining these two degeneracy conditions with spherical aberrations, which lead to fourth-order wavefront curvature, we create higher-order diffractive contributions that govern the pattern formation in the transverse plane of the cavity.
The accurate description of aberrations and their impact in modern cavities with high finesse is crucial [23]. Spherical aberrations are often present when focusing beams tightly such that the width of the beam becomes comparable to the wavelength. Aberrations may lead to hexagonal Turing patterns [24] and their appropriate description is critical to modern quantum science experiments probing fundamental physics [25]. On the other hand, the proximity to the degenerated self-imaging condition (SIC) was used, for instance, to manipulate the spatial coherence of the field [26; 27; 28; 29; 30], create a perfect coherent absorber [31], topological band structure [32], or to form propagation invariant beams [33]. To the best of our knowledge, we study here for the first time how aberrations can lead to quartic beams of temporal solitons in a nearly-degenerate mode-locked broad-area Vertical-Cavity Surface-Emitting laser.
Figure 1: Schematic of the fundamental HG mode \(A_{s}(x)\) (blue) and the corresponding potential \(V(x)\) (orange) of Eq. (1) in the real (left) and Fourier space (right) for (a) \(b<0\), \(c>0\) and \(s=1\) (b) \(b>0\), \(c>0\) and \(s=1\). Here, a resulting double-well potential in the Fourier space with the minima located at \(k_{0}\) correspond to a tilted HG mode in the real space with wavelength \(\lambda\sim 2\pi/k_{0}\). (c) Numerical solution of Eq. (1) for different values of \(b\) and fixed values of \(s=1\) and \(c=4.97\times 10^{-4}\). White line indicates the second moment.
## Results
To understand the effect of spherical aberrations in a degenerate cavity, let us first consider a quantum harmonic oscillator featuring a fourth-order derivative [34]:
\[-i\partial_{\theta}A=\left(cx^{2}+b\partial_{x}^{2}+s\partial_{x}^{4}\right)A\,. \tag{1}\]
where \(c\), \(b\) and \(s\) are real coefficients. The potential \(V=c\,x^{2}\) in Eq. (1) induces bound states that are defined as \(A(x,\theta)=A_{s}(x)e^{-i\omega\theta}\) with \(\lim_{x\rightarrow\pm\infty}A_{s}=0\). This defines a Sturm-Liouville problem (SLP). If \(bc<0\) and \(s=0\), its solutions are the so-called Hermite-Gauss (HG) modes \(A_{s}(x)=H_{n}(\frac{x}{\sigma})\), with \(H_{n}(x)\) denoting the n-th order Hermite function and \(\sigma=\sqrt{-b/c}\). The HG modes are invariant under the Fourier transform since the multiplicative and differential terms in Eq. (1) are exchanged upon performing the latter. However, the role played by the fourth order derivative in Eq. (1) becomes clear in Fourier space since the SLP reads
\[\left(\omega+c\partial_{k}^{2}-bk^{2}+sk^{4}\right)\hat{A}_{s}=0\,,\,\lim_{k \rightarrow\pm\infty}\hat{A}_{s}(k)=0, \tag{2}\]
where we defined \(\hat{A}_{s}\) as the Fourier transform of \(A_{s}\). It becomes apparent that the situation in Fourier space is equivalent to that of a particle in a _double-well_ potential. Without loss of generality, we assume \(b<0\) and \(c>0\) so that the potentials \(V\) and \(\hat{V}=-b\,k^{2}+sk^{4}\) both present a minimum at the origin as shown in Fig. 1 (a).
The solutions of Eq. (1) remain practically unaffected by small values of \(s\neq 0\) since the wave-function remains concentrated at low values of \(k\) for which \(bk^{2}\gg sk^{4}\). However, if \(b\) changes sign and becomes positive, the Fourier potential \(\hat{V}(k)\) develops a negative curvature around \(k=0\) leading to the destabilization of the corresponding HG modes. Hence, if \(s=0\), bounded solutions cease to exist. However, the situation is modified by the fourth order derivative, as depicted in Fig. 1(b). For \(s>0\), two new minima emerge symmetrically at \(k_{0}=\pm\sqrt{b/2s}\). Therefore, the ground state \(\hat{A}_{s}(k)\) in Fourier space can be approximated by a superposition of two bell-shaped functions localized around \(\pm k_{0}\). In real space this amounts to a strongly modulated eigenmode with wavelength \(\lambda=2\pi/k_{0}\), akin to what has been used to illustrate the interference between matter-waves [35; 36] that forms upon releasing two separate Bose-Einstein condensates in a double-well potential. This is depicted schematically in Fig. 1 (b) as well as quantitatively in panel (c) in which we solved for the ground state of Eq. (1) numerically.
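This double-well picture can be reproduced qualitatively with a few lines of code by diagonalizing the Fourier-space operator of Eq. (2); the sketch below (with illustrative parameter values, not those used for Fig. 1, and a simple finite-difference discretization) is not the code used to produce the figure.

```python
import numpy as np

# Ground state of -c d^2/dk^2 + V_hat(k), with V_hat(k) = -b k^2 + s k^4 and b, c, s > 0.
# The two lowest states are localized in the wells at k0 = +-sqrt(b/2s); transforming
# back to real space yields the cos(k0 x)- and sin(k0 x)-modulated tilted modes.
b, c, s = 1.0, 4.97e-4, 1.0
Nk, kmax = 1024, 3.0
k = np.linspace(-kmax, kmax, Nk)
dk = k[1] - k[0]

V_hat = -b * k**2 + s * k**4
laplacian = (np.diag(np.ones(Nk - 1), -1) - 2.0 * np.eye(Nk)
             + np.diag(np.ones(Nk - 1), 1)) / dk**2
H_hat = -c * laplacian + np.diag(V_hat)

eigvals, eigvecs = np.linalg.eigh(H_hat)
A_hat_ground = eigvecs[:, 0]   # symmetric double bump -> cos(k0 x) modulation in real space
A_hat_first = eigvecs[:, 1]    # antisymmetric combination -> sin(k0 x) modulation
```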
For \(s\neq 0\) the associated SLP does not possess simple analytical solutions. However, using the ansatz \(A(x,\theta)=A_{s}(x)\exp\left[i(k_{0}x-\omega\theta)\right]\) and seeking shallow solutions (see Supplementary Material I), we obtain an approximate SLP \(\left(\omega+cx^{2}-\frac{b^{2}}{4s}-2b\partial_{x}^{2}\right)A_{s}=0\); it is again a Hermite-Gauss equation like Eq. (1) in which the sign of \(b\) is reversed since \(b\rightarrow-2b\). This fully explains the re-stabilization of modulated HG modes whenever \(b>0\). The eigenmodes can be approximated by \(A(x,\theta)=H_{n}\left(\frac{x}{\sigma}\right)\exp\left[i(k_{0}x-\omega\theta)\right]\) and we can combine them to create a family of symmetric and anti-symmetric _tilted_ HG modes \(\Gamma_{n}\) and \(\Psi_{n}\) as
\[\Gamma_{n}(x,\theta) = H_{n}\left(\frac{x}{\sigma}\right)\,\cos(k_{0}\,x)\exp(-i\omega \theta)\,, \tag{3}\] \[\Psi_{n}(x,\theta) = H_{n}\left(\frac{x}{\sigma}\right)\,\sin(k_{0}\,x)\exp(-i\omega \theta)\,, \tag{4}\]
where \(\omega=b^{2}/\left(4s\right)-\sqrt{2bs}\)\((2n+1)\) and \(\sigma^{2}=\sqrt{2b/s}\).
Nonlinear dynamics confined by double-well potentials has been exploited as a prominent platform to explore a wide range of rich physics, such as atomic interferometry between ultracold matter-waves [35; 36], optical spontaneous PT-symmetry breaking [37] as well as the physics of ferromagnetic layers [38]. As we shall see, this fourth-order derivative leading to a double-well potential can be directly linked to the spherical aberrations occurring close to degeneracy in an optical cavity, which leads to the generation of quartic beams of temporal solitons with modulated HG profiles.
The situation considered in Eq. (1) appears when modeling the dynamics of passively mode-locked integrated external-cavity surface-emitting lasers (MIXSELs) coupled face to face to a distant external mirror via a quasi-degenerate imaging system [39] as depicted in Fig. 2. Here, the gain (G) and saturable absorber (SA) media are enclosed in a single micro-cavity and the external mirror is assumed to be ideal while the collimating lens is not. Our theoretical approach is based on a Haus master equation model for passive mode-locking (PML) adapted to the experimentally relevant long-cavity regime [40; 41; 42; 43], where the PML pulses become individually addressable temporal localized states (TLS), see Methods. In order to calculate the spatial operator of the cavity, we assume that spherical aberration is the dominant aberration and that it is essentially due to the presence of the collimating lens with short focal distance. It is a sensible approximation since the latter is exposed to the optical fields with the wider angular distributions. We consider that the collimator focal length evolves with
Figure 2: A schematic of the MIXSEL, where both the gain (green) and the saturable absorption (pink) are contained in the same micro-cavity. It is coupled face-to-face to a distant external mirror by an imperfect self-imaging system.
the radius as \(f_{c}\left(r_{\perp}\right)=f_{0}+\sigma r_{\perp}^{2}\) where \(\sigma\ll 1\) is proportional to the Seidel spherical aberration coefficient. Close to the self-imaging condition (SIC), the effect of spherical aberration can be analytically reduced to that of a transverse bilaplacian operator, see Methods and Supplementary material II. While perfectly adapted to the analysis, the Haus equation consists in a four-dimensional, stiff, multi-scale partial differential equation and its full analysis is beyond the scope of this manuscript. Nevertheless, a qualitative model for the transverse profile of the TLSs such as the one derived in [41; 10] can be obtained by essentially adapting New's method for PML [44] to the situation at hand. This method exploits the scale separation occurring between the pulse evolution, the so-called fast stage in which stimulated emission is dominant, and the slow stage that is controlled by the gain recovery processes. Assuming that the four-dimensional spatio-temporal profile \(E\left(r_{\perp},t,\theta\right)\) can be factored as \(E\left(r_{\perp},t,\theta\right)=A\left(r_{\perp},\theta\right)p\left(t\right)\) with \(p\left(t\right)\) a normalized TLS profile, one obtains a so-called Rosanov [45] equation for the slow evolution of the TLS transverse profile \(A\left(r_{\perp},\theta\right)\) as
\[\partial_{\theta}A = \left[f\left(\left|A\right|^{2}\right)+icr_{\perp}^{2}+\left(d+ ib\right)\nabla_{\perp}^{2}+is\nabla_{\perp}^{4}\right]A. \tag{5}\]
The normalized coefficients \(b\), \(c\) and \(s\) represent the residual paraxial diffraction, parabolic wavefront curvature and aberration coefficients close to SIC. Their expressions and derivations are given in Methods and Supplementary Material I, respectively. We define the effective nonlinearity as
\[f\left(P\right)=\left(1-i\alpha_{1}\right)J_{1}g\left(P\right)+\left(1-i\alpha _{2}\right)J_{2}g\left(sP\right)-\kappa\,. \tag{6}\]
with the nonlinear response of the two active materials to a pulse \(g\left(P\right)=\left(1-e^{-P}\right)/P\)[41; 44; 10].
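To give a feel for how Eqs. (5,6) can be explored numerically, the sketch below integrates the one-dimensional version with a simple split-step Fourier scheme. It is only a rough illustration, not the path-continuation analysis behind Fig. 3: the cavity losses \(\kappa\), the gain bias \(J_{1}\), the grid, the time step and the initial condition are all illustrative choices, and the absorber saturation parameter (written \(\hat{s}\) in the Fig. 3 caption) is denoted `s_hat` to distinguish it from the aberration coefficient \(s\).

```python
import numpy as np

# Parameters loosely following the Fig. 3 caption; kappa (cavity losses) and J1 are
# illustrative choices since the threshold normalization J_th is not reproduced here.
alpha_g, alpha_a = 1.5, 0.5
J1, J2, kappa = 0.5, -0.06, 1.0
s_hat = 15.0                               # absorber saturation (s-hat in the Fig. 3 caption)
d, b, c, s = 1e-4, 0.78, 4.97e-4, 1.0      # diffusion, residual diffraction, curvature, aberration

def g(P):
    """Saturable response g(P) = (1 - exp(-P)) / P, with the P -> 0 limit handled."""
    P = np.asarray(P, dtype=float)
    return np.where(P > 1e-12, (1.0 - np.exp(-P)) / np.maximum(P, 1e-12), 1.0)

def f(P):
    """Effective nonlinearity of Eq. (6)."""
    return (1 - 1j * alpha_g) * J1 * g(P) + (1 - 1j * alpha_a) * J2 * g(s_hat * P) - kappa

# 1D grid and exact propagator of the linear operator (d + i b) d_xx + i s d_xxxx.
Npts, L = 1024, 200.0
dt = 1e-3
x = (np.arange(Npts) - Npts // 2) * (L / Npts)
k = 2 * np.pi * np.fft.fftfreq(Npts, d=L / Npts)
lin_prop = np.exp(dt * ((d + 1j * b) * (-k**2) + 1j * s * k**4))

# Seed close to the expected tilted mode and iterate the split-step scheme.
A = 0.5 * np.exp(-x**2 / 50.0) * np.cos(np.sqrt(b / (2 * s)) * x)
for _ in range(5000):
    A = np.fft.ifft(lin_prop * np.fft.fft(A))                 # linear (spectral) half of the step
    A = A * np.exp(dt * (f(np.abs(A)**2) + 1j * c * x**2))    # local gain/absorption + curvature
print("peak intensity after integration:", float(np.max(np.abs(A)**2)))
```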
Equations (5,6) provide a unified framework that allows bridging our results for spatio-temporal dynamics with the earlier results of [45; 46] for the case of static auto-solitons in bistable interferometers. There, the function \(g\left(P\right)\) should simply be replaced by the static Lorentzian function representing the saturation of an atomic transition \(\sim\left(1+P\right)^{-1}\). As such, our discussion of the effect of aberration close to SIC is equally valid for pulsating temporal solitons and for CW beams.
In one-dimension, Eq. (1) is recovered for the empty cavity, i.e. for \(f=0\). Hence, we expect the emergence of a _stable_ family of \(\Gamma_{n}\) and \(\Psi_{n}\) tilted modes (3,4) in the _unstable cavity_, where the condition \(bc<0\) is violated. In the nonlinear regime, we performed a bifurcation analysis of Eq. (5) in one transverse dimension using the path-continuation methods of the pde2path framework [47]. The results are summarized in Fig. 3 in the unstable cavity regime, where the peak powers for the fundamental tilted modes \(\Gamma_{0}\) (red) and \(\Psi_{0}\) (blue) are shown in the panel (a) as a function of the gain bias \(J_{1}\) normalized to the threshold value \(J_{th}\), whereas two exemplary profiles of \(\Gamma_{0}\) and \(\Psi_{0}\) at the same fixed gain value (gray dashed line) are depicted in Fig. 3 (b,c), respectively. For both modes, we notice the typical subcritical situation that leads to bistable TLS as detailed in [17]; The high intensity branch is stable while the lower branch is unstable and creates a separatrix with the stable off solution. For the parameters chosen the leftmost limiting fold bifurcations \(F_{1}\) and \(F_{2}\) are almost identical for \(\Gamma_{0}\) and \(\Psi_{0}\). A small region of the Andronov-Hopf (AH) instability for \(\Gamma_{0}\) exists between \(H_{22}\) and \(H_{23}\). In this region, a small amplitude oscillation is visible in time simulations. Both modes are limited by the AH bifurcations \(H_{11}\) and \(H_{24}\), respectively, for high gain values. We stress that these nonlinear modulated HG solutions that are solutions of Eqs. (5,6), correspond to a train of temporal solitons whose profile is a quartic beam supported by spherical aberration in an unstable cavity.
In two spatial dimensions, we assume the system to be weakly astigmatic, i.e. that the self-imaging condition is not reached simultaneously for both transverse dimensions, which corresponds to the experimental situation presented further below. As such, we assume an unstable cavity in only one direction, e.g. the \(x\)-direction, i.e. \(b_{x}>0\). The two-dimensional ansatz \(A(x,y,\theta)=A_{s}(x,y)\exp[i(k_{0,x}x-\omega\theta)]\) with \(k_{0,x}=\sqrt{b_{x}/2s}\) leads to the same SLP as in the one-dimensional case, where now the bilaplacian changes both effective diffraction components such that \(b_{x}\rightarrow-2b_{x}\) and \(b_{y}\to b_{y}-b_{x}\) (see Supplementary Material IB for details).
For a cavity stable in the vertical (\(y\)) direction and
Figure 3: (a) Branches of one-dimensional tilted HG modes \(\Gamma_{0}\) (red) and \(\Psi_{0}\) (blue) of Eq. (5) as a function of the normalized gain bias \(J_{1}/J_{\rm th}\). The maximal intensity is shown. The mode \(\Gamma_{0}\) is stable (solid line) between the fold \(F_{2}\) (cyan circle) and the AH point \(H_{22}\), as well as between the AH points \(H_{23}\) and \(H_{24}\) (red squares). The mode \(\Psi_{0}\) gains stability at the fold \(F_{1}\) and is stable up to the AH bifurcation point \(H_{11}\). (b,c) Exemplary profiles of \(\Gamma_{0}\) and \(\Psi_{0}\) at the fixed value of \(J_{1}=0.65J_{\rm th}\) (gray dashed line). The real (turquoise, dashed), imaginary (black, dotted) and intensity (blue, red, solid) fields, respectively, are presented. Parameters are \(\alpha_{g}=1.5,\alpha_{a}=0.5,J_{2}=-0.06,h=1.98,\hat{s}=15,\eta=0.95,d=10^{-4},l=0,b=0.78,c=4.97\cdot 10^{-4},s=1.0\).
unstable in the horizontal (\(x\)) one, we demonstrate, by solving Eqs. (5,6) numerically, the tilted localized pattern shown in Fig. 4 (a-c). The typical evolution for increasing values of \(b_{x}\) (i.e., entering more deeply into the unstable region) is presented together with the corresponding far-field power spectrum \(|\tilde{A}|^{2}\). Panels (d-f) show the two peaks in the far field related to the value of the transverse horizontal wave-number \(\pm k_{0,x}\). One observes that \(k_{0,x}\) increases with \(b_{x}\), leading to a higher spatial frequency of the mode oscillations in the near field.
We have tested these theoretical predictions in a large-aspect ratio Vertical External-Cavity Surface-Emitting Laser (VECSEL) operated in the TLS regime. The VECSEL cavity has an L-shape and it is delimited by a gain mirror (also called 1/2 VCSEL) and by a semiconductor SA mirror (SESAM). The former is based on a GaAs substrate with 12 strain-balanced InGaAs/GaAsP quantum wells (QWs) designed for barrier optical pumping and emitting at 1.06 \(\mu\)m. The latter features a single strained InGaAs/GaAs QW located at \(1\sim 2\) nm from the external surface leading to a carrier's recombination rate approximately two orders of magnitude faster than the gain medium. The VECSEL has been designed to fulfill the requirements for hosting TLS [41; 17], namely i) cavity round-trip larger than the carrier's recombination rates (\(\tau>\gamma_{j}\)) and ii) SESAM's saturable losses larger than a critical value (typically \(\Delta R>8\) %, [48]). The large aspect-ratio is achieved by using a nearly self-imaging external cavity and by pumping the gain mirror with a flat-top elliptical beam having a size of \(90\times 50\)\(\mu\)m. The details of the experimental setup are described in the Suppl. Mat. III.
The time intensity output of the VECSEL reveals multistability between different PML states featuring a number of pulses per round-trip spanning from zero (no emission) to four. Two of these coexisting states are shown in Fig. 5 (a,b). The bifurcation diagram of these states versus the pumping parameter (shown in Fig. S4 in Supplementary Material III) reveals that their stability ranges \(P_{p,a}<P_{p}<P_{p,b}\) share the same upper limit \(P_{p,b}\)=178 mW while the lower limit \(P_{p,a}\) increases with the number of pulses per round-trip. Coexistence of the four states is observed for 169 mW\(<\)\(P_{p}\)\(<\)178 mW. This multistability is the signature of the TLS regime, where the pulses can be individually addressed by shining short pump pulses onto the VECSEL, as shown in [48; 17; 24].
The spatial profile of the VECSEL emission depends on the \(B\) and \(C\) values of the ABCD ray transfer matrix describing the round-trip propagation of the field in the external cavity. The values of these parameters can be controlled experimentally by varying the position of the optical elements. Close to the SIC and for the choice of the optical elements used in our external cavity (see Suppl. Mat. III), the value of \(B\) can be controlled by shifting the SA mirror around its SIC position (\(x_{0}\)) while the value of \(C\) can be controlled by shifting one of the cavity lenses (cf. \(L_{2}\) in Fig. S3 of Suppl. Mat. III) around its SIC position (\(z_{0}\)). By calling these displacements \(\Delta x=x-x_{0}\) and \(\Delta z=z-z_{0}\) respectively, we can show (see Eq. S43 in Suppl. Mat. III) that \(B=8\Delta x\) while \(C=-2\,\Delta z/\left(64\text{mm}^{2}\right)\).
Close to SIC, and for an overall defocusing wavefront curvature (\(C>0\)), a stable cavity is obtained for negative diffraction (\(B<0\)) and in the interval \(-50\)\(\mu\)m \(<\Delta x<0\)\(\mu\)m the VECSEL emits an axial fundamental Gaussian mode whose waist decreases as \(\Delta x\) is increased. For \(\Delta x>0\) the cavity does not support any axial mode. However, instead of switching off, the laser continues to
Figure 5: The VECSEL cavity has been set to exhibit \(C\gtrapprox 0\) (\(\Delta z=-2.74\,\text{mm}\)) and \(B\gtrapprox 0\) (\(\Delta x=35\,\mu\)m). (a,b) Two coexisting PML states observed for the same pumping power (\(P_{p}=175\,\text{mW}\)). The duration of these pulses cannot be resolved by our detection bandwidth, hence their width is smaller than 20 ps. Far-field (c) and near-field (d) intensity distributions of the VECSEL emission corresponding to panel (b). Up to an intensity scaling factor, the same profiles are observed for any mode-locked state coexisting with the state in panel (b). (e) The tilt angle of the emitted beams with respect to the optical axis as a function of \(\Delta z\).
Figure 4: (a-c): Intensity profiles of the 2D pattern obtained by the numerical simulations for three values of \(\widetilde{B_{x}}=(0.78, 1.02, 1.25)\), respectively, at the fixed value of the current \(J_{1}=0.65J_{\text{th}}\). (d-f): Corresponding power spectra in the (\(k_{x},k_{y}\)) plane. Parameters are: \(b_{y}=-0.39,c_{x}=c_{y}=4.97\cdot 10^{-4}\). Other parameters as in Fig. 3.
emit a train of temporal solitons. This emission displays a far-field intensity distribution \(\hat{A}(k_{x},k_{y})\) with two spots equidistant from the center (\(k_{x,y}=0\)) and diametrically opposed, as shown in Fig. 5 (c). This far-field profile reveals that the VECSEL is emitting two tilted beams having transverse wave-vector components \(\pm\overrightarrow{k_{0}}\).
The near-field intensity distribution shown in Fig. 5 (d) shows a striped pattern resulting from the interference of these two tilted beams traveling synchronously. The angle of the tilted beams emitted with respect to the optical axis increases with \(\Delta x\) (i.e. with the value of \(B\)), as shown in Fig. 5 (e) and, correspondingly, the spatial frequency of the near-field profile increases. This dependence of \(\overrightarrow{k_{0}}\) on \(B\) is in agreement with the theoretical analysis (cf. Fig. 4). It is worthwhile noting that, out of the cavity, the two beams can be separated to avoid interference and their near-field intensity distributions correspond to fundamental Gaussian profiles (Fig. S5 in Suppl. Mat. III). Finally, the direction of the transverse wave-vectors \(\pm\overrightarrow{k_{0}}\) in the transverse plane depends on the astigmatism axes of the external cavity which can be varied by a slight tilting of the optical elements and/or by introducing an anisotropic element in the set-up, such as a glass window.
## Conclusion and Outlook
We have demonstrated the effect of wavefront aberrations in a degenerate cavity. We have shown that the interplay between spherical aberrations and the proximity to the SIC can lead to quartic beams of temporal solitons in a mode-locked broad-area Vertical-Cavity Surface-Emitting laser. These modulated quartic beams are analogous to the eigenmodes of a quantum particle in a double-well potential. They can be analytically approximated by spatially modulated Hermite-Gauss modes, and we linked the wavelength of their modulation to the parameters of the cavity. While we focused on the influence of spherical aberrations, we proposed a general framework that permits calculating, in principle, the effect of the other Seidel aberrations such as coma, distortion or field curvature in a cavity close to SIC. Establishing the link between these wavefront-curvature defects and their equivalent representation as a spatial operator shall certainly open interesting new research avenues in photonics. Further, the condition of a large ratio between the focal distances of the aberrated lens and of the other, non-aberrated elements could be relaxed. In this situation, spherical aberration translates into a non-local spatial operator. Such nonlocal operators are known to lead to interesting pattern-formation scenarios, e.g. in vegetation patterns or for soliton interactions [49; 50; 51].
|
2305.08358
|
Quadratic Functional Encryption for Secure Training in Vertical
Federated Learning
|
Vertical federated learning (VFL) enables the collaborative training of
machine learning (ML) models in settings where the data is distributed amongst
multiple parties who wish to protect the privacy of their individual data.
Notably, in VFL, the labels are available to a single party and the complete
feature set is formed only when data from all parties is combined. Recently, Xu
et al. proposed a new framework called FedV for secure gradient computation for
VFL using multi-input functional encryption. In this work, we explain how some
of the information leakage in Xu et al. can be avoided by using Quadratic
functional encryption when training generalized linear models for vertical
federated learning.
|
Shuangyi Chen, Anuja Modi, Shweta Agrawal, Ashish Khisti
|
2023-05-15T05:31:35Z
|
http://arxiv.org/abs/2305.08358v2
|
# Quadratic Functional Encryption for Secure Training in Vertical Federated Learning
###### Abstract
Vertical federated learning (VFL) enables the collaborative training of machine learning (ML) models in settings where the data is distributed amongst multiple parties who wish to protect the privacy of their individual data. Notably, in VFL, the labels are available to a single party and the complete feature set is formed only when data from all parties is combined. Recently, Xu et al. [1] proposed a new framework called _FedV_ for secure gradient computation for VFL using multi-input functional encryption. In this work, we explain how some of the information leakage in Xu et al. can be avoided by using Quadratic functional encryption when training generalized linear models for vertical federated learning.
## I Introduction
In many emerging applications, a machine learning (ML) model must be trained using private data that is distributed among multiple parties. We study the setting of _vertical federated learning_ (VFL) where each individual party has access to a subset of features and labels and must cooperate to train an ML model that makes use of all the features. When privacy of user data is required, homomorphic encryption (HE) [2, 3], which enables computation on encrypted data, provides a natural solution. In recent years, there has been significant interest in HE-based VFL systems, see e.g., [5, 6, 7, 8, 9, 10, 11, 12]. Some works such as [7, 8, 11] consider a two-party protocol without the trusted coordinator, while others [9, 10] consider a multi-party setting. Those frameworks require a large amount of peer-to-peer communication. References [5, 6] propose frameworks comprised of one trusted coordinator, storing the global weights, and two parties, each with a subset of vertically partitioned data. However, these frameworks require the trusted coordinator to share plaintext global weights with the parties, which undermines the model's confidentiality.
In a recent work, Xu et al. [1] proposed a generic and efficient privacy-preserving vertical Federated Learning (VFL) framework known as _FedV_ in the multiparty setting. _FedV_ makes use of _single-input function encryption_ (SIFE) and _multi-input function encryption_ (MIFE), and makes the communication between the clients and the aggregator a one-round interaction. However, _FedV_ still has some key drawbacks. The protocol can reveal more information to the aggregator than just the final gradient in each iteration. Moreover, the protocol reveals the respective updated weights in each iteration to the clients, which creates additional leakage. For more details, please see Section IV.
### _Our Results._
We observe that the leakage created in _FedV_ is caused by choosing an multi-input functional encryption (MIFE) scheme that only supports linear functions. Due to this, the weights are required to be provided to each party for inclusion in encryption, which creates unnecessary leakage. We observe that for linear models, this leakage can be prevented by using a more powerful MIFE scheme, namely MIFE for _quadratic functions_ which can also be constructed using standard assumptions in cryptography [17, 18]. As our main contribution in this work, we demonstrate how such a function encryption scheme can be applied in VFL training by proposing a novel construction of function vectors that serve as a basis for generating decryption keys. Our approach leads to direct computation of the gradients, without leakage of any intermediate results as is the case with _FedV_. We discuss our proposed protocol, _SFedV_, for linear model training in Section IV and the extension to logistic regression model in Appendix C. We provide a thorough analysis of both security and efficiency in Section IV.
## II System Model
### _System Overview_
Our system model involves three types of entities: an aggregator, \(N\) clients, and a Trusted Third Party (TTP). In the \(t\)th iteration for \(t\in[T]\), each client holds a subset of features \(\mathbf{X}_{i}^{t}\in\mathbb{R}^{S\times F_{i}}\) where \(F_{i}\) is the number of features that client party \(i\) holds and \(S\) is the batch size. The complete feature set of the current iteration is expressed as \(\mathbf{X}^{t}=[\mathbf{X}_{0}^{t}\parallel...\parallel\mathbf{X}_{N-1}^{t}]\in\mathbb{R}^{S\times F}\). One of the client parties holds the corresponding labels \(\mathbf{y}^{t}\in\mathbb{R}^{S\times 1}\). The aggregator holds the entire model weights \(\mathbf{w}^{t}=[\mathbf{w}_{0}^{t}\parallel\mathbf{w}_{1}^{t}\parallel...\parallel\mathbf{w}_{N-1}^{t}]\in\mathbb{R}^{F\times 1}\) where \(\mathbf{w}_{i}^{t}\in\mathbb{R}^{F_{i}\times 1}\) is the partial weight vector that pertains to \(\mathbf{X}_{i}^{t}\). The aggregator is responsible for computing the model weights and the TTP is responsible for the generation of keys. In this work we focus on linear models of the form \(f(\mathbf{X}^{t},\mathbf{w}^{t})=\mathbf{X}^{t}\cdot\mathbf{w}^{t}\) with a squared-error loss function \(L(\mathbf{w}^{t})=\frac{1}{S}\sum_{s\in[S]}||\mathbf{y}^{t}[s]-(\mathbf{X}^{t}\cdot\mathbf{w}^{t})[s]||^{2}\). In our discussion, we will define the prediction error as:
\[\mathbf{u}^{t}=(\mathbf{y}^{t}-\mathbf{X}_{0}^{t}\cdot\mathbf{w}_{0}^{t}-...-\mathbf{X}_{N-1}^{t} \cdot\mathbf{w}_{N-1}^{t}). \tag{1}\]
The gradient of \(L(\mathbf{w}^{t})\) with respect to \(\mathbf{w}^{t}\) is expressed as
\[g(\mathbf{w}^{t})=-\frac{2}{S}\Big{[}\,\mathbf{y}^{t\top}\mathbf{X}_{0}^{t}-\sum_{i=0}^{N-1}\mathbf{w}_{i}^{t\top}\mathbf{X}_{i}^{t\top}\mathbf{X}_{0}^{t}\;\Big{\|}\;\ldots\;\Big{\|}\;\mathbf{y}^{t\top}\mathbf{X}_{N-1}^{t}-\sum_{i=0}^{N-1}\mathbf{w}_{i}^{t\top}\mathbf{X}_{i}^{t\top}\mathbf{X}_{N-1}^{t}\,\Big{]}\in\mathbb{R}^{1\times F} \tag{2}\]
The gradient is used to update the global weights in each iteration according to \(\mathbf{w}^{t+1}=\mathbf{w}^{t}-\alpha g(\mathbf{w}^{t})\) where \(\alpha\) is the learning rate. We also discuss logistic regression model in Appendix C. In each iteration of the training phase, our protocol takes as input an encrypted copy of the features \(\mathbf{X}_{i}^{t}\) and encrypted labels \(\mathbf{y}^{t}\in\mathbb{R}^{S}\) from clients, and collaboratively and securely computes the gradients \(g(\mathbf{w}^{t})\).
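As a plaintext reference for what the protocol has to reproduce, the following sketch (illustrative sizes and random data) evaluates the prediction error of Eq. (1) and the blockwise gradient of Eqs. (2)-(3) from vertically partitioned features, and checks the result against the gradient of the centralized squared-error loss.

```python
import numpy as np

rng = np.random.default_rng(0)
S, feats = 8, [3, 2, 4]                  # batch size and per-client feature counts F_i
N, F = len(feats), sum(feats)

X_parts = [rng.normal(size=(S, Fi)) for Fi in feats]   # client i holds X_i in R^{S x F_i}
w_parts = [rng.normal(size=(Fi, 1)) for Fi in feats]   # aggregator holds the partial weights w_i
y = rng.normal(size=(S, 1))                            # one client also holds the labels

# Prediction error of Eq. (1): u = y - X_0 w_0 - ... - X_{N-1} w_{N-1}.
u = y - sum(Xi @ wi for Xi, wi in zip(X_parts, w_parts))

# Blockwise gradient of Eq. (2)/(3): the block for client j is u^T X_j, up to the -2/S factor.
g_blocks = np.concatenate([(u.T @ Xj).ravel() for Xj in X_parts])
g = -(2.0 / S) * g_blocks

# Centralized check: gradient of (1/S) * ||y - X w||^2 with X = [X_0 || ... || X_{N-1}].
X = np.concatenate(X_parts, axis=1)
w = np.concatenate(w_parts, axis=0)
g_ref = (-(2.0 / S) * X.T @ (y - X @ w)).ravel()
assert np.allclose(g, g_ref)
print("gradient shape:", g.shape)        # (F,)
```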
Our threat model is defined as follows: we assume the aggregator is honest-but-curious meaning it correctly follows the algorithms and protocols but will try to infer clients' private data. Additionally, we assume that the aggregator does not collude with anyone. Similarly, the trusted third party is assumed not to collude with anyone. With respect to the clients, we assume that there are at most \(N-1\) dishonest clients who may collude together and share their data to infer honest clients' information.
The protocol enables all the entities to collaboratively compute the gradient using vertically partitioned data. During the training process, we aim to achieve the following privacy requirements: 1) The client \(i\) and the aggregator should learn nothing about data \(\mathbf{X}_{j}\) of client \(j\) for \(i\neq j\). 2) Any client should learn nothing about the trained global model weights \(\mathbf{w}\), intermediate results including the prediction error as in (1) and the gradient \(g(\mathbf{w})\). Moreover, \(i\)th client should not learn anything about his/her own corresponding weights \(\mathbf{w}_{i}\).
## III Preliminaries
### _Functional Encryption_
Functional Encryption [14, 15, 16] is a public key encryption scheme that enables fine-grained access control over the encrypted data. In Single Input Functional Encryption(SIFE), the secret key is associated with a function \(f\), and the ciphertext is associated with the vector \(\mathbf{x}\). The decryption of ciphertext using the secret key outputs \(f(\mathbf{x})\). Intuitively, the security says that the adversary learns nothing about the input \(\mathbf{x}\) beyond what is revealed by \(\{f_{i}(\mathbf{x})\}_{i}\) for any set of secret keys corresponding to the functions \(\{f_{i}\}_{i}\) that the adversary holds.
### _Multi-Input Functional Encryption_
Goldwasser et al. [13] generalized the functional encryption to support functions with multiple inputs. Multi-Input Functional Encryption, denoted as MIFE supports functions with arity greater than one. In MIFE, the secret key is associated with a function \(f\), and the \(i\)th ciphertext is associated with the vector \(\mathbf{x}_{i}\) for \(i\in[N]\) where \(N\) is the arity of the function \(f\). The decryption of all the ciphertexts using the secret key outputs \(f(\mathbf{x}_{1},\ldots,\mathbf{x}_{N})\). We now describe this notion in more detail.
**Definition 1** (Multi-Input Functional Encryption (MIFE) [18]).: **Syntax.** Let \(N\) be the number of encryption slots, and \(\mathcal{F}=\{\mathcal{F}_{N}\}_{N\in\mathbb{N}}\) be a function family such that, for all \(f\in\mathcal{F}_{N}\), \(f:\mathcal{X}_{1}\times\cdots\times\mathcal{X}_{N}\rightarrow\mathcal{Y}\). Here \(\mathcal{X}_{i}\) and \(\mathcal{Y}\) are the input and output spaces, respectively. A multi-input functional encryption (MIFE) scheme for the function family \(\mathcal{F}\) consists of the following algorithms.
\(\mathsf{Setup}(1^{\lambda},1^{N})\rightarrow(\mathsf{PP},\{\mathsf{EK}_{i} \}_{i},\mathsf{MSK})\). It takes a security parameter \(1^{\lambda}\), number of slots \(1^{N}\), and outputs a public parameter \(\mathsf{PP}\), \(N\) encryption keys \(\{\mathsf{EK}_{i}\}_{i\in[N]}\) and a master secret key \(\mathsf{MSK}\). (The remaining algorithms implicitly take \(\mathsf{PP}\) as input.)
\(\mathsf{Enc}(\mathsf{EK}_{i},\mathbf{x})\rightarrow\mathsf{CT}_{i}\). It takes the \(i\)th encryption key \(\mathsf{EK}_{i}\) and an input \(\mathbf{x}\in\mathcal{X}_{i}\), and outputs a ciphertext \(\mathsf{CT}_{i}\).
\(\mathsf{KeyGen}(\mathsf{MSK},f)\rightarrow\mathsf{SK}\). It takes the master secret key \(\mathsf{MSK}\) and function \(f\in\mathcal{F}\) as inputs, and outputs a secret key \(\mathsf{SK}\).
\(\mathsf{Dec}(\mathsf{CT}_{1},\ldots,\mathsf{CT}_{N},\mathsf{SK})\to y\). It takes \(N\) ciphertexts \(\mathsf{CT}_{1},\ldots,\mathsf{CT}_{N}\) and a secret key \(\mathsf{SK}\), and outputs a decryption value \(y\in\mathcal{Y}\) or a special abort symbol \(\bot\).
**Correctness.** An MIFE scheme for the function family \(\mathcal{F}\) is correct if for all \(\lambda,N\in\mathbb{N}\), \((\mathbf{x}_{1},\ldots,\mathbf{x}_{N})\in\mathcal{X}_{1}\times\cdots\times\mathcal{X} _{N}\), \(f\in\mathcal{F}_{N}\), we have
\[\Pr\left[y=f(\mathbf{x}_{1},\ldots,\mathbf{x}_{N}):\begin{array}{l}(\mathsf{PP},\{\mathsf{EK}_{i}\}_{i},\mathsf{MSK})\leftarrow\mathsf{Setup}(1^{\lambda},1^{N})\\ \{\mathsf{CT}_{i}\leftarrow\mathsf{Enc}(\mathsf{EK}_{i},\mathbf{x}_{i})\}_{i\in[N]}\\ \mathsf{SK}\leftarrow\mathsf{KeyGen}(\mathsf{MSK},f)\\ y=\mathsf{Dec}(\mathsf{CT}_{1},\ldots,\mathsf{CT}_{N},\mathsf{SK})\end{array}\right]=1.\]
**Security.** Intuitively, security says that no information about the messages can be learned by the adversary except what is revealed by virtue of functionality - in more detail, an adversary possessing some ciphertexts and secret keys can perform decryption and learn the output of the functionality, which itself leaks something about the underlying plaintext. But besides this necessary leakage, the adversary does not learn anything. We provide the formal definition of security in Appendix A.
**Multi-Input FE for Quadratic Functions.** Agrawal, Goyal, and Tomida [18] constructed a multi-input functional encryption for quadratic functions (qMIFE). Let us define the \(N\) input quadratic function \(f\) as \(f(\mathbf{x}_{1},\ldots,\mathbf{x}_{N})=\langle\mathbf{c},\mathbf{x}\otimes\mathbf{x}\rangle\) where \(\mathbf{x}=(\mathbf{x}_{1}||\ldots||\mathbf{x}_{N})\). Here \(\otimes\) denotes the Kronecker product. A \(n\)-input MIFE scheme for the function class \(\mathcal{F}_{m,n}\) is defined as: each \(i\)th client encrypts \(\mathbf{x}_{i}\in\mathbb{Z}^{m}\) using \(i\)th encryption key \(\mathsf{EK}_{i}\) to get the \(i\)th ciphertext \(\mathsf{CT}_{i}\) for \(i\in[n]\). The \(\mathsf{KeyGen}\) algorithm issues the secret key \(\mathsf{SK}\) for \(\mathbf{c}\in\mathbb{Z}^{(mn)^{2}}\) where \(\mathbf{c}\) is the vector representation of the function \(f\in\mathcal{F}_{m,n}\). The \(\mathsf{Dec}\) algorithm uses the secret key \(\mathsf{SK}\) to decrypt \(\mathsf{CT}_{1},\ldots,\mathsf{CT}_{n}\) to get \(\langle\mathbf{c},\mathbf{x}\otimes\mathbf{x}\rangle\) and nothing else.
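The functionality that \(\mathsf{qMIFE}\) decryption reveals can be checked on plaintexts: any quadratic function of the concatenated inputs is an inner product between a coefficient vector \(\mathbf{c}\) and \(\mathbf{x}\otimes\mathbf{x}\). The small sketch below (no cryptography, illustrative data) verifies this identity with \(\mathbf{c}\) taken as the row-major flattening of a coefficient matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 3, 2                              # per-slot input length and number of slots
xs = [rng.integers(-5, 5, size=m) for _ in range(n)]
x = np.concatenate(xs)                   # x = (x_1 || ... || x_n), length m*n

# Any quadratic function f(x_1, ..., x_n) = x^T Q x can be written as <c, x kron x>.
Q = rng.integers(-3, 3, size=(m * n, m * n))
c = Q.ravel()                            # vector representation of the quadratic function

lhs = x @ Q @ x                          # the quadratic form evaluated directly
rhs = c @ np.kron(x, x)                  # what qMIFE decryption reveals: <c, x (kron) x>
assert lhs == rhs
print("quadratic form value:", lhs)
```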
### _FedV_
As in the system model of _SFedV_ (Section II-A), _FedV_ involves an aggregator, \(N\) clients, and a Trusted Third Party (TTP). Each client holds a subset of features \(\mathbf{X}_{i}\in\mathbb{R}^{S\times F_{i}}\), and the first client also has the corresponding labels \(\mathbf{y}\in\mathbb{R}^{S}\) along with its subset of features. The aggregator holds the complete model weights \(\mathbf{w}=[\mathbf{w}_{0}||\mathbf{w}_{1}||...||\mathbf{w}_{N-1}]\) where \(\mathbf{w}_{i}\in\mathbb{R}^{F_{i}}\) is the partial weight vector that pertains to \(\mathbf{X}_{i}\). In each iteration, there are two steps to compute the gradient
\[g(\mathbf{w})=\frac{2}{S}\big{[}\mathbf{u}^{\top}\mathbf{X}_{0}\|\ldots\|\mathbf{u}^{\top}\mathbf{X}_ {N-1}\big{]} \tag{3}\]
where \((\mathbf{u})^{\top}=(\mathbf{y}-\mathbf{X}_{0}\mathbf{w}_{0}-...-\mathbf{X}_{N-1}\mathbf{w}_{N-1})^{\top}\).
In the first step called Feature Dimension Secure Aggregation, _FedV_ uses a Multi-Input Functional Encryption (MIFE) scheme for the inner product functionality [19, 20]
to securely compute the prediction error \(\mathbf{u}\). In this step, the aggregator sends each client \(i\) the partial weights \(\mathbf{w}_{i}\). Then each client \(i\) encrypts each sample of \((-\mathbf{X}_{i}\mathbf{w}_{i})\in\mathbb{R}^{S}\) and sends the ciphertext set \(\mathsf{CT}_{-\mathbf{X}_{i}\mathbf{w}_{i}}^{\mathsf{MIFE}}\) to the aggregator. The first client, holding the label \(\mathbf{y}\), encrypts each sample of \(\mathbf{y}-\mathbf{X}_{1}\mathbf{w}_{1}\) and sends the ciphertext set \(\mathsf{CT}_{\mathbf{y}-\mathbf{X}_{1}\mathbf{w}_{1}}^{\mathsf{MIFE}}\) to the aggregator. The aggregator asks the TTP for the secret key \(\mathsf{SK}_{\mathbf{v}}^{\mathsf{MIFE}}\) corresponding to the fusion vector \(\mathbf{v}\). This vector \(\mathbf{v}\) can be a binary vector where a one in the \(i\)th position means that the aggregator has received a ciphertext from client \(i\). Using the secret key \(\mathsf{SK}_{\mathbf{v}}^{\mathsf{MIFE}}\), the aggregator decrypts the ciphertexts \(\{\{\mathsf{CT}_{-\mathbf{X}_{i}\mathbf{w}_{i}}^{\mathsf{MIFE}}\}_{i=1}^{N-1},\mathsf{CT}_{\mathbf{y}-\mathbf{X}_{1}\mathbf{w}_{1}}^{\mathsf{MIFE}}\}\) to get the prediction error \(\mathbf{u}\) (Equation (1)), which is the inner product of the fusion vector \(\mathbf{v}\) and the partial predictions from the clients.
In the second step, called Sample Dimension Secure Aggregation, _FedV_ uses a Single-Input Functional Encryption (\(\mathsf{SIFE}\)) scheme to compute the gradient \(g(\mathbf{w})\). In this step, each client \(i\) encrypts each element of \(\mathbf{X}_{i}\) and sends the ciphertext set \(\mathsf{CT}_{\mathbf{X}_{i}}^{\mathsf{SIFE}}\) to the aggregator. On receiving the secret key \(\mathsf{SK}_{\mathbf{u}}^{\mathsf{SIFE}}\) corresponding to the prediction error \(\mathbf{u}\) from the TTP, the aggregator decrypts the ciphertexts to get \(\{\mathbf{u}^{\top}\mathbf{X}_{i}\}_{i\in[N]}\). The aggregator further processes the decryption results \(\{\mathbf{u}^{\top}\mathbf{X}_{i}\}_{i\in[N]}\) according to Equation (3) to get the gradient \(g(\mathbf{w})\). Using the gradients, the model weights are updated and then the training of the next epoch starts. Note that the transmission of the \(\mathsf{MIFE}\) ciphertexts (\(\mathsf{CT}_{-\mathbf{X}_{i}\mathbf{w}_{i}}^{\mathsf{MIFE}}\) or \(\mathsf{CT}_{\mathbf{y}-\mathbf{X}_{1}\mathbf{w}_{1}}^{\mathsf{MIFE}}\)) and the \(\mathsf{SIFE}\) ciphertext (\(\mathsf{CT}_{\mathbf{X}_{i}}^{\mathsf{SIFE}}\)) can be simultaneous. Thus the communication between each client and the aggregator is a one-round interaction in each iteration.
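The two FedV steps can be mirrored on plaintexts to make explicit what the aggregator ends up holding; in FedV these quantities are obtained from \(\mathsf{MIFE}\)/\(\mathsf{SIFE}\) decryptions, whereas the sketch below (illustrative data, with the label holder taken as client 0) only reproduces the revealed functionality and follows the sign convention of Eq. (3).

```python
import numpy as np

rng = np.random.default_rng(2)
S, feats = 6, [2, 3]
X_parts = [rng.normal(size=(S, Fi)) for Fi in feats]
w_parts = [rng.normal(size=(Fi,)) for Fi in feats]
y = rng.normal(size=S)

# Step 1 (Feature Dimension Secure Aggregation): clients contribute partial predictions;
# MIFE decryption with the fusion vector v reveals the prediction error u to the aggregator.
partial = [y - X_parts[0] @ w_parts[0]] + [-(Xi @ wi) for Xi, wi in zip(X_parts[1:], w_parts[1:])]
v = np.ones(len(partial))                              # binary fusion vector
u = sum(vi * pi for vi, pi in zip(v, partial))         # the leaked intermediate result in FedV

# Step 2 (Sample Dimension Secure Aggregation): SIFE decryption with the key for u reveals u^T X_i.
g = (2.0 / S) * np.concatenate([u @ Xi for Xi in X_parts])   # Eq. (3)
print("u (visible to the aggregator in FedV):", np.round(u, 3))
print("gradient blocks:", np.round(g, 3))
```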
**Leakage in FedV.** While _FedV_ preserves each client's data, it reveals the intermediate result, the prediction error \(\mathbf{u}\), to the aggregator. Moreover, in the Feature Dimension Secure Aggregation step, the \(i\)th client is required to know the respective weight \(\mathbf{w}_{i}\). Additionally, the aggregator can use the secret key of the \(t\)th iteration to decrypt the ciphertext of some other iteration \(t^{\prime}\) for \(t^{\prime}\neq t\) to infer more information about client data.
## IV The Protocol
Now we introduce our protocol, which uses Multi-input Quadratic Functional Encryption (\(\mathsf{qMIFE}\)) [18] as a privacy-enhancing technology for training in the VFL setting.
At the beginning of the training phase, the aggregator initializes the global weights \(\mathbf{w}^{0}\) and starts training. The training phase is iterative, where in the \(t\)th iteration, the TTP runs the \(\mathsf{qMIFE}\).Setup algorithm to get the public parameters \(\mathsf{PP}^{t}\), \(N\) encryption keys \(\{\mathsf{EK}_{i}\}_{i\in[N]}^{t}\) and a master secret key \(\mathsf{MSK}^{t}\), and then delivers the encryption key \(\mathsf{EK}_{i}^{t}\) to the corresponding client \(i\). After receiving the encryption key and determining the batch \(\mathbf{X}^{t}\) used for training in this iteration, each client uses \(\mathsf{EK}_{i}^{t}\) to encrypt \(\mathbf{x}_{i}^{t}\), which is the vectorized \(\mathbf{X}_{i}^{t}\) (\(\mathsf{vec}(\cdot)\) stacks the columns of a matrix into a vector), to get the ciphertext \(\mathsf{CT}_{i}^{t}\). Each client sends \(\mathsf{CT}_{i}^{t}\) to the aggregator. The client that holds the labels encrypts \(\mathbf{x}_{i}^{t}\) and \(\mathbf{y}^{t}\) with \(\mathsf{EK}_{i}^{t}\) to get ciphertexts \(\mathsf{CT}_{i}^{t}\) and \(\mathsf{CT}_{\mathbf{y}}^{t}\), respectively, and then sends \((\mathsf{CT}_{i}^{t},\mathsf{CT}_{\mathbf{y}}^{t})\) to the aggregator. At the same time, the aggregator computes a set of function vectors \(\mathsf{C}^{t}\) according to the model weights \(\mathbf{w}^{t}\) of the current iteration and sends them to the TTP to generate a set of decryption keys. The detailed procedure is described in Section IV-A. Then, the aggregator decrypts all the ciphertexts \(\{\{\mathsf{CT}_{i}^{t}\}_{i=0}^{N-1},\mathsf{CT}_{y}^{t}\}\) received from the clients using the secret keys received from the TTP, to get each element of (2). By concatenating those elements and further processing the results, the aggregator gets the gradients. After this, it can update the global weights and start the training of the next iteration. Algorithm 1 shows the training procedure for linear models and also supports the training of logistic regression as discussed in Appendix C.
### _Construction of Function Vectors_
Our goal is to compute the gradient \(g(\mathbf{w})\) (Equation (2)). For simplicity, we drop the superscript \(t\) in our discussion. We define \(\mathbf{x}=[\mathbf{x}_{0}||...||\mathbf{x}_{N-1}||\mathbf{y}]\) where \(\mathbf{y}\) is the label vector and \(\mathbf{x}_{i}\) is the vectorized \(\mathbf{X}_{i}\). Recall that \(g(\mathbf{w})\) is a vector of length \(F\), where \(F\) is the total number of features. The key insight is that for the \(f\)th element in (2), \(f\in[F]\), we construct a function vector \(\mathbf{c}_{f}\) based on the weights \(\mathbf{w}\) of the current iteration, such that \(g(\mathbf{w})[f]=-\frac{2}{S}\big{\langle}\mathbf{c}_{f},\mathbf{x}\otimes\mathbf{x}\big{\rangle}\). Then the aggregator concatenates \(g(\mathbf{w})[f],f\in[F]\) to obtain the gradients.
For simplicity, we define the following:
\[\mathbf{z}_{i}=\mathbf{u}^{\top}\mathbf{X}_{i},\quad\mathbf{b}_{j}^{i}=\mathbf{w}_{j}^{\top}\mathbf{X}_{j}^{\top}\mathbf{X}_{i},\quad\mathbf{b}_{y}^{i}=\mathbf{y}^{\top}\mathbf{X}_{i}. \tag{4}\]
Now we decompose \(g(\mathbf{w})\) and \(\mathbf{x}\otimes\mathbf{x}\) to simplify the construction. Note that to compute \(g(\mathbf{w})\) it suffices to compute \(\mathbf{z}_{0},...,\mathbf{z}_{N-1}\) as in (3). Here we define a set of function vectors \(\mathcal{C}=\{\mathcal{C}_{i}\}_{i=0}^{N-1}\) with \(\mathcal{C}_{i}=\{\mathbf{c}_{i,p}\}_{p=0}^{F_{i}-1}\), where \(\mathcal{C}_{i}\) is the subset of function vectors used to compute the elements of \(\mathbf{z}_{i}\), and \(\mathbf{c}_{i,p}\) is the function vector used to compute the \(p\)th element of \(\mathbf{z}_{i}\) as in (5).
\[\mathbf{z}_{i}[p]=\langle\mathbf{c}_{i,p},\mathbf{x}\otimes\mathbf{x}\rangle \tag{5}\]
We construct \(\mathbf{c}_{i,p}\) block by block according to the decomposition of \(\mathbf{x}\otimes\mathbf{x}\). Consider dividing \(\mathbf{x}\otimes\mathbf{x}\) into \(N+1\) blocks as in the middle of Figure 1.
Since in the computation of \(\mathbf{z}_{i}\), only the component \(\mathbf{x}_{i}\otimes\mathbf{x}\) is required, we set \(\mathbf{0}\) vector of the corresponding lengths as the coefficients of the blocks \(\{\mathbf{x}_{j}\otimes\mathbf{x}\}_{j=0}^{N}\) if \(j\neq i\). We design \(\mathbf{a}_{i,p}\) to make \(\mathbf{z}_{i}[p]=\langle\mathbf{a}_{i,p},\mathbf{x}_{i}\otimes\mathbf{x}\rangle\).
Let \(\mathsf{D}_{i}^{f}\) denote the \(f\)th column of \(\mathbf{X}_{i}\). Thus \(\mathbf{x}_{i}=[\mathsf{D}_{i}^{0};...;\mathsf{D}_{i}^{F_{i}-1}]\). Note that in the computation of \(\mathbf{z}_{i}[p]\) only the column \(\mathsf{D}_{i}^{p}\) is used; thus we set the \(\mathbf{0}\) vector as the coefficients of \(\{\mathsf{D}_{i}^{q}\otimes\mathbf{x}\}_{q=0}^{F_{i}-1}\) for \(q\neq p\). Hence we can express \(\langle\mathbf{a}_{i,p},\mathbf{x}_{i}\otimes\mathbf{x}\rangle=\langle\mathbf{d},\mathsf{D}_{i}^{p}\otimes\mathbf{x}\rangle\). We design \(\mathbf{d}\) to achieve \(\mathbf{z}_{i}[p]=\langle\mathbf{d},\mathsf{D}_{i}^{p}\otimes\mathbf{x}\rangle\).
```
procedure Training-Aggregator(w^t, S, {F_i}_{i=0..N-1}, N)
    res = 0^F
    for each i in 0, ..., N-1 do
        for each p in 0, ..., F_i - 1 do
            c^t_{i,p} := CGEN(w^t, S, F, N, i, p)
        end for
    end for
    C^t := {c^t_{i,p} : i in [N], p in [F_i]}
    {{qMIFE.SK^t_{c_{i,p}}}_{p=0..F_i-1}}_{i=0..N-1} = obtain-dk-from-TTP(C^t)
    for each i in 0, ..., N-1 do
        if party i has label y^t then
            (CT^t_i, CT^t_y) = obtain-ct-from-client()
        else
            CT^t_i = obtain-ct-from-client()
        end if
    end for
    CT^t = {(CT^t_i)_{i=0..N-1}, CT^t_y}
    for each n in 0, ..., N-1 do
        for each p in 0, ..., F_n - 1 do
            idx = F_0 + ... + F_{n-1} + p
            res[idx] = qMIFE.Dec(CT^t, qMIFE.SK^t_{c_{n,p}})
        end for
    end for
    grad L(w^t) = -(2/S) * res + lambda * grad R(w^t)
    w^{t+1} = w^t - alpha * grad L(w^t)
end procedure

procedure Training-Client(X^t_i)
    function obtain-ct-from-client()
        qMIFE.EK^t_i = obtain-ek-from-TTP()
        x^t_i := vec(X^t_i)
        if party i has label y^t then
            CT^t_i := qMIFE.Enc(qMIFE.EK^t_i, x^t_i)
            CT^t_y := qMIFE.Enc(qMIFE.EK^t_i, y^t)
            return (CT^t_i, CT^t_y) to Aggregator
        else
            CT^t_i := qMIFE.Enc(qMIFE.EK^t_i, x^t_i)
            return CT^t_i to Aggregator
        end if
    end function
end procedure

procedure Training-TTP(1^lambda, 1^N)
    function obtain-ek-from-TTP()
        (PP^t, {EK^t_i}_i, MSK^t) <- qMIFE.Setup(1^lambda, 1^N)
        deliver qMIFE.EK^t_i to party i, for i in [N]
    end function
    function obtain-dk-from-TTP(C^t)
        for each i in 0, ..., N-1 do
            for each p in 0, ..., F_i - 1 do
                qMIFE.KeyGen(qMIFE.MSK^t, c^t_{i,p}) -> qMIFE.SK^t_{c_{i,p}}
            end for
        end for
        return {{qMIFE.SK^t_{c_{i,p}}}_{p=0..F_i-1}}_{i=0..N-1}
    end function
end procedure
```
**Algorithm 1** Training Procedure
From (2), (3) and (4) note that we can express:
\[\mathbf{z}_{i}[p]=-\sum_{j=0}^{N-1}\mathbf{b}^{i}_{j}[p]+\mathbf{b}^{i}_{y}[p] \tag{6}\]
where we have \(\mathbf{z}_{i}[p]=(\mathbf{u}^{\top}\mathbf{X}_{i})[p]\), \(\mathbf{b}^{i}_{j}[p]=(\mathbf{w}^{\top}_{j}\mathbf{X}^{\top}_{j}\mathbf{X}_{i})[p]\), and \(\mathbf{b}^{i}_{y}[p]=(\mathbf{y}^{\top}\mathbf{X}_{i})[p]\). We expand \(\mathbf{b}^{i}_{j}[p]\) by first performing the multiplication term by term and then summing the products as in (7). Based on this equation, we determine the method for constructing the coefficient vector \(\mathbf{d}\).
\[\mathbf{b}^{i}_{j}[p]=(\mathbf{w}^{\top}_{j}\mathbf{X}^{\top}_{j}\mathbf{X}_{i})[p]=(\mathbf{w}^{\top}_{j}\mathbf{X}^{\top}_{j})\mathsf{D}^{p}_{i}=\left[\sum_{f=0}^{F_{j}-1}\mathbf{w}_{j}[f]\mathsf{D}^{f}_{j}[0]\parallel...\parallel\sum_{f=0}^{F_{j}-1}\mathbf{w}_{j}[f]\mathsf{D}^{f}_{j}[S-1]\right]\mathsf{D}^{p}_{i}=\sum_{s=0}^{S-1}\sum_{f=0}^{F_{j}-1}\mathbf{w}_{j}[f]\mathsf{D}^{f}_{j}[s]\,\mathsf{D}^{p}_{i}[s] \tag{7}\]
Now the goal is to construct \(\mathbf{d}\) such that \(\mathbf{z}_{i}[p]=\langle\mathbf{d},\mathsf{D}^{p}_{i}\otimes\mathbf{x}\rangle\). We keep decomposing \(\mathbf{d}\) and \(\mathsf{D}^{p}_{i}\otimes\mathbf{x}\) to blocks as in Figure 2.
In Figure 2, blocks with the same color will be designed to compute the corresponding term on the right side of Equation (6). Considering \(\mathbf{d}_{s,j},s\in[S]\), we can set the following relations:
\[\mathbf{b}^{i}_{j}[p] =\sum_{s=0}^{S-1}\langle\mathbf{d}_{s,j},\mathsf{D}^{p}_{i}[s]\mathbf{x}_{j}\rangle \tag{8}\] \[\mathbf{b}^{i}_{y}[p] =\sum_{s=0}^{S-1}\langle\mathbf{d}_{s,N},\mathsf{D}^{p}_{i}[s]\mathbf{y}\rangle \tag{9}\]
Next, we introduce the approach to construct \(\mathbf{d}_{s,j},j\in[N]\) according to the corresponding weight piece. We remove the outer summation in (7) and (8) to obtain:
\[\sum_{f=0}^{F_{j}-1}\mathbf{w}_{j}[f]\mathsf{D}^{f}_{j}[s]\mathsf{D}^{p}_ {i}[s]=\langle\mathbf{d}_{s,j},\mathsf{D}^{p}_{i}[s]\mathbf{x}_{j}\rangle \tag{10}\]
We design \(\mathbf{d}_{s,j}\) to achieve (10) as Figure 3 shows. We decompose \(\langle\mathbf{d}_{s,j},\mathsf{D}^{p}_{i}[s]\mathbf{x}_{j}\rangle\) into blocks \(\mathsf{D}^{p}_{i}[s]\mathsf{D}^{f}_{j},f\in[F_{j}]\). In each block \(\mathsf{D}^{p}_{i}[s]\mathsf{D}^{f}_{j}\), we take one entry \(\mathsf{D}^{p}_{i}[s]\mathsf{D}^{f}_{j}[s]\) and set its coefficient to \(\mathbf{w}_{j}[f]\) just as the left side of (10). For other unneeded terms, we set the coefficient to \(\mathbf{0}\).
The approach to construct \(\mathbf{d}_{s,j},j\in[N]\) can be easily extended to the case for \(\mathbf{d}_{s,N}\). By designing the blocks of \(\mathbf{d}\) this way, we can achieve \(\mathbf{z}_{i}[p]=\langle\mathbf{d},\mathsf{D}_{i}^{p}\otimes\mathbf{x}\rangle\). The algorithms to construct the function vectors \(\mathbf{c}_{i,p}\) and \(\mathbf{d}\) are provided in Appendix B.
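The construction can be sanity-checked on plaintexts. The sketch below does not follow the exact block layout of Appendix B; instead, for illustration, it builds an equivalent coefficient vector \(\mathbf{c}_{i,p}\) directly from the relation \(\mathbf{z}_{i}[p]=\langle\mathbf{c}_{i,p},\mathbf{x}\otimes\mathbf{x}\rangle\) with \(\mathbf{x}=[\mathbf{x}_{0}||...||\mathbf{x}_{N-1}||\mathbf{y}]\), and verifies that the vector depends only on the current weights while the client data enter only through \(\mathbf{x}\otimes\mathbf{x}\).

```python
import numpy as np

rng = np.random.default_rng(3)
S, feats = 4, [2, 3]                       # batch size and per-client feature counts
N = len(feats)
X_parts = [rng.normal(size=(S, Fi)) for Fi in feats]
w_parts = [rng.normal(size=(Fi,)) for Fi in feats]
y = rng.normal(size=S)

# x = [vec(X_0) || ... || vec(X_{N-1}) || y], with vec(.) stacking columns.
x = np.concatenate([Xi.ravel(order="F") for Xi in X_parts] + [y])
offsets = np.cumsum([0] + [S * Fi for Fi in feats])    # start of vec(X_j) inside x
y_off = offsets[-1]

def idx_X(j, s, f):                        # position of X_j[s, f] inside x
    return offsets[j] + f * S + s

def idx_y(s):                              # position of y[s] inside x
    return y_off + s

def c_vector(i, p):
    """Coefficient vector c_{i,p} (flattened matrix Q) with <c_{i,p}, x kron x> = z_i[p].
    It is built from the weights only; the data never enter the construction."""
    Q = np.zeros((len(x), len(x)))
    for s in range(S):
        Q[idx_y(s), idx_X(i, s, p)] += 1.0                            # + y^T X_i        term
        for j in range(N):
            for f in range(feats[j]):
                Q[idx_X(j, s, f), idx_X(i, s, p)] += -w_parts[j][f]   # - w_j^T X_j^T X_i term
    return Q.ravel()

# Verify z_i[p] = <c_{i,p}, x kron x> against the direct computation u^T X_i.
u = y - sum(Xi @ wi for Xi, wi in zip(X_parts, w_parts))
xx = np.kron(x, x)
for i in range(N):
    for p in range(feats[i]):
        assert np.isclose(c_vector(i, p) @ xx, (u @ X_parts[i])[p])
print("all z_i[p] recovered from <c_{i,p}, x kron x>")
```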
### _Privacy Analysis_
Recall the aim of our framework. We want the client \(i\) and the aggregator to learn nothing about data \(\mathbf{X}_{j}\) of client \(j\) for \(i\neq j\). We also want that any client should learn nothing about the trained global model weights \(\mathbf{w}\), intermediate results including error between labels and feed-forward output as in (1) and the gradient \(g(\mathbf{w})\). Moreover, \(i\)th client should not learn anything about his/her own corresponding weights \(\mathbf{w}_{i}\). In this section, we prove that we have achieved the above-stated goal.
**Theorem IV.1**.: _If Quadratic MIFE (\(\mathsf{qMIFE}\)) is secure according to definition 2, then in each training iteration \(t\), \(i\)th client's data \(\mathbf{X}_{i}^{t}\) for \(i\in[N]\) is hidden from client \(j\) and the aggregator, trained global model weights \(\mathbf{w}^{t}\) and intermediate results \(\mathbf{u}^{t}\) as in (1) are hidden from the clients and \(i\)th client learns nothing about weight \(\mathbf{w}_{i}\)._
Let us fix the iteration number to be \(t\). In each iteration, the TTP runs the \(\mathsf{qMIFE}\).Setup algorithm to get public parameters \(\mathsf{PP}^{\prime}\), \(N\) encryption keys \(\{\mathsf{E}K_{i}\}_{i\in[N]}^{t}\) and a master secret key \(\mathsf{MSK}^{t}\). The clients encrypt their respective data and send the ciphertexts to the aggregator. The aggregator asks the TTP for the secret key corresponding to the set of vectors \(\mathbf{C}^{t}\). Each vector \(\mathbf{c}_{i,p}^{t}\) is set in such a way that the \(\mathsf{qMIFE}\).Dec only reveals the inner product \(\langle\mathbf{c}_{i,p}^{t},\mathbf{x}\otimes\mathbf{x}\rangle=((\mathbf{u}^{t})^{\top}\mathbf{X}_ {i}^{t})[p]\). Quadratic MIFE ensures that nothing about \(\mathbf{X}_{i}^{t}\) and \(\mathbf{u}^{t}\) is revealed to the aggregator. Moreover, each client encrypts their data using different encryption keys. The ciphertexts are indistinguishable; hence, clients cannot predict other clients' data.
Unlike _FedV_, in our framework, the client runs the \(\mathsf{qMIFE}\).Enc algorithm which only takes their respective data and encryption keys as input. Hence, each client \(i\) learns nothing about their respective weight \(\mathbf{w}_{i}^{t}\). Moreover, Quadratic MIFE ensures that client \(i\) learns nothing about the global weight \(\mathbf{w}^{t}\). The aggregator does not share the gradients in any form with the clients. Therefore, the gradient \(g(\mathbf{w}^{t})\) is also not revealed.
**Importance of using a new \(\mathsf{qMIFE}\) instance for each iteration.** Let the ciphertext in iteration \(t\) be \(\mathsf{CT}^{t}\) and the secret key be \(\mathsf{qMIFE}\).\(\mathsf{SK}^{t}\). Suppose the TTP uses the same \(\mathsf{MSK}\) to generate the secret key \(\mathsf{qMIFE}\).\(\mathsf{SK}^{t+1}\) for some other iteration, say \(t+1\); then the aggregator may use \(\mathsf{qMIFE}\).\(\mathsf{SK}^{t+1}\) to decrypt the ciphertext \(\mathsf{CT}^{t}\) instead of using it to decrypt the ciphertext \(\mathsf{CT}^{t+1}\). Using this "mix-and-match" attack, i.e., performing decryptions with a secret key and ciphertexts from different iterations, the aggregator will learn \(g(\mathbf{w})=-\frac{2}{S}\big{[}(\mathbf{u}^{t+1})^{\top}\mathbf{X}_{0}^{t}||...||(\mathbf{u}^{t+1})^{\top}\mathbf{X}_{N-1}^{t}\big{]}\).
If TTP generates different \(\mathsf{qMIFE}\) instance for every iteration, then decryption of \(\mathsf{CT}^{t}\) with secret key \(\mathsf{qMIFE}\).\(\mathsf{SK}^{t+1}\) will give some garbage value which will be irrelevant for the aggregator. Therefore, it is important for the TTP to generate a new \(\mathsf{qMIFE}\) instance for every iteration.
**Comparison of our framework with FedV.** Unlike _FedV_, our framework does not leak the intermediate result \(\mathbf{u}\) to the aggregator. The global weights \(\mathbf{w}\) are kept secret from the clients and each client also learns nothing about their respective weights. In addition to this, we also ensure that the aggregator cannot use the mix-and-match attack to learn some useful information.
### _Efficiency Analysis_
**Communication.** Regarding communication complexity, _SFedV_ requires one-way client-aggregator communication, while _FedV_ needs one-round client-aggregator communication due to the delivery of global weights by the aggregator. Additionally, _SFedV_ uses a new \(\mathsf{qMIFE}\) instance in each iteration to prevent mix-and-match attacks. Thus, an increase of communication between the TTP and the clients becomes necessary. Note that _FedV_ can also prevent mix-and-match attacks by using new instances of \(\mathsf{MIFE}\) and \(\mathsf{SIFE}\) in each iteration. In such a scenario, the client-TTP communication complexity for each iteration will be the same for both _FedV_ and _SFedV_.
**Computation.** Table I provides a comparison between _FedV_ and _SFedV_ in terms of the number of encryption and decryption processes in each iteration. The significant improvement of _SFedV_ is attributed to the advancement of quadratic \(\mathsf{MIFE}\) and the careful design of function vectors.
In terms of the number of vectors for which secret keys are generated, _FedV_ uses two vectors for its two steps: \(\mathbf{v}\) for feature-dimension secure aggregation and \(\mathbf{u}\) for sample-dimension secure aggregation. In contrast, our _SFedV_ framework employs \(F\) vectors \(\mathbf{c}_{i,p}\), where \(F\) is the total number of features. The increase can be justified by our use of a quadratic \(\mathsf{MIFE}\) scheme instead of an inner-product \(\mathsf{MIFE}\).
## V Conclusions
The prior \(N\)-party VFL framework _FedV_ incurs information leakage that seriously undermines individual data privacy. In this work, to address these privacy issues, we propose a leak-free protocol, called _SFedV_, for multiparty VFL regression model training. Our approach simplifies the VFL pipeline and preserves the privacy of client data, model weights, and intermediate results by designing special function vectors and using a quadratic MIFE scheme to compute the gradients directly.
TABLE I: Comparison of _FedV_ and _SFedV_ regarding the number of encryption processes on each client and the number of decryption processes on the aggregator in each iteration.

| | _FedV_ (\(\mathsf{MIFE}\)) | _FedV_ (\(\mathsf{SIFE}\)) | _SFedV_ (\(\mathsf{qMIFE}\)) |
| --- | --- | --- | --- |
| Encryptions on each client | \(S\) | \(S\cdot F_{i}\) | 1 |
| Decryptions on the aggregator | \(S\) | \(F\) | \(F\) |

\(S\): Batch size. \(F\): Total number of features. \(F_{i}\): Number of features belonging to client \(i\).
|
2304.07342
|
GPULZ: Optimizing LZSS Lossless Compression for Multi-byte Data on
Modern GPUs
|
Today's graphics processing unit (GPU) applications produce vast volumes of
data, which are challenging to store and transfer efficiently. Thus, data
compression is becoming a critical technique to mitigate the storage burden and
communication cost. LZSS is the core algorithm in many widely used compressors,
such as Deflate. However, existing GPU-based LZSS compressors suffer from low
throughput due to the sequential nature of the LZSS algorithm. Moreover, many
GPU applications produce multi-byte data (e.g., int16/int32 index,
floating-point numbers), while the current LZSS compression only takes
single-byte data as input. To this end, in this work, we propose GPULZ, a
highly efficient LZSS compression on modern GPUs for multi-byte data. The
contribution of our work is fourfold: First, we perform an in-depth analysis of
existing LZ compressors for GPUs and investigate their main issues. Then, we
propose two main algorithm-level optimizations. Specifically, we (1) change
prefix sum from one pass to two passes and fuse multiple kernels to reduce data
movement between shared memory and global memory, and (2) optimize existing
pattern-matching approach for multi-byte symbols to reduce computation
complexity and explore longer repeated patterns. Third, we perform
architectural performance optimizations, such as maximizing shared memory
utilization by adapting data partitions to different GPU architectures.
Finally, we evaluate GPULZ on six datasets of various types with NVIDIA A100
and A4000 GPUs. Results show that GPULZ achieves up to 272.1X speedup on A4000
and up to 1.4X higher compression ratio compared to state-of-the-art solutions.
|
Boyuan Zhang, Jiannan Tian, Sheng Di, Xiaodong Yu, Martin Swany, Dingwen Tao, Franck Cappello
|
2023-04-14T18:23:25Z
|
http://arxiv.org/abs/2304.07342v2
|
# gpuLZ: Optimizing LZSS Lossless Compression for Multi-byte Data on Modern GPUs
###### Abstract.
Today's graphics processing unit (GPU) applications produce vast volumes of data, which are challenging to store and transfer efficiently. Thus, data compression is becoming a critical technique to mitigate the storage burden and communication cost. LZSS is the core algorithm in many widely used compressors, such as Deflate. However, existing GPU-based LZSS compressors suffer from low throughput due to the sequential nature of the LZSS algorithm. Moreover, many GPU applications produce multi-byte data (e.g., int16/int32 index, floating-point numbers), while the current LZSS compression only takes single-byte data as input. To this end, in this work, we propose gpuLZ, a highly efficient LZSS compression on modern GPUs for multi-byte data. The contribution of our work is fourfold: First, we perform an in-depth analysis of existing LZ compressors for GPUs and investigate their main issues. Then, we propose two main algorithm-level optimizations. Specifically, we (1) change prefix sum from one pass to two passes and fuse multiple kernels to reduce data movement between shared memory and global memory, and (2) optimize existing pattern-matching approach for multi-byte symbols to reduce computation complexity and explore longer repeated patterns. Third, we perform architectural performance optimizations, such as maximizing shared memory utilization by adapting data partitions to different GPU architectures. Finally, we evaluate gpuZ on six datasets of various types with NVIDIA A100 and A4000 GPUs. Results show that gpuLZ achieves up to 272.1\(\times\) speedup on A4000 and up to 1.4\(\times\) higher compression ratio compared to state-of-the-art solutions.
## 1. Introduction
Many applications running on high-performance parallel and distributed systems generate large amounts of data, which leads to storage bottlenecks due to limited capacity. Meanwhile, interconnect technologies in distributed systems advance relatively more slowly than computing power, causing inter-node communication and I/O bottlenecks to become a severe issue (Brandt et al., 2016). This motivates the design of software solutions to increase the interconnect bandwidth, such as communication-avoiding linear algebra (Brandt et al., 2016; Brandt et al., 2016).
Data compression is a popular solution to reduce communication and I/O overheads significantly. For example, due to the high data reduction capabilities, lossy compression has recently been extensively studied to alleviate I/O bottlenecks in large-scale distributed applications such as high-performance computing (HPC) simulations. Since the saved data is often used for post-analysis and visualization, errors introduced by error-bounded lossy compression are acceptable for many applications (Brandt et al., 2016; Brandt et al., 2016; Brandt et al., 2016; Brandt et al., 2016; Brandt et al., 2016).
However, lossy compression may not be applicable for inter-node communication in most distributed applications since data is usually exchanged between nodes at least once per time step, resulting in an accumulation of compression errors beyond the acceptable level. This is especially important for HPC simulations where numerical stability is critical, as accumulated compression errors can affect the correctness of the results.
Unlike lossy compression, lossless compression can avoid the loss of accuracy despite the relatively low compression ratio. In practice, among many lossless compression algorithms, LZ-series lossless compression is one of the most important algorithms. It can identify repeated subsequences/patterns, thereby reducing spatial redundancy of the input sequence. Specifically, LZSS (LSS, 2016) is a derivative of the classical LZ77 algorithm (Zhou et al., 2017) (i.e., the first LZ compression algorithm). It holds a sliding window for the input stream to search for the longest match and then encodes each match as one pointer, including its length and offset (will be discussed in §2). Input data with longer repeated subsequences are more likely to achieve higher compression ratios with LZSS. As an entropy coder, LZSS is often combined with other types of lossless coders
(e.g., with Huffman encoding, as in Deflate (Huffman, 2017)) to remove both spatial and frequency redundancy.
On one hand, multi-byte data such as long integers and floating-point numbers are common as input to lossless compression (Zhu et al., 2017; Wang et al., 2018; Wang et al., 2018). However, the classic LZSS compression only takes a single byte as the input unit, ignoring the data characteristics of different data types. Using multiple bytes as units in LZSS can improve both compression throughput (due to fewer symbols to process) and ratio (due to longer repeated patterns).
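To make the single-byte versus multi-byte point concrete, the toy sketch below performs a greedy LZSS-style longest-match search over a sliding window, treating the input as a sequence of fixed-width symbols; widening the symbol from one byte to, say, four bytes shortens the symbol stream and tends to expose longer repeated patterns in multi-byte data such as int32 indices. It is only an illustrative CPU reference, not gpuLZ's GPU implementation, and the window size, minimum match length and sample data are arbitrary choices.

```python
import struct

def lzss_tokens(data: bytes, sym_width: int = 1, window: int = 256, min_len: int = 2):
    """Greedy LZSS-style factorization over fixed-width symbols.

    Returns a list of tokens: ('lit', symbol) or ('match', offset, length),
    where offset/length are counted in symbols, not bytes."""
    usable = len(data) - len(data) % sym_width
    syms = [data[i:i + sym_width] for i in range(0, usable, sym_width)]
    tokens, pos = [], 0
    while pos < len(syms):
        best_len, best_off = 0, 0
        for cand in range(max(0, pos - window), pos):     # search the sliding window
            length = 0
            while (pos + length < len(syms)
                   and cand + length < pos                # source stays behind the cursor
                   and syms[cand + length] == syms[pos + length]):
                length += 1
            if length > best_len:
                best_len, best_off = length, pos - cand
        if best_len >= min_len:
            tokens.append(("match", best_off, best_len))  # pointer: (offset, length)
            pos += best_len
        else:
            tokens.append(("lit", syms[pos]))             # literal symbol
            pos += 1
    return tokens

# Multi-byte data (e.g., int32 indices) often repeats at 4-byte granularity.
payload = struct.pack("<8i", 7, 8, 9, 7, 8, 9, 7, 8)
print(len(lzss_tokens(payload, sym_width=1)), "tokens with 1-byte symbols")
print(len(lzss_tokens(payload, sym_width=4)), "tokens with 4-byte symbols")
```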
On the other hand, more and more applications are being implemented on the GPU due to its high performance and energy efficiency (Krizhevsky et al., 2015), resulting in multiple critical use cases of GPU compression. For example, GPU compression can speed up GPU-CPU data transfers (Wang et al., 2018). It can also reduce GPU memory footprint to support larger input in deep learning (Krizhevsky et al., 2015). However, it is challenging to parallelize LZSS on GPUs due to its strong data dependency (Wang et al., 2018). Simply chunking data and distributing them to different GPU threads would cause warp divergence (Wang et al., 2018).
CULZSS (Zhu et al., 2017) is a state-of-the-art open-source GPU LZSS compressor. It can achieve relatively higher compression throughput on the GPU than the CPU solution (Krizhevsky et al., 2015). However, CULZSS faces several critical issues: (1) it cannot handle multi-byte data, and simply modifying its algorithm to accommodate multi-byte input may result in a significant drop in compression ratio; (2) it lacks tuning of parameters such as block size and sliding window size for different GPU architectures; (3) its encoding process is performed on the CPU, which introduces high CPU-GPU data movement overhead.
To solve the above issues, we propose a highly optimized LZSS compression for multi-byte data on modern GPUs (called gpuLZ1). Specifically, we deeply analyze CULZSS and identify its performance issues. Based on these issues, we propose two main algorithm-level optimizations and a series of performance optimizations. These optimizations can improve compression throughput and ratio simultaneously. To the best of our knowledge, _this is the first work that optimizes LZSS compression for multi-byte data on GPUs._
Footnote 1: The code is available at [https://github.com/hipdacs-lab/GPULZ](https://github.com/hipdacs-lab/GPULZ).
The main contributions of this paper are summarized as follows.
* We develop a highly efficient LZSS compression on GPUs for multi-byte data. We perform an in-depth analysis of CULZSS and investigate its main performance issues.
* We optimize the prefix sum from one pass to two passes and fuse multiple kernels (e.g., matching and local prefix sum) to reduce data movement between shared memory and global memory.
* We propose a pattern-matching method for multi-byte data, which can reduce computational complexity and explore longer repeated patterns.
* We propose a data partitioning method that can adapt to different GPU architectures to maximize shared memory utilization.
* We evaluate gpuLZ on six datasets with NVIDIA A100 and A4000 GPUs. The evaluation demonstrates that gpuLZ outperforms CULZSS by up to 27.1\(\times\) in compression throughput with no degradation of compression ratio (even 20.6% improvement).
In §2, we present the background on the CUDA architecture, the LZSS algorithm, GPU implementations of LZSS, and their issues. In §3, we present the design of gpuLZ with our algorithm-level and architectural performance optimizations. In §4, we evaluate gpuLZ and compare it with other GPU LZ compressors. In §5, we conclude the paper and discuss our future work.
## 2. Background and Motivation
In this section, we present the background of CUDA architecture, LZSS algorithm, and its state-of-the-art GPU implementations.
### CUDA Architecture
CUDA is a parallel computing platform and API that allows the software to use NVIDIA GPUs for general-purpose processing. Thread is the basic programmable unit for GPU programmers to use massive numbers of CUDA cores. CUDA threads are organized into three levels, grid, block, and thread. Specifically, a group of 32 threads is called a _warp_. All threads in the same warp will execute the same instruction. However, if different threads in a warp follow different control paths, some threads are masked from performing any useful work. This situation is called _warp divergence_, which is one of the fundamental factors that limit the performance of GPUs. Multiple warps are combined to form a thread _block_, and a set of thread blocks form a thread _grid_.
Regarding the CUDA memory hierarchy, the largest and slowest memory is called the _global memory_, which is accessible by all threads. The next layer is _shared memory_, which is a fast and programmable cache. All the threads in the same thread block have access to the same shared memory. Lastly, the fastest layer is the thread-private register to each thread. To achieve good performance, CUDA programmers must effectively utilize the memory subsystem. For example, when threads in a warp request contiguous global-memory locations, these requests can be aggregated into a single transaction (called _coalesced memory access_); non-coalesced memory access will cause a significant performance slowdown.
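As a minimal illustrative sketch (ours, not part of gpuLZ), the kernel below shows both properties: the guarded load is coalesced because consecutive threads read consecutive elements, while the data-dependent branch can make threads of the same warp follow different paths and diverge.

```
// Illustrative only: coalesced global-memory access vs. warp divergence.
__global__ void accessPatterns(const int *in, int *out, int n) {
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    if (tid < n) {
        int v = in[tid];       // coalesced: thread i reads element i of a contiguous array
        if (v & 1)             // data-dependent branch: threads of one warp may take
            out[tid] = v * 2;  // different paths and are serialized (warp divergence)
        else
            out[tid] = v + 1;
    }
}
```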
### LZSS
LZSS is a variant of LZ77 (Wang et al., 2018), the first algorithm in the LZ compression family. LZSS has the same fundamental idea as other LZ algorithms: search through a sliding window for the longest possible sub-sequence match and encode all identified matches. To clearly explain the LZSS algorithm, we introduce some basic concepts as follows.
* **Input stream** is the sequence of bytes to be compressed.
* **Symbol** is the single-/multi-byte unit of the input stream.
* **Look-ahead buffer** is the byte sequence from the coding position to the end of the input stream.
* **Coding position** is the byte position in the input stream currently encoded in the look-ahead buffer.
* **Sliding window** is a buffer (of size \(W\)), which is the number of bytes from the coding position backward. The window is empty at the beginning, then grows to size \(W\) as the input stream is processed, and "slides" along with the coding position.
* **Pointer** contains two numbers: the first one is the length of the match, and the second one is the starting offset. The starting offset is the count of bytes from the coding position back to the window, and the length is the number of bytes to read forward from the starting offset.
* **Literal** represents the current byte if there is no match.
* **Flag array** is a bit array in which each bit indicates whether the corresponding bytes (in the compressed data) represent a pointer or a literal.

The basic steps of LZSS can be summarized as follows.
1. Set the coding position to the start of the input stream;
2. Find the longest match started from the coding position;
3. If a match is found, output the pointer \(P\) and move the coding position and the sliding window \(L\) bytes forward, where \(L\) denotes the length of the match;
4. If no match is found, output the first byte in the look-ahead buffer and move the coding position and the sliding window one byte forward;
5. Use a flag array to record whether a match is found;
6. If the look-ahead buffer is not empty, return to Step 2).
Figure 1 illustrates a simple example to demonstrate how LZSS works. Specifically, the original 102 bytes are compressed to 56 bytes, including a flag array. A pair of numbers in brackets represents a pointer, where the first is the offset and the second is the length. If \(W\) (also the maximum match length) is less than 256, both the offset and the length can be represented in one byte. This example demonstrates why LZSS is well suited for compressing data with many repeated patterns. However, it also indicates LZSS's strong sequential execution characteristics, since the current match must start from the coding position determined by the last match. Due to this strong dependency, LZSS cannot fully leverage the massive parallelism of the GPU for high performance.
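To make these steps concrete, the following sequential sketch encodes a byte stream with a greedy window search. It is an illustration under our own simplifying assumptions (a 3-byte minimum pointer length and a window of at most 255 bytes so that offset and length each fit in one byte); it is not the gpuLZ implementation.

```
// Sequential LZSS sketch (illustrative). Assumes W <= 255; matches shorter
// than 3 bytes are emitted as literals, as in the flag-array scheme above.
#include <cstdint>
#include <vector>

struct Token { bool isPointer; uint8_t offset; uint8_t length; uint8_t literal; };

std::vector<Token> lzssEncode(const std::vector<uint8_t> &in, int W = 255) {
    std::vector<Token> out;
    size_t pos = 0;                                        // coding position
    while (pos < in.size()) {
        int bestLen = 0, bestOff = 0;
        size_t start = pos > (size_t)W ? pos - W : 0;      // sliding-window start
        for (size_t cand = start; cand < pos; ++cand) {    // search the window
            int len = 0;
            while (pos + len < in.size() && len < 255 &&
                   in[cand + len] == in[pos + len])
                ++len;                                      // extend the current match
            if (len > bestLen) { bestLen = len; bestOff = (int)(pos - cand); }
        }
        if (bestLen >= 3) {                                 // pointer: (offset, length)
            out.push_back({true, (uint8_t)bestOff, (uint8_t)bestLen, 0});
            pos += bestLen;                                 // advance past the match
        } else {                                            // literal: one input byte
            out.push_back({false, 0, 0, in[pos]});
            ++pos;
        }
    }
    return out;                                             // one flag bit per token
}
```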
### GPU LZ Compression
CULZSS (Kernel et al., 2017; Zhang et al., 2018) is a state-of-the-art GPU implementation of the LZSS algorithm. It first partitions the input data into multiple chunks to increase the parallelism and then launches a matching kernel on the GPU and an encoding kernel on the CPU. Specifically, the matching kernel lets each GPU thread find the longest match for each byte of the input stream and stores all matches in the global memory. After that, all matches will be copied from the GPU to the CPU. Finally, the CPU encoding kernel will sequentially process these matches like the original LZSS. Note that not all matches will be used in the encoding: if one match covers the following matches, these overlapped matches will be skipped. Furthermore, Ozsoy _et al._ (Ozsoy et al., 2018) improved CULZSS by overlapping the GPU and CPU computations to increase the performance. In addition, CULZSS-Bit (Zsoy et al., 2018) adapted CULZSS to handle bit-wise symbols.
We also note that the nvCOMP library (Kernel et al., 2017) developed by NVIDIA provides a series of lossless compressors on the GPU, including LZ4. LZ4 is a fast LZ compression implementation, especially featuring fast decoding. Unlike LZSS, LZ4 does not use a flag array to indicate the match pointer; instead, it uses a fixed format with a token (including literal and match length) to save each match. However, compared to LZSS's flag array, the fixed-length token may result in a lower compression ratio, especially when the length of continuous literals is relatively short. More importantly, we cannot modify the proprietary nvCOMP to accommodate multi-byte data. Thus, in this work, we focus on CULZSS as our major comparison baseline.
### Issues of CULZSS
CULZSS faces several critical issues: (1) No support of multi-byte data: CULZSS treats the input as a sequence of single bytes regardless of the data type. This can lead to lower compression ratios because patterns are in multi-byte units and/or lower compression throughput due to the higher computational complexity of searching for matches in units of single bytes. (2) Fixed data chunk size: Data chunk size highly impacts the compression ratio and throughput, but it is a fixed value in CULZSS. Thus, it is challenging to adapt CULZSS to different GPU architectures with different shared memory sizes. (3) Fixed sliding window size: CULZSS uses a fixed sliding window size, which prevents a potential tradeoff between compression ratio and throughput. (4) Under-utilization of shared memory: CULZSS underutilizes the GPU shared memory, resulting in multiple buffer updates in shared memory. (5) CPU encoding: CULZSS copies matches (twice the size of the input stream) from the GPU to the CPU and performs the encoding, which causes a significant performance drop due to data copies and slow sequential CPU encoding.
## 3. Our Proposed Design
In this section, we first overview our gpuLZ. Then, we describe our proposed algorithm-level optimizations to solve the above issues. Finally, we present the implementation details of our GPU kernels.
### Overview of gpuLZ
The goal of our design is to fully utilize the GPU resources for high compression throughput and to maintain a compression ratio as high as that of the original/sequential LZSS algorithm. We illustrate our proposed gpuLZ in Figure 3. Specifically, gpuLZ consists of five steps: matching, local (block-level) prefix sum, encoding, global (grid-level) prefix sum, and deflating. We propose three kernels for these steps. Specifically, Kernel I is for the matching step, the local prefix sum, and encoding to generate the compressed symbols for each data chunk (§3.3.2); Kernel II is for the global prefix sum to
Figure 1. An example of LZSS algorithm. The left is original data, and the right is compressed data. Two numbers in brackets denote length and offset.
Figure 3. Workflow of our proposed gpuLZ.
calculate the memory address (or write offset) of each compressed data chunk in the global memory (§3.3.3); and Kernel III is for deflating empty bytes based on the calculated offsets and writing the compressed data (§3.3.4).
Note that we use three separate kernels because a grid-level synchronization across thread blocks is needed to calculate the global memory addresses. In comparison, there is an implicit synchronization between two consecutive kernels. Moreover, although we cannot use a single kernel to handle all computations in the shared memory due to hardware constraints (§3.2.2), we propose an optimization (i.e., two-pass prefix sum with kernel fusion) that minimizes data movement between the shared and global memories and reduces the global memory footprint (§3.2.2).
For the matching step, similar to CULZSS, we also find the longest match in the sliding window for each symbol in the input stream, even though some matches will not be used in the encoding process. However, as discussed in §2.4, CULZSS's matching step does not consider multi-byte symbols, which significantly degrades the compression ratio and throughput. To solve this issue, we propose a multi-byte matching approach, which can reduce the computational complexity and find longer matches (§3.2.3). For the encoding step, as discussed in §2.4, CULZSS must copy matches from the GPU to the CPU and perform the CPU encoding sequentially, leading to low throughput. This makes CULZSS impractical for use cases where data generated on the GPU needs to be compressed. Thus, we propose a new compression workflow, including encoding and deflating (see §3.2.2 and §3.2.1, respectively), to remove the CPU-side processing completely.
### Algorithm-level Optimizations
Next, we describe our four algorithm-level optimizations in detail.
#### 3.2.1. Exploring Optimal Workflow
First, we explore the optimal workflow of LZSS compression on the GPU. CULZSS uses the CPU to encode/compress the matches found by the GPU, as shown in Figure 4 (a). The CPU encoding kernel and the GPU matching kernel are executed asynchronously to maximize overlapping. However, this workflow makes the encoding process difficult to parallelize, and the GPU-to-CPU data movement is time-consuming. To solve this issue, we propose to perform the encoding on the GPU. One straightforward solution is to replace the CPU encoding directly with a GPU kernel without changing the workflow, as illustrated in Figure 4 (b). However, this workflow is still not optimal because every encoding kernel depends on the last matching kernel. This dependency causes only one kernel to be executed at a time (i.e., sequential execution), leading to a GPU resource starvation problem, especially for small data chunks. Moreover, the multiple kernel launches further increase the time overhead.
To address this issue, we propose our second workflow, as illustrated in Figure 4 (c). In this design, we perform the matching and encoding steps in two separate GPU kernels. While the dependency between these two kernels still exists, the size of the data processed is changed from one data chunk to the entire input stream, which solves the starvation problem. However, this design requires a large global memory space to store the intermediate data (i.e., high memory footprint) and causes a large amount of data to be moved between global memory and shared memory. To this end, we propose our third workflow that performs the matching and encoding steps in the same kernel, as shown in Figure 4 (d). One major issue with fusing the matching and encoding kernels is that the output includes empty bytes. Thus, we need to add another kernel to eliminate these empty bytes (called "deflating kernel"). This workflow is non-trivial because it is only feasible with our proposed two-pass prefix sum (detailed in §3.2.2).
#### 3.2.2. Two-pass Prefix Sum with Kernel Fusion
Fine-grained parallelization of LZSS on GPU is more challenging than coarse-grained parallelization on CPU because GPU thread blocks do not communicate with each other while the kernel function is running, causing the memory offset of each compressed data chunk to be unknown. To solve this issue, we first locally calculate the size of each symbol after compression (could be either a pointer or a literal), then globally synchronize these sizes, and finally calculate the global offset for each symbol based on a prefix sum. One approach to achieve global synchronization in a CUDA kernel is to use "cooperative groups" (Hardt et al., 2016). However, the cooperative groups API has a limited number of threads (e.g., 1,280 threads on \(A4000\)), smaller than we need, which is typically 5,000 threads. Thus, we need to divide the kernel into two with an implicit device-level synchronization involved. However, this design requires moving
Figure 4. Three workflows of GPU LZSS.
Figure 5. Our proposed two-pass prefix sum.
compressed data back to the global buffer, incurring multiple data movements between shared memory and global memory.
To solve this issue, we propose an optimization called two-pass prefix sum. It includes both a local prefix sum and a global prefix sum. The design is shown in Figure 5. Specifically, the local prefix sum calculates the offset for each compressed symbol within each data chunk/thread block. Here we adopt an optimized two-sweep prefix-sum algorithm that fits the GPU well (Beng et al., 2017). It includes up-sweep and down-sweep processes, detailed in §3.3.2.
After we get the compressed size of each data chunk from the local prefix sum, we can calculate the offset of each compressed chunk through a global prefix sum across data chunks. Compared with the single-pass prefix sum, our proposed two-pass prefix sum only needs to store the size of each compressed data chunk instead of the size of each compressed symbol, which significantly reduces the amount of data written to and read from the global memory (e.g., by at least \(C\) times, where \(C\) is the number of symbols per data chunk). Moreover, our two-pass prefix sum can also reduce space complexity and the global memory footprint. Note that to avoid moving the match result back and forth between the shared and global memories for the local prefix sum, we propose to fuse the local prefix-sum computation into the matching kernel. Thus, we can perform the local prefix-sum computation directly on the matching result stored in the shared memory. This can also reduce the global memory footprint.
After the local prefix sum, each GPU thread encodes symbols based on the calculated local offsets and the found matches, similar to the CPU sequential encoding in LZSS, as mentioned in §2.2. Note that since this encoding is performed at the thread-block level, no grid-level synchronization is needed. As a result, the encoding can be further fused with the matching and local prefix sum steps to form Kernel I. It is also worth noting that compared with CULZSS, our encoding enables massive GPU threads, which maximizes the parallelism and encoding throughput.
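The effect of the second (global) pass can be sketched on the host as follows; this is a simplified illustration of the idea (gpuLZ performs the corresponding scan on the device in Kernel II, §3.3.3), with names of our own choosing.

```
// Host-side sketch: per-chunk compressed sizes -> global write offsets.
#include <cstddef>
#include <vector>

std::vector<size_t> chunkOffsets(const std::vector<size_t> &compressedSizes) {
    std::vector<size_t> offsets(compressedSizes.size() + 1, 0);
    for (size_t i = 0; i < compressedSizes.size(); ++i)
        offsets[i + 1] = offsets[i] + compressedSizes[i];   // exclusive prefix sum
    return offsets;  // offsets[i] = start of chunk i; offsets.back() = total compressed size
}
```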
#### 3.2.3. Multi-byte Matching Approach
CULZSS only performs the matching step on a single-byte basis, which leads to a decrease in compression throughput and a potential loss of compression ratio. This is because, for datasets based on multi-byte symbols, single-byte matching would lose the characteristics of a specific data structure. To this end, we propose a novel multi-byte matching approach that finds matches based on symbols instead of bytes. This strategy has two advantages: (1) Searching for matches based on symbols is less expensive than searching for matches based on bytes since there are far fewer symbols than bytes, which increases compression throughput. (2) It can bring potentially higher compression ratios because each match can contain more bytes. We use \(S\) to denote the symbol length in the following discussion.
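A minimal sketch of symbol-granularity comparison is given below (our illustration; the names are not gpuLZ's API): the candidate and coding positions are compared \(S\) bytes at a time, so a match of \(L\) symbols covers \(L\cdot S\) bytes while requiring roughly \(S\) times fewer comparisons than byte-level matching.

```
// Longest symbol-aligned match between a window candidate and the coding position.
#include <cstddef>
#include <cstring>

int longestSymbolMatch(const unsigned char *buf, size_t cand, size_t pos,
                       size_t end, int S, int maxSymbols = 255) {
    int len = 0;                                        // match length in symbols
    while (len < maxSymbols && pos + (size_t)(len + 1) * S <= end &&
           std::memcmp(buf + cand + (size_t)len * S,
                       buf + pos  + (size_t)len * S, S) == 0)
        ++len;                                          // e.g., S = 2 for uint16, 4 for float32
    return len;
}
```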
However, the potential gain in compression ratio is not guaranteed, especially when the match length is generally short. Therefore, to maximize the chance of increasing the compression ratio with our multi-byte matching approach, we propose to adaptively select the symbol length and increase the sliding window size. For example, assuming that the input data type is int32, by default, gpuLZ adopts the 4-byte symbol length and the sliding window size of 128. Our approach is to adapt the symbol length (ranging from 1 to 4) and the sliding window size (ranging from 32 to 255) to achieve the best trade-off between the compression ratio and the throughput.
Footnote 2: Note that we set the maximum \(W\) to 255 because we only use one byte to save the sliding window size and reserve 0 for no match found.
After studying the impacts of symbol length \(S\) and sliding window size \(W\) on various datasets (detailed in §4.2), we propose a lightweight parameter selection approach. Specifically, assuming the datasets contain multiple fields that are the input to gpuLZ at one time, we monitor the average compression ratio with the multi-byte matching strategy (default). (1) When the average compression ratio is relatively low (for instance, lower than 1.5), we switch back to single-byte matching, considering that the multi-byte matching is not effective under low compression ratio circumstances (will be illustrated in §4.2). This is because the multi-byte matching results in a smaller number of repeated patterns and ignores byte-level repeated patterns. (2) When the average compression ratio is high, we keep using multi-byte matching, considering that multi-byte matching has a significant improvement in compression ratio over single-byte matching. On the other hand, for \(W\), we enlarge it by a factor of \(S\) when we use the multi-byte matching strategy, since the multi-byte matching can bring a speedup of about \(S\) times over the single-byte matching, which will offset the higher time complexity brought by a larger sliding window size.
In addition, we provide another option that allows users to set different sliding window sizes, e.g., 32/64/128/255 as level 1-4. A higher level will bring a higher compression ratio but lower throughput. The user can decide the level based on their needs. For example, if compression throughput is a priority, users should select level 1; if the compression ratio is a priority, users should select level 4.
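The adaptive selection of \(S\) and \(W\) described above can be summarized by the following sketch; the 1.5 threshold and the scaling of \(W\) by \(S\) follow the description, while the struct and function names are our own illustrative assumptions.

```
// Sketch of the lightweight symbol-length / window-size selection heuristic.
struct LzConfig { int symbolLen; int windowSize; };

LzConfig selectConfig(double avgMultiByteRatio, int dataTypeLen, int baseWindow = 128) {
    LzConfig cfg;
    if (avgMultiByteRatio < 1.5) {
        cfg.symbolLen  = 1;                               // few repeated patterns: byte-level matching
        cfg.windowSize = baseWindow;
    } else {
        cfg.symbolLen  = dataTypeLen;                     // e.g., 2 for uint16, 4 for int32/float32
        cfg.windowSize = baseWindow * cfg.symbolLen;      // larger window, cost offset by fewer symbols
        if (cfg.windowSize > 255) cfg.windowSize = 255;   // one byte stores the offset
    }
    return cfg;
}
```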
### Details of gpuLZ and Its Implementation
Finally, we introduce our architectural performance optimizations with some implementation details. We describe these details following our compression workflow. We first introduce the data partition and then describe the kernel details.
#### 3.3.1. Data Partition
First, we divide the input data into multiple blocks and then divide each block into multiple chunks. We launch a GPU kernel for each data block and map each data chunk to one GPU thread block in the kernel. Our data partition strategy is illustrated in Figure 6. The reasons for performing a two-level data partition are as follows: (1) _Data block:_ GPUs have limited memory capacity (e.g., 16 GB for an NVIDIA A4000 GPU), so partitioning data into blocks allows the GPU to process datasets that are larger than its memory space. (2) _Data chunk:_ As aforementioned, the encoding step requires iterating over the found matches and encoding the matches that are not covered by previous matches, introducing data dependencies and sequential execution. Thus, data partitioning can enforce that matches do not span different chunks, increasing the encoding parallelism.
_Data block size:_ CULZSS divides the input data into many small blocks (e.g., 1 MB) such that it overlaps CPU encoding with GPU matching as much as possible. However, this very small block significantly limits the GPU resource utilization and overall computational efficiency. In contrast, since our design does not involve the CPU for encoding, we can use a relatively large block size to
increase GPU efficiency. Therefore, we choose the block size of 30% of the global memory (e.g., 12 of 40 GB for NVIDIA A100 GPU).
_Data chunk size:_ On the one hand, our data partition results in lower compression ratios since we ensure matches cannot span chunks; in other words, larger chunks have higher compression ratios. On the other hand, the chunk size needs to satisfy the GPU hardware constraint as each chunk is stored temporarily in the GPU shared memory. Note that the shared memory is part of the GPU's L1 cache, so the more shared memory is used, the less L1 cache is left. The trade-off between shared memory and L1 cache is discussed in detail in §4.2. In addition, each thread in a thread block processes multiple symbols in a data chunk. We use \(C\) to denote the data chunk size in the following discussion.
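A rough sketch of this two-level partition is given below; the 30% global-memory budget follows the text, whereas the helper names and the ceiling divisions are our own illustration.

```
// Two-level partition: data blocks sized to ~30% of GPU global memory,
// and fixed-size chunks of C symbols mapped one-to-one to thread blocks.
#include <cstddef>

struct Partition { size_t blockBytes; size_t numBlocks; size_t chunksPerBlock; };

Partition makePartition(size_t inputBytes, size_t globalMemBytes,
                        size_t chunkSymbols, size_t symbolBytes) {
    Partition p;
    p.blockBytes      = globalMemBytes * 3 / 10;                        // ~30% of global memory
    p.numBlocks       = (inputBytes + p.blockBytes - 1) / p.blockBytes; // ceiling division
    size_t chunkBytes = chunkSymbols * symbolBytes;                     // e.g., 2048 symbols * 2 bytes
    p.chunksPerBlock  = (p.blockBytes + chunkBytes - 1) / chunkBytes;
    return p;
}
```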
#### 3.3.2. Kernel I
Next, we describe Kernel I and our optimization in warp divergence. We also present its pseudocode in Listing 1.
```
1  input: original data
2  output: compressed data, flag array, offset arrays
3
4  // find matches
5  for iteration in chunkSize / blockDim.x:
6      tid = threadIdx.x + iteration * windowSize
7      while windowPointer < bufferStart && bufferPointer < blockSize:
8          if buffer[bufferPointer] == buffer[windowPointer]:
9              if offset == 0:
10                 offset = bufferPointer - windowPointer
11             len++
12             bufferPointer++
13         else:
14             if len > maxLen:
15                 maxLen = len
16                 maxOffset = offset
17             len = 0
18             offset = 0
19             bufferPointer = bufferStart
20             windowPointer++
21     writeToSharedMem(lengthArr, offsetArr)
22     initializeToZero(prefixArr)
23
24 // synchronize threads
25 __syncthreads()
26 // find encoding information
27 __shared__ uint8_t byteFlagArr[(dataChunkSize / 8)]
28 if threadIdx.x == 0:
29     while encodeIndex < blockSize:
30         if lengthBuffer[encodeIndex] < minEncodeLength:
31             prefixBuffer[encodeIndex] = sizeof(dataType)
32             encodeIndex++
33         else:
34             prefixBuffer[encodeIndex] = 2
35             encodeIndex += lengthBuffer[encodeIndex]
36     generateFlagArr(byteFlagArr)
37
38
39 // local prefix sum
40 localPrefixUpSweep(prefixArr)
41 saveTheCompressedChunkSize()
42 localPrefixDownSweep(prefixArr)
43
44
45 // compress data
46 for iteration in chunkSize / blockDim.x:
47     encodeBasedOnLocalPrefix()
48
49 // copy flag array back to global mem
50 writeBackToGlobal(byteFlagArr)
```
**Listing 1** Proposed Kernel I
At the beginning of Kernel I, we load the input stream into the shared memory. Different from CULZSS, which uses two arrays in the shared memory to store the input stream and the sliding window, we integrate them into one array and use pointers to indicate the start positions of the input stream and sliding window. This design saves the context switch time for updating arrays and fully utilizes the shared memory to accelerate the matching step.
Then, we find matches for every symbol in the designated data chunk. In the sequential LZSS, the matching process is highly time-consuming. In the best case, it takes O(n) (when no match is found), while in the worst case, it takes O(n\({}^{2}\)) time complexity (when the sliding window and the look-ahead buffer contain the same symbol). This unstable time complexity would cause a severe divergence among different GPU threads. To solve this issue, we use an optimized matching method with a stable time complexity to reduce the divergence among threads. Our method is described as follows.
1. We use a search pointer to the start of the window and a position pointer to the coding position (Lines 12-25).
2. We move the search pointer until it reaches the same value as the position pointer points to, and then move both pointers until they point to different values (Lines 13-17).
3. We record the length and offset of the current match only if it is longer than the previously recorded match (Lines 18-24).
4. We let the position pointer point to the coding position again, advance the search pointer to the next location, and repeat step 2) until the sliding window is iterated all over or the look-ahead buffer is empty (Lines 12, 22-24).
Although this method cannot guarantee an optimal output (i.e., always finding the longest matches), it gives a sufficient result (will be shown in the evaluation) with a very stable time complexity of O(n). Moreover, unlike the traditional LZSS, the maximum encoding length in our method does not exceed the offset. Both aspects can minimize the possibility of warp divergence.
After that, we encode the matches (Lines 35-43). Specifically, we use one thread per thread block to calculate the compressed size and produce the flag array (due to LZSS's sequential nature). In particular, if a match is found for the current symbol, it takes two bytes (i.e., one for length and one for offset) to encode the match, appends one bit "1" to the flag array to denote "match", and then skips the symbols that the match covers (Lines 40-42). If no match
Figure 6. Data partition strategy
is found, it saves the original symbol (i.e., the number of bytes for the input data type) and appends one bit "0" to the flag array to denote "no match" (Lines 37-39). Note that before the local prefix sum to calculate the memory offsets, we allocate an array in the shared memory to save the size of each compressed symbol (Line 7) and initialize the array to all zeros (Line 27). Thus, we can skip the symbols covered by a match to further improve performance.
Finally, we use the local prefix sum (described in §3.2.2) to calculate the memory offset of each symbol within its data chunk and write the compressed data chunks and their sizes to the global memory for deflating. We use a two-sweep method (Beng et al., 2017) to implement our local prefix sum, as illustrated in Figure 7. Specifically, in the up-sweep phase, we calculate the summation of two elements with a distance of \(2^{\text{step}-1}\) (Figure 7(a)). Then, we can get the summation of all elements stored in the last position. After that, we save this back to the global memory for the following global prefix sum and reinitialize it with 0. In the down-sweep phase, we copy each element to a new position and calculate the summation of two elements (Figure 7(b)). Finally, we get the prefix sum of the whole array.
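For reference, the two-sweep scan can be written as the stand-alone kernel below. This is a textbook sketch under our own simplifying assumptions (a power-of-two chunk of \(n\) per-symbol sizes, \(n/2\) threads per block, and \(n\) unsigned integers of dynamic shared memory); it is not the exact fused code of Kernel I.

```
// Blelloch (two-sweep) exclusive scan over one chunk of per-symbol compressed sizes.
__global__ void localExclusiveScan(unsigned int *sizes, unsigned int *chunkTotals, int n) {
    extern __shared__ unsigned int s[];
    int tid = threadIdx.x;
    unsigned int *chunk = sizes + (size_t)blockIdx.x * n;
    s[2 * tid]     = chunk[2 * tid];
    s[2 * tid + 1] = chunk[2 * tid + 1];

    // Up-sweep: build a reduction tree of partial sums in place.
    for (int stride = 1; stride < n; stride *= 2) {
        __syncthreads();
        int idx = (tid + 1) * stride * 2 - 1;
        if (idx < n) s[idx] += s[idx - stride];
    }
    if (tid == 0) {
        chunkTotals[blockIdx.x] = s[n - 1];  // compressed size of this chunk (for the global pass)
        s[n - 1] = 0;                        // identity element for the exclusive scan
    }
    // Down-sweep: traverse the tree back down to produce exclusive prefix sums.
    for (int stride = n / 2; stride >= 1; stride /= 2) {
        __syncthreads();
        int idx = (tid + 1) * stride * 2 - 1;
        if (idx < n) {
            unsigned int t  = s[idx - stride];
            s[idx - stride] = s[idx];
            s[idx]         += t;
        }
    }
    __syncthreads();
    chunk[2 * tid]     = s[2 * tid];         // per-symbol write offsets within the chunk
    chunk[2 * tid + 1] = s[2 * tid + 1];
}
```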
#### 3.3.3. Kernel II
Next, we present Kernel II and show its pseudocode in Listing 2. Specifically, we perform the global prefix sum (discussed in SS3.2.2) in Kernel II. Since prefix sum has been implemented and highly optimized in the CUB library (Beng et al., 2017), we directly call it to achieve high performance. Note that Kernel II launches the CUB's prefix-sum kernel twice, because we not only need to calculate the offset of the compressed data (Line 5) but also need to calculate the offset of the flag array (Line 8).
```
1  input: compressed size, flag array size
2  output: compressed offset, flag array offset
3
4  // calculate the offset for compressed size
5  cub::DeviceScan::ExclusiveSum(compressedSize, prefix)
6
7  // calculate the offset for flag array
8  cub::DeviceScan::ExclusiveSum(flagSize, flagPrefix)
```
**Listing 2** Proposed Kernel II
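For completeness, CUB's device-wide scan follows a two-phase calling convention (a size query followed by the actual scan). The sketch below shows this standard usage pattern with illustrative buffer names rather than gpuLZ's exact variables.

```
#include <cstdint>
#include <cub/cub.cuh>

// Exclusive prefix sum over per-chunk compressed sizes on the device.
void exclusiveScan(const uint32_t *d_sizes, uint32_t *d_offsets, int numChunks) {
    void  *d_temp    = nullptr;
    size_t tempBytes = 0;
    // First call only computes the required temporary storage size.
    cub::DeviceScan::ExclusiveSum(d_temp, tempBytes, d_sizes, d_offsets, numChunks);
    cudaMalloc(&d_temp, tempBytes);
    // Second call performs the scan.
    cub::DeviceScan::ExclusiveSum(d_temp, tempBytes, d_sizes, d_offsets, numChunks);
    cudaFree(d_temp);
}
```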
#### 3.3.4. Kernel III
Finally, we implement the deflating process in Kernel III. The pseudocode of Kernel III is presented in Listing 3. We use the same granularity as the matching step to fully utilize the parallelism of the GPU. Specifically, we calculate the sizes of the flag array and the compressed data chunk (Lines 5-6) and then write the flag array (Lines 9-11) and the compressed data (Lines 14-16) based on the memory offsets.
```
1  input: compressedData, cOffset, flagArray, fOffset
2  output: compressedOut, flagArrayOut
3
4  int tid = threadIdx.x
5  flagArraySize = fOffset[Index + 1] - fOffset[Index]
6  compressedSize = cOffset[Index + 1] - cOffset[Index]
7
8  // write back flag array
9  while tid < flagArraySize:
10     write(flagArray, fOffset[index], tid)
11     tid += blockDim.x
12
13 // write back compressed data
14 while tid < compressedSize:
15     write(compressedData, cOffset[index], tid)
16     tid += blockDim.x
```
**Listing 3** Proposed Kernel III
## 4. Experimental Evaluation

### Experimental Setup
_Platforms._ We use two platforms in our evaluation: (1) one node from the Big Red 200 supercomputer (Beng et al., 2017), equipped with two 64-core AMD EPYC 7742 CPUs @2.25GHz and four NVIDIA Ampere A100 GPUs (108 SMs, 40GB), running CentOS 7.4 and CUDA 11.4.120; and (2) an in-house workstation equipped with one 24-core Intel Xeon W-2265 CPU @3.50GHz and two NVIDIA RTX A4000 GPUs (40 SMs, 16 GB), running Ubuntu 20.04.5 and CUDA 11.7.99.
_Datasets._ We conduct our evaluation using six representative multi-byte datasets from two benchmarks, i.e., the TPC-H benchmark (Zhu et al., 2017) and the Scientific Data Reduction Benchmarks (SDRBench) (Zhu et al., 2017). Specifically, the TPC-H benchmark is a suite of business-oriented ad-hoc queries and concurrent data modifications. It includes one int32 integer dataset (i.e., tpch-int32) and one utf-8 string dataset (i.e., tpch-string). SDRBench includes three uint16 datasets (i.e., hurr-quant, hacc-quant, nyx-quant) and one float32 dataset (i.e., rtm). Note that the three uint16 datasets are intermediate data generated from three real-world HPC simulation datasets, i.e., HACC (cosmology particle simulation), Hurricane (ISABEL weather simulation) (Hurricane, 2017), and Nyx (cosmology simulation) (Zhu et al., 2017), which have been widely used in previous studies on scientific data compression (Zhu et al., 2017; Zhu et al., 2017; Zhu et al., 2017; Zhu et al., 2017; Zhu et al., 2017); the float32 rtm dataset is from a seismic imaging application for petroleum exploration (Han et al., 2017; Zhu et al., 2017).
Figure 7. Two-sweep algorithm
_Baselines._ We compare gpuLZ with two baselines: (1) _CULZSS_: CULZSS is the state-of-the-art GPU implementation (open-source) (Friedman et al., 2017) of LZSS, but it uses the GPU to find matches and the CPU to encode matches. (2) _nvCOMP's LZ4_: LZ4 is similar to LZSS but uses a particular data format to achieve portability. We use the state-of-the-art GPU implementation (closed-source) of LZ4 from nvCOMP (Friedman et al., 2017). We use the latest nvCOMP 2.6.0.
_Evaluation metrics._ We focus on evaluating and analyzing GPU-based LZ compressors on two main metrics. (1) _Compression ratio_ is one of the most commonly used metrics in compression research. It can be calculated as the ratio of the original data size to the compressed data size. Higher compression ratios mean denser information aggregation against the original data and faster data transfer. (2) _Compression throughput_ is the primary consideration when using a GPU-based compressor instead of a CPU-based one. It can be calculated as the ratio of the original data size to the compression/decompression time. Higher throughput means faster compression and more significant benefits of using compression.
### Impacts of Parameters \(C\), \(W\), and \(S\)
First, we evaluate the impacts of parameters \(C\), \(W\), and \(S\). We conduct the experiments on both the A100 and A4000 platforms. The compression ratio is shown in Table 1, and the compression throughput is shown in Table 2. Specifically, we choose the data chunk sizes (i.e., \(C\)) of 2048, 4096, 8192, and 16,384. The data chunk size directly decides the shared memory size we utilize in our design. Because the shared memory is carved out of the L1 cache, we can observe the impact of the trade-off between shared memory and L1 cache on the overall throughput. Note that some fields in the table are empty because of the limited shared memory. The sliding window size \(W\) will directly decide the time complexity. The longer the sliding window is, the higher the time complexity will be. It will also potentially increase the compression ratio. Moreover, we introduce multi-byte symbols into the LZSS algorithm to explore the potential compression ratio and throughput gains. To this end, we select three symbol lengths (i.e., \(S\)): 1, 2, and 4 bytes.
First, we focus on the impact of \(C\). As mentioned before, we partition the data into chunks to allow LZSS to execute in parallel. However, due to the independence of each data chunk, the compression ratio would drop slightly because the match does not span the boundaries of data chunks, leading to the limited match length. The evaluation result also proves this, as illustrated in Table 1. The compression ratio increases as the data chunk size increases in all test cases. The average improvement is 1.02\(\times\). However, as the data chunk size increases, the compression throughput decreases in almost all test cases. This proves that a larger L1 cache is better for compression throughput than utilization of shared memory, at least in the range of feasible data chunk sizes of gpuLZ. With smaller data chunk size, the compression throughput is improved by 1.33\(\times\) on average. Note that the compression throughput drops significantly with larger data chunk sizes. For example, on the 4-byte nyx-quantization dataset, the compression throughput drops from 19.05 to 18.76 when the data chunk size changes from 2048 to 4096. At the same time, it drops from 14.67 to 8.36 when the data chunk size changes from 8192 to 16,384. This is because when the data chunk size is 16,384, the shared memory size is close to the hardware's limit, resulting in a fairly small L1 cache size and further impacting the overall throughput. This phenomenon is more obvious when the data chunk size is bigger. Note that A100 has a higher speedup than A4000 when the block size is large because A100 has larger L1 cache (192 KB/SM) than A4000 (128 KB/SM).
Next, we explore the impact of \(W\). On the one hand, as analyzed before, a larger sliding window brings a potentially longer match, increasing the compression ratio. Table 1 shows that the compression ratio increases steadily with the sliding window size in
\begin{table}
\begin{tabular}{l c c c c c c c|c c c c|c c c} & \multicolumn{3}{c|}{**chunk size: 2048**} & \multicolumn{3}{c|}{**chunk size: 4096**} & \multicolumn{3}{c|}{**chunk size: 8192**} & \multicolumn{3}{c|}{**chunk size: 16384**} \\ window size \(\downarrow\) & 1 byte & 2 bytes & 4 bytes & 1 byte & 2 bytes & 4 bytes & 1 byte & 2 bytes & 4 bytes & 1 byte & 2 bytes & 4 bytes \\ \hline hurr & 32 & 3.4 & 3.77 & 3.58 & 3.18 & 3.84 & 3.66 & n/a & 3.88 & 3.70 & n/a & n/a & 3.72 \\ quant & 64 & 3.79 & 4.39 & 4.05 & 3.86 & 4.50 & 4.18 & n/a & 4.56 & 4.25 & n/a & n/a & 4.28 \\ & 128 & 4.39 & 4.91 & 4.44 & 4.51 & 5.09 & 4.64 & n/a & 5.18 & 4.75 & n/a & n/a & 4.81 \\ & 255 & 4.89 & 5.32 & 4.78 & 5.07 & 5.59 & 5.15 & n/a & 5.73 & 5.36 & n/a & n/a & 5.47 \\ \hline hace & 32 & 1.55 & 1.67 & 1.59 & 1.55 & 1.68 & 1.60 & n/a & 1.68 & 1.61 & n/a & n/a & 1.61 \\ quant & 64 & 1.71 & 1.82 & 1.71 & 1.72 & 1.84 & 1.73 & n/a & 1.85 & 1.74 & n/a & n/a & 1.75 \\ & 128 & 1.87 & 1.97 & 1.83 & 1.88 & 2.00 & 1.86 & n/a & 2.02 & 1.88 & n/a & n/a & 1.89 \\ & 255 & 2.01 & 2.12 & 1.92 & 2.03 & 2.18 & 1.99 & n/a & 2.20 & 2.03 & n/a & n/a & 2.05 \\ \hline nyx & 32 & 3.97 & 5.07 & 4.80 & 4.04 & 5.20 & 4.95 & n/a & 5.27 & 5.02 & n/a & n/a & 5.06 \\ quant & 64 & 5.06 & 6.18 & 5.73 & 5.19 & 6.42 & 6.00 & n/a & 6.54 & 6.14 & n/a & n/a & 6.21 \\ & 128 & 6.14 & 7.19 & 6.52 & 6.36 & 7.57 & 6.99 & n/a & 7.79 & 7.25 & n/a & n/a & 7.38 \\ & 255 & 7.08 & 8.03 & 7.11 & 7.46 & 8.65 & 7.94 & n/a & 9.01 & 8.42 & n/a & n/a & 8.64 \\ \hline ipch & 32 & 1.31 & 1.25 & 1.29 & 1.32 & 1.26 & 1.30 & n/a & 1.26 & 1.30 & n/a & n/a & 1.30 \\ int32 & 64 & 1.37 & 1.30 & 1.34 & 1.38 & 1.31 & 1.35 & n/a & 1.31 & 1.35 & n/a & n/a & 1.36 \\ & 128 & 1.43 & 1.34 & 1.38 & 1.44 & 1.35 & 1.39 & n/a & 1.36 & 1.40 & n/a & n/a & 1.41 \\ & 255 & 1.50 & 1.38 & 1.41 & 1.51 & 1.39 & 1.43 & n/a & 1.40 & 1.44 & n/a & n/a & 1.45 \\ \hline ipch & 32 & 1.55 & 1.58 & 1.46 & 1.56 & 1.59 & 1.47 & n/a & 1.60 & 1.48 & n/a & n/a & 1.48 \\ string & 64 & 2.02 & 1.96 & 1.72 & 2.04 & 1.99 & 1.76 & n/a & 2.01 & 1.78 & n/a & n/a & 1.79 \\ & 128 & 2.57 & 2.43 & 2.03 & 2.62 & 2.50 & 2.12 & n/a & 2.54 & 2.17 & n/a & n/a & 2.20 \\ & 255 & 3.08 & 2.84 & 2.27 & 3.19 & 3.00 & 2.47 & n/a & 3.09 & 2.58 & n/a & n/a & 2.64 \\ \hline rtm & 32 & 2.45 & 2.72 & 2.88 & 2.47 & 2.75 & 2.91 & n/a & 2.77 & 2.93 & n/a & 2.94 \\ float32 & 64 & 2.59 & 2.80 & 2.92 & 2.61 & 2.83 & 2.96 & n/a & 2.85 & 2.98 & n/a & n/a & 2.99 \\ & 128 & 2.66 & 2.84 & 2.94 & 2.69 & 2.88 & 2.99 & n/a & 2.89 & 3.01 & n/a & n/a & 3.02 \\ & 255 & 2.69 & 2.85 & 2.97 & 2.72 & 2.90 & 3.02 & n/a & 2.92 & 3.05 & n/a & n/a & 3.07 \\ \hline \end{tabular}
\end{table}
Table 1. Compression ratio of gpuLZ. Note that some fields are marked as “n/a” due to the limited shared memory.
almost all datasets. For example, on the \(2\)-byte tpch-int32 dataset, the compression ratio is \(1.26\), \(1.31\), \(1.35\), and \(1.39\) when the sliding window size is \(32\), \(64\), \(128\), and \(255\), respectively. Moreover, the overall compression ratio improvement by extending the sliding window size from \(32\) to \(255\) is \(1.4\times\). On the other hand, a larger sliding window incurs more operations per thread, decreasing the compression throughput. The average speedup when we change the sliding window size from \(255\) to \(32\) is \(3.9\times\). Compared with the relatively small increase in compression ratio, the throughput decreases dramatically as the sliding window size doubles. However, we find gpuLZ highly stable in throughput across different datasets under the same configuration (i.e., \(C\), \(W\), and \(S\)). For example, the throughput of \(C=2048\), \(W=32\), and \(S=2\) is \(9.57\) GB/s, \(8.49\) GB/s, \(10.14\) GB/s, \(8.3\) GB/s, \(7.96\) GB/s, and \(9\) GB/s on \(A4000\) on [hur, hacc, nyx]-quant, tpch-[int32, string], and rtm datasets, respectively.
Finally, we discuss the impact of the symbol length \(S\). As mentioned, the multi-byte symbol length can introduce a potential compression ratio improvement and increase the compression throughput due to longer matches and fewer symbols to process. Table 1 shows that the compression ratio improvement is not as consistent as we expected. It has different patterns for different datasets. For example, on the three uint16 quantization-code datasets, the compression ratio reaches its peak at \(S=2\), which is the same as the length of uint16. However, on the int32 tpch-int32 dataset, the compression ratio is optimal at \(S=1\), which is different from the length of int32. This is because the number of repeated patterns is relatively smaller in the tpch-int32 dataset, as indicated by the low compression ratio. Thus, using a 1-byte symbol (i.e., \(S=1\)) may detect more byte-level repeated patterns and achieve a higher compression ratio than using a 4-byte symbol. On the utf-8 tpch-string dataset and the float32 rtm dataset, the best compression ratio is achieved at \(S=1\) and \(S=4\), respectively, which is the same as the unit length of their data types.
Regarding throughput, the impact of the symbol length is more obvious; that is, longer symbols result in higher throughput. The average throughput improvement is \(4.5\times\) when we change \(S\) from \(1\) to \(4\). Combined with the above observation regarding compression ratio, we find that \(S=2\) has both a higher compression ratio and throughput than \(S=1\) in some cases. For example, on the hurr-quantization dataset with any \(W\) and \(C\), \(S=2\) can always lead to a better compression ratio and throughput than \(S=1\). Note that this observation can be generalized to all LZ compressors.
### Evaluation on Compression Ratio
Next, we compare the compression ratio of gpuLZ with CULZSS and nvCOMP's LZ4, as shown in Figure 8. Note that in the figure, we use "gpulz" to denote the default configuration (\(C=2048\), \(S=2\), and \(W=128\)) and "gpulz-best" to denote the best compression ratio
Table 2. Compression throughput (GB/s) of gpuLZ under different data chunk sizes, sliding window sizes, and symbol lengths.
from all settings. The figure shows that compared with CULZSS, gpuLZ achieves a similar compression ratio on all datasets because the compression ratio is highly dependent on the sliding window size. In our default configuration, we use \(W=128\), the same as CULZSS. In the best cases (over all configurations), gpuLZ has an improvement of \(1.4\times\) on compression ratio. Compared with nvCOMP's LZ4, gpuLZ achieves an average compression ratio improvement of \(1.23\times\). Specifically, on the hurr-quantization and nyx-quantization datasets, gpuLZ has the highest compression ratio improvements, which are \(1.53\times\) and \(1.8\times\), respectively. In the best cases (over all configurations), gpuLZ has an improvement of \(1.42\times\) on compression ratio thanks to our fine-tuned parameters.
### Evaluation on Compression Throughput
Then, we evaluate the performance of gpuLZ, with its scalability on A100 and A4000 shown in Figure 9. Note that "culzss" denotes the overall throughput of CULZSS, including the GPU matching kernel and the CPU encoding process. "culzss-kernel" denotes the throughput of the GPU matching kernel. "gpulz" denotes our method with the default setting (\(C=2048\), \(S=2\), and \(W=128\)). "gpulz-best" denotes the best compression throughput from all settings. We note that the settings for the best case are generally C=2048, W=32, S=4, except for the nyx-quantization and rtm datasets on A100, which achieve the best performance with C=4096, W=32, S=4. This is because A100 has a larger L1 cache (192 KB/SM) than A4000 (128 KB/SM); a larger chunk size (i.e., more shared memory utilization) will not significantly affect performance. Compression throughput is potentially increased by fully utilizing the high-speed shared memory.
Compared with CULZSS, our method has an average speedup of \(22.19\times\) on all datasets with A4000. When compared with our best case, this speedup increases to \(130.01\times\). The reason is threefold: (1) the slow CPU encoding process, (2) the data movement overhead between CPU and GPU, and (3) the overhead of launching the same kernel multiple times. Note that the entire process of gpuLZ is almost as fast as the matching kernel of CULZSS both on A100 and A4000, thanks to our optimizations in both algorithm and implementation. Compared with nvCOMP's LZ4, gpuLZ has similar compression throughputs on the hurr-quantization, tpch-int32, and tpch-string datasets but is slightly slower on the hacc-quantization and nyx-quantization datasets. However, since nvCOMP is not an open-source library, we can only infer the underlying reason, which may be that some field sizes in these datasets are too small. By comparison, gpuLZ has more stable performance across all datasets, and our best case achieves higher throughput than nvCOMP with both A100 and A4000 platforms on almost all datasets. For example, the average speedup is \(4.32\times\) on A100.
In addition, we also implement our decompression and evaluate its throughput. Considering that decompression is easy to parallelize, we do not describe and compare it with other compressors in detail; instead, we only show the average decompression throughput across all datasets. Specifically, the average decompression throughput of
Figure 8. Compression ratio of different GPU compressors.
Figure 9. Compression throughput of different GPU compressors on A100 and A4000.
GPULZ on all datasets is 16.4 GB/s on A4000 and 29.1 GB/s on A100. For comparison, nvCOMP's LZ4 has an average decompression throughput of 21.1 GB/s on A4000.
### Use-case of gpuLZ
Finally, we apply gpuLZ to cuSZ (a state-of-the-art GPU lossy compressor for scientific data) due to its high performance on the quantization-code datasets to improve the compression ratio. Note that the original cuSZ only has a Huffman encoding (Hu et al., 2017), whereas the improved cuSZ includes gpuLZ before the Huffman encoding. We evaluate the original cuSZ and the improved cuSZ on the A100 platform under the relative error bound 1e-2. Besides Hurricane, NYX, and RTM, we also include one more dataset from SDRBench, i.e., CESM (climate simulation) (Beng et al., 2019).
Table 3 shows that the improved cuSZ obtains an improvement of 1.9\(\times\)\(\sim\) 8.7\(\times\) in compression ratio with a slightly lower compression throughput. We note that the improved cuSZ has higher compression ratio improvements on larger error bounds and higher dimensional datasets (e.g., 3D Hurricane and RTM), since the quantization code generated by cuSZ in these cases has more spatial redundancy, thus benefiting gpuLZ. Enabling higher compression ratios is critical for many HPC applications using lossy compression (rather than lossless compression). We also note that some CPU lossy compressors with multi-threading support such as SZ (Zhu et al., 2017) and ZFP (Zhu et al., 2017) can also achieve compression throughputs of about 2\(\sim\)4 GB/s on 32 cores (Hu et al., 2017), but their overall throughput is limited by moving uncompressed data from the GPU to the CPU; in comparison, the time of moving compressed data (with compression ratios in the hundreds) with the improved cuSZ is much lower.
## 5. Conclusion and Future Work
In this paper, we propose a series of optimizations for one of the most important lossless compression algorithms LZSS for multi-byte data on GPUs. Specifically, we develop a new method for multi-byte pattern matching, optimize the prefix-sum operation, and fuse multiple GPU kernels, thereby improving both compression ratio and throughput (due to lower computational time complexity, less data movement, and potentially longer matches). gpuLZ achieves up to 272.1\(\times\) speedup and up to 1.4\(\times\) higher compression ratio over state-of-the-art solutions.
In the future, we plan to evaluate gpuLZ on more multi-byte datasets. We will attempt to develop an analytical model for searching the optimal parameter combination for different datasets. In addition, we will integrate gpuLZ into more data-intensive applications running on different parallel and distributed systems.
## Acknowledgment
This research was supported by the Exascale Computing Project (ECP), Project Number: 17-SC-20-SC, a collaborative effort of two DOE organizations\(-\)the Office of Science and the National Nuclear Security Administration, responsible for the planning and preparation of a capable exascale ecosystem, including software, applications, hardware, advanced system engineering and early testbed platforms, to support the nation's exascale computing imperative. The material was supported by the U.S. Department of Energy, Office of Science, Advanced Scientific Computing Research (ASCR), under contract DE-AC02-06CH11357. This work was also supported by the National Science Foundation under Grants OAC-2003709, OAC-2104023, OAC-2303064, OAC-2247080, and OAC-2312673. This research was also supported in part by Lilly Endowment, Inc., through its support for the Indiana University Pervasive Technology Institute.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
**Dataset** & \multicolumn{2}{c}{**cuSZ**} & \multicolumn{2}{c}{**cuSZ w/ gpuLZ**} \\ \cline{2-5} & **CR** & **THR** & **CR** & **THR** \\ \hline CESM & 22.6 & 12.0 & 43.2 & 2.7 \\ Hurricane & 24.3 & 31.9 & 29.1 & 5.9 \\ Nyx & 30.1 & 87.2 & 74.8 & 10.4 \\ RTM & 28.6 & 49.2 & 249.8 & 7.2 \\ \hline \hline \end{tabular}
\end{table}
Table 3. Comparison of compression ratio and throughput (GB/s) between original cuSZ and improved cuSZ (with gpuLZ) on A100 platform.
|
2305.06375
|
A Leptonic Model for Neutrino Emission From Active Galactic Nuclei
|
It is often stated that the observation of high-energy neutrinos from an
astrophysical source would constitute a smoking gun for the acceleration of
hadronic cosmic rays. Here, we point out that there exists a purely leptonic
mechanism to produce TeV-scale neutrinos in astrophysical environments. In
particular, very high-energy synchrotron photons can scatter with X-rays,
exceeding the threshold for muon-antimuon pair production. When these muons
decay, they produce neutrinos without any cosmic-ray protons or nuclei being
involved. In order for this mechanism to be efficient, the source in question
must produce very high-energy photons which interact in an environment that is
dominated by keV-scale radiation. As an example, we consider the active galaxy
NGC 1068, which IceCube has recently detected as a source of TeV-scale
neutrinos. We find that the neutrino emission observed from this source could
potentially be generated through muon pair production for reasonable choices of
physical parameters.
|
Dan Hooper, Kathryn Plant
|
2023-05-10T18:00:02Z
|
http://arxiv.org/abs/2305.06375v2
|
# A Leptonic Model for Neutrino Emission From Active Galactic Nuclei
###### Abstract
It is often stated that the observation of high-energy neutrinos from an astrophysical source would constitute a smoking gun for the acceleration of hadronic cosmic rays. Here, we point out that there exists a purely leptonic mechanism to produce TeV-scale neutrinos in astrophysical environments. In particular, very high-energy synchrotron photons can scatter with X-rays, exceeding the threshold for muon-antimuon pair production. When these muons decay, they produce neutrinos without any cosmic-ray protons or nuclei being involved. In order for this mechanism to be efficient, the source in question must produce very high-energy photons which interact in an environment that is dominated by keV-scale radiation. As an example, we consider the active galaxy NGC 1068, which IceCube has recently detected as a source of TeV-scale neutrinos. We find that the neutrino emission observed from this source could potentially be generated through muon pair production for reasonable choices of physical parameters.
+
Footnote †: preprint: FERMILAB-PUB-23-232-T
## I Introduction
The conventional wisdom in the field of neutrino astrophysics is that the detection of high-energy neutrinos from a given source would be unambiguous evidence that that object accelerates cosmic-ray protons or nuclei. In particular, whereas gamma rays can be generated through both leptonic (inverse Compton scattering, synchrotron) and hadronic (pion production) processes, it has long been thought that high-energy astrophysical neutrinos would be produced only through the production and decay of pions, which are generated through the interactions of high-energy protons with gas or radiation. In this sense, neutrinos play a critical role in our efforts to identify the sources of the hadronic cosmic-ray spectrum.
If the diffuse spectrum of high-energy astrophysical neutrinos observed by IceCube [1; 2; 3; 4; 5] is generated through hadronic interactions in optically thin environments (_i.e._, those transparent to gamma rays), these neutrinos will inevitably be accompanied by gamma rays from the decays of neutral pions. Normalizing the pion production rate to the spectrum of neutrinos reported by IceCube, one finds that these sources should collectively generate a flux of gamma rays that would approximately saturate, or even exceed, the isotropic background reported by the Fermi Collaboration [6; 7; 8; 9]. When this fact is combined with the lack of observed correlations between the directions of IceCube's neutrinos and known gamma-ray sources [10; 11; 12; 13; 14; 15; 16], transparent source scenarios appear to be somewhat disfavored, instead suggesting that many of these neutrinos are produced in optically thick environments, in what are known as "hidden sources." From this perspective, the dense cores of Active Galactic Nuclei (AGN) are seen as a particularly well-motivated class of sources for IceCube's diffuse neutrino flux [7; 17; 18; 19; 20] (for a review, see Ref. [21]).
The IceCube Collaboration has recently reported an excess of 79 events from the direction of the nearby active galaxy, NGC 1068 (also known as M77), corresponding to a \(4.2\sigma\) detection of \(\sim 1-10\,\mathrm{TeV}\) neutrinos [22] (see also, Ref. [10]). Although NGC 1068 has been detected by the Fermi telescope at \(\sim 0.1-30\,\mathrm{GeV}\) energies [23; 24], MAGIC has failed to detect gamma rays from this source, placing strong limits on its emission in the \(\sim 0.1-10\,\mathrm{TeV}\) band [25]. The lack of TeV-scale gamma rays from this source allows us to rule out the possibility that the observed neutrinos are produced in an optically thin environment, instead favoring scenarios in which cosmic-ray protons are accelerated and produce pions in the dense region immediately surrounding this AGN's supermassive black hole [26]. Observations by NuSTAR [27] and XMM-Newton [28] have detected bright X-ray emission from this source (corresponding to an intrinsic luminosity of \(L_{X}\sim 10^{44}\,\mathrm{erg/s}\) in the 2-10 keV band, and extending up to energies of \(\epsilon_{X}\sim 10^{2}\,\mathrm{keV}\)), suggesting that the densities of high-energy radiation in the central region (_i.e._, the corona) could be large enough to efficiently absorb gamma rays through pair production, while still allowing neutrinos to escape.
The conventional wisdom is that the neutrinos observed from NGC 1068 should allow us to definitively identify this object as an accelerator of cosmic ray protons. In this letter, we propose an alternative mechanism for generating the neutrino emission from AGN which is purely leptonic in nature. In particular, cosmic ray electrons in or near the AGN's corona could produce very high-energy gamma rays which would scatter with X-rays to produce muon-antimuon pairs. These muons would then decay to produce neutrinos, without any need for high-energy protons. No protons would be harmed in the making of
these neutrinos.
## II Neutrinos from muon pair production
The production of very high-energy photons through the process of synchrotron emission requires the presence of very high-energy electrons in a region with a very strong magnetic field [29]. Equating the timescales for acceleration and synchrotron losses, the maximum energy to which an electron can be accelerated is given by (see, for example, Ref. [30])
\[E_{e}^{\rm max}\sim 300\,{\rm TeV}\times\left(\frac{B}{0.03\,{\rm G}}\right)^{ -1/2}, \tag{1}\]
where \(B\) is the strength of the magnetic field. The intensity of the synchrotron radiation from a relativistic electron peaks near the critical frequency, \(\nu_{c}\), corresponding to an energy of
\[E_{\rm syn} \sim h\nu_{c}=\frac{3\pi E_{e}^{2}\nu_{g}\sin\alpha_{p}}{m_{e}^{2}} \tag{2}\] \[\approx 9\,{\rm TeV}\times\left(\frac{E_{e}}{300\,{\rm TeV}}\right)^{2} \left(\frac{B}{2\times 10^{3}\,{\rm G}}\right)\left(\frac{\sin\alpha_{p}}{ \sqrt{2/3}}\right),\]
where \(\nu_{g}=eB/2\pi m_{e}\) is the non-relativistic gyrofrequency, and \(\alpha_{p}\) is the pitch angle. Combining Eqs. 1 and 2, we find that for the case of a uniform magnetic field, synchrotron photons are limited to energies below \(E_{\rm syn}\sim 0.1\,{\rm GeV}\), corresponding to what is known as the "burnoff limit" [31]. As we will show below, the production of neutrinos through muon pair production requires \(\gtrsim\mathcal{O}(1-10\,{\rm TeV})\) photons, well above what is allowed by the burnoff limit. Synchrotron emission, however, could potentially reach these energies in scenarios in which electrons are accelerated in regions with relatively small magnetic fields (such as in the torus, for example [32], or in shocks in the corona [33; 34]) before they encounter regions of stronger magnetic fields, possibly in dense clumps within the corona [35]. We could also consider scenarios that feature anisotropic acceleration, or synchrotron emission that is produced from a population of electrons with a significant bulk Lorentz factor.
Alternatively, the very high-energy photons that are required in this scenario could be produced through the process of inverse Compton scattering. In particular, for target photons with energies near \(\epsilon\sim m_{e}^{2}/E_{e}\sim 0.03\,{\rm eV}\times(10\,{\rm TeV}/E_{e})\), such scattering would occur near the boundary of the Thomson and Klein-Nishina regimes, yielding \(E_{\gamma}\sim E_{e}\), but without suffering from a large degree of Klein-Nishina suppression. Note that if the very high-energy photons are produced through inverse Compton scattering, no large magnetic fields would be required.
To estimate the spectrum of the synchrotron radiation that is emitted from a population of electrons, we adopt the simplifying approximation that each electron radiates the entirety of its power at its critical frequency. While this is not precisely correct, the power does climb until \(\nu\sim\nu_{c}\), and falls exponentially at higher frequencies, leading to results that are not very different from those that would be obtained in a more careful calculation. Under this approximation, the total synchrotron power per unit frequency can be written as follows:
\[j(\nu)\approx\left(\frac{dN_{e}}{dE_{e}}\right)\left(\frac{\Delta E_{e}}{ \Delta\nu_{c}}\right)\left(\frac{-dE_{e}}{dt}\right), \tag{3}\]
where \(dN_{e}/dE_{e}\) is the spectrum of the radiating electrons, \(\Delta E_{e}/\Delta\nu_{c}=E_{e}/2\nu_{c}\) is the degree to which the critical frequency of the synchrotron emission changes as an electron cools, and \(-dE_{e}/dt\) is the synchrotron energy loss rate. After averaging over the pitch angle, the energy loss rate from synchrotron emission is given by [29]
\[\frac{dE_{e}}{dt} =-\frac{2\sigma_{t}B^{2}E_{e}^{2}}{3\mu_{0}m_{e}^{2}} \tag{4}\] \[\approx-2.5\times 10^{3}\,{\rm TeV}/{\rm s}\times\left(\frac{E_{e}}{ {\rm TeV}}\right)^{2}\left(\frac{B}{10^{3}\,{\rm G}}\right)^{2}\!\!,\]
where \(\sigma_{t}\) is again the Thomson cross section.
For the case of a power-law spectrum of electrons, \(dN_{e}/dE_{e}=AE_{e}^{-p}\), the spectrum of synchrotron emission is approximately given by
\[j(\nu) \approx\left(AE_{e}^{-p}\right)\left(\frac{E_{e}}{2\nu}\right) \left(\frac{2\sigma_{t}B^{2}E_{e}^{2}}{3\mu_{0}m_{e}^{2}}\right) \tag{5}\] \[\approx\frac{A\sigma_{t}B^{2}}{3\mu_{0}m_{e}^{2}\nu}\left(\frac{2 \pi m_{e}^{3}\nu}{eB}\right)^{(3-p)/2}\!\!,\]
where, in the second line, we have made the substitution, \(E_{e}\approx(2\pi m_{e}^{3}\nu/eB)^{1/2}\). From this exercise, we can see that a population of electrons with a power-law index, \(p\), will produce a spectrum of synchrotron radiation that takes the approximate form of \(dN_{\gamma}/d\nu\propto j(\nu)/\nu\propto\nu^{-(p+1)/2}\).
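As a sanity check of this slope, the following sketch (arbitrary units; all constant prefactors are dropped, since only the logarithmic slope matters) evaluates Eq. 3 for a power-law electron spectrum under the delta-function approximation and fits the resulting photon index.

```python
import numpy as np

p = 2.0                               # electron spectral index, dN_e/dE_e ~ E^-p
E_e = np.logspace(0.0, 2.5, 400)      # electron energies (arbitrary units)

dN_dE = E_e ** -p                     # electron spectrum
nu_c = E_e ** 2                       # critical frequency, proportional to E_e^2 (Eq. 2)
dE_dt = E_e ** 2                      # synchrotron loss rate, proportional to E_e^2 (Eq. 4)
j = dN_dE * (E_e / (2.0 * nu_c)) * dE_dt   # Eq. 3

dN_dnu = j / nu_c                     # photon spectrum dN_gamma/dnu ~ j(nu)/nu
slope = np.polyfit(np.log(nu_c), np.log(dN_dnu), 1)[0]
print(f"fitted photon index: {slope:.2f}  (expected -(p+1)/2 = {-(p + 1) / 2:.2f})")
```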
In the hot corona, we adopt a blackbody distribution to describe the X-rays, with a temperature of \(T_{X}\sim 1-10\,{\rm keV}\). TeV-scale gamma rays can collide with these X-rays to produce not only electron-positron pairs, but also muon-antimuon pairs. In the center-of-momentum frame, the total energy of such a collision is given by
\[E_{\rm CM} =[2E_{\gamma}\epsilon_{X}\left(1-\cos\theta\right)]^{1/2} \tag{6}\] \[\approx 0.5\,{\rm GeV}\times\left(\frac{E_{\gamma}}{10\,{\rm TeV}} \right)^{1/2}\!\!\left(\frac{\epsilon_{X}}{27\,{\rm keV}}\right)^{1/2}\!\! \left(\frac{1-\cos\theta}{0.5}\right)^{1/2},\]
where \(\theta\) is the angle between the incoming photons, and we have scaled the average energy of a target photon to the temperature of the hot corona, \(\langle\epsilon_{X}\rangle\approx 2.7\,T_{X}=2.7-27\,{\rm keV}\). Notice that for \(E_{\gamma}\gtrsim\mathrm{TeV}\times(10\,{\rm keV}/T_{X})\), the energy of these collisions will typically exceed not
only the threshold for electron-positron pair production, but also that for the production of muon-antimuon pairs.
The cross sections for electron-positron and muon-antimuon pair production are given by [36; 37; 38]:
\[\sigma_{\gamma\gamma}=\frac{2\pi\alpha^{2}}{E_{\rm CM}^{2}}\bigg{[}2\beta(\beta ^{2}-2)+(3-\beta^{4})\ln\bigg{(}\frac{1+\beta}{1-\beta}\bigg{)}\bigg{]}, \tag{7}\]
where
\[\beta\equiv\bigg{[}1-\bigg{(}\frac{2m_{e,\mu}}{E_{\rm CM}}\bigg{)}^{2}\bigg{]} ^{1/2}. \tag{8}\]
Using this cross section, we can integrate over the distribution of target photons to determine the rate of lepton-antilepton production for a gamma-ray propagating through an isotropic radiation field:
\[\Gamma_{\gamma\gamma}=\int_{-1}^{1}\frac{1-\cos\theta}{2}\,d(\cos\theta)\int \sigma_{\gamma\gamma}\,\frac{dn_{X}}{d\epsilon_{X}}\,d\epsilon_{X}, \tag{9}\]
where \(dn_{X}/d\epsilon_{X}\) is the differential number density of target photons. In Fig. 1, we show the ratio of the rates for muon and electron pair production as a function of gamma-ray energy. At very high energies, this curve asymptotically approaches unity, corresponding to equal rates for the production of electron-positron and muon-antimuon pairs. In the relativistic limit, these muons will carry half of the energy of the very high-energy photon, leading to neutrinos with \(E_{\nu}\approx E_{\gamma}/6\).
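The calculation behind Fig. 1 can be reproduced approximately with the sketch below, which implements Eqs. 7-9 for a blackbody target spectrum (a simplified Python estimate using scipy; overall normalization constants are dropped since they cancel in the ratio, and the temperature is set to the 10 keV case quoted above).

```python
import numpy as np
from scipy import integrate

ALPHA = 1.0 / 137.036        # fine-structure constant
M_E, M_MU = 0.511, 105.66    # lepton masses [MeV]

def sigma_pair(E_cm, m_lep):
    """Lepton pair-production cross section (Eqs. 7-8); arbitrary overall
    units, since the constants cancel in the mu/e rate ratio."""
    if E_cm <= 2.0 * m_lep:
        return 0.0
    beta = np.sqrt(1.0 - (2.0 * m_lep / E_cm) ** 2)
    bracket = (2.0 * beta * (beta**2 - 2.0)
               + (3.0 - beta**4) * np.log((1.0 + beta) / (1.0 - beta)))
    return 2.0 * np.pi * ALPHA**2 / E_cm**2 * bracket

def rate(E_gamma, T_x, m_lep):
    """Unnormalized interaction rate (Eq. 9) in an isotropic blackbody of
    temperature T_x [MeV]; dn/d(eps) ~ eps^2 / (exp(eps/T_x) - 1)."""
    def integrand(eps, mu):  # mu = cos(theta)
        E_cm = np.sqrt(2.0 * E_gamma * eps * (1.0 - mu))
        return 0.5 * (1.0 - mu) * sigma_pair(E_cm, m_lep) * eps**2 / np.expm1(eps / T_x)
    val, _ = integrate.dblquad(integrand, -1.0, 1.0, 1e-3 * T_x, 30.0 * T_x)
    return val

T_x = 10.0e-3  # 10 keV corona, in MeV
for E_TeV in (1.0, 3.0, 10.0, 30.0, 100.0):
    r = rate(E_TeV * 1e6, T_x, M_MU) / rate(E_TeV * 1e6, T_x, M_E)
    print(f"E_gamma = {E_TeV:6.1f} TeV : Gamma_mu / Gamma_e ~ {r:.3f}")
```

As in Fig. 1, the computed ratio climbs toward unity at the highest gamma-ray energies.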
We note that muon pair production will be further suppressed if keV-scale photons do not dominate the radiation fields that are present within the scattering region. In the presence of large number densities of \(\sim\mathcal{O}(\rm{eV}-\rm{keV})\) photons (see Ref. [39]), most of the very high-energy photons will instead scatter with this lower-energy radiation to produce electron-positron pairs.
## IV Neutrinos from muon pair production in NGC 1068
To calculate the neutrino spectrum from muon pair production in the core of an AGN, such as NGC 1068, we follow the procedure described in the previous section, adopting an electron spectrum of \(dN_{e}/dE_{e}\propto E_{e}^{-2}\,\exp(-E_{e}/300\,\rm{TeV})\). The results of this calculation are shown in Fig. 2, for several choices of the magnetic field strength and the temperature of the hot corona. In each case, we have normalized the electron spectrum such that the total power injected above 1 TeV is \(\sim(2-12)\times 10^{42}\,\rm{erg/s}\), as indicated in the caption. This normalization has been set under the assumption that the corona is optically thick to very high-energy photons. If this is not the case, the normalization of the high-energy electrons would need to be increased accordingly. The shaded band in these figures represents the neutrino spectrum observed from NGC 1068, as reported by the IceCube Collaboration [22]. We have treated this emission as isotropic, and have taken the distance to NGC 1068 to be 14.4 Mpc. Note that for the magnetic fields considered here, the timescale for muon synchrotron energy losses is much longer than the lifetime of these particles.
## V Discussion and summary
In this letter, we have proposed a novel mechanism for the production of high-energy neutrinos in astrophysical environments which is purely leptonic and does not rely on the acceleration of protons or nuclei. Instead of generating neutrinos through the process of pion production, we suggest that very high-energy gamma rays could interact with X-rays in the source to produce muon-antimuon pairs, which subsequently decay to generate high-energy neutrinos. Such very high-energy photons could potentially be generated through either synchrotron or inverse Compton scattering, and could lead to a spectrum of TeV-scale neutrinos that is compatible with that recently reported from the active galaxy, NGC 1068.
We consider it unlikely that this mechanism is responsible for most of the diffuse neutrino spectrum reported by the IceCube Collaboration. In realistic astrophysical environments, muon pair production could efficiently produce neutrinos in the \(\sim 1-100\,\rm{TeV}\) range, but would not significantly contribute at higher energies. Although we have focused here on the cores of AGN, other environments in which high-energy photons are present in high-temperature radiation fields could also generate neutrinos through muon pair production. Gamma-ray bursts, for example, could be interesting in this context.

Figure 1: The ratio of the rates for muon-antimuon pair production and electron-positron pair production as a function of gamma-ray energy, for a blackbody spectrum of target photons with a temperature of \(T_{X}=1\,\rm{keV}\) (dashed) or \(T_{X}=10\,\rm{keV}\) (solid).
One way to potentially determine whether the neutrinos from a given source are produced through muon pair production or through pion production would be to measure their flavor ratios [40]. Whereas pion decay produces neutrinos in a ratio of \(\nu_{e}:\nu_{\mu}:\nu_{\tau}=1:2:0\) (which after oscillations becomes \(\nu_{e}:\nu_{\mu}:\nu_{\tau}\approx 1:1:1\)), muon decay yields \(\nu_{e}:\nu_{\mu}:\nu_{\tau}=1.5:1.5:0\) (or after oscillations, \(\nu_{e}:\nu_{\mu}:\nu_{\tau}\approx 1.2:0.9:0.9\)). Although such a measurement would certainly be very challenging [41], this information could, in principle, be used to discriminate between these production mechanisms. We also point out that this mechanism can only produce neutrinos at energies above the threshold for muon pair production, corresponding to \(E_{\nu}\gtrsim 0.3\,\mathrm{TeV}\times(10\,\mathrm{keV}/T)\). If the neutrino spectrum from NGC 1068 is observed to extend to lower energies, hadronic interpretations would be favored.
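For reference, the oscillation-averaged flavor ratios quoted above can be reproduced with a few lines of code; the mixing angles below are nominal values adopted purely for illustration (with \(\delta_{CP}=0\)), not parameters taken from this paper.

```python
import numpy as np

# Nominal mixing angles (degrees), adopted for illustration; delta_CP = 0.
th12, th13, th23 = np.radians([33.4, 8.6, 49.0])
s12, c12 = np.sin(th12), np.cos(th12)
s13, c13 = np.sin(th13), np.cos(th13)
s23, c23 = np.sin(th23), np.cos(th23)

# Real PMNS matrix for delta_CP = 0.
U = np.array([
    [ c12 * c13,                    s12 * c13,                    s13      ],
    [-s12 * c23 - c12 * s23 * s13,  c12 * c23 - s12 * s23 * s13,  s23 * c13],
    [ s12 * s23 - c12 * c23 * s13, -c12 * s23 - s12 * c23 * s13,  c23 * c13],
])

# Oscillation-averaged transition probabilities, P_ab = sum_i |U_ai|^2 |U_bi|^2.
P = (U**2) @ (U**2).T

def at_earth(source_ratio):
    phi = np.asarray(source_ratio, dtype=float)
    return (phi / phi.sum() * 3.0) @ P   # normalize so the three flavors sum to 3

print("pion decay   (1 : 2 : 0)    ->", np.round(at_earth([1.0, 2.0, 0.0]), 2))
print("muon pairs   (1.5 : 1.5 : 0) ->", np.round(at_earth([1.5, 1.5, 0.0]), 2))
```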
###### Acknowledgements.
We would like to thank Takahiro Sudoh, Haocheng Zhang, Kohta Murase, Ke Fang, Tim Linden, Gordan Krnjaic, Carlos Blanco, Damiano Caprioli, and Rodolfo Capdevilla for helpful discussions. DH is supported by the Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the U.S. Department of Energy, Office of Science, Office of High Energy Physics. KP is supported by the National Science Foundation under grant AST-1828784.
|
2304.11520
|
Processing Natural Language on Embedded Devices: How Well Do Transformer
Models Perform?
|
This paper presents a performance study of transformer language models under
different hardware configurations and accuracy requirements and derives
empirical observations about these resource/accuracy trade-offs. In particular,
we study how the most commonly used BERT-based language models (viz., BERT,
RoBERTa, DistilBERT, and TinyBERT) perform on embedded systems. We tested them
on four off-the-shelf embedded platforms (Raspberry Pi, Jetson, UP2, and UDOO)
with 2 GB and 4 GB memory (i.e., a total of eight hardware configurations) and
four datasets (i.e., HuRIC, GoEmotion, CoNLL, WNUT17) running various NLP
tasks. Our study finds that executing complex NLP tasks (such as "sentiment"
classification) on embedded systems is feasible even without any GPUs (e.g.,
Raspberry Pi with 2 GB of RAM). Our findings can help designers understand the
deployability and performance of transformer language models, especially those
based on BERT architectures.
|
Souvika Sarkar, Mohammad Fakhruddin Babar, Md Mahadi Hassan, Monowar Hasan, Shubhra Kanti Karmaker Santu
|
2023-04-23T03:01:39Z
|
http://arxiv.org/abs/2304.11520v4
|
# Exploring Challenges of Deploying BERT-based NLP Models in Resource-Constrained Embedded Devices
###### Abstract
BERT-based neural architectures have established themselves as popular state-of-the-art baselines for many downstream NLP tasks. However, these architectures are data-hungry and consume a lot of memory and energy, often hindering their deployment in many real-time, resource-constrained applications. Existing lighter versions of BERT (e.g., DistilBERT and TinyBERT) often cannot perform well on complex NLP tasks. More importantly, from a designer's perspective, it is unclear what is the "right" BERT-based architecture to use for a given NLP task that can strike the optimal trade-off between the resources available and the minimum accuracy desired by the end user. System engineers have to spend a lot of time conducting trial-and-error experiments to find a suitable answer to this question. This paper presents an _exploratory study of BERT-based models under different resource constraints and accuracy budgets_ to derive empirical observations about this _resource/accuracy trade-offs_. Our findings can help designers to make informed choices among alternative BERT-based architectures for embedded systems, thus saving significant development time and effort.
## 1 Introduction
Transformer-based architectures (Vaswani et al., 2017), especially BERT-based models (Devlin et al., 2018), have established themselves as popular state-of-the-art baselines for many downstream NLP tasks, including _Intent Classification (IC)_, _Sentiment Classification (SC)_, and _Named Entity Recognition (NER)_. However, a well-known criticism of BERT-based architectures is that they are data-hungry and consume a lot of memory and energy; therefore, deploying them in many resource-constrained devices is quite challenging. In fact, due to their excessive size (431 MB) and parameters (110 M), deploying a pre-trained BERT-base model in resource-constrained embedded devices is often impractical, especially at the production level with certain minimum accuracy/performance requirements.
To alleviate this problem, researchers have introduced lighter versions of BERT (e.g., DistilBERT (Sanh et al., 2019) and TinyBERT (Jiao et al., 2020)). But such reductions in complexity come at a cost, usually in terms of a drop in accuracy. The degree of degradation in performance depends on the difficulty of the task, especially since those models often cannot perform well on complex NLP tasks, including emerging entity recognition (Derczynski et al., 2017) or mixed emotion detection (Demszky et al., 2020). Therefore, designers must make an inevitable trade-off between an accurate model and a model that can run smoothly in a resource-constrained environment. Unfortunately, developers often have little idea about this trade-off and have to spend a lot of time conducting trial-and-error experiments to find a suitable architecture that is both feasible for their resource-constrained hardware and meets their desired level of accuracy.
To design a dialog-based interaction system for an embedded device, we need models that are feasible to _(a)_ run on resource-constrained hardware and _(b)_ meet the desired level of accuracy. From a developer's perspective, it is still unclear what is the "right" BERT-based architecture to use for a given NLP task that can strike a suitable _trade-off between the resources available and the minimum accuracy_ desired by the user. Due to the staggering size of the \(BERT_{Base}\) model, we experiment with different distilled BERT models (e.g., DistilBERT and TinyBERT) for IC, SC, and NER tasks. However, we find that existing ready-to-use distilled models perform poorly on some SC and NER datasets (Sec. 5). Hence there is a need to _explore other models that can better optimize the efficiency/accuracy trade-offs_.
This research performs an _exploratory study of BERT-based models under different resource constraints and accuracy budgets to derive empirical data about these resource/accuracy trade-offs_. In particular, we measure the overhead of running various BERT architectures on an off-the-shelf Raspberry Pi-based six-degrees-of-freedom (6-DOF) robot platform.
In summary, our contributions are as follows.
* Our study systematically investigates the performance of BERT-based language models on a real robotic platform, more precisely on a six-degrees-of-freedom (6-DOF) robot testbed. Furthermore, we analyze the trade-offs between complexity and accuracy in resource-constrained environments across multiple downstream NLP tasks (Sec. 4-Sec. 5).
* We explore the potential design options for various BERT-based architectures on diverse resource-limited platforms, viz., Raspberry Pi boards with 2 GB, 4 GB, and 8 GB RAM configurations. Through empirical observations, we have developed a comprehensive lookup table that encompasses various significant design parameters like inference timings, memory usage, and energy consumption (Sec. 5). Our findings can help designers make choices among alternative BERT-based architectures under given resource constraints, thus saving development time and hassle.
* In addition to evaluating BERT-based language models on a real-world robotic platform, our goal was to fill the gap between simulated studies and real-world scenarios since no previous work has deployed these models on a robotic platform.
We now start with the state-of-the-art techniques (Sec. 2). We then introduce the problem statement and present selected datasets (Sec. 3). Section 4 describes our experiment methodology before we discuss our findings in Sec. 5.
## 2 Related Work
We discuss related research along two dimensions: _(a)_ BERT-based models and their efficient variants and _(b)_ using NLP on embedded devices.
BERT-based Models and their Variants. The performance of BERT comes at a high computation and memory cost, which makes on-device inference particularly challenging. To mitigate this issue, researchers have proposed knowledge distillation approaches from the original BERT model, for example, _(a)_ "finetune" the BERT model to improve task-specific knowledge distillation (Turc et al., 2019; Tsai et al., 2019), _(b)_ use Bi-LSTM models (Tang et al., 2019) for knowledge distillation from BERT, _(c)_ leverage single-task models to teach a multi-task model (Clark et al., 2019), _(d)_ distillation of knowledge from an ensemble of BERT into a single BERT (Liu et al., 2020), _(e)_ TinyBERT (Jiao et al., 2020) uses a layer-wise distillation strategy for BERT in both the pre-training and fine-tuning stages, and _(f)_ DistilBERT (Sanh et al., 2019) halves the depth of the BERT model through knowledge distillation in the pre-training stage and an optional fine-tuning stage. In a different direction, the _Patient Knowledge Distillation_ approach (Sun et al., 2019) compresses an original large model ("teacher") into an equally-effective lightweight shallow network ("student"). Other BERT models (e.g., SqueezeBERT (Iandola et al., 2020), MobileBERT (Sun et al., 2020), QBERT (Zafrir et al., 2019), ALBERT (Lan et al., 2020)) can also reduce resource consumption compared to the vanilla BERT. EdgeBERT (Tambe et al., 2021), an algorithm-hardware co-design approach, performs latency-aware energy optimizations for multi-task NLP problems. However, unlike ours, EdgeBERT _(a)_ does not apply attention head pruning, and _(b)_ does not report scores on downstream NLP tasks on real-world embedded systems.
NLP for Embedded Robots. Researchers have explored NLP techniques to facilitate natural communication between humans and embedded devices, especially in the context of voice-controlled cognitive robots. For example, Megalingam et al. (2019) presents a voice recognition tool that compares the user's input commands with the stored data. Zhang et al. (2018) propose a ROS-based robot that analyzes commands using an offline grammar recognition library. Megalingam et al. (2021) propose a cost-efficient speech processing module running on ROS that can provide natural language responses to the user. There also exist ROS-integrated independent speech recognition packages (Zhang and Xu, 2015; Sharan et al., 2019) as well as Arduino-based (Verma et al., 2020) and custom (Lv et al., 2008) voice-control robot platforms. House et al. (2009) present a voice-controlled robotic arm
(named VoiceBot) for individuals with motor impairments (Bilmes et al., 2005). However, most of these works focused on rule-based approaches, and we note that transformer architectures are still under-explored in terms of their practical deployment challenges in real-world embedded and robotic devices, which is the focus of this study.
### Uniqueness of Our Work
While existing work can reduce the size of BERT models through distillation and pruning, from a system design perspective, it is still difficult and tedious for a developer to find out the "right" BERT-based architecture to use in an embedded platform. To date, it is also unclear which lighter version of BERT would strike the optimal trade-off between the resources available in an embedded device and the minimum accuracy desired. Our empirical evaluation and design space exploration table (see Sec. 5) can help the system and machine learning engineers to pick suitable architectures depending on target system configuration and performance constraints (e.g., accuracy, \(F_{1}\) score). To the best of our knowledge, this work is _one of the first efforts to study the feasibility of deploying BERT-based models_ in real-world resource-constrained robots.
## 3 Problem Statement & Datasets
### NLP Tasks under Consideration
This paper primarily focuses on exploring the challenges of deploying natural language dialog interfaces on resource-constrained embedded devices. The core technical challenge associated with any dialog system is to accurately understand and interpret user "utterances" and perform the right "action" accordingly. At a fundamental level, user utterance understanding relies on multiple basic NLP tasks including (but not limited to): _(a) Intent classification (IC)_ -- to understand the need of the user, _(b) Sentiment Classification (SC)_ -- to understand user emotions, and _(c) Named-entity Recognition (NER)_ -- to extract related entities. These tasks are fundamental NLP tasks for any voice-controlled real-life robot or interactive system, chatbot, or virtual assistant, such as Microsoft's LUIS system and Google's Dialogflow. As the first exploratory study of language models on a real-life robotic platform, in this work, we exclusively focus on these three popular NLP tasks (viz., IC, SC, and NER) and study _how BERT-based models can be optimally deployed_ to accomplish dialog processing in embedded devices.
### Datasets
Our study includes the following datasets1: _(a)_ HuRIC (for IC), _(b)_ GoEmotion (for SC), and _(c)_ CoNLL and WNUT17 (for NER), as we present below.
Footnote 1: Appendix B presents additional details about these datasets.
#### 3.2.1 Intent Classification: HuRIC
For IC, we use Human Robot Interaction Corpus (HuRIC) (Vanzo et al., 2020), which is the state-of-the-art single-class classification dataset. The basic idea of HuRIC is to build a reusable corpus for human-robot interaction in natural language for a specific application domain, i.e., house service robots. HuRIC includes a wide range of user utterances given to a robot representing different situations in a house environment. Table 1 presents some statistics of HuRIC.
#### 3.2.2 Sentiment Classification: GoEmotion
We use the GoEmotion (Demszky et al., 2020) dataset from Google AI for the SC task. GoEmotion is a human-annotated dataset of 58,000 Reddit comments extracted from popular English-language subreddits and labeled with 27 emotion categories. As the largest fully annotated English language fine-grained emotion dataset to date, the GoEmotion taxonomy was designed with both psychology and data applicability in mind. Table 2 presents some statistics of GoEmotion.
\begin{table}
\begin{tabular}{c|c} \hline Statistic & Count \\ \hline \hline Number of examples & 729 \\ \hline Number of intent labels & 11 \\ \hline Size of training dataset & 583 \\ \hline Size of test dataset & 146 \\ \hline \end{tabular}
\end{table}
Table 1: Statistics of HuRIC dataset.
\begin{table}
\begin{tabular}{c|c} \hline Statistic & Count \\ \hline \hline Number of labels & 27 + Neutral \\ \hline Maximum sequence length in overall datasets & 30 \\ \hline Size of training dataset & 43,410 \\ \hline Size of test dataset & 5,427 \\ \hline Size of validation dataset & 5,426 \\ \hline \end{tabular}
\end{table}
Table 2: Statistics of GoEmotion data set.
#### 3.2.3 Named-entity Recognition: CoNLL & WNUT17
For NER we consider two datasets, viz., CoNLL Sang and De Meulder (2003) and WNUT17 Derczynski et al. (2017).
CoNLL. CoNLL-2003 (Sang and De Meulder, 2003) was released as a part of the CoNLL-2003 shared task: language-independent named entity recognition. The English corpus from this shared task consists of Reuters news stories between August 1996 and August 1997, each annotated with the entities associated with them. The data set consists of a training file, a development file, and a test file. The details of CoNLL-2003 are presented in Table 3.
WNUT17. While the CoNLL corpus is based on news stories, we were looking for a dataset that contains user utterances such as those available on HuRIC. Unfortunately, we could not find such a NER dataset but discovered a very similar corpus that contains user-generated text, named WNUT2017 (Derczynski et al., 2017). The dataset's shared task focuses on identifying unusual, previously-unseen entities in the context of emerging discussions. Identifying entities in noisy text is challenging, even for human annotators, due to novel entities and surface forms. In this dataset, user comments were mined from different social media platforms because they are large, and samples can be mined along different dimensions, such as texts from/about geo-specific areas, about home aid, and particular topics and events. Table 4 summarizes the dataset properties.
## 4 Experimental Setup
We now summarize BERT architectures and configurations used in our experiments (Sec. 4.1 and Sec. 4.2) and present the robot testbed (Sec. 4.3). To measure the performance of each task (IC, SC, NER), we use three popular metrics (i.e., Precision, Recall, and \(F_{1}\) score). Section 4.4 lists the design questions explored in our investigation.
### Off-the-Shelf BERT Variants
We use the pretrained base variants of the BERT (Devlin et al., 2018), RoBERTa (Liu et al., 2019), DistilBERT (Sanh et al., 2019), and TinyBERT (Jiao et al., 2020) models from Huggingface2 and finetune them on the respective datasets (i.e., HuRIC, GoEmotion, CoNLL, and WNUT17).
Footnote 2: [https://huggingface.co/](https://huggingface.co/).
Table 5 presents the hyper-parameters used in our experiments.
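For concreteness, a minimal sketch of this fine-tuning setup using the Hugging Face Trainer API is shown below; this is not the authors' released code, and the two-example toy dataset merely stands in for the actual HuRIC splits.

```python
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL_NAME = "bert-base-uncased"   # also: roberta-base, distilbert-base-uncased, ...
NUM_LABELS = 11                    # e.g., the 11 HuRIC intent labels

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=NUM_LABELS)

# Toy stand-in for the real HuRIC train/test splits.
train_ds = Dataset.from_dict({
    "text": ["bring the mug to the kitchen", "follow me to the living room"],
    "label": [3, 7],
})
eval_ds = train_ds

def tokenize(batch):
    # Max sequence length of 128 for IC/SC (cf. Table 5).
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

train_ds = train_ds.map(tokenize, batched=True)
eval_ds = eval_ds.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="bert-huric",
    num_train_epochs=3,
    per_device_train_batch_size=64,
    learning_rate=2e-5,            # within the [1e-6, 1e-4] range of Table 5
    weight_decay=0.01,
    adam_epsilon=1e-8,
)

trainer = Trainer(model=model, args=args, train_dataset=train_ds, eval_dataset=eval_ds)
trainer.train()
trainer.save_model("bert-huric-finetuned")
```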
### Pruning and Custom Configurations
We also experiment with custom, smaller BERT configurations. We can reduce the model size on two fronts: _(a)_ by reducing the number of layers and _(b)_ by pruning various attributes.
In our study, we experiment with _different layer combinations_ of BERT models and test their performance on _different hardware configurations_, which are presented in Tables 9 and 10. With two layers of \(BERT_{Base}\) (instead of 12), the model size reduces significantly, but so does the accuracy (in terms of \(F_{1}\) score). Still, these models give better accuracy than the distilled models on complex NLP tasks. Also, where the 12-layer \(BERT_{Base}\) model cannot run on a resource-constrained device, using fewer layers enables a model to execute on tiny devices with good accuracy compared to the distilled methods (DistilBERT and TinyBERT).
For further shrinking of the model, pruning can be applied to weights, neurons, layers, channels, or attention heads, depending on the heuristic used. In this paper, we focus on pruning _attention heads_, which play an important role in the model's performance and contribute a large number of parameters.
\begin{table}
\begin{tabular}{c|c} \hline Statistic & Count \\ \hline \hline Number of examples & 5690 \\ \hline Number of labels & 6 \\ \hline Size of training dataset & 3394 \\ \hline Size of test dataset & 1287 \\ \hline Size of validation dataset & 1009 \\ \hline \end{tabular}
\end{table}
Table 4: Statistics of WNUT17 dataset.
\begin{table}
\begin{tabular}{l|c|c|c} \hline & Articles & Sentences & Tokens \\ \hline \hline Training set & 946 & 14,987 & 203,621 \\ \hline Development set & 216 & 3,466 & 51,362 \\ \hline Test set & 231 & 3,684 & 46,435 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Statistics of CoNLL dataset.
\begin{table}
\begin{tabular}{l|c|c} \hline \hline Hyperparameter & NER & IC/SC \\ \hline \hline Number of epochs & 3 & 3 \\ Batch size & 64 & 64 \\ Learning rate & \([1e-6,1e-4]\) & \([1e-6,1e-4]\) \\ Weight decay & \([0.01,0.3]\) & \([0.01,0.3]\) \\ Optimizer & Adam & Adam \\ Adam epsilon & 1e-8 & 1e-8 \\ Max sequence length & 64 & 128 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Hyperparameter values for finetuning BERT on IC/SC and NER tasks.
Although multi-headed attention is a driving force behind many recent state-of-the-art models, Michel et al. (2019) find that even if models have been trained using multiple heads, in practice, a large percentage of attention heads can be removed without significantly impacting performance. In fact, in some layers, _attention heads_ can even be reduced to a single head.
Based on this fact, we experiment with reducing the size of \(BERT_{Base}\) by dynamically pruning the _attention heads_. Each attention head provides a distribution of attention for the given input sequence. For each attention head, we estimate its importance by calculating the _entropy_ of its attention distribution. After that, we mask the attention heads with the lowest entropy. This process is repeated for each layer of the encoder. After masking the heads, we calculate the overall \(F_{1}\) score of the masked model and determine the drop in \(F_{1}\) score compared to the original unpruned model. If the drop is less than a predefined threshold, we prune the masked attention heads. We repeat the process until the drop in \(F_{1}\) score reaches the predefined threshold. This pruning procedure reduces the model size significantly while maintaining the desired model performance.
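A simplified sketch of this entropy-based head pruning, written against the Hugging Face/PyTorch interfaces, is given below; `evaluate_f1` is a placeholder for the task-specific evaluation loop, the entropy is estimated from a single calibration batch for brevity, and the exact stopping logic of the paper may differ.

```python
import torch

def attention_entropy(model, calib_batch):
    """Per-(layer, head) attention entropy, averaged over the batch and
    query positions, for one calibration batch of tokenized inputs."""
    model.eval()
    with torch.no_grad():
        out = model(**calib_batch, output_attentions=True)
    per_layer = []
    for attn in out.attentions:                        # each: (batch, heads, query, key)
        p = attn.clamp_min(1e-12)
        per_layer.append(-(p * p.log()).sum(-1).mean(dim=(0, 2)))
    return torch.stack(per_layer)                      # (layers, heads)

def entropy_head_pruning(model, calib_batch, evaluate_f1, max_drop=0.02):
    """Greedily mask the lowest-entropy active head in every layer, keeping
    the masks only while the relative F1 drop stays below `max_drop`.
    `evaluate_f1(model, head_mask)` is a user-supplied evaluation helper."""
    n_layers = model.config.num_hidden_layers
    n_heads = model.config.num_attention_heads
    head_mask = torch.ones(n_layers, n_heads)
    base_f1 = evaluate_f1(model, head_mask)
    entropy = attention_entropy(model, calib_batch)

    while head_mask.sum(dim=1).min() > 1:              # keep >= 1 head per layer
        candidate = head_mask.clone()
        for layer in range(n_layers):
            scores = entropy[layer].masked_fill(head_mask[layer] == 0, float("inf"))
            candidate[layer, scores.argmin()] = 0.0    # mask the lowest-entropy head
        if base_f1 - evaluate_f1(model, candidate) > max_drop * base_f1:
            break                                      # drop too large: keep previous mask
        head_mask = candidate

    # Physically remove the masked heads from the model.
    to_prune = {l: (head_mask[l] == 0).nonzero().flatten().tolist() for l in range(n_layers)}
    model.prune_heads({l: heads for l, heads in to_prune.items() if heads})
    return model, head_mask
```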
### Testbed & Hardware Configurations
We evaluate the BERT models on a six-degrees-of-freedom (6-DOF) robot testbed. We used a Raspberry Pi 4 Model B as the main computing unit of the robot. We attached the Adafruit servo extension board (HAT) to control the six servos used for robot movement. For voice processing, we used a ReSpeaker microphone (Respeaker, 2017) and leveraged the existing python library (i.e., speech recognition) for voice-to-text conversion. For energy measurements during the inference steps, we used a UM25C energy meter (UM25C, 2017). Figure 1 depicts our experimental setup. For better design-space exploration with various hardware resources, we used several board variants, viz., Raspberry Pi boards with 2 GB, 4 GB, and 8 GB of RAM. We used Robot Operating System (ROS) (Quigley et al., 2009) (version 2) for controlling the robot and processing the input data (see Appendix C for details).
### Design Challenges & Research Questions
We conducted extensive experiments to investigate the following research questions (RQs).
* **RQ 1.** Given specific user-defined constraints, such as system resources (processor, memory) and performance budgets (accuracy, inference time), what is the optimal (if any) BERT-based architecture satisfying those constraints?
* **RQ 2.** What is the accuracy vs. model-size trade-off as we prune the models?
* **RQ 3.** What are the trade-offs of accuracy and corresponding resource usage (e.g., memory, inference-time, energy consumptions) as we perform pruning?
## 5 Results
In this section, we report our results for the three basic NLP tasks, i.e., IC, SC, and NER for both existing (e.g., BERT, RoBERTa, DistilBERT, and TinyBERT) and custom BERT architectures.
### Experience with Existing BERT Variants
#### Intent Classification (IC).
Recall from our earlier discussion (Sec. 3.2.1) that we used the HuRIC dataset for IC. Table 6 presents our findings after running the IC task using the HuRIC dataset on our robot testbed. We observed that all the models performed similarly on this dataset, achieving more than 90% \(F_{1}\) Score.
a very low \(F_{1}\) Score. The failure of the distilled models can be attributed to the difficulty of the task: multi-label SC requires each utterance to be classified with more than one sentiment, so this dataset is not a straightforward positive-negative sentiment detection task.
Named-entity Recognition (NER). As we mention in Sec. 3.2.3, we use two different datasets to test the NER task. Table 8 summarizes the performance over both NER datasets and shows that, for the CoNLL dataset, all models performed comparably. However, for the WNUT17 dataset (which focuses on identifying unusual, previously-unseen entities), the performance of the distilled models dropped sharply. This drop is likely due to the difficulty of the task, as it requires the ability to detect and classify novel, emerging, singleton-named entities in noisy inputs.
### Exploration with Custom Architectures
Based on our experiment results (Tables 6-8), we further explore different alternative BERT-based architectures by reducing the layers and pruning the attention heads from the original \(BERT_{Base}\) model. For this exploration, we primarily focus on the challenging tasks, i.e., multi-label SC and emerging NER where off-the-shelf models (e.g., DistilBERT and TinyBERT) failed to perform.
Tables 9-10 present our exploration findings.3 We discuss our observations in Sec. 5.2.1 and provide answers to the research questions posed in Sec. 4.4. Before we move forward with the discussion, we now present a brief overview of the attributes reported in Tables 9-10.
Footnote 3: **Note:** we omit the results for IC on custom BERT architectures as existing models suffice to perform this task.
* \(F_{1}\) _Threshold (\(\theta\))_: The \(\theta\)-cells represents what percentage of the \(F_{1}\) score (with respect to \(BERT_{Base}\)) is retained by the models. In our experiment, we varied \(\theta\) between 50% to 90% and reported the model details in respective columns. For example, \(\theta\) set to 80% implies the \(\theta_{80}\) column.
* _Platform_: Indicates the memory capacity of the Raspberry Pi board attached to the robot.
* _Layer_: Represents the number of layers retained.
* _Model Size_: The size of the saved model after training.
* _Parameters_: This metric indicates the total number of parameters in the saved model.
* _Pruning_: Pruning percentage represents the reduction in size from the \(BERT_{Base}\) model. For example, a pruning percentage of 70% implies that the pruned model is 70% smaller than \(BERT_{Base}\).
* _Energy Consumption_: The average energy consumed (in watts) by the robot controller
\begin{table}
\begin{tabular}{c c c|c c c|c c c|c c c} \hline \multicolumn{12}{c}{**Intent Classification Task (Dataset: HuRIC)**} \\ \hline \hline \multicolumn{3}{c|}{**BERT**} & \multicolumn{3}{c|}{**RoBERTa**} & \multicolumn{3}{c|}{**DistilBERT**} & \multicolumn{3}{c}{**TinyBERT**} \\ \hline
**Precision** & **Recall** & \(F_{1}\) & **Precision** & **Recall** & \(F_{1}\) & **Precision** & **Recall** & \(F_{1}\) & **Precision** & **Recall** & \(F_{1}\) \\ \hline
0.943 & 0.985 & 0.961 & 0.975 & 0.952 & 0.962 & 0.951 & 0.903 & 0.927 & 0.912 & 0.897 & 0.902 \\ \hline \end{tabular}
\end{table}
Table 6: Performance of BERT, RoBERTa, DistilBERT, and TinyBERT for the IC task.
Table 7: Performance of BERT, RoBERTa, DistilBERT, and TinyBERT for the SC task.
\begin{table}
\begin{tabular}{c c c|c c c|c c c|c c c} \hline \multicolumn{12}{c}{**Named-entity Recognition Task**} \\ \hline \hline \multicolumn{12}{c}{**Dataset: CoNLL**} \\ \hline \multicolumn{3}{c|}{**BERT**} & \multicolumn{3}{c|}{**RoBERTa**} & \multicolumn{3}{c|}{**DistilBERT**} & \multicolumn{3}{c}{**TinyBERT**} \\ \hline
**Precision** & **Recall** & \(F_{1}\) & **Precision** & **Recall** & \(F_{1}\) & **Precision** & **Recall** & \(F_{1}\) & **Precision** & **Recall** & \(F_{1}\) \\ \hline
0.891 & 0.963 & 0.926 & 0.882 & 0.955 & 0.917 & 0.906 & 0.967 & 0.935 & 0.872 & 0.958 & 0.911 \\ \hline \hline \multicolumn{12}{c}{**Dataset: WNUT17**} \\ \hline \multicolumn{3}{c|}{**BERT**} & \multicolumn{3}{c|}{**RoBERTa**} & \multicolumn{3}{c|}{**DistilBERT**} & \multicolumn{3}{c}{**TinyBERT**} \\ \hline
**Precision** & **Recall** & \(F_{1}\) & **Precision** & **Recall** & \(F_{1}\) & **Precision** & **Recall** & \(F_{1}\) & **Precision** & **Recall** & \(F_{1}\) \\ \hline
0.671 & 0.295 & 0.410 & 0.537 & 0.315 & 0.397 & 0.316 & 0.014 & 0.028 & 0.0 & 0.0 & 0.0 \\ \hline \end{tabular}
\end{table}
Table 8: Performance of BERT, RoBERTa, DistilBERT, and TinyBERT for the NER task.
(Raspberry Pi) during the inference of a given command.
* _Memory Consumption_: Maximum memory usage (in megabytes) of the corresponding NLP task running on the robot during the inference time.
* _Inference Time_: Depending on the specific task, the average time required for the model to infer the appropriate _Intent, Sentiment or Entity_ from a given command.
#### 5.2.1 Observations
We now discuss our major observations and address the research questions introduced in Sec. 4.4.
Selecting "suitable" model subject to given constraints [RQ 1].We can address this specific research question by inspecting Table 9 and 10. Note that Table 9 and 10 provide information on the model size, performance, parameters, and pruning for the SC and NER tasks, respectively. Let us assume a system designer is looking for suitable NER models for a 2 GB robot platform that maintains approximately 70% of BERT's accuracy (\(\theta_{70}\)). In this case, we can _(a)_ scan through the NER performance metrics (i.e., Table 10), and _(b)_ observe from 2 GB _Platform_ row and \(\theta_{70}\) column that a six-layered and pruned (45% reduced) BERT model can run on a 2 GB platform and attain 70% of \(BERT_{Base}\)'s original \(F_{1}\) score. Hence, our exploration (and similar experiments along this line) can _aid the designers to select appropriate models with desired performance guarantees_.
Accuracy and model-size trade-offs for pruned architectures [RQ 2]. Table 9 and Table 10 further provide insights on the pruning vs. \(F_{1}\) score trade-off. For example, in Table 9, the 2 GB _Platform_ row shows a set of models that can run on that system. The same row and \(\theta_{80}\) (80% \(F_{1}\)_score threshold_) column shows that even pruning 55% of a \(BERT_{Base}\) model with four layers can retain 80% of the original \(F_{1}\) score of \(BERT_{Base}\), while the model size can be reduced to 198.7 MB from 441.55 MB. Tables 9 and 10 indicate that _pruning has only a minor impact on memory consumption and does not have a significant effect on energy consumption_. Furthermore, our analysis of Table 9 suggests that _inference time is directly proportional to the size of the model, implying that decreasing the size of the model leads to a corresponding decrease in inference time_.
Accuracy vs. system resource trade-offs for pruned architectures [RQ 3]. If a user has precise requirements for inference time and memory consumption, one can scan Tables 9 and 10 to pick the optimum model that meets those requirements. For instance, if we want to find NER models that can make inferences in less than 0.35 seconds on an embedded platform that has 4 GB of memory, the corresponding _Platform_ row (4 GB) in Table 10 shows us the model parameters that can satisfy this requirement (e.g., two-layered, six-layered). Since both of them are feasible for the chosen platform, designers can choose any of them based on the required application performance. As an example, if we pick a two-layered BERT model, the accuracy is 60%, and the memory consumption is 711.9 MB. In contrast, if we select the six-layered BERT model, it can achieve 90% accuracy, with a cost of higher memory consumption of 735.8 MB. Hence, at the expense of 3.36% higher memory consumption, it is possible to get 20% more accuracy. Such a lookup-based approach allows the designers to perform a desired _cost-benefit analysis_.
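This kind of lookup can be automated in a few lines; the snippet below is only illustrative, with placeholder column names and inference times (the accuracy and memory figures echo the example above rather than reproducing Tables 9-10 in full).

```python
import pandas as pd

# Illustrative rows in the spirit of Tables 9-10; the inference times and the
# 2 GB row are placeholders, while the 4 GB accuracy/memory numbers echo the
# example discussed above.
table = pd.DataFrame([
    {"platform_gb": 2, "layers": 6, "f1_retained": 0.70, "memory_mb": 640.0, "infer_s": 0.41},
    {"platform_gb": 4, "layers": 2, "f1_retained": 0.60, "memory_mb": 711.9, "infer_s": 0.21},
    {"platform_gb": 4, "layers": 6, "f1_retained": 0.90, "memory_mb": 735.8, "infer_s": 0.33},
])

# Designer constraints: 4 GB board, inference in under 0.35 s.
feasible = table.query("platform_gb == 4 and infer_s < 0.35")
best = feasible.sort_values("f1_retained", ascending=False).iloc[0]
print(feasible)
print("Selected configuration:", best.to_dict())
```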
Since many embedded platforms used for NLP tasks (e.g., voice-controlled robots) are battery-operated, energy consumption for inferring user commands is a crucial parameter. Hence we also analyze the energy usage of the NLP tasks. For any selected BERT model, one can also find the system energy consumption from Tables 9 and 10 for the two different tasks, respectively. From the design-space exploration tables, it is evident that during the inference of a given command, _energy consumption does not vary significantly over various models_.
## 6 Discussion & Limitations
In this research, we explore different custom architectures of BERT-based language models from the practical viewpoint of deploying them in a resource-constrained embedded system. We show that it is _not always feasible to shrink_ the size of a "finetuned" \(BERT_{Base}\) model while still satisfying certain user-defined accuracy/performance budgets. We also report which models are deployable to resource-constrained devices with given user requirements. We believe our empirical findings will help developers quickly narrow down plausible BERT-based architectures for target applications, thus saving a lot of development time and effort.
We note that our study is limited to BERT-based models for four existing datasets (i.e., may not generalize to other language models and datasets). However, our evaluation framework is modular and can be retrofitted to other architectures/datasets without loss of generality.
One of the challenges to figuring out the _optimal_ BERT-based architecture is the lack of application-specific (viz., voice-controlled robots for home automation) datasets. Existing datasets either _(a)_ do not have enough examples for training deep learning models or _(b)_ do not provide complex, practical queries to test the robustness of a given model. We believe that building suitable datasets for IoT-specific robotics applications such as voice-control home/industrial automation is an interesting open research problem.
## 7 Conclusion
We present an exploration of BERT-based neural architectures and study the feasibility of deploying BERT models on resource-constrained systems. Our study will allow the designers to select the "right" architecture and parameters depending on the resource constraints and performance requirements. We believe that our approach is general and can be adapted to multiple domains, such as voice-controlled home and industrial automation, precision agriculture, and medical robots, among others.
|
2303.04834
|
Measurement of the angular momenta of pre-main-sequence stars: early
evolution of slow and fast rotators and empirical constraints on spin-down
torque mechanisms
|
We use TESS full-frame imaging data to investigate the angular momentum
evolution of young stars in Orion Complex. We confirm recent findings that
stars with rotation periods faster than 2 d are overwhelmingly binaries, with
typical separations of tens of AU; such binaries quickly clear their disks,
leading to a tendency for rapid rotators to be diskless. Among (nominally
single) stars with rotation periods slower than 2 d, we observe the familiar,
gyrochronological horseshoe-shaped relationship of rotation period versus
$T_{\rm eff}$, indicating that the processes which govern the universal
evolution of stellar rotation on Gyr timescales are already in place within the
first few Myr. Using spectroscopic $v\sin i$ we determine the distribution of
$\sin i$, revealing that the youngest stars are biased toward more pole-on
orientations, which may be responsible for the systematics between stellar mass
and age observed in star-forming regions. We are also able for the first time
to make empirical, quantitative measurements of angular momenta and their time
derivative as functions of stellar mass and age, finding these relationships to
be much simpler and monotonic as compared to the complex relationships
involving rotation period alone; evidently, the relationship between rotation
period and $T_{\rm eff}$ is largely a reflection of mass-dependent stellar
structure and not of angular momentum per se. Our measurements show that the
stars experience spin-down torques in the range ~$10^{37}$ erg at ~1 Myr to
~$10^{35}$ erg at ~10 Myr, which provide a crucial empirical touchstone for
theoretical mechanisms of angular momentum loss in young stars.
|
Marina Kounkel, Keivan G. Stassun, Lynne A. Hillenbrand, Jesús Hernández, Javier Serna, Jason Lee Curtis
|
2023-03-08T19:13:12Z
|
http://arxiv.org/abs/2303.04834v1
|
Measurement of the angular momenta of pre-main-sequence stars: early evolution of slow and fast rotators and empirical constraints on spin-down torque mechanisms
###### Abstract
We use TESS full-frame imaging data to investigate the angular momentum evolution of young stars in Orion Complex. We confirm recent findings that stars with rotation periods faster than 2 d are overwhelmingly binaries, with typical separations of tens of AU; such binaries quickly clear their disks, leading to a tendency for rapid rotators to be diskless. Among (nominally single) stars with rotation periods slower than 2 d, we observe the familiar, gyrochronological horseshoe-shaped relationship of rotation period versus \(T_{\rm eff}\), indicating that the processes which govern the universal evolution of stellar rotation on Gyr timescales are already in place within the first few Myr. Using spectroscopic \(v\sin i\) we determine the distribution of \(\sin i\), revealing that the youngest stars are biased toward more pole-on orientations, which may be responsible for the systematics between stellar mass and age observed in star-forming regions. We are also able for the first time to make empirical, quantitative measurements of angular momenta and their time derivative as functions of stellar mass and age, finding these relationships to be much simpler and monotonic as compared to the complex relationships involving rotation period alone; evidently, the relationship between rotation period and \(T_{\rm eff}\) is largely a reflection of mass-dependent stellar structure and not of angular momentum per se. Our measurements show that the stars experience spin-down torques in the range \(\sim 10^{37}\) erg at \(\sim\)1 Myr to \(\sim 10^{35}\) erg at \(\sim\)10 Myr, which provide a crucial empirical touchstone for theoretical mechanisms of angular momentum loss in young stars.
Marina Kounkel
Keivan G. Stassun
Lynne A. Hillenbrand
Jesús Hernández
Javier Serna
Jason Lee Curtis
## 1 Introduction
Young stars are born from large clouds of gas, and even at the earliest stages, these pre-stellar objects exhibit rotation (Covey et al., 2005). As the natal core transfers material through the protoplanetary disk onto the protostar sitting at the center, and the protostar itself contracts, through the conservation of angular momentum (\(L\)), the rotational speed would have increased until reaching break-up velocity. However, there are several mechanisms in place to regulate its rotation. A protostar is tidally locked to the inner part of the protoplanetary disk, matching its Keplerian rotation at the inner wall, removing the excess angular momentum through winds and jets that form bipolar outflows (Bouvier et al., 1997; Matt et al., 2012; Gallet et al., 2019).
After the outer envelope is consumed and a young star eventually depletes its disk, in the absence of another reservoir that would enable mass to flow, the angular momentum does not increase further. This young star will continue to contract, reducing its size by a factor of a few until it settles onto the main sequence. Despite this, the vast majority of stars will not rotate more rapidly over this time as a result of the contraction, as their angular momentum is decreasing due to magnetic braking (Skumanich, 1972). In large part, this spin-down of the angular momentum is very regular as a function of mass, enabling the age of a star to be estimated from a measurement of its rotational period via gyrochronology (e.g., Barnes, 2003).
There have been many subsequent efforts over the past 20 years to constrain gyrochronology relations, both theoretically (e.g., Gossage et al., 2021), and empirically (e.g., Angus et al., 2020; Curtis et al., 2020), but they mainly focused on main sequence stars, with only a limited consideration for pre-main sequence stars. Recently,
Kounkel et al. (2022, hereafter, Paper I) have used TESS full frame images (FFI) to measure rotational periods of \(\sim\)100,000 stars with known ages, producing the largest such catalog to date, which enabled the analysis of angular momentum evolution of stars between ages of \(\sim\)10 Myr and \(\sim\)1 Gyr. Stars younger than that present a particular complexity due to their rapid evolution, requiring a more careful analysis of their ages; as such, they were largely excluded from that analysis.
In this paper, we seek to fill in this gap in the current understanding of the rotational evolution of young stars. In particular, we examine rotation of stars within the Orion Complex, which is one of the closest massive star forming regions, spanning the area on the sky between \(76^{\circ}<\alpha<88^{\circ}\) and \(-10^{\circ}<\delta<15^{\circ}\) containing multiple sub-populations with ages from 1 to 10 Myr (Kounkel et al., 2018). It covers the crucial age that is often missed by previous gyrochronology studies. These data enable a precise measurement of a number of parameters beneficial for characterization of stellar rotation in this region.
Stellar rotation in Orion has been examined repeatedly in the past (e.g., Stassun et al., 1999; Rebull, 2001; Herbst et al., 2002), although the completeness of the census and the level of precision of the underlying data cannot be compared to the modern surveys. More recently, Serna et al. (2021) have also revisited rotation in this region. They have measured rotational periods for 517 members of the Orion Complex using TESS FFI, of which 352 stars also had available rotational velocity \(v\sin i\) measurements. As expected, they find a significant anti-correlation between rotation period and \(v\sin i\). They also note that in the stars younger than \(<5\) Myr, \(v\sin i\) appears to systematically decrease, through some type of angular momentum extraction mechanism (such as through disk locking), but that at older ages, \(v\sin i\) increases due to conservation of angular momentum. Additionally, accreting/disk-bearing young stars (Classical T Tauri stars - CTTSs / Class II) appear to rotate slower than non-accreting/diskless stars (Weak-lined T Tauri stars - WTTSs / Class III) during the first 1.5 Myr, but they appear to have a more comparable distribution at later ages.
While Serna et al. (2021) have primarily examined the distribution of \(v\sin i\), in this work we aim to examine the rotational period evolution of these young stars, significantly expanding the sample. We also leverage empirically determined stellar radii from Kounkel et al. (2020) and precise individual age estimates newly determined by McBride et al. (2021) to quantify the stellar angular momenta and its evolution as a function of stellar mass and age.
In Section 2 we summarize the sources of photometric and spectroscopic data we use to assemble our study sample of \(\sim\)9000 stars with rotation periods, precise individual ages between 1-10 Myr, disk classifications, and empirical radius determinations. Section 3 presents the principal results of our investigation, and in Section 4 we discuss those results in the context of empirical constraints on theoretical spin-down torque mechanisms, understanding the nature of the rapid rotators, the relationship of rotation to stellar disks, and an intriguing bias in the distribution of stellar inclination angles as a function of age. We conclude with a summary of our findings in Section 5.
## 2 Data
The latest census of kinematically selected members of the Orion Complex consists of over 10,000 optically bright stars (Kounkel et al., 2020). Paper I analyzed TESS FFI data for all of these sources and observed periodic signatures in the light curves of \(\sim\)5700 stars in this sample. As these stars are young, they tend to have strong magnetic fields that produce large spots, resulting in obvious rotational signatures. Furthermore, with typical rotational periods \(<\)10 days for the stars at these ages, full period data can easily be recovered even in a single TESS sector. The distribution of rotational periods for the Orion Complex (Paper I) is shown in Figure 1.
APOGEE spectra are available for several thousand stars across the Complex (Da Rio et al., 2016; Cottle et al., 2018; Kounkel et al., 2018). APOGEE is a high resolution (\(R\sim\)22,500) spectrograph covering H-band, operating as part of Sloan Digital Sky Survey (Wilson et al., 2019). These data have been extensively analyzed, improving the fidelity of spectral parameters for these young stars, in comparison to the primary pipeline used in public releases (Abdurro'uf et al., 2022). APOGEE Net (Olney et al., 2020; Sprague et al., 2022) provides \(T_{\rm eff}\) and \(\log g\) estimates that have been calibrated relative to the pre-main sequence isochrones. Kounkel et al. (2019) have estimated rotational velocity \(v\sin i\) of the stars. They also characterized multiplicity within this sample, including identification of both single-lined (SB1) and double-lined (SB2) spectroscopic binaries. In total, APOGEE spectra of 1831 periodic variables from Paper I in the Orion Complex are available. We adopt these 1831 stars as the primary sample throughout the paper, as only they have the full set of necessary measurements described in this section.
In addition to the directly measured spectroscopic parameters, supplementary data for these stars have been computed. In particular, Kounkel et al. (2018) have estimated angular diameters for these stars through the fitting of the spectral energy distribution (SED), using an atmospheric model corresponding to \(T_{\rm eff}\) of each star. Coupled with trigonometric parallaxes, this results in a direct estimate of stellar radii. This is a radius measurement independent of those also provided as part of Gaia DR3 (Fouesneau et al., 2022) or in the TESS Input Catalog (Stassun et al., 2019). While all three have comparable estimates, the radii in Kounkel et al. (2018) can be considered more robust for this sample, as a) they are obtained through spectroscopically derived \(T_{\rm eff}\) to identify the best model (in field stars, this can produce \(<\)2% precision; Stassun et al. 2017), b) the SED fit spans all of the available fluxes across the full electromagnetic spectrum, c) a special consideration is given to the stars with IR excess, which would affect the SED fit.
However, the \(T_{\rm eff}\) used for the SED fitting in Kounkel et al. (2018) were derived using the IN-SYNC pipeline (Cottaar et al., 2014), which had a number of known systematics. \(T_{\rm eff}\) from the APOGEE Net pipeline are more reliable for these young stars, as they offer a better calibration relative not only to the isochrones but also to more evolved field stars. APOGEE Net \(T_{\rm eff}\) are on average \(\sim\)300 K cooler than the IN-SYNC \(T_{\rm eff}\). As such, we renormalize the previously reported radii using the updated \(T_{\rm eff}\) through the Stefan-Boltzmann law, resulting in \(R\) estimates that are on average 20% larger than previously reported. We note that the effect of spots is not considered; we discuss the implications from them in Section 3.3.
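Because the SED fit fixes the bolometric flux (and hence, for a fixed parallax, the luminosity), this radius rescaling follows directly from the Stefan-Boltzmann law; a one-line check with representative temperatures (not actual catalog values) is:

```python
def renormalized_radius(r_old, teff_old, teff_new):
    """Rescale an SED-based radius after a Teff revision, holding the
    bolometric luminosity (L = 4 pi R^2 sigma Teff^4) fixed."""
    return r_old * (teff_old / teff_new) ** 2

# Representative case (not actual catalog values): a ~4000 K IN-SYNC Teff
# revised down by ~300 K gives a radius ~17-20% larger.
print(renormalized_radius(1.0, 4000.0, 3700.0))   # ~1.17
```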
We identify all of the known disk-bearing and/or accreting stars in the sample using a combination of different catalogs. They include classification based on H\(\alpha\) emission (Suarez et al., 2017; Briceno et al., 2019), classification of NIR excess from Spitzer and other high sensitivity surveys of various regions (Hernandez et al., 2007, 2010; Megeath et al., 2012; Grossschedl et al., 2019), as well as a classification of NIR excess based on WISE data across the entire sky (Marton et al., 2016). While accretion signature and disk classification are not necessarily interchangeable, they are expected to be equivalent in a vast majority of cases. Together, these correspond to 580 out of 1831 stars in our sample, which we consider to be Class II/CTTS. The remaining sources are considered to be Class III/WTTS.
We estimate masses for this sample using MassAge (Hernandez J. et al. in prep), through comparing \(T_{\rm eff}\) and bolometric luminosity to MIST isochrones (Choi et al., 2016). Finally, we estimate ages of stars using the neural net Sagitta (McBride et al., 2021). This neural net uses Gaia & 2MASS photometry (G, BP, RP, J, H, K), Gaia parallaxes, and the typical extinction along the line of sight to predict the likelihood of a star being pre-main sequence, as well as the age of a young star, up to \(\sim\)80 Myr. It was trained on the census of pre-main sequence stars found in moving groups from Kounkel (2020), with the age of the stars typically inherited from their parent population (subdividing the moving groups into smaller meaningful populations with a narrow age spread), supplemented with sources whose ages have been determined independently in the literature. Appendix A motivates this selection and compares the age model to other techniques. As masses and ages are derived using different models, this may introduce some systematics. However, given that these stars are typically moving vertically down the Hayashi tracks without a significant change in \(T_{\rm eff}\), there should not be a significant variance with mass as a function of age.
The distribution of these ages for the stars in Orion is shown in Figure 2. The uncertainties are generated by propagating the typical uncertainties in the input astrometry and photometry through the model; they
Figure 1: Distribution of rotational periods in the sample as a function of \(T_{\rm eff}\). Left panel: all sources in the Orion Complex from (Paper I), using \(T_{\rm eff}\) estimate from TIC (Stassun et al., 2019). Right: a subset of sources observed by APOGEE, with spectroscopically determined \(T_{\rm eff}\).
typically have a range of 0.05-0.07 dex. We do note that there is some overlap between the stars that have originally been used for training Sagitta and the stars in this work; we do not expect this to significantly affect the quality of the derived ages.
## 3 Results
In this section, we present the results of the distributions of stellar rotation periods for our study sample of Orion Complex stars, typically having ages \(\lesssim\)10 Myr. The sample includes stars that are rapid rotators and slow rotators, stars that have disks (CTTSs) and those that lack disks (WTTSs), and stars that are single or in binaries. We consider these subsets and their relationships in turn.
However, to begin, we note that the overall distribution of stellar rotation periods among the nominally single stars in the sample (Figure 1) already manifests, at these very young ages, the familiar "horseshoe-shaped" pattern in the \(P_{\rm rot}\) vs. \(T_{\rm eff}\) diagram that has become the basis for gyrochronology at later ages (e.g., Barnes, 2003). This suggests that, despite significant scatter at any given age, the dominant physical mechanisms that govern the evolution of angular momentum are already in place and are very quickly sculpting the distribution of angular momenta into the universal patterns with stellar mass and age that make gyrochronology possible.
### Rapid rotators
Examining the period distribution of CTTS and WTTS objects in Figure 3, there is a clear difference between the two samples. Very few CTTSs have rotational periods shorter than 2 days. On the other hand, \(\sim\)1/3 of WTTSs have rotational periods \(<\)2 days. We use this 2 day period to delineate between "slow" and "fast" rotators.
Examining their distribution as a function of age (Figure 4), we find that the overall fraction of rapid rotators remains more or less constant; however, there are important differences between the WTTSs and CTTSs with regards to this trend. Among WTTSs, the overall fraction of fast rotators decreases from \(\sim\)45% at ages \(<\)1 Myr to \(\sim\)30% at ages \(>\) 3 Myr. The reason for this trend is likely to be the increasing dilution of the diskless stars over time by additional slow rotators, as all stars irrespective of their rotation eventually deplete their disks, until the age where almost all stars are diskless (\(\sim\)3 Myr; see, e.g., Ribas et al., 2014). Among CTTSs, there is an opposite trend: there are almost no rapid rotators with disks at an age \(<\)1 Myr, but their fraction increases moderately to \(\sim\)10% at ages \(>\) 3 Myr. We return to discuss the implications of this trend in Section 4.2.
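The statistic behind Figure 4 can be reproduced with a straightforward binned fraction. The Python sketch below assumes arrays of periods and log-ages for one subsample (CTTSs or WTTSs); the 2-day threshold follows the text, while the array names and bin edges are placeholders rather than the paper's actual pipeline.

```python
import numpy as np

def fast_rotator_fraction(periods_d, log_age_yr, bin_edges, p_fast=2.0):
    """Fraction of stars with P < p_fast (days) in bins of log10(age/yr)."""
    fractions = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (log_age_yr >= lo) & (log_age_yr < hi)
        fractions.append(np.mean(periods_d[in_bin] < p_fast) if in_bin.any() else np.nan)
    return np.array(fractions)

# e.g. half-dex age bins spanning ~0.3-10 Myr
edges = np.arange(5.5, 7.01, 0.5)
```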
At ages \(<\)1 Myr, rapid rotators tend to have rotational periods between 1 and 2 days. As they age, however, the fraction of systems with periods \(<\) 1 day increases relative to the total number of stars. This process appears to continue at older ages, as in populations older than \(\sim\)20-30 Myr, most rapid rotators appear to favor periods \(<\)0.5 days (Paper I). Most likely, without an effective method of shedding a sufficient fraction of their angular momentum, the stars continue to spin up as they shrink towards their main sequence radius, until the magnetic braking can overcome this spin-up, or until they stop contracting (e.g., Rebull et al., 2022).
### Slow rotators
In the analysis of slow rotators, we limit the sample only to the sources with rotational periods \(>\)2 days. We examine the trends in this sample as a function of age (Figure 5).
There is no appreciable difference in the distribution of periods relative to the ages of the stars. However, over the first 10 Myr, the radii of the stars rapidly contract. Because of this, there is a significant evolution in the angular momentum. We estimate \(L\) assuming solid-body rotation via \(L=(2/5)MR^{2}\Omega\), where \(\Omega=2\pi/P\). \(L\) is not conserved during this contraction; as such, the stars are losing angular momentum, most likely due to magnetic braking, despite the lack of significant evolution in rotational periods.
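For concreteness, the solid-body estimate can be evaluated directly from mass, radius, and period. The snippet below is a minimal sketch in cgs units; the example star (0.5 M\({}_{\odot}\), 2 R\({}_{\odot}\), \(P=5\) d) is hypothetical and chosen only to illustrate the order of magnitude.

```python
import numpy as np

MSUN_G, RSUN_CM, DAY_S = 1.989e33, 6.957e10, 86400.0

def angular_momentum_cgs(mass_msun, radius_rsun, period_days):
    """Solid-body angular momentum L = (2/5) M R^2 Omega, with Omega = 2 pi / P."""
    omega = 2.0 * np.pi / (period_days * DAY_S)
    return 0.4 * mass_msun * MSUN_G * (radius_rsun * RSUN_CM) ** 2 * omega

# Hypothetical 0.5 Msun, 2 Rsun star with P = 5 d
print(np.log10(angular_momentum_cgs(0.5, 2.0, 5.0)))  # log10(L / g cm^2 s^-1) ~ 50
```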
One complexity that can skew this approximation of \(L\) is the differential rotation in stars, with rotation at the poles being slower than near the equator. At the moment we cannot determine the precise latitude of the spots. However, pre-main sequence stars moving on Hayashi tracks are expected to have significantly smaller differential rotation than, e.g., the Sun, with the shear of only 3.9%, as it decreases with increasing convective zone depth (Kuker and Stix, 2001; Marsden et al., 2011). As such, a solid body approximation is reasonable for the young stars in this sample.
The resulting distribution of \(L\) appears to be mostly consistent between slowly rotating CTTSs and WTTSs, suggesting that the overall angular momentum content of the stars evolves largely irrespective of their disks. And, indeed, this spin down will continue throughout the lifetime of low mass stars, long past their shedding of the disks or entering the main sequence (Paper I).
We bin the data using a kernel of \(\pm\)0.1 dex in age, \(\pm\)200 K in \(T_{\rm eff}\), and find the running median in \(L\) in each bin. Similarly to Paper I, using the resulting average \(L\) in the 2D plane, we perform an empirical fit using a
formalism of
\[\log L=a_{0}+a_{1}\log T_{\rm eff}+a_{2}(\log T_{\rm eff})^{2}+a_{3}(\log T_{\rm eff})^{3}+b_{0}\log t+b_{1}\log t\log T_{\rm eff}+b_{2}\log t(\log T_{\rm eff})^{2} \tag{1}\]
where \(\log T_{\rm eff}\) is the \(\log_{10}\) of \(T_{\rm eff}\) in K, and \(\log t\) is the \(\log_{10}\) of age in years. The best fit is shown in Figure 6, and the coefficients are included in Table 1. Independently, we also fit specific angular momentum per unit mass (\(H\)). We note that this functional form is arbitrary, and a comparable fit can be achieved with a different power dependence on both \(T_{\rm eff}\) and \(\log t\). In Paper I this form was chosen as the lowest power polynomial that could fit the age range from 10 Myr to 1 Gyr and be easily reversible with respect to age. In the Orion sample extending only up to 10 Myr, a maximum power of 2 instead of 3 can also result in a good fit, but for the sake of consistency we retain the same formalism.
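Equation (1) with the coefficients of Table 1 can be evaluated directly, and its partial time derivative gives the spin-down rate used later in Figure 7. A minimal Python sketch follows; the units of \(L\) are those of the fitted data and are not restated here, so this should be read as an illustration of the bookkeeping, valid only within the fitted range of \(T_{\rm eff}\) and age (\(\lesssim\)10 Myr), rather than a calibrated tool.

```python
import numpy as np

# Coefficients from Table 1 (column for the total angular momentum L)
A = [-1926.1822, 1352.6072, -299.8859, 21.2112]
B = [139.5928, -76.1140, 10.3335]

def log_L(teff_K, age_yr):
    """Evaluate Eq. (1): log10 L as a function of Teff (K) and age (yr)."""
    x, t = np.log10(teff_K), np.log10(age_yr)
    return (A[0] + A[1] * x + A[2] * x**2 + A[3] * x**3
            + (B[0] + B[1] * x + B[2] * x**2) * t)

def dL_dt(teff_K, age_yr):
    """Spin-down rate dL/dt (per year) from the partial time derivative of Eq. (1)."""
    x = np.log10(teff_K)
    dlogL_dlogt = B[0] + B[1] * x + B[2] * x**2  # negative: angular momentum loss
    return 10 ** log_L(teff_K, age_yr) * dlogL_dlogt / age_yr
```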
By taking a partial derivative with respect to time of the resulting function, we observe the evolution in the rate of spin down (Figure 7). The loss of angular momentum is most effective at the youngest ages, with d\(L\)/d\(t\) decreasing by more than an order of magnitude over the first 10 Myr. The primary source of
Figure 4: Fraction of rapid rotators among CTTSs and WTTSs with \(T_{\rm eff}\)\(<\)6700 K as a function of age. Top panel shows all rapid rotators with periods \(<\) 2 days, bottom panel is restricted to the subset with periods \(<\) 1 day.
Figure 3: Period distribution between CTTS and WTTS objects.
Figure 2: Left: Distribution of the adopted ages for the stars in Orion Complex. Right: Typical uncertainties in the sample.
angular momentum loss is likely to be magnetic braking: young stars may have other factors that would affect their rotation (including accretion and outflows), but the combined net \(\mathrm{d}L/\mathrm{d}t\) from these processes appears to be significantly smaller than the one we observe (e.g., Gallet et al., 2019).
The spin down is more effective among the higher mass stars; this appears to be the case for both \(\mathrm{d}L/\mathrm{d}t\) and \(\mathrm{d}H/\mathrm{d}t\) (see also Section 4.1 for discussion). The rate of loss of specific angular momentum appears to be somewhat proportional to \(H\) itself; i.e., at a given age, a 5000 K star appears to have (on average) a specific angular momentum \(\sim\)3.6 times higher than a 3500 K star, and, similarly, the ratio of \(\mathrm{d}H/\mathrm{d}t\) for the same stars is also \(\sim\)3.6.
While the underlying trend of decreasing \(L\) and \(H\) at older ages is significant, at all ages there is significant scatter in their distribution, to a degree that is unlikely to improve with more precise measurements of mass, radius, or age. As such, while after a few hundred Myr a population of a given age would be able to form a relatively narrow gyrochrone (e.g., Douglas et al., 2019), early on there is a wide range of the initial conditions pertaining to the angular momentum in the system that are yet to be erased.
Nonetheless, it is striking that the angular momenta of the stars evidently follow much simpler, more monotonic relationships with stellar mass and age than rotation
\begin{table}
\begin{tabular}{c c c}
\hline \hline
Coefficient & Value (\(L\)) & Value (\(H\)) \\
\hline
\(a_{0}\) & \(-1926.1822\) & \(1554.6763\) \\
\(a_{1}\) & \(1352.6072\) & \(-1590.9434\) \\
\(a_{2}\) & \(-299.8859\) & \(524.9803\) \\
\(a_{3}\) & \(21.2112\) & \(-56.0318\) \\
\(b_{0}\) & \(139.5928\) & \(186.4053\) \\
\(b_{1}\) & \(-76.1140\) & \(-102.8078\) \\
\(b_{2}\) & \(10.3335\) & \(14.1278\) \\
\hline
\end{tabular}
\end{table}
Table 1: Fitted coefficients for gyrochronology relations
Figure 5: Dependence of radius, rotational period, angular momentum, and specific angular momentum on temperature and age of stars in Orion with \(P>2\) days. The typical uncertainties in the data are 30 K in \(T_{\mathrm{eff}}\), 0.1 R\({}_{\odot}\) in radius, 0.01 day in period, 0.05 dex in L, and 0.04 dex in H. In young disk-bearing stars typical uncertainties in radius are larger due to the infrared excess affecting the quality of the SED fitting.
Figure 6: A fit of angular momentum L. Yellow dots correspond to the data within each age bin. The red line shows the running median within the data. Thick blue line shows the best fit for L for a given age. Thin blue lines are offset in age by \(\pm\)0.3 dex, and dotted lines are offset in age by \(\pm\)0.6 dex. The typical scatter between the data and the resulting fit of L is shown in each panel. Open circles represent CTTSs, filed circles represent WTTSs.
periods alone (i.e., compare various panels in Figure 5 to the upper right panel in Figure 5).
### Inclination
The stellar radii measurements derived from the SED fitting are available from Kounkel et al. (2018). An alternative way to estimate the radius is through a combination of the rotational period and rotational velocity; however, as the latter is only available as \(v\sin i\), the resulting radius estimate \(\mathrm{R}\sin i\) would also have an inclination dependence. As such, comparing these two estimates enables a constraint on \(\sin i\) (e.g., Jeffries, 2007; Jackson & Jeffries, 2010).
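The \(\sin i\) proxy used throughout this section follows directly from these quantities: \(\mathrm{R}\sin i = P\,v\sin i/2\pi\), divided by the SED-based radius. A minimal sketch, with hypothetical input values, is shown below.

```python
import numpy as np

RSUN_KM, DAY_S = 6.957e5, 86400.0

def sin_i_proxy(period_days, vsini_kms, radius_sed_rsun):
    """R*sin(i) from the period and vsini, divided by the SED radius.

    Values > 1 flag the systematics discussed in the text (binarity, spots,
    differential rotation) rather than a physical inclination.
    """
    r_sini_rsun = period_days * DAY_S * vsini_kms / (2.0 * np.pi * RSUN_KM)
    return r_sini_rsun / radius_sed_rsun

# Hypothetical star: P = 5 d, vsini = 15 km/s, R_SED = 2.0 Rsun
print(sin_i_proxy(5.0, 15.0, 2.0))  # ~0.74
```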
Mathematically, the upper limit of \(\sin i\) is 1. In practice, the ratio \(\mathrm{R}\sin i/R\) does occasionally exceed 1 by more than the formal errors suggest, pointing to a systematic calibration issue in some of the measurements. Here we analyze some of the considerations that can lead to systematic issues that a) result in unreasonably large \(\sin i\) and b) create a distribution of orientations of stars that, due to observational biases, does not appear to be entirely random.
#### 3.3.1 \(T_{\mathrm{eff}}\)
Young stars tend to be strongly magnetically active, with a very high spot coverage fraction. Because of this, it may be difficult to describe these stars with a single temperature. Depending on the wavelength range that is used to fit \(T_{\rm eff}\), it is possible to have a systematic difference of a few hundred K (e.g., Lodieu et al., 2018), as some regions of the spectrum are more sensitive to the temperature of the photosphere (\(T_{\rm phot}\)) or the spot (\(T_{\rm spot}\)), as opposed to the true \(T_{\rm eff}\). The estimate of radius from the SED fitting is directly dependent on \(T_{\rm eff}\), obtained via \(R\propto\sqrt{F/T_{\rm eff}^{4}}\), where \(F\) is the total flux.
We note that the radii reported in Kounkel et al. (2018) relied on temperatures that were systematically hotter than those adopted here. Without the renormalization with the updated \(T_{\rm eff}\), this systematic bias propagated from \(T_{\rm eff}\) to radii would further exaggerate the number of sources with \(\sin i\)\(>\)1. As such, this demonstrates the improvement in temperature calibration in APOGEE Net compared to earlier estimates, significantly reducing the bias in \(\sin i\) from \(T_{\rm eff}\). However, without performing an explicit two-temperature fit, the spectroscopically measured temperature will be lower than the temperature of the unspotted photosphere, but higher than the actual \(T_{\rm eff}\). Due to the resulting luminosity differences of the templates, \(R\) can become underestimated during the SED fitting.
For example, a star with \(T_{\mathrm{phot}}=4000\) K and \(T_{\mathrm{spot}}=\)3500 K with 50% spot coverage would have \(T_{\mathrm{eff}}\) of 3775 K. However, in the H band, the resulting spectrum would be best matched by a 3900 K template, resulting in \(R\) difference of 10%.
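The quoted \(T_{\rm eff}\) of 3775 K follows from a flux-weighted (Stefan-Boltzmann) average of the photosphere and spot temperatures; the short check below reproduces that number. This is only a verification of the two-temperature arithmetic in the example, not the SED-fitting procedure itself.

```python
def spotted_teff(t_phot, t_spot, spot_fraction):
    """Effective temperature of a two-temperature (spotted) photosphere.

    Teff^4 = (1 - f) * Tphot^4 + f * Tspot^4, with f the spot covering fraction.
    """
    return ((1.0 - spot_fraction) * t_phot**4 + spot_fraction * t_spot**4) ** 0.25

print(spotted_teff(4000.0, 3500.0, 0.5))  # ~3775 K, as in the example above
```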
#### 3.3.2 Differential rotation
As previously mentioned, differential rotation in stars with large convective zones is expected to be relatively small (Kuker & Stix, 2001; Marsden et al., 2011), affecting young stars only on a few % level. However, if a spot is found at higher latitudes, the rotation period at the equator would be somewhat faster than the one that is observed. This will result in overestimating \(\mathrm{R}\sin i\), which will also overestimate \(\sin i\).
#### 3.3.3 Binaries
The excess in the \(\sin i\)\(>\) 1 distribution is most pronounced for sources that have previously been identified as SB2, reaching values up to 10. In such sources, \(v\sin i\) tends to be overestimated, as the line broadening in the spectra is affected by the presence of both stars in the system,
Figure 7: Rate of loss of angular momentum \(L\) (left) and specific angular momentum \(H\) (right), as a function of temperature and age, assuming the empirical relation in Figure 6 and Table 1.
which the spectroscopic fit of \(v\sin i\) did not take into account. Similarly, there may be additional issues, such as a mismatch between the component of the binary producing \(v\sin i\) and the one producing the rotational period, as well as the excess flux from the companion in the SED fitting.
Unresolved binaries that are widely separated enough for their orbital motion to be negligible in \(v\sin i\) will also have issues in their \(R\) estimate from the SED fitting. As \(F\) will be contaminated by the second star in the system, \(R\) will be artificially inflated, representative of neither the radius of the primary nor the secondary. Since \(R\) is too large, but \(\mathrm{R}\sin i\) in these wide binaries would be more similar to that of systems with just a single star, the resulting \(\sin i\) will be underestimated. When we compare the distribution of slow and fast rotators, fast rotators appear to favor significantly smaller \(\sin i\) than the slow rotators.
Paper I found that fast rotators generally occupy the binary sequence in a given population. The difference in the \(\sin i\) distribution between slow and rapid rotators is consistent with originating from unresolved binaries. As such, to provide a more equitable distribution of \(\sin i\) that is dependent on other stellar properties, in evaluating other biases we focus solely on slow rotators.
#### 3.3.4 Detectability of periods and \(v\sin i\)
It is important to consider the biases of the underlying catalog that influence the resulting distribution. In order to be able to measure a rotation period, a star (of any age) needs to be spotted, and these spots need to rotate in and out of view. The strongest signal is going to be produced by the edge-on systems. In the face-on systems, rotation cannot be observed. In the intermediate orientations, the systems with intrinsically larger spot sizes would be favored, since otherwise the variability induced by a weaker spot that does not fully rotate out of view may be too shallow to be detected. We find that sources with a large amplitude of variability appear to favor edge-on inclinations, in comparison to other sources (Figure 8). If initially sources with strong and weak variability have spots of the same size (given that their ages and masses are comparable), weakly variable stars will have more intermediate inclinations (resulting in a portion of the spot never rotating out of view or, alternatively, never rotating into view), and strongly variable stars will be edge-on.
Similarly \(v\sin i\) would also be biased against face-on systems, due to the necessary resolution in the spectra to confidently measure very weak broadening. As such,
Figure 8: Distribution of \(\mathrm{R}\sin i\)/\(\mathrm{R}\) ratios, as a proxy of \(\sin i\). Top: a comparison between slow and fast rotators. Middle: a comparison of the sources with strong (\(>\)5% of the total flux) and weak (\(<1\%\)) variability (limited just to the slow rotators with P\(>2\) days), as well as double lined spectroscopic binaries from Kounkel et al. (2019). Bottom: a comparison of CTTSs and WTTSs, also limited just to slow rotators.
the distribution of inclination angles is not expected to be random, but rather to favor \(\sin i\)\(\longrightarrow\)1.
This does have implications for the stars in older populations in which magnetic activity is weaker, leading to smaller spots. If a star is detected as variable at more advanced ages, it is more likely to be edge-on, as inclined systems might have an amplitude of variability smaller than the instrumental sensitivity.
#### 3.3.5 Extinction
Young stars have additional biases in their detection. For example, \(\sin i\) of CTTSs tends to be slightly smaller than \(\sin i\) of WTTS stars. Applying a two-sided KS test shows that the two populations are different at the 3\(\sigma\) level. This difference is driven solely by the deficit of the edge-on CTTSs; restricting both samples to \(\sin i\)\(<\)0.9, these inclined CTTSs and WTTSs are consistent with originating from the same distribution at the \(<\)1\(\sigma\) level.
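The comparison described here is a standard two-sample Kolmogorov-Smirnov test; a minimal sketch is shown below. The arrays and the 0.9 cut are placeholders mirroring the text, not the authors' exact selection.

```python
import numpy as np
from scipy.stats import ks_2samp

def compare_sini(sini_ctts, sini_wtts, cut=None):
    """Two-sided KS comparison of the sin(i) proxies of CTTSs vs. WTTSs.

    If cut is given (e.g. 0.9), both samples are restricted to sin(i) < cut
    to test whether the difference is driven by the edge-on deficit alone.
    """
    a, b = np.asarray(sini_ctts), np.asarray(sini_wtts)
    if cut is not None:
        a, b = a[a < cut], b[b < cut]
    return ks_2samp(a, b)  # returns the KS statistic and p-value
```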
CTTSs still have dusty disks, and an edge-on disk orientation would produce a highly reddened source, most likely rendering it too faint to be observed in a magnitude-limited survey like APOGEE. As such, while WTTSs would favor an edge-on orientation, CTTSs in the sample would tend to be sources where the photosphere is not fully obscured by the disk, but still as close to \(\sin i\)\(\longrightarrow\)1 as possible within this constraint. We further discuss the implications of this in Section 4.4.
## 4 Discussion
### Empirical constraints on models of angular momentum loss
The fact that young, low-mass stars must somehow deplete their angular momentum by at least an order of magnitude prior to the main sequence has been understood as a challenge since the earliest measurements of their rotational characteristics (e.g., Cohen & Kuhi, 1979; Herbst et al., 1982; Bouvier et al., 1986; Basri, 1987). Whereas the rotational evolution of low-mass stars on the main sequence has long been well understood as the result of magnetized stellar winds acting on Gyr timescales (e.g., Skumanich, 1972; Kawaler, 1988; Barnes et al., 2001), it has also been recognized that standard stellar winds cannot achieve the necessary angular momentum losses for PMS stars on the short timescale of \(\sim\)10\({}^{7}\) yr. Consequently, a number of mechanisms have been proposed that are unique to the PMS phase, most of which involve harnessing the inertia of circumstellar disks and/or the power of accretion from those disks during the disk lifetime of the first few Myr.
For example, early attempts to model the exchange of angular momentum from star to disk in CTTSs were based on the model of Ghosh & Lamb (1979), in which the star transfers angular momentum to the inner part of the disk where the stellar magnetosphere intersects (and truncates) it. Shu et al. (1994) developed a modified form of the Ghosh & Lamb (1979) model--the so-called "X-wind" model--that invokes the pinching of field lines in the immediate vicinity of the co-rotation radius in the disk to regulate the star's rotation, keeping the star's rotation period constant even as the star contracts (i.e., "disk-locking"), and the excess stellar angular momentum ejected in a disk wind outflow (e.g., Koenigl, 1991).
Alternatives to the X-wind model include those that still utilize the star-disk interaction but remove the angular momentum from the star more directly. For example, the magnetospheric ejection model (e.g., Zanni & Ferreira, 2013) is based on MHD simulations of the sporadic opening and reconnection of magnetospheric field lines, which eruptively launch the magnetically confined mass; this is conceptually similar to stellar coronal mass ejections but with the long lever arm of the star-disk interaction region. Another example is the so-called accretion powered stellar wind model (e.g., Matt et al., 2012), in which disk accretion itself acts to open stellar field lines and launch a powerful stellar wind, thereby extracting angular momentum directly from the stellar surface.
All of these mechanisms make predictions for the torque that can be exerted on the star. With our empirically determined angular momentum loss rates for TTSs, anchored in directly measured rotation rates, distances, and empirically determined radii for a very large number of stars with ages 10\({}^{6}\)-10\({}^{7}\) yr (i.e., Figure 7), we are in a position to quantitatively confront the model predictions beyond previous comparisons to stellar rotation period distributions (e.g., Stassun et al., 1999; Rebull et al., 2004; Irwin & Bouvier, 2009).
The recent work of Gallet et al. (2019) provides an especially useful basis for comparison of the model predictions to our measurements, as those authors constructed a family of models that attempt to self-consistently incorporate the action of all of the disk/accretion-based mechanisms described above (cf., their Figure 6). The models suggest that, for accretion rates that are not too high (i.e., \(\dot{M}\lesssim 10^{-8}\) M\({}_{\odot}\)/yr; otherwise, the net torque on the star is positive), spin-down torques as strong as 10\({}^{36}\) erg are possible during the first \(\sim\)1 Myr, declining below 10\({}^{35}\) erg after 1-2 Myr. By comparison, our measurements (Figure 7) imply that the stars in fact experience torques as strong as 10\({}^{37}\) erg during the first \(\sim\)1 Myr, declining to \(\sim\)10\({}^{36}\) erg after \(\sim\)2 Myr, and finally below \(\sim\)10\({}^{35}\) erg at \(\sim\)10 Myr. Thus, it appears that the disk/accretion-based mechanisms may not supply the full torque budget that we have measured, at least not on their own; an additional source of torque may be required that does not rely on disk inertia or accretion power.
One possibility is extreme coronal mass ejections (CMEs). Aarnio et al. (2012) quantitatively explored this possibility by translating the empirical solar relationship between X-ray energy and CME ejected mass rates (see Aarnio et al., 2011) to the observation by the Chandra Orion Ultradeep Project of extremely large flaring loops on TTSs in Orion (Favata et al., 2005). Aarnio et al. (2012) found that the stars can eject very massive CMEs with a high frequency during the first \(\sim\)10 Myr, and that the high observed CME losses are independent of whether the stars have retained their disks or are actively accreting (see also Aarnio et al., 2013). These studies estimated the torque from extreme CMEs as ranging from \(10^{35}\) erg to \(10^{37}\) erg (cf., Figure 4 in Aarnio et al., 2012), comparable to the angular momentum losses we have determined empirically in this work (Figure 7).
### Nature of rapid rotators
Previous observations have suggested that rapid rotators appear to be almost ubiquitously associated with unresolved binaries (Stauffer et al., 2016, 2018; Simonian et al., 2019; Gillen et al., 2020; Bouma et al., 2021; Kounkel et al., 2022). In populations with a clearly defined cluster sequence on the HR diagram, they tend to occupy the binary sequence. Given the resolution of the data used to make this assessment, here "unresolved" binaries include all of the separations up to several tens of au.
In younger regions (\(<\)10 Myr) such as Orion, an age spread of just a few Myr is apparent in the HR diagram due to the rapid stellar evolution at these ages. As such, the binary sequence can be difficult to observe directly due to confusion with younger stars. However, as has been discussed in Section 3.3.3, the \(\sin i\) distribution of slow and fast rotators in Figure 8 is consistent with the assumption that rapid rotators in Orion are also associated with wide but still unresolved multiples, similarly to what has been previously found in older clusters.
We note that rapid rotators appear less likely to occur in shorter period binaries with separations \(<\) a few au. We examine their occurrence rate among SB1s (which favor systems with periods \(<\)1000 days; Kounkel et al., 2019). There are 4 times fewer rapid rotators among SB1s than in the full sample (6% vs 27%). We discuss this in the following subsection. Thus, the range of separations of binaries hosting rapid rotators is expected to be a few to a few tens of au.
Given this range of separations, it may be possible to resolve these systems with high resolution imaging. A few such studies have been conducted in the Orion Nebula (Duchene et al., 2018; De Furio et al., 2019, 2022), reaching separations down to 10 au, but, due to the limited sample sizes and magnitude limits so far, there are no sources in common with those we identify as rapid rotators. In the future, high resolution imaging of the sources identified as rapid rotators may be more fruitful in finding resolved binaries in comparison to a blind search.
However, more in-depth studies have been conducted in other star forming regions, such as the Taurus molecular clouds. Due to its proximity, resolved binaries have been detected down to separations of 3 au (Kraus et al., 2011). Cross-matching this census with the rotational periods from Paper I yields a sample of 18 stars, of which 5 have been resolved as binaries. Four of these binaries are rapid rotators, with projected separations ranging from 4 to 20 au, consistent with our interpretation (Figure 9). The sole non-rapidly rotating resolved binary has a separation of 150 au, which suggests that systems with separations larger than a few tens of au are less likely to result in spin-up.
### Disk evolution in rapid rotators
We observed in Figure 3 a clear connection between stellar rotation period and the presence of a disk, in the sense that the rapid rotators are overwhelmingly found among the WTTSs (i.e., diskless stars). This association of rapid rotation with the lack of a disk may be a direct consequence of the association between rapid rotation and binarity (see Section 4.2), and the fact that binaries with separations of a few au to tens of au tend to disperse their disks on faster timescales than single stars (e.g., Kraus et al., 2012).
While it is well understood that very close, tidally locked binaries (with orbital periods of only a few days) will naturally be forced to rotate rapidly (e.g., Melo et al., 2001), it is not immediately intuitive why binary systems with separations up to several dozen au should imbue their stars with a greater amount of angular momentum. One possibility suggested by recent simulations of binary star formation (Kuruwita et al., 2017) is that binaries are less effective at removing angular momentum from the system: for example, in those simulations, a binary with a separation of 45 au has an efficiency of only 42% in transporting angular momentum away from the system via outflows, relative to a single star. In that case, our observation that the fastest rotators tend to be in binaries that are wider than the reach of tidal
interaction would be a vestige of the binary formation process.
Kuruwita et al. (2017) also find that close binary systems with separations of 2.5 au have an efficiency of 87% in transporting angular momentum away from the system in comparison to the single stars. That is to say, compact binaries are significantly less likely to have angular momentum excess in comparison to the wider systems with separations of a few tens of au. This is consistent with our observations of close binaries being less likely to develop rapid rotation. Most likely, this is due to the compact systems resembling single stars in their interaction with the protoplanetary disk, with the outflows being magnetocentrifugally driven, compared to the outflows being driven by magnetic pressure gradient in wider binaries (Kuruwita et al., 2017).
It is interesting in this context that we have observed a small fraction of rapid rotators among CTTSs, and that this prevalence of rapid rotators among the disked stars appears to increase with time (see Figure 4). This has also been observed by Serna et al. (2021). We may again interpret these rapid rotators as likely being binaries as well, but whose orbital configurations permitted the disk to survive for some time (thus remaining as CTTSs for this duration). In that case, the increasing prevalence of rapid rotators among the CTTSs with time may suggest that the increased angular momentum deposited into the inner system by the binary formation process (see above) was partially stored in the disk, and then gradually accreted onto the stars over the disk lifetime, thus allowing the stars to later emerge as rapid rotators just like their diskless binary cousins.
### Age-dependent inclination bias
As can be seen in Figure 8, the average difference in \(\sin i\) between CTTSs and WTTSs is \(\sim\)10-15%. If we
Figure 10: Kernel density estimate distribution of R\(\sin i/R\) ratios of CTTSs and WTTSs as a function of age. Note that the last two age bins are different between the two panels, due to a decreased number of CTTSs at older ages. The distribution is often bimodal, the sources that appear to be inclined are likely to be unresolved binaries.
Figure 9: Top: Rotational period as a function of color for the sample from Kraus et al. (2011) in Taurus, with the periods from Paper I. Blue dots highlight binaries that have been resolved through high resolution imaging, red diamonds — the sources in which no resolved companion has been identified to date. Black line shows the period of 2 days, delineating slow and rapid rotators (see also Figure 4 in Stauffer et al., 2018). Bottom: rotational period as a function of projected separation in resolved binaries
do assume that we see WTTSs across all inclinations between 0 and 90\({}^{\circ}\), then the maximum typical inclination of CTTSs in the sample would range between 0\({}^{\circ}\) and 55-65\({}^{\circ}\), with higher inclinations being too extincted. This translates to the dust being found at a disk scale height of \(\sim\)0.5-0.6. This is in excess of what is typically found in protoplanetary disks (a scale height of \(\sim\)0.2 has been reported in some systems, but this is also considered to be quite high; Natta et al., 2001; Olofsson et al., 2013; Montesinos et al., 2021). Some dust can be present at \(>3\) times the scale height, and some extinction can be expected even from the modestly inclined disks (D'Alessio et al., 2006).
However, the difference in the observed inclination may not necessarily be tied to the presence of the protoplanetary disk itself, but rather to the relative difference in age, owing to CTTSs being on average younger than WTTSs. If we compare the \(\sin i\) distribution for CTTSs and WTTSs separately across different age bins (Figure 10), then the younger sources have a distribution of \(\sin i\) that favors smaller values than those that are more evolved. This appears to hold true regardless of whether or not the star has a significant IR excess from a disk or active accretion. CTTSs still favor a slightly smaller \(\sin i\) than WTTSs in a given age bin, but the difference is less drastic than in the overall distribution.
As such, the source of opacity that reddens these stars likely comes from the outer debris disk/envelope, which would not necessarily show a significant IR excess in all but the longest of wavelengths and may have been missed by the selection of disk-bearing stars. It is likely to settle or dissipate in a few Myr, but a puffy debris disk appears to be relatively common at ages \(<\)2 Myr, even if a star has already lost its inner disk. Regardless of the dominant source of opacity, be it the disk or the envelope, it is the dust along the line of sight that results in a deficit of young edge-on systems in a magnitude-limited sample. The younger a star is, the less likely the dust is to have settled. No other age-dependent properties are likely to be responsible for this bias.
The difference in the distribution of inclinations between the stars of different ages may be an important challenge in being able to determine stellar ages. If the spots are not distributed randomly on the photosphere but, e.g., are preferentially located near the pole instead of the equator, as has been demonstrated in several young systems (Donati et al., 2013, 2014, 2015) or in more evolved stars with strong magnetic fields (e.g., Roettenbacher et al., 2016), then this may impose a bias on the stellar age. The systems where the spots are prominently visible would appear to be less luminous than their unspotted counterparts. Coupled with the biases on the inclination, as well as the relative spot size and the spot contrast at different \(T_{\rm eff}\), this may be responsible for the trends in age as a function of mass in a given population in Figure A1.
When separated into different age bins, \(\sin i\) distribution in Figure 10 often appears as bimodal. It is possible that the more inclined peak is dominated by the unresolved binaries. Although the rapid rotators are excluded from the sample presented in that figure, other binaries are still expected to be present in the sample, and, just like rapid rotators, they would have systematically underestimated \(\sin i\).
## 5 Conclusions
Using TESS full-frame imaging data, APOGEE spectroscopic data, broadband spectral energy distributions, and Gaia parallaxes, we assemble the largest collection to date of stars younger than 10 Myr with rotation periods from Paper I, spectroscopic \(T_{\rm eff}\) and \(v\sin i\), disk classifications, empirically determined radii from Kounkel et al. (2020), and precise individual age estimates from McBride et al. (2021). With the resulting sample of \(\sim\)9000 stars in the Orion Complex, we examine the angular momentum content of young stars and the evolution of angular momentum with stellar age.
Similarly to previous studies, we find that stars with rotation periods faster than 2 d are predominantly binaries, with typical separations of tens of au. Such binaries are known to rapidly clear their disks, and consequently we observe an association between rapid rotation and disklessness. It is not clear how such binaries, which are much wider than the reach of tidal effects, come to possess greater amounts of total angular momentum; however, it is consistent with recent simulations which find that outflows from binaries are less efficient at removing angular momentum from the inner system as compared to single stars.
Among the (nominally single) stars with rotation periods slower than 2 d, we find that the angular momentum loss is most effective at the youngest ages, with its efficiency slowly decreasing over time. Additionally, higher mass stars have higher total angular momentum (as well as specific angular momentum per unit mass) in comparison to their lower mass counterparts, but they experience more efficient angular momentum depletion, even at ages \(<\)10 Myr.
More generally, we observe the familiar, gyrochronological horseshoe-shaped relation between rotation period and \(T_{\rm eff}\), implying that the processes responsible for the universal evolution of stellar rotation on Gyr timescales are already in place within the first few Myr. We also find that our empirically quantified stellar angular momenta exhibit much simpler and monotonic relationships with stellar mass and age as compared to the relationships involving rotation period alone. We conclude that the relationship between rotation period and \(T_{\rm eff}\) is largely a manifestation of the mass dependence of stellar structure and not of angular momentum per se.
In addition, using the combination of rotational periods, rotational \(v\sin i\), as well as stellar radii measured from the SED fitting, we estimate the \(\sin i\) of the stars, revealing a variety of biases in their distribution. We find that most variable systems tend to prefer an edge-on orientation, as this is the most favorable configuration to observe star spots rotating in and out of view. We also find that in a magnitude-limited sample, the younger systems tend to have on average lower \(\sin i\) than the systems that are more evolved. This applies both to the stars with protoplanetary disks, as well as those that have already managed to disperse them. Most likely this difference in the distribution is driven by the extinction from the dust in the outer envelopes / debris disks, taking a few Myr to fully dissipate, even if the inner disk is fully gone. If there is a preferential distribution of spots along the photosphere, this difference in the inclination distribution as a function of age may introduce a bias in the age estimates of some stars.
Most importantly, our directly measured rotation periods, together with precise Gaia distances and our empirically determined stellar radii for a very large sample of stars with ages 1-10 Myr, permit us to construct empirical determinations of stellar angular momentum distributions and thus empirical determinations of angular momentum loss rates. Our measurements show quantitatively that the stars experience spin-down torques in the range \(\sim 10^{37}\) erg at \(\sim\)1 Myr to \(\sim 10^{35}\) erg at \(\sim\)10 Myr. Recent modeling efforts that utilize the inertia of circumstellar disks and/or the power of disk accretion to extract stellar angular momentum during the disk lifetime predict torques in the range \(\sim 10^{36}\) erg at \(\sim\)1 Myr to \(\lesssim 10^{35}\) erg by 2-3 Myr (see, e.g., Gallet et al., 2019). Other mechanisms that do not depend solely on disk inertia or accretion power may therefore be required. For example, empirical observations of the extreme coronal mass ejections (CMEs) associated with X-ray super-flares observed in stars with ages 1-10 Myr appear capable of exerting torques ranging from \(\sim 10^{35}\) erg to \(\sim 10^{37}\) erg (see, e.g., Aarnio et al., 2012, 2013). Our empirical measurements of the spin-down torques experienced by young stars should serve as a touchstone for these and other theoretical mechanisms of angular momentum loss in young stars.
## Appendix A Age Consistency
We estimate ages for stars through a variety of different means. Similarly to Serna et al. (2021), we have inferred the ages of these stars by using \(T_{\rm eff}\), as well as Gaia and 2MASS photometry, relative to various isochrones (Baraffe et al., 2015; Choi et al., 2016; Marigo et al., 2017; Somers et al., 2020) using the MassAge code (Hernandez J. in prep). We also use Sagitta (McBride et al., 2021), which is a neural net trained on the photometry of young stars and the typical ages of the star forming regions in which they are found. Finally, we also use spectroscopically derived \(\log g\) values, which strongly correlate with stellar age. All of these estimates are model-dependent. Although they all show a similar progression of evolution, ordering sources from oldest to youngest in a similar manner, the relative zero point calibration, particularly as a function of mass, is systematically different between all of the approaches.
Previously, Kounkel et al. (2018) deconvolved the Orion Complex into 190 kinematically coherent subgroups. Collectively, these groups can be used to examine the spatial and temporal structure of the region. Independently, however, they are relatively compact and stars within a given group are expected to be coeval; i.e., they have self-consistent age distribution regardless of their mass.
Assuming this coevality, we can test the performance of various models in determining stellar ages, regardless of the absolute zero point calibration between them. For a given group, we estimate a median age that is reported by a given model, considering only the stars with \(T_{\rm eff}\) between 3800 and 4500 K. Then, we estimate an age in a small bin within 150 K of a given temperature, for all of the bins containing at least 4 stars, within each individual group. The difference between the two estimates shows the age gradient imposed by the underlying model as a function of mass. Afterwards, all of the groups of a given age range are averaged together to characterize the systematic trends.
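A minimal Python sketch of this coevality test is given below. The array names, bin placement, and grouping details are assumptions for illustration; only the 3800-4500 K reference range, the 150 K bin half-width, and the 4-star minimum follow the text.

```python
import numpy as np

def age_gradient(teff, log_age, group_ids, half_width=150.0, min_stars=4):
    """Per-group age offsets vs. Teff, following the coevality test above.

    For each kinematic group, the median model age in narrow Teff bins is
    compared to the median age of its 3800-4500 K members; a non-zero offset
    reveals mass-dependent systematics of the underlying age model.
    """
    offsets = {}
    for g in np.unique(group_ids):
        members = group_ids == g
        ref = members & (teff > 3800) & (teff < 4500)
        if ref.sum() < min_stars:
            continue
        ref_age = np.median(log_age[ref])
        rows = []
        for center in np.arange(3000.0, 6500.0, half_width):
            in_bin = members & (np.abs(teff - center) < half_width)
            if in_bin.sum() >= min_stars:
                rows.append((center, np.median(log_age[in_bin]) - ref_age))
        offsets[g] = rows
    return offsets
```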
We find that for \(T_{\rm eff}\) of 3800-5000 K, all of the models appear to be relatively stable, producing a coeval distribution of stars. However, this does not hold for cooler stars. In the populations younger than \(<4\) Myr, isochrone models tend to overestimate the ages of stars by \(\sim\)0.1 dex, or \(\sim\)25%, with the excess being more extreme for the youngest groups. PARSEC isochrones (Marigo et al., 2017) appear to have the worst performance, producing an excess of up to \(\sim\)0.3 dex, overestimating the ages of cool stars by a factor of \(\sim\)2. For groups older than 4 Myr, the trend is reversed, and the ages can be underestimated by as much as \(\sim\)0.2 dex.
In all cases, strongly magnetic models with large spot coverage (Somers et al., 2020) tend to be offset from the same models without spots by \(\sim\)0.2 dex. But, because of the underlying trends in the data, in groups \(<4\) Myr, non-magnetic tracks show greater coevality, whereas in groups \(>\)4 Myr, strongly magnetic tracks appear to be coeval. We note that magnetic tracks infer significantly older ages for a given group as a whole, with the difference of 0.3 dex between tracks with spot sizes of 51% and those without spots (Table A1).
Across all ages, Sagitta (McBride et al., 2021) tends to have the greatest consistency in ages as a function of stellar mass, with only a modest overestimation in ages in groups \(<\)2 Myr (we discuss this trend in Section 4), but otherwise appears stable across other age ranges. In part, this is by construction, as the underlying model is empirical, trained on the average ages of stars in a given population across the solar neighborhood. The other models are theoretical, and may be lacking some of the physics for a precise representation of very cool stars.
For this reason, to ensure coevality between low mass and high mass stars in a given region, in subsequent sections we adopt ages from Sagitta. Its absolute calibration is most similar to the models from Baraffe et al. (2015), as well as to the magnetic tracks with \(\sim\)17% spot coverage. But, as the offset in age between the models is systematic, using a different set of models does not significantly influence the results presented in this work.
TOPCAT (Taylor, 2005), Plotly (Plotly Technologies Inc., 2015)
This work has made use of data from the European Space Agency (ESA) mission _Gaia_ ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement.
|
2307.07612
|
Avalanche Instability as Nonequilibrium Quantum Criticality
|
A fundamental instability in the nonequilibrium conduction band under a
electric field bias is proposed via the spontaneous emission of coherent
phonons. Analytic theory, supported by numerical calculations, establishes that
the quantum avalanche, an abrupt nonequilibrium occupation of excited bands,
results from the competition between the collapse of the band minimum via the
phonon emission and the dephasing of the electron with the environment. The
continuous avalanche transition is a quantum phase transition with the
nonequilibrium phase diagram determined by the avalanche parameter $\beta$,
with peculiar reentrant avalanche domes close to the phase boundary. We further
confirm the nature of the quantum avalanche with the temperature dependence.
|
Xi Chen, Jong E. Han
|
2023-07-14T20:16:37Z
|
http://arxiv.org/abs/2307.07612v1
|
# Avalanche Instability as Nonequilibrium Quantum Criticality
###### Abstract
A fundamental instability in the nonequilibrium conduction band under a electric field bias is proposed via the spontaneous emission of coherent phonons. Analytic theory, supported by numerical calculations, establishes that the quantum avalanche, an abrupt nonequilibrium occupation of excited bands, results from the competition between the collapse of the band minimum via the phonon emission and the dephasing of the electron with the environment. The continuous avalanche transition is a quantum phase transition with the nonequilibrium phase diagram determined by the avalanche parameter \(\beta\), with peculiar reentrant avalanche domes close to the phase boundary. We further confirm the nature of the quantum avalanche with the temperature dependence.
In the past half-century, materials under strong electromagnetic fields have been extensively studied. In particular, the resistive phase transition driven by a high electric field has generated strong research efforts [1; 2; 3; 4; 5]. However, despite the scientific and technological importance of the phenomena, conceptual advancement has been limited since it requires an understanding of many-body dynamics far from equilibrium [6].
The challenge partly comes from the lack of theoretical milestones. Despite mounting experimental reports [7; 8; 9; 10; 5], theories have not provided decisive new insights into outstanding issues. One such problem is resistive switching, in which the mechanism of the insulator-to-metal transition by a DC electric field has been debated, as to its electronic or thermal origin, for many decades without much consensus. Part of the problem is that theoretical efforts have often been too complex to systematically relate to well-established equilibrium counterparts. The goal of our analytic theory is to identify a mechanism of nonequilibrium quantum transition akin to critical phenomena and provide a conceptual and transparent framework that could initiate future discussion.
In the past decades, we have asked how electrons overcome the energy gap to induce dielectric breakdown, mainly within the framework of Landau-Zener tunneling [1; 7; 11]. Despite strong efforts, theories have failed to address the energy-scale discrepancy where the experimental switching fields are orders of magnitude smaller than theoretical predictions [12; 13; 14; 15]. In a recent work [16], an alternative answer was proposed. In materials, electrical resistivity is not infinite, and, with bias, there are many charge carriers present in the bulk limit, albeit at very dilute concentration. Once electrons are coupled to an inelastic medium, an instability develops under a uniform electric field, in principle at infinitesimally small strength, leading to an eventual resistive breakdown of the system at experimental scales [16]. Here, through an analytic study of a minimal nonequilibrium steady-state model, we show that the avalanche instability is a subset of the fundamental instability of a band under bias.
We introduce a model of a 1-dimensional electron gas coupled to optical phonons, with the electrons subject to a static and uniform electric field \(E\), described by the Hamiltonian
\[H(t)=\int\left[\psi^{\dagger}(x)\left(\frac{1}{2m}(-i\hbar\partial_{x}+eEt)^{2}+\Delta\right)\psi(x)+\frac{1}{2}\left(p_{\varphi}(x)^{2}+\omega_{0}^{2}\varphi(x)^{2}\right)+g_{\rm ep}\varphi(x)\psi^{\dagger}(x)\psi(x)\right]dx\]
with the (spinless) electron creation/annihilation operator \(\psi^{\dagger}(x)/\psi(x)\), the Einstein phonon field \(\varphi(x)\) of frequency \(\omega_{0}\) with its conjugate momentum \(p_{\varphi}(x)\), and the electron-phonon coupling constant \(g_{\rm ep}\). The conduction band is placed \(\Delta\) above the Fermi energy of the particle reservoir. A uniform DC electric field is included as a vector potential \(-eEt\hat{\bf x}\) in the temporal gauge [17; 18]. We use the unit system in which \(\hbar=e=k_{B}=1\), with the Boltzmann constant \(k_{B}\).
The system is coupled to the environment, with electrons and phonons connected to fermionic and bosonic baths [16; 19; 20], respectively. The importance of phonon baths has been highlighted recently [16; 21]. The fermion reservoirs account for the exchange of electrons with bands outside the model (such as the substrate) and provide dissipation. More importantly, this mechanism sets the electron lifetime via dephasing from external sources other than phonons. The hybridization to the fermion bath is given as \(\Gamma\), which we assume to be independent of energy and structureless with infinite bandwidth, for simplicity. The electrons can deposit their excess energy into phonons with a scattering rate controlled by the coupling constant \(g_{\rm ep}\), with the excited phonons eventually decaying into an Ohmic bath [22; 23].
This minimal model has been shown to induce a quantum avalanche [16] where a phase transition to a strong nonequilibrium occupation of the band occurs at a small electric-field scale. The mechanism for the quantum avalanche is as follows. As depicted in FIG. 1(a), spontaneous phonon emission does not occur in the \(E=0\) limit due to the absence of states below the band minimum. Even with the faint line-broadening into the gap due to \(\Gamma\), these states are quite insignificant. However, with a non-zero electric field [see FIG. 1(b)], the potential slope provides electronic levels at any energy. (Here,
we temporarily switch to the static gauge with potential \(V(x)=-eEx\) for the sake of argument.) While the off-edge states are due to the evanescent tail centered at a different position, as depicted by the orange envelope function in (b), they allow much enhanced spontaneous phonon emission compared to (a). This smearing of the band edge due to a uniform electric field is the Franz-Keldysh effect [24]. The replica state generated by a phonon emission reinforces the evanescent off-edge state, so that it can act as the reference state that generates the second replica state. The formation of the multiple replicas requires the phase coherence between the electron and the phonon throughout, which is limited by the electron dephasing time. This sets the threshold for the quantum avalanche. We note that the nature of the transition is a spontaneous electronic transition below the band, instead of the sequential dissipation of excess electronic energy into phonon quanta.
We emphasize that the following theoretical analysis is confirmed by fully numerical calculations. As published in several nonequilibrium dynamical mean-field theory works [16; 19; 25; 26], the dissipation mechanisms are rigorously implemented in self-consistent calculations. It is essential to include the dissipation on an equal footing as the system Hamiltonian to ensure numerical convergence. We detail the numerical procedures for the lattice model with the static gauge and with full dissipation in the Supplementary Materials (SM) for completeness. The tight-binding parameter \(t\) was set to \(ta^{2}=\hbar^{2}/(2m)=1\) with lattice constant \(a\) which is scaled to 1 in the calculation.
Before we present the analytic theory, we discuss numerical evidence for the quantum avalanche, in FIG. 1(c). The occupation number of the band \(n_{\rm ex}\) shows a continuous phase transition at the finite electric field at \(E_{\rm av}\). There are two key observations: (1) The avalanche field \(E_{\rm av}\) is almost linearly proportional to the coupling to the environment \(\Gamma\), and (2) the occupation number before the avalanche has no direct consequence on the strength of \(E_{\rm av}\). The fact that \(E_{\rm av}\propto\Gamma\) indicates that the avalanche arises from the formation of off-edge states established during the timescale set by the dephasing time \(\Gamma^{-1}\). That is, with a long-dephasing time (\(\Gamma\to 0\)), the multi-phonon replica becomes more robust with a smaller electric field. The second observation directly points to the quantum nature of the transition. It is highly counter-intuitive that the initial occupation, which reflects the thermal occupation of the conduction band via the line-broadening \(\Gamma\), goes against the avalanche transition. As will be argued shortly about the gap dependence of the avalanche, the avalanche only requires the existence of the particle source, but not the proximity of the particle reservoirs. We confirmed numerically that the avalanche occurs in square and cubic lattices.
Now, we identify the condition for the abrupt increase of electron occupation in the band. The electron occupation is directly obtained from the lesser Green's function (GF), \(n_{\rm ex}=-i\int G_{p}^{<}(t,t)\frac{dp}{2\pi}\), with the lesser GF defined as \(G_{p}^{<}(t_{2},t_{1})=i\langle c_{p}^{\dagger}(t_{1})c_{p}(t_{2})\rangle\), with the Fourier transformed fermion variable \(c_{p}=\int e^{-ipx}\psi(x)dx\). The enhancement of occupation results from the lesser self-energy \(\Sigma^{<}\), symbolically through \(G^{<}=G^{R}\Sigma^{<}G^{A}\). The details of the calculation are given in the SM. FIG. 2(a) and (b) represent the two lowest-order self-energies due to one-phonon and two-phonon emission, respectively, and we look for the condition under which these processes have comparable magnitude, such that an infinite summation of these 'rainbow' diagrams leads to an occupation avalanche.
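To make the bookkeeping of \(G^{<}=G^{R}\Sigma^{<}G^{A}\) concrete, the toy sketch below computes the steady-state occupation of a single broadened level coupled to a structureless fermion bath, for which \(\Sigma^{<}(\omega)=2i\Gamma f(\omega)\). This is only an illustration of the Keldysh relation, not the self-consistent lattice calculation described in the SM; all numerical parameters are hypothetical.

```python
import numpy as np

def level_occupation(eps, gamma, mu=0.0, temp=0.02, wmax=50.0, n_w=200001):
    """Occupation of one broadened level from n = Int dw/(2pi) (-i) G^<(w).

    With G^R(w) = 1/(w - eps + i*gamma) and Sigma^<(w) = 2i*gamma*f(w),
    the Keldysh relation gives (-i) G^<(w) = 2*gamma*f(w)/((w - eps)^2 + gamma^2).
    """
    w = np.linspace(-wmax, wmax, n_w)
    fermi = 0.5 * (1.0 - np.tanh((w - mu) / (2.0 * temp)))  # overflow-safe Fermi function
    spectral = 2.0 * gamma / ((w - eps) ** 2 + gamma ** 2)
    return np.trapz(fermi * spectral, w) / (2.0 * np.pi)

# A band minimum Delta = 0.5 above the Fermi level with Gamma = 0.01:
print(level_occupation(eps=0.5, gamma=0.01))  # small pre-avalanche occupation
```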
The lowest-order self-energy \(\Sigma_{p}^{(2),<}(t_{2},t_{1})\), FIG. 2(a), can be written [17] as
\[\Sigma_{p}^{(2),<}(t_{2},t_{1})=ig_{\rm ep}^{2}\int\frac{dq}{2\pi}D_{0}^{<}(t _{2},t_{1})G_{0q}^{<}(t_{2},t_{1}), \tag{2}\]
Figure 1: (a) Energy scheme of a conduction band above the Fermi level by \(\Delta\) at equilibrium. Spontaneous emission of phonon into an electronic level below the band edge is not allowed. (b) With the electric field \(E>0\), the potential slope provides energy levels tunneling below the band edge, enabling spontaneous transition into the forbidden region by emitting local phonons. As the electronic replica state, with its energy lowered by \(\omega_{0}\), is reinforced by the multiple-phonon processes, an abrupt quantum transition occurs in an avalanche. (c) Numerical results showing occupation number \(n_{\rm ex}\) of the conduction band as a function of \(E\). The avalanche field \(E_{\rm av}\) is an increasing function of the dephasing rate \(\Gamma\), suggesting that the dephasing competes with the avalanching mechanism. Counter-intuitively, smaller pre-avalanche occupations led to earlier avalanches, as shown in the inset.
Figure 2: (a) Lowest-order self-energy to electron by electron-phonon coupling. Electron (solid line) emits/absorbs a phonon (wiggly line) in the scattering. (b) The next-order self-energy showing two-phonon process. The integral is performed over the internal (red) Keldysh times \(s_{1}\) and \(s_{2}\). (c) Out of the 12 possible arrangements of \((s_{1},s_{2})\) on the Keldysh contour, the dominant contribution comes with \(s_{1,2}<t_{1,2}\) on each contour, as shown.
where \(D_{0}^{<}(t_{2},t_{1})\) is the standard Keldysh GF for phonon, and \(G_{0q}^{<}(t_{2},t_{1})\) for non-interacting electron with momentum \(q\). The electronic lesser GF is given as
\[G_{0p}^{<}(t_{2},t_{1})=in_{\rm ex}(p)e^{-\Gamma|t_{2}-t_{1}|}U_{p}(t_{2},t_{1}), \tag{3}\]
with an unspecified initial occupation \(n_{\rm ex}(p)\) and the time-evolution factor of a free electron
\[U_{p}(t_{2},t_{1}) = \exp\left[-i\int_{t_{1}}^{t_{2}}\left(\frac{(p+Es)^{2}}{2m}+ \Delta\right)ds\right]\] \[= \exp\left[-i\left(\frac{(p+ET)^{2}}{2m}+\Delta\right)t-\frac{iE^{ 2}t^{3}}{24m}\right],\]
with the average time \(T=\frac{1}{2}(t_{2}+t_{1})\) and the relative time \(t=t_{2}-t_{1}\). We note that physical observables are gauge-independent [27; 28; 29] and the mechanical momentum \(\bar{p}=p+ET\) appears in the manner as above. Assuming that the occupation of the conduction band is only at \(\bar{p}\approx 0\) up to the onset of the avalanche and that the bath temperature \(T\) is much smaller than the phonon energy \(\omega_{0}\), we obtain
\[\Sigma_{p}^{(2),<}(t_{2},t_{1})\approx\frac{in_{\rm ex}g_{\rm ep}^{2}}{2\omega _{0}}e^{-\Gamma|t|-i[(\Delta-\omega_{0})t+E^{2}t^{3}/24m]}. \tag{5}\]
Note that the bandedge \(\Delta\) is shifted down by \(\omega_{0}\) due to the phonon-emission.
The next-order self-energy, of which we only consider the nested diagram, can be quite formidable due to the 12 different arrangements of the internal Keldysh times \(s_{1}\) and \(s_{2}\); see FIG. 2(b). However, using the fact that \(|G^{<}|\ll|G^{>}|\) in the dilute limit, the only dominant time-ordering is the one shown in (b), \(-\infty<s_{1,2}<t_{1,2}\) on the backward/forward Keldysh time-contour, respectively. As detailed in the SM, \(\Sigma_{p}^{(4),<}(t)\) is approximated as
\[\Sigma_{p}^{(4),<}(t) \approx -\frac{n_{\rm ex}mg_{\rm ep}^{4}}{(2\omega_{0})^{2}}e^{-i[( \Delta-2\omega_{0})t+E^{2}t^{3}/24m]} \tag{6}\] \[\times\int\frac{dq}{2\pi}\int ds\frac{e^{i(q^{2}/2m+\omega_{0})s }}{qE(t+s)+2im\Gamma}.\]
In the above approximation, it is crucial that the dephasing rate \(\Gamma\) and the resulting electric field \(E\) are much smaller than other energy scales such as the phonon frequency \(\omega_{0}\) and the kinetic energy. We define the figure of merit value \(\lambda\) for the enhancement of multi-phonon effect as \(\Sigma_{p}^{(4),<}(0)/\Sigma_{p}^{(2),<}(0)\), and
\[\lambda\approx\frac{img_{\rm ep}^{2}}{2\omega_{0}}\int\frac{dq}{2\pi}\int ds \frac{e^{i(q^{2}/2m+\omega_{0})s}}{qEs+2im\Gamma}. \tag{7}\]
This integral is readily expressed in terms of the modified Bessel function \(K_{0}(x)\), and we arrive at the avalanche condition \(\lambda=1\) as
\[1=\frac{mg_{\rm ep}^{2}}{\omega_{0}E}K_{0}\left(\frac{2\Gamma\sqrt{2m\omega_{ 0}}}{E}\right), \tag{8}\]
which is one of our main results.
We turn our attention to the integral, Eq. (7), which is quite revealing. In the \(E=0\) limit, the \(s\)-integral gives \(\delta(q^{2}/2m+\omega_{0})=0\) due to the absence of target electronic states below the band edge, as depicted in FIG. 1(a). Therefore, in the equilibrium limit, there is no enhancement for an avalanche. At a finite \(E\), the pole at \(qs=-2im\Gamma/E\) results in a dominant contribution while smearing the energy conservation due to the time-dependent Hamiltonian, and leads to a strong enhancement for an avalanche. Further discussions on this issue are given in SM.
Using the asymptotic relation of \(K_{0}(x)\) for large \(x\), we can express the solution in the small avalanche field \(E_{\rm av}\) limit as
\[E_{\rm av}\approx\frac{2\Gamma\sqrt{2m\omega_{0}}}{\ln\left(\sqrt{\frac{\pi}{ 32}}\beta\right)}, \tag{9}\]
with the avalanche parameter defined as
\[\beta=\frac{g_{\rm ep}^{2}}{\Gamma}\left(\frac{2m}{\omega_{0}^{3}}\right)^{1/ 2}. \tag{10}\]
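To make the condition concrete, the transcendental equation (8) can be solved numerically for its smallest root \(E_{\rm av}\) and compared with the asymptotic form (9). The sketch below is illustrative only: the parameter values and the search window are assumptions, and \(m=1/2\) follows the stated convention \(ta^{2}=\hbar^{2}/(2m)=1\) with \(\hbar=1\).

```python
import numpy as np
from scipy.special import k0
from scipy.optimize import brentq

def avalanche_field(g_ep, Gamma, omega0, m=0.5):
    """Smallest root E_av of the avalanche condition, Eq. (8).

    m = 1/2 corresponds to the convention t a^2 = hbar^2/(2m) = 1 (hbar = 1).
    Returns None when beta < beta_c and no solution exists.
    """
    def h(E):
        x = 2.0 * Gamma * np.sqrt(2.0 * m * omega0) / E
        return m * g_ep**2 / (omega0 * E) * k0(x) - 1.0

    Es = np.linspace(1e-4, 2.0, 4000)           # illustrative search window
    vals = np.array([h(E) for E in Es])
    brackets = np.nonzero(np.diff(np.sign(vals)) != 0)[0]
    if len(brackets) == 0:
        return None                              # no avalanche (beta < beta_c)
    i = brackets[0]
    return brentq(h, Es[i], Es[i + 1])           # smallest sign change gives E_av

def avalanche_field_asymptotic(g_ep, Gamma, omega0, m=0.5):
    """Small-E_av asymptotic solution, Eqs. (9)-(10)."""
    beta = g_ep**2 / Gamma * np.sqrt(2.0 * m / omega0**3)
    return 2.0 * Gamma * np.sqrt(2.0 * m * omega0) / np.log(np.sqrt(np.pi / 32.0) * beta)

print(avalanche_field(0.5, 0.01, 0.3), avalanche_field_asymptotic(0.5, 0.01, 0.3))
```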
In the following, we will compare the avalanche condition (8) against numerical results. Two of the most fundamental aspects of the quantum avalanche are displayed in FIG. 3. In (a), the almost linear dependence between \(E_{\rm av}\) and \(\Gamma\) illustrates the fact that the \(\Gamma=0\) limit is fundamentally singular in the nonequilibrium limit [16], and that the dephasing is crucial in understanding the nonequilibrium steady state.
The saturation of \(E_{\rm av}\) for large \(\Delta\) in FIG. 3(b) is quite surprising. The weak dependence of \(E_{\rm av}\) on large \(\Delta\) suggests that the avalanche is not a function of the proximity of the band to the particle source, but rather a fundamental instability of the conduction band itself once the particle source is accessible. This observation is consistent with FIG. 1(c), in which the avalanche occurred
Figure 3: (a) Avalanche field \(E_{\rm av}\) versus dephasing rate \(\Gamma\), numerical (data points) and analytic (solid line) results. \(E_{\rm av}\to 0\) as \(\Gamma\to 0\) almost linearly. (b) Saturation of \(E_{\rm av}\) with large \(\Delta\). The saturation shows that the avalanche is due to the fundamental instability of the conduction band itself, not the proximity to the particle reservoir. The initial absence of \(E_{\rm av}\) at small \(\Delta\) is due to the shift of the gap by the electron-phonon coupling. The dashed lines are guides to the eye.
earlier with lower initial occupations in the band. In the analytic argument, this is quite a natural conclusion, since Eq. (8) results from an integration between fermion GFs in which only the energy difference enters, so the gap \(\Delta\) dependence drops out. The following discussions resulting from Eq. (8) correspond to the large \(\Delta\) limit. We caution that \(E_{\rm av}\) saturates only with respect to the model parameter \(\Delta\); in physical systems, the gap dependence could come back indirectly, since the bandgap is roughly proportional to the phonon energy [30].
The dependence on the phonon parameters \(\omega_{0}\) and \(g_{\rm ep}\) is shown in FIG. 4(a) and (b). The monotonic dependence is expected since a larger phonon energy requires a stronger \(E\)-field. It is still remarkable that the curvature change agrees between the analytic and numerical results. The coupling constant dependence in (b) is impressive. The inverse dependence is expected since a weak coupling would need a higher field to generate an avalanche. What is interesting is that the sharp increase occurs at a finite value of \(g_{\rm ep}\).
The threshold behavior of \(g_{\rm ep}\) can be understood analytically. As shown in FIG. 4(c), the R.H.S. of Eq. (8) is a non-monotonic function. As \(g_{\rm ep}\) decreases, the maximum of the R.H.S. drops to 1, after which an avalanche solution no longer exists. By parametrizing \(x=2\Gamma\sqrt{2m\omega_{0}}/E\), this condition amounts to \([xK_{0}(x)]_{x=x_{0}}^{\prime}=0\) at \(x_{0}=0.595047\). Eq. (8) can be rewritten as \(xK_{0}(x)=4/\beta\) with Eq. (10). Therefore, the critical condition for the existence of an avalanche becomes
\[\left.\frac{g_{\rm ep}^{2}}{\Gamma}\left(\frac{2m}{\omega_{0}^{3}}\right)^{1/2 }\right|_{c}=\frac{4}{x_{0}K_{0}(x_{0})}=8.574=\beta_{c}, \tag{11}\]
demonstrating the competition between the dephasing and the avalanching mechanism.
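The quoted threshold values can be checked directly with a short numerical calculation (illustrative only), using the identity \(K_{0}'(x)=-K_{1}(x)\):

```python
from scipy.optimize import brentq
from scipy.special import k0, k1

# d/dx [x K0(x)] = K0(x) - x K1(x); its zero is x0
x0 = brentq(lambda x: k0(x) - x * k1(x), 0.1, 2.0)
beta_c = 4.0 / (x0 * k0(x0))
print(x0, beta_c)   # approximately 0.595047 and 8.574, as in Eq. (11)
```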
Numerical solutions predict peculiar reentrant behaviors near the threshold. As shown in FIG. 4(d), \(E_{\rm av}\) values increase as \(g_{\rm ep}\) is reduced. The \(n_{\rm ex}\) curves after the avalanche do not simply collapse to zero as \(g_{\rm ep}\) is reduced, but they develop reentrant avalanche domes before they reach \(\beta=\beta_{c}\). We speculate that after an avalanche the electrons get excited strongly and the additional dephasing due to the charge-fluctuation mitigates the avalanche leading to the dome behavior. We emphasize that this most elementary nonequilibrium interacting model presents us with very rich physics. The inclusion of the phonon self-energy did not change the reentrant behavior.
The competition between the avalanching mechanism and the dephasing is best summarized in the \(g_{\rm ep}\)-\(\Gamma\) phase diagram for the existence of an avalanche [see FIG. 5(a)]. According to Eq. (11), the line \(\beta=\beta_{c}\) (or \(g_{\rm ep}\propto\sqrt{\Gamma}\)) divides the phase with avalanche (\(\beta>\beta_{c}\)) and that without avalanche (\(\beta<\beta_{c}\)).
Finally, we discuss the temperature dependence of the avalanche in FIG. 5(b). In the resistive switching literature, the conventional understanding is that the switching field of the insulator-to-metal transition decreases with increasing temperature since the order in solids softens with temperature [31]. The quantum avalanche [16] mechanism has predicted the opposite trend to this thermal scenario. This is due to the thermal dephasing counteracting the avalanching mechanism. Therefore, understanding the numerical temperature dependence of \(E_{\rm av}\) is crucial. As shown in (b), the temperature \(T^{*}\) at which
Figure 4: (a) Avalanche field \(E_{\rm av}\) versus phonon frequency \(\omega_{0}\). The electron replica generation is inversely proportional to \(\omega_{0}\), thus needing a larger \(E\)-field to generate high phonon energy. (b) \(E_{\rm av}\) versus electron-phonon coupling \(g_{\rm ep}\). Both the inverse dependence on \(g_{\rm ep}\) and the sharp increase at a threshold \(g_{\rm ep}\) agree well between the numerical and analytic results. (c) Graphs for criterion Eq. (8), as \(g_{\rm ep}\) is varied. Solutions (dots) cease to exist for \(\beta<\beta_{c}\). (d) Numerical results of \(n_{\rm ex}\) vs \(E\)-field as \(g_{\rm ep}\) is varied. As \(g_{\rm ep}\) approaches the threshold value, the avalanche becomes reentrant, with the size of the domes eventually diminishing to zero at the threshold \(g_{\rm ep}\).
Figure 5: (a) \(\Gamma\)-\(g_{\rm ep}\) phase diagram for avalanche. The competition between the electron-phonon coupling and the dephasing results in the phase boundary line \(\beta=\beta_{c}\) (\(g_{\rm ep}\propto\sqrt{\Gamma}\)). \(\beta>\beta_{c}\) supports the avalanche phase. \(\Delta=1\) and \(\omega_{0}=0.3\). (b) Finite temperature behavior of \(E_{\rm av}\). The increasing \(E_{\rm av}(T)\) behavior is direct evidence of a non-thermal mechanism. The analytic prediction of the temperature \(T^{*}\) at the onset (arrows) agrees well with the numerical \(E_{\rm av}(T)\).
the \(E_{\rm av}\) changes significantly is at \(T^{*}\approx 0.17\omega_{0}\). At this temperature, the Bose-Einstein function \(n_{b}(\omega_{0})=(e^{\omega_{0}/T^{*}}-1)^{-1}\) is less than 0.005, which suggests that the mechanism may not be straightforward.
The numerical temperature dependence is resolved by introducing the additional dephasing due to the electron-phonon coupling. The dephasing is evaluated from the retarded self-energy \(\Sigma^{R}(\omega)\) at the bandedge \(\omega=\Delta\). Since the \(\Sigma^{>}\) contribution dominates \(\Sigma^{R}\), we have in the small \(E\)-field limit
\[-{\rm Im}\Sigma^{R}(\Delta)\approx\frac{mg_{\rm ep}^{2}}{2\omega_{0}\sqrt{2m \omega_{0}}}n_{b}(\omega_{0})=\frac{1}{4}\Gamma\beta n_{b}(\omega_{0}). \tag{12}\]
By replacing \(\Gamma\) in Eq. (8) by the effective dephasing \(\Gamma+|{\rm Im}\Sigma^{R}(\Delta)|\), we define the activation temperature \(T^{*}\) at \(|{\rm Im}\Sigma^{R}(\Delta)|\approx\frac{1}{4}\Gamma\) and obtain
\[T^{*}=\frac{\omega_{0}}{\ln(1+\beta)}. \tag{13}\]
The position for \(T^{*}\), marked by arrows in FIG. 5(b), agrees well with the numerical results.
In conclusion, we have proposed the avalanche instability of a conduction band under an electric field, as a general mechanism for a nonequilibrium quantum criticality. Analytical theory, with a comprehensive agreement with numerical confirmation, demonstrates the quantum origin of the avalanche and presents a step toward understanding the quantum nature of the nonequilibrium phase transition. Further studies are necessary to test the ubiquity of the mechanism under various dephasing mechanisms.
**Acknowledgements**
The authors acknowledge the computational support from the CCR at the University at Buffalo. JEH benefited greatly from the insightful comments and encouragement of Camille Aron, and from discussions with Ki-Seok Kim, who pointed out the importance of the multi-phonon diagrams. Helpful discussions with Enrico Arrigoni, Gabriel Kotliar, Jonathan Bird, and Peihong Zhang are appreciated. JEH is grateful for the hospitality of the ENSNCRS where part of the work was completed.
|
2305.09046
|
Convex optimization over a probability simplex
|
We propose a new iteration scheme, the Cauchy-Simplex, to optimize convex
problems over the probability simplex $\{w\in\mathbb{R}^n\ |\ \sum_i w_i=1\
\textrm{and}\ w_i\geq0\}$. Other works have taken steps to enforce positivity
or unit normalization automatically but never simultaneously within a unified
setting. This paper presents a natural framework for manifestly requiring the
probability condition. Specifically, we map the simplex to the positive
quadrant of a unit sphere, envisage gradient descent in latent variables, and
map the result back in a way that only depends on the simplex variable.
Moreover, proving rigorous convergence results in this formulation leads
inherently to tools from information theory (e.g. cross entropy and KL
divergence). Each iteration of the Cauchy-Simplex consists of simple
operations, making it well-suited for high-dimensional problems. We prove that
it has a convergence rate of ${O}(1/T)$ for convex functions, and numerical
experiments of projection onto convex hulls show faster convergence than
similar algorithms. Finally, we apply our algorithm to online learning problems
and prove the convergence of the average regret for (1) Prediction with expert
advice and (2) Universal Portfolios.
|
James Chok, Geoffrey M. Vasil
|
2023-05-15T22:14:22Z
|
http://arxiv.org/abs/2305.09046v1
|
# Convex optimization over a probability simplex
###### Abstract
We propose a new iteration scheme, the Cauchy-Simplex, to optimize convex problems over the probability simplex \(\{w\in\mathbb{R}^{n}\ |\ \sum_{i}w_{i}=1\ \text{and}\ w_{i}\geq 0\}\). Other works have taken steps to enforce positivity or unit normalization automatically but never simultaneously within a unified setting. This paper presents a natural framework for manifestly requiring the probability condition. Specifically, we map the simplex to the positive quadrant of a unit sphere, envisage gradient descent in latent variables, and map the result back in a way that only depends on the simplex variable. Moreover, proving rigorous convergence results in this formulation leads inherently to tools from information theory (e.g. cross entropy and KL divergence). Each iteration of the Cauchy-Simplex consists of simple operations, making it well-suited for high-dimensional problems. We prove that it has a convergence rate of \(\mathcal{O}(1/T)\) for convex functions, and numerical experiments of projection onto convex hulls show faster convergence than similar algorithms. Finally, we apply our algorithm to online learning problems and prove the convergence of the average regret for (1) Prediction with expert advice and (2) Universal Portfolios.
Constrained Optimization, Convex Hull, Simplex, Universal Portfolio, Online Learning, Simplex Constrained Regression, Examinations, Convergence, Gradient Flow
65K10, 68W27, 68W40, 91G10, 97U40
## 1 Introduction
Optimization over the probability simplex (_i.e._, the unit simplex) occurs in many subject areas, including portfolio management [1, 2, 3], machine learning [4, 5, 6, 7], population dynamics [8, 9], and many others including statistics and chemistry [10, 11, 12, 13]. This problem involves minimizing a function \(f(w)\) (assumed convex) with \(w\in\mathbb{R}^{n}\) constrained to the probability simplex,
\[\min_{w\in\Delta^{n}}f(w),\quad\text{where}\ \Delta^{n}=\{w\in\mathbb{R}^{n}\ |\ \sum_{i}w_{i}=1\ \text{and}\ w_{i}\geq 0\}. \tag{1}\]
While linear and quadratic programs can produce an exact solution under certain restrictions on \(f\), they tend to be computationally expensive when \(n\) is large. Here we provide an overview of iterative algorithms to solve this problem approximately and introduce a new algorithm to solve this problem for general convex functions \(f\).
Previous works each enforce only one facet of the simplex constraint, either positivity or the unit sum, thus requiring an extra step to satisfy the remaining constraint. Our proposed method satisfies both constraints simultaneously, and the ideas encapsulated by these previous attempts reappear within it.
## 2 Previous Works
_Projected gradient descent_ (PGD) is a simple method to solve any convex problem over any domain \(R\). The iteration scheme follows
\[w^{t+1}=\text{proj}_{R}(w^{t}-\alpha_{t}\nabla f), \tag{2}\]
where \(\text{proj}_{R}\) is the projection of a point onto \(R\). Since the step \(w^{t}-\alpha_{t}\nabla f\) does not preserve positivity, an explicit solution to the projection cannot be written. However, algorithms exist to perform projection onto the unit simplex in \(O(n)\) time [14; 15], where \(n\) is the dimension of \(w\). While this added cost is typically negligible, it adds a noticeable cost to each iteration once \(n\) is large.
Footnote 1: Assuming addition, subtraction, multiplication, and division take \(\mathcal{O}(1)\) time.
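For reference, a sketch of a standard sort-based Euclidean projection onto the unit simplex is given below; it is an \(\mathcal{O}(n\log n)\) variant of the algorithms in [14; 15] (the \(\mathcal{O}(n)\) versions require more bookkeeping), and the PGD step then follows (2). Function names are illustrative.

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of v onto {w : sum(w) = 1, w >= 0} (sort-based variant)."""
    u = np.sort(v)[::-1]                      # sort in decreasing order
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u * idx > css - 1.0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def pgd_step(w, grad, alpha):
    """One projected gradient descent step, Eq. (2)."""
    return project_to_simplex(w - alpha * grad)
```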
This formulation of PGD can be simplified with linear constraints \(Aw=b\), where \(A\) is an \(k\times n\) matrix and \(b\in\mathbb{R}^{k}\). As suggested in [16; 17; 18; 19], a straightforward update scheme projects \(\nabla f\) into the nullspace of \(A\) and descends the result along the projected direction. For the unit-sum constraint, this algorithm requires solving the constrained optimization problem
\[\arg\min_{x}\frac{1}{2}\|\nabla f-x\|^{2},\quad\text{with}\quad\sum_{i}x_{i}=0. \tag{3}\]
This problem yields to the method of Lagrange multipliers, giving the solution
\[w^{t+1}=w^{t}-\alpha_{t}\left(\nabla f-\frac{1}{n}\sum_{i}\nabla_{i}f\right). \tag{4}\]
While this scheme satisfies the unit-sum constraint, in a similar manner to (2) it does not satisfy the positivity constraint, thus requiring an additional projection with no explicit solution [20; 18].
_Exponentiated Gradient Descent_ (EGD), first presented by Nemirovsky and Yudin [21] and later by Kivinen and Warmuth [22], instead enforces positivity by using exponentials, _i.e._,
\[w_{i}^{t+1}=\frac{w_{i}^{t}\exp(-\alpha_{t}\nabla_{i}f)}{\sum_{j}w_{j}^{t}\exp (-\alpha_{t}\nabla_{j}f)}. \tag{5}\]
In the proper formulation of EGD, there is no normalizing factor, giving the iteration scheme \(w^{t+1}=w^{t}\exp(-\alpha_{t}\nabla f)\). This preserves positivity but not the unit-sum constraint. However, preserving positivity yields an explicit solution for the projection onto the simplex, given by the normalizing factor. In this case, the projection requires \(\mathcal{O}(1)\) time.
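In code, the normalized EGD update (5) is a one-liner (a sketch; the function name is illustrative):

```python
import numpy as np

def egd_step(w, grad, eta):
    """Exponentiated gradient step, Eq. (5); the normalization projects back onto the simplex."""
    u = w * np.exp(-eta * grad)
    return u / u.sum()
```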
_Projection-free methods_, instead, require each iteration to remain explicitly within the domain.
_The Frank-Wolfe method_ is a classic scheme [23] that is experiencing a recent surge in popularity [24; 25; 26; 27]. The method skips projection by assuming \(R\) is convex. That is,
\[w^{t+1}=(1-\gamma_{t})w^{t}+\gamma_{t}s^{t}, \tag{6}\] \[\text{where}\quad s^{t}=\arg\min_{s\in R}\;s\cdot\nabla f\quad \text{and}\quad 0\leq\gamma_{t}\leq 1.\]
Since \(R\) is convex, \(w^{t+1}\in R\) automatically. Frank-Wolfe-based methods tend to be fast for sparse solutions but display oscillatory behavior near the solution, resulting in slow convergence [28; 29].
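On the simplex, the linear minimization in (6) is attained at a vertex, so a Frank-Wolfe step reduces to the following sketch (function name illustrative):

```python
import numpy as np

def frank_wolfe_step(w, grad, gamma):
    """Frank-Wolfe step, Eq. (6): the minimizing vertex of the simplex is e_i with i = argmin_i grad_i."""
    s = np.zeros_like(w)
    s[np.argmin(grad)] = 1.0
    return (1.0 - gamma) * w + gamma * s
```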
_The Pairwise Frank-Wolfe_ (PFW) method improves upon the original by introducing an 'away-step' to prevent oscillations allowing faster convergence [30; 28; 31].
A well-studied [8; 32; 33; 34; 35; 36; 37] formulation of (1) takes \(f\) to be quadratic, known as the standard quadratic optimization problem. In these situations, a line search has analytical solutions for the Frank-Wolfe and PFW methods but not for EGD and PGD. EGD and PGD require approximate methods (e.g. backtracking method [38]), adding extra run time per iteration.
## 3 The Main Algorithm
For convex problems over a probability simplex, we propose a scheme we name the Cauchy-Simplex (CS)
\[w^{t+1}=w^{t}-\eta_{t}d^{t}, \tag{7}\] \[\text{with }d^{t}=w^{t}(\nabla f-w^{t}\cdot\nabla f),\qquad 0< \eta_{t}\leq\eta_{t,\max},\] \[\text{and }\eta_{t,\max}^{-1}=\max_{i}(\nabla_{i}f-w^{t}\cdot \nabla f).\]
The upper bound on the learning rate \(\eta_{t}\) ensures that \(w_{i}^{t+1}\) is positive for all \(i\). Summing over the indices of \(d^{t}\)
\[\sum_{i}w_{i}^{t}(\nabla_{i}f-w^{t}\cdot\nabla f)=(w^{t}\cdot\nabla f)\bigg{(}1- \sum_{i}w_{i}^{t}\bigg{)}. \tag{8}\]
Thus, if \(\sum_{i}w_{i}^{t}=1\), then \(d^{t}\) lies in the null space of the constraint \(\sum_{i}w_{i}=1\) and \(w^{t+1}\) satisfies the unit-sum constraint. This gives a scheme in which each iteration remains explicitly within the probability simplex, much like projection-free methods.
### Motivating Derivation
Our derivation begins by modeling \(w\) through a latent variable, \(\psi\),
\[w_{i}=\frac{\psi_{i}^{2}}{\sum_{j}\psi_{j}^{2}}, \tag{9}\]
which automatically satisfies positivity and unit probability. Now consider gradient descent on \(\psi\),
\[\frac{d\psi}{dt}=-\alpha\frac{\partial f}{\partial\psi},\quad\text{with} \quad\alpha>0. \tag{10}\]
Using the notation \(\dot{\psi}=d\psi/dt\), the chain rule gives
\[\frac{dw}{dt}=\sum_{j}\frac{\partial w}{\partial\psi_{j}}\frac{d\psi_{j}}{dt} =\frac{2}{\|\psi\|^{2}}\bigg{(}\psi\,\dot{\psi}-w\,\psi\cdot\dot{\psi}\bigg{)}. \tag{11}\]
Computing the derivatives, one finds \(\psi\cdot\dot{\psi}\propto\sum_{i}w_{i}(\nabla_{i}f-w\cdot\nabla f)=0\) whenever \(\sum_{i}w_{i}=1\), so the second term in (11) vanishes and simplifying gives
\[\frac{dw}{dt}=\frac{2}{\|\psi\|^{2}}\,\psi\,\dot{\psi}=-\beta w(\nabla f-w \cdot\nabla f)\quad\text{where}\quad\beta=\frac{4\alpha}{\|\psi\|^{2}}, \tag{12}\]
where \(\nabla f=\nabla_{w}f\). Thus giving the iterative scheme
\[w^{t+1}=w^{t}-\eta_{t}d^{t}\quad\text{where}\quad d^{t}=w^{t}(\nabla f-w^{t} \cdot\nabla f). \tag{13}\]
### On the Learning Rate
Unlike EGD and PGD, but similar to Frank-Wolfe and PFW, each iteration of the CS has a maximum possible learning rate \(\eta_{t,\max}\). This restriction may affect the numerical performance of the algorithm. To ease this problem, note that once an index is set to zero, it remains zero and can be ignored. This gives an altered maximum learning rate
\[\tilde{\eta}_{t,\max}=\frac{1}{\max_{i\in S}(\nabla_{i}f-w^{t}\cdot\nabla f)}, \tag{14}\]
where \(S=\{i\mid w_{i}>0\}\). It follows that \(\eta_{t,\max}\leq\tilde{\eta}_{t,\max}\), allowing for larger step-sizes to be taken.
### Connections to Previous Methods
There are two other ways of writing the Cauchy-Simplex. In terms of the flow in \(w\):
\[\frac{dw}{dt}=-W\Pi_{w}\nabla f,\quad\text{where}\quad\Pi_{w}=\text{I}- \mathbb{1}\otimes w, \tag{15}\]
and \(W\) is the diagonal matrix with \(w\) on its diagonal, \(\mathrm{I}\) is the identity matrix, and \(\mathbb{1}\) is the vector of all ones.
In terms of the flow in \(\log(w)\):
\[\frac{d}{dt}\log(w)=-\Pi_{w}\nabla f, \tag{16}\]
giving an alternative exponential iteration scheme
\[w^{t+1}=w^{t}\exp(-\eta_{t}\Pi_{w}\nabla f). \tag{17}\]
**Claim 1**.: \(\Pi_{w}\) _is a projection provided \(\sum_{i}w_{i}=1\)._
Proof.: By direct computation,
\[\Pi_{w}^{2} = \mathrm{I}^{2}-2(\mathbb{1}\otimes w)+(\mathbb{1}\otimes w)( \mathbb{1}\otimes w) \tag{18}\] \[= \mathrm{I}-2(\mathbb{1}\otimes w)+\left(\sum_{i}w_{i}\right)\,( \mathbb{1}\otimes w)\ =\ \Pi_{w},\]
and is, therefore, a projection.
**Remark 1**.: _While \(\Pi_{w}\) is a projection, it takes \(\mathcal{O}(1)\) time to compute._
The formulations (15) and (16) draw a direct parallel to both PGD and EGD, as summarized in Table 1.
PGD can be written in continuous form as
\[\frac{dw}{dt}=-\Pi_{1/n}\nabla f\quad\text{where}\quad\Pi_{1/n}=\mathrm{I}- \frac{1}{n}\mathbb{1}\otimes\mathbb{1}. \tag{19}\]
The projector helps PGD satisfy the unit-sum constraint, but not perfectly for general \(w\). Introducing the multiplication by the matrix \(W\), as in (15), naturally reduces the iteration step size near the boundary, preserving positivity.
EGD, similarly, can be written in continuous form as
\[\frac{d}{dt}\log(w)=-\nabla f. \tag{20}\]
Performing the descent through \(\log(w)\) helps EGD preserve positivity. Introducing the projector makes the resulting exponential iteration scheme (17) agree with the linear iteration scheme (13) up to \(\mathcal{O}(\eta_{t}^{2})\) terms, thus helping preserve the unit-sum constraint.
**Claim 2**.: _The Cauchy-Simplex exponentiated iteration scheme (17) agrees up to \(\mathcal{O}(\eta_{t}^{2})\) with its linear iteration scheme (7)._
Proof.: Taylor expanding (17) w.r.t \(\eta_{t}\) around zero gives
\[w^{t+1}=w^{t}(1-\eta_{t}\Pi_{w}\nabla f)+\mathcal{O}(\eta_{t}^{2}). \tag{21}\]
**Remark 2**.: _Combining PGD and EGD does not give an iteration scheme that preserves positivity and the unit-sum constraints._
Unlike both PGD and EGD, the continuous-time dynamics of the CS are enough to enforce the probability-simplex constraint. This allows us to use the gradient flow of CS, _i.e._ (12), to prove convergence when optimizing convex functions (seen in Section 4). This contrasts with PGD and EGD, in which the continuous dynamics only satisfy one constraint. The discretization of these schemes is necessary to allow an additional projection step, thus satisfying both constraints.
### The Algorithm
The pseudo-code of our method can be seen in Algorithm 1.
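A minimal Python sketch of the linear scheme (7), using the relaxed learning-rate cap (14), is given below; the stopping tolerance and the default step-size fraction are illustrative choices rather than part of the method's specification, and a line search may be used in place of the fixed fraction.

```python
import numpy as np

def cauchy_simplex(grad_f, w0, n_iter=1000, step_fraction=0.5, tol=1e-12):
    """Minimal sketch of the linear Cauchy-Simplex scheme, Eq. (7), with the cap of Eq. (14).

    grad_f        : callable returning the gradient of f at w
    w0            : starting point in the probability simplex
    step_fraction : fraction of the maximum admissible learning rate
    """
    w = np.asarray(w0, dtype=float).copy()
    for _ in range(n_iter):
        grad = grad_f(w)
        proj = grad - w @ grad            # \Pi_w grad
        support = w > 0                   # zeroed entries stay zero, cf. Eq. (14)
        cap = proj[support].max()
        if cap <= tol:                    # stationary point on the simplex
            break
        eta = step_fraction / cap         # eta <= eta_max = 1 / cap
        w = w - eta * w * proj            # remains in the simplex by construction
    return w
```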
|  | \(\sum_{i}w_{i}\neq 1\) | \(\sum_{i}w_{i}=1\) |
| --- | --- | --- |
| \(w_{i}\ngeq 0\) | GD: \(\frac{dw}{dt}=-\nabla f\) | PGD: \(\frac{dw}{dt}=-\Pi_{1/n}\nabla f\) |
| \(w_{i}\geq 0\) | EGD: \(\frac{d}{dt}\log(w)=-\nabla f\) | CS: \(\frac{d}{dt}\log(w)=-\Pi_{w}\nabla f\) |

Table 1: Comparison of different optimization methods and which constraints they satisfy.
## 4 Convergence Proof
We prove the convergence of the Cauchy-Simplex via its gradient flow. We also state the theorems for convergence of the discrete linear scheme but leave the proof in the appendix.
**Theorem 1**.: _Let \(f\) be continuously differentiable w.r.t \(w^{t}\) and \(w^{t}\) continuously differentiable w.r.t \(t\). Under the Cauchy-Simplex gradient flow (12), \(f(w^{t})\) is a decreasing function for \(w^{t}\in\text{int}(\Delta^{n})\), i.e. \(\sum_{i}w^{t}_{i}=1\) and \(w^{t}_{i}>0\)._
Proof.: By direct computation
\[\frac{df}{dt}=\frac{\partial f}{\partial w^{t}}\cdot\frac{dw^{t}}{dt}=-\nabla f \cdot\left(w^{t}(\nabla f-w^{t}\cdot\nabla f)\right)=-\text{Var}[\nabla f|w^{t}], \tag{22}\]
where \(\text{Var}[\nabla f|w^{t}]=w^{t}\cdot(\nabla f-w^{t}\cdot\nabla f)^{2}=w^{t} \cdot(\Pi_{w^{t}}\nabla f)^{2}\). Since \(w^{t}_{i}>0\), it follows that \(\frac{df}{dt}\leq 0\) and that \(f\) is decreasing in time.
**Remark 3**.: _For \(w\in\text{int}(\Delta^{n})\), \(\Pi_{w}\nabla_{i}f=0\) for all \(i\) only if \(w\) is an optimal solution to (1). This can be verified by checking the KKT conditions shown in the appendix. Thus \(f\) is a strictly decreasing function in time and stationary only at the optimal solution._
**Remark 4**.: _If the initial condition satisfies \(w^{0}\in\text{int}(\Delta^{n})\), then \(w^{t}\in\text{int}(\Delta^{n})\) for all \(t\). This can be seen as (15) preserves the unit-sum constraint, and the rearranged equation (16) preserves positivity._
**Theorem 2**.: _Let \(f\) be convex and continuously differentiable w.r.t \(w^{t}\), \(w^{t}\) continuously differentiable w.r.t \(t\), and \(w^{*}\in\Delta^{n}\) be a solution to (1). Under the Cauchy-Simplex gradient flow (12), the relative entropy_
\[D(w^{*}|w^{t})=\sum_{w^{*}_{i}\neq 0}w^{*}_{i}\log\left(\frac{w^{*}_{i}}{w^{t}_ {i}}\right) \tag{23}\]
_is a decreasing function in time for \(w^{t}\in\text{int}(\Delta^{n})\)._
Proof.: We rewrite the relative entropy into
\[D(w^{*}|w^{t})=\sum_{w^{*}_{i}\neq 0}w^{*}_{i}\log(w^{*}_{i})-\sum_{i}w^{*}_{ i}\log(w^{t}_{i}). \tag{24}\]
By direct computation
\[\frac{d}{dt}D(w^{*}|w^{t})=-\sum_{i}\frac{w^{*}_{i}}{w^{t}_{i}}\frac{dw^{t}_{ i}}{dt}=\sum_{i}w^{*}_{i}(\Pi_{w^{t}}\nabla_{i}f)=\nabla f\cdot(w^{*}-w^{t}). \tag{25}\]
Since \(f\) is convex, and \(w^{*}\) is a minimum of \(f\) in the simplex,
\[\frac{d}{dt}D(w^{*}|w^{t})\leq f(w^{*})-f(w^{t})\leq 0. \tag{26}\]
**Theorem 3**.: _Let \(f\) be convex and continuously differentiable w.r.t \(w^{t}\), \(w^{t}\) continuously differentiable w.r.t \(t\), and \(w^{*}\in\Delta^{n}\) is a solution to (1). For \(w^{0}=(1/n,\ldots,1/n)\) and \(T>0\), the Cauchy-Simplex gradient flow (12) gives the bound_
\[f(w^{T})-f(w^{*})\leq\frac{\log(n)}{T}. \tag{27}\]
Proof.: Integrating (26) w.r.t \(t\) gives
\[\int_{0}^{T}\left(f(w^{t})-f(w^{*})\right)dt\leq-\int_{0}^{T}\frac{d}{dt}D(w^{ *}|w^{t})\,dt=D(w^{*}|w^{0})-D(w^{*}|w^{T}). \tag{28}\]
Since relative entropy is non-negative [39; 40],
\[\int_{0}^{T}\left(f(w^{t})-f(w^{*})\right)dt\leq D(w^{*}|w^{0}). \tag{29}\]
Completing the integral on the left side of the inequality and dividing by \(T\) gives
\[\frac{1}{T}\int_{0}^{T}f(w^{t})\,dt-f(w^{*})\leq\frac{1}{T}D(w^{*}|w^{0}). \tag{30}\]
Using Theorem 1, \(f(w^{t})\) is a decreasing function. Thus
\[f(w^{T})=\frac{1}{T}\int_{0}^{T}f(w^{T})\,dt\leq\frac{1}{T}\int_{0}^{T}f(w^{t} )\,dt. \tag{31}\]
Noting that for \(w^{0}=(1/n,\ldots,1/n)\), \(D(u|w^{0})\leq\log(n)\) for all \(u\in\Delta^{n}\), gives the required bound
\[f(w^{T})-f(w^{*})\leq\frac{D(w^{*}|w^{0})}{T}\leq\frac{\log(n)}{T}. \tag{32}\]
**Theorem 4** (Convergence of Linear Scheme).: _Let \(f\) be a differentiable convex function that obtains a minimum at \(w^{*}\) with \(\nabla f\) \(L\)-Lipschitz continuous. Let \(w^{0}=(1/n,\ldots,1/n)\) and \(\{\eta_{t}\}_{t}\) be a decreasing sequence that satisfies \(\eta_{t}\leq\min\{1/L,\eta_{t,\max}\}\) and_
Footnote 2: Note that \(C_{\gamma_{t}}\) is an increasing function of \(\gamma_{t}\), with \(C_{\gamma_{t}}\geq 0\) for \(\gamma_{t}\in[0,1]\). Thus \(\gamma_{t}=\eta_{t}/\eta_{t,\max}\) can also be chosen to satisfy (33) and that \(\{\eta_{t}\}_{t}\) is a decreasing sequence with \(\eta_{t}\leq\min\{1/L,\eta_{t,\max}\}\).
\[C_{\gamma_{t}}\leq\frac{(t+1)\,w^{t}\cdot(\Pi^{t}\nabla f^{t})^{2}}{2\max_{i }(\Pi^{t}\nabla f^{t})_{i}^{2}}, \tag{33}\]
_with \(\gamma_{t}=\eta_{t}/\eta_{t,\max}\), \(C_{\gamma_{t}}=\gamma_{t}^{-2}\log(e^{-\gamma_{t}}/(1-\gamma_{t}))\) and \(\eta_{t,\max}\) defined in (7). Then the linear Cauchy-Simplex scheme (7) produces iterates \(\{w^{t}\}_{t}\) such that_
\[f(w^{T})-f(w^{*})\leq\frac{\log(n)}{T\,\eta_{T}}. \tag{34}\]
## 5 Applications
### Projection onto the Convex Hull
Projection onto a convex hull arises in many areas like machine learning [5; 6; 4], collision detection [41] and imaging [42; 43]. It involves finding a point in the convex hull of a set of points \(\{x_{i}\}_{1\leq i\leq n}\), with \(x_{i}\in\mathbb{R}^{d}\), that is closest to an arbitrary point \(y\in\mathbb{R}^{d}\), _i.e._,
\[\min_{w}\|wX-y\|^{2}\qquad\text{where}\qquad\sum_{i}w_{i}=1\text{ and }w_{i}\geq 0, \tag{35}\]
and \(X=[x_{1},\cdots,x_{n}]^{T}\) is a \(\mathbb{R}^{n\times d}\) matrix. This is also known as simplex-constrained regression.
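As a usage sketch (assumptions: the rows of \(X\) are the hull points, names are illustrative, and the closed-form step size follows from minimizing the quadratic objective (35) along the update direction, which is not spelled out in the text):

```python
import numpy as np

def cs_project_onto_hull(X, y, n_iter=10_000, tol=1e-12):
    """Project y onto the convex hull of the rows of X with the Cauchy-Simplex, Eq. (7)."""
    n = X.shape[0]
    w = np.full(n, 1.0 / n)
    for _ in range(n_iter):
        r = w @ X - y                       # residual wX - y
        grad = 2.0 * X @ r                  # gradient of ||wX - y||^2
        proj = grad - w @ grad              # \Pi_w grad
        d = w * proj                        # Cauchy-Simplex direction
        support = w > 0
        cap = proj[support].max()
        if cap <= tol:                      # stationary point reached
            break
        eta_max = 1.0 / cap
        dX = d @ X
        eta = (r @ dX) / max(dX @ dX, tol)  # exact line search for the quadratic objective
        w = w - min(eta, eta_max) * d
    return w
```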
**Experimental Details**: We look at a convex hull sampled from the unit hypercube \([0,1]^{d}\) for \(d\in[10,15,\ldots,50]\). For each hypercube, we sample 50 points uniformly on each of its surfaces, giving a convex hull \(X\) with \(n=100d\) data points.
Once \(X\) is sampled, 50 \(y\)'s are created outside the hypercube perpendicular to a surface and unit length away from it. This is done by considering the 50 points in \(X\) lying on a randomly selected surface of the hypercube. A point \(y_{\text{true}}\) is created as a random convex combination of these points. The point \(y\) can then be created perpendicular to this surface and a unit length away from \(y_{\text{true}}\), and thus also from the convex hull of \(X\).
Each \(y\) is then projected onto \(X\) using CS, EGD, and PFW. These algorithms are run until a solution \(\hat{y}\) is found such that \(\|\hat{y}-y_{\text{true}}\|\leq 10^{-5}\) or \(10\,000\) iterations have been made. We do not implement PGD and Frank-Wolfe due to their inefficiency in practice.
**Implementation Details**: The learning rate for EGD, PFW, and CS is found through a line search. For the PFW and CS algorithms, an explicit solution can be found and used, while EGD uses a backtracking line search with Armijo conditions [38] to find an approximate solution.
Experiments were written in Python and run on Google Colab. The code can be found on GitHub. The random data is seeded for reproducibility.
Footnote 3: [https://github.com/infamousoap/ConvexHull](https://github.com/infamousoap/ConvexHull)
**Results**: The results can be seen in Fig. 1. For \(d=10\), we see that PFW outperforms both CS and EGD in terms of the number of iterations required and the time taken. But for \(d>10\), on average, CS converges with the least iterations and time taken.
### Optimal Question Weighting
It is often desirable that the distribution of exam marks matches a target distribution, but this rarely happens. Modern standardized tests (e.g. IQ exams) solve this problem by transforming the distribution of the raw score of a given age group so it fits a normal distribution [44, 45, 46].
While IQ exams have many criticisms [47, 48, 49], we are interested in the raw score. As noted by Gottfredson [46], the raw score has no intrinsic meaning, as it can be boosted by adding easier questions to the test. We also argue that it is hard to predict the difficulty of a question relative to an age group and thus even harder to give it the correct weight, making the raw score a poor reflection of a person's performance.
Here we propose a framework to find an optimum weighting of questions such that the weighted scores will fit a target distribution. A demonstration can be seen in Fig. 2.
Consider \(d\) students taking an exam with \(n\) true or false questions. For simplicity, assume that person \(j\) getting question \(i\) correct can be modeled as a random variable \(\mathcal{X}_{i,j}\) for \(i=1,\ldots,n\) and \(j=1,\ldots,d\). Consider the discrete distribution of \(X_{j}=\sum_{i}w_{i}\mathcal{X}_{i,j}\) for some \(w\in\Delta^{n}\), the weighted mark of person \(j\). This distribution can be approximated as
Figure 1: Number of steps and time required for PFW, EGD, and CS to project 50 randomly sampled points onto the \(d\)-hypercube. The bars indicate the minimum and maximum values.
a continuous distribution,
\[\rho_{\varepsilon}(z)=\frac{1}{d}\sum_{j=1}^{d}\mu_{\varepsilon}(z-X_{j})\quad \text{with}\quad\mu_{\varepsilon}(x)=\frac{1}{\varepsilon}\mu(x/\varepsilon), \tag{36}\]
where \(\varepsilon>0\) and \(\mu\) is a continuous probability distribution, i.e. \(\mu\geq 0\) and \(\int\mu(x)dx=1\). This is also known as kernel density estimation.
We want to minimize the distance between \(\rho_{\varepsilon}\) and some target distribution \(f\). A natural choice is the relative entropy,
\[\min_{w\in\Delta^{n}}D(\rho_{\varepsilon}|f)=\min_{w\in\Delta^{n}}\int\rho_{ \varepsilon}(x)\log\left(\frac{\rho_{\varepsilon}(x)}{f(x)}\right)\,dx, \tag{37}\]
of which we take its Riemann approximation,
\[\min_{w\in\Delta^{n}}\hat{D}(\rho_{\varepsilon}|f)=\min_{w\in\Delta^{n}}\sum_ {k=1}^{M}\rho_{\varepsilon}(x_{k})\log\left(\frac{\rho_{\varepsilon}(x_{k})}{ f(x_{k})}\right)(x_{k}-x_{k-1}), \tag{38}\]
where \(\{x_{k}\}_{0\leq k\leq M}\) is a partition of a finite interval \([a,b]\).
We remark that this problem is not convex, as \(\mu\) cannot be chosen to be convex w.r.t \(w\) and be a probability distribution.
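For concreteness, a sketch of the Riemann-approximated objective (38) is given below. It uses an untruncated Gaussian kernel for brevity (the experiments truncate both the kernel and the target to \([0,1]\)), assumes the target density is positive on the grid, and leaves the gradient needed by the optimizer to automatic differentiation or finite differences; function and variable names are illustrative.

```python
import numpy as np
from scipy.stats import norm

def weighted_score_divergence(w, answers, target_pdf, eps=0.05, n_grid=400):
    """Riemann approximation of D(rho_eps | f), Eq. (38).

    w          : question weights in the simplex, shape (n,)
    answers    : 0/1 matrix X_{i,j}, shape (n, d)
    target_pdf : callable target density on [0, 1], assumed positive there
    """
    grid = np.linspace(0.0, 1.0, n_grid + 1)
    scores = w @ answers                                   # X_j = sum_i w_i X_{i,j}
    rho = norm.pdf((grid[:, None] - scores[None, :]) / eps).mean(axis=1) / eps
    dx = np.diff(grid)                                     # x_k - x_{k-1}
    integrand = rho[1:] * np.log(rho[1:] / target_pdf(grid[1:]))
    return float(np.sum(integrand * dx))
```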
**Experiment Details**: We consider 25 randomly generated exam marks, each having \(d=200\) students taking an exam with \(n=75\) true or false questions. For simplicity, we assume that \(\mathcal{X}_{i,j}\sim\text{Bernoulli}(q_{i}s_{j})\) where \(0<q_{i}<1\) is the difficulty of question \(i\) and \(0<s_{j}<1\) the \(j\)-th student's smartness.
For each scenario, \(q_{i}=7/8\) for \(1\leq i\leq 60\) and \(q_{i}=1/5\) for \(60<i\leq 75\), while \(s_{j}=7/10\) for \(1\leq j\leq 120\) and \(s_{j}=1/2\) for \(120<j\leq 200\). \(\mathcal{X}_{i,j}\sim\text{Bernoulli}(q_{i}s_{j})\) are then sampled. This setup results in a bimodal distribution, with an expected average of \(0.532\) and an expected standard deviation of \(0.1206\), as shown in Figure 2.
Figure 2: Optimal question weighting for (randomly generated) exam scores with 200 students and 75 questions. The setup follows the experimental details in Section 5.2. The kernel density estimate uses a truncated unit normal distribution with \(\varepsilon=0.05\), and the target distribution is a truncated normal distribution with a mean of 0.5 and a standard deviation of 0.1. We take \(w^{0}=(0.01,\ldots,0.01)\), and the Cauchy-Simplex is applied, with each step using a backtracking line search. The resulting weighted histogram and kernel density estimate is shown.
The distribution of the weighted marks is shown on the top row, and its QQ plots against a normal distribution of mean 0.5 and a standard deviation of 0.1 are shown on the bottom row.
At iterations 0, 5, and 20, the weighted scores have a mean of 0.499, 0.514, and 0.501, with standard deviations of 0.128, 0.124, and 0.109, respectively.
For the kernel density estimate, \(\mu(x)\) is chosen as a unit normal distribution truncated to \([0,1]\), with smoothing parameter \(\varepsilon=0.05\). Similarly, \(f\) is a normal distribution with mean \(0.5\) and variance \(0.1\), truncated to \([0,1]\). We take the partition \(\{k/400\}_{0\leq k\leq 400}\) for the Riemann approximation.
The algorithms CS, EGD, and PFW are run for 150 iterations.
**Implementation Details**: The learning rate for EGD, PFW, and CS is found through a line search. However, explicit solutions are not used. Instead, a back-tracking line search with Armijo conditions is used to find an approximate solution.
Experiments were written in Python, run on Google Colab, and can be found on GitHub. The random data is seeded for reproducibility.
Footnote 4: [https://github.com/infamousoap/ConvexHull](https://github.com/infamousoap/ConvexHull)
**Results**: A table with the results can be seen in Table 2. In summary, of the 25 scenarios, PFW always produces solutions with the smallest relative entropy, with CS producing the largest relative entropy 13 times and EGD 12 times. For the time taken to make the 150 steps, PFW is the quickest 15 times, EGD 7 times, and CS 3 times. At the same time, EGD is the slowest 13 times, CS 7 times, and PFW 5 times.
It is perhaps expected that PFW outperforms both EGD and CS due to the low dimensionality of this problem. However, CS produces relative entropies similar to those of EGD while maintaining a lower average run time of 5.22 seconds, compared to 5.68 seconds for EGD.
Table 2: Results for optimal question weighting after running CS, EGD, and PFW for 150 iterations, reporting the final distance (relative entropy) and the time taken in seconds for each of the 25 trials (average times: CS 5.22 s, EGD 5.68 s, PFW 4.97 s). Bold represents the minimum (best) value, and underline represents the maximum (worst) value.
### Prediction from Expert Advice
Consider \(N\) 'experts' (e.g., Twitter) who give daily advice for \(1\leq t\leq T\). Suppose that, as a player in this game, taking advice from expert \(i\) on the day \(t\) will incur a loss \(l_{i}^{t}\in[0,1]\). This loss is not known beforehand, only once the advice has been taken. The weighted loss \(w^{t}\cdot l^{t}\) is the expected loss if we pick our expert w.r.t some distribution \(w^{t}\in\Delta^{N}\). This problem is also known as the multi-armed bandit problem.
A simple goal is to generate a sequence of weight vectors \(\{w^{t}\}_{t}\) to minimize the averaged expected loss. This goal is, however, a bit too ambitious as the loss vectors \(l^{t}\) are not known beforehand. An easier problem is to find a sequence, \(\{w^{t}\}_{t}\), such that its averaged expected loss approaches the average loss of the best expert as \(T\rightarrow\infty\), that is
\[\frac{1}{T}R_{T}\to 0,\quad\text{where}\quad R_{T}=\sum_{t=1}^{T}w^{t} \cdot l^{t}-\min_{i}\sum_{t=1}^{T}l_{i}^{t} \tag{39}\]
as \(T\rightarrow\infty\). \(R_{T}\) is commonly known as the regret of the strategy \(\{w^{t}\}_{t}\).
Previous works [50, 51, 52, 53] all yield \(\mathcal{O}(\sqrt{T\log N})\) convergence rate for the regret.
**Theorem 5**.: _Consider a sequence of adversary loss vectors \(l^{t}\in[0,1]^{N}\). For any \(u\in\Delta^{N}\), the regret generated by the Cauchy-Simplex scheme_
\[w^{t+1}=w^{t}(1-\eta_{t}\Pi_{w^{t}}\nabla f^{t})\quad\text{where}\quad\nabla f ^{t}=l^{t}, \tag{40}\]
_is bounded by_
\[\sum_{t=1}^{T}w^{t}\cdot l^{t}-\sum_{t=1}^{T}u\cdot l^{t}\leq\frac{D(u|w^{1})} {\eta}+\frac{T\eta}{2(1-\eta)}, \tag{41}\]
_for a fixed learning rate \(\eta_{t}=\eta<1\)._
_In particular, taking \(w^{1}=(1/N,\ldots,1/N)\) and \(\eta=\frac{\sqrt{2\log(N)}}{\sqrt{2\log(N)}+\sqrt{T}}\) gives the bound_
\[\sum_{t=1}^{T}w^{t}\cdot l^{t}-\sum_{t=1}^{T}u\cdot l^{t} \leq \sqrt{2T\log(N)}+\log(N). \tag{42}\]
_Moreover, this holds when \(u=e_{j}\), where \(j\) is the best expert and \(e_{j}\) is the standard basis vector._
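A sketch of the resulting strategy follows; the loss matrix and its shape are assumptions for illustration, and the fixed learning rate is the one from Theorem 5.

```python
import numpy as np

def cs_expert_weights(losses):
    """Prediction with expert advice via Eq. (40), with the fixed rate of Theorem 5.

    losses : (T, N) array of per-day expert losses in [0, 1]
    Returns the sequence of weight vectors used on each day.
    """
    T, N = losses.shape
    c = np.sqrt(2.0 * np.log(N))
    eta = c / (c + np.sqrt(T))
    w = np.full(N, 1.0 / N)
    history = []
    for l in losses:
        history.append(w.copy())               # w^t is chosen before l^t is revealed
        w = w * (1.0 - eta * (l - w @ l))      # Eq. (40)
    return np.array(history)
```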
### Universal Portfolio
Consider an investor with a fixed-time trading horizon, \(T\), managing a portfolio of \(N\) assets. Define the price relative for the \(i\)-th stock at time \(t\) as \(x_{i}^{t}=C_{i}^{t}/C_{i}^{t-1}\), where \(C_{i}^{t}\) is the closing price at time \(t\) for the \(i\)-th stock. So today's closing price of asset \(i\) equals \(x_{i}^{t}\) times yesterday's closing price, _i.e._ today's price relative to yesterday's.
A portfolio at day \(t\) can be described as \(w^{t}\in\Delta^{N}\), where \(w_{i}^{t}\) is the proportion of an investor's total wealth in asset \(i\) at the beginning of the trading day. Then the wealth of the portfolio at the beginning of day \(t+1\) is \(w^{t}\cdot x^{t}\) times the wealth of the portfolio at day \(t\).
Consider the average log-return of the portfolio
\[\frac{1}{T}\log\left(\prod_{t=1}^{T}w^{t}\cdot x^{t}\right)=\frac{1}{T}\sum_ {t=1}^{T}\log(w^{t}\cdot x^{t}). \tag{43}\]
Similarly to predicting with expert advice, it is too ambitious to find a sequence of portfolio vectors \(\{w^{t}\}_{t}\) that maximizes the average log-return. Instead, we wish to find such a sequence that approaches the best fixed-weight portfolio, _i.e._
\[\frac{1}{T}LR_{T}\to 0,\quad\text{where}\quad LR_{T}=\sum_{t=1}^{T} \log(u\cdot x^{t})-\sum_{t=1}^{T}\log(w^{t}\cdot x^{t}) \tag{44}\]
as \(T\rightarrow\infty\), for some \(u\in\Delta^{N}\). If such a sequence can be found, \(\{w^{t}\}_{t}\) is a universal portfolio. \(LR_{T}\) is commonly known as the log-regret.
Two standard assumptions are made when proving universal portfolios: (1) For every day, all assets have a bounded price relative, and at least one is non-zero, _i.e._\(0<\max_{i}x_{i}^{t}<\infty\) for all \(t\), and (2) No stock goes bankrupt during the trading period, _i.e._\(a:=\min_{i,t}x_{i}^{t}>0\), where \(a\) is known as the market variability parameter. The latter is also known as the no-junk-bond assumption.
Over the years, various bounds on the log-regret have been proven under both assumptions. Some examples include Cover, in his seminal paper [54], with \(\mathcal{O}(\log T)\), Helmbold _et al._[3] with \(\mathcal{O}(\sqrt{T\log N})\), Agarwal _et al._[55] with \(\mathcal{O}(N^{1.5}\log(NT))\), Hazan and Kale [56] with \(\mathcal{O}(N\log(T+N))\), and Gaivoronski and Stella [57] with \(\mathcal{O}(C^{2}\log(T))\), where \(C=\sup_{x\in\Delta^{N}}\|\nabla_{b}\log(b\cdot x)\|\), under an extra assumption of independent price relatives. These methods come with varying levels of computational complexity.
**Remark 5**.: _Let \(x^{t}\in[a,b]^{N}\) be a bounded sequence of price relative vectors for \(0\leq a\leq b<\infty\). Since the log-regret is invariant under re-scalings of \(x^{t}\), w.l.o.g. we can look at the log-regret for the re-scaled return vectors \(x^{t}\in[\tilde{a},1]^{N}\) for \(0\leq\tilde{a}\leq 1\)._
**Theorem 6**.: _Consider a bounded sequence of price relative vectors \(x^{t}\in[a,1]^{N}\) for some positive constant \(0<a\leq 1\), and \(\max_{i}x_{i}^{t}=1\) for all \(t\). Then the log-regret generated by the Cauchy-Simplex_
\[w^{t+1}=w^{t}(1-\eta\Pi_{w^{t}}\nabla f^{t}),\quad\text{where} \quad\nabla f^{t}=-\frac{x^{t}}{w^{t}\cdot x^{t}}, \tag{45}\]
_is bounded by_
\[\sum_{t=1}^{T}\log(u\cdot x^{t})-\sum_{t=1}^{T}\log(w^{t}\cdot x^{t})\leq \frac{D(u|w^{1})}{\eta}+\frac{T\eta}{2a^{2}(1-\eta)}, \tag{46}\]
_for any \(u\in\Delta^{N}\) and \(0<\eta\leq 1\)._
_In particular, taking \(w^{1}=(1/N,\ldots,1/N)\) and \(\eta=\frac{a\sqrt{2\log(N)}}{a\sqrt{2\log(N)}+\sqrt{T}}\) gives the bound_
\[\sum_{t=1}^{T}\log(u\cdot x^{t})-\sum_{t=1}^{T}\log(w^{t}\cdot x^{t})\leq \frac{\sqrt{2T\log(N)}}{a}+\log(N). \tag{47}\]
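A sketch of the corresponding trading loop is given below (assumptions: price relatives are rescaled so that \(\max_{i}x_{i}^{t}=1\), and the market variability parameter \(a\) is known, as in the experiments; function names are illustrative).

```python
import numpy as np

def cs_universal_portfolio(x, a):
    """Cauchy-Simplex universal portfolio, Eq. (45), with the learning rate of Theorem 6.

    x : (T, N) array of price relatives with max_i x[t, i] = 1 and min x >= a > 0
    Returns the final portfolio and the cumulative wealth (initial wealth 1).
    """
    T, N = x.shape
    c = a * np.sqrt(2.0 * np.log(N))
    eta = c / (c + np.sqrt(T))
    w = np.full(N, 1.0 / N)
    wealth = 1.0
    for xt in x:
        ret = w @ xt
        wealth *= ret                               # daily wealth multiplier w^t . x^t
        grad = -xt / ret                            # gradient of f^t = -log(w . x^t)
        w = w * (1.0 - eta * (grad - w @ grad))     # Eq. (45)
    return w, wealth
```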
**Experimental Details**: We look at the performance of our algorithm on four standard datasets used to study the performance of universal portfolios: (1) NYSE is a collection of 36 stocks traded on the New York Stock Exchange from July 3, 1962, to Dec 31, 1984, (2) DJIA is a collection of 30 stocks tracked by the Dow Jones Industrial Average from Jan 14, 2001, to Jan 14, 2003, (3) SP500 is a collection of the 25 largest market cap stocks tracked by the Standard & Poor's 500 Index from Jan 2, 1988, to Jan 31, 2003, and (4) TSE is a collection of 88 stocks traded on the Toronto Stock Exchange from Jan 4, 1994, to Dec 31, 1998.
Footnote 5: The datasets were original found on [http://www.cs.technion.ac.il/~rani/portfolios/](http://www.cs.technion.ac.il/~rani/portfolios/), but is now unavailable. It was retrieved using the WayBack Machine [https://web.archive.org/web/20220111131743/http://www.cs.technion.ac.il/~rani/portfolios/](https://web.archive.org/web/20220111131743/http://www.cs.technion.ac.il/~rani/portfolios/).
Two other portfolio strategies are considered: (1) Helmbold _et al._ [3] (EGD), who use the EGD scheme \(w^{t+1}=w^{t}\exp(\eta\nabla f^{t})/\sum_{i}w_{i}^{t}\exp(\eta\nabla_{i}f^{t})\), with \(\nabla f^{t}=x^{t}/(w^{t}\cdot x^{t})\), and (2) the Buy and Hold (B&H) strategy, where one starts with an equally weighted portfolio and leaves it to its own devices.
Two metrics are used to evaluate the performance of the portfolio strategies: (1) The Annualized Percentage Yield: \(\text{APY}=\text{R}^{1/y}-1\), where \(R\) is the total return over the full trading period, and \(y=T/252\), where 252 is the average number of annual trading days, and (2) The Sharpe Ratio: \(\text{SR}=(\text{APY}-R_{f})/\sigma\), where \(\sigma^{2}\) is the variance of the daily returns of the portfolio, and \(R_{f}\) is the risk-free interest rate. Intuitively, the Sharpe ratio measures the performance of the portfolio relative to a risk-free investment while also factoring in the portfolio's volatility. Following [58], we take \(R_{f}=4\%\).
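In code, the two metrics read as follows (a sketch; the daily returns are the wealth multipliers \(w^{t}\cdot x^{t}\), and names are illustrative):

```python
import numpy as np

def apy_and_sharpe(daily_returns, risk_free=0.04, trading_days=252):
    """Annualized percentage yield and Sharpe ratio as defined above."""
    total_return = float(np.prod(daily_returns))
    years = len(daily_returns) / trading_days
    apy = total_return ** (1.0 / years) - 1.0
    sharpe = (apy - risk_free) / np.std(daily_returns)
    return apy, sharpe
```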
We take the learning rate as the one used to prove that CS and EGD are universal portfolios. In particular, \(\eta_{\text{CS}}=\frac{a\sqrt{2\log N}}{a\sqrt{2\log N}+\sqrt{T}}\) and \(\eta_{\text{EGD}}=2a\sqrt{2\log(N)/T}\), respectively, where \(a\) is the market variability parameter. We assume that the market variability parameter is given for each dataset.
Experiments were written in Python and can be found on GitHub.
Footnote 6: [https://github.com/infamousoap/UniversalPortfolio](https://github.com/infamousoap/UniversalPortfolio)
**Results**: A table with the results can be seen in Table 3. For the NYSE, DJIA, and SP500 datasets, CS slightly outperforms EGD in both the APY and Sharpe ratio, with EGD having a slight edge on the APY for the NYSE dataset. But curiously, the B&H strategy outperforms both CS and EGD on the TSE.
We remark that this experiment does not reflect real-world performance, as the market variability parameter is assumed to be known, transaction costs are not factored into our analysis, and the no-junk-bond assumption tends to overestimate performance [59; 60; 61]. However, this is outside of the scope of this paper. It is only shown as a proof of concept.
## 6 Conclusion
This paper presents a new iterative algorithm, the Cauchy-Simplex, to solve convex problems over a probability simplex. Within this algorithm, we find ideas from previous works which only capture a portion of the simplex constraint. Combining these ideas, the Cauchy-Simplex provides a numerically efficient framework with nice theoretical properties.
The Cauchy-Simplex maintains the linear form of Projected Gradient Descent, allowing one to find analytical solutions to a line search for certain convex problems. But unlike projected gradient descent, this analytical solution remains in the probability simplex. A backtracking line search can be used when an analytical solution cannot be found; however, this requires an extra parameter, the maximum candidate step size. The Cauchy-Simplex provides a natural choice, since a maximum learning rate is already required to enforce positivity, in contrast to the exponentials used in Exponentiated Gradient Descent.
Since the Cauchy-Simplex satisfies both constraints of the probability simplex in its iteration scheme, its gradient flow can be used to prove convergence for differentiable convex functions, with an analogous result for its discrete linear scheme. This is in contrast to EGD, PFW, and PGD, whose discrete nature is crucial for satisfying both constraints of the probability simplex: the discretization of those schemes is what allows an additional projection step. More surprisingly, we find that quantities natural to probability, _i.e._, variance and relative entropy, arise necessarily in the convergence proofs.
We believe that the strong numerical results and simplicity seen through its motivating derivation, gradient flow, and iteration scheme make it a strong choice for solving problems with a probability simplex constraint.
## Appendix A Convergence Proofs
We will use the notation \(\Pi^{t}=\Pi_{w^{t}}\) and \(\nabla f^{t}=\nabla f(w^{t})\) for the remainder of this section.
### Decreasing Relative Entropy
**Theorem 7**.: _For any \(u\in\Delta^{N}\), each iteration of the linear Cauchy-Simplex (7) satisfies the bound_
\[D(u|w^{t+1})-D(u|w^{t})\leq\eta_{t}u\cdot(\Pi^{t}\nabla f^{t})+C_{\gamma_{t}} \eta_{t}^{2}u\cdot(\Pi^{t}\nabla f^{t})^{2}, \tag{48}\]
_where \(C_{\gamma_{t}}=\gamma_{t}^{-2}\log(e^{-\gamma_{t}}/(1-\gamma_{t}))\) and \(\eta_{t}=\gamma_{t}\eta_{t,\max}\) with \(\gamma_{t}\in(0,1)\)._
Proof.: By direct computation
\[D(u|w^{t+1})-D(u|w^{t}) = \sum_{i}u_{i}\log\left(\frac{1}{1-\eta_{t}\Pi^{t}\nabla_{i}f^{t}}\right) \tag{49}\] \[= \sum_{i}u_{i}\log\left(\frac{e^{-\eta_{t}\Pi^{t}\nabla_{i}f^{t} }}{1-\eta_{t}\Pi^{t}\nabla_{i}f^{t}}e^{\eta_{t}\Pi^{t}\nabla_{i}f^{t}}\right)\] (50) \[= \eta_{t}u\cdot(\Pi^{t}\nabla f^{t})+\sum_{i}u_{i}\log\left(\frac {e^{-\eta_{t}\Pi^{t}\nabla_{i}f^{t}}}{1-\eta_{t}\Pi^{t}\nabla_{i}f^{t}}\right). \tag{51}\]
| Dataset | CS APY | CS Sharpe | EGD APY | EGD Sharpe | B&H APY | B&H Sharpe |
| --- | --- | --- | --- | --- | --- | --- |
| NYSE | 0.162 | **14.360** | **0.162** | 14.310 | 0.129 | 9.529 |
| DJIA | **-0.099** | **-8.714** | -0.101 | -8.848 | -0.126 | -10.812 |
| SP500 | **0.104** | **4.595** | 0.101 | 4.395 | 0.061 | 1.347 |
| TSE | 0.124 | 10.225 | 0.123 | 10.204 | **0.127** | **10.629** |

Table 3: Performance of different portfolio strategies on different datasets. Bold represents the maximum (best) value, and underline represents the minimum (worst) value.
Since the learning rate can be written as \(\eta_{t}=\gamma_{t}\eta_{t,\max}\), with \(\gamma_{t}\in(0,1)\),
\[\eta_{t}\Pi^{t}\nabla_{i}f^{t}\leq(\gamma_{t}\eta_{t,\max})\max_{i}\Pi^{t}\nabla_ {i}f^{t}=\gamma_{t}<1. \tag{52}\]
Consider the inequality \(0\leq\log(e^{-x}/(1-x))\leq C_{\gamma_{t}}x^{2}\) for \(x<\gamma_{t}\) and \(C_{\gamma_{t}}=\gamma_{t}^{-2}\log(e^{-\gamma_{t}}/(1-\gamma_{t}))\). Note that \(C_{\gamma_{t}}\geq 0\) for \(\gamma_{t}\in[0,1]\) and \(C_{\gamma_{t}}\to\infty\) as \(\gamma_{t}\to 1\). Therefore,
\[0\leq\log\left(\frac{e^{-\eta_{t}\Pi^{t}\nabla_{i}f^{t}}}{1-\eta_{t}\Pi^{t} \nabla_{i}f^{t}}\right)\leq C_{\gamma_{t}}\eta_{t}^{2}(\Pi^{t}\nabla_{i}f^{t} )^{2}. \tag{53}\]
Giving the required inequality
\[D(u|w^{t+1})-D(u|w^{t}) \leq \eta_{t}u\cdot(\Pi^{t}\nabla f^{t})+C_{\gamma_{t}}\eta_{t}^{2}u \cdot(\Pi^{t}\nabla f^{t})^{2}. \tag{54}\]
### Proof of Theorem 4
Proof.: By Theorem 7,
\[D(w^{*}|w^{t+1})-D(w^{*}|w^{t}) \leq \eta_{t}\nabla f^{t}\cdot(w^{*}-w^{t})+C_{\gamma_{t}}\eta_{t}^{2} w^{*}\cdot(\Pi^{t}\nabla f^{t})^{2}. \tag{55}\]
By convexity of \(f\), rearranging gives
\[\eta_{t}\big{(}f(w^{t})-f(w^{*})\big{)}\leq D(w^{*}|w^{t})-D(w^{*}|w^{t+1})+C_{ \gamma_{t}}\eta_{t}^{2}w^{*}\cdot(\Pi^{t}\nabla f^{t})^{2}. \tag{56}\]
By Theorem 8, we have that
\[f(w^{t})\geq f(w^{t+1})+\frac{\eta_{t}}{2}w^{t}\cdot(\Pi^{t}\nabla f^{t})^{2}. \tag{57}\]
Repeatedly applying this inequality gives
\[f(w^{T})+\frac{1}{2}\sum_{k=t}^{T-1}\eta_{k}w^{k}\cdot(\Pi^{k}\nabla f^{k})^{ 2}\leq f(w^{t}), \tag{58}\]
for \(t\leq T-1\). Thus, (56) gives the bound
\[\eta_{t}(f(w^{T})-f(w^{*}))\leq D(w^{*}|w^{t}) - D(w^{*}|w^{t+1})+C_{\gamma_{t}}\eta_{t}^{2}w^{*}\cdot(\Pi^{t} \nabla f^{t})^{2}\] \[-\frac{1}{2}\eta_{t}\sum_{k=t}^{T-1}\eta_{k}w^{k}\cdot(\Pi^{k} \nabla f^{k})^{2}.\]
Summing over time and collapsing the sum gives
\[(f(w^{T})-f(w^{*}))\sum_{t=0}^{T-1}\eta_{t}\leq D(w^{*}|w^{0}) - D(w^{*}|w^{T})\] \[+\sum_{t=0}^{T-1}C_{\gamma_{t}}\eta_{t}^{2}w^{*}\cdot(\Pi^{t} \nabla f^{t})^{2}\] \[-\frac{1}{2}\sum_{t=0}^{T-1}\sum_{k=t}^{T-1}\eta_{k}\eta_{t}w^{k} \cdot(\Pi^{k}\nabla f^{k})^{2}.\]
We can rewrite the last term as
\[\sum_{t=0}^{T-1}\sum_{k=t}^{T-1}\eta_{k}\eta_{t}w^{k}\cdot(\Pi^{k}\nabla f^{k} )^{2}=\sum_{t=0}^{T-1}\sum_{k=0}^{t}\eta_{k}\eta_{t}w^{t}\cdot(\Pi^{t}\nabla f ^{t})^{2}. \tag{61}\]
Thus, the last two terms of (60) can be bounded by
\[S := \sum_{t=0}^{T-1}\eta_{t}\left(C_{\gamma_{t}}\eta_{t}w^{*}\cdot( \Pi^{t}\nabla f^{t})^{2}-\frac{1}{2}\sum_{k=0}^{t}\eta_{k}w^{t}\cdot(\Pi^{t} \nabla f^{t})^{2}\right) \tag{62}\] \[\leq \sum_{t=0}^{T-1}\eta_{t}^{2}\left(C_{\gamma_{t}}\max_{i}\,(\Pi^{t }\nabla f^{t})_{i}^{2}-\frac{t+1}{2}w^{t}\cdot(\Pi^{t}\nabla f^{t})^{2}\right), \tag{63}\]
since \(\{\eta_{t}\}_{t}\) is a decreasing sequence and \(w^{*}\in\Delta^{n}\). By assumption,
\[C_{\gamma_{t}}\leq\frac{(t+1)\ w^{t}\cdot(\Pi^{t}\nabla f^{t})^{2}}{2\max_{i}{( \Pi^{t}\nabla f^{t})_{i}^{2}}}, \tag{64}\]
thus \(S\leq 0\), and (60) gives
\[f(w^{T})-f(w^{*})\leq\frac{D(w^{*}|w^{0})-D(w^{*}|w^{T})}{T\eta_{T}}. \tag{65}\]
Taking \(w^{0}=(1/n,\ldots,1/n)\), then \(D(u|w^{0})\leq\log(n)\) for all \(u\in\Delta^{n}\). Since relative entropy is non-negative,
\[f(w^{T})-f(w^{*})\leq\frac{\log(n)}{T\,\eta_{T}}. \tag{66}\]
**Theorem 8** (Linear Progress Bound).: _Let \(f\) be convex, differentiable, and \(\nabla f\) is \(L\)-Lipschitz continuous. Then each iteration of the linear Cauchy-Simplex (7) guarantees_
\[f(w^{t+1})\leq f(w^{t})-\frac{\eta_{t}}{2}\text{Var}[\nabla f^{t}\,|\,w^{t}], \quad\text{for}\quad\eta_{t}\leq\min\bigg{\{}\frac{1}{L},\eta_{t,\max}\bigg{\}}, \tag{67}\]
_where \(\eta_{t,\max}\) is defined in (7)._
Proof.: Since \(f\) is convex with \(\nabla f\) Lipschitz continuous, we have the inequality [62]
\[f(w^{t+1})\leq f(w^{t})+\nabla f^{t}\cdot(w^{t+1}-w^{t})+\frac{L}{2}\|w^{t+1}- w^{t}\|^{2}. \tag{68}\]
Our iteration scheme gives that
\[\|w^{t+1}-w^{t}\|^{2}=\eta_{t}^{2}\sum_{i}\Big{(}w_{i}^{t}\,\Pi^{t}\nabla_{i}f ^{t}\Big{)}^{2}\leq\eta_{t}^{2}\sum_{i}w_{i}^{t}(\Pi^{t}\nabla_{i}f^{t})^{2}, \tag{69}\]
since \(0\leq w_{i}^{t}\leq 1\). Hence
\[f(w^{t+1})\leq f(w^{t})-\eta_{t}\text{Var}[\nabla f^{t}\,|\,w^{t}]+\eta_{t}^{2}\frac{L}{2}\text{Var}[\nabla f^{t}\,|\,w^{t}]. \tag{70}\]
Since \(\eta_{t}\leq 1/L\), the quadratic term is at most \(\frac{\eta_{t}}{2}\text{Var}[\nabla f^{t}\,|\,w^{t}]\), so
\[f(w^{t+1})\leq f(w^{t})-\frac{\eta_{t}}{2}\text{Var}[\nabla f^{t}\,|\,w^{t}], \quad\text{for}\quad\eta_{t}\leq\min\bigg{\{}\frac{1}{L},\eta_{t,\max}\bigg{\}}. \tag{71}\]
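To make this progress guarantee concrete, below is a minimal numerical sketch (not from the original paper) of the linear Cauchy-Simplex step implied by the norm identity in the proof, i.e. the componentwise update \(w^{t+1}=w^{t}\bigl(1-\eta_{t}\,\Pi^{t}\nabla f^{t}\bigr)\); the quadratic objective, step-size choice, and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Convex quadratic f(w) = 0.5 * w^T A w on the simplex, with A positive semi-definite.
M = rng.normal(size=(5, 5))
A = M @ M.T
f = lambda w: 0.5 * w @ A @ w
grad = lambda w: A @ w
L = np.linalg.eigvalsh(A).max()              # Lipschitz constant of grad f

w = np.full(5, 1.0 / 5)                      # uniform starting point w^0
for t in range(100):
    g = grad(w)
    pi_g = g - w @ g                         # projected gradient Pi^t grad f^t
    eta_max = 1.0 / pi_g.max() if pi_g.max() > 0 else np.inf
    eta = min(1.0 / L, 0.9 * eta_max)        # eta_t <= min{1/L, eta_{t,max}}
    var = w @ pi_g**2                        # Var[grad f^t | w^t]
    w_next = w * (1.0 - eta * pi_g)          # multiplicative update, stays on the simplex
    # Theorem 8: f(w^{t+1}) <= f(w^t) - (eta_t / 2) * Var[grad f^t | w^t]
    assert f(w_next) <= f(w) - 0.5 * eta * var + 1e-10
    w = w_next

print("final objective:", f(w), " weights sum to:", w.sum())
```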
### Proof of Theorem 5
Proof.: Rearranging Theorem 7 gives
\[-\eta_{t}u\cdot(\Pi^{t}\nabla f^{t})\leq D(u|w^{t})-D(u|w^{t+1})+C_{\gamma_{t }}\eta_{t}^{2}u\cdot(\Pi^{t}\nabla f^{t})^{2}. \tag{72}\]
Since \(\nabla f^{t}=l^{t}\), we have the inequality
\[-1\leq\Pi^{t}\nabla_{i}f^{t}=l_{i}^{t}-w^{t}\cdot l^{t}\leq 1, \tag{73}\]
as \(l_{i}^{t}\in[0,1]\). Thus dividing (72) by \(\eta_{t}\) gives
\[w^{t}\cdot l^{t}-u\cdot l^{t}\leq\frac{D(u|w^{t})-D(u|w^{t+1})}{\eta_{t}}+C_{ \gamma_{t}}\eta_{t}, \tag{74}\]
where \(\eta_{t}=\gamma_{t}\eta_{t,\max}\), for some \(\gamma_{t}\in(0,1)\).
Since the maximum learning rate has the lower bound
\[\eta_{t,\max}=\frac{1}{\max_{i}l_{i}^{t}-w^{t}\cdot l^{t}}\geq\frac{1}{\max_{i }l_{i}^{t}}\geq 1, \tag{75}\]
we can take a fixed learning rate \(\eta_{t}=\eta\in(0,1)\). Moreover, \(\gamma_{t}=\eta/\eta_{t,\max}\leq\eta\). Since \(C_{\gamma_{t}}\) is an increasing function of \(\gamma_{t}\), \(C_{\gamma_{t}}\leq C_{\eta}\), thus giving the bound
\[w^{t}\cdot l^{t}-u\cdot l^{t}\leq\frac{D(u|w^{t})-D(u|w^{t+1})}{ \eta}+C_{\eta}\eta. \tag{76}\]
Summing over time and collapsing the sum gives the bound
\[\sum_{t=1}^{T}w^{t}\cdot l^{t}-\sum_{t=1}^{T}u\cdot l^{t} \leq \frac{D(u|w^{1})-D(u|w^{T+1})}{\eta}+TC_{\eta}\eta \tag{77}\] \[\leq \frac{D(u|w^{1})}{\eta}+TC_{\eta}\eta\] (78) \[= \frac{D(u|w^{1})}{\eta}+\frac{T\log(e^{-\eta}/(1-\eta))}{\eta}, \tag{79}\]
by definition of \(C_{\eta}\). Using the inequality \(\log(e^{-x}/(1-x))/x\leq x/(2(1-x))\) for \(0\leq x\leq 1\),
\[\sum_{t=1}^{T}w^{t}\cdot l^{t}-\sum_{t=1}^{T}u\cdot l^{t} \leq \frac{D(u|w^{1})}{\eta}+\frac{T\eta}{2(1-\eta)}. \tag{80}\]
Let \(w^{1}=(1/N,\ldots,1/N)\); then \(D(u|w^{1})\leq\log(N)\) for all \(u\in\Delta^{N}\). This gives the desired bound
\[\sum_{t=1}^{T}w^{t}\cdot l^{t}-\sum_{t=1}^{T}u\cdot l^{t} \leq \frac{\log(N)}{\eta}+\frac{T\eta}{2(1-\eta)}. \tag{81}\]
The right side of this inequality is minimized when \(\eta=\frac{\sqrt{2\log(N)}}{\sqrt{2\log(N)}+\sqrt{T}}<1\). Substituting this value gives the bound
\[\sum_{t=1}^{T}w^{t}\cdot l^{t}-\sum_{t=1}^{T}u\cdot l^{t} \leq \sqrt{2T\log(N)}+\log(N). \tag{82}\]
### Proof of Theorem 6
Proof.: Rearranging Theorem 7 gives
\[-\eta_{t}u\cdot(\Pi^{t}\nabla f^{t})\leq D(u|w^{t})-D(u|w^{t+1})+C_{\gamma_{t }}\eta_{t}^{2}u\cdot(\Pi^{t}\nabla f^{t})^{2}. \tag{83}\]
Since \(\nabla f^{t}=-x^{t}/(w^{t}\cdot x^{t})\), we have the inequality
\[-\frac{1}{a}\leq\Pi^{t}\nabla_{i}f^{t}=1-x_{i}^{t}/(w^{t}\cdot x^{t})\leq 1, \tag{84}\]
as \(x_{i}^{t}\in[a,1]\). Thus dividing by \(\eta_{t}\) gives the bound
\[\frac{u\cdot x^{t}}{w^{t}\cdot x^{t}}-1\leq\frac{D(u|w^{t})-D(u|w^{t+1})}{\eta_{t}}+\frac{C_{\gamma_{t}}\eta_{t}}{a^{2}}. \tag{85}\]
Using the inequality \(e^{x}-1\geq x\) for all \(x\) gives
\[\log\left(\frac{u\cdot x^{t}}{w^{t}\cdot x^{t}}\right)\leq\frac{D(u|w^{t})-D(u|w^{t+1})}{\eta_{t}}+\frac{C_{\gamma_{t}}\eta_{t}}{a^{2}}. \tag{86}\]
Since the maximum learning rate has the lower bound
\[\eta_{t,\max}=\frac{1}{\max_{i}(1-x_{i}^{t}/w^{t}\cdot x^{t})}= \frac{1}{1-\min_{i}(x_{i}^{t}/w^{t}\cdot x^{t})}\geq 1, \tag{87}\]
we can take a fixed learning rate \(\eta_{t}=\eta\).
Following the steps from Theorem 5 gives the bound
\[\sum_{t=1}^{T}\log(u\cdot x^{t})-\sum_{t=1}^{T}\log(w^{t}\cdot x^{t})\leq\frac{D(u|w^{1})}{\eta}+\frac{T\eta}{2a^{2}(1-\eta)}. \tag{88}\]
Taking \(w^{1}=(1/N,\ldots,1/N)\) and minimizing the right-hand side of the inequality w.r.t. \(\eta\) gives \(\eta=\frac{a\sqrt{2\log(N)}}{a\sqrt{2\log(N)}+\sqrt{T}}\). This gives the bound
\[\sum_{t=1}^{T}\log(u\cdot x^{t})-\sum_{t=1}^{T}\log(w^{t}\cdot x^{t})\leq\frac{\sqrt{2T\log(N)}}{a}+\log(N). \tag{89}\]
## Appendix B Karush-Kuhn-Tucker Conditions
The Karush-Kuhn-Tucker (KKT) conditions are first-order conditions that are necessary but not, in general, sufficient for optimality in constrained optimization problems. For convex problems, they become sufficient for optimality.
Consider a general constrained optimization problem
\[\min_{w}f(w)\qquad\text{s.t.}\qquad g_{i}(w)\leq 0\quad\text{and}\quad h_{j}(w)=0, \tag{90}\]
for \(i=1,...,n\) and \(j=1,...,m\). The (primal) Lagrangian is defined as
\[\mathcal{L}(w,\alpha,\beta)=f(w)+\sum_{i}\alpha_{i}g_{i}(w)+\sum _{j}\beta_{j}h_{j}(w). \tag{91}\]
Consider the new optimization problem
\[\theta(w)=\max_{\alpha,\beta;\ \alpha_{i}\geq 0}\mathcal{L}(w,\alpha,\beta). \tag{92}\]
Note that
\[\theta(w)=\begin{cases}f(w)&\text{if $w$ satisfies the constraints}\\ \infty&\text{otherwise.}\end{cases} \tag{93}\]
Hence, the multipliers \(\alpha_{i}\) and \(\beta_{j}\) act as penalties that drive \(\theta(w)\) to infinity whenever a constraint is violated.
To solve (90), we can instead consider the new optimization problem
\[\min_{w}\theta(w)=\min_{w}\max_{\alpha,\beta;\ \alpha_{i}\geq 0}\mathcal{L}(w, \alpha,\beta). \tag{94}\]
Assume \(f\) and \(g_{i}\) are convex, \(h_{j}\) is affine, and the constraints are feasible. A solution \((w^{*},\alpha^{*},\beta^{*})\) is an optimal solution to (90) if the following conditions, known as the KKT conditions, are satisfied:
\[\frac{\partial}{\partial w_{i}}\mathcal{L}(w^{*},\alpha^{*}, \beta^{*}) = 0\qquad\text{(Stationarity)} \tag{95}\] \[\alpha^{*}_{i}g_{i}(w^{*}) = 0\qquad\text{(Complementary Slackness)}\] (96) \[g_{i}(w^{*})\ \leq\ 0,\ h_{j}(w) = 0\qquad\text{(Primal Feasibility)}\] (97) \[\alpha^{*}_{i} \geq 0\qquad\text{(Dual Feasibility)}, \tag{98}\]
for all \(i\) and \(j\).
When the constraint is a simplex, the Lagrangian becomes
\[\mathcal{L}(w,\alpha,\beta)=f(w)-\sum_{i}\alpha_{i}w_{i}+\beta \left(\sum_{i}w_{i}-1\right). \tag{99}\]
Thus stationarity gives
\[\frac{\partial}{\partial w_{i}}\mathcal{L}=\nabla_{i}f-\alpha_{i }+\beta=0. \tag{100}\]
Let \(Q=\{i:w_{i}=0\}\) be the active set and \(S=\{i:w_{i}>0\}\) be the support. Complementary slackness requires \(\alpha_{i}=0\) for \(i\in S\), so stationarity gives \(\beta=-\nabla_{i}f\) for \(i\in S\), _i.e._ the gradient is constant on the support. For \(i\in Q\), dual feasibility and stationarity require \(\alpha_{i}=\beta+\nabla_{i}f\geq 0\).
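As a quick numerical illustration of these simplex KKT conditions (not part of the original paper), the sketch below minimizes the toy objective \(f(w)=\tfrac{1}{2}\|w-c\|^{2}\) over the simplex via Euclidean projection and then verifies stationarity, complementary slackness, and dual feasibility; the objective and all names are illustrative assumptions.

```python
import numpy as np

def project_to_simplex(c):
    """Euclidean projection of c onto the probability simplex (standard sort-based rule)."""
    u = np.sort(c)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / (np.arange(len(c)) + 1) > 0)[0][-1]
    tau = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(c - tau, 0.0)

# f(w) = 0.5 * ||w - c||^2; its simplex-constrained minimizer is the projection of c.
c = np.array([0.9, 0.3, -0.2, 0.5])
w_star = project_to_simplex(c)
grad = w_star - c                            # gradient of f at w_star

support = w_star > 1e-12                     # S = {i : w_i > 0}
active = ~support                            # Q = {i : w_i = 0}

beta = -grad[support].mean()                 # stationarity on the support: grad_i = -beta
alpha = grad + beta                          # alpha_i = beta + grad_i, zero on the support

print("gradient constant on support:", np.allclose(grad[support], -beta))
print("complementary slackness:", np.allclose(alpha[support], 0.0, atol=1e-10))
print("dual feasibility on active set:", bool(np.all(alpha[active] >= -1e-10)))
```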
## Acknowledgements
We thank Prof. Johannes Ruf for the helpful discussion and his suggestion for potential applications in the multi-armed bandit problem, which ultimately helped the proof for universal portfolios.
|
2306.10047
|
Neighborhood-based Hard Negative Mining for Sequential Recommendation
|
Negative sampling plays a crucial role in training successful sequential
recommendation models. Instead of merely employing random negative sample
selection, numerous strategies have been proposed to mine informative negative
samples to enhance training and performance. However, few of these approaches
utilize structural information. In this work, we observe that as training
progresses, the distributions of node-pair similarities in different groups
with varying degrees of neighborhood overlap change significantly, suggesting
that item pairs in distinct groups may possess different negative
relationships. Motivated by this observation, we propose a Graph-based Negative
sampling approach based on Neighborhood Overlap (GNNO) to exploit structural
information hidden in user behaviors for negative mining. GNNO first constructs
a global weighted item transition graph using training sequences. Subsequently,
it mines hard negative samples based on the degree of overlap with the target
item on the graph. Furthermore, GNNO employs curriculum learning to control the
hardness of negative samples, progressing from easy to difficult. Extensive
experiments on three Amazon benchmarks demonstrate GNNO's effectiveness in
consistently enhancing the performance of various state-of-the-art models and
surpassing existing negative sampling strategies. The code will be released at
\url{https://github.com/floatSDSDS/GNNO}.
|
Lu Fan, Jiashu Pu, Rongsheng Zhang, Xiao-Ming Wu
|
2023-06-12T12:28:54Z
|
http://arxiv.org/abs/2306.10047v1
|
# Neighborhood-based Hard Negative Mining for Sequential Recommendation
###### Abstract.
Negative sampling plays a crucial role in training successful sequential recommendation models. Instead of merely employing random negative sample selection, numerous strategies have been proposed to mine informative negative samples to enhance training and performance. However, few of these approaches utilize structural information. In this work, we observe that as training progresses, the distributions of node-pair similarities in different groups with varying degrees of neighborhood overlap change significantly, suggesting that item pairs in distinct groups may possess different negative relationships. Motivated by this observation, we propose a graph-based negative sampling approach based on neighborhood overlap (GNNO) to exploit structural information hidden in user behaviors for negative mining. GNNO first constructs a global weighted item transition graph using training sequences. Subsequently, it mines hard negative samples based on the degree of overlap with the target item on the graph. Furthermore, GNNO employs curriculum learning to control the hardness of negative samples, progressing from easy to difficult. Extensive experiments on three Amazon benchmarks demonstrate GNNO's effectiveness in consistently enhancing the performance of various state-of-the-art models and surpassing existing negative sampling strategies. The code will be released at [https://github.com/floatSDSDS/GNNO](https://github.com/floatSDSDS/GNNO).
sequential recommendation, hard negative mining, graph mining
Footnote †: Corresponding author.
a significant shift in the distribution is observed. In this study, we treat item pairs in _group-medium_ as hard negative pairs, positing that they contribute more information during the training process. Since item pairs in _group-high_ have strong connections and are likely to be false negatives, we propose to exclude them from the item pool for negative sampling.
Specifically, we propose a graph-based _n_egative sampling approach based on _n_eighborhood overlap (GNNO). GNNO first constructs a WITG as described in Sec 3.2. Utilizing the built WITG, it selects negative samples for each target item by considering the extent of the neighborhood overlap between the target item and any other item. Additionally, GNNO employs curriculum learning (CL) to adjust the maximum hardness of negative samples, ranging from easy to hard. Among existing graph-based negative sampling strategies, GNNO is most similar to RecNS (Zhu et al., 2017) in that we also perform region division and propose sampling variations that account for unique characteristics in each region. The key differences between GNNO and RecNS are three-fold: (1) GNNO is designed for sequential recommendation and explicitly utilizes structural information in user behavior sequences; (2) Rather than dividing the sampling region by \(k\)-hop distance, GNNO creates a negative sampler that samples from a distribution based on the relative Jaccard index; (3) GNNO suggests negative samples from distant regions are indispensable for training an SR model. In summary, the contributions of this paper include:
* To the best of our knowledge, this is the first work to investigate the structural properties of negative samples in SR. We observed that item pairs with varying levels of neighborhood overlap on WITG may exhibit distinct characteristics, signifying a valuable signal for hard negative discrimination in sequential recommendation.
* We introduce GNNO, which adaptively samples negatives for each item based on the relative Jaccard similarity on WITG. We also employ CL to control the maximum hardness of negative samples during training.
* Comprehensive comparative experiments on three Amazon benchmarks demonstrate the effectiveness of our proposed method.
## 2. Related Works
_Negative Sampling Methods for Sequential Recommendation._ As mentioned in Sec. 1, attempts for informative negative sampling for SR are broadly categorized into two groups: sampling-based methods (Beng et al., 2016; Chen et al., 2017; Chen et al., 2018; Liu et al., 2019; Liu et al., 2019) and generation-based methods (Chen et al., 2017; Liu et al., 2019; Liu et al., 2019; Liu et al., 2019). ANCE (Liu et al., 2019) proposes to sample hard negatives globally by using an asynchronously updated ANN index. CBNS (Liu et al., 2019) proposes to sample negatives cross-batch other than simply in-batch sampling. SRNS (Chen et al., 2017) leverages the variance-based characteristics of false negatives and hard negatives to improve the sampling strategy.
_Graph-based Negative Sampling Methods._ Existing graph-based methods for negative sampling mostly concentrate on collaborative filtering (CF)-based recommendations (Chen et al., 2017; Zhu et al., 2017), graph contrastive learning (Liu et al., 2019; Liu et al., 2019; Liu et al., 2019; Liu et al., 2019). MixGCF (Chen et al., 2017) efficiently injects information of positive samples into negative samples via a mix-up mechanism. HORACE (Liu et al., 2019) and STENCIL (Liu et al., 2019) study heterogeneous graph contrastive learning and utilize global topology features including PageRank (Kip
## 3. Method
In this section, we start with the problem statement of sequential recommendation. Subsequently, we present our proposed approach. As illustrated in Figure 2, GNNO comprises three modules: (1) WITG construction; (2) overlapping-based negative sampling distribution generator; and (3) curriculum scheduler for negative sampling.
### Sequential Recommendation
Formally, let \(\mathcal{U}\) and \(\mathcal{I}\) denote the user and item sets from the interaction sequences, respectively. For each user \(u\in\mathcal{U}\), we use a chronologically ordered list \(\mathbf{s}=[i_{1},i_{2},...,i_{N}]\) to denote his or her behavior sequence, where \(i_{t}\) is the \(t^{th}\) interaction item of \(u\) and \(N\) is the length of the sequence. The task of SR is to predict the subsequent user-item interactions with the given sequence \(\mathbf{s}\).
Training an SR model requires choosing an objective function \(L\) and a negative sampling distribution \(p_{n}\). The commonly adopted Bayesian Personalized Ranking (BPR) loss (Hinton et al., 2015) w.r.t. all training sequences and time steps is defined as follows:
\[L_{BPR}=\sum_{(\mathbf{s}_{t},i_{t})}-\log\sigma(\hat{y}(\mathbf{s}_{t},i_{t}) -\hat{y}(\mathbf{s}_{t},i_{t}^{-})), \tag{1}\]
where \(\mathbf{s}_{t}=[i_{1},i_{2},...,i_{t-1}]\) is the historical sub-sequence of \(\mathbf{s}\) at time step \(t\), \(i_{t}\) is the target item, \(\sigma\) denotes the sigmoid function, and \(\hat{y}(\mathbf{s},i)\) is a trainable network that predicts a matching score between a sequence \(\mathbf{s}\) and an item \(i\). \(i_{t}^{-}\sim p_{n}\) is a negative item sampled from a distribution \(p_{n}\). Optimizing Eq. (1) trains the model to score the target item \(i_{t}\) higher than the negative item \(i_{t}^{-}\) for a given \(\mathbf{s}_{t}\).
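For concreteness, the following is a minimal NumPy sketch (not the paper's released code) of the BPR objective in Eq. (1), assuming the matching scores \(\hat{y}\) have already been produced by some sequence encoder; the toy scores and names are illustrative.

```python
import numpy as np

def bpr_loss(pos_scores, neg_scores):
    """BPR loss of Eq. (1): mean of -log sigmoid(y_hat(s_t, i_t) - y_hat(s_t, i_t^-))
    over a batch of (sequence, target, sampled negative) triples."""
    diff = pos_scores - neg_scores
    # -log sigmoid(x) = log(1 + exp(-x)), written in a numerically stable form
    return np.mean(np.logaddexp(0.0, -diff))

# toy batch: scores for the target items and for one sampled negative per sequence
pos = np.array([2.3, 0.7, 1.1])
neg = np.array([0.5, 0.9, -0.4])
print(bpr_loss(pos, neg))
```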
### Weighted Item Transition Graph (WITG)
In contrast to the commonly used user-item graph, to make use of global information in behavior sequences, we propose to mine hard negatives on a WITG \(\mathcal{G}(\mathcal{I},\mathcal{E})\)(Zhou et al., 2017) where \(\mathcal{E}\) is the edge set. A WITG contains global item transition patterns extracted from all user behavior sequences in the training set \(\mathcal{D}\). \(\mathcal{G}\) is constructed by traversing every sequence in \(\mathcal{D}\). For a sequence \(\mathbf{s}\in\mathcal{D}\), if there exists no edge between the items \(i_{m}\) and \(i_{(m+k)}\) in \(\mathcal{G}\), we connect them and set the edge weight \(w(i_{m},i_{(m+k)})=1/k\), where the weight \(1/k\) reflects the importance of the \(k\)-hop neighbor \(i_{(m+k)}\) to the target item \(i_{m}\) in \(\mathbf{s}\). Otherwise, if there is already an edge between them, we update the edge weight as \(w(i_{m},i_{(m+k)})\leftarrow w(i_{m},i_{(m+k)})+1/k\).
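A minimal sketch of this construction (not the authors' released implementation) is given below; the `max_hop` truncation and all names are assumptions made for brevity, since the text does not bound \(k\) explicitly.

```python
from collections import defaultdict

def build_witg(sequences, max_hop=2):
    """Weighted item transition graph: for each sequence, connect i_m with its k-hop
    successor i_{m+k} (here k <= max_hop) and accumulate the edge weight by 1/k."""
    weight = defaultdict(float)           # undirected edge (i, j) -> accumulated weight
    neighbors = defaultdict(set)          # adjacency lists, used later for Jaccard overlap
    for seq in sequences:
        for m, item in enumerate(seq):
            for k in range(1, max_hop + 1):
                if m + k >= len(seq):
                    break
                j = seq[m + k]
                weight[(min(item, j), max(item, j))] += 1.0 / k
                neighbors[item].add(j)
                neighbors[j].add(item)
    return weight, neighbors

# toy training sequences of item ids
train = [[1, 2, 3, 4], [2, 3, 5], [1, 3, 5, 2]]
w, nbr = build_witg(train)
print(dict(w))
print({i: sorted(s) for i, s in nbr.items()})
```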
### Neighborhood-based Negative Sampler
Given a target item, we can divide the other items into groups w.r.t. their neighborhood overlap with the target item. As analyzed in Sec. 1, these groups demonstrate distinct characteristics during the training of an SR model, so it can be advantageous to treat them differently. Here, we propose to use the Jaccard similarity to measure the degree of neighborhood overlap between the target item and the others. Let \(\mathcal{N}(i)\) denote the neighbor set of an item \(i\) on \(\mathcal{G}\). The Jaccard similarity between any two items \(i\) and \(j\) can be defined as:
\[J(i,j)=\frac{|\mathcal{N}(i)\cap\mathcal{N}(j)|}{|\mathcal{N}(i)\cup\mathcal{N}(j)|}. \tag{2}\]
If \(i\) and \(j\) have many common neighbors, they will have a large similarity. Meanwhile, the denominator acts as a normalizer, and the similarity will be small if the degree of either \(i\) or \(j\) is large.
As explained in Figure 1, items in _group-medium_ are likely to be quality hard negatives, hence we want to give them higher sampling weight. Meanwhile, we want to keep _group-zero_ items in the sampling pool for diversity, but with low sampling probability. Finally, we want to exclude _group-high_ items, as they are likely to be false negatives. Therefore, we propose to sample the negatives for a target item \(i\) from the following distribution:
\[p_{n}(i^{-}|i)=\frac{e^{J(i,i^{-})}}{\sum_{j\in\mathcal{N}^{\prime}(i)}e^{J(i,j)}}, \tag{3}\]
where \(\mathcal{N}^{\prime}(i)=\mathcal{I}\setminus\{j|J(i,j)>\lambda\}\) is the item set without _group-high_, and \(\lambda\) is the threshold for _group-high_ at the current training step.
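The following sketch (again not the released code) puts Eqs. (2)-(3) together: it computes the Jaccard overlap on a toy WITG adjacency and draws one negative from the softmax distribution after discarding group-high items above the threshold \(\lambda\); excluding the target itself from the pool is an extra assumption.

```python
import numpy as np

# toy WITG adjacency, e.g. as produced by the construction sketch above
nbr = {1: {2, 3, 5}, 2: {1, 3, 4, 5}, 3: {1, 2, 4, 5}, 4: {2, 3}, 5: {1, 2, 3}}

def jaccard(i, j):
    """Jaccard similarity of Eq. (2) between the neighbor sets of items i and j."""
    a, b = nbr.get(i, set()), nbr.get(j, set())
    union = len(a | b)
    return len(a & b) / union if union else 0.0

def sample_negative(target, all_items, lam, rng):
    """Draw one negative for `target` from Eq. (3), excluding group-high items
    whose Jaccard score exceeds the current threshold `lam`."""
    pool = [j for j in all_items if j != target and jaccard(target, j) <= lam]
    sims = np.array([jaccard(target, j) for j in pool])
    probs = np.exp(sims) / np.exp(sims).sum()
    return pool[rng.choice(len(pool), p=probs)]

rng = np.random.default_rng(0)
print(sample_negative(3, sorted(nbr), lam=0.6, rng=rng))
```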
### Curriculum Scheduler for Negative Sampling
Concentrating too much on hard negatives in the early training stage may harm the model (Beng et al., 2019). We therefore employ curriculum learning (CL) techniques to schedule the hardness of the negatives. The main idea of CL is to order negative samples during training based on their difficulty (Kang et al., 2019). Let \(Q\) be the maximum training step; then for each training step \(q\in\{1,\ldots,Q\}\), we update \(\lambda\) to form \(\mathcal{N}^{\prime}(i)\) in Eq. (3). The maximum Jaccard score is updated from an initial hardness \(b\) with a linear pacing function \(f(q)\):
\[f(q)=c\,q+b, \tag{4}\]
where \(q\) is the current time step and \(c\) is the pace coefficient. For simplicity, we set \(b=0\) in our experiments. Moreover, we set a maximum hardness \(\lambda_{max}\) to clip the growth of \(\lambda\). Finally, \(\lambda\) is updated as:
\[\lambda(q)=\min(f(q),\lambda_{max}). \tag{5}\]
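As a small sketch (illustrative only), the pacing rule of Eqs. (4)-(5) can be written as below, using the Beauty hyper-parameters from Table 2 (\(c=0.04\), \(\lambda_{max}=0.5\)) and \(b=0\):

```python
def hardness_threshold(q, c=0.04, b=0.0, lam_max=0.5):
    """Maximum Jaccard score allowed for negatives at training step q (Eqs. 4-5):
    lambda(q) grows linearly from b at pace c and is clipped at lam_max."""
    return min(c * q + b, lam_max)

for q in (0, 5, 10, 20):
    print(q, hardness_threshold(q))
```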
## 4. Experiments
### Experimental Setup
#### 4.1.1. Datasets
Our experiments are conducted on three subsets of the well-known Amazon dataset1(Kang et al., 2019), which includes rich user-item review interactions. Specifically, we choose the sub-datasets _Beauty_, _Phone_, and _Toys_ for our empirical study. In our experiments, we use the 5-core data. Items and users with no positive records are filtered out for each subset. The statistics of the filtered datasets are shown in Table 1.
Footnote 1: [http://jmcauley.ucsd.edu/data/amazon](http://jmcauley.ucsd.edu/data/amazon)
#### 4.1.2. Baselines
We compare our method with both state-of-the-art sequential recommendation methods and negative sampling strategies. As shown in Table 4, we compare our method with the following baselines for sequential recommendation: **BPRMF**(Hinton et al., 2015), **Caser**(Kang et al., 2019), **GRU4Rec**(Kang et al., 2019), **BERT4Rec**(Kang et al., 2019), **SASRec**(Kang et al., 2019), **TimiRec**(Kang et al., 2019), and **ContraRec**(Kang et al., 2019). Meanwhile, as shown in Table 3, we also compare our method with the following negative sampling
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & **user** & **item** & **entity** & **edge** & **sparsity** \\ \hline Beauty & 22,363 & 12,101 & 195,502 & 530,266 & 20.015 \\ Toys & 19,413 & 11,925 & 148,455 & 486,740 & 16.325 \\ Phone & 22,364 & 12,102 & 176,139 & 530,266 & 24.555 \\ \hline \hline \end{tabular}
\end{table}
Table 1. Dataset statistics.
approaches: **DNS**(Kang et al., 2018) that adaptively samples the negative item scored highest by the recommender, **MCNS**(Kang et al., 2018) that samples negatives with the distribution sub-linearly related to the positive distribution and accelerates the sampling process by Metropolis-Hastings, and **MixGCF**(Chen et al., 2019) that injects positive samples into negatives via hop mixing to synthesize hard negatives. Note that MCNS and MixGCF are graph-based but DNS is not.
#### 4.1.3. Implementation details
All recommendation baselines are implemented with ReChorus2, a framework for top-\(k\) recommendation. GNNO and other negative sampling strategies are implemented on the state-of-the-art SR method ContraRec. We use the default settings of ContraRec. Specifically, we use Adam as the optimizer and BERT4Rec as the sequence encoder. The batch size is set to 4096, and the dimension of hidden units is set to 64. The hyper-parameters of GNNO, as shown in Table 2, are selected based on the performances on the validation set using grid search.
Footnote 2: [https://github.com/THUwangcy/ReChorus](https://github.com/THUwangcy/ReChorus)
#### 4.1.4. Evaluation Protocols
Following (Han et al., 2018), we adopt the commonly used leave-one-out strategy to evaluate model performance. We employ two evaluation metrics: **hit rate (HR@K)** and **normalized discounted cumulative gain (NDCG@K)**. For each sequence, the target products are mixed up with candidates randomly sampled from the entire product set, forming a candidate set of size 1000.
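A minimal sketch of these metrics (illustrative, not the ReChorus implementation) is shown below; it assumes the held-out target plus 999 sampled negatives form the 1,000-item candidate set and that each sequence has exactly one relevant item.

```python
import numpy as np

def hr_ndcg_at_k(target_scores, negative_scores, k=20):
    """Leave-one-out HR@K and NDCG@K with a single relevant item per sequence.
    target_scores: (B,) scores of the held-out targets.
    negative_scores: (B, C) scores of the sampled negative candidates."""
    rank = (negative_scores > target_scores[:, None]).sum(axis=1)   # 0 = ranked first
    hit = rank < k
    ndcg = np.where(hit, 1.0 / np.log2(rank + 2.0), 0.0)            # IDCG = 1 here
    return hit.mean(), ndcg.mean()

rng = np.random.default_rng(0)
tgt = rng.normal(size=128) + 1.0             # toy target scores, shifted upward
neg = rng.normal(size=(128, 999))            # 999 sampled negatives per sequence
print(hr_ndcg_at_k(tgt, neg, k=20))
```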
### Results and Analysis
#### 4.2.1. Comparisons with Negative Sampling Baselines
Table 3 summarizes the recommendation performance of GNNO in comparison to existing negative sampling approaches on three Amazon benchmarks. GNNO outperforms the baselines in almost all cases, demonstrating its efficacy. It can be seen that all graph-based methods perform better than DNS, which highlights the advantages of incorporating structural information for negative sampling.
#### 4.2.2. Comparisons with Baselines for Sequential Recommendation
Table 4 shows the recommendation performance of GNNO against state-of-the-art methods for SR on three Amazon benchmarks. GNNO improves over ContraRec on every dataset. It demonstrates the effectiveness of the proposed negative sampling strategy, showing that pushing away negatives with some neighborhood overlap is beneficial for SR. In addition, both ContraRec and GNNO outperform other baselines consistently, where the performance gain is probably brought by the context-context contrastive learning task.
## 5. Conclusion
In this work, we observed that as training progresses, the embedding similarity between item pairs in different groups with varying degrees of neighborhood overlap on a weighted item transition graph (WITG) changes significantly. Based on this observation, we propose GNNO, which samples negatives with respect to the Jaccard index on a global WITG. Additionally, GNNO employs curriculum learning to manage the hardness of negative samples at each training step. Extensive experiments on three Amazon benchmarks demonstrate the effectiveness of our proposed method. In future work, we plan to study hard negative mining over dynamic graphs for sequential recommendation.
## 6. Acknowledgement
We would like to thank the anonymous reviewers for their helpful comments. This research was partially supported by the General Research Fund No.15222220 funded by the UGC of Hong Kong.
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline
**Dataset** & **Method** & **HR@5** & **NDCG@5** & **HR@20** & **NDCG@20** \\ \hline
\multirow{8}{*}{**Beauty**} & **BPRMF** & 0.3588 & 0.2593 & 0.5716 & 0.3202 \\
 & **Caser** & 0.3198 & 0.2238 & 0.5772 & 0.2970 \\
 & **GRU4Rec** & 0.3254 & 0.2318 & 0.5762 & 0.3090 \\
 & **BERT4Rec** & 0.3590 & 0.2658 & 0.5734 & 0.3266 \\
 & **SASRec** & 0.3653 & 0.2780 & 0.5744 & 0.3372 \\
 & **TimiRec** & 0.3781 & 0.2812 & 0.5958 & 0.3433 \\
 & **ContraRec** & 0.4112 & 0.3158 & 0.6111 & 0.3727 \\ \cline{2-6}
 & **GNNO** & **0.4149** & **0.3215** & **0.6174** & **0.3792** \\ \hline
\multirow{8}{*}{**Phone**} & **BPRMF** & 0.3600 & 0.2709 & 0.5945 & 0.3352 \\
 & **Caser** & 0.3873 & 0.2745 & 0.6727 & 0.3550 \\
 & **GRU4Rec** & 0.4122 & 0.2973 & 0.6935 & 0.3777 \\
 & **BERT4Rec** & 0.4209 & 0.3145 & 0.6664 & 0.3849 \\
 & **SASRec** & 0.4432 & 0.3349 & 0.6809 & 0.4032 \\
 & **TimiRec** & 0.4338 & 0.3241 & 0.6712 & 0.3925 \\
 & **ContraRec** & 0.4831 & 0.3673 & 0.7210 & 0.4358 \\ \cline{2-6}
 & **GNNO** & **0.4897** & **0.3754** & **0.7249** & **0.4340** \\ \hline
\multirow{8}{*}{**Toys**} & **BPRMF** & 0.3107 & 0.2255 & 0.5255 & 0.2684 \\
 & **Caser** & 0.2921 & 0.2000 & 0.5584 & 0.2757 \\
 & **GRU4Rec** & 0.3113 & 0.2168 & 0.5802 & 0.2934 \\
 & **BERT4Rec** & 0.3457 & 0.2561 & 0.5582 & 0.3164 \\
 & **SASRec** & 0.3940 & 0.2731 & 0.5694 & 0.3328 \\
 & **TimiRec** & 0.3555 & 0.2628 & 0.5549 & 0.33287 \\
 & **ContraRec** & 0.4013 & 0.3065 & 0.6177 & 0.3676 \\ \cline{2-6}
 & **GNNO** & **0.4074** & **0.3148** & **0.6245** & **0.3762** \\ \hline \hline
\end{table}
Table 4. Comparison of our proposed method with baselines for sequential recommendation on three Amazon reviews sub-datasets. The best results are highlighted in bold.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \(\#neg_{hard}\) & \(\#neg_{rand}\) & \(c\) (Eq. 4) & \(\lambda_{max}\) (Eq. 5) \\ \hline Beauty & 9 & 16 & 0.04 & 0.5 \\ Toys & 2 & 10 & 0.05 & 0.2 \\ Phone & 4 & 10 & 0.01 & 0.9 \\ \hline \hline \end{tabular}
\end{table}
Table 2. Hyper-parameter settings. \(\#neg_{hard}\) and \(\#neg_{rand}\) indicate the number of hard negatives and random negatives, respectively.
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline
**Dataset** & **Method** & **HR@5** & **NDCG@5** & **HR@20** & **NDCG@20** \\ \hline
\multirow{4}{*}{**Beauty**} & **DNS** & 0.3585 & 0.2984 & 0.5509 & 0.3561 \\
 & **MCNS** & 0.4101 & 0.3141 & 0.6315 & 0.3721 \\
 & **MixGCF** & 0.4123 & 0.3168 & 0.6140 & 0.3743 \\ \cline{2-6}
 & **GNNO** & **0.4149** & **0.3215** & **0.6174** & **0.3792** \\ \hline
\multirow{4}{*}{**Phone**} & **DNS** & 0.4307 & 0.3300 & 0.5653 & 0.3943 \\
 & **MCNS** & 0.4794 & 0.3646 & 0.7179 & 0.4343 \\
 & **MixGCF** & **0.4180** & 0.3655 & 0.7170 & 0.4342 \\ \cline{2-6}
 & **GNNO** & **0.4897** & **0.3754** & **0.7249** & **0.4430** \\ \hline
\multirow{4}{*}{**Toys**} & **DNS** & 0.3470 & 0.2666 & 0.5598 & 0.3266 \\
 & **MCNS** & 0.4035 & 0.3058 & 0.6238 & 0.3711 \\
 & **MixGCF** & 0.4049 & 0.3097 & **0.6252** & 0.3723 \\ \cline{2-6}
 & **GNNO** & **0.4074** & **0.3148** & 0.6245 & **0.3762** \\ \hline \hline
\end{table}
Table 3. Comparison of our proposed method with hard negative mining baselines on three Amazon review sub datasets. The best results are highlighted in bold.
|
2301.04095
|
Optimal randomized multilevel Monte Carlo for repeatedly nested
expectations
|
The estimation of repeatedly nested expectations is a challenging task that
arises in many real-world systems. However, existing methods generally suffer
from high computational costs when the number of nestings becomes large. Fix
any non-negative integer $D$ for the total number of nestings. Standard Monte
Carlo methods typically cost at least $\mathcal{O}(\varepsilon^{-(2+D)})$ and
sometimes $\mathcal{O}(\varepsilon^{-2(1+D)})$ to obtain an estimator up to
$\varepsilon$-error. More advanced methods, such as multilevel Monte Carlo,
currently only exist for $D = 1$. In this paper, we propose a novel Monte Carlo
estimator called $\mathsf{READ}$, which stands for "Recursive Estimator for
Arbitrary Depth.'' Our estimator has an optimal computational cost of
$\mathcal{O}(\varepsilon^{-2})$ for every fixed $D$ under suitable assumptions,
and a nearly optimal computational cost of $\mathcal{O}(\varepsilon^{-2(1 +
\delta)})$ for any $0 < \delta < \frac12$ under much more general assumptions.
Our estimator is also unbiased, which makes it easy to parallelize. The key
ingredients in our construction are an observation of the problem's recursive
structure and the recursive use of the randomized multilevel Monte Carlo
method.
|
Yasa Syed, Guanyang Wang
|
2023-01-10T17:36:28Z
|
http://arxiv.org/abs/2301.04095v3
|
# Optimal randomized multilevel Monte Carlo for repeatedly nested expectations
###### Abstract
The estimation of repeatedly nested expectations is a challenging task that arises in many real-world systems. However, existing methods generally suffer from high computational costs when the number of nestings becomes large. Fix any non-negative integer \(D\) for the total number of nestings. Standard Monte Carlo methods typically cost at least \(\mathcal{O}(\varepsilon^{-(2+D)})\) and sometimes \(\mathcal{O}(\varepsilon^{-2(1+D)})\) to obtain an estimator up to \(\varepsilon\)-error. More advanced methods, such as multilevel Monte Carlo, currently only exist for \(D=1\). In this paper, we propose a novel Monte Carlo estimator called READ, which stands for "Recursive Estimator for Arbitrary Depth." Our estimator has an optimal computational cost of \(\mathcal{O}(\varepsilon^{-2})\) for every fixed \(D\) under suitable assumptions, and a nearly optimal computational cost of \(\mathcal{O}(\varepsilon^{-2(1+\delta)})\) for any \(0<\delta<\frac{1}{2}\) under much more general assumptions. Our estimator is also unbiased, which makes it easy to parallelize. The key ingredients in our construction are an observation of the problem's recursive structure and the recursive use of the randomized multilevel Monte Carlo method.
_Keywords:_ nested expectation, optimal cost, randomized Multilevel Monte Carlo, unbiased estimator
## 1 Introduction
Monte Carlo methods are a class of algorithms that use random sampling to estimate quantities of interest, such as integrals or expected values. When the estimand can be expressed as an expectation, for example \(\mathbf{E}_{\pi}[g(X)]\), these methods work by generating independent random samples \(X_{1},\ldots,X_{n}\) from \(\pi\), and using the average \(\sum_{i=1}^{n}g(X_{i})/n\) as an estimator. Monte Carlo estimators are unbiased and converge at a rate of \(n^{-1/2}\), regardless of the dimension of the samples. This dimension-independent convergence rate makes Monte Carlo methods a powerful tool for approximating high-dimensional integrations, as they do not suffer from the curse of dimensionality that plagues deterministic numeric integration methods.
However, the above analysis implicitly assumes the integrand \(g\) can be pointwisely evaluated, which may not be possible in many situations of practical interest. In this paper, we study the problem of estimating repeatedly nested expectations (RNE), which means the integrand depends on a sequence of other functions and conditional expectations. Specifically, fix any positive integer \(D\) for the total number of nestings, and \(\{g_{d}\}_{d=0}^{D}\) for a family of real-valued functions which can be pointwisely evaluated. Let \((y^{(0)},\ldots,y^{(D)})\) be a finite-time stochastic process with underlying joint distribution \(\pi\), and let \(y^{(0:d)}\) denote the vector \((y^{(0)},\ldots,y^{(d)})\) for every \(d\leq D\). The RNE, first formally formulated in Rainforth et al. (2018), is defined as:
\[\gamma_{0}=\mathbf{E}\left[g_{0}\left(y^{(0)},\gamma_{1}\left(y^{(0)}\right) \right)\right], \tag{1}\]
where \(\{\gamma_{i}\}_{i=1}^{D-1}\) is recursively defined as:
\[\gamma_{d}(y^{(0:d-1)})=\mathbf{E}\left[g_{d}\left(y^{(0:d)},\gamma_{d+1}\left(y ^{(0:d)}\right)\right)\bigg{|}\ y^{(0:d-1)}\right], \tag{2}\]
and
\[\gamma_{D}(y^{(0:D-1)})=\mathbf{E}\left[g_{D}\left(y^{(0:D)}\right)\bigg{|}\ y^{(0:D-1)} \right]. \tag{3}\]
Estimating RNEs is a fundamental problem that covers a variety of real-world applications, where the quantity of interest depends on multiple stages or decision points. For example, when \(g_{d}(y^{(0:d)},u)\coloneqq\max\{y^{(d)},u\}\) for \(0\leq d\leq D-1\) and \(g_{D}(y^{(0:D)})=y^{(D)}\), the quantity \(\gamma_{0}\) stands for the expected utility of the optimal strategy in a \(D\)-horizon optimal stopping problem - a central problem in financial modeling. Other applications include probabilistic programs Rainforth (2018), Bayesian experimental design Goda et al. (2022), portfolio risk management Gordy and Juneja (2010), physics and chemistry Dauchet et al. (2018).
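To make the recursion in (1)-(3) concrete, here is a small sketch (not from the paper) that evaluates \(\gamma_{0}\) exactly for the optimal stopping choice \(g_{d}(y^{(0:d)},u)=\max\{y^{(d)},u\}\), \(g_{D}(y^{(0:D)})=y^{(D)}\) on a toy process whose coordinates are i.i.d. uniform over three values, so every conditional distribution equals the marginal; the process and all names are illustrative assumptions.

```python
import numpy as np

support = np.array([0.0, 0.5, 1.0])     # possible values of each y^(d)
probs = np.array([1/3, 1/3, 1/3])       # i.i.d., so pi_d(. | history) is this marginal
D = 3

def gamma(d, history):
    """Exact gamma_d(y^(0:d-1)) from Eqs. (1)-(3), computed by enumerating y^(d).
    `history` is unused here only because the toy process is i.i.d."""
    total = 0.0
    for y, p in zip(support, probs):
        if d == D:
            total += p * y                                        # g_D(y^(0:D)) = y^(D)
        else:
            total += p * max(y, gamma(d + 1, history + [y]))      # g_d(., u) = max{y^(d), u}
    return total

print("gamma_0 =", gamma(0, []))        # value of the D-horizon optimal stopping problem
```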
However, estimating RNEs is challenging. As shown in formulas (1) - (3), we are interested in the expectation of \(g_{0}\), which depends on the random variable \(y^{(0)}\) and \(\gamma_{1}(y^{(0)})\) - a conditional expectation of \(g_{1}\) given \(y^{(0)}\). Then \(g_{1}\) further depends on a random variable \(y^{(1)}\) and \(\gamma_{2}(y^{(0)},y^{(1)})\) which is a conditional expectation of \(g_{2}\) given \(y^{(0)}\) and \(y^{(1)}\). This procedure is recursively defined until it reaches the deepest depth, \(D\). Since \(\gamma_{1}(y^{(0)})\) (and also \(\gamma_{2},\gamma_{3},\ldots\)) cannot be directly evaluated in most practical cases, estimating RNEs cannot be handled by standard Monte Carlo methods.
The most natural way to estimate RNEs is by nesting Monte Carlo (NMC) estimators. In the \(D=1\) case, this method works by first sampling independent and identically distributed (i.i.d.) copies \(y_{1}^{(0)},\ldots,y_{N_{0}}^{(0)}\) according to the distribution of \(y^{(0)}\). For each fixed \(y_{i}^{(0)}\), one further samples \(N_{1}\) i.i.d.
\(y_{1}^{(1)},\ldots,y_{N_{1}}^{(1)}\) according to \(\pi(y^{(1)}\mid y_{i}^{(0)})\), and uses the standard estimator \(\hat{\gamma}_{1}(y_{i}^{(0)})\coloneqq\sum_{j=1}^{N_{1}}g_{1}(y_{i}^{(0)},y_{j}^{(1)})/N_{1}\) to estimate \(\gamma_{1}(y_{i}^{(0)})\). The final estimator uses the estimated \(\hat{\gamma}_{1}(y_{i}^{(0)})\) to replace the intractable \(\gamma_{1}(y_{i}^{(0)})\), i.e.,
\[I_{N_{0},N_{1}}=\frac{1}{N_{0}}\sum_{i=1}^{N_{0}}g_{0}(y_{i}^{(0)},\hat{\gamma }_{1}(y_{i}^{(0)})).\]
This nested estimator can be easily extended to the general \(D\) case, albeit the notations become more complex. Roughly, one still samples \(N_{0}\) i.i.d. copies according to \(\pi(y^{(0)})\), and for each fixed trajectory \(y^{(0:d-1)}\), the user generates \(N_{d}\) i.i.d. samples from \(\pi(y^{(d)}\mid y^{(0:d-1)})\) all the way to depth \(D\) and then form the nested estimator from the deepest depth to the shallower depths. The construction details are referred to Section 3.2 of Rainforth et al. (2018).
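The sketch below (not from the paper) runs this plain NMC estimator on a toy \(D=1\) problem with \(y^{(0)}\sim N(0,1)\), \(y^{(1)}\mid y^{(0)}\sim N(y^{(0)},1)\), \(g_{1}(y^{(0)},y^{(1)})=y^{(1)}\) and \(g_{0}(y^{(0)},u)=u^{2}\), for which \(\gamma_{0}=\mathbf{E}[(y^{(0)})^{2}]=1\); the example is an illustrative assumption and exposes the inner-sample bias of order \(1/N_{1}\).

```python
import numpy as np

rng = np.random.default_rng(0)

def nmc_estimator(N0, N1):
    """Plain nested Monte Carlo for the toy D = 1 problem described above."""
    y0 = rng.normal(size=N0)                                    # outer samples y^(0)
    y1 = rng.normal(loc=y0[:, None], scale=1.0, size=(N0, N1))  # inner samples y^(1) | y^(0)
    gamma1_hat = y1.mean(axis=1)                                # inner estimate of gamma_1(y^(0))
    return np.mean(gamma1_hat ** 2)                             # outer average of g_0

# E[nmc] = 1 + 1/N1: the bias shrinks only as the inner sample size grows
for N1 in (1, 10, 100):
    print(N1, round(nmc_estimator(N0=200_000, N1=N1), 4))
```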
After suitably allocating the number of samples \((N_{d})_{d=0}^{D}\) for each depth, the root-mean-square error (rMSE) of the NMC estimator converges to 0 at a rate of \(N^{-1/(2D+2)}\) or \(N^{-1/(D+2)}\) Rainforth et al. (2018), depending on the regularity conditions of the functions \(\{g_{d}\}_{d=0}^{D}\), where \(N=\prod_{d=0}^{D}N_{d}\) is the total number of samples used to form a nested estimator. This convergence rate diminishes exponentially with \(D\), meaning that NMC estimators do not have the same dimension-free convergence rate as standard Monte Carlo estimators. As a result, NMC methods require at least \(\mathcal{O}(\varepsilon^{-(2+D)})\) and sometimes \(\mathcal{O}(\varepsilon^{-2(1+D)})\) samples to get an estimator within \(\varepsilon\)-rMSE, while standard Monte Carlo estimators require only \(O(\varepsilon^{-2})\) samples. Although there are a few cases mentioned in Rainforth et al. (2018) where the canonical \(\mathcal{O}(N^{-1/2})\) rate can be achieved, the problem of estimating RNEs with an optimal (or dimension-free) convergence rate remains largely open.
In the special case \(D=1\), more efficient methods have been proposed Giles (2018); Giles and Goda (2019); Giles and Haji-Ali (2019) based on the celebrated multilevel Monte Carlo (MLMC) methods Heinrich (2001); Giles (2008). These
estimators achieve up to \(\varepsilon\)-rMSE with cost \(O(\varepsilon^{-2}\log(1/\varepsilon)^{2})\) or \(O(\varepsilon^{-2})\) under varying conditions, comparing favorably with the NMC estimator. However, existing methods cannot be directly generalized to solve the general \(D\) case. Meanwhile, implementing these methods requires users to prespecify the precision level \(\varepsilon\) and conduct preliminary experiments/calculations to carefully estimate/bound the parameters in the MLMC algorithm (see, e.g., Theorem 1 of Giles and Goda (2019)). Therefore, existing MLMC estimators seem to be harder to implement and less amendable to our original problem, which has a recursive structure.
In this work, we propose the READ, a novel Monte Carlo estimator for the RNE estimation with an arbitrary number of nestings \(D\). Our construction is interesting in the following three aspects. Firstly, under suitable regularity conditions similar to those in Rainforth et al. (2018), the rMSE of our estimator has an optimal convergence rate \(N^{-1/2}\) regardless of \(D\). Equivalently, our method costs in expectation \(O(\varepsilon^{-2})\) to get an estimator up to \(\varepsilon\)-rMSE. Under much more general assumptions, our method still achieves a nearly-optimal cost of \(\mathcal{O}(\varepsilon^{-2(1+\delta)})\) for any \(0<\delta<\frac{1}{2}\) to get an estimator up to \(\varepsilon\)-mean-absolute-error (MAE).
It is worth mentioning that most of our effort is devoted to designing unbiased estimators of \(\gamma_{0}\) in (1) with finite computational cost and finite variance (or finite (2-\(\delta\))-th moment under more general assumptions). After developing such an unbiased estimator, we can simulate independent copies of these estimators and average them. The \(N^{-1/2}\) convergence rate and \(\mathcal{O}(\varepsilon^{-2})\) cost are then immediate corollaries of the bias-variance decomposition formula, see Corollary 2.3.
Therefore, another appealing property of READ, in contrast to existing methods, is that it admits no estimation bias. Unbiasedness implies these
estimators can be implemented in parallel processors without requiring any communication between them. Designing unbiased estimators has recently attracted much interest in statistics, operations research, and machine learning communities for its potential for parallelization. Our methods add to the rich body of works of Glynn and Rhee (2014); Rhee and Glynn (2015); Blanchet and Glynn (2015); Jacob et al. (2020); Biswas et al. (2019); Wang et al. (2021); Wang and Wang (2022); Kahale (2022).
Finally, our algorithm for constructing READ relies on the randomized multilevel Monte Carlo (rMLMC) method McLeish (2011); Rhee and Glynn (2015); Blanchet and Glynn (2015), but it is significantly different from its previous applications. Most existing rMLMC applications Rhee and Glynn (2015); Vihola (2018); Goda et al. (2022) also have a non-randomized version (i.e., the original MLMC Giles (2008)) with similar or better computational cost guarantees, leading some to believe that every problem solved by rMLMC also has a natural non-randomized counterpart. However, our work seems to suggest that this belief may not always hold true. The rMLMC framework is well-suited to the recursive structure of RNEs, and can be used as a subroutine in our method. In contrast, the non-randomized MLMC cannot be easily applied to the general case of \(D>1\). This suggests that the rMLMC framework may be more widely applicable than previously thought.
The rest of this paper is organized as follows: in the remainder of this section, we discuss related works, set up our notation, and introduce our technical assumptions. In Section 2, we introduce our algorithm and show that it attains the optimal and nearly-optimal computational cost under two different assumptions, respectively. In Section 3, we demonstrate the empirical performance of our method on several toy examples. We conclude this paper with a short discussion in Section 4. Proof and experiment details
are deferred to the Appendix. An additional experiment is also included in Appendix F.
### Related work
Our algorithm design strategy mainly follows the randomized multilevel Monte Carlo (rMLMC) framework McLeish (2011); Rhee and Glynn (2015); Blanchet and Glynn (2015). Our algorithm is inspired by the unbiased optimal stopping estimator Zhou et al. (2022), which develops estimators for the optimal stopping problem by recursively calling the rMLMC algorithm. We extend the methodology in Zhou et al. (2022) both in scope and depth. Our method works with a more general class of problems formulated by Rainforth et al. (2018), which includes the optimal stopping problem as a special case, and provides more precise results under practical assumptions.
Throughout this paper, we will assume the functions \(\{g_{d}\}_{d=0}^{D-1}\) are all continuous and the process \(\pi\) can be perfectly simulated. When \(D=1\) and \(g_{0}\) is discontinuous, progress has been made by Broadie et al. (2011) and Giles and Haji-Ali (2019, 2022). When the underlying distribution is itself challenging, users have to first use MCMC to approximately sample from \(\pi\). The case of \(D=1\) and challenging \(\pi\) is considered in Wang and Wang (2022).
### Notations
Now we introduce our notations. Many of our notations follow those used in the original definition Rainforth et al. (2018). Throughout this paper, we preserve the letter \(D\) for the total number of nestings. We denote by \(\pi\) the underlying joint distribution of a finite-time, real-valued stochastic process \((y^{(0)},\ldots,y^{(D)})\). For every \(0\leq i\leq j\leq D\), we use the \(y^{(i:j)}\) to denote
the vector \((y^{(i)},\ldots,y^{(j)})\). The conditional distribution of \(y^{(d:D)}\) given the value of \(y^{(0:d-1)}\) is denoted by \(\pi_{d:D}(\cdot\mid y^{(0:d-1)})\). The marginal distribution of \(y^{(d)}\) conditioning on \(y^{(0:d-1)}\) is denoted by \(\pi_{d}(\cdot\mid y^{(0:d-1)})\). We adopt the convention that \(y^{(0:-1)}=\varnothing\), and therefore \(\pi_{0}\) stands for the (unconditioned) marginal distribution of \(y^{(0)}\). Let \(\Pi\) be any probability distribution on some probability space, and \(Z\) be some random variable on the same space, then we use \(\|Z\|_{\Pi,m}\) to denote the \(L^{m}\)-norm of \(Z\) under \(\Pi\), i.e., \(\left(\mathbf{E}_{\Pi}[|Z|^{m}]\right)^{1/m}\). The geometric distribution with parameter \(r\) is denoted by \(\mathsf{Geo}(r)\). We also define \(p_{r}(n)\coloneqq\mathbf{P}[\mathsf{Geo}(r)=n]=r(1-r)^{n}\) for every \(n\in\{0,1,2,\ldots,\}\). For every \(0\leq d<D\), the function \(g_{d}\) introduced in (1) - (2) maps from \(\mathbf{R}^{d+2}\) to \(\mathbf{R}\). The function \(g_{D}\) in (3) maps from \(\mathbf{R}^{D+1}\) to \(\mathbf{R}\). For \(i.i.d.\) random variables \(X_{1},\ldots,X_{n}\), we denote their summation by \(S_{n}\coloneqq\sum_{i=1}^{n}X_{i}\). When \(n\) is even, we denote by \(S_{n/2}^{\mathsf{O}}\coloneqq\sum_{k=1}^{n/2}S_{2k-1}\) and \(S_{n/2}^{\mathsf{E}}\coloneqq\sum_{k=1}^{n/2}S_{2k}\) the summations of their odd and even terms, respectively.
### Assumptions
Throughout this paper, we assume that we can access a simulator \(\mathcal{S}\). The simulator can take any trajectory \(y^{(0:d-1)}\) with \(0\leq d\leq D\) as input, and outputs \(y^{(d)}\) which follows the distribution \(\pi_{d}(\cdot\mid y^{(0:d-1)})\). In particular, \(\mathcal{S}\) can take \(\varnothing\) as input and simulates \(y^{(0)}\sim\pi_{0}\). Calling \(\mathcal{S}\) recursively for \(D+1\) times generates one complete sample path. This assumption enables us to sample from any marginal or conditional distribution perfectly. This assumption is also standard and is posed explicitly or implicitly in nearly all the existing works concerning the estimation of nested expectations, see Giles and Goda (2019); Goda et al. (2022); Zhou et al. (2022) for examples.
Fix a function \(f:\mathbf{R}^{k+2}\to\mathbf{R}\), we say \(f\) satisfies the last-component bounded Lipschitz condition (LBL) if there exists \(L<\infty\) such that for all
\(x,z\in\mathbf{R}\):
\[\sup_{y^{(0:k)}}\bigl{|}f(y^{(0:k)},x)-f(y^{(0:k)},z)\bigr{|}\leq L|x-z|. \tag{4}\]
We say \(f\) satisfies the last-component bounded second-derivative condition (LBS) if \(f\) has continuous second-order derivative on its last component, and there exists \(C<\infty\) such that
\[\sup_{y^{(0:k+1)}}\left|\partial_{k+1}^{2}f(y^{(0:k+1)})\right|<C, \tag{5}\]
where \(\partial_{k+1}^{2}f\coloneqq(\partial^{2}f)/(\partial y^{(k+1)})^{2}\) is the second-order partial derivative on the last component. These assumptions (and their variants) are also posed in related works such as Rainforth (2018); Blanchet and Glynn (2015); Giles (2018).
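As a concrete illustration of the two conditions (a standard fact, stated here only for context), the pointwise maximum \(f(y^{(0:k)},x)=\max\bigl(y^{(k)},x\bigr)\) satisfies LBL with \(L=1\), since

\[\sup_{y^{(0:k)}}\bigl|\max\bigl(y^{(k)},x\bigr)-\max\bigl(y^{(k)},z\bigr)\bigr|\leq|x-z|,\]

but it does not satisfy LBS, because it is not twice differentiable in its last component at \(x=y^{(k)}\). Smooth functions such as \(f(y^{(0:k)},x)=\sin\bigl(y^{(k)}+x\bigr)\) satisfy both conditions with \(L=C=1\). The LBS case is treated in Section 2.3.1 and the LBL case in Section 2.3.2.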
## 2 Algorithm, estimator, and theoretical results
Now we are ready to present our main results. As discussed in Section 1, we will be focusing on designing a Monte Carlo estimator which is unbiased, has a finite computational cost, and has finite variance or (2-\(\delta\))-th moment under different assumptions.
### 2.1 Preliminary analysis
One of the challenges in estimating RNEs is the difficulty of estimating \(\gamma_{1}(y^{(0)})\). Users typically first estimate \(\gamma_{1}(y^{(0)})\) and then use these estimators to estimate \(\gamma_{0}\). For the time being, we add the assumption that users can simulate unbiased estimators \(\hat{\gamma}_{1}(y^{(0)})\) of \(\gamma_{1}(y^{(0)})\) for every fixed
\(y^{(0)}\) with finite computational cost. This assumption will be removed in Section 2.2. It easily holds when \(D=1\), as users can repeatedly simulate \(y^{(1)}_{i}\sim\pi_{1}(\cdot\mid y^{(0)})\) and it follows from the problem definition that each \(g_{1}(y^{(0)},y^{(1)}_{i})\) is unbiased for \(\gamma_{1}(y^{(0)})\). In the general case of \(D>1\), this assumption is far from trivial, as \(\gamma_{1}(y^{(0)})\) is itself a nested expectation with a nesting depth of \(D-1\). Nevertheless, as we will see in Section 2.2, this assumption helps us to capture and reduce the intrinsic difficulty of the problem and, therefore, will guide us to design the general algorithm.
With this extra assumption, constructing unbiased estimators of (1) is equivalent to constructing unbiased estimators of \(g_{0}(y^{(0)},\gamma_{1}(y^{(0)}))\). Even with access to unbiased estimators of \(\gamma_{1}(y^{(0)})\), the intuitive plug-in estimator \(g_{0}\big{(}y^{(0)},\hat{\gamma}_{1}(y^{(0)})\big{)}\) is still biased, as in general \(\mathbf{E}[g_{0}\big{(}y^{(0)},\hat{\gamma}_{1}(y^{(0)})\big{)}\mid y^{(0)}] \neq g_{0}(y^{(0)},\mathbf{E}[\hat{\gamma}_{1}(y^{(0)})\mid y^{(0)}])\). To eliminate this bias, we use the rMLMC method Blanchet and Glynn (2015), which is briefly reviewed below.
The rMLMC method uses the Law of Large Numbers (LLN) and rewrites \(g_{0}\) as the following telescoping summation.
\[g_{0}(y^{(0)},\gamma_{1}(y^{(0)}))=\mathbf{E}\left[g_{0}\left(y^{(0)},\lim_{k\to\infty}\frac{S_{k}}{k}\right)\bigg{|}\;y^{(0)}\right]\] \[=\mathbf{E}\left[g_{0}\left(y^{(0)},S_{1}\right)\bigg{|}\;y^{(0)}\right]+\sum_{n=1}^{\infty}\mathbf{E}\left[g_{0}\left(y^{(0)},\frac{S_{2^{n}}}{2^{n}}\right)\bigg{|}\;y^{(0)}\right]\] \[\qquad\qquad-\mathbf{E}\left[g_{0}\left(y^{(0)},\frac{S_{2^{n-1}}}{2^{n-1}}\right)\bigg{|}\;y^{(0)}\right],\]
where \(S_{k}=\sum_{i=1}^{k}\hat{\gamma}_{1,i}(y^{(0)})\) is the summation of i.i.d. copies of \(\hat{\gamma}_{1}(y^{(0)})\). To unbiasedly estimate the infinite sum, the rMLMC algorithm first samples \(y^{(0)}\sim\pi_{0}\), then samples a random \(N\sim\mathsf{Geo}(r)\), finally generates \(2^{N}\) unbiased estimators \(\{\hat{\gamma}_{1,i}(y^{(0)})\}_{i=1}^{2^{N}}\) of \(\gamma_{1}(y^{(0)})\) and estimates \(\gamma_{0}\) by \(R_{0}\coloneqq\Delta_{N}/p_{r}(N)\)
where \(\Delta_{n}\) is defined as:
\[\Delta_{n}\coloneqq g_{0}\left(y^{(0)},\frac{S_{2^{n}}}{2^{n}}\right)-\frac{1}{2}\Bigg{[}g_{0}\bigg{(}y^{(0)},\frac{S_{2^{n-1}}^{\mathsf{E}}}{2^{n-1}}\bigg{)}+g_{0}\bigg{(}y^{(0)},\frac{S_{2^{n-1}}^{\mathsf{O}}}{2^{n-1}}\bigg{)}\Bigg{]}\]
for \(n\geq 1\) and \(\Delta_{0}\coloneqq g_{0}(y^{(0)},\hat{\gamma}_{1,1}(y^{(0)}))\).
The next theorem justifies the theoretical properties of \(R_{0}\):
**Theorem 2.1**.: _With all the notations as above, suppose \(g_{0}:\mathbf{R}^{2}\to\mathbf{R}\) satisfies LBS condition defined in (5), and \(\|\hat{\gamma}_{1}(y^{(0)})\|_{\pi,m}<\infty\) for some \(m\geq 4\). Then for any \(r\in(1/2,3/4)\), the estimator \(R_{0}\coloneqq\Delta_{N}/p_{r}(N)\) has expectation \(\gamma_{0}\), finite variance, and finite expected computational cost._
Theorem 2.1 will not be proved directly, as it is a special case of our Theorem 2.2. For now, we use the following heuristic calculation to justify the unbiasedness of \(R_{0}\):
\[\mathbf{E}[R_{0}\mid y^{(0)}]=\mathbf{E}\left[\mathbf{E}\left[R_{0}\mid N,y^{(0)}\right]\Big{|}\;y^{(0)}\right]=\sum_{n=0}^{\infty}\mathbf{E}\left[\frac{\Delta_{n}}{p_{r}(n)}\bigg{|}\;y^{(0)}\right]p_{r}(n)\] \[=\mathbf{E}\left[g_{0}\left(y^{(0)},S_{1}\right)\bigg{|}\;y^{(0)}\right]+\sum_{n=1}^{\infty}\mathbf{E}\left[g_{0}\left(y^{(0)},\frac{S_{2^{n}}}{2^{n}}\right)\bigg{|}\;y^{(0)}\right]-\mathbf{E}\left[g_{0}\left(y^{(0)},\frac{S_{2^{n-1}}}{2^{n-1}}\right)\bigg{|}\;y^{(0)}\right]\] \[=g_{0}(y^{(0)},\gamma_{1}(y^{(0)})).\]
Therefore \(\mathbf{E}[R_{0}]=\mathbf{E}[g_{0}(y^{(0)},\gamma_{1}(y^{(0)}))]=\gamma_{0}\) by (1). More technical discussions, such as the range of \(r\), other possible regularity conditions on \(g_{0}\), and the moment guarantees of \(R_{0}\), are all deferred until after Theorem 2.2.
### 2.2 Recursive rMLMC algorithm for general \(D\)
Theorem 2.1 is useful to solve our original problem (without extra assumptions) in two ways. First, Theorem 2.1 already solves the case where \(D=1\), as
our extra assumption automatically holds. It states that if \(g_{0}\) has a bounded second derivative on its last component, and \(g_{1}(y^{(0)},y^{(1)})\) has at least finite fourth moment under \(\pi\), then \(R_{0}\) is unbiased, has finite variance, and finite expected computational cost. More importantly, Theorem 2.1 tells us that the original problem of estimating an RNE with a depth of \(D\) can be solved if we can unbiasedly estimate \(\gamma_{1}(y^{(0)})\) for fixed \(y^{(0)}\), which is another RNE with a depth of \(D-1\). Therefore, we have successfully reduced the number of nestings by one. This observation motivates us to come up with an algorithm for the general \(D\) case, as explained below.
We first go one step further to illustrate the \(D=2\) case. When \(D=2\), estimating \(\gamma_{1}(y^{(0)})\) reduces to the case we have analyzed in Section 2.1. To be precise, since \(g_{2}(y^{(0:2)})\) is unbiased for \(\gamma_{2}(y^{(0:1)})\) if \(y^{(2)}\sim\pi_{2}(\cdot\mid y^{(0:1)})\), one can first sample \(y^{(1)}\sim\pi_{1}(\cdot\mid y^{(0)})\), then simulate \(N\sim\mathsf{Geo}(r)\) and \(2^{N}\) samples \(\{y^{(2)}_{i}\}_{i=1}^{2^{N}}\) from \(\pi_{2}(\cdot\mid y^{(0:1)})\). Let \(\hat{\gamma}_{2,i}(y^{(0:1)})\coloneqq g_{2}(y^{(0:1)},y^{(2)}_{i})\), our estimator of \(\gamma_{1}(y^{(0)})\) is then constructed in the same way as Section 2.1, i.e., \(R_{1}(y^{(0)})\coloneqq\Delta_{N}/p_{r}(N)\) with
\[\Delta_{n}\coloneqq g_{1}\left(y^{(0:1)},\frac{S_{2^{n}}}{2^{n}} \right)-\frac{1}{2}\Bigg{[} g_{1}\bigg{(}y^{(0:1)},\frac{S_{2^{n-1}}^{\mathsf{E}}}{2^{n-1}} \bigg{)}\] \[+g_{1}\bigg{(}y^{(0:1)},\frac{S_{2^{n-1}}^{\mathsf{O}}}{2^{n-1}} \bigg{)}\Bigg{]},\]
where \(S_{2^{n}},S_{2^{n-1}}^{\mathsf{E}},S_{2^{n-1}}^{\mathsf{O}}\) are the summations of all, even, and odd terms of \(\{\hat{\gamma}_{2,i}(y^{(0:1)})\}\), respectively. The same procedure of simulating \(R_{1}(y^{(0)})\) can be repeated independently. Therefore, we can sample another geometrically distributed random variable \(N^{\prime}\sim\mathsf{Geo}(r^{\prime})\), and generate \(R_{1,i}(y^{(0)})\coloneqq\Delta_{N^{\prime}}/p_{r^{\prime}}(N^{\prime})\) independently. Since each \(R_{1,i}(y^{(0)})\) is unbiased for \(\gamma_{1}(y^{(0)})\), one can again use the method described in Section 2.1 to form our final estimator for \(\gamma_{0}\). After checking that \(R_{1}(y^{(0)})\) satisfies the finite fourth-moment assumption, Theorem 2.1
can be applied which implies our estimator is unbiased, has finite variance and finite cost (for the \(D=2\) case).
The general case works in the same way. A key observation is that, due to the nested structure of the problem, Theorem 2.1 not only states that an unbiased estimator of \(\gamma_{0}\) can be constructed if one can unbiasedly estimate \(\gamma_{1}(y^{(0)})\) for every \(y^{(0)}\), but also directly implies that an unbiased estimator of \(\gamma_{d}(y^{(0:d-1)})\) can be constructed if one can unbiasedly estimate \(\gamma_{d+1}(y^{(0:d)})\) for every \(y^{(0:d)}\). Therefore, we can estimate \(\gamma_{0}\) in a backward, inductive manner.
To begin, we consider the deepest depth of the problem, fixing any \(y^{(0:D-1)}\). An unbiased estimator of \(\gamma_{D}(y^{(0:D-1)})\) can be directly constructed as \(g_{D}(y^{(0:D-1)},y^{(D)})\), where \(y^{(D)}\sim\pi_{D}(\cdot\mid y^{(0:D-1)})\). For \(0\leq d\leq D-1\), if we assume that users can generate unbiased estimators of \(\gamma_{d+1}(y^{(0:d)})\) for every \(y^{(0:d)}\), then we can obtain an unbiased estimator of \(\gamma_{d}(y^{(0:d-1)})\) by sampling one \(y^{(d)}\), generating \(N_{d}\sim\mathsf{Geo}(r_{d})\) and \(2^{N_{d}}\) unbiased estimators of \(\gamma_{d+1}(y^{(0:d)})\), and applying the method described in Section 2.1. This process continues until we reach \(d=0\), at which point we have an unbiased estimator of \(\gamma_{0}\). The parameters \((r_{0},r_{1},\ldots,r_{D-1})\) will be carefully chosen and depend on the regularity assumptions of \((g_{0},g_{1},\ldots,g_{D-1})\). These choices will be discussed in more detail later.
Our algorithm is described in Algorithm 1. It is written as a recursive algorithm, though it could also be equivalently written in an iterative form with much more cumbersome notations. Algorithm 1 takes a depth index, a trajectory, a simulator, and parameters for the geometric distribution as inputs, and outputs an unbiased estimator of \(\gamma_{d}(H)\). In particular, with inputs \(\{\text{depth}=0\), trajectory \(=\varnothing\), parameters \(=(r_{0},r_{1},\ldots,r_{D-1})\}\), it outputs READ - an unbiased estimator of the RNE defined in (1). The logic of Algorithm 1 is precisely the same as we just discussed. To estimate
\(\gamma_{d}(y^{(0:d-1)})\), the algorithm first checks the value of \(d\). When \(d=D\), the problem becomes straightforward. When \(d<D\), the algorithm samples \(y^{(d)}\), appends \(y^{(d)}\) to the trajectory, samples \(N_{d}\), and calls itself \(2^{N_{d}}\) times with depth \(d+1\) and new trajectory \(\{y^{(0:d)}\}\) to get \(2^{N_{d}}\) unbiased estimators of \(\gamma_{d+1}(y^{(0:d)})\). Finally, we split these \(2^{N_{d}}\) estimators into even and odd terms and apply the method described in Section 2.1. The algorithm is guaranteed to stop, as the depth will eventually reach the deepest depth \(D\).
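For concreteness, the following is a minimal Python sketch of this recursion (it is not the authors' released implementation); the callables `simulator` and `g`, the parameter tuple `rs`, and the depth `D` are placeholders that a user supplies for their own problem.

```python
import numpy as np

rng = np.random.default_rng()

def read(d, traj, simulator, g, rs, D):
    """One draw of R_d following the recursion of Algorithm 1 (a sketch).

    simulator(traj) returns y^(d) ~ pi_d(. | traj);
    g(d, traj, z) evaluates g_d(y^(0:d), z) for d < D, and g(D, traj)
    evaluates g_D(y^(0:D)); rs = (r_0, ..., r_{D-1}).
    """
    traj = traj + [simulator(traj)]            # sample y^(d) and extend the trajectory
    if d == D:
        return g(D, traj)
    N = rng.geometric(rs[d]) - 1               # N_d ~ Geo(r_d) on {0, 1, 2, ...}
    reps = [read(d + 1, traj, simulator, g, rs, D) for _ in range(2 ** N)]
    if N == 0:
        delta = g(d, traj, reps[0])            # Delta_0
    else:
        full = np.mean(reps)                   # S_{2^N} / 2^N
        odd = np.mean(reps[0::2])              # S^O_{2^{N-1}} / 2^{N-1}
        even = np.mean(reps[1::2])             # S^E_{2^{N-1}} / 2^{N-1}
        delta = g(d, traj, full) - 0.5 * (g(d, traj, odd) + g(d, traj, even))
    return delta / (rs[d] * (1 - rs[d]) ** N)  # divide by p_{r_d}(N)
```

Averaging independent calls of `read(0, [], simulator, g, rs, D)` then gives the estimator analyzed in Corollary 2.3 below.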
### 2.3 Theoretical guarantees
We now discuss the computational costs of Algorithm 1 and the statistical properties of READ. Our theoretical results depend on the smoothness conditions of \(\{g_{d}\}_{d=0}^{D-1}\), so we will consider two cases where \(\{g_{d}\}_{d=0}^{D-1}\) satisfies the LBS and LBL assumptions separately.
#### 2.3.1 The LBS case
The following theorem shows, under the LBS assumption, the computational cost and the variance of READ can be controlled simultaneously.
**Theorem 2.2**.: _Suppose for every \(d\in\{0,1,\ldots,D-1\}\), the function \(g_{d}\) satisfies the LBS assumption defined in (5), and \(r_{d}\coloneqq 1-2^{-k_{d}}\) satisfies \(k_{d}\in\left(1,\frac{2^{d+1}}{2^{d+1}-1}\right)\). Moreover, suppose \(\|g_{D}(y^{(0:D)})\|_{\pi,2^{D+1}}<\infty\). Then for every \(0\leq d\leq D\), the output \(R_{d}(y^{(0:d-1)})\) of Algorithm 1 with inputs \(\{\)depth \(=d\), trajectory \(=y^{(0:d-1)}\), \(\mathcal{S}\), parameters \((r_{d},\ldots,r_{D-1})\}\) has the following properties:_
* _For_ \(\pi\)_-almost surely every fixed_ \(y^{(0:d-1)}\)_,_ \[\mathbf{E}\left[R_{d}(y^{(0:d-1)})\mid y^{(0:d-1)}\right]=\gamma_{d}(y^{(0:d-1 )}).\]
```
Input: Depth index \(d\in\{0,...,D\}\). Trajectory history \(H=\{y^{(0)},...,y^{(d-1)}\}\) or \(\varnothing\). A simulator \(\mathcal{S}\). Parameters \(r_{d},...,r_{D-1}\) determined by conditions on \(\{g_{d}\}_{d=0}^{D-1}\) (see Theorems 2.2 and 2.4).
Output: An unbiased estimator of \(\gamma_{d}(H)\).
if \(d=D\) then
    Sample one \(y^{(D)}\sim\pi_{D}\left(\cdot\mid H\right)\);
    Return: \(R_{D}\coloneqq g_{D}\left(y^{(0:D)}\right)\).
else
    Sample \(y^{(d)}\sim\pi_{d}\left(\cdot\mid H\right)\);
    Update the trajectory \(H\gets H\cup\left\{y^{(d)}\right\}\);
    Sample \(N_{d}\sim\mathsf{Geo}(r_{d})\);
    Call Algorithm 1 recursively \(2^{N_{d}}\) times with inputs \(\{d+1;H;\mathcal{S};r_{d+1},...,r_{D-1}\}\), and label the outputs as \(R_{d+1}(y^{(0:d)})(1),...,R_{d+1}(y^{(0:d)})\left(2^{N_{d}}\right)\);
    Calculate \(S_{2^{N_{d}}},S_{2^{N_{d}-1}}^{\mathsf{E}},S_{2^{N_{d}-1}}^{\mathsf{O}}\) defined in Section 1.2;
    Calculate (note \(\Delta_{0}\coloneqq g_{d}\left(y^{(0:d)},R_{d+1}(y^{(0:d)})(1)\right)\)):
        \(\Delta_{N_{d}}=g_{d}\left(y^{(0:d)},\frac{S_{2^{N_{d}}}}{2^{N_{d}}}\right)-\frac{1}{2}\left[g_{d}\left(y^{(0:d)},\frac{S_{2^{N_{d}-1}}^{\mathsf{O}}}{2^{N_{d}-1}}\right)+g_{d}\left(y^{(0:d)},\frac{S_{2^{N_{d}-1}}^{\mathsf{E}}}{2^{N_{d}-1}}\right)\right]\);
    Return: \(R_{d}\coloneqq\Delta_{N_{d}}/p_{r_{d}}(N_{d})\).
endif
```
**Algorithm 1** A recursive rMLMC algorithm for RNE
* _The expected computational cost of_ \(R_{d}\) _is finite._
* _The output has finite_ \(2^{d+1}\)_-th moment, i.e.,_ \[\mathbf{E}_{\pi}\left[|R_{d}(y^{(0:d-1)})|^{2^{d+1}}\right]<\infty\ \ \text{for}\ \ 0\leq d\leq D.\]
Theorem 2.2 states that, for \(\pi\)-almost surely every \(y^{(0:d-1)}\), the output \(R_{d}\), conditioning on the input, is unbiased for \(\gamma_{d}(y^{(0:d-1)})\). The computational cost has a finite expectation, and the output has a finite \(2^{d+1}\)-th moment1. The detailed proof of Theorem 2.2 will be provided in the Appendix. Here, we highlight two special cases. First, Theorem 2.2 shows that \(\mathsf{READ}\), the output \(R_{0}\) of Algorithm 1 when given input \(\{\text{depth}=0\), trajectory \(=\varnothing\), \(\mathcal{S}\), parameters \(=(r_{0},\ldots,r_{D-1})\}\), has the desired properties. Specifically, it is an unbiased estimator for \(\gamma_{0}\) with finite expected computational cost and finite variance. Second, Theorem 2.2 includes Theorem 2.1 as a special case when \(D=1\).
Footnote 1: Readers should notice that the expectation of \(R_{d}(y^{(0:d-1)})\) is calculated under the conditional distribution \(\pi_{d:D}(\cdot\mid y^{(0:d-1)})\). The computational cost and the \(2^{d+1}\)-th moment are calculated under the joint distribution \(\pi\). When the input depth \(=0\), these two underlying distributions coincide.
Let \(R_{0,1},R_{0,2},\ldots\) be the i.i.d. outcomes of repeatedly implementing Algorithm 1. Since each \(R_{0,i}\) is unbiased and has a finite variance, the standard Central Limit Theorem (CLT) implies that \(\sqrt{n}(\sum_{i=1}^{n}R_{0,i}/n-\gamma_{0})\to\mathbf{N}(0,\mathrm{Var}(R_{0}))\) in distribution. This means that the estimator \(\sum_{i=1}^{n}R_{0,i}/n\) converges to \(\gamma_{0}\) at a rate of \(n^{-1/2}\) in rMSE (or equivalently, \(n^{-1}\) in MSE), which compares quite favorably with the rates obtained by NMC estimators in Rainforth et al. (2018). This rate is optimal in the sense that it matches the minimax lower bound over all Monte Carlo methods (Theorem 2.1 of Heinrich and Sindambiwe (1999)). The next corollary shows that, by repeatedly implementing Algorithm 1, users
can easily obtain an unbiased estimator for \(\gamma_{0}\) with at most \(\varepsilon\)-rMSE within \(O(\varepsilon^{-2})\) computational cost.
**Corollary 2.3**.: _With all the assumptions the same as Theorem 2.2, for any \(\varepsilon>0\), we can construct an estimator \(R\) with expected computational cost \(\mathcal{O}(\varepsilon^{-2})\) such that the rMSE \(\sqrt{\mathbf{E}[(R-\gamma_{0})^{2}]}\) is at most \(\varepsilon\)._
Proof of Corollary 2.3.: Calling Algorithm 1 independently \(n\) times with \(\{\text{depth}=0\), trajectory \(=\varnothing\), \(\mathcal{S}\), parameters \(=(r_{0},\ldots,r_{D-1})\}\) yields i.i.d. unbiased estimators \(R_{0,1},...,R_{0,n}\) for \(\gamma_{0}\). Let our estimator be \(R\coloneqq\frac{1}{n}\sum_{i=1}^{n}R_{0,i}\). Then,
\[\mathbf{E}[(R-\gamma_{0})^{2}]=\mathbf{E}\left[\left(\frac{1}{n}\sum_{i=1}^{n }R_{0,i}-\gamma_{0}\right)^{2}\right]=\frac{1}{n}\text{Var}(R_{0}).\]
Thus, noting that \(\text{Var}(R_{0})<\infty\) by Theorem 2.2, taking \(n=\text{Var}(R_{0})/\varepsilon^{2}\) samples ensures that \(R\) has rMSE at most \(\varepsilon\). Finally, let \(C\coloneqq C(D)<\infty\) be the expected computational cost for one call of Algorithm 1. The expected computational cost for constructing \(R\) is then \(C\cdot\text{Var}(R_{0})/\varepsilon^{2}=\Theta(\varepsilon^{-2})\).
The \(\varepsilon\)-rMSE of \(R\) can be easily translated to other performance metrics via standard inequalities. For example, for any \(\delta\), Markov's inequality implies the absolute error \(|R-\gamma_{0}|\) is less than \(\varepsilon/\sqrt{\delta}\) with probability at least \(1-\delta\).
Next, we discuss the assumptions and proof strategies of Theorem 2.2. We require the first \(D\) functions \(\{g_{d}\}_{d=0}^{D-1}\) all satisfy the LBS condition, and the final function \(g_{D}\) has finite \(2^{D+1}\)-th moment under \(\pi\). The LBS assumption also appears in the work of NMC estimators (see the second part of Theorem 3 in Rainforth et al. (2018)). The moment assumption of \(g_{D}\) is not required in Rainforth et al. (2018). Nevertheless, it is a mild assumption that holds in most practical applications. It covers all the cases where \(g_{D}\) is bounded or has a moment generating function (including the uniform, Gaussian, Poisson,
or exponential distributions), which implies \(\mathbf{E}[|g_{D}|^{k}]<\infty\) for every \(k\). As we will see in our proofs, these assumptions help us to establish the moment guarantee of Theorem 2.2 in a backward inductive way. For example, the \(2^{D+1}\)-th moment assumption on \(g_{D}\) and the LBS assumption on \(g_{D-1}\) imply that \(R_{D-1}\) has a finite \(2^{D}\)-th moment. More generally, the finiteness of the \(2^{d+1}\)-th moment of \(R_{d}\) follows from the LBS assumption on \(g_{d}\) and the \(2^{d+2}\)-th moment of \(R_{d+1}\) (which is the conclusion of the previous inductive step). Eventually, we conclude that \(R_{0}\) has a finite variance. Finally, we want to emphasize that our moment assumption on \(g_{D}\) is not 'trajectory-dependent'. We require \(g_{D}\) to have a finite \(2^{D+1}\)-th moment under the joint distribution \(\pi\) of \(y^{(0:D)}\), which is much weaker than requiring \(g_{D}\) to have a uniformly bounded \(2^{D+1}\)-th moment under \(\pi_{D}(\cdot\mid y^{(0:D-1)})\) for every fixed trajectory \(y^{(0:D-1)}\).
Finally, the parameters \(\{r_{d}\}_{d=0}^{D-1}\) reflect the trade-off between the variance and the computational cost. Since \(2^{N_{d}}\) calls are required at each depth \(d\), a standard calculation shows that \(\mathbf{E}[2^{N_{d}}]=r_{d}/(2r_{d}-1)\) when \(r_{d}>0.5\), and \(+\infty\) if \(r_{d}\leq 0.5\). Therefore, every \(r_{d}\) has to be strictly greater than \(0.5\) to ensure a finite expected computational cost. Meanwhile, we cannot guarantee finite variance or unbiasedness of READ when \(r_{d}\) becomes too large. Our range for \(r_{d}\) in Theorem 2.2 follows from a careful calculation in our proof to ensure unbiasedness, finite computational cost, and finite variance simultaneously.
#### 2.3.2 The LBL case
The assumptions in Theorem 2.2 guarantee that READ enjoys an optimal convergence rate and computational cost. However, the second-order derivative assumption also rules out many functions of practical interest, such as max and min. In this section, we study the theoretical properties of Algorithm 1 and READ under weaker smoothness and moment assumptions. Our result is
summarized below:
**Theorem 2.4**.: _Fix any \(0<\delta<1/2\). Suppose for every \(d\in\{0,1,\ldots,D-1\}\), the function \(g_{d}\) satisfies the LBL assumption defined in (4), and \(r_{d}\coloneqq 1-2^{-k_{d}}\) satisfies_
\[k_{d}\in\left(1,\left(\frac{2^{d+2}-3\delta}{2^{d+3}-3\delta}\right)\left( \frac{2^{d+1}-\delta}{2^{d}-\delta}\right)\right).\]
_Moreover, suppose \(\|g_{D}(y^{(0:D)})\|_{\pi,2}<\infty\). Then for every \(0\leq d\leq D\), the output \(R_{d}(y^{(0:d-1)})\) of Algorithm 1 with inputs \(\{\)depth = \(d\), trajectory = \(y^{(0:d-1)}\), \(\mathcal{S}\), parameters \((r_{d},\ldots,r_{D-1})\}\) has the following properties:_
* _For almost surely every fixed_ \(y^{(0:d-1)}\)_,_ \[\mathbf{E}\left[R_{d}(y^{(0:d-1)})\mid y^{(0:d-1)}\right]=\gamma_{d}(y^{(0:d- 1)}).\]
* _The expected computational cost of_ \(R_{d}\) _is finite._
* _The output has finite_ \((2-\delta/2^{d})\)_-th moment, i.e.,_ \[\mathbf{E}_{\pi}\left[|R_{d}(y^{(0:d-1)})|^{(2-\delta/2^{d})}\right]<\infty \ \ \text{for}\ \ 0\leq d\leq D.\]
Comparing Theorem 2.2, which requires the LBS assumption for \(\{g_{d}\}_{d=0}^{D-1}\) and finite \(2^{D+1}\)-th moment for \(g_{D}\), with Theorem 2.4, which only requires the LBL assumption for \(\{g_{d}\}_{d=0}^{D-1}\) and finite second moment for \(g_{D}\), it is clear that Theorem 2.4 has more general assumptions. However, it does not guarantee that \(\mathtt{READ}\) has a finite variance. Nevertheless, it remains unbiased and has a finite expected computational cost. To minimize the loss of moment guarantees, one can choose suitable parameters such that \(\mathtt{READ}\) has finite \((2-\delta)\)-th moment for any small \(\delta\).
Again, let \(R_{0,1},R_{0,2},\ldots,\) be the i.i.d. outcomes by repeatedly implementing Algorithm 1. There are more technical challenges when analyzing the
convergence rate of \(\sum_{i=1}^{n}R_{0,i}/n-\gamma_{0}\), as the CLT cannot be applied. Instead, we use the Marcinkiewicz-Zygmund generalized law of large numbers (see Theorem A.4 in Appendix A), which shows \(n^{-1}\mathbf{E}[|\sum_{i=1}^{n}X_{i}|^{p}]\to 0\) if \(\{X_{i}\}_{i=1}^{n}\) are i.i.d., centered random variables with finite \(p\)-th moment for \(p\in[1,2)\). Our result is the following:
**Corollary 2.5**.: _With all the assumptions the same as Theorem 2.4, let \(R_{0,1},R_{0,2},\dots,\) be the i.i.d. outcomes by repeatedly implementing Algorithm 1, we have:_
* \(\mathbf{E}[|\sum_{i=1}^{n}R_{0,i}/n-\gamma_{0}|]=o(n^{-1/(2(1+\delta))})\)_._
* _We can construct an estimator_ \(R\) _with expected computational cost_ \(\mathcal{O}(\varepsilon^{-2(1+\delta)})\) _such that the mean absolute error_ \(\mathbf{E}[|R-\gamma_{0}|]<\varepsilon\)_._
Proof of Corollary 2.5.: Applying Theorem A.4 with \(p=2-\delta,X_{i}=R_{0,i}-\gamma_{0}\) and Jensen's inequality, we have:
\[\mathbf{E}\left[\left|\sum_{i=1}^{n}R_{0,i}/n-\gamma_{0}\right| \right]=n^{-1}\mathbf{E}\left[\left|\sum_{i=1}^{n}X_{i}\right|\right]\] \[\leq n^{-1}\left(\mathbf{E}\left[\left|\sum_{i=1}^{n}X_{i}\right| ^{p}\right]\right)^{1/p}=o(n^{-1+\frac{1}{p}})=o(n^{\frac{-1}{2(1+\delta)}}),\]
which proves the first part. The last step follows from
\(1-1/(2-\delta)>1/(2+2\delta)\) for \(\delta\in(0,1/2)\). Setting \(n=\Omega(\varepsilon^{-2(1+\delta)})\), the second part immediately follows.
Although we are not able to recover the optimal \(n^{-1/2}\) convergence rate under this more general regime, our convergence rate is still near-optimal as it can be as close to \(n^{-1/2}\) as we want. Still, the convergence rate does not depend on \(D\), and, although we replace the MSE by MAE due to the moment
constraint, one can still use Markov's inequality to show the absolute error \(|R-\gamma_{0}|\) is less than \(\varepsilon/\delta\) with probability at least \(1-\delta\).
As the max function satisfies the LBL assumption, our results here include the optimal stopping problem as a special case. Our results complement the work of Zhou et al. (2022), where the authors use rMLMC to design an estimator with \(\mathcal{O}(\varepsilon^{-2})\) computational cost under stronger assumptions (see their Assumption 4). We have a slightly worse cost of \(\mathcal{O}(\varepsilon^{-2(1+\delta)})\) but under more general assumptions.
## 3 Numerical experiments
We consider the following simple example with known ground-truth. Suppose the process \((y^{(0)},y^{(1)},y^{(2)})\) satisfies \(y^{(0)}\sim\mathbf{N}(\pi/2,1),y^{(1)}\sim\mathbf{N}(y^{(0)},1),y^{(2)}\sim\mathbf{N}(y^{(1)},1)\). Define \(g_{0}(y^{(0)},z)\coloneqq\sin\bigl{(}y^{(0)}+z\bigr{)},g_{1}(y^{(0:1)},z)\coloneqq\sin\bigl{(}y^{(1)}-z\bigr{)}\), and \(g_{2}(y^{(0:2)})\coloneqq y^{(2)}\). The target quantity \(\gamma_{0}\) defined in (1) is a nested expectation with \(D=2\). One can use the formula \(\mathbf{E}_{Z\sim\mathbf{N}(\mu,\sigma^{2})}[\sin(Z)]=\sin(\mu)\exp(-\sigma^{2}/2)\) to analytically calculate \(\gamma_{0}=\exp(-1/2)\approx 0.6065\). Now we compare our READ estimator with the NMC estimators in Rainforth et al. (2018).
For the NMC estimator, users first specify \(N_{0},N_{1},N_{2}\). Then we sample \(N_{0}\) copies of \(y^{(0)}\), \(N_{1}\) copies of \(y^{(1)}\) for each fixed \(y^{(0)}\), and \(N_{2}\) copies of \(y^{(2)}\) for each fixed \(y^{(0:1)}\), and use these samples to form the NMC estimator; details are explained in Appendix D. Following Rainforth (2018), we consider two ways of allocating \((N_{0},N_{1},N_{2})\). The first estimator, NMC1, chooses \(N_{0}=N_{1}=N_{2}\); the second, NMC2, chooses \(N_{0}=N_{1}^{2}=N_{2}^{2}\). For READ, all assumptions in Theorem 2.2 are satisfied; therefore, when \(r_{0}\in(1/2,3/4)\) and \(r_{1}\in(1/2,1-2^{-4/3})\), the READ estimator generated by Algorithm 1 is unbiased and of finite variance. Since the computational cost
gets lower when each \(r_{i}\) gets larger, we choose \(r_{0}=0.74\) and \(r_{1}=0.6\) (close to the upper end of their respective ranges above) to facilitate computational efficiency. Therefore, implementing Algorithm 1 once has an expected sample size/computational cost of \(\left(r_{0}/(2r_{0}-1)\right)\left(r_{1}/(2r_{1}-1)\right)\approx 4.625\).
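As an illustration only (this is not the authors' experiment code), the test problem above can be plugged into the `read` sketch in Section 2.2; the function names below are the placeholders introduced there, and the snippet assumes that `read` is already in scope.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulator(traj):
    # y^(0) ~ N(pi/2, 1); y^(d) | y^(d-1) ~ N(y^(d-1), 1)
    mean = np.pi / 2 if not traj else traj[-1]
    return rng.normal(mean, 1.0)

def g(d, traj, z=None):
    if d == 0:
        return np.sin(traj[0] + z)     # g_0(y^(0), z)
    if d == 1:
        return np.sin(traj[1] - z)     # g_1(y^(0:1), z)
    return traj[2]                     # g_2(y^(0:2)) = y^(2)

# Assumes the read() function from the sketch in Section 2.2 is in scope.
draws = [read(0, [], simulator, g, rs=(0.74, 0.6), D=2) for _ in range(50_000)]
print(np.mean(draws))                  # ground truth: exp(-1/2) ~ 0.6065
```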
Our comparison result is summarized in Figure 1. Since the NMC methods and READ generate estimators in different ways, to make a fair comparison we compare the estimation errors against the total sample cost used by these three estimators. For NMC1 and NMC2, the total sample cost is \(n=N_{0}N_{1}N_{2}\). For READ, the total sample cost is random; therefore, we use its expected value, which equals \(4.625\times\) the number of repetitions of Algorithm 1. The slopes of the blue, red, and green lines, which correspond to the empirical convergence rates of READ, NMC1, and NMC2, equal \(-0.97,-0.35,-0.47\), respectively. They match well with the theoretical predictions: \(n^{-1}\) in Corollary 2.3 for READ, \(n^{-1/3}\) for NMC1, and \(n^{-1/2}\) for NMC2 in Rainforth et al. (2018). It is clear from Figure 1 that READ has a significant advantage over NMC estimators, with both a faster convergence rate and orders of magnitude lower errors.
We also repeatedly call Algorithm 1 \(10^{6}\) times and plot the running averages of our estimates in Figure 2. Our estimator becomes more accurate when we increase the number of repetitions. For each \(k\in\{1,2,\ldots,10^{6}\}\), we also calculate the standard deviation (SD) of the first \(k\) repetitions and use Mean \(\pm 1.96\) SD to form the \(95\%\) confidence interval. It is also clear from Figure 2 that our confidence intervals always include the ground-truth, suggesting the high accuracy of our method. In contrast, constructing confidence intervals for NMC estimators is much more time-consuming.
Some additional statistics and an extra experiment are provided in Appendices E and F.
Figure 1: Comparison of the empirical MSEs of estimating the RNE among READ (blue), NMC1 (red), and NMC2 (green). All logarithms are base 10. Each method’s empirical errors are calculated based on 20 independent repetitions.
Figure 2: The trace plot (solid blue curve) of the running averages of READ. The blue dotted curves are the 95% confidence intervals. The red dashed line is the ground truth \(\exp(-1/2)\).
## 4 Further discussions
Here we provide some remarks for practical implementation and discuss some potential generalizations. The users need to specify the parameters \(\{r_{d}\}_{d=0}^{D-1}\) when implementing Algorithm 1. Larger values of \(r_{i}\) lead to a shorter time for each implementation but potentially larger variance. When some \(r_{i}\) is not chosen according to Theorem 2.2 or 2.4, the algorithm can still be implemented, but the variance may be infinite. The trade-off between the values of \(\{r_{d}\}\) and the fluctuations of the resulting estimator is problem-specific. In practice, knowing how many repetitions are sufficient is important to provide an accurate estimator. One possible way is to bound certain moments of READ and use Corollary 2.3 or 2.5 to choose a sufficiently large \(n\). But this bound can be problem-specific and very conservative. Instead, we follow Glynn and Rhee (2014) and suggest the following adaptive stopping rule: users first specify a precision-level \(\varepsilon\) and a small \(\delta\%\). When repeatedly implementing Algorithm 1, users calculate the empirical \((1-\delta\%)\) confidence interval \([L_{\delta}(k),U_{\delta}(k)]\) for first \(k\) repetitions in the same way as Section 3 for every \(k\). Users can stop when the width of the confidence interval is less than \(2\varepsilon\). The validity of this stopping rule is proven in Glynn and Whitt (1992).
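A minimal sketch of this stopping rule is given below (again, not the authors' code); `draw` is a placeholder for any callable returning one independent replicate of READ, and the interval is formed with the usual standard-error expression Mean \(\pm z\,\)SD\(/\sqrt{k}\).

```python
import numpy as np

def run_until_precise(draw, eps, z=1.96, min_reps=100, max_reps=10**7):
    """Average i.i.d. replicates until the CI half-width drops below eps."""
    vals = []
    while len(vals) < max_reps:
        vals.append(draw())
        k = len(vals)
        # Check the empirical confidence interval every 100 replicates.
        if k >= min_reps and k % 100 == 0:
            half_width = z * np.std(vals, ddof=1) / np.sqrt(k)
            if half_width <= eps:
                break
    return float(np.mean(vals)), len(vals)
```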
There are two directions for further extensions. First, in our paper, each \(y^{(i)}\) is assumed to be univariate for concreteness. Extending this assumption to multivariate \(y^{(i)}\) should be straightforward and the computational cost of Algorithm 1 scales linearly with the dimensionality of \(y^{(i)}\), which may be another appealing property of our method. Second, in this paper, we only consider the 'fix \(D\)' regime and construct estimators with optimality guarantees. However, the cost of Algorithm 1 scales exponentially with \(D\). Therefore, although our algorithm is more efficient than the NMC estimator
for every fixed \(D\), neither method is practical when \(D\) becomes too large. Indeed, the poor scaling with \(D\) is a common issue in related literature such as Glasserman and Yu (2004); Zanger (2013) and seems unavoidable. An interesting direction would be to construct modifications of Algorithm 1 under extra practical assumptions for large or infinite \(D\). For example, if we know that the 'influence' of \(\gamma_{d}\) on \(\gamma_{0}\) decays exponentially or double-exponentially with \(d\), it is then sufficient to truncate the depth to \(\tilde{D}\coloneqq\log(1/\varepsilon)\) or \(\sqrt{\log(1/\varepsilon)}\). We hope to report progress in the future.
|
2304.01055
|
Eigen-Factors an Alternating Optimization for Back-end Plane SLAM of 3D
Point Clouds
|
Modern depth sensors can generate a huge number of 3D points in few seconds
to be latter processed by Localization and Mapping algorithms. Ideally, these
algorithms should handle efficiently large sizes of Point Clouds under the
assumption that using more points implies more information available. The Eigen
Factors (EF) is a new algorithm that solves SLAM by using planes as the main
geometric primitive. To do so, EF exhaustively calculates the error of all
points at complexity $O(1)$, thanks to the {\em Summation matrix} $S$ of
homogeneous points.
The solution of EF is highly efficient: i) the state variables are only the
sensor poses -- trajectory, while the plane parameters are estimated previously
in closed from and ii) EF alternating optimization uses a Newton-Raphson method
by a direct analytical calculation of the gradient and the Hessian, which turns
out to be a block diagonal matrix. Since we require to differentiate over
eigenvalues and matrix elements, we have developed an intuitive methodology to
calculate partial derivatives in the manifold of rigid body transformations
$SE(3)$, which could be applied to unrelated problems that require analytical
derivatives of certain complexity.
We evaluate EF and other state-of-the-art plane SLAM back-end algorithms in a
synthetic environment. The evaluation is extended to ICL dataset (RGBD) and
LiDAR KITTI dataset. Code is publicly available at
https://github.com/prime-slam/EF-plane-SLAM.
|
Gonzalo Ferrer, Dmitrii Iarosh, Anastasiia Kornilova
|
2023-04-03T15:02:42Z
|
http://arxiv.org/abs/2304.01055v2
|
# Eigen-Factors an Alternating Optimization for Back-end Plane SLAM of 3D Point Clouds
###### Abstract
Modern depth sensors can generate a huge number of 3D points in a few seconds to be later processed by Localization and Mapping algorithms. Ideally, these algorithms should efficiently handle large Point Clouds under the assumption that using more points implies more information available. The Eigen Factors (EF) is a new algorithm that solves SLAM by using planes as the main geometric primitive. To do so, EF exhaustively calculates the error of all points at complexity \(O(1)\), thanks to the _Summation matrix_ \(S\) of homogeneous points.
The solution of EF is highly efficient: i) the state variables are only the sensor poses - trajectory, while the plane parameters are estimated previously in closed form, and ii) the EF alternating optimization uses a Newton-Raphson method by a direct analytical calculation of the gradient and the Hessian, which turns out to be a block diagonal matrix. Since we need to differentiate over eigenvalues and matrix elements, we have developed an intuitive methodology to calculate partial derivatives in the manifold of rigid body transformations \(SE(3)\), which could be applied to unrelated problems that require analytical derivatives of certain complexity.
We evaluate EF and other state-of-the-art plane SLAM back-end algorithms in a synthetic environment. The evaluation is extended to ICL dataset (RGBD) and LiDAR KITTI dataset. Code is publicly available at [https://github.com/prime-slam/EF-plane-SLAM](https://github.com/prime-slam/EF-plane-SLAM).
SLAM, Plane SLAM, Planar Bundle Adjustment, Differentiation in \(SE(3)\).
## I Introduction
The advent of modern sensing capabilities is offering the possibility to perceive 3D depth, i.e. Point Clouds (PC), at a high streaming rate from diverse sensors, such as 3D LiDARs, Time of Flight cameras, RGB-D, etc. The more points are used, the more information is gathered from the environment. However, is it possible to evaluate all observed points and improve accuracy without a considerable increase in complexity? This is the main question this paper tries to address.
Aligning a sequence of observed PCs can be formulated as a Simultaneous Localization and Mapping (SLAM) [1] problem, where full trajectory and map geometry are purely estimated from the sensor's observed 3D points. Similarly, this problem definition fits perfectly the Bundle Adjustment (BA) [2] problem. In this paper, we are interested in the design and study of a new observation function that allows to formulate the SLAM (or BA) problem by directly considering the sensor's observed points in an efficient manner.
An important hindrance is that the sensor's points from PCs are sampled from the scene geometry; consequently, it is highly unlikely to observe exactly the same point in later observations. Therefore, one needs to define some representation of this inherent geometry that is capable of providing the point error in an efficient manner. We will choose planar surfaces as landmarks due to their regularity and simple constraint equation. These geometrical surfaces are commonly present in indoor and urban areas. Plane SLAM [3, 4] approaches use planes as landmarks, but the observations are pre-processed into planar features; hence, 3D points are not directly used, with a consequent performance deterioration.
We propose a plane SLAM technique for back-end optimization, the Eigen-Factors (EF), tailored for 3D Point Clouds (PC) from RGBD and LiDAR, while being capable of processing a large amount of points with constant complexity. This paper is an improved version of the former EF [5], overcoming its main limitation (first order optimization). The EF uses planar geometric constraints for the alignment of a trajectory described by a moving sensor. To do so, EF aggregates all observed points into the _Summation matrix_\(S\), a summation of outer product of homogeneous points and directly calculates the exact point error at \(O(1)\) complexity.
In addition, EF optimizes the point to plane error w.r.t. the trajectory. The plane variables are calculated in closed form in an early step and the resultant problem is an alternating optimization.
The gradient and the Hessian are directly calculated by reformulating the problem in terms of manifold optimization on \(SE(3)\) and retractions. Unfortunately, EF does not allow for the Gauss-Newton approximation, since there are no residuals but eigenvalues, so instead we use Newton-Raphson optimization. In order to differentiate over \(SE(3)\), we have proposed a methodology which greatly simplifies the calculation of derivatives of any order and could be used on other problems where an analytical expression is hard to derive, as in our case through eigenvalues or matrix elements. For our particular configuration, the resulting Hessian is block diagonal, which has an impact on the optimization process, since its inversion is extremely efficient.
The contributions of the Eigen Factors are the following:
* Matrix summation \(S\) to calculate all point to plane errors at \(O(1)\) complexity;
* Alternating optimization where first the plane parameters are calculated in closed form and then the reduced dimensionality problem (only trajectory) is estimated by Newton-Raphson;
* Efficient second order optimization of EF with analytical calculation of the gradient and the Hessian (block
diagonal).
* Methodology to calculate derivatives of any function involving the manifold of rigid body transformations \(SE(3)\);
The evaluation compares EF with other state-of-the-art plane SLAM back-end algorithms in synthetic, RGBD, and LiDAR environments, since we believe these are the most useful settings for plane SLAM. In order to clearly understand all the details of each method, we have decided to compare only the back-end and provide all methods with exactly the same input. The front-end has an impact on the final performance of the methods, so we have decided to isolate this effect.
The ideal application of EF is map refinement of 3D PC by optimizing the plane error over trajectories.
## II Related Work
The initial motivation for EF is to align PCs by minimizing the point error. The Iterative Closest Point (ICP) algorithm [6, 7, 8, 9] fits a pair of PCs by iteratively finding correspondences and solving the optimization problem [10, 11, 12]. Existing solutions include matching point-to-point, point-to-line [13], point-to-plane, and other variants emulating planes [14, 15] or assuming a general geometry. These techniques require processing all points at every iteration and can achieve highly accurate results.
One can bring this idea to a sequence of PC or Multi-view registration. An early work [16] proposed to match all points in every view, reducing the overall point error, but requiring huge memory resources and intense computations. Improvements on Multi-view registration targeted efficiency, for instance considering pairs of PC as constraints [17].
Although computing the overall point error is a natural choice for mapping and SLAM, researchers have been trying to reduce the _a priori_ high computational requirements of these techniques. In this regard, processing plane landmarks for each PC observation could be seen as a proxy for computing the point error at a reduced computational cost. Therefore, plane landmark-based SLAM optimizes the trajectory and a map of plane landmarks [3, 4]. A multitude of variants have been proposed, some of them focusing on the plane representation [18, 19], adding planar constraints [20], or using a mixture of points and planes [21, 22]. The key problem of plane landmarks is that, after extracting planar features from noisy points, there is an inevitable loss of information compared to jointly considering all points. EF, on the other hand, keeps all points' information.
Modern mapping approaches, in order to ensure real-time (or close) conditions, aggregate observations to the map after an accurate scan-to-map, effectively marginalizing poses. Examples of these approaches include LOAM [23, 24] which combines PC alignment with mapping or LIO-mapping [25]. Surfel-based approaches [26, 27] follow the same principle dividing the scene in planar disks, obtaining impressive results. When considering loop closure, pure mapping approaches present limitations and require an extra effort to update their mapping solutions properly. SLAM approaches do not suffer this limitation, since in general both the trajectory and the map are estimated simultaneously, so they handle more naturally loop closure situations at the cost of higher computational resources.
When considering the overall point error, efficiency is of the essence. Our previous paper on EF [5] introduces for the first time the \(S\) matrix as a summation of outer products of the homogeneous points from each sensor pose. This allows optimizing the point to plane error with complexity \(O(1)\), which is the key requirement for our approach to work efficiently. Later, \(\pi\)-LSAM [28] makes use of the same matrix \(S\) to calculate the overall point to plane error to solve the Planar Bundle Adjustment, jointly estimating poses and plane landmarks. In a later work, the authors refer to this homogeneous summation matrix \(S\) as the _Integrated Cost Matrix_ (ICM) [22].
The BAREG [29] algorithm incrementally updates a \(3\times 3\) covariance matrix, without explicit re-evaluation of points, maintaining the same advantages as the \(S\) matrix. BALM2 [30] uses the same concept of the \(S\) matrix, which is referred to there as _Cluster Points_, and formalizes some properties of the \(S\) matrix such as superposition and transformation.
The idea of using the eigendecomposition from EF as an objective function is adopted by BALM [31], whose derivation is from a \(3\times 3\) covariance instead of the \(4\times 4\) homogeneous \(S\) matrix as in EF. The authors improve the efficiency [30] by calculating analytical derivatives of the gradient and Hessian, in the same spirit we propose in this paper. The derivations are however different: BALM's Hessian is dense whereas our derivation results in a block diagonal Hessian.
In BAREG [29], the authors use the eigendecomposition to demonstrate the equivalence of the point-to-plane error to the plane-to-plane error, with proper weights. This equivalence assumes that the plane parameters are optimal, which might be too restrictive a condition in practice. However, this reduction greatly lowers the complexity of the optimization step and improves its efficiency. It also requires a plane estimation at each observation, which has consequences when evaluating under strong point noise conditions.
In this work, we are also interested in assessing the quality of the map by using a mapping metric. The ground truth on some datasets will not be accurate enough for mapping purposes, so these assessment tools are necessary. Whereas there are many solutions for precise trajectory estimation from sequences of point clouds, the topic of measuring map quality has not received the same attention. The 3D graphics community proposes a set of full-reference metrics for point cloud compression and denoising tasks, including the p2point metric, the p2plane metric [32], angular similarity [33], projection-based methods [34], or SSIM [35, 36]. The majority of point-cloud-based metrics require a ground-truth reference, whereas in the evaluation scenario we propose (KITTI), 3D reconstruction ground truth will not be available. Therefore, no-reference evaluation is used in this work. In particular, Mean Map Entropy (MME) [37] and Mean Plane Variance (MPV) [38] evaluate noise in the aggregated point cloud, and the Mutually Orthogonal Metric (MOM) [39] correlates reconstruction noise with perturbations in the 3D poses.
## III Eigen-Factors: Point to Plane Error
A plane \(\pi\) in the 3D space is defined by a normal unit vector \(\eta\in\mathbb{S}^{2}\) and the plane distance to the origin \(d\in\mathbb{R}\):
\[\pi=\begin{bmatrix}\eta\\ d\end{bmatrix}\in\mathbb{P}^{3},\qquad\text{where }||\eta||=1. \tag{1}\]
A point \(p=[x,y,z]^{\top}\) lies on the plane \(\pi\) if and only if:
\[\pi^{\top}\begin{bmatrix}p\\ 1\end{bmatrix}=\pi^{\top}\tilde{p}=0. \tag{2}\]
With this constraint and given a set of noisy observed points from the same planar surface, one can solve the plane estimation problem mainly by two approaches. The **centered method** minimizes the overall point to plane error
\[\min_{\pi}\sum_{n=1}^{N}||\eta^{\top}p_{n}+d||^{2}. \tag{3}\]
The solution comes in two steps:
\[\min_{\eta}\left\{\sum_{n=1}^{N}\eta^{\top}\underbrace{(p_{n}-E\{p\})(p_{n}-E\{p\})^{\top}}_{\Sigma_{p}}\eta\right\} \tag{4}\] \[d=-E\{\eta^{\top}p\}, \tag{5}\]
first the eigendecomposition of the \(3\times 3\) matrix \(\Sigma_{p}\) solves the value of \(\eta\) and then \(d\) is obtained.
The **homogeneous method** calculates the point to plane error of a plane \(\pi\) without centering the data points:
\[\min_{\pi}\sum_{n=1}^{N}||\pi^{\top}\tilde{p}_{n}||^{2}=\pi^{\top}\tilde{P} \tilde{P}^{\top}\pi \tag{6}\]
where the matrix \(\tilde{P}\) is the stacked vector of \(N\) homogeneous points. This arrangement of terms in the homogeneous plane estimation method allows us to define the \(S\) matrix:
\[S=\tilde{P}\tilde{P}^{\top}=\sum_{n=1}^{N}\tilde{p}_{n}\tilde{p}_{n}^{\top}, \tag{7}\]
which is the _Summation_ of the outer product of all points in the plane, hence matrix \(S\). The solution of (6) is calculated using the eigendecomposition of the \(4\times 4\) matrix \(S\). In [40] there is a comparison of the two methods yielding similar results.
Our ultimate objective is the optimization of the trajectory and the geometrical features in the map. The disadvantage of the _centered method_ is that every time a new sample is added or modified, all calculations should be carried out again. It is more natural to develop the _homogeneous method_ to account for poses in a highly efficient solution.
Then, the Eigen-Factors is a generalization of the homogeneous plane estimation observed over a sequence of reference frames or poses \(T_{t}\in SE(3)\) at different instants of time \(t\in\{1,\dots,H\}\). It can also be understood as a trajectory \(\mathbf{T}=\{T_{1},\dots,T_{H}\}\). Each of these poses transforms points to a global reference frame \(g\), such that \({}^{g}T_{t}\), where we will omit the global frame reference for simplicity in following sections. The underlying idea is that the same plane \(\pi\) is observed from different poses \(T_{t}\) but the planar constraint must hold for all views. Accordingly, we can reformulate the plane estimation problem for the plane \(\pi\) as
\[\min_{\pi}\sum_{t=1}^{H}\sum_{n=1}^{N_{t}}||\pi^{\top}T_{t}\tilde{p}_{t,n}||^ {2}=\min_{\pi}\pi^{\top}(\sum_{t=1}^{H}T_{t}S_{t}T_{t}^{\top})\pi, \tag{8}\]
where the subscript of each point \(\tilde{p}_{t,n}\) denotes the reference frame \(t\) in which it was observed, and the matrix \(S_{t}\) includes all these \(N_{t}\) points as in (7). By re-arranging terms, one can see that the plane \(\pi\) is outside the summation, so the plane solution boils down to the eigendecomposition of this new matrix. Therefore, we define the variable \(Q\) as the summation
\[Q(\mathbf{T})=\sum_{t=1}^{H}T_{t}S_{t}T_{t}^{\top}=\sum_{t=1}^{H}Q_{t}(T_{t}), \tag{9}\]
where each component \(Q_{t}\) depends on the matrix \(S_{t}\), calculated only once, and the current 3D pose estimation \(T_{t}\). The matrix \(S_{t}\) expresses points in the local coordinate system at time \(t\) and is constant, while the matrix \(Q\) is expressed in the global reference frame and is recalculated for each new updated trajectory.
The solution of the minimization problem is obtained directly by the eigendecomposition:
\[\pi^{*} =k\cdot v_{min}(Q(\mathbf{T})) \tag{10}\] \[\text{s.t. }||\eta^{*}||=1.\]
The minimum eigenvector \(v_{min}\) is proportional to the plane solution. The exact solution, in order to have geometrical meaning, must fulfill the definition of a plane (1) with unit normal \(\eta\). Therefore, it is multiplied by the scalar value \(k\). Without this correction, each of the errors would be scaled differently and the solution would be biased. The summation of the point to plane squared error (for plane \(\pi\)) is equal to:
\[\min_{\pi}\pi^{\top}Q(\mathbf{T})\pi=k^{2}\lambda_{min}(Q(\mathbf{T}))=\lambda_{\pi}( \mathbf{T}) \tag{11}\]
where \(\lambda_{min}\) is the minimum eigenvalue, which, after scaling by \(k^{2}\), equals the minimum cost or minimum error after estimation, as easily derived from (10). Note that the minimization over the plane is implicit after solving the algebraic equation by the eigendecomposition, so the plane variables can be removed from the state variables.
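To make Eqs. (9)-(11) concrete, the snippet below is a small numerical sketch (not the released EF code) that builds \(Q\) from per-pose \(S_{t}\) matrices, extracts the plane from the minimum eigenvector, and returns the scaled error \(\lambda_{\pi}\); the toy data, pose values, and function name are chosen only for illustration.

```python
import numpy as np

def lambda_plane(S_list, T_list):
    """Point-to-plane squared error of one plane seen from several poses.

    S_list[t] is the 4x4 summation matrix of homogeneous points observed at
    pose t (Eq. 7); T_list[t] is the pose T_t mapping them to the global frame.
    Returns (lambda_pi, pi) following Eqs. (9)-(11).
    """
    Q = sum(T @ S @ T.T for S, T in zip(S_list, T_list))   # Eq. (9)
    w, V = np.linalg.eigh(Q)                                # eigenvalues in ascending order
    v_min = V[:, 0]
    k = 1.0 / np.linalg.norm(v_min[:3])                     # enforce ||eta|| = 1
    return (k ** 2) * w[0], k * v_min                       # Eqs. (11) and (10)

# Toy check: noiseless points on the plane z = 1 observed from two poses.
rng = np.random.default_rng(0)
pts = np.c_[rng.uniform(-1, 1, (50, 2)), np.ones(50)]       # global points with z = 1
P_h = np.c_[pts, np.ones(50)].T                              # homogeneous, 4 x N
T0 = np.eye(4)
T1 = np.eye(4); T1[:3, 3] = [0.3, -0.2, 0.0]                 # a second (known) pose
S_list = []
for T in (T0, T1):
    P_local = np.linalg.inv(T) @ P_h                         # points in the local frame
    S_list.append(P_local @ P_local.T)                       # Eq. (7)
err, pi = lambda_plane(S_list, [T0, T1])
print(err, pi)   # error ~ 0, plane ~ [0, 0, 1, -1] up to sign
```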
The function \(\lambda_{\pi}(\cdot)\) obtained in (11) for the plane \(\pi\) shows some interesting properties. In the ideal case of no-noise, this function should be zero and its values are always positive by construction \(\lambda_{\pi}(\cdot)\geq 0\). We could reformulate this as an **implicit observation function** of the form \(h(x,z)=0\), where in our particular case, the state \(x\) is the trajectory \(\mathbf{T}\) and the observation \(z\) are all points. In contrast, an explicit observation is of the form \(h(x)=z\) which when optimizing tries to fit the state with the observations.
Unfortunately, this implicit observation function is not suitable for general nonlinear least squares (NLLS). We will later derive a state estimation problem from \(\lambda_{\pi}(\cdot)\), where the total cost to minimize is
\[C(\mathbf{T})=\sum_{m=1}^{M}\lambda_{\pi_{m}}(\mathbf{T}) \tag{12}\]
given that \(M\) is the total number of observed planes. Some important remarks:
* The cost \(C(\mathbf{T})\) is equal to the total point-to-plane squared error evaluated for each of the poses \(T_{t}\).
* The complexity is independent of the number of points, since the matrix \(S_{t}\) requires them to be summed only once and use thereafter without modifications.
* The plane parameters are estimated in closed form and the state variables are the sequence of poses \(\mathbf{T}\).
This is the essence of the EF, a function that seeks to minimize the point to plane error w.r.t. the trajectory. The next sections are dedicated to the optimization of this function, which requires specific development.
Our approach is an Alternating optimization, where the plane parameters are obtained in closed form by the eigen-decomposition and then the reduced problem is estimated by optimizing w.r.t. the trajectory \(\mathbf{T}\). Other alternatives to the calculation of separable variables include the variable projection [41] or the general Wiberg approach [42]. We have chosen the alternating optimization due to its simplicity and efficiency, since the main purpose of EF is map refinement to improve the quality of the map.
## IV Retraction and Optimization on the Manifold
This section introduces some tools from optimization on matrix manifolds [43] required for the correct development of a second-order optimization method for the Eigen Factors. The problem of interest is to further develop the form \(Q(T):SE(3)\to\mathbb{R}^{4\times 4}\), so we will propose systematic rules to differentiate any function involving \(SE(3)\) in a simple way.
A Rigid Body Transformation (RBT), denoted as \({}^{f}T_{o}\in SE(3)\), allows to transform points from an origin reference system \(o\) to a final reference system \(f\). It also transforms reference frames or simply expresses a 3D pose uniquely. More formally, the Special Euclidean group is
\[SE(3)=\left\{T=\begin{bmatrix}\mathbf{R}&\mathbf{t}\\ 0&1\end{bmatrix}\,|\,\mathbf{R}\in SO(3)\,\wedge\,\mathbf{t}\in\mathbb{R}^{3} \right\}. \tag{13}\]
In addition, RBT is a matrix group \(SE(3)\subset\mathbb{R}^{4\times 4}\) and due to its inner structure and properties is a Riemannian manifold. As such, there is a smooth mapping (chart) from a subgroup of \(SE(3)\) elements to the vector space \(\mathbb{R}^{6}\). It is well known that the dimensionality of a 3D pose is six, however, there exist several possible mappings.
Our interest is on the optimization of a cost function whose input is an element of \(SE(3)\) or more generally, an element of a manifold \(\mathcal{M}\). Let the function \(f(x):\mathcal{M}\to\mathbb{R}\) be a mapping from a manifold to a real value. Then, if one applies directly the definition of directional derivative
\[\lim_{h\to 0}\frac{f(x+h\eta)-f(x)}{h}, \tag{14}\]
one finds that this operation is not well defined, since the sum \(x+h\eta\) does not in general belong to the manifold \(\mathcal{M}\).
One can solve this problem by using a _retraction_ mapping.
**Definition 1**.: _A retraction \(R_{T}(\xi)\) is a smooth mapping from the tangent space around the \(T\) element to the manifold:_
\[R_{T}(\xi):\mathbb{R}^{6}\to SE(3)\,. \tag{15}\]
_Two conditions must be satisfied: i) \(R_{T}(0_{T})=T\) and ii) local rigidity (see Ch.4 [43])._
Accordingly, we can compose a new function when using a retraction:
\[\hat{f}_{T}=f\circ R_{T}, \tag{16}\]
from the tangent space \(\mathbb{R}^{6}\) to a real value. This new function \(\hat{f}_{T}\) admits a directional derivative (14). As a convention in this paper, all retractions are denoted with the _hat_ superscript.
As a result, an optimization problem \(x^{\prime}=x+\Delta x\), which is not well defined for elements of a manifold \(x\in\mathcal{M}\), becomes well defined when using a retraction
\[T^{\prime}=R_{T}(\Delta\xi). \tag{17}\]
In the case of first order optimization, the update is proportional to the gradient direction \(\Delta\xi=-\alpha\nabla_{\xi}\hat{f}_{T}\). Gradient decent is one of the simplest methods for optimization: it requires a direction and a magnitude. Second-order optimization methods, provide the direction of descent \(\Delta\xi=-\alpha(\nabla_{\xi}^{2}\hat{f}_{T})^{-1}\nabla_{\xi}\hat{f}_{T}\). The advantage is that the Hessian already weights this descent direction, resulting in improved convergence rates.
In the particular case of \(SE(3)\), its tangent space has a structure of Euclidean space. Only under these conditions, the gradient and the Hessian are well defined after using a retraction:
\[\nabla_{T}f(T) =\nabla_{\xi}\hat{f}_{T}(0_{T})\] \[\nabla_{T}^{2}f(T) =\nabla_{\xi}^{2}\hat{f}_{T}(0_{T}).\]
These results are significant since they will allow us to use the concept of directional derivatives (14) by simply applying real analysis to the calculation of the gradient and Hessian in Sec. V. By doing that, we will implement a second-order optimization method that has almost no need for hyper-parameters and enjoys a quadratic convergence rate [44].
### _Differentiating over the Exponential Map in \(SE(3)\)_
The exponential map is the most natural retraction choice: its definition is highly related with the concept of perturbations over \(SE(3)\) which will facilitate the calculation of derivatives in the following sections. There are other options for choosing a retraction mapping, as long as they satisfy the definition.
The exponential map defines the retraction \(R_{T}\) in the following way:
\[R_{T}(\xi)=\mathrm{Exp}(\xi)T. \tag{18}\]
We follow a left-hand-side convention, but the right-hand-side is a valid retraction and it would require a similar derivation of the terms. The retraction around the identity can be expanded by using the matrix exponential definition as a Taylor series
\[R_{I}(\xi)=\mathrm{Exp}(\xi)I=I+\xi^{\wedge}+\frac{1}{2}(\xi^{\wedge})^{2}+o(| \xi|^{3}), \tag{19}\]
where the vector \(\xi=[\theta^{\top},\rho^{\top}]^{\top}=[\xi_{1},\ldots,\xi_{6}]^{\top}\in\mathbb{R}^{6}\) and the matrix
\[\xi^{\wedge}=\begin{bmatrix}\theta^{\wedge}&\rho\\ 0&0\end{bmatrix}=\begin{bmatrix}0&-\theta_{3}&\theta_{2}&\rho_{1}\\ \theta_{3}&0&-\theta_{1}&\rho_{2}\\ -\theta_{2}&\theta_{1}&0&\rho_{3}\\ 0&0&0&0\end{bmatrix} \tag{20}\]
is the Lie Algebra matrix group \(\mathfrak{se}(3)\) that represents the group of infinitesimal RBT around the identity. See [45, 46] for a comprehensive explanation on the topic of RBT and its Lie algebra. This matrix can be rearranged such that it has a vector space structure
\[\xi^{\wedge}=G_{1}\xi_{1}+G_{2}\xi_{2}+\ldots+G_{6}\xi_{6}, \tag{21}\]
where \(G_{i}\) is the \(4\times 4\) matrix of the \(i\)-th generator and \(\mathbf{G}=\{G_{i}\},\ i=1,\ldots,6\), is the basis.
After defining the retraction map (18), the next step is to obtain the derivative of the retraction, i.e., the exponential map evaluated at the zero element.
**Lemma 1**.: _The derivative of the exponential map with respect to one of the Lie coordinates \(\xi_{i}\) is its corresponding matrix generator_
\[\frac{\partial\operatorname{Exp}(\xi)}{\partial\xi_{i}}=G_{i}. \tag{22}\]
Proof.: One can expand analytically the definition of the matrix exponent in (19)
\[\frac{\partial\operatorname{Exp}(\xi)}{\partial\xi_{i}} =\frac{\partial}{\partial\xi_{i}}\Big{(}I+\xi^{\wedge}+\frac{1}{2}(\xi^{\wedge})^{2}+o(|\xi|^{3})\Big{)}\bigg{|}_{\xi=0}\] \[=G_{i}+\frac{1}{2}G_{i}\xi^{\wedge}+\frac{1}{2}\xi^{\wedge}G_{i}+\ldots\bigg{|}_{\xi=0}=G_{i} \tag{23}\]
where, after differentiating each of the terms, the higher-order terms vanish when evaluated at \(\xi=0\).
**Lemma 2**.: _The second derivative of the exponential with respect to the Lie coordinates \(\xi_{i}\) and \(\xi_{j}\) is_
\[\frac{\partial^{2}\operatorname{Exp}(\xi)}{\partial\xi_{j}\partial\xi_{i}}= \frac{1}{2}G_{i}G_{j}+\frac{1}{2}G_{j}G_{i}. \tag{24}\]
Proof.: One can repeat the same procedure to expand analytically the second-order derivative:
\[\frac{\partial^{2}\operatorname{Exp}(\xi)}{\partial\xi_{j} \partial\xi_{i}} =\frac{\partial^{2}}{\partial\xi_{j}\partial\xi_{i}}\Big{(}I+\xi^ {\wedge}+\frac{1}{2}(\xi^{\wedge})^{2}+o(|\xi|^{3})\Big{)}\bigg{|}_{\xi=0}\] \[=\frac{\partial}{\partial\xi_{j}}\Big{(}G_{i}+\frac{1}{2}G_{i} \xi^{\wedge}+\frac{1}{2}\xi^{\wedge}G_{i}\Big{)}\bigg{|}_{\xi=0}\] \[=\frac{1}{2}G_{i}G_{j}+\frac{1}{2}G_{j}G_{i} \tag{25}\]
This compact result is a \(4\times 4\) matrix, corresponding to each of the \((i,j)\) elements of the Hessian. The overall tensor \(\frac{\partial^{2}\operatorname{Exp}(\xi)}{\partial\xi^{2}}\) has dimensions \(4\times 4\times 6\times 6\) when considering each of its coordinates \((i,j)\).
The idea of retraction functions plus the first and second-order derivatives of the exponential map will be enormously useful when calculating derivatives of more complex functions, such as the matrices in the EF, as we show in the next section.
## V Eigen-Factors Differentiation
We will start by redefining the key functions, now as a retraction function (denoted by the \(\hat{\cdot}\) indicator). The \(Q\) matrix in (9) was defined as a function of the trajectory \(\mathbf{T}\). If only considering the transformation \(T_{t}\) at time \(t\), then \(Q=Q_{t}\) can be redefined as the retraction function (16) of the form \(\hat{Q}_{t}=Q_{t}\circ R_{T_{t}}:\mathbb{R}^{6}\to\mathbb{R}^{4\times 4}\), such that
\[\hat{Q}_{t}(\xi_{t})=\operatorname{Exp}(\xi_{t})\cdot Q_{t}\cdot\operatorname {Exp}(\xi_{t})^{\top}. \tag{26}\]
The point-to-plane error, defined as \(\lambda_{\pi}(\mathbf{T})\) in the EF derivation (11), can also be redefined as the retraction function
\[\hat{\lambda}_{\pi}(\xi_{t})=\lambda_{\pi}\circ\hat{Q}_{t}:\mathbb{R}^{6}\to \mathbb{R} \tag{27}\]
which is now a well-defined function for calculating directional derivatives (14). One can evaluate the retraction around any other pose in the more general form \(\hat{\lambda}_{\pi}(\mathbf{\xi})\); however, for the development that follows, we will use the current time index \(t\) and later generalize the obtained expressions to the whole trajectory.
Now, it is necessary to propagate the gradient through the eigendecomposition.
**Proposition 1**.: _The derivative of the eigenvalue \(\lambda_{\pi}\) with respect to the pose \(T_{t}\) equals_
\[\nabla_{\xi_{t}}\hat{\lambda}_{\pi}=\frac{\partial\hat{\lambda}_{\pi}}{ \partial\xi_{t}}=\pi^{\top}\frac{\partial\hat{Q}_{t}}{\partial\xi_{t}}\,\pi. \tag{28}\]
Proof.: See the analytical derivation in the Appendix A-A.
Accordingly, the calculation of the gradient boils down to calculating the derivative of \(\hat{Q}_{t}\) with respect to the tangent space around the element \(T_{t}\) of the manifold. We differentiate (26) such that
\[\frac{\partial\hat{Q}_{t}}{\partial\xi_{t}}=\frac{\partial\text{Exp}(\xi_{t})} {\partial\xi_{t}}Q_{t}+Q_{t}\frac{\partial\text{Exp}(\xi_{t})^{\top}}{ \partial\xi_{t}}, \tag{29}\]
which is obtained by applying the product rule.
By using Lemma 1, one simply substitutes to obtain the derivative wrt the coordinate \(i\):
\[\frac{\partial\hat{Q}_{t}}{\partial\xi_{t,i}}=G_{i}\cdot Q_{t}+Q_{t}\cdot G_{i }^{\top}, \tag{30}\]
where each of these elements is a \(4\times 4\) matrix. The gradient is then obtained by multiplying this result as in (28), which yields a scalar value for each component; in total, a 6D vector over the components \(i=1,\ldots,6\).
This methodology is general for obtaining derivatives of functions w.r.t. poses in 3D and yields a compact result, even when the image of the function is a matrix.
### _Eigen Factors, Hessian_
The objective of this section is to obtain an analytical solution for the Hessian matrix \(\nabla_{\xi}^{2}\hat{\lambda}\). Similarly, we follow the same systematic procedure to derive the Hessian of the EF.
**Proposition 2**.: _The Hessian of the Eigendecomposition with respect to the pose \(T_{t}\) is_
\[\frac{\partial^{2}\hat{\lambda}_{\pi}}{\partial\xi_{t}^{2}}=\pi^{ \top}\frac{\partial^{2}\hat{Q}_{t}}{\partial\xi_{t}^{2}}\pi \tag{31}\] \[\frac{\partial^{2}\hat{\lambda}_{\pi}}{\partial\xi_{t}\partial\xi _{t^{\prime}}}=0,\qquad\forall t\neq t^{\prime}. \tag{32}\]
Proof.: See the analytical derivation in the Appendix A-B.
An important property derived from Proposition 2 is that the Hessian matrix is **block diagonal**, which implies a fast calculation of the optimization iterations and a reduced complexity of the algorithm.
Now, we will derive the second derivative for the pair \((\xi_{t,i},\xi_{t,j})\) of components of the vector \(\xi_{t}\). We calculate the derivative of \(\hat{Q}_{t}\) in (26) twice, by using the product rule:
\[\frac{\partial^{2}\hat{Q}_{t}}{\partial\xi_{t,j}\partial\xi_{t,i}} =\frac{\partial}{\partial\xi_{t,j}}\left(\frac{\partial\text{Exp}( \xi_{t})}{\partial\xi_{t,i}}Q_{t}+Q_{t}\frac{\partial\text{Exp}(\xi_{t})^{\top }}{\partial\xi_{t,i}}\right)\] \[=\frac{\partial^{2}\text{Exp}(\xi_{t})}{\partial\xi_{t,j} \partial\xi_{t,i}}Q_{t}+\frac{\partial\text{Exp}(\xi_{t})}{\partial\xi_{t,i}} \frac{\partial Q_{t}}{\partial\xi_{t,j}}+\] \[+\underbrace{\frac{\partial Q_{t}}{\partial\xi_{t,i}}\cdot\frac{ \partial\text{Exp}(\xi_{t})^{\top}}{\partial\xi_{t,j}}+Q_{t}\frac{\partial^{2 }\text{Exp}(\xi_{t})^{\top}}{\partial\xi_{t,j}\partial\xi_{t,i}}}_{B_{ij}^{ \top}}. \tag{33}\]
Given the symmetry in \(Q=Q^{\top}\), this expression is simplified and one has to calculate only the expression
\[B_{ij}=(G_{i}G_{j}+G_{j}G_{i})\frac{Q_{t}}{2}+G_{i}\frac{\partial Q_{t}}{ \partial\xi_{t,j}}, \tag{34}\]
and sum it with its transpose in order to obtain the components \((i,j)\) of the Hessian. Note that the Hessian is symmetric as well, and only the upper-triangular elements need to be calculated.
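To make the preceding derivation concrete, the sketch below, which is our own illustration with hypothetical helper names, evaluates the right-hand sides of (28) and (31) for a single pose by differentiating \(\hat{Q}_{t}\) in (26) once and twice with the generators \(G_{i}\) (Lemmas 1 and 2), and checks the resulting gradient against central finite differences of \(\xi\mapsto\pi^{\top}\operatorname{Exp}(\xi)Q\operatorname{Exp}(\xi)^{\top}\pi\).

```python
import numpy as np
from scipy.linalg import expm

def se3_generators():
    # same small helper as in the sketch after Eq. (21)
    G = [np.zeros((4, 4)) for _ in range(6)]
    for i, (r, c) in enumerate([(2, 1), (0, 2), (1, 0)]):
        G[i][r, c], G[i][c, r] = 1.0, -1.0
    for i in range(3):
        G[3 + i][i, 3] = 1.0
    return G

def ef_grad_hess(Q, pi):
    """Gradient (28)/(30) and per-pose Hessian block (31) of xi -> pi^T Exp(xi) Q Exp(xi)^T pi."""
    G = se3_generators()
    grad = np.array([pi @ (Gi @ Q + Q @ Gi.T) @ pi for Gi in G])
    hess = np.empty((6, 6))
    for i in range(6):
        for j in range(6):
            S = 0.5 * (G[i] @ G[j] + G[j] @ G[i])          # d^2 Exp / dxi_i dxi_j, Lemma 2
            d2Q = S @ Q + G[i] @ Q @ G[j].T + G[j] @ Q @ G[i].T + Q @ S.T
            hess[i, j] = pi @ d2Q @ pi
    return grad, hess

# numerical check against finite differences of the retraction cost
rng = np.random.default_rng(0)
P = rng.normal(size=(50, 4)); P[:, 3] = 1.0                # homogeneous points
Q = P.T @ P                                                # Eq. (9) for a single pose
pi = rng.normal(size=4); pi[:3] /= np.linalg.norm(pi[:3])  # a (non-optimal) plane estimate
G = se3_generators()
Exp = lambda xi: expm(sum(g * x for g, x in zip(G, xi)))
f = lambda xi: pi @ Exp(xi) @ Q @ Exp(xi).T @ pi
grad, hess = ef_grad_hess(Q, pi)
eps = 1e-5
fd = np.array([(f(eps * e) - f(-eps * e)) / (2 * eps) for e in np.eye(6)])
print(np.allclose(grad, fd, rtol=1e-4))                    # True: analytic gradient matches
```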
### _Trajectory Optimization_
In this section, we discuss different options for optimizing a trajectory \(\mathbf{T}=\{T_{1},\dots,T_{H}\}\) or its corresponding set of vectors \(\mathbf{\xi}=\{\xi_{1},\dots,\xi_{H}\}\). One can directly apply the EF gradient and Hessian at each pose in the trajectory, resulting in the joint gradient vector
\[\frac{\partial\hat{\lambda}_{\pi}}{\partial\mathbf{\xi}}=\left[\left(\pi^{\top} \frac{\partial Q}{\partial\xi_{1}}\pi\right)^{\top},\dots,\left(\pi^{\top} \frac{\partial Q}{\partial\xi_{H}}\pi\right)^{\top}\right]_{6H\times 1}^{\top}. \tag{35}\]
This aggregation corresponds to the Cartesian product of the manifolds of each of the poses. If no points are observed from a pose, the corresponding gradient block is zero, since that pose does not contribute to the EF cost. The same arrangement can be done for the Hessian:
\[\frac{\partial^{2}\hat{\lambda}_{\pi}}{\partial\mathbf{\xi}^{2}}=\text{diag}\left[ \left(\pi^{\top}\frac{\partial^{2}Q}{\partial\xi_{1}^{2}}\pi\right),\dots, \left(\pi^{\top}\frac{\partial^{2}Q}{\partial\xi_{H}^{2}}\pi\right)\right]_{6H \times 6H}. \tag{36}\]
The total EF cost (12) can be redefined in terms of retraction function w.r.t. the joint vector of poses \(\mathbf{\xi}\)
\[\hat{C}(\mathbf{\xi})=\sum_{m=1}^{M}\hat{\lambda}_{\pi_{m}}(\mathbf{\xi}). \tag{37}\]
The optimal update of the joint vector of pose coordinates is solved by
\[\Delta\mathbf{\xi}=-\alpha(\nabla_{\xi}^{2}\hat{C})^{-1}\nabla_{\xi} \hat{C} \tag{38}\] \[\nabla_{\xi}\hat{C}=\sum_{m}^{M}\frac{\partial\hat{\lambda}_{\pi _{m}}}{\partial\mathbf{\xi}}\] (39) \[\nabla_{\xi}^{2}\hat{C}=\sum_{m}^{M}\frac{\partial^{2}\hat{ \lambda}_{\pi_{m}}}{\partial\mathbf{\xi}^{2}}. \tag{40}\]
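Because of the block-diagonal structure (36), the update (38) never requires the full \(6H\times 6H\) inverse: each pose can be solved independently. The following is a minimal sketch of that step, our own illustration with synthetic data standing in for the per-pose quantities.

```python
import numpy as np

def newton_step_blockdiag(grad, hess_blocks, alpha=1.0):
    """Update (38) with the block-diagonal Hessian (36): one 6x6 solve per pose, O(H) overall."""
    return np.concatenate([
        np.linalg.solve(Hb, -alpha * grad[6 * t: 6 * t + 6])
        for t, Hb in enumerate(hess_blocks)
    ])

# toy usage: random SPD 6x6 blocks stand in for the per-pose terms pi^T (d^2 Q/dxi_t^2) pi
rng = np.random.default_rng(0)
n_poses = 5
blocks = []
for _ in range(n_poses):
    A = rng.normal(size=(6, 6))
    blocks.append(A @ A.T + 6.0 * np.eye(6))   # symmetric positive definite stand-in
grad = rng.normal(size=6 * n_poses)
dxi = newton_step_blockdiag(grad, blocks)
print(dxi.shape)   # (30,): one 6D increment per pose, cf. Eq. (38)
```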
### _Error Invariance to the Reference Frame_
Planes are elements of the projective space \(\mathbb{P}^{3}\) and they can be transformed from one reference frame to another, similarly to homogeneous points by RBT:
\[{}^{t}\pi={}^{t}T_{g}^{-\top}\cdot{}^{g}\pi, \tag{41}\]
where the plane \({}^{g}\pi\) in the global reference frame is transformed into \({}^{t}\pi\) in the local frame \(t\).
If we explicitly account for all reference frames in the point-to-plane error in (8), where the point \({}^{t}p\) was expressed in local coordinates and the plane was expressed in the global frame \(g\), then
\[{}^{g}\pi^{\top}\cdot{}^{g}T_{t}\cdot{}^{t}\hat{p}={}^{g}\pi^{\top}\cdot{}^{g} \tilde{p}={}^{t}\pi^{\top}\cdot{}^{t}\tilde{p}. \tag{42}\]
One can see that the error value is invariant to the choice of the reference: we could either transform both elements to the local reference frame or to the global frame. The EF development used the latter.
Not just that, but the planar constraint is fulfilled in any reference frame:
\[{}^{c}\pi^{\top}\cdot{}^{c}\tilde{p}=({}^{c}T_{g}^{-\top}g\pi)^{\top}\cdot{}^{c} T_{g}{}^{g}\tilde{p}={}^{g}\pi^{\top}\cdot{}^{g}\tilde{p}, \tag{43}\]
where the transformations cancel out for any new reference frame \(c\).
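This invariance is easy to verify numerically. The snippet below is an illustrative check, not part of the original implementation: it transforms a plane with \(T^{-\top}\) and a point with \(T\), as in (41), and confirms that the point-to-plane product (43) is unchanged.

```python
import numpy as np

rng = np.random.default_rng(1)
# a random rigid body transform T = [R t; 0 1]
R, _ = np.linalg.qr(rng.normal(size=(3, 3)))
R *= np.sign(np.linalg.det(R))                      # enforce det(R) = +1
T = np.eye(4); T[:3, :3] = R; T[:3, 3] = rng.normal(size=3)

# a plane pi = [eta, d] and a homogeneous point p lying on it (eta^T p + d = 0)
eta = rng.normal(size=3); eta /= np.linalg.norm(eta)
d = 0.7
p = np.append(np.cross(eta, rng.normal(size=3)) - d * eta, 1.0)

pi_g = np.append(eta, d)
pi_c = np.linalg.inv(T).T @ pi_g                    # plane transforms with T^{-T}, Eq. (41)
p_c = T @ p                                         # point transforms with T
print(pi_g @ p, pi_c @ p_c)                         # both ~0: Eq. (43) holds in any frame
```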
### _Data Centering_
From the last subsection, we can express the EF cost in a different reference frame by multiplying by a transformation \({}^{c}T_{g}\) (for simplicity we write \(T_{c}\)) and its inverse, so that the error is unaltered:
\[\lambda_{\pi} =\pi^{\top}\hat{Q}\pi=\pi^{\top}\underbrace{T_{c}^{-1}T_{c}}_{I} \cdot\hat{Q}\cdot T_{c}^{\top}\underbrace{T_{c}^{-\top}\pi}_{{}^{c}\pi}\] \[={}^{c}\pi^{\top}T_{c}\cdot\hat{Q}\cdot T_{c}^{\top}{}^{c}\pi. \tag{44}\]
The solution is the well known eigendecomposition of \(T_{c}\cdot\hat{Q}\cdot T_{c}^{\top}\). When applied to the gradient, yields
\[\frac{\partial\hat{\lambda}_{\pi}}{\partial\xi}={}^{c}\pi^{\top}T_{c}\frac{ \partial\hat{Q}}{\partial\xi}T_{c}^{\top}{}^{c}\pi. \tag{45}\]
If the error \(\lambda_{\pi}\) remains constant, then one can naturally ask if there is a reference frame more convenient than others to calculate the error. The matrix \(Q\) from (9) is composed of the following terms:
\[Q=\begin{bmatrix}Q_{p}&q\\ q^{\top}&N\end{bmatrix} \tag{46}\]
where there is a relation between \(q\) and the mean of all points from the plane (recall that all points are expressed in the same global reference frame)
\[q=\sum_{t}^{H}\sum_{n}^{N_{t}}T_{t}\cdot p_{t,n}=\sum_{t}^{H}q_{t}=N\cdot\mu. \tag{47}\]
The vector \(q\) is the summation of the \(q_{t}\), each being the sum of the points observed at pose \(t\) in global coordinates, and it equals \(N\) times the mean point \(\mu\). The \(3\times 3\) block \(Q_{p}\) is also related to the covariance
\[\Sigma_{p}\cdot N=Q_{p}-\frac{1}{N}q\cdot q^{\top} \tag{48}\]
Let the transformation matrix be a _data centering_ transformation, i.e. a simple translation by the mean value:
\[T_{c}=\begin{bmatrix}I&-\mu\\ 0&1\end{bmatrix} \tag{49}\]
In that case, the result of this transformation is
\[T_{c}\begin{bmatrix}Q_{p}&q\\ q^{\top}&N\end{bmatrix}T_{c}^{\top}=\begin{bmatrix}Q_{p}-\frac{1}{N}q\cdot q^ {\top}&0\\ 0&N\end{bmatrix}, \tag{50}\]
where the top-left block corresponds to the scaled covariance (48).
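The effect of the centering transformation can be checked directly: build \(Q\) from a cloud of homogeneous points as in (9), apply \(T_{c}\) from (49), and the block structure of (50) appears. A small sketch of our own, using arbitrary synthetic points:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200
pts = rng.normal(size=(N, 3)) + np.array([2.0, -1.0, 0.5])   # points in the global frame
P = np.hstack([pts, np.ones((N, 1))])                        # homogeneous points
Q = P.T @ P                                                  # Eq. (9)/(46): sum of p~ p~^T
mu = pts.mean(axis=0)                                        # q / N, cf. Eq. (47)

Tc = np.eye(4)
Tc[:3, 3] = -mu                                              # centering transform, Eq. (49)
Qc = Tc @ Q @ Tc.T                                           # Eq. (50)

cov_scaled = Q[:3, :3] - np.outer(Q[:3, 3], Q[:3, 3]) / N    # Eq. (48)
print(np.allclose(Qc[:3, 3], 0),        # off-diagonal block vanishes
      np.allclose(Qc[:3, :3], cov_scaled),
      np.isclose(Qc[3, 3], N))
```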
Alternatively, we could have asked: what is the transformation that, applied to the plane \(\pi\), yields a zero \(d\) component? Intuitively, the larger \(d\), the more sensitive the eigendecomposition is. Then
\[\begin{bmatrix}\eta\\ d\end{bmatrix}=\begin{bmatrix}I&0\\ -\mu^{\top}&1\end{bmatrix}\begin{bmatrix}\eta\\ 0\end{bmatrix}, \tag{51}\]
which is valid since we already defined \(d=-E\{\eta^{\top}p\}\) in (5).
We will make use of this form assuming \(T_{c}\) is constant. Strictly speaking this is not the case, since \(T_{c}=T_{c}(\mathbf{T})\). In Appendix B we show that the exact gradient is nevertheless identical, while the exact Hessian is slightly different and dense; however, this extra component diminishes with the number of poses. We have therefore decided to keep the block-diagonal Hessian matrix derived in the previous sections.
In the evaluations, we will use the method just described in this section, the EF-center, which will be referred to in short as EF.
## VI Experiments
Experiments are designed to evaluate the performance of different plane-based SLAM **back-end** approaches, formulated in the form of graph SLAM. Since the quality of the full SLAM pipeline can also be affected significantly by the detection and association algorithms on the front-end, the quality of the full pipeline remains out of the scope of this paper, and we focus only on the back-end. To achieve this, we consider pre-labeled data from different sensor modalities: we developed our own synthetic generator and use labeled data for ICL NUIM [47] and KITTI [48] sequences from the EVOPS benchmark [49].
The following approaches are taken for evaluation as the main representatives of graph-based plane SLAM methods: BAREG [29], \(\pi\)-Factor [22], BALM [30, 31], EigenFactor (EF, ours). BAREG, \(\pi\)-Factor and EigenFactor are implemented in our common framework mrob1 for factor-graph optimization problems. In particular, we use a Levenberg-Marquardt optimizer with a tolerance of \(10^{-2}\) for all the methods and the exact same initial conditions. Since the BALM formulation is also graph SLAM, we consider the author's original implementation with default parameters [30].
Footnote 1: [https://github.com/prime-slam/mrob](https://github.com/prime-slam/mrob)
### _Complexity Analysis_
The theoretical complexity analysis is depicted in Table I. We have included three categories that are required at each iteration step of the optimization, assuming that planes are observed over the full sequence.
The first column is the number of planes to be estimated. For instance, \(\pi\)-Factor does not re-evaluate the planes, since they are part of its state variables. The other methods evaluate each plane once, and BAREG needs to evaluate each plane at each pose in the trajectory. The second column, _evaluation_, refers to the complexity of evaluating the residuals for each method and calculating the Jacobians and Hessian if needed. We see how all methods have the same complexity, which depends on the number of planes \(M\) and poses in the trajectory \(H\).
Finally, the optimization complexity is related to the structure of the problem. For instance, BAREG and EF show linear complexity since they only have a block-diagonal matrix to invert. \(\pi\)-Factor estimates a larger state vector of size \(H+M\) and requires inverting a sparse matrix (we show the worst case); the Schur trick could be used to marginalize the \(M\) plane landmarks. BALM2 requires inverting a dense matrix, so it shows cubic complexity.
### _Synthetic planes_
In order to evaluate the quality of planar-based SLAM back-ends, we have developed a generator of synthetic environments with planes.
**Setup.** The generator provides a set of point clouds with planes and a reference trajectory for them, and can be parameterized by the number of poses, number of planes, number of points per plane, point-to-plane noise, and trajectory noise. An example of a generated synthetic environment is presented in Fig. 2. The default parameter configuration is the following: number of poses -- 10, number of planes -- 10, number of points per plane -- 50, point-to-plane noise -- 0.04 meters, and trajectory
\begin{table}
\begin{tabular}{|c||c|c|c|} \hline & \# Planes Estim. & Evaluation & Optimization \\ \hline BAREG & \(MH\) & \(O(HM)\) & \(O(H)\) \\ \hline \(\pi\)-Factor & 0 & \(O(HM)\) & \(O(H^{3}+M)^{*}\) \\ \hline BALM2 & \(M\) & \(O(HM)\) & \(O(H^{3})\) \\ \hline EF & \(M\) & \(O(HM)\) & \(O(H)\) \\ \hline \end{tabular}
\end{table} TABLE I: Complexity comparison of various plane SLAM methods.
perturbation per pose -- 0.05 meters and 5 degrees. In the evaluation, we chose one parameter to be swept over different values whereas the other parameters remain constant. Relative Pose Error [50] is used to estimate the accuracy of the methods, with the rotation and translation parts evaluated independently.
**Results and discussion** The results of the evaluation are depicted in Fig. 1. In general, over all parameter variations the methods can be ordered as follows: EF and \(\pi\)-Factor show almost the same quality, BAREG is in second place but still at a relatively small distance, and BALM diverges under large perturbations. Since the BALM pipeline is used with its built-in front-end, the divergence could be explained by this fact. It is worth noting that in the trajectory perturbation experiment, EF, \(\pi\)-Factor and BALM show consistent results over all perturbation ranges, meaning that they show good convergence and could be applied for global alignment, not only local trajectory refinement. Sweeping over parameters such as the number of poses, number of points per plane, and point-to-plane noise, the methods' quality is ranked in the same order, but it can be noticed that in extreme conditions, such as large plane noise or a large number of poses, EF shows more stable results than \(\pi\)-Factor.
### _RGBD_
The second stage of the evaluation includes quality estimation on RGBD data. Since the evaluation considers only back-ends, we take RGBD sequences from the EVOPS benchmark [49] with pre-labeled planes. The benchmark has labeled data for the ICL NUIM and TUM RGBD datasets. TUM RGBD has only small pieces of trajectories covered by a sufficient number of planes, therefore only ICL NUIM data is used.
**Setup** For the evaluation, we randomly sample 50 different subsequences, each 30 consecutive frames long. For every subsequence, the poses in the original trajectory are perturbed in the range of 0.001-0.15 degrees and 0.001-0.15 meters. Two configurations of the ICL NUIM dataset are used: the original one and one that emulates Kinect noise. For evaluation, the Relative Pose Error is used for the translation and rotation parts independently.
**Results and discussion** A qualitative example of pose perturbation and subsequent alignment by EF is depicted in Fig. 3. Since all methods give maps of good visual quality, only one method is chosen for visualization. The results of the evaluation with metrics are presented in Fig. 4. On depth data without noise, \(\pi\)-Factor and BAREG show the best results, EF is second but still with a very small error, and BALM shows good results only on small perturbations and diverges later, which could be explained by interference with the front-end error. On data that emulates Kinect noise the situation changes -- EF shows the best performance, degrading by one order of magnitude with respect
Fig. 1: Evaluation results in synthetic environments with planes swiping over such parameters as: pose perturbation, number of poses in trajectory, number of points per plane, point-to-plane noise. _Top_: RPE translation error in different simulator configurations, _Down_: RPE rotation error in different simulator configurations
Fig. 2: Synthetic Environment. Points are generated to emulate sampling over planes.
to the data without noise; the second best result is demonstrated by \(\pi\)-Factor, whose error is two orders of magnitude larger than on the original synthetic data, and BAREG is third with three orders of magnitude worse quality. As in the previous synthetic experiments, EF, \(\pi\)-Factor and BAREG demonstrate an almost constant error over all perturbation ranges. To sum up, in this experimental setting EF shows more stable results when the data is close to real, in comparison to the other methods, which degrade drastically when noisier data is provided.
### _LiDAR_
Finally, the methods are evaluated on the KITTI dataset with real LiDAR data.
**Setup** The original ground truth poses of the KITTI dataset are not suitable for evaluation using classical pose-based metrics, since they are noisy [49]. Therefore, the following scenario is used: the ground truth trajectory is used as the initial condition for all methods, and then the methods optimize over pre-labeled planes from the EVOPS benchmark. For the evaluation we consider 100 submaps of length 30 poses, randomly sampled from map 00. To estimate the quality of alignment, the no-reference map metrics MME [37], MPV [38] and MOM [39] are used.
**Results and discussion** No-reference map-metric statistics over the methods are presented in Tab. II; BALM is not reported in the results, since it does not converge on those sequences. According to the results, EF improves the poses, which gives better map quality on all metrics; a qualitative demonstration of this can be found in Fig. 5. The other methods sometimes show improvement over the ground truth poses, but not as drastically as EF. It is worth noting that the KITTI LiDAR data is noisy, and EF shows good results even in those conditions.
## VII Conclusions
We have presented the Eigen Factors method for back-end plane SLAM. The main contribution is the exact calculation of the point error over all points in a plane at \(O(1)\) complexity when evaluating the optimization cost. We have entirely reformulated the problem from the perspective of manifold optimization and derived an analytical solution for the gradient and the Hessian matrix of the EF by using a simple procedure to calculate derivatives. This derivation turns out to be very efficient for optimization, since it only requires the inversion of a block-diagonal matrix (the Hessian). The proposed method is an alternating optimization where first the plane parameters are calculated in closed form and then a Newton-Raphson optimization is done on the resulting subproblem by considering the estimated plane variables fixed.
For the evaluations, we have compared EF with other state-of-the-art methods in back-end plane SLAM, showing solid results across synthetic and real data for RGBD and LiDAR. EF presents overall good accuracy in all domains, and excellent results for different numbers of poses, numbers of planes, point noise and initial conditions. We have also evaluated no-reference metrics for the quality of the 3D reconstruction, where EF shows superior results.
\begin{table}
\begin{tabular}{|c||c|c|c|} \hline & MME \(\downarrow\) & MPV \(\downarrow\) & MOM \(\downarrow\) \\ \hline BAREG & 0.24 & 0.0276 & 0.0046 \\ \hline \(\pi\)-Factor & 0.26 & 0.0275 & 0.0028 \\ \hline GT & 0.23 & 0.0276 & 0.0029 \\ \hline EF (ours) & **0.17** & **0.0266** & **0.0023** \\ \hline \end{tabular}
\end{table} TABLE II: No-reference map metrics on KITTI dataset
Fig. 3: _Top_: Example of the map aggregated from 10 poses with noise of 1 degree and 0.01 meters per pose. _Middle_: Example of the map aggregated from 10 poses with noise of 5 degrees and 0.05 meters per pose. _Bottom_: Result of alignment using the EF algorithm.
## Appendix A
### _First Order Derivative over the Eigendecomposition_
Proof.: By definition, the eigendecomposition is expressed as \(\hat{Q}v=\hat{\lambda}v\). The vector of parameters \(v\) is a unit vector s.t. \(||v||^{2}=v^{\top}v=1\). One can exchange the unit vector \(v\) for any other vector \(\pi\in\mathbb{P}^{3}\), since \(\pi=kv\) and \(||\pi||^{2}=k^{2}\), from the definition of \(\hat{\lambda}_{\pi}\) in (11).
The identity \(\hat{Q}\pi=\hat{\lambda}\pi\) is differentiated w.r.t. each of the coordinates \(\xi_{t,i}\) of the variable, written for simplicity as \(\xi_{i}\),
\[\hat{Q}\frac{\partial\pi}{\partial\xi_{i}}+\frac{\partial\hat{Q} }{\partial\xi_{i}}\,\pi=\lambda\,\frac{\partial\pi}{\partial\xi_{i}}+\frac{ \partial\hat{\lambda}}{\partial\xi_{i}}\,\pi \tag{52}\] \[\text{s.t.}\quad\pi^{\top}\,\frac{\partial\pi}{\partial\xi_{i}}=0. \tag{53}\]
Taking into account that the matrix \(\hat{Q}\) is symmetric by construction, then
\[\hat{Q}=\hat{Q}^{\top}\quad\implies\quad\pi^{\top}\hat{Q}=\hat{\lambda}\pi^{ \top}. \tag{54}\]
One can pre-multiply the expression (52) by \(\pi^{\top}\), substitute (54) and do some manipulations:
\[\pi^{\top}\hat{Q}\,\frac{\partial\pi}{\partial\xi_{i}}+\pi^{\top}\frac{\partial\hat{Q}}{\partial\xi_{i}}\,\pi=\pi^{\top}\hat{\lambda}\,\frac{\partial\pi}{\partial\xi_{i}}+\pi^{\top}\frac{\partial\hat{\lambda}}{\partial\xi_{i}}\,\pi\] \[\hat{\lambda}\underbrace{\pi^{\top}\,\frac{\partial\pi}{\partial\xi_{i}}}_{0,\,(53)}+\pi^{\top}\frac{\partial\hat{Q}}{\partial\xi_{i}}\,\pi=\hat{\lambda}\underbrace{\pi^{\top}\,\frac{\partial\pi}{\partial\xi_{i}}}_{0,\,(53)}+\frac{\partial\hat{\lambda}}{\partial\xi_{i}}\underbrace{\pi^{\top}\pi}_{k^{2}},\] so that \[\pi^{\top}\frac{\partial\hat{Q}}{\partial\xi_{i}}\,\pi=k^{2}\,\frac{\partial\hat{\lambda}}{\partial\xi_{i}}=\frac{\partial\hat{\lambda}_{\pi}}{\partial\xi_{i}},\] which is the result in (28).
\[\pi\frac{\partial\hat{\lambda}_{\pi}}{\partial\xi_{i}}=k^{2}\frac{\partial\hat{Q}_{t }}{\partial\xi_{i}}\pi=\pi\frac{\partial\hat{\lambda}}{\partial\xi_{i}}k^{2}. \tag{59}\]
The constant value \(k^{2}\) cancels out, and this expression is substituted into (58), together with the symmetry of \(Q\) in (54):
\[\frac{\partial^{2}\hat{\lambda}_{\pi}}{\partial\xi_{j}\partial\xi_{i}}=\underbrace{\frac{\partial\pi^{\top}}{\partial\xi_{j}}\pi}_{0}\cdot\frac{\partial\hat{\lambda}}{\partial\xi_{i}}+\pi^{\top}\frac{\partial^{2}\hat{Q}_{t}}{\partial\xi_{j}\partial\xi_{i}}\pi+\frac{\partial\hat{\lambda}}{\partial\xi_{i}}\cdot\underbrace{\pi^{\top}\frac{\partial\pi}{\partial\xi_{j}}}_{0} \tag{60}\]
where the first and the last terms vanish due to (53). Finally
\[\frac{\partial^{2}\hat{\lambda}_{\pi}}{\partial\xi_{j}\partial\xi_{i}}=\pi^{ \top}\frac{\partial^{2}\hat{Q}_{t}}{\partial\xi_{j}\partial\xi_{i}}\pi. \tag{61}\]
The second condition can be easily verified by noting that the right hand side of (57) only depends on the matrix \(Q_{t}\) at pose \(t\) and the contribution from other poses at \(t^{\prime}\neq t\) vanishes after the first differentiation.
## Appendix B Approximation of the EF Center
The retraction of the centering transformation \(T_{c}\) for the particular pose at \(t\) can be written
\[R_{T_{c}}(\xi_{t}) =\begin{bmatrix}I&-\mu(\xi_{t})\\ 0&1\end{bmatrix}=\begin{bmatrix}I&-\frac{1}{N}\left(\sum_{\tau\neq t}^{H}q_{\tau}+\operatorname{Exp}(\xi_{t})q_{t}\right)\\ 0&1\end{bmatrix}=\] \[=\begin{bmatrix}I&\frac{1}{N}(q_{t}-\operatorname{Exp}(\xi_{t})q_{t})\\ 0&1\end{bmatrix}T_{c}=\mathcal{E}(\xi_{t})T_{c}. \tag{62}\]
where we have used the results from (47). Its derivative at any pose \(t\) for the \(i\)-th coordinate
\[\frac{\partial\mathcal{E}(\xi)}{\partial\xi_{i}}=\begin{bmatrix}0&-\frac{1}{ N}G_{i}q_{t}\\ 0&0\end{bmatrix}. \tag{63}\]
Therefore, the total gradient starting from (45)
\[\frac{\partial}{\partial\xi_{i}}\Big{(}\mathcal{E}(\xi_{t})T_{c} \operatorname{Exp}(\xi_{t})Q\operatorname{Exp}(\xi_{t})^{\top}T_{c}^{\top} \mathcal{E}(\xi_{t})^{\top}\Big{)}=\] \[\frac{\partial\mathcal{E}(\xi)}{\partial\xi_{i}}T_{c}QT_{c}^{ \top}+T_{c}G_{i}Q_{t}T_{c}^{\top}+(\ldots)^{\top} \tag{64}\]
where the first term after the equality is zero (see below) and the second term was already obtained in (45). There are two more terms as a result of the differentiation, but they happen to be the transposes of the first two. We will expand the first term and evaluate its contribution after multiplying by the plane \({}^{c}\pi\):
\[\begin{bmatrix}\eta^{\top}&0\end{bmatrix}\begin{bmatrix}0&-\frac{1}{N}G_{i}q_ {t}\\ 0&0\end{bmatrix}\begin{bmatrix}Q_{p}-\frac{1}{N}q\cdot q^{\top}&0\\ 0&N\end{bmatrix}\begin{bmatrix}\eta\\ 0\end{bmatrix}=0. \tag{65}\]
This expression is always zero \(\forall i\), therefore it has a null contribution to the gradient, so the result is exactly the same as that obtained in (45).
The Hessian will be dense, since there is a term from \(T_{c}\) that depends on all poses and does not cancel out as in the gradient case. One could derive it with the definition of the retraction, but this term is weighted roughly by \(\frac{1}{H^{2}}\), so its contribution diminishes quickly with the number of poses.
We have tested empirically that for 2 poses, the relative difference between the exact Hessian and the approximated Hessian (as a matrix norm) is around 30%, and for 20 poses the difference is less than 1%, so the approximation is well justified.
|
2306.00532
|
Orthonormal bases of extreme quantumness
|
Spin anticoherent states acquired recently a lot of attention as the most
"quantum" states. Some coherent and anticoherent spin states are known as
optimal quantum rotosensors. In this work, we introduce a measure of
quantumness for orthonormal bases of spin states, determined by the average
anticoherence of individual vectors and the Wehrl entropy. In this way, we
identify the most coherent and most quantum states, which lead to orthogonal
measurements of extreme quantumness. Their symmetries can be revealed using the
Majorana stellar representation, which provides an intuitive geometrical
representation of a pure state by points on a sphere. Results obtained lead to
maximally (minimally) entangled bases in the $2j+1$ dimensional symmetric
subspace of the $2^{2j}$ dimensional space of states of multipartite systems
composed of $2j$ qubits. Some bases found are iso-coherent as they consist of
all states of the same degree of spin-coherence.
|
Marcin Rudziński, Adam Burchardt, Karol Życzkowski
|
2023-06-01T10:35:45Z
|
http://arxiv.org/abs/2306.00532v3
|
# Orthonormal bases of extreme spin coherence
###### Abstract
Spin anticoherent states acquired recently a lot of attention as the most "quantum" states. Some coherent and anticoherent spin states are known as optimal quantum rotosensors. In this work we introduce a measure of spin coherence for orthonormal bases, determined by the average anticoherence of individual vectors, and identify the most and the least coherent bases which lead to orthogonal measurements of extreme coherence. Their symmetries can be revealed using the Majorana stellar representation, which provides an intuitive geometrical representation of a pure state by points on a sphere. Results obtained lead to maximally (minimally) entangled bases in the \(2j+1\) dimensional symmetric subspace of the \(2^{2j}\) dimensional space of quantum states of multipartite systems composed of \(2j\) qubits.
## I Introduction
Geometric methods play an essential role while studying physical systems in classical mechanics [1; 2], relativity [3], quantum mechanics [4; 5], and quantum field theory [6]. The stellar representation, also called _the Majorana representation_, is one of the important geometrical representations in quantum mechanics [7]. The stellar representation presents a spin-\(j\) pure state in \(N=2j+1\) dimensional Hilbert space as a collection of \(2j\) points on a sphere. The same constellation represents a symmetric state of a system consisting of \(2j\) qubits. In the case of a spin-\(\frac{1}{2}\) particle, it reduces to the celebrated Bloch representation of a two-level quantum system (qubit). This representation is used in various contexts such as spinor Bose gases [8; 9; 10; 11], entanglement classification in multiqubit systems [12; 13; 14; 15; 16; 17; 18; 19], the Berry phase associated with the cyclic evolution of the state [21; 22; 23], investigating Lipkin-Meshkov-Glick model [24; 25] and studying symmetries and properties of spin states [26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41].
The Majorana representation appears naturally in the context of \(SU(2)\) coherent, and anticoherent states of a given spin \(j\)[26; 27; 28; 29; 30; 31]. Note that the spin coherence is not basis dependent. Properties of the coherent state \(|\mathbf{n}\rangle\) of spin \(j\) closely resemble the classical state of spin \(j\) pointing in the direction given by vector \(\mathbf{n}\). Hence, in the Majorana representation, the most coherent state is represented by one \(2j\)-degenerated point on a sphere. On the other hand, the anticoherent state \(|\psi\rangle\) should "point nowhere", and be represented as \(2j\) points equally distributed on a sphere [39; 40; 41]. However, even if the polarization (coherence) disappears, the higher moments of coherence might not vanish. Hence, anticoherence is defined up to some order [26]. Highly anticoherent states turned out to be applicable as optimal quantum rotosensors [32; 33; 34; 35] and are related to spherical \(t\)-designs [36; 37; 38]. Experimental realization of some of those states was discussed in [42]. The Majorana representation has proven to be a suitable tool to study properties of anticoherent states.
While the construction of coherent states is straightforward, it is not possible to construct an orthonormal basis composed only of coherent states. Furthermore, much effort has been made to determine quantum states with the highest possible anticoherence, while the concept of the most anticoherent orthonormal basis of states is largely unexplored.
The main goal of this paper is to address the problem of finding the most coherent and the most anticoherent (the least coherent) orthonormal basis of quantum states that provide orthogonal quantum measurements of extreme spin coherence.
To highlight the symmetries that arise in the studied quantum structures, we use the Majorana stellar representation. The constellations representing the bases and vectors obtained in this study exhibit classical symmetries, such as a Platonic solid, its compound, or an Archimedean solid. Geometric configurations of Platonic solids appear in many areas of quantum information theory. In particular, they were recently used to construct particular classes of quantum measurements [43; 44], absolutely maximally entangled (AME) states [45] and Bell inequalities [46; 47].
This work is organized as follows. Section II recalls the Majorana representation, both for a spin-\(j\) state and for a symmetric state of \(2j\) qubits. Section III presents the measure of coherence of orthonormal bases, defined using an anticoherence measure initially introduced for quantum states [31]. Section IV describes our methods of searching for the most coherent and anticoherent orthonormal bases and presents the obtained results. Bases of extreme entanglement in the symmetric subspace of \(2j\)-qubit systems are discussed in Section V.
## II The stellar representation
The Bloch sphere is a geometrical representation of pure states of a two-level quantum system, often called a qubit. In particular, a state of a spin-\(\frac{1}{2}\) quantum system can be naturally represented as a point
on the sphere. Indeed, a pure quantum state
\[Z_{0}\,|\frac{1}{2},\frac{1}{2}\rangle+Z_{1}\,|\frac{1}{2},-\frac{1}{2}\rangle\,, \qquad|Z_{0}|^{2}+|Z_{1}|^{2}=1 \tag{1}\]
uniquely determines a complex number \(z=Z_{1}/Z_{0}\) (possibly \(z=\infty\)), which can be projected onto the surface of a 2-dimensional sphere by the stereographic projection:
\[z\mapsto\big{(}\theta,\phi\big{)}:=\big{(}2\ \text{arctan}|z|\,,\,\text{arg}(z) \big{)}, \tag{2}\]
where \(\theta\in[0,\pi]\) and \(\phi\in[0,2\pi]\) are the usual spherical coordinates. Note that the stereographic projection provides a bijection between the extended complex plane \(\mathbb{C}\cup\{\infty\}\) and the surface of the sphere; the inverse transformation is given by \(z=e^{i\phi}\tan(\theta/2)\). Furthermore, a complex number \(z\) uniquely defines the quantum state (1) satisfying the property \(z=Z_{1}/Z_{0}\).
In 1932 Ettore Majorana generalized the celebrated Bloch representation for arbitrary spin-\(j\) states [7]. The _stellar representation_ maps a spin-\(j\) state \(|\psi\rangle\) from \((2j+1)\)-dimensional Hilbert space \(\mathcal{H}_{2j+1}\) into a constellation of \(2j\) points on a sphere. More precisely, any spin-\(j\) state might be expressed in the angular momentum basis \(J_{z}\) as
\[|\psi\rangle=\sum_{m=-j}^{j}Z_{j-m}\,|j,m\rangle\ \in\ \mathcal{H}_{2j+1}, \tag{3}\]
where \(\sum_{m=-j}^{j}|Z_{j-m}|^{2}=1\). The state coefficients are further used to construct _the Majorana polynomial_ of degree \(2j\) in complex variable \(z\),
\[w(z)=\sum_{k=0}^{2j}(-1)^{k}Z_{k}\sqrt{\binom{2j}{k}}z^{2j-k}. \tag{4}\]
The aforesaid polynomial uniquely determines \(2j\) possibly degenerate roots \(z_{i}\), \(i=1,\ldots,2j\), which can be mapped onto the sphere by the stereographic projection (2). In this way, the state (3) is represented as \(2j\) points on a sphere, which are called _stars_. We refer to such a collection of \(2j\) stars written in the spherical coordinates \(\{\theta_{k},\phi_{k}\}_{k=1}^{2j}\) as the Majorana representation of a state \(|\psi\rangle\), and denote it by
\[\mathcal{M}(|\psi\rangle):=\{\theta_{k},\phi_{k}\}_{k=1}^{2j}. \tag{5}\]
For example, the state \(|j,j\rangle\) corresponds to the trivial Majorana polynomial \(w(z)=z^{2j}\), which has \(2j\)-degenerated root at \(z=0\), and hence is represented by \(2j\) stars at the north pole. More generally, the Majorana polynomial corresponding to \(|j,m\rangle\) state has two degenerated roots at \(z=0\) and \(z=\infty\) with multiplicities \(j+m\) and \(j-m\) respectively. Hence the state \(|j,m\rangle\) is represented by \(j+m\) stars at the north and \(j-m\) stars at the south pole. Note that for a spin-\(\frac{1}{2}\) particle, the Majorana representation, reduces to the Bloch representation of a state provided in (1).
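The construction (3)-(5) is straightforward to implement: form the Majorana polynomial (4), find its roots, and project them onto the sphere with (2); any drop in polynomial degree corresponds to roots "at infinity", i.e. stars at the south pole. The following minimal sketch is our own illustration, and the helper name `majorana_stars` is hypothetical.

```python
import numpy as np
from math import comb

def majorana_stars(Z):
    """Stars (theta, phi) of a spin-j state with coefficients Z_0..Z_{2j}  (Eqs. 3-5)."""
    n = len(Z) - 1                                    # n = 2j
    coeffs = [(-1) ** k * Z[k] * np.sqrt(comb(n, k)) for k in range(n + 1)]  # Eq. (4)
    roots = np.roots(coeffs)                          # finite roots of w(z)
    stars = [(2 * np.arctan(abs(z)), np.angle(z)) for z in roots]            # Eq. (2)
    stars += [(np.pi, 0.0)] * (n - len(roots))        # degree deficit: stars at the south pole
    return stars

# example: |1,0> should give one star at each pole, cf. the |j,m> discussion above
print(majorana_stars([0, 1, 0]))
```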
### The stellar representation for symmetric states
The stellar representation has a natural interpretation while identifying a spin-\(j\) system with a symmetric subspace of a system of \(2j\) qubits. The symmetric subspace of dimension \(2j+1\) is spanned by _the Dicke states_\(|D_{2j,k}\rangle\), which are a uniform superposition of states with a given number of \(k\) excitations [48]
\[|D_{2j,k}\rangle=\binom{2j}{k}^{-\frac{1}{2}}\sum_{\sigma\in\mathbb{S}_{2j}} \sigma\big{(}|0\rangle^{\otimes 2j-k}\otimes|1\rangle^{\otimes k}\big{)}, \tag{6}\]
where \(\sigma\in\mathbb{S}_{2j}\) denotes a permutation of subsystems determined by an element of the symmetric group \(\mathbb{S}_{2j}\). Any symmetric state may be uniquely expressed as a combination of Dicke states,
\[|\psi_{sym}\rangle=\sum_{k=0}^{2j}Z_{k}\,|D_{2j,k}\rangle\ \in\ \mathcal{H}_{2}^{ \otimes 2j}, \tag{7}\]
with normalization \(\sum_{k=0}^{2j}|Z_{k}|^{2}=1\). By identifying the eigenstates of the \(J_{z}\) operator with Dicke states
\[F\ :\ |j,m\rangle\ \mapsto\ |D_{2j,j-m}\rangle\,, \tag{8}\]
one can relate the \(\mathcal{H}_{2j+1}\) state space with the symmetric subspace of \(\mathcal{H}_{2}^{\otimes 2j}\), and hence a spin-\(j\) particle with a symmetric state of \(2j\) qubits, see Eqs. (3) and (7). The above equation determines the isomorphism \(F\) of the two spaces, and for any spin-\(j\) state \(|\psi\rangle\) we denote by \(|F(\psi)\rangle\) the related symmetric state of \(2j\) qubits. As we shall see, both states have the same _Majorana representation_, \(\mathcal{M}(|\psi\rangle)=\mathcal{M}(|F(\psi)\rangle)\).
Interestingly, there is another way of presenting a symmetric state of \(2j\) qubits. Consider the collection of \(2j\) quantum states
\[|\Phi_{k}\rangle=\cos(\tfrac{\theta_{k}}{2})\,|0\rangle+e^{i\phi_{k}}\sin( \tfrac{\theta_{k}}{2})\,|1\rangle \tag{9}\]
with \(\theta_{k}\in[0,2\pi],\phi_{k}\in[0,\pi]\) and \(k=1,\ldots,2j\). A symmetric superposition of their tensor products constitutes a symmetric state
\[|\psi_{sym}\rangle=\mathcal{N}\sum_{\sigma\in\mathbb{S}_{2j}}|\Phi_{\sigma(1) }\rangle\otimes\cdots\otimes|\Phi_{\sigma(2j)}\rangle\,, \tag{10}\]
where \(\sigma\in\mathbb{S}_{2j}\) runs over all permutations of indices, and \(\mathcal{N}\) denotes the normalization factor. A connection between those two representations of symmetric
states (7) and (10) is given by the Majorana representation. Consider a symmetric state (7) with state coefficients \(Z_{k}\). The related Majorana polynomial \(w(z)\), Eq. (4), has \(2j\) roots \(z_{i}\) with respect to \(z\) variable. On one hand, the stereographic projection \(z_{k}\mapsto(\theta_{k},\phi_{k})\), given by Eq. (2), maps roots of the polynomial \(w(z)\) onto the surface of a sphere. On the other hand, however, the set of angles \((\theta_{k},\phi_{k})\) provides an alternative description of the symmetric state, as a symmetrization (10) of \(2j\) qubit states (9) determined by angles \((\theta_{k},\phi_{k})\).
## III Measures of coherence
In this section, we introduce the concept of coherence in spin-\(j\) system and provide its quantitative description. Furthermore, we extended those concepts to the notion of coherence on an orthonormal basis.
A spin-\(j\) coherent state that points in direction \({\bf n}\) in \(\mathbb{R}^{3}\) is a state \(|{\bf n}\rangle\) for which the polarization vector \({\bf p}\) is of length \(j\) i.e.
\[{\bf p}\equiv\langle{\bf n}|{\bf J}|{\bf n}\rangle=j{\bf n}, \tag{11}\]
where \({\bf J}=(J_{x},J_{y},J_{z})\) is the spin-\(j\) operator, and \(\hbar\) is set to unity. In the Majorana representation, a spin-\(j\) coherent state is represented by \(2j\) degenerated stars on a sphere. Their position is given by \({\bf n}\). A spin state \(|\psi\rangle\) is anticoherent, if its polarization vector vanishes, \({\bf p}={\bf 0}\). One may introduce higher orders of anticoherence, namely a spin state \(|\psi\rangle\) is called \(t\)-_anticoherent_[26] if \(\langle\psi|({\bf n}\cdot{\bf J})^{k}|\psi\rangle\) is independent of \({\bf n}\) for \(k=1,\ldots,t\).
For a pure symmetric quantum state of \(2j\) subsystems \(|\psi\rangle\in\mathcal{H}_{2}^{\otimes 2j}\), we consider a t-partite reduced density operator,
\[\rho_{t}(\psi):={\rm tr}_{1,\ldots,2j-t}\ |\psi\rangle\left\langle\psi \right|, \tag{12}\]
and analyze its purity,
\[R_{t}(|\psi\rangle):={\rm tr}\left(\rho_{t}(\psi)^{2}\right). \tag{13}\]
This quantity can be used to quantify the coherence of the related system of spin-\(j\) particle [28]. Recall that the isomorphism (8) identifies a spin-\(j\) system with a symmetric state of \(2j\) qubits. Thus, Baguette and Martin [31] introduced the following measures of anticoherence of order \(t\geq 1\) based on the purity of the reduced state:
\[\mathcal{A}_{t}(|\psi\rangle)=\frac{t+1}{t}[1-R_{t}(F(|\psi\rangle))]. \tag{14}\]
where \(|\psi\rangle\in\mathcal{H}_{2j+1}\) is a spin-\(j\) system, and
\(F(|\psi\rangle)\in\mathcal{H}_{2}^{\otimes 2j}\) denotes the corresponding symmetric state, see the map (8). As discussed in Ref. [31] this quantity enjoys the following properties,
1. \(\mathcal{A}_{t}(|\psi\rangle)=0\iff|\psi\rangle\) is coherent,
2. \(\mathcal{A}_{t}(|\psi\rangle)=1\iff|\psi\rangle\) is t-anticoherent,
3. \(\mathcal{A}_{t}(|\psi\rangle)\in[0,1]\) for all \(|\psi\rangle\).
4. \(\mathcal{A}_{t}(|\psi\rangle)\) is invariant under phase changes and spin rotations,
and hence provides a plausible measure of \(t\)-anticoherence. Making use of this quantity, we propose the t-coherence measure \(\mathcal{B}_{t}\) for an orthonormal basis, defined through the arithmetic mean of the \(t\)-anticoherence measures of the constituent states
\[\mathcal{B}_{t}(U)=1-\sum_{i=1}^{N}\frac{\mathcal{A}_{t}(|\psi_{i}\rangle)}{N}. \tag{15}\]
Here \(N\)=\(2j+1\) denotes the dimension of the Hilbert space, \(U\) is a unitary matrix that represents a basis and \(|\psi_{i}\rangle\) is the i-th state in this basis. Observe that the smaller the value of the quantity \(\mathcal{B}_{t}(U)\), defined in Eq. (15), the less coherent, and hence the more anticoherent, is the analyzed basis determined by the unitary matrix \(U\).
## IV Bases of extreme coherence
In this section, we present the most coherent and the least coherent (the most anticoherent) bases in small dimensions. Note that such bases can be interpreted as orthogonal measurements of extreme coherence. We present both numerical and analytical results in dimensions \(N=3,4,5,6,7\), which correspond to spin \(j=1,\frac{3}{2},2,\frac{5}{2},3\). For convenience, we present the basis vectors \(\{|\psi_{i}\rangle\}\) in the form of a unitary matrix \(U\), where the i-th column of the matrix \(U\) corresponds to the i-th vector \(|\psi_{i}\rangle\) in the basis expressed in the angular momentum basis \(J_{z}\), i.e. \(U=(U_{ki})\), where \(U_{ki}=\langle j,j+1-k|\psi_{i}\rangle\). Unitary matrices corresponding to the bases maximizing, respectively minimizing, the coherence measure \(\mathcal{B}_{1}\) in dimension \(N=2j+1\) shall be denoted by \(U_{N}^{\rm max}\) and \(U_{N}^{\rm min}\), respectively. In general, bases of extreme coherence correspond to highly symmetric constellations of stars in the Majorana representation, see Fig. (2-10).
Note that the measures of coherence are invariant under \(SU(2)\) transformations, represented as an action of Wigner D-matrices. Any such action corresponds to a rotation of the sphere in the Majorana representation; for more details see Appendices E and F.
### Bases for N=3
Consider an orthonormal basis in \(\mathcal{H}_{3}\), corresponding to \(j=1\). Up to \(SU(2)\) transformations, corresponding to the rigid rotation of the entire sphere, any basis can be parametrized by three real parameters, see Appendix A. Coherence measures are invariant under the \(SU(2)\) transformations, so one may use this parameterization to find bases of extreme coherence. Therefore, the problem reduces to finding the global extrema of a function of three real variables \(\mathcal{B}_{1}(\Theta_{1},\Theta_{2},\Phi)\), which we solved analytically. The most coherent basis, for which the coherence measure attains the value \(\mathcal{B}_{1}=\frac{8}{9}\),
consists of the following states,
\[\left|\psi_{3}^{max}\right\rangle=\frac{1}{\sqrt{3}}(\left|1,1\right\rangle+\left|1,0\right\rangle+\left|1,\text{-}1\right\rangle), \tag{16}\] \[\left|\psi^{\prime\,max}\right\rangle=\frac{1}{\sqrt{3}}(\left|1,1\right\rangle+\omega_{3}\left|1,0\right\rangle+\omega_{3}^{2}\left|1,\text{-}1\right\rangle),\] \[\left|\psi^{\prime\prime\,max}\right\rangle=\frac{1}{\sqrt{3}}(\left|1,1\right\rangle+\omega_{3}^{2}\left|1,0\right\rangle+\omega_{3}\left|1,\text{-}1\right\rangle),\]
where \(\omega_{3}=e^{2\pi i/3}\). Basis represented in the Majorana representation is depicted in Fig. 2. Each state in this basis can generate another two states by rotation around \(\hat{z}\) axis by angle \(2\pi/3\) and \(4\pi/3\). All stars lie on the equator. This basis corresponds to the _Fourier_ matrix
\[U_{3}^{max}=F_{3}=\frac{1}{\sqrt{3}}\begin{pmatrix}1&1&1\\ 1&\omega_{3}&\omega_{3}^{2}\\ 1&\omega_{3}^{2}&\omega_{3}\end{pmatrix}. \tag{17}\]
The least coherent basis, for which the coherence measure \(\mathcal{B}_{1}=0\), reads
\[\left|\psi_{3}^{min}\right\rangle=\left|1,0\right\rangle,\] \[\left|\psi^{\prime\text{ }min}\right\rangle=\frac{1}{\sqrt{2}}( \left|1,1\right\rangle+\left|1,\text{-}1\right\rangle), \tag{18}\] \[\left|\psi^{\prime\prime\text{ }min}\right\rangle=\frac{1}{\sqrt{2}}( \left|1,1\right\rangle-\left|1,\text{-}1\right\rangle).\]
This basis corresponds to the following unitary matrix
\[U_{3}^{min}=\frac{1}{\sqrt{2}}\begin{pmatrix}0&1&1\\ \sqrt{2}&0&0\\ 0&1&-1\end{pmatrix}, \tag{19}\]
which is, up to a permutation, equivalent to the direct sum of a Hadamard matrix \(H_{2}\) and the identity. In the stellar representation, the points representing a single state lie on a line going through the center of the sphere, which graphically corresponds to the 1-simplex \(\Delta_{1}\). The entire constellation spans a regular octahedron, see Fig. 3. This basis may be generated by rotating one of its elements by \(2\pi/3\) around a vector directed to the center of any face of a regular octahedron. Observe that the same constellation represents 3 mutually unbiased bases (MUB) in \(\mathcal{H}_{2}\), i.e. two points of the same color correspond to a single basis.
Since the most coherent basis in dimension \(N=3\) was found as an analytical solution of maximizing the \(\mathcal{B}_{1}(\Theta_{1},\Theta_{2},\Phi)\) function, while the least coherent basis saturates the bound for the coherence measure \(\mathcal{B}_{1}\), we can summarize this section in the following statement.
**Theorem 1**.: The most coherent and the least coherent bases in \(\mathcal{H}_{3}\) are presented in Eqs. (16) and (18), and correspond to \(U_{3}^{max}\) and \(U_{3}^{min}\) respectively. The measure of coherence \(\mathcal{B}_{1}\) achieves the values \(\mathcal{B}_{1}(U_{3}^{max})=\frac{8}{9}\) and \(\mathcal{B}_{1}(U_{3}^{min})=0\).
### Bases for N=4
To find a basis that maximizes the average coherence we apply a random walk algorithm. We choose a random unitary matrix \(U_{0}\) that represents an orthonormal basis, and then make a random step
\[U_{0}\to U_{1}=U_{0}\exp{(i\alpha H)}, \tag{20}\]
where \(\alpha\) is a small real parameter ("step length") and \(H\) is a random hermitian matrix. If \(\mathcal{B}_{1}(U_{1})>\mathcal{B}_{1}(U_{0})\), then we treat \(U_{1}\) as the new basis \(U_{0}\) and repeat the procedure. Otherwise, we pick another random hermitian matrix \(H\) and repeat the above steps. During the procedure, the parameter \(\alpha\) is gradually decreased in order to obtain an extremal basis with increased precision. Analogously, one can obtain bases that minimize the average coherence using a similar approach.
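The search can be prototyped in a few lines. The sketch below is our own illustration and all function names are hypothetical; it uses the closed form \(\mathcal{A}_{1}(|\psi\rangle)=1-|\langle\mathbf{J}\rangle|^{2}/j^{2}\), which follows from Eq. (14) for \(t=1\) since the single-qubit reduction of the symmetric representative has purity \(\frac{1}{2}(1+|\langle\mathbf{J}\rangle|^{2}/j^{2})\), and performs the stochastic hill climbing of Eq. (20).

```python
import numpy as np
from scipy.linalg import expm

def spin_operators(j):
    """Angular momentum matrices Jx, Jy, Jz in the |j,m> basis, m = j, ..., -j."""
    m = np.arange(j, -j - 1, -1)
    jp = np.zeros((len(m), len(m)), dtype=complex)           # ladder operator J+
    for k in range(1, len(m)):
        jp[k - 1, k] = np.sqrt(j * (j + 1) - m[k] * (m[k] + 1))
    jm = jp.conj().T
    return 0.5 * (jp + jm), -0.5j * (jp - jm), np.diag(m).astype(complex)

def coherence_B1(U, j):
    """B1(U) of Eq. (15), using A1 = 1 - |<J>|^2 / j^2 (Eq. (14) for t = 1)."""
    ops = spin_operators(j)
    vals = []
    for psi in U.T:                                           # columns are the basis vectors
        p = np.array([np.vdot(psi, op @ psi).real for op in ops])
        vals.append(p @ p / j ** 2)                           # = 1 - A1(psi)
    return float(np.mean(vals))

def random_walk(j, steps=5000, alpha0=0.3, maximize=True, seed=0):
    """Stochastic hill climbing U -> U exp(i alpha H) of Eq. (20) over the unitary group."""
    rng = np.random.default_rng(seed)
    N = int(round(2 * j + 1))
    U, _ = np.linalg.qr(rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))
    best = coherence_B1(U, j)
    sign = 1.0 if maximize else -1.0
    for step in range(steps):
        alpha = alpha0 * 0.999 ** step                        # shrink the step length
        A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
        H = (A + A.conj().T) / 2                              # random Hermitian direction
        U_new = U @ expm(1j * alpha * H)
        val = coherence_B1(U_new, j)
        if sign * (val - best) > 0:                           # accept only improving steps
            U, best = U_new, val
    return U, best

U, b1 = random_walk(j=1.0, maximize=True)
print(b1)   # compare with Theorem 1: the maximal value for j = 1 is 8/9
```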
Using the above algorithm, the most coherent basis for \(N=4\) (\(j=3/2\)) was found, for which \(\mathcal{B}_{1}=\frac{8}{9}\), see Fig. 4. This basis is formed by four states which are equivalent up to an \(SO(3)\) rotation on the sphere. To show that this solution yields a local extremum, one should check that the gradient vanishes, \(\nabla\mathcal{B}_{1}(U)=\mathbf{0}\), and that the Hessian is negative definite (for a maximum) or positive definite (for a minimum). For further details see Appendix D. This basis corresponds
Figure 3: The least coherent basis (19) in \(\mathcal{H}_{3}\) (\(j=1\)) represented by three pairs of points on the sphere (left). The same configuration in the Mercator projection is shown on the right. The orange arrow represents one of the rotational axes that may be used to generate the entire basis by rotation of one state by \(2\pi/3\) and \(4\pi/3\).
Figure 2: The most coherent basis (17) in \(\mathcal{H}_{3}\) (\(j=1\)) is represented by three pairs of stars in the Majorana representation. Each state is presented as two dots of given color at the equator. The angles between lines connecting stars with the center of the sphere read \(\alpha=\pi/6\) and \(\beta=\pi/2\). The left panel shows the stars on the sphere while the right one uses the Mercator projection, with the geographic grid drawn.
to the following unitary matrix
\[U_{4}^{max}=\frac{1}{3\sqrt{4\text{-}2\sqrt{2}}}\begin{pmatrix}\mu&\mu/\sqrt{3}&\mu /\sqrt{3}&3x^{3}i\\ \nu&\nu e^{\frac{3x^{3}}{2i}}&\nu e^{\frac{3x^{3}}{2i}}&0\\ \zeta&\zeta e^{\frac{3x^{3}}{2i}}&\zeta e^{\frac{3x^{3}}{2i}}&0\\ \tau&\tau/\sqrt{3}&\tau/\sqrt{3}&\text{-}3i\end{pmatrix}, \tag{21}\]
where \(\mu=3,\quad\nu=3(2-\sqrt{2}),\quad\zeta=2(3-\sqrt{2})\), and \(\tau=3(3-2\sqrt{2})\). This result was obtained by observing specific symmetries of the purely numerical solution and then using them as constraints to reduce the problem of finding the basis to a single real parameter \(x\), which allowed us to obtain an analytical solution for the local extremum.
The least coherent basis, see Fig. 5, for which \(\mathcal{B}_{1}=0\) corresponds to following unitary matrix
\[U_{4}^{min}=\frac{1}{\sqrt{6}}\begin{pmatrix}\sqrt{3}&1&1&1\\ 0&\xi&\xi\omega_{3}&\xi\omega_{3}^{2}\\ 0&\xi&\xi\omega_{3}^{2}&\xi\omega_{3}\\ \text{-}\sqrt{3}&1&1&1\end{pmatrix}, \tag{22}\]
where \(\omega_{3}=e^{i2\pi/3}\) and \(\theta_{3}=\arcsin{(1/\sqrt{3})}\) with \(z_{3}=\tan{\frac{\theta_{3}}{2}}e^{-i5\pi/6}\) and \(\xi=(1+z_{3}+1/z_{3})/\sqrt{3}\).
In analogy to the most coherent basis, the four states are equivalent up to an \(SO(3)\) rotation on the sphere. The entire basis may be generated by rotating a single state by multiples of \(\pi/2\) around the \(\mathbf{z}\) axis. Each vector of the basis is represented by three stars that form an equilateral triangle, which graphically corresponds to the 2-simplex \(\Delta_{2}\). The entire constellation of 12 stars forms an Archimedean solid, called a cuboctahedron, see Fig. 5.
The least coherent basis \(U_{4}^{min}\) saturates the bound for the measure \(\mathcal{B}_{1}\), which leads to the following statement.
**Theorem 2**.: The least coherent basis in \(\mathcal{H}_{4}\) is \(U_{4}^{min}\), given by (22) for which \(\mathcal{B}_{1}=0\).
**Conjecture 1**.: The most coherent basis in \(\mathcal{H}_{4}\) is \(U_{4}^{max}\), given by (21) for which \(\mathcal{B}_{1}=\frac{8}{9}\).
### Bases for N=5
Using the numerical procedure described above, we found the most coherent basis presented in Fig. 6. The set of states forming this basis can be divided into two equivalence classes, with respect to rotation on the sphere. The first class contains two states
\[|\psi_{5}^{max}\rangle=\mathcal{N}_{1}(|2,2\rangle+\frac{r_{1}^{3}}{2}\,|2, \text{-}1\rangle), \tag{23}\]
where \(r_{1}\in\mathbb{R}\) is the only possible parameter that does not change the symmetry of the state and \(\mathcal{N}_{1}\) is the normalization constant. The other is obtained by rotation around the \(\mathbf{x}\) axis by angle \(\pi\). The second class contains the state
\[|\psi^{\prime}_{5}{}^{max}\rangle=\mathcal{N}_{2}(|2,2\rangle+\chi\,|2,1 \rangle+\sqrt{\frac{2}{3}}\,|2,0\rangle+\chi\,|2,\text{-}1\rangle+|2,\text{-}2\rangle) \tag{24}\]
and its rotations by angles \(2\pi/3\) and \(4\pi/3\) around the \(\mathbf{z}\) axis. Here, \(\mathcal{N}_{2}\) is the normalization constant,
\(\chi=(r_{2}+1/r_{2}+2\cos{\phi_{3}})/2\), and \(r_{2},\phi_{3}\in\mathbb{R}\) are parameters of the symmetry of this class of states.
In the stellar representation, \(|\psi_{5}^{max}\rangle\) corresponds to one star at the north pole, and another three equally distributed on a circle of latitude whose polar angle \(\theta_{4}\) in spherical coordinates is given by \(r_{1}=\tan{\frac{\theta_{4}}{2}}\), see the red stars in Fig. 6. Similarly, \(|\psi^{\prime}_{5}{}^{max}\rangle\) is represented by one star whose polar angle \(\theta_{5}\) in spherical coordinates is given by \(r_{2}=\tan{\frac{\theta_{5}}{2}}\). The polar angle \(\theta_{6}\) of the second star is given by \(1/r_{2}=\tan{\frac{\theta_{6}}{2}}\), so that \(\theta_{6}=\pi-\theta_{5}\). The remaining
Figure 4: The most coherent basis (21) in \(\mathcal{H}_{4}\) (\(j=3/2\)) represented by 4 triples of points. The sphere (left) and the Mercator projection (right).
Figure 5: The least coherent basis in \(\mathcal{H}_{4}\) (\(j=3/2\)) represented by \(4\times 3=12\) points. The sphere (left) and the Mercator projection (right). Three stars representing a single state span an equilateral triangle. Constellation of the entire basis spans cuboctahedron, the edges are denoted by black lines.
Figure 6: The most coherent basis in \(\mathcal{H}_{5}\) (\(j=2\)) represented by \(5\times 4=20\) points. The sphere (left) and the Mercator projection (right).
two stars lie on the equator with azimuthal angles \(\phi_{3}\) and -\(\phi_{3}\), see blue stars in Fig. 6.
Imposing those symmetries and orthogonality conditions reduces the number of degrees of freedom to a single one. The maximum of the coherence measure obtained numerically reads \(\mathcal{B}_{1}\approx 0.874\), see Appendix B.
To distinguish the least coherent basis from the others for which we get \(\mathcal{B}_{1}=0\), we impose a condition concerning the second quantity \(\mathcal{B}_{2}\). We arrive at a basis for which \(\mathcal{B}_{2}=\mathcal{B}_{1}=0\). Exactly the same basis was described earlier by Zimba in [26]. It is formed by five states equivalent up to rotation on the sphere. Each of those states is represented by four stars that form a regular tetrahedron, which graphically corresponds to the 3-simplex \(\Delta_{3}\). It may be constructed by rotations of the state
\[\ket{\psi_{5}^{tet}}=\frac{1}{\sqrt{3}}(\ket{2,2}+\sqrt{2}\ket{2, \text{-}1}). \tag{25}\]
The corresponding unitary matrix reads
\[U_{5}^{min}=\frac{1}{\sqrt{5}}\begin{pmatrix}1&1&1&1&1\\ \kappa&\kappa\omega_{5}&-\kappa\omega_{2}^{2}&-\kappa\omega_{3}^{2}&-\kappa \omega_{4}^{2}\\ \lambda&\lambda\omega_{5}^{2}&\lambda\omega_{5}^{2}&\lambda\omega_{5}^{2}& \lambda\omega_{5}^{2}\\ \kappa&\kappa\omega_{5}^{3}&\kappa\omega_{5}^{3}&\kappa\omega_{5}^{3}&\kappa \omega_{5}^{2}\\ 1&\omega_{5}^{4}&\omega_{5}^{3}&\omega_{5}^{2}&\omega_{5}^{2}&\omega_{5}\\ \end{pmatrix}, \tag{26}\]
where \(\kappa=\frac{1}{4}(1+i\sqrt{15})\), \(\lambda=\sqrt{\kappa}\) and \(\omega_{5}=e^{i2\pi/5}\). The entire constellation of the basis consists of 20 stars and forms a regular dodecahedron, see Fig. 7. The above considerations lead to the following statement.
**Theorem 3**.: The least coherent basis in \(\mathcal{H}_{5}\) is \(U_{5}^{min}\), for which \(\mathcal{B}_{1}=\mathcal{B}_{2}=0\).
**Conjecture 2**.: The most coherent basis in \(\mathcal{H}_{5}\) is \(U_{5}^{max}\), for which \(\mathcal{B}_{1}\approx 0.874\).
Note that a regular dodecahedron arises in quantum theory in several contexts, including the Penrose dodecahedron [50; 51; 52] formed by 40 pure states in \(\mathcal{H}_{4}\), which allows one to construct a proof of Bell's nonlocality theorem. The same configuration is related to the set of five isoentangled two-qubit mutually unbiased bases, as the partial traces of 20 pure states in \(\mathcal{H}_{4}\) lead to a regular dodecahedron inside the Bloch ball [53].
### Bases for N=6
The most coherent basis found by the numerical search has an octahedral symmetry, see Fig. 8. All states in this basis are equivalent up to a rotation. By imposing this symmetry and the orthogonality requirement one gets an analytical expression for that basis, for which \(\mathcal{B}_{1}(U_{6}^{max})=(929+272\sqrt{10})/2025\approx 0.884\). The basis contains the state,
\[\ket{\psi_{6}^{max}}=\mathcal{N}_{3}\Big{(}\ket{\tfrac{5}{2},\tfrac{5}{2}}-\tfrac{1}{3}(2\sqrt{2}-\sqrt{5})\ket{\tfrac{5}{2},\text{-}\tfrac{3}{2}}\Big{)} \tag{27}\]
and the remaining five states, which may be obtained by appropriate rotations, where \(\mathcal{N}_{3}\) denotes the normalization constant. In the Majorana representation, a state \(\ket{\psi_{6}^{max}}\) is represented by a single star at the north pole and the remaining four equally distributed on a parallel with latitude defined by \(\tan\frac{\theta_{7}}{2}=(\frac{1}{3}(2\sqrt{10}-5))^{1/4}\), see red stars in Fig. 8. The corresponding unitary matrix reads
\[U_{6}^{max}=\frac{1}{2}\begin{pmatrix}2a&0&-b&-b&-b&-b\\ 0&2b&a&-a&-ia&ia\\ 0&0&1&1&-1&-1\\ 0&0&1&-1&i&-i\\ 2b&0&a&a&a&a\\ 0&2a&-b&b&ib&-ib\end{pmatrix}, \tag{28}\]
where \(a=\frac{1}{3}\sqrt{11/2+\sqrt{10}}\) and \(b=(2-\sqrt{10})/6\).
In a manner reminiscent of the \(N=5\) case, we employ further conditions to identify a unique least coherent basis. Specifically, we require not only that \(\mathcal{B}_{1}\) is minimal, but also that \(\mathcal{B}_{2}\) is optimal. We find a basis for which \(\mathcal{B}_{1}=0\) and \(\mathcal{B}_{2}\approx 0.092\), without the expected symmetries. However, when we impose the minimization of the sum of the measures \(\mathcal{B}_{1}\) and \(\mathcal{B}_{2}\), we obtain \(\mathcal{B}_{1},\mathcal{B}_{2}>0\) and observe the emergence of an intriguingly symmetric structure. This basis consists of 30 points that are equally distributed across five circles of latitude on a sphere, with each circle hosting six stars in the vertices of a regular hexagon. The Majorana representation of this basis is displayed in Fig. 9. A single state is represented, up to rotation, by a set of points in spherical coordinates \((\theta,\phi)\) given
Figure 8: The most coherent basis in \(\mathcal{H}_{6}\) (\(j=5/2\)) represented by \(6\times 5=30\) points. The sphere (left) and the Mercator projection (right).
Figure 7: The least coherent basis (26) in \(\mathcal{H}_{5}\) (\(j=2\)) represented by \(5\times 4=20\) points on the sphere (left) and in the Mercator projection (right). Stars representing each state span a regular tetrahedron and the entire basis forms a compound of five tetrahedra and spans a regular dodecahedron, plotted with solid lines.
by
\[\big{\{}(\pi/2,0),(\theta_{8},5\pi/6-\delta_{1}),(\theta_{9},-\pi/2- \delta_{2}),\] \[(\pi-\theta_{8},-5\pi/6+\delta_{1}),(\pi-\theta_{9},\pi/2+\delta_ {2})\big{\}}. \tag{29}\]
The remaining five states can be obtained through rotations around the \(\hat{z}\) axis by multiples of \(\pi/3\). Based on numerical experiments, we have imposed this parametrization and obtained numerical results for the four real parameters \(\theta_{8},\theta_{9},\delta_{1}\), and \(\delta_{2}\), resulting in one state described by Eq. (29) and five rotations. These rotations generate mutually orthogonal states up to an accuracy of approximately \(10^{-8}\). This particular basis, which we denote as \(\bar{U}_{6}^{min}\), yields \(\mathcal{B}_{1}\approx 0.000006\) and \(\mathcal{B}_{2}\approx 0.010709\). It minimizes the sum of the measures \(\mathcal{B}_{1}\) and \(\mathcal{B}_{2}\) and also exhibits interesting rotational symmetries. See Appendix B for further details of the numerical results.
**Conjecture 3**.: The most coherent basis in \(\mathcal{H}_{6}\) is \(U_{6}^{max}\), for which \(\mathcal{B}_{1}=\frac{929+272\sqrt{10}}{2025}\approx 0.884\), see Eq. (28).
### Bases for N=7
The least coherent basis \(U_{7}^{min}\), found by numerical search, is formed by seven regular octahedra, see Fig. 10. All states in this basis are equivalent up to rotation to the state
\[|\psi_{7}^{oct}\rangle=\frac{1}{\sqrt{2}}(|3,\!-\!2\rangle+|3,2\rangle). \tag{30}\]
No symmetry has been observed in the relative positions of those octahedrons. Additional details regarding the numerical results can be found in Appendix B.
**Theorem 4**.: The least coherent basis in \(\mathcal{H}_{7}\) is represented by \(U_{7}^{min}\) for which \(\mathcal{B}_{1}=\mathcal{B}_{2}=\mathcal{B}_{3}=0\) and \(\mathcal{B}_{4}=\frac{1}{6}\).
### Asymptotic results
We observed that the average value of the introduced t-coherence measure \(\mathcal{B}_{t}(U)\) decreases with the dimension \(N\) of the Hilbert space \(\mathcal{H}_{N}\) for any \(t\), see Fig. 11 as an example. One can demonstrate that the value of \(\mathcal{B}_{t}(U)\), averaged over the unitary group with respect to the Haar measure, approaches zero as the dimension \(N\) tends to infinity.
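This decay can be illustrated numerically. The sketch below is a rough check rather than the computation behind Fig. 11: it assumes that for \(t=1\) the measure reduces to \(\mathcal{B}_{1}(U)=\frac{1}{Nj^{2}}\sum_{p}|\langle\mathbf{J}\rangle_{\psi_{p}}|^{2}\), which follows from Eq. (39) once the purity of the one-qubit reduced state of \(F(|\psi_{p}\rangle)\) is written as \(\frac{1}{2}(1+|\langle\mathbf{J}\rangle|^{2}/j^{2})\), and it samples Haar-random unitaries via the QR decomposition.

```python
import numpy as np

def spin_ops(N):
    """Spin operators for j = (N-1)/2 in the basis |j,j>, ..., |j,-j>."""
    jval = (N - 1) / 2
    m = np.arange(jval, -jval - 1, -1)
    Jp = np.diag(np.sqrt(jval * (jval + 1) - m[1:] * (m[1:] + 1)), 1)  # raising operator
    Jm = Jp.conj().T
    return (Jp + Jm) / 2, (Jp - Jm) / (2 * 1j), np.diag(m), jval

def B1(U):
    """Assumed t = 1 coherence measure of the basis formed by the columns of U."""
    N = U.shape[0]
    Jx, Jy, Jz, jval = spin_ops(N)
    total = 0.0
    for p in range(N):
        psi = U[:, p]
        total += sum(np.vdot(psi, A @ psi).real ** 2 for A in (Jx, Jy, Jz))
    return total / (N * jval ** 2)

def haar_unitary(N, rng):
    """Haar-random unitary from the QR decomposition of a complex Gaussian matrix."""
    z = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

rng = np.random.default_rng(0)
for N in range(3, 9):
    avg = np.mean([B1(haar_unitary(N, rng)) for _ in range(400)])
    print(f"N = {N}:  average B1 over 400 samples ~ {avg:.3f}")  # should decrease with N
```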
## V Permutation symmetric states of multiqubit systems
Due to the correspondence (8) between spin-\(j\) states and symmetric states of \(2j\) spin-\(1/2\) particles and the fact that our measure of \(1\)-anticoherence Eq. (14) is equivalent to linear entropy of reduced density operator, the extremal bases we found correspond to the most and the least entangled bases. Specifically, the \(1\)-anticoherent states \(\mathcal{A}_{1}(|\psi\rangle)=1\) are maximally entangled, while the coherent states \(\mathcal{A}_{1}(|\psi\rangle)=0\) represent separable states of the composite system.
Note that the basis (19), represented in Fig. 3 by a regular octahedron, is equivalent to the well-known
Figure 11: Maximal, minimal and average value of the coherence measure \(\mathcal{B}_{1}\) as a function of the dimension \(N=2j+1\). The average values are taken with respect to the Haar measure on \(U(N)\). The solid line is plotted to guide the eye.
Figure 10: The least coherent basis in \(\mathcal{H}_{7}\) (\(j=3\)) represented by \(7\times 6=42\) points. Each state is represented by a regular octahedron. The sphere (left) and the Mercator projection (right).
Figure 9: The selected least coherent basis in \(\mathcal{H}_{6}\) (\(j=5/2\)), for which the sum of \(\mathcal{B}_{1}\) and \(\mathcal{B}_{2}\) achieves minimum. It is represented by \(6\times 5=30\) points. The sphere with a regular hexagon on each circle of latitude (left) and the Mercator projection (right).
Bell basis in the symmetric sector of \({\cal H}_{2}^{\otimes 2}\),
\[|\psi_{1}\rangle = \frac{1}{\sqrt{2}}(|01\rangle+|10\rangle),\] \[|\psi_{2}\rangle = \frac{1}{\sqrt{2}}(|00\rangle+|11\rangle), \tag{31}\] \[|\psi_{3}\rangle = \frac{1}{\sqrt{2}}(|00\rangle-|11\rangle).\]
The last state that completes the orthonormal basis of \({\cal H}_{2}^{\otimes 2}\) is a singlet state
\[|\psi_{4}\rangle=\frac{1}{\sqrt{2}}(|01\rangle-|10\rangle), \tag{32}\]
which does not belong to the symmetric subspace. Similarly, the least coherent basis in \({\cal H}_{4}\) corresponds to a basis of the symmetric subspace of \({\cal H}_{2}^{\otimes 3}\), that is maximally entangled, and each state is equivalent (up to \(SU(2)\) rotation) to a \(GHZ\) state [4]
\[|GHZ\rangle=\frac{1}{\sqrt{2}}(|000\rangle+|111\rangle). \tag{33}\]
In \({\cal H}_{5}\), the least coherent basis (26), represented in Fig. 7 by five regular tetrahedra that form a regular dodecahedron, corresponds to a basis of the symmetric sector of \({\cal H}_{2}^{\otimes 4}\). This basis consists of five maximally entangled states of four qubits, which are known to be the most sensitive states for alignment of a reference frame [20] and to have the highest geometric entanglement [13]. A simple calculation leads to the form
\[|\psi_{tet.}\rangle=\frac{1}{\sqrt{3}}\left|0000\right\rangle+\frac{1}{\sqrt{6}}(|0111\rangle+|1011\rangle+|1101\rangle+|1110\rangle). \tag{34}\]
In a similar way the matrix \(U_{7}^{min}\) of minimal coherence leads to the basis of the symmetric sector of \({\cal H}_{2}^{\otimes 6}\) with the maximal entanglement. Each state is equivalent, up to rotation, to
\[|\psi_{oct.}\rangle=\frac{1}{\sqrt{2}}(|D_{6,1}\rangle-|D_{6,5}\rangle), \tag{35}\]
which is known to display the highest geometric entanglement [13].
## VI Concluding remarks
This study introduces measures of \(t\)-coherence \({\cal B}_{t}\) as tools to characterize the spin coherence of a given basis in \({\cal H}_{N}\). The search for the least coherent bases for \(N=3,4,5\) and \(7\) is performed, and a set of candidates for the most coherent bases for \(N=3,4,5,6\) is presented. Some of these bases, analyzed in the Majorana representation, reveal symmetries of Platonic solids. The obtained results, including the most coherent, anticoherent, and average values of \({\cal B}_{1}\), are presented in Fig. 11 and summarized in Table 1. These bases lead to orthogonal Lüders and von Neumann measurements of extreme spin coherence.
Coherent and anticoherent states have practical applications in quantum metrology as optimal rotosensors [32; 33; 34; 35]. This work provides a natural extension of previous studies on the search for such states by proposing optimal measurements. Note that the presented bases have specific rotational symmetries, which allow one to obtain the entire basis just by rotation of a reference state and therefore make it easier to prepare.
## VII Acknowledgements
We would like to express our gratitude to Rafal Bistroi and Jakub Czartowski for engaging in discussions that have enriched this research significantly. We are also thankful to the Foundation for Polish Science for the financial support provided through TEAMNET project number POIR.04.04.00-00-17C1/18-00 and to the Narodowe Centrum Nauki for the support provided through the Quantenna project number 2021/03/Y/ST2/00193. A. B. acknowledges support by an NWO Vidi grant (Project No. VI.Vidi.192.109).
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline & Coherent & \multicolumn{4}{c|}{Anticoherent} \\ \hline N & \({\cal B}_{1}\) & \({\cal B}_{1}\) & \({\cal B}_{2}\) & \({\cal B}_{3}\) & \({\cal B}_{4}\) \\ \hline
3 & \({\bf 8/9}\cong{\bf 0.889}\) & \({\bf 0}\) & - & - & - \\ \hline
4 & \(8/9\cong 0.889\) & \(0\) & \(1/4\) & - & - \\ \hline
5 & \(0.874\) & \(0\) & \(0\) & \(1/3\) & - \\ \hline
6 & \(\frac{929+272\sqrt{10}}{2025}\approx 0.884\) & \(0.000006\) & \(0.011\) & \(0.110\) & \(0.375\) \\ \hline
7 & - & \(0\) & \(0\) & \(0\) & \(1/6\) \\ \hline \end{tabular}
\end{table}
Table 1: Measures of t-coherence for identified bases of order \(N=3,\ldots,7\). Analytical results are shown in bold purple, while analytical results with constraints suggested by numerical outcomes are shown in blue. The remaining bases were obtained via numerical search with imposed symmetry constraints to enhance precision.
## Appendix A Parametrization of bases in \({\cal H}_{3}\)
Any basis in \({\cal H}_{3}\), up to a \(SU(2)\) rotation, consisting of states \(\left|\Psi_{3}\right\rangle\), \(\left|\Psi_{3}^{\prime}\right\rangle\) and \(\left|\Psi_{3}^{\prime\prime}\right\rangle\) can be expressed in terms of three real parameters \(\Theta_{1},\Theta_{2},\Phi\). A parameterization of orthonormal bases is given by
\[\left|\Psi_{3}\right\rangle = {\cal N}_{4}(\left|1,1\right\rangle-\tan^{2}\frac{\Theta_{1}}{2} \left|1,\text{-}1\right\rangle),\] \[\left|\Psi_{3}^{\prime}\right\rangle = {\cal N}_{5}(\left|1,1\right\rangle+\Upsilon\left|1,0\right\rangle +\cot^{2}\frac{\Theta_{1}}{2}\left|1,\text{-}1\right\rangle), \tag{36}\] \[\left|\Psi_{3}^{\prime\prime}\right\rangle = {\cal N}_{6}(\left|1,1\right\rangle-\frac{1+\cot^{4}\frac{\Theta _{1}}{2}}{\Upsilon^{*}}\left|1,0\right\rangle+\cot^{2}\frac{\Theta_{1}}{2} \left|1,\text{-}1\right\rangle),\]
where \({\cal N}_{4}\), \({\cal N}_{5}\), \({\cal N}_{6}\) denote suitable normalization constants, and \(\Upsilon=(e^{-i\Phi}\cot\frac{\Theta_{2}}{2}\cot^{2}\frac{\Theta_{1}}{2}+e^{i\Phi}\tan\frac{\Theta_{2}}{2})/\sqrt{2}\). In the Majorana representation, the state \(\left|\Psi_{3}\right\rangle\) is represented by two points on opposite sides of a single circle of latitude, given by the angle \(\Theta_{1}\) in spherical coordinates.
## Appendix B Numerical results
In this section we present some details concerning our numerical results. Detailed expressions for the vectors forming the orthonormal bases found numerically are available online [55].
### \(\mathbf{N=5}\)
The most coherent basis in \({\cal H}_{5}\), for which \({\cal B}_{1}\approx 0.8736987\) is generated by rotation of two reference states (23) and (24) with parameters \(\chi=(r_{2}+1/r_{2}+2\cos\phi)/2\) and \(r_{1}=\sqrt[3]{4/(1/r_{2}^{2}+r_{2}^{2}+2\cos\phi)}\). Numerical search gives the following values of free parameters \(r_{2}\approx 7.564405\) and \(\phi\approx 0.93380835\). The basis is orthonormal up to accuracy \(\sum_{i,j,k}\left|U_{ij}^{*}U_{ik}-\delta_{jk}\right|\approx 2.1\times 10^{-15}\).
### \(\mathbf{N=6}\)
Minimizing \({\cal B}_{1}\) leads to several different bases with \({\cal B}_{1}\approx 0\), without any internal symmetry. To distinguish among them we analyzed a linear combination of \({\cal B}_{1}\) and \({\cal B}_{2}\). The measure \({\cal B}_{1}\) is then no longer \(\approx 0\), but an interesting rotational symmetry by \(\pi/3\) and the internal state symmetry (29) arise. Imposing such symmetries, we search for an orthonormal basis generated by rotating a state \(\left|\psi_{6}^{min}\right\rangle\) with the internal symmetry (29) around the \(\hat{z}\) axis by multiples of \(\pi/3\). The numerical search gave us the following values of the parameters \(\theta_{8}\approx 0.5922575\), \(\theta_{9}\approx 1.1820735\), \(\delta_{1}\approx 0.0822441\), \(\delta_{2}\approx 0.0522207\). The corresponding reference vector in \({\cal H}_{6}\) reads,
\[\left|\psi_{6}^{min}\right\rangle \approx (0.4082482,-0.3757166-0.1596989i,0.0850774-0.3992850i,\] \[0.0850774-0.3992850i,-0.3757166-0.1596989i,0.4082483).\]
The other five vectors are obtained by rotations of \(\left|\psi_{6}^{min}\right\rangle\) around the \(\hat{z}\) axis by multiples of \(\pi/3\). The rotation matrix reads \(\hat{R}_{\hat{z}}(\pi/3)=\text{diag}(1,e^{i\pi/3},e^{i2\pi/3},e^{i\pi},e^{i4\pi/3},e^{i5\pi/3})\) and the entire basis takes the form
\[\left\{\hat{R}_{\hat{z}}(\pi/3)^{k}\left|\psi_{6}^{min}\right\rangle\right\}_ {k=0}^{5}.\]
Obtained constellation of vectors provides the following values of the measures \({\cal B}_{1}\approx 0.0000059\), \({\cal B}_{2}\approx 0.0107090\) and \({\cal B}_{3}\approx 0.1206302\). The basis is orthonormal up to accuracy \(\sum_{i,j,k}\left|U_{ij}^{*}U_{ik}-\delta_{jk}\right|\approx 9.4\times 10^{-7}\).
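The basis described above can be reassembled directly from the printed reference vector; the following minimal sketch uses only the truncated digits quoted above, so the resulting deviation from orthonormality is limited by that rounding rather than by the accuracy of the numerical search.

```python
import numpy as np

# Reference vector |psi_6^min> with the components quoted above (truncated digits).
psi = np.array([ 0.4082482,
                -0.3757166 - 0.1596989j,
                 0.0850774 - 0.3992850j,
                 0.0850774 - 0.3992850j,
                -0.3757166 - 0.1596989j,
                 0.4082483])

# Rotation matrix R_z(pi/3) = diag(1, e^{i pi/3}, ..., e^{i 5 pi/3}).
R = np.diag(np.exp(1j * np.pi / 3 * np.arange(6)))

# The basis {R^k |psi>}_{k=0,...,5}, stored as the columns of a 6x6 matrix.
U = np.column_stack([np.linalg.matrix_power(R, k) @ psi for k in range(6)])

# Total deviation of the Gram matrix from the identity.
deviation = np.abs(U.conj().T @ U - np.eye(6)).sum()
print(f"deviation from orthonormality: {deviation:.1e}")  # of order 1e-6
```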
### \(\mathbf{N=7}\)
The basis \(U_{7}^{min}\) is orthonormal with accuracy \(\sum_{i,j,k}\left|U_{ij}^{*}U_{ik}-\delta_{jk}\right|\approx 6.3\times 10^{-15}\). Positions of stars representing states in this basis differ from those of regular octahedrons by less than \(10^{-15}\), for a unit radius of a sphere.
## Appendix C Coherence measure in terms of matrix elements.
The t-anticoherence measure \({\cal A}_{t}(\left|\psi_{i}\right\rangle)\) of a state \(\left|\psi_{i}\right\rangle\) is defined in Eq. (14) using the purity of the corresponding reduced symmetric state \(F(\left|\psi_{i}\right\rangle)\), which is discussed in [28, 31]. Thus, the t-coherence measure \({\cal B}_{t}(U)\) of an orthonormal basis \(U=\{\left|\psi_{1}\right\rangle,\ldots,\left|\psi_{N}\right\rangle\}\) can be expressed in terms of the purities of the corresponding reduced symmetric states \(F(\left|\psi_{i}\right\rangle)\). The purity of the reduced state obtained from \(F(\left|\psi_{i}\right\rangle)\), expressed in the angular momentum \(J_{z}\) basis (3), is given by
\[R_{t}(\left|\psi_{i}\right\rangle)=\sum_{k_{1}=0}^{t}\sum_{k_{2}=0}^{t}\left|\sum_{k=-j}^{j-t}Z_{j-k-k_{1}}^{*}Z_{j-k-k_{2}}\Gamma_{j+k}^{k_{1}k_{2}}\right|^{2}, \tag{37}\]
where
\[\Gamma_{k}^{k_{1}k_{2}}=\frac{1}{C_{t}^{2j}}\sqrt{C_{k}^{k+k_{1}}C_{t-k_{1}}^{2j-k-k_{1}}C_{k}^{k+k_{2}}C_{t-k_{2}}^{2j-k-k_{2}}} \tag{38}\]
and \(C_{q}^{l}=\binom{l}{q}\) if \(0\leq q\leq l\) and \(0\) otherwise. Then the t-coherence measure \(\mathcal{B}_{t}(U)\) of an orthonormal basis, represented by a unitary matrix \(U\), can be expressed in terms of its matrix elements \(U_{ki}=\left\langle j,j+1-k|\psi_{i}\right\rangle\) as follows
\[\mathcal{B}_{t}(U)=1-\frac{t+1}{Nt}\sum_{p=1}^{N}\biggl{(}1-\sum_{k_{1},k_{2}=0}^{t}\biggl{|}\sum_{k=-j}^{j-t}U_{j-k-k_{1}+1,p}^{*}U_{j-k-k_{2}+1,p}\Gamma_{j+k}^{k_{1}k_{2}}\biggr{|}^{2}\biggr{)}. \tag{39}\]
## Appendix D: Extremality of solution.
An extremal point \(\mathbf{x_{0}}\) of an n-variable function \(f\) satisfies \(\nabla f|_{\mathbf{x_{0}}}=\mathbf{0}\). The most convenient way to determine whether it is a local minimum, maximum, or saddle point is the analysis of the definiteness of the Hessian \(H_{f}\), defined as
\[H_{f}=\left(\begin{array}{cccc}\frac{\partial^{2}f}{\partial x_{1}^{2}}&\frac{\partial^{2}f}{\partial x_{1}\partial x_{2}}&\cdots&\frac{\partial^{2}f}{\partial x_{1}\partial x_{n}}\\ \vdots&\vdots&\ddots&\vdots\\ \frac{\partial^{2}f}{\partial x_{n}\partial x_{1}}&\frac{\partial^{2}f}{\partial x_{n}\partial x_{2}}&\cdots&\frac{\partial^{2}f}{\partial x_{n}^{2}}\end{array}\right). \tag{40}\]
1. If \(H_{f}|_{\mathbf{x}=\mathbf{x_{0}}}>0\) (positive definite), then \(\mathbf{x_{0}}\) is a local minimum.
2. If \(H_{f}|_{\mathbf{x}=\mathbf{x_{0}}}<0\) (negative definite), then \(\mathbf{x_{0}}\) is a local maximum.
3. If \(H_{f}|_{\mathbf{x}=\mathbf{x_{0}}}\) is indefinite, then \(\mathbf{x_{0}}\) is a saddle point.
Following [49], we recall the Lie-group structure of the manifold of unitary matrices in order to introduce directions in the neighborhood of a matrix \(U\). The Lie algebra of the unitary group is formed by Hermitian matrices (up to a factor of \(i\)). We set the following basis
\[\begin{split} H_{ii}&=\left|i\right\rangle\left\langle i\right|\ \ \text{for}\ \ i\in 1,\ldots,N,\\ H_{kl}^{+}&=\left|k\right\rangle\left\langle l\right|+\left|l\right\rangle\left\langle k\right|\ \ \text{for}\ k,l\in 1,\ldots,N,\ k\neq l,\\ H_{kl}^{-}&=i(\left|k\right\rangle\left\langle l\right|-\left|l\right\rangle\left\langle k\right|)\ \ \text{for}\ k,l\in 1,\ldots,N,\ k\neq l,\end{split} \tag{41}\]
that give \(N^{2}\) directions in the neighborhood of the matrix \(U\). Now we may define the derivative of a function \(f\) on a unitary matrix as
\[\nabla_{r}f(U)=\lim_{\epsilon_{r}\to 0}\frac{f(U_{\epsilon_{r}})-f(U)}{ \epsilon_{r}}, \tag{42}\]
where \(U_{\epsilon_{r}}\) is the matrix \(U\), transformed in \(r\)-th direction as \(U_{\epsilon_{r}}=U\exp(i\epsilon_{r}H^{r})=U(\mathbb{1}+i\epsilon_{r}H_{r}+O( \epsilon_{r}^{2}))\), where \(H^{r}\) is an element of the Lie algebra (41).
For a given unitary matrix \(U\) the derivative of the coherence measure \(\mathcal{B}_{t}(U)\), determined by Eqs. (15) and (39) reads,
\[\begin{split}\nabla_{r}\mathcal{B}_{t}(U)&=-4\frac{t +1}{Nt}\sum_{p,p^{\prime}=1}^{N}\sum_{k_{1},k_{2}=0}^{t}\sum_{k,k^{\prime}=-j}^ {j-t}\Gamma_{j+k}^{k_{1}k_{2}}\Gamma_{j+k}^{k_{1}k_{2}}\text{Im}\Bigl{(}U_{j-k ^{\prime}-k_{1}+1,p}U_{j-k^{\prime}-k_{2}+1,p}^{*}\times\\ &\qquad\qquad\qquad\qquad\qquad\Bigl{(}U_{j-k-k_{1}+1,p}^{*}U_{j-k- k_{2}+1,p^{\prime}}H_{p^{\prime},p}^{*}-U_{j-k-k_{2}+1,p}U_{j-k-k_{1}+1,p^{ \prime}}^{*}H_{p^{\prime},p}^{r*}\Bigr{)}.\end{split} \tag{43}\]
In a similar way, we arrive at the second derivative:
\[\begin{split}\nabla_{r}\nabla_{s}\mathcal{B}_{t}(U)&=-8 \frac{t+1}{Nt}\sum_{p,p^{\prime},p^{\prime}=1}^{N}\sum_{k_{1},k_{2}=0}^{t}\sum _{k,k^{\prime}=-j}^{j-t}\Gamma_{j+k}^{k_{1}k_{2}}\Gamma_{j+k^{\prime}}^{k_{1}k _{2}}\text{Re}\Big{(}U_{j-k^{\prime}-k_{1}+1,p}U_{j-k-k_{2}+1,p}U_{j-k^{\prime }-k_{2}+1,p^{\prime}}^{*}\times\\ &\qquad\qquad H_{p^{\prime},p^{\prime}}^{*}U_{j-k-k_{1}+1,p^{ \prime}}^{*}H_{p^{\prime},p}^{*}-U_{j-k^{\prime}-k_{1}+1,p}U_{j-k-k_{1}+1,p}^{ *}U_{j-k^{\prime}-k_{2}+1,p^{\prime}}^{*}H_{p^{\prime},p}^{*}U_{j-k-k_{2}+1,p ^{\prime}}\times\\ &\qquad\qquad\qquad\qquad H_{p^{\prime\prime},p}^{*}-U_{j-k^{ \prime}-k_{1}+1,p}U_{j-k^{\prime}-k_{2}+1,p}^{*}U_{j-k-k_{1}+1,p^{\prime}}H_{ p^{\prime},p}^{*}U_{j-k-k_{2}+1,p^{\prime\prime}}H_{p^{\prime\prime},p}^{*} \Big{)}.\end{split} \tag{44}\]
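As an illustration of how these directional derivatives can be used, the sketch below performs a naive finite-difference gradient ascent over the unitary group along the directions (41), following Eq. (42). It is not the optimization procedure used in this work: it assumes the simplified \(t=1\) expression \(\mathcal{B}_{1}(U)=\frac{1}{Nj^{2}}\sum_{p}|\langle\mathbf{J}\rangle_{\psi_{p}}|^{2}\) (cf. Eq. (39)), a fixed step size, and no convergence control; for \(N=3\) it should climb towards the reported maximum \(8/9\approx 0.889\).

```python
import numpy as np
from scipy.linalg import expm

def spin_ops(N):
    jval = (N - 1) / 2
    m = np.arange(jval, -jval - 1, -1)
    Jp = np.diag(np.sqrt(jval * (jval + 1) - m[1:] * (m[1:] + 1)), 1)
    Jm = Jp.conj().T
    return (Jp + Jm) / 2, (Jp - Jm) / (2 * 1j), np.diag(m), jval

def B1(U):
    """Assumed t = 1 coherence measure of the basis formed by the columns of U."""
    N = U.shape[0]
    Jx, Jy, Jz, jval = spin_ops(N)
    ev = lambda psi, A: np.vdot(psi, A @ psi).real
    return sum(ev(U[:, p], Jx) ** 2 + ev(U[:, p], Jy) ** 2 + ev(U[:, p], Jz) ** 2
               for p in range(N)) / (N * jval ** 2)

def lie_basis(N):
    """Hermitian directions H_ii, H_kl^+, H_kl^- of Eq. (41)."""
    basis = []
    for i in range(N):
        H = np.zeros((N, N), complex); H[i, i] = 1; basis.append(H)
    for k in range(N):
        for l in range(k + 1, N):
            Hp = np.zeros((N, N), complex); Hp[k, l] = Hp[l, k] = 1; basis.append(Hp)
            Hm = np.zeros((N, N), complex); Hm[k, l] = 1j; Hm[l, k] = -1j; basis.append(Hm)
    return basis

N, eps, step = 3, 1e-6, 0.2
rng = np.random.default_rng(1)
U = np.linalg.qr(rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N)))[0]
dirs = lie_basis(N)
for _ in range(300):
    f0 = B1(U)
    grad = [(B1(U @ expm(1j * eps * H)) - f0) / eps for H in dirs]  # Eq. (42)
    U = U @ expm(1j * step * sum(g * H for g, H in zip(grad, dirs)))
print(B1(U))  # expected to approach 8/9 ~ 0.889 for N = 3
```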
## Appendix E Wigner D-matrices
Let \(J_{x},J_{y},J_{z}\) be components of the _angular momentum operator_. The three operators satisfy the following commutation relations
\[[J_{x},J_{y}]=iJ_{z}\quad[J_{z},J_{x}]=iJ_{y}\quad[J_{y},J_{z}]=iJ_{x}, \tag{45}\]
where the reduced Planck constant is set to unity, \(\hbar=1\). Mathematically, the operators \(J_{x},J_{y},J_{z}\) generate the Lie algebra of both SU(2) and SO(3). The sum of squares of \(J_{x},J_{y},J_{z}\) is known as _the Casimir operator_,
\[\mathbf{J}^{2}=J_{x}^{2}+J_{y}^{2}+J_{z}^{2} \tag{46}\]
and it commutes with \(J_{x},J_{y},J_{z}\) operators. In particular, \(\mathbf{J}^{2}\) might be diagonalized together with \(J_{z}\), which defines an orthonormal basis of joint eigenvectors labeled by quantum numbers \(j,m\),
\[\mathbf{J}^{2}\left|j,m\right\rangle =j(j+1)\left|j,m\right\rangle,\] \[J_{z}\left|j,m\right\rangle =m\left|j,m\right\rangle,\]
with \(j=0,\frac{1}{2},1,\ldots\) and \(m=-j,-j+1,\ldots,j\). Note that for a given \(j\), the operator \(J_{z}\) is non-degenerate and has exactly \(2j+1\) distinct eigenvalues.
A three-dimensional _rotation operator_ has the form
\[\mathcal{R}(\alpha,\beta,\gamma)=e^{-i\alpha J_{z}}\,e^{-i\beta J_{y}}\,e^{-i\gamma J_{z}}. \tag{47}\]
The _Wigner D-matrix_[54] is a unitary matrix of dimension \(2j+1\) defined in the angular momentum basis as
\[D_{mm^{\prime}}^{j}(\alpha,\beta,\gamma):=\left\langle j,m^{\prime}\right| \mathcal{R}(\alpha,\beta,\gamma)\left|j,m\right\rangle. \tag{48}\]
Recall that the matrix elements of the operator \(J_{y}\) in the angular momentum basis read
\[\left\langle j,m^{\prime}\right|J_{y}\left|j,m\right\rangle=\frac{1}{2i}\Big{[}\delta_{m^{\prime},m+1}\sqrt{(j-m)(j+m+1)}-\delta_{m^{\prime},m-1}\sqrt{(j+m)(j-m+1)}\Big{]}, \tag{49}\]
then, the precise form of the Wigner D-matrix for \(j=\frac{1}{2}\) is
\[D^{\frac{1}{2}}(\alpha,\beta,\gamma)=\begin{pmatrix}\cos\frac{\beta}{2}e^{-\frac{i}{2}(\alpha+\gamma)}&-\sin\frac{\beta}{2}e^{-\frac{i}{2}(\alpha-\gamma)}\\ \sin\frac{\beta}{2}e^{\frac{i}{2}(\alpha-\gamma)}&\cos\frac{\beta}{2}e^{\frac{i}{2}(\alpha+\gamma)}\end{pmatrix} \tag{50}\]
and for \(j=1\):
\[D^{1}(\alpha,\beta,\gamma)=\begin{pmatrix}\cos^{2}\frac{\beta}{2}e^{-i(\alpha+\gamma)}&-\sqrt{2}\cos\frac{\beta}{2}\sin\frac{\beta}{2}e^{-i\alpha}&\sin^{2}\frac{\beta}{2}e^{-i(\alpha-\gamma)}\\ \sqrt{2}\cos\frac{\beta}{2}\sin\frac{\beta}{2}e^{-i\gamma}&\cos\beta&-\sqrt{2}\cos\frac{\beta}{2}\sin\frac{\beta}{2}e^{i\gamma}\\ \sin^{2}\frac{\beta}{2}e^{i(\alpha-\gamma)}&\sqrt{2}\cos\frac{\beta}{2}\sin\frac{\beta}{2}e^{i\alpha}&\cos^{2}\frac{\beta}{2}e^{i(\alpha+\gamma)}\end{pmatrix}. \tag{51}\]
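The explicit matrices above admit a direct numerical cross-check. The sketch below (ours, for illustration only) builds the spin-1 operators in the basis \(\{|1,1\rangle,|1,0\rangle,|1,\text{-}1\rangle\}\), exponentiates them according to the Euler decomposition in Eq. (47), and compares the result, with rows labelled by \(m^{\prime}\) and columns by \(m\), against the entries of Eq. (51).

```python
import numpy as np
from scipy.linalg import expm

# Spin-1 operators in the basis {|1,1>, |1,0>, |1,-1>}.
Jz = np.diag([1.0, 0.0, -1.0])
Jp = np.sqrt(2) * np.diag([1.0, 1.0], 1)          # raising operator J_+
Jy = (Jp - Jp.conj().T) / (2 * 1j)

alpha, beta, gamma = 0.7, 1.1, -0.4

# Euler z-y-z decomposition of the rotation operator, cf. Eq. (47).
D_num = expm(-1j * alpha * Jz) @ expm(-1j * beta * Jy) @ expm(-1j * gamma * Jz)

c, s = np.cos(beta / 2), np.sin(beta / 2)
D_51 = np.array([                                  # Eq. (51)
    [c**2 * np.exp(-1j*(alpha+gamma)), -np.sqrt(2)*c*s*np.exp(-1j*alpha),  s**2 * np.exp(-1j*(alpha-gamma))],
    [np.sqrt(2)*c*s*np.exp(-1j*gamma),  np.cos(beta),                     -np.sqrt(2)*c*s*np.exp(1j*gamma)],
    [s**2 * np.exp(1j*(alpha-gamma)),   np.sqrt(2)*c*s*np.exp(1j*alpha),   c**2 * np.exp(1j*(alpha+gamma))],
])
print(np.max(np.abs(D_num - D_51)))                # expected to be of order 1e-15
```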
## Appendix F: Invariance under rotation
The Majorana representation presents a spin-\(j\) state as a collection of \(2j\) stars on a sphere. In this section, we discuss the behavior of such a representation, while rotating the sphere. Firstly, we show that the rotation of a sphere preserves the scalar products of related states, in particular, transforming any orthonormal basis into another orthonormal basis. Secondly, we show that the rotation of a sphere preserves the coherence (14) of related states. Lastly, we present the rotation of a sphere in terms of the transformation of related spin-\(j\) and symmetric states. As a consequence, all orthonormal bases of extreme spin coherence specified in this work
are defined up to the global rotation of related Majorana stars, or equivalently, up to the action of any Wigner D-matrix on related spin-\(j\) basis elements.
Recall that there is a direct correspondence between a unitary evolution of a two-level system and rotations of the related Bloch sphere representation, which follows directly from the fact that the group SU(2) is the double cover of SO(3). In particular, any matrix in SU(2) is of the form \(U=\left(\begin{smallmatrix}a&b\\ -b^{*}&a^{*}\end{smallmatrix}\right)\), \(|a|^{2}+|b|^{2}=1\), and hence can be expressed as
\[U(\alpha,\beta,\gamma)=\begin{pmatrix}\cos\frac{\beta}{2}e^{-\frac{i}{2}(\alpha+\gamma)}&-\sin\frac{\beta}{2}e^{-\frac{i}{2}(\alpha-\gamma)}\\ \sin\frac{\beta}{2}e^{\frac{i}{2}(\alpha-\gamma)}&\cos\frac{\beta}{2}e^{\frac{i}{2}(\alpha+\gamma)}\end{pmatrix}. \tag{52}\]
As a quantum state \(|\psi\rangle\in\mathcal{H}_{2}\) can be presented as a single point on the Bloch sphere, its evolution under the above unitary operator might be presented as a sequence of three Euler rotations acting on the Bloch sphere: a rotation \(\mathcal{R}_{Z}(\alpha)\) by angle \(\alpha\) about the \(Z\) axis, then \(\mathcal{R}_{Y}(\beta)\) by angle \(\beta\) about the \(Y\) axis, and \(\mathcal{R}_{Z}(\gamma)\) by angle \(\gamma\) about the \(Z\) axis again. We shall denote such a rotation by \(\mathcal{R}(\alpha,\beta,\gamma)\).
Any symmetric state \(|\psi\rangle\in\mathcal{H}_{2}^{\otimes 2j}\) remains symmetric under any action of the joint unitary local operator \(U^{\otimes 2j}\)[16]. In fact, the reverse statement is also true, all local unitary operations that preserve a given symmetric state are of the form \(U^{\otimes 2j}\)[12]. For a unitary matrix represented in the form (52) the action of \(U^{\otimes 2j}\) is equivalent to the rotation \(\mathcal{R}(\alpha,\beta,\gamma)\) of the sphere. As a consequence, any rotation of stars in the Bloch representation does not change the scalar product of underlying symmetric states \(|\psi_{1,2}\rangle\in\mathcal{H}_{2}^{\otimes 2j}\) as \(\langle\psi_{1}|\left(U^{\otimes 2j}\right)^{\dagger}U^{\otimes 2j}\,|\psi_{2} \rangle=\langle\psi_{1}|\psi_{2}\rangle\). In particular, the orthonormal basis is mapped to another orthonormal basis of symmetric states. Furthermore, the purity of the reduced density operator (13) does not change under local unitary transformation \(U^{\otimes 2j}\), and hence is the same for states related by the rotation of a sphere in the Majorana representation.
Recall that the system of symmetric states of \(2j\) qubits is related to the spin-\(j\) system by the isomorphism \(F\), see Eq. (8), which preserves the scalar product, i.e. \(\langle\psi_{3}|\psi_{4}\rangle=\langle F(\psi_{3})|F(\psi_{4})\rangle\), where \(|\psi_{3,4}\rangle\in\mathcal{H}_{2j+1}\). Furthermore, by definition, both states \(|\psi\rangle\) and \(|F(\psi)\rangle\) have the same Majorana representation. As a consequence, any rotation of stars in the Bloch representation of the spin-\(j\) system does not change the scalar product of the underlying states, as was the case for symmetric states. Moreover, notice that the rotation of a sphere does not change the coherence properties of the represented states. Indeed, for any symmetric state \(|F(\psi)\rangle\) the purity of the reduced density operator (13) does not change under the local unitary transformation \(U^{\otimes 2j}\), or, respectively, under a rotation of the sphere in the Majorana representation. Hence, both the initial and the rotated states achieve the same \(t\)-anticoherence (14). We can summarize this discussion in the following two observations.
**Observation 1**.: An action of a joint local unitary operator \(U(\alpha,\beta,\gamma)^{\otimes 2j}\) on a collection of symmetric states \(\{|\psi_{sym}^{k}\rangle\}_{k=1}^{m}\) belonging to \(\mathcal{H}_{2}^{\otimes 2j}\) preserves their mutual scalar products and purity. Furthermore, it is represented by the rotation \(\mathcal{R}(\alpha,\beta,\gamma)\) in the stellar representation, i.e.
\[\mathcal{M}\Big{(}U(\alpha,\beta,\gamma)^{\otimes 2j}\,|\psi_{sym}^{k}\rangle \Big{)}=\mathcal{R}(\alpha,\beta,\gamma)\mathcal{M}(|\psi_{sym}^{k}\rangle). \tag{53}\]
**Observation 2**.: An action of a Wigner D-matrix \(D^{j}(\alpha,\beta,\gamma)\) on a collection of spin-\(j\) states \(\{|\psi_{k}\rangle\}_{k=1}^{m}\) belonging to \(\mathcal{H}_{2j+1}\) preserves their mutual scalar products and \(t\)-anticoherence measure \(\mathcal{A}_{t}(|\psi_{k}\rangle)\). Furthermore, it is represented by the rotation \(\mathcal{R}(\alpha,\beta,\gamma)\) in the stellar representation, i.e.
\[\mathcal{M}\Big{(}D^{j}(\alpha,\beta,\gamma)\,|\psi_{k}\rangle \,\Big{)}=\mathcal{R}(\alpha,\beta,\gamma)\mathcal{M}(|\psi_{k}\rangle). \tag{54}\]
|
2303.05294
|
On the support of Betti tables of multiparameter persistent homology
modules
|
Persistent homology encodes the evolution of homological features of a
multifiltered cell complex in the form of a multigraded module over a
polynomial ring, called a multiparameter persistence module, and quantifies it
through invariants suitable for topological data analysis.
In this paper, we establish relations between the Betti tables, a standard
invariant for multigraded modules commonly used in multiparameter persistence,
and the multifiltered cell complex. In particular, we show that the grades at
which cells of specific dimensions first appear in the filtration reveal all
positions in which the Betti tables are possibly nonzero. This result can be
used in combination with discrete Morse theory on the multifiltered cell
complex originating the module to obtain a better approximation of the support
of the Betti tables. In the case of bifiltrations, we refine our results by
considering homological critical grades of a filtered chain complex instead of
entrance grades of cells.
|
Andrea Guidolin, Claudia Landi
|
2023-03-09T14:42:50Z
|
http://arxiv.org/abs/2303.05294v2
|
# On the support of Betti tables of multiparameter persistent homology modules
###### Abstract
Persistent homology encodes the evolution of homological features of a multifiltered cell complex in the form of a multigraded module over a polynomial ring, called a multiparameter persistence module, and quantifies it through invariants suitable for topological data analysis.
In this paper, we establish a relation between Betti tables, a standard invariant for multigraded modules commonly used in multiparameter persistence, and the critical cells determined by discrete Morse theory on the filtered cell complex originating the module. In particular, we show that for a discrete gradient vector field consistent with a given multiparameter sublevel set filtration, the grades at which its critical cells appear in the filtration reveal all positions in which the Betti tables are possibly nonzero. This result is refined in the case of bifiltrations by considering homological critical grades of a filtered chain complex instead of entrance grades of critical cells.
_MSC:_ 55N31, 57Q70 (primary); 55U15, 13D02 (secondary)
_Keywords:_ Discrete Morse theory, Multigraded Betti numbers, Koszul complex
## 1 Introduction
One of the main concepts in Topological Data Analysis is _persistent homology_, a tool to capture topological information at multiple scales and provide meaningful topological summaries of the data, as surveyed, for example, in [13, 14, 15]. In practice, assuming that a data set comes equipped with measurements like functions or metrics to filter it, persistent homology transforms the filtered data into a nested family of chain complexes that depend on as many parameters as the number of different measurements used. Applying homology with coefficients in a field to such filtered chain complex produces a parametrized family of vector spaces, connected by linear transition maps, called a _persistent homology module_. Algebraic invariants of persistent homology modules provide the required summaries of the data topology.
Classically, the development of the theory of persistent homology originated from two separate roots: Morse theory (as in, e.g., [1, 13, 12]), and commutative algebra (as in, e.g., [1, 14, 15]). These two perspectives reconcile very elegantly in the case of 1-parameter persistence, i.e. when the filtration depends on only one parameter. In this case, persistent homology modules admit a complete invariant, the so-called _barcode_, encoding the lifespan of homology classes through the considered filtration. From the standpoint of Morse theory, the endpoints of bars in a persistence barcode correspond to cancellation pairs of critical points of the filtering (Morse) function [16]. From the algebraic perspective, a persistent homology module is a representation of a finite linear quiver in the category of vector spaces. Thus, a 1-parameter persistence module admits a unique decomposition into _interval modules_, i.e. indecomposable modules, each supported on an interval. These intervals are exactly the bars of the persistence barcode [14].
It is of both theoretical and practical interest to understand persistent homology in the case of multiple parameters, yielding to the so-called _multiparameter persistence_. Indeed, in applications, one
often needs to filter the data using more than only one measurement, obtaining a multiparameter persistence module. This is the case, for example, when there are different drivers for a phenomenon [1], or when one needs to downsize the role of outliers by adding a co-density measurement to the principal, explanatory, measurement as in [12, 13].
Unfortunately, the theory of multiparameter persistence modules proved to be much more elusive than the single-parameter one: in particular, since multigraded modules are of wild representation type [1], more complicated indecomposables than just intervals can generally occur, and it is impossible to list them all or characterize them via discrete invariants. Despite this difficulty, all the relevant homological events in a multiparameter filtration are conveniently captured by the _Betti tables_ of the multiparameter persistent homology module [10]. However, these events cannot be paired to obtain summaries similar to barcodes, and their mutual dependencies cannot be easily unveiled.
The main goal of this paper is to relate the events captured by the Betti tables of a multiparameter persistent homology module to the events captured by Morse theory, considered in its combinatorial formulation [11, 12]. This attempt to reconnect the algebraic perspective to Morse theory in the multiparameter situation is not only of theoretical interest in commutative algebra but also of practical advantage, as it provides a unified perspective to study persistent homology modules together with the underlying filtered complexes.
Starting from the observation that for a \(1\)-parameter persistent homology module the support of the Betti tables coincides with the set of entrance grades of critical cells in the filtration under consideration, our goal is to understand whether and to what extent this fact can be generalized to multiparameter persistence. An indication that this may be the case comes from the results of [13], which establish Morse inequalities involving, on the one hand, the values of the Betti tables of a multiparameter persistent homology module, and, on the other hand, the so-called _homological critical numbers_ of the same filtration. The latter numbers can be viewed as theoretical lower bounds of the numbers of critical cells entering the filtration at each filtration grade for any choice of a discrete gradient vector field consistent with the filtration.
The main result of this paper delimits, in the space of parameters, the support of the Betti tables of a persistent homology module obtained from a multiparameter sublevel set filtration of measurement functions. More precisely, Theorem 4.8 in Section 4 states that the Betti tables can be nonzero only for parameter values in the closure of the set of entrance grades of critical cells of a discrete gradient vector field consistent with the filtration, where the closure is taken with respect to the least upper bound.
The key idea in the proof of this result is the introduction of the construction of the _Koszul complex_ via mapping cones in the context of persistence (see Section 3). Using this inductive construction, we are able to compute Betti tables by looking at the space of parameters only locally and, more importantly, to disentangle the different parameters of the multifiltration: the Koszul complex at a fixed grade in an \(n\)-parameter space is determined by the Koszul complexes at nearby grades in a \((n-1)\)-parameter space.
Our main result of Section 4 can be strengthened in the case of \(2\)-parameter persistent homology modules. As a further contribution of this paper, we show that, in this case, the support of the Betti tables of a persistence module is contained in the closure of the set of _homological critical grades_ of a filtration (see Section 5). Although limited to the case of \(2\)-parameter filtrations, this result improves our main result in two ways: it does not depend on the choice of a specific discrete gradient vector field and establishes that all events witnessed by the Betti tables are determined by homological criticality (Corollary 5.6).
## 2 Preliminaries
Before presenting relevant background material for this article, let us establish some general notations: \(\mathbb{N}\) denotes the set \(\{0,1,\ldots\}\) of natural numbers; \([n]\) denotes the set \(\{1,2,\ldots,n\}\); \(\{e_{i}\}_{i=1,\ldots,n}\) is the standard basis of \(\mathbb{N}^{n}\); for any subset \(\alpha\subseteq[n]\), we denote \(e_{\alpha}\coloneqq\sum_{j\in\alpha}e_{j}\); \(|J|\) denotes the cardinality of a set \(J\); the symbols \(\wedge\) and \(\vee\) denote the greatest lower bound and least upper bound, respectively.
### Based chain complexes, cell complexes, and homology
Let \(\mathbb{F}\) denote a field, arbitrary but fixed. A _based chain complex_ is a chain complex \(C_{*}=(C_{q},\partial_{q})_{q\in\mathbb{Z}}\) of vector spaces over \(\mathbb{F}\), which we assume to be of finite dimension, such that each \(C_{q}\) is endowed with a distinguished basis \(X_{q}\). Throughout this article, we assume all chain complexes to be bounded, meaning that \(C_{q}=0\) whenever \(q<0\) or \(q\geq m\) for some integer \(m\). Based chain complexes can be viewed from a combinatorial perspective, as their distinguished bases inherit the structure of an (abstract) _cell complex_, in the sense of Lefschetz [10]. In this work, we call cell complex a finite graded set \(X=\bigsqcup_{q\in\mathbb{N}}X_{q}\), whose elements are called _cells_, endowed with an _incidence function_\(\kappa:X\times X\to\mathbb{F}\). A cell \(\sigma\in X_{q}\) is said to have _dimension \(q\)_, denoted \(\dim\sigma=q\), or to be a \(q\)_-cell_. The incidence function must satisfy two axioms: (_i_) \(\kappa(\tau,\sigma)\neq 0\) implies \(\dim\tau=\dim\sigma+1\), and (_ii_) \(\sum_{\rho\in X}\kappa(\tau,\rho)\cdot\kappa(\rho,\sigma)=0\), for any pair of cells \(\tau\) and \(\sigma\) in \(X\). We endow \(X\) with the order relation \(\leq\), called the _face partial order_, generated by the _covering face relation_: \(\sigma<\tau\) if and only if \(\kappa(\tau,\sigma)\neq 0\). Given a cell complex \(X\), we denote \(C_{*}(X)=(C_{q}(X),\partial_{q})_{q\in\mathbb{Z}}\) the based chain complex such that \(X_{q}\) is the fixed basis of \(C_{q}\), for all \(q\), with differentials \(\partial_{q}:C_{q}\to C_{q-1}\) defined on each \(\tau\in X_{q}\) by
\[\partial_{q}(\tau)=\sum_{\sigma\in X_{q-1}}\kappa(\tau,\sigma)\sigma.\]
We observe that \(C_{*}(X)\) is the zero chain complex if \(X=\emptyset\).
A graded set \(A=\bigsqcup_{q\in\mathbb{N}}A_{q}\) is called a _subcomplex_ of \(X\) if, for all \(\tau\in A\), every cell \(\sigma\in X\) such that \(\sigma\leq\tau\) is also in \(A\). This property makes \(A\), endowed with the restriction of the incidence function of \(X\), a cell complex, and is equivalent to requiring \(C_{*}(A)\) to be a chain subcomplex of \(C_{*}(X)\). We denote by \(H_{q}(X):=\ker\partial_{q}/\operatorname{im}\partial_{q+1}\) the homology \(\mathbb{F}\)-modules of \(C_{*}(X)\), and by \(H_{q}(X,A)\) the homology \(\mathbb{F}\)-modules of the relative chain complex \(C_{*}(X,A)\).
We observe that the notion of a cell complex as reviewed above, equivalent to that of a based chain complex, is general enough to include simplicial complexes, cubical complexes, and finite regular CW complexes, among other widely used combinatorial objects admitting a canonically associated chain complex.
### Multifiltrations and multiparameter persistence
Among the main mathematical objects of interest in topological data analysis are functors from a poset to the category of finite dimensional vector spaces over a field \(\mathbb{F}\). Here, we consider the indexing poset \(\mathbb{N}^{n}\), for some integer \(n\geq 1\), equipped with the coordinate-wise partial order: for \(u=(u_{i}),v=(v_{i})\in\mathbb{N}^{n}\), we write \(u\preceq v\) if and only if \(u_{i}\leq v_{i}\), for all \(1\leq i\leq n\). In this article, an _n-parameter persistence module_ is a functor from the poset \((\mathbb{N}^{n},\preceq)\) with values in finite-dimensional \(\mathbb{F}\)-vector spaces. Morphisms between such functors are the natural transformations. Explicitly, an \(n\)-parameter persistence module \(V\) consists of a family \(\{V^{u}\}_{u\in\mathbb{N}^{n}}\) of \(\mathbb{F}\)-vector spaces together with a family \(\{\varphi^{u,v}:V^{u}\to V^{v}\}_{u\preceq v\in\mathbb{N}^{n}}\) of linear maps such that \(\varphi^{u,w}=\varphi^{v,w}\circ\varphi^{u,v}\) whenever \(u\preceq v\preceq w\), and \(\varphi^{u,u}=\operatorname{id}_{V^{u}}\), for all \(u\). A _morphism_ between two \(n\)-parameter persistence modules \(\{V^{u},\varphi^{u,v}\}\) and \(\{W^{u},\psi^{u,v}\}\) is a family of linear maps \(\{\nu^{u}:V^{u}\to W^{u}\}_{u\in\mathbb{N}^{n}}\) such that \(\nu^{v}\circ\varphi^{u,v}=\psi^{u,v}\circ\nu^{u}\), for all \(u\preceq v\) in \(\mathbb{N}^{n}\). A morphism \(\nu\) is an _isomorphism_ (_monomorphism_, _epimorphism_, respectively) if, and only if, its components \(\nu^{u}\) are bijective (injective, surjective), for all \(u\in\mathbb{N}^{n}\).
In topological data analysis, the typical source of persistence modules are filtrations of cell complexes associated with the data. An \(n\)-_filtration_ of a cell complex \(X\) is a family \(\{X^{u}\}_{u\in\mathbb{N}^{n}}\) of subcomplexes of \(X\) such that \(u\preceq v\) implies \(X^{u}\subseteq X^{v}\). If a cell \(\sigma\) of \(X\) is an element of \(X^{u}-\bigcup_{j=1}^{n}X^{u-ej}\), we say that \(u\) is an _entrance grade_ of \(\sigma\) in the filtration. In this article we assume, once and for all, that filtrations \(\{X^{u}\}_{u\in\mathbb{N}^{n}}\) are families of sublevel sets \(X^{u}=\{\sigma\in X\mid h(\sigma)\preceq u\}\) of some order-preserving function \(h:(X,\leq)\to(\mathbb{N}^{n},\preceq)\), with \(\leq\) denoting the face partial order on \(X\). This assumption is equivalent to requiring every cell of \(X\) to have exactly one entrance grade.
The filtrations we are considering are usually called _one-critical_[12] in topological data analysis. We want to highlight that assuming the uniqueness of entrance grades is fundamental in order to obtain the results of this article, which are false for general filtrations of cell complexes. For instance, in this article we repeatedly use the following fact.
_Remark 2.1_.: Given a one-critical \(n\)-filtration \(\{X^{u}\}_{u\in\mathbb{N}^{n}}\) and a finite set of filtration grades \(\{u_{j}\}_{j=1,\dots,k}\subseteq\mathbb{N}^{n}\), with \(u_{j}=(u_{j,1},\dots,u_{j,n})\) for all \(j\), we have \(\bigcap_{j=1}^{k}X^{u_{j}}=X^{w}\), where \(w=\bigwedge\{u_{j}\}_{j}\) is the greatest lower bound \((\min\{u_{j,1}\}_{j},\dots,\min\{u_{j,n}\}_{j})\) of the subset \(\{u_{j}\}_{j=1,\dots,k}\) in \(\mathbb{N}^{n}\). In particular, for each subset \(\alpha\subseteq[n]\), we have the equality \(\bigcap_{j\in\alpha}X^{u-e_{j}}=X^{u-e_{\alpha}}\).
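The following minimal sketch (the cell names and entrance grades are illustrative, not taken from any particular data set) encodes a one-critical bifiltration through its entrance-grade function and verifies the statement of Remark 2.1 on a small grid of grades.

```python
from itertools import product

# Entrance grades of the cells of the boundary of a triangle (3 vertices, 3 edges);
# the function is order-preserving, so each sublevel set is a subcomplex.
h = {"a": (0, 0), "b": (1, 0), "c": (0, 1),
     "ab": (1, 0), "bc": (1, 1), "ca": (0, 2)}

def sublevel(u):
    """Cells of the sublevel set X^u = {sigma : h(sigma) <= u coordinate-wise}."""
    return {s for s, g in h.items() if g[0] <= u[0] and g[1] <= u[1]}

grades = list(product(range(3), repeat=2))
for u, v in product(grades, repeat=2):
    glb = (min(u[0], v[0]), min(u[1], v[1]))       # greatest lower bound of u and v
    assert (sublevel(u) & sublevel(v)) == sublevel(glb)
print("Remark 2.1 verified for all pairs of grades in {0,1,2}^2")
```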
We are interested in persistence modules obtained as the homology of an \(n\)-filtration. Given an \(n\)-filtration \(\{X^{u}\}_{u\in\mathbb{N}^{n}}\) and applying the \(q\)th homology functor, one obtains the \(n\)-_parameter persistent \(q\)th-homology module_ \(V_{q}=\{V_{q}^{u},\iota_{q}^{u,v}\}_{u\preceq v\in\mathbb{N}^{n}}\), with \(V_{q}^{u}:=H_{q}(X^{u})\) and \(\iota_{q}^{u,v}\colon H_{q}(X^{u})\to H_{q}(X^{v})\) induced by the inclusion maps \(X^{u}\hookrightarrow X^{v}\) for \(u\preceq v\). We note that it is common to use the terms _multifiltration_ and _multiparameter_ in place of, respectively, \(n\)-filtration and \(n\)-parameter, to indicate the generic case when \(n>1\). Moreover, \(2\)-filtrations are also called _bifiltrations_.
The overall purpose of this work is to study the relation between the homological invariants of multiparameter persistent homology modules called Betti tables and the multifiltrations from which they are obtained. To this aim, we adopt some tools and terminology from commutative algebra. An \(n\)-_graded module_ over the polynomial ring \(S:=\mathbb{F}[x_{1},\dots,x_{n}]\) is an \(S\)-module with a vector space decomposition \(V=\bigoplus_{u\in\mathbb{N}^{n}}V^{u}\) such that \(x_{i}\cdot V^{u}\subseteq V^{u+e_{i}}\), for all \(u\in\mathbb{N}^{n}\) and \(i\in[n]\). There is a standard equivalence [1] between the category of \(n\)-parameter persistence modules and the category of \(n\)-graded \(S\)-modules, allowing us to view a persistence module \(\{V^{u},\varphi^{u,v}\}\) as the \(n\)-graded \(S\)-module \(\bigoplus_{u\in\mathbb{N}^{n}}V^{u}\), where the action of \(S\) is defined by \(x_{i}\cdot z=\varphi^{u,u+e_{i}}(z)\), for all \(z\in V^{u}\) and \(i\in[n]\). Standard homological invariants from commutative algebra, like the _Betti tables_ (also called _multigraded Betti numbers_, see e.g. [15] for background), were among the first ones studied in multiparameter persistence [1, 16]. Given an \(n\)-parameter persistent homology module \(\{V_{q}^{u},{}^{u,v}\}\), obtained as the \(q\)th homology of an \(n\)-filtration, we view it as the \(n\)-graded \(S\)-module \(V_{q}=\bigoplus_{u\in\mathbb{N}^{n}}V_{q}^{u}\) and denote its \(i\)th Betti table by \(\xi_{i}^{q}\), for \(i\in\{0,1,\dots,n\}\). We recall that its Betti tables are functions \(\xi_{i}^{q}:\mathbb{N}^{n}\to\mathbb{N}\) defined by
\[\xi_{i}^{q}(u)=\dim(\operatorname{Tor}_{i}^{S}(V_{q},\mathbb{F}))^{u},\]
for all \(u\in\mathbb{N}^{n}\). Explicitly, \(\xi_{i}^{q}(u)\) is the dimension (as an \(\mathbb{F}\)-vector space) of the piece of grade \(u\) of the \(n\)-graded \(S\)-module \(\operatorname{Tor}_{i}^{S}(V_{q},\mathbb{F})\). In Section 3 we give an equivalent definition of the Betti tables based on the Koszul complex.
### Discrete Morse theory and multifiltrations
Discrete Morse theory, developed by Forman [10], is an adaptation of smooth Morse theory [13] to a combinatorial framework. In its original formulation, it allows, given a regular CW complex, to construct a homotopy equivalent CW complex with a smaller number of cells. Building on Forman's work, discrete Morse theory has been formulated in purely algebraic terms for based chain complexes [14] and in more general frameworks [14, 15]. In this algebraic setting, the aim is to decompose a chain complex into a smaller complex and an acyclic complex. As explained in Section 2.1, one can always take an equivalent combinatorial perspective by considering the cell complexes associated with based cell complexes. We briefly present here the main ideas of algebraic discrete Morse theory in the setting of this work.
Let \(C_{*}(X)\) be the chain complex associated with a cell complex \(X=\bigsqcup_{q}X_{q}\), and let \(<\) be the covering face relation on \(X\) introduced in Section 2.1. A pair of cells \((\sigma,\tau)\in X\times X\) with \(\sigma<\tau\) is called a _discrete vector_. A _discrete vector field_\(\mathcal{V}\) on \(X\) is a collection of discrete vectors \(\mathcal{V}=\{(\sigma_{j},\tau_{j})\}_{j\in J}\) such that all cells appearing in \(\mathcal{V}\) (indifferently as the first or the second component of a vector) are different. A discrete vector field \(\mathcal{V}\) determines a partition of \(X\) into three graded subsets \(M,S,T\), where \(M\) is the set of unpaired cells, called _critical cells_, and \(S\) (respectively, \(T\)) is the set of cells appearing in \(\mathcal{V}\) as first (respectively, second) components of a discrete vector. The subsets \(M,S,T\) inherit the grading by dimension of the cells of \(X\), so that for example \(M=\bigsqcup_{q}M_{q}\). A \(\mathcal{V}\)-_path_ between two cells \(\sigma\) and \(\sigma^{\prime}\) is a sequence \((\sigma_{0},\tau_{0},\sigma_{1},\tau_{1},\dots,\sigma_{r-1},\tau_{r-1},\sigma_ {r})\) with \(r\geq 1\) such that \(\sigma_{0}=\sigma\), \(\sigma_{r}=\sigma^{\prime}\), each \((\sigma_{i},\tau_{i})\) is a discrete vector of \(\mathcal{V}\), and \(\sigma_{i+1}<\tau_{i}\). The \(\mathcal{V}\)-path is called _closed_ if \(\sigma_{r}=\sigma_{1}\) and _trivial_ if \(r=1\). A discrete vector field \(\mathcal{V}\) is a _discrete gradient vector field_ (also called an _acyclic matching_ or a _Morse matching_) when all closed \(\mathcal{V}\)-paths are trivial.
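To make the acyclicity condition concrete, the sketch below (illustrative names and data; it follows the usual convention that consecutive cells \(\sigma_{i}\) along a closed \(\mathcal{V}\)-path are distinct) checks whether a discrete vector field on a small simplicial complex is a gradient by searching for directed cycles among its discrete vectors.

```python
def facets(simplex):
    """Facets of a simplex given as a sorted tuple of vertices."""
    return [simplex[:i] + simplex[i + 1:] for i in range(len(simplex))]

def is_gradient(V):
    """True if the discrete vector field V (a list of pairs (sigma, tau)) is acyclic."""
    pairing = {sigma: tau for sigma, tau in V}          # sigma -> its paired coface
    graph = {vec: [] for vec in V}
    for (sigma, tau) in V:
        for s in facets(tau):
            if s != sigma and s in pairing:              # continue the V-path through s
                graph[(sigma, tau)].append((s, pairing[s]))
    WHITE, GREY, BLACK = 0, 1, 2                          # DFS-based cycle detection
    color = {vec: WHITE for vec in V}
    def dfs(vec):
        color[vec] = GREY
        for nxt in graph[vec]:
            if color[nxt] == GREY or (color[nxt] == WHITE and dfs(nxt)):
                return True                               # nontrivial closed V-path found
        color[vec] = BLACK
        return False
    return not any(color[vec] == WHITE and dfs(vec) for vec in V)

# Boundary of a triangle on vertices 0, 1, 2; simplices are tuples of vertices.
V_ok  = [((0,), (0, 1)), ((1,), (1, 2))]
V_bad = [((0,), (0, 1)), ((1,), (1, 2)), ((2,), (0, 2))]
print(is_gradient(V_ok), is_gradient(V_bad))              # expected: True False
```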
The core result of discrete Morse theory [12] can be algebraically stated as follows [13, 14, 15].
**Theorem 2.2**.: _Let \(C_{*}(X)=(C_{q}(X),\partial_{q})_{q\in\mathbb{Z}}\) be the chain complex associated with a cell complex \(X=\bigsqcup_{q}X_{q}\) and let \(\mathcal{V}=\{(\sigma_{j},\tau_{j})\}_{j\in J}\) be a discrete gradient vector field on \(X\). Then \(C_{*}(X)\) is chain homotopy equivalent to \(C_{*}(M)=(C_{q}(M),\partial_{q}^{M})_{q\in\mathbb{Z}}\), where \(M=\bigsqcup_{q}M_{q}\) is the set of critical cells and \(\partial^{M}\) is a differential determined by \(\partial\) and \(\mathcal{V}\)._
We call \(C_{*}(M)\) the _(discrete) Morse chain complex_ of \(C_{*}(X)\) associated with \(\mathcal{V}\). Let us stress that in general \(C_{*}(M)\) is not a chain subcomplex of \(C_{*}(X)\), since its differential \(\partial^{M}\) is not simply induced by restriction by the differential \(\partial\) of \(C_{*}(X)\). The details on how \(\partial^{M}\) is (uniquely) determined by \(\partial\) and \(\mathcal{V}\) can be found in [14, 15]. Equivalently, a cell complex structure on the set \(M=\bigsqcup_{q}M_{q}\), called the _(discrete) Morse complex_ of \(X\) associated with \(\mathcal{V}\), is determined by the incidence function of \(X\) and \(\mathcal{V}\)[13]. In general, \(M\) is not a subcomplex of \(X\).
Discrete Morse theory of filtered chain complexes has been studied in a series of works related to one-parameter [16] or multiparameter persistent homology [1]. In the remainder of this subsection, we present the main ideas of discrete Morse theory for multifiltrations.
Consider an \(n\)-filtration \(\{X^{u}\}_{u\in\mathbb{N}^{n}}\) of a cell complex \(X\), which determines a filtration \(\{C_{*}(X^{u})\}_{u\in\mathbb{N}^{n}}\) of the chain complex \(C_{*}(X)\). Given a discrete gradient vector field \(\mathcal{V}\) on \(X\), there are clearly induced filtrations \(\{M^{u}\}_{u\in\mathbb{N}^{n}}\) on the Morse complex \(M\) and \(\{C_{*}(M^{u})\}_{u\in\mathbb{N}^{n}}\) on the Morse chain complex \(C_{*}(M)=(C_{q}(M),\partial_{q}^{M})\). In general, the former is only a filtration of sets and the latter is only a filtration of graded \(\mathbb{F}\)-vector spaces, as the differential \(\partial^{M}\) may fail to be compatible with the filtration. To avoid this, one can require the discrete gradient vector field to interact nicely with the multifiltration on \(X\).
**Definition 2.3**.: A discrete gradient vector field \(\mathcal{V}\) on \(X\) is _consistent_ with a multifiltration \(\{X^{u}\}_{u\in\mathbb{N}^{n}}\) if, for all \((\sigma,\tau)\) in \(\mathcal{V}\) and all \(u\in\mathbb{N}^{n}\), \(\sigma\in X^{u}\) if and only if \(\tau\in X^{u}\).
If \(\mathcal{V}\) is consistent with the multifiltration \(\{X^{u}\}_{u\in\mathbb{N}^{n}}\), then \(\{C_{*}(M^{u})\}_{u\in\mathbb{N}^{n}}\) is a filtration of chain subcomplexes of \(C_{*}(M)\)[16, 1]. Equivalently, \(\{M^{u}\}_{u\in\mathbb{N}^{n}}\) is a filtration of subcomplexes of \(M\). Moreover, the persistent homology modules associated with the multifiltrations of \(X\) and its Morse complex are isomorphic (in the sense of Section 2.2).
**Proposition 2.4** (Lemma 3.10 in [1]).: _Let \(\mathcal{V}\) be a discrete gradient vector field on a cell complex \(X\) consistent with an \(n\)-filtration \(\{X^{u}\}_{u\in\mathbb{N}^{n}}\), and let \(\{M^{u}\}_{u\in\mathbb{N}^{n}}\) be the \(n\)-filtration induced on the Morse complex \(M\). Then, for any \(q\in\mathbb{N}\), the persistence modules obtained as \(q\)-th homology of the \(n\)-filtrations \(\{X^{u}\}_{u\in\mathbb{N}^{n}}\) and \(\{M^{u}\}_{u\in\mathbb{N}^{n}}\) are isomorphic._
## 3 The Koszul complex of a persistence module
In this section, we describe the Koszul complex associated with an \(n\)-parameter persistence module and illustrate some of its properties. In particular, given an \(n\)-parameter persistent homology module \(\{H_{q}(X^{u}),t_{q}^{u,v}\}\), we introduce its Koszul complex at \(u\in\mathbb{N}^{n}\), a chain complex whose \(i\)th homology module has dimension equal to the Betti table value \(\xi_{i}^{q}(u)\). This chain complex can be constructed via a repeated procedure which allows us to add one parameter of the multifiltration at a time.
In Section 3.1, upon briefly recalling general definitions and results, we provide a more detailed description of Koszul complexes of multiparameter persistent homology modules. We claim no original results in this subsection, as the Koszul complex is a standard tool, and the explicit description of its chain modules and differentials in the case of persistent homology modules is included, for example, in [10, Sect. 3]. Here, besides fixing notations, we provide further details, especially with regard to bifiltrations, that are relevant to this work.
In Section 3.2, we explain how the Koszul complex associated with an \(n\)-parameter persistence module can be constructed as an iterated mapping cone, and we highlight the role of this construction for persistent homology modules, which intuitively allows one to disentangle the different parameters of the multifiltration and study their impact on the Betti tables. In Section 4, we will apply this technique to study the support of the Betti tables.
### The Koszul complex of a multigraded module
Let \(S\) denote the polynomial ring \(\mathbb{F}[x_{1},\ldots,x_{n}]\). The _Koszul complex_\(\mathbb{K}_{*}\) is a chain complex of free \(n\)-graded \(S\)-modules whose construction is standard in commutative algebra [14, Def. 1.26]. Given an \(n\)-graded \(S\)-module \(V=\bigoplus_{u\in\mathbb{N}^{n}}V^{u}\), the _Koszul complex_\(\mathbb{K}_{*}(x_{1},\ldots,x_{n};V)(u)\)_of \(V\) at grade \(u\in\mathbb{N}^{n}\) is the piece of grade \(u\) of the \((n\)-graded) chain complex \(V\otimes_{\mathbb{S}}\mathbb{K}_{*}\). This chain complex of \(\mathbb{F}\)-vector spaces can be used to determine the Betti tables \(\xi_{i}(u):=\dim(\operatorname{Tor}_{i}^{S}(V,\mathbb{F}))^{u}\) of \(V\) at grade \(u\), for \(i\in\{0,1,\ldots,n\}\) (see Section 2.2). Indeed, by definition, \(\operatorname{Tor}_{i}^{S}(V,\mathbb{F})\) can be determined by applying the functor \(-\otimes_{\mathbb{S}}\mathbb{F}\) to a free resolution of \(V\) and taking \(i\)th homology of the resulting chain complex. The roles of \(V\) and \(\mathbb{F}\) can however be interchanged, by virtue of the isomorphism \(\operatorname{Tor}_{i}^{S}(V,\mathbb{F})\cong\operatorname{Tor}_{i}^{S}( \mathbb{F},V)\) (see, e.g., [12, Thm. 7.1]); in this case, choosing \(\mathbb{K}_{*}\) as a (minimal) free resolution of \(\mathbb{F}\) (see [14, Prop. 1.28]) yields, for all \(i\in\{0,1,\ldots,n\}\), the equality
\[\xi_{i}(u)=\dim H_{i}(\mathbb{K}_{*}(x_{1},\ldots,x_{n};V)(u)).\]
Let us now provide a more explicit description of the Koszul complex \(\mathbb{K}_{*}(x_{1},\ldots,x_{n};V_{q})(u)\) of a persistent homology module \(\{H_{q}(X^{u}),t_{q}^{u,v}\}\) associated with an \(n\)-parameter filtration \(\{X^{u}\}_{u\in\mathbb{N}^{n}}\), regarded as an \(n\)-graded \(S\)-module \(V_{q}=\bigoplus_{u\in\mathbb{N}^{n}}H_{q}(X^{u})\) (as reviewed in Section 2.2). Even if this description of the Koszul complex can be easily adapted to any \(n\)-parameter persistence module, not necessarily built from a filtered cell complex, we prefer to focus on the case of interest for this work in order to clearly introduce the notations we will use in what follows. We recall that, for any subset \(\alpha\subseteq[n]\), we set \(e_{\alpha}\coloneqq\sum_{j\in\alpha}e_{j}\).
For each \(i\in\{0,1,\ldots,n\}\), the chain module in degree \(i\) of \(\mathbb{K}_{*}(x_{1},\ldots,x_{n};V_{q})(u)\) is
\[\mathbb{K}_{i}(x_{1},\ldots,x_{n};V_{q})(u)=\bigoplus_{\alpha\subseteq[n],\ | \alpha|=i}H_{q}(X^{u-e_{\alpha}}).\]
The definition can be easily extended if, for some fixed \(u\in\mathbb{N}^{n}\) and some \(\alpha\subseteq[n]\), it happens that \(u-e_{\alpha}\notin\mathbb{N}^{n}\): throughout this article, by definition, we set \(X^{w}=\emptyset\) whenever the grade \(w\) is not in \(\mathbb{N}^{n}\). Note that the modules \(\mathbb{K}_{i}(x_{1},\ldots,x_{n};V_{q})(u)\) are zero for all \(i\notin\{0,1,\ldots n\}\). The differentials of \(\mathbb{K}_{*}(x_{1},\ldots,x_{n};V_{q})(u)\) are defined in terms of the maps \(\iota_{q}^{v,w}:H_{q}(X^{v})\to H_{q}(X^{w})\) as follows: the restriction of
\[d_{i}:\mathbb{K}_{i}(x_{1},\ldots,x_{n};V_{q})(u)\to\mathbb{K}_{i-1}(x_{1}, \ldots,x_{n};V_{q})(u)\]
to each direct summand \(H_{q}(X^{u-e_{\alpha}})\) of its domain, with \(\alpha=\{j_{1}<j_{2}<\ldots<j_{i}\}\), is
\[d_{i|}=\sum_{r=0}^{i-1}(-1)^{r}\iota_{q}^{u-e_{\alpha},\,u-e_{\alpha}+e_{j_{i -r}}}.\]
For the sake of a simpler notation, we avoid denoting the grade \(u\) in the differentials \(d_{i}\). As we explained, \(\xi_{i}^{q}(u)\) coincides with the dimension (as an \(\mathbb{F}\)-vector space) of the \(i\)th homology module of \(\mathbb{K}_{*}(x_{1},\ldots,x_{n};V_{q})(u)\).
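For readers who prefer a computational rendering, the following minimal sketch (an illustration, not part of the original development) assembles the degree-\(i\) differential above as a block matrix, assuming the maps \(\iota_{q}\) are given as matrices over \(\mathbb{Q}\) in chosen bases; over a finite field \(\mathbb{F}\) one would substitute exact linear algebra. The encoding of subsets of \([n]\) as sorted tuples and the dictionaries `dims` and `iota` are illustrative conventions, and the signs follow the formula above (a global sign change does not affect homology).

```python
import numpy as np
from itertools import combinations

def koszul_differential(i, n, dims, iota):
    """Degree-i Koszul differential at a fixed grade u, as a block matrix.
    dims[alpha] = dim H_q(X^{u - e_alpha}); iota[(alpha, beta)] = matrix of
    the map induced by the inclusion X^{u - e_alpha} -> X^{u - e_beta},
    where alpha, beta are subsets of [n] encoded as sorted tuples."""
    cols = list(combinations(range(1, n + 1), i))      # domain summands
    rows = list(combinations(range(1, n + 1), i - 1))  # codomain summands
    blocks = [[np.zeros((dims[b], dims[a])) for a in cols] for b in rows]
    for ci, a in enumerate(cols):
        for p, j in enumerate(a):                      # drop the index j from alpha
            b = tuple(x for x in a if x != j)
            sign = (-1) ** (len(a) - 1 - p)            # (-1)^r with r = i - 1 - p
            blocks[rows.index(b)][ci] = sign * iota[(a, b)]
    return np.block(blocks)
```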
Let us detail the cases of \(n=1\) and \(n=2\) parameters for later convenience. For a \(1\)-parameter filtration \(\{X^{u}\}_{u\in\mathbb{N}}\), the Koszul complex \(\mathbb{K}_{*}(x_{1};V_{q})(u)\) of \(V_{q}=\bigoplus_{u\in\mathbb{N}}H_{q}(X^{u})\) at \(u\in\mathbb{N}\) is
\[0\to H_{q}(X^{u-1})\xrightarrow{d_{1}=\iota_{q}^{u-1,u}}H_{q}(X^{u})\to 0.\]
The Betti tables at grade \(u\) are \(\xi_{0}^{q}(u)=\dim\operatorname{coker}\iota_{q}^{u-1,u}\) and \(\xi_{1}^{q}(u)=\dim\ker\iota_{q}^{u-1,u}\), which correspond respectively to the number of _births_ and _deaths_ of \(q\)-homology classes at \(u\in\mathbb{N}\) in the sense of persistence [10].
For a \(2\)-parameter filtration \(\{X^{u}\}_{u\in\mathbb{N}^{2}}\), the Koszul complex \(\mathbb{K}_{*}(x_{1},x_{2};V_{q})(u)\) of the module \(V_{q}=\bigoplus_{u\in\mathbb{N}^{2}}H_{q}(X^{u})\) at \(u\in\mathbb{N}^{2}\) is
\[0\to H_{q}(X^{u-e_{1}-e_{2}})\xrightarrow{d_{2}}H_{q}(X^{u-e_{1}})\oplus H_{q} (X^{u-e_{2}})\xrightarrow{d_{1}}H_{q}(X^{u})\to 0,\]
with differentials
\[d_{2}=\left[\begin{smallmatrix}-\iota_{q}^{u-e_{1}-e_{2},u-e_{1}}\\ \iota_{q}^{u-e_{1}-e_{2},u-e_{2}}\end{smallmatrix}\right]\qquad\text{and} \qquad d_{1}=[\iota_{q}^{u-e_{1},u}\quad\iota_{q}^{u-e_{2},u}].\]
The Betti tables at the grade \(u\) are
\[\xi_{2}^{q}(u)=\dim\ker d_{2},\qquad\xi_{1}^{q}(u)=\dim(\ker d_{1}/\operatorname{ im}d_{2}),\qquad\xi_{0}^{q}(u)=\dim\operatorname{coker}d_{1}.\]
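To make the formulas above concrete, the Betti table values at \(u\) reduce to three rank computations once the structure maps around \(u\) are written as matrices. The sketch below (illustrative only) assumes coefficients in \(\mathbb{Q}\), so that numpy's numerical rank applies; over a finite field \(\mathbb{F}\) one would use exact rank computation.

```python
import numpy as np

def betti_tables_two_params(i_ab_a, i_ab_b, i_a_u, i_b_u):
    """Betti tables (xi_0, xi_1, xi_2) at a grade u in N^2, from the maps
    i_ab_a: H_q(X^{u-e1-e2}) -> H_q(X^{u-e1}),
    i_ab_b: H_q(X^{u-e1-e2}) -> H_q(X^{u-e2}),
    i_a_u : H_q(X^{u-e1})    -> H_q(X^u),
    i_b_u : H_q(X^{u-e2})    -> H_q(X^u), each given as a matrix."""
    d2 = np.vstack([-i_ab_a, i_ab_b])     # Koszul differential d_2
    d1 = np.hstack([i_a_u, i_b_u])        # Koszul differential d_1
    rk1 = np.linalg.matrix_rank(d1) if d1.size else 0
    rk2 = np.linalg.matrix_rank(d2) if d2.size else 0
    xi2 = d2.shape[1] - rk2               # dim ker d_2
    xi1 = (d1.shape[1] - rk1) - rk2       # dim ker d_1 - dim im d_2 (im d_2 <= ker d_1)
    xi0 = d1.shape[0] - rk1               # dim coker d_1
    return xi0, xi1, xi2
```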
A morphism \(\nu=\{\nu^{u}:V^{u}\to W^{u}\}_{u\in\mathbb{N}^{n}}\) between \(n\)-parameter persistence modules \(\{V^{u},\varphi^{u,v}\}\) and \(\{W^{u},\psi^{u,v}\}\) induces a chain map between the Koszul complexes of \(V=\bigoplus_{u\in\mathbb{N}^{n}}V^{u}\) and \(W=\bigoplus_{u\in\mathbb{N}^{n}}W^{u}\) at \(u\in\mathbb{N}^{n}\), the morphism between the chain modules in degree \(i\) being \(\bigoplus_{|\alpha|=i}\nu^{u-e_{\alpha}}\), with \(\alpha\subseteq[n]\). Clearly, an isomorphism between persistence modules induces an isomorphism between their Koszul complexes.
In what follows, we will apply the previous observation in the particular case of a multifiltration \(\{X^{u}\}_{u\in\mathbb{N}^{n}}\) of \(X\) and the induced multifiltration \(\{M^{u}\}_{u\in\mathbb{N}^{n}}\) of its Morse complex. By virtue of Proposition 2.4, since the modules \(V_{q}:=\bigoplus_{u\in\mathbb{N}^{n}}H_{q}(X^{u})\) and \(V_{q}^{\prime}:=\bigoplus_{u\in\mathbb{N}^{n}}H_{q}(M^{u})\) are isomorphic, their Koszul complexes \(\mathbb{K}_{*}(x_{1},\dots,x_{n};V_{q})(u)\) and \(\mathbb{K}_{*}(x_{1},\dots,x_{n};V_{q}^{\prime})(u)\) are also isomorphic, at all \(u\in\mathbb{N}^{n}\). As a consequence, the Betti tables \(\xi_{i}^{q}(u)\) can be determined considering the Morse complex instead of the original complex.
### Explicit construction via mapping cones
We now illustrate the explicit construction of the Koszul complex \(\mathbb{K}_{*}(x_{1},\dots,x_{n};V_{q})(u)\) of \(V_{q}\) at grade \(u\in\mathbb{N}^{n}\) as an iterated mapping cone. The classical construction of the Koszul complex via mapping cones can be found in [10, SS A2F] and [12, Ch. 1.6]; here we rephrase, adapt, and enrich it with examples, to provide a complete and explicit treatment for Koszul complexes of persistent homology modules that conveys the intuition of persistent homology.
Given a chain map \(f:B_{*}\to C_{*}\), the _mapping cone_\(\operatorname{Cone}(f)_{*}\) of \(f\) is the chain complex with \(\operatorname{Cone}(f)_{i}\coloneqq B_{i-1}\oplus C_{i}\) and differential \(\delta_{i}:B_{i-1}\oplus C_{i}\to B_{i-2}\oplus C_{i-1}\) defined by
\[\delta_{i}(b,c)\coloneqq(-\partial_{i-1}^{B}(b),\;\partial_{i}^{C}(c)+f_{i-1}( b)), \tag{3.1}\]
for all \(i\), with \(b\in B_{i-1},c\in C_{i}\) and \(\partial^{B},\partial^{C}\) respectively denoting the differentials of \(B_{*}\) and \(C_{*}\), see [10, SS A3.12].
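In chosen bases, the differential of the mapping cone is thus a block matrix; the following minimal sketch (an illustration only) assembles \(\delta_{i}\) from \(\partial^{B}_{i-1}\), \(\partial^{C}_{i}\), and \(f_{i-1}\) given as matrices with compatible shapes.

```python
import numpy as np

def cone_differential(dB_prev, dC, f_prev):
    """Block matrix of delta_i: B_{i-1} (+) C_i -> B_{i-2} (+) C_{i-1},
    with dB_prev the matrix of del^B_{i-1}, dC that of del^C_i, and
    f_prev that of f_{i-1}, as in equation (3.1)."""
    top = np.hstack([-dB_prev, np.zeros((dB_prev.shape[0], dC.shape[1]))])
    bottom = np.hstack([f_prev, dC])
    return np.vstack([top, bottom])
```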
Let \(\mathcal{F}:=\{X^{u}\}_{u\in\mathbb{N}^{n}}\) be an \(n\)-filtration of a cell complex \(X\). As is evident from the definitions in Section 3.1, the Koszul complex \(\mathbb{K}_{*}(x_{1},\dots,x_{n};V_{q})(u)\) of the associated persistent homology module \(V_{q}=\bigoplus_{u\in\mathbb{N}^{n}}H_{q}(X^{u})\) at the fixed grade \(u\in\mathbb{N}^{n}\) only depends on the subcomplexes \(X^{u-e_{\alpha}}\) of the filtration, with \(\alpha\subseteq[n]\). In other words, to determine \(\mathbb{K}_{*}(x_{1},\dots,x_{n};V_{q})(u)\) it is enough to consider the smaller \(n\)-filtration \(\mathcal{F}^{u}:=\{X^{u-e_{\alpha}}\}_{\alpha\subseteq[n]}\), containing \(2^{n}\) subcomplexes of the original \(n\)-filtration \(\mathcal{F}\). We observe that, fixed any \(j\in[n]\), the \(n\)-filtration \(\mathcal{F}^{u}\) can be partitioned into \(2^{n-1}\)\(1\)-filtrations \(X^{u-e_{\alpha}-e_{j}}\subseteq X^{u-e_{\alpha}}\), one for each \(\alpha\subseteq[n]\smallsetminus\{j\}\). More generally, fixed any non-empty subset \(J\coloneqq\{j_{1},\dots,j_{t}\}\subseteq[n]\), there is a partition of \(\mathcal{F}^{u}\) consisting of \(2^{n-t}\)\(t\)-filtrations of the form \(\{X^{u-e_{\alpha}-e_{\gamma}}\}_{\gamma\subseteq J}\), one for each \(\alpha\subseteq[n]\smallsetminus J\). Every such \(t\)-filtration has an associated Koszul complex \(\mathbb{K}_{*}(x_{j_{1}},\dots,x_{j_{t}};V_{q})(u-e_{\alpha})\) that intuitively only encodes information on the parameters \(j_{1},\dots,j_{t}\) of the \(n\)-filtration \(\mathcal{F}^{u}\). Given \(k\in[n]\smallsetminus J\), regarded here as an additional parameter to be taken into account, one can consider the \((t+1)\)-filtration given by the union of two \(t\)-filtrations \(\{X^{u-e_{\alpha}-e_{\gamma}}\}_{\gamma\subseteq J}\) and \(\{X^{u-e_{\alpha}-e_{k}-e_{\gamma}}\}_{\gamma\subseteq J}\), for any \(\alpha\subseteq[n]\smallsetminus(J\cup\{k\})\). Below, we will explain how the Koszul complex associated with such \((t+1)\)-filtration can be constructed as the mapping cone of a chain map between the two Koszul complexes associated with the \(t\)-filtrations.
We begin by illustrating in detail the first few steps of the procedure based on iterated mapping cones to construct the Koszul complex \(\mathbb{K}_{*}(x_{1},\dots,x_{n};V_{q})(u)\) starting from "\(1\)-parameter" Koszul complexes "in direction \(e_{j}\)"
\[\mathbb{K}_{*}(x_{j};V_{q})(w)=\left(0\to H_{q}(X^{w-e_{j}})\xrightarrow{d_{1}=\iota_{q}^{w-e_{j},w}}H_{q}(X^{w})\to 0\right),\]
for any fixed \(j\in[n]\) and for \(w=u-e_{\alpha}\) with \(\alpha\subseteq[n]\smallsetminus\{j\}\), and from specific chain maps between them. The chain maps are those induced by inclusions "in direction \(e_{k}\)", for any fixed \(k\in[n]\setminus\{j\}\), that is
\[f^{k}(x_{j};V_{q})(w-e_{k}):\mathbb{K}_{*}(x_{j};V_{q})(w-e_{k})\to\mathbb{K}_{ *}(x_{j};V_{q})(w),\]
with \(f_{i}^{k}(x_{j};V_{q})(w-e_{k}):\mathbb{K}_{i}(x_{j};V_{q})(w-e_{k})\to\mathbb{K}_{i }(x_{j};V_{q})(w)\) defined, for degrees \(i=0,1\), as
\[f_{0}^{k}(x_{j};V_{q})(w-e_{k}) =\iota_{q}^{w-e_{k},w}:H_{q}(X^{w-e_{k}})\xrightarrow{}H_{q}(X^{w}),\] \[f_{1}^{k}(x_{j};V_{q})(w-e_{k}) =\iota_{q}^{w-e_{j}-e_{k},w-e_{j}}:H_{q}(X^{w-e_{j}-e_{k}}) \xrightarrow{}H_{q}(X^{w-e_{j}}).\]
The mapping cone \(\operatorname{Cone}(f^{k}(x_{j};V_{q})(w-e_{k}))_{*}\) is the Koszul complex \(\mathbb{K}_{*}(x_{j},x_{k};V_{q})(w)\), associated with the \(2\)-filtration \(\{X^{w-e_{\gamma}}\}_{\gamma\subseteq\{j,k\}}\). Intuitively, it is obtained from the previous step, where only the \(j\)th parameter was considered, by adding one parameter more, namely the \(k\)th parameter of the original \(n\)-filtration. Explicitly, \(\mathbb{K}_{*}(x_{j},x_{k};V_{q})(w)\) is the chain complex
\[0\xrightarrow{}H_{q}(X^{w-e_{j}-e_{k}})\xrightarrow{d_{2}}H_{q}(X^{w-e_{j}} )\oplus H_{q}(X^{w-e_{k}})\xrightarrow{d_{1}}H_{q}(X^{w})\xrightarrow{}0\]
where the differentials, applying the definition (3.1), are
\[d_{2}=\begin{bmatrix}-\iota_{q}^{w-e_{j}-e_{k},w-e_{j}}\\ \iota_{q}^{w-e_{j}-e_{k},w-e_{k}}\end{bmatrix}\qquad\text{and}\qquad d_{1}=[ \iota_{q}^{w-e_{j},w}\quad\iota_{q}^{w-e_{k},w}].\]
The process we just described can be repeated, by choosing a new "direction" \(e_{\ell}\) corresponding to a new parameter \(\ell\in[n]\smallsetminus\{j,k\}\) and constructing \(\mathbb{K}_{*}(x_{j},x_{k},x_{\ell};V_{q})(w)\) as the mapping cone of the chain map \(f^{\ell}(x_{j},x_{k};V_{q})(w-e_{\ell})\) induced by inclusions in direction \(e_{\ell}\), for each \(w=u-e_{\alpha}\) with \(\alpha\subseteq[n]\smallsetminus\{j,k,\ell\}\). Explicitly, \(f^{\ell}(x_{j},x_{k};V_{q})(w-e_{\ell})\) is defined by the following maps, in degrees \(i=0,1,2\):
\[f_{0}^{\ell}(x_{j},x_{k};V_{q})(w-e_{\ell}) =\iota_{q}^{w-e_{\ell},w},\] \[f_{1}^{\ell}(x_{j},x_{k};V_{q})(w-e_{\ell}) =\iota_{q}^{w-e_{j}-e_{\ell},w-e_{j}}\oplus\iota_{q}^{w-e_{k}-e_{ \ell},w-e_{k}},\] \[f_{2}^{\ell}(x_{j},x_{k};V_{q})(w-e_{\ell}) =\iota_{q}^{w-e_{j}-e_{k}-e_{\ell},w-e_{j}-e_{k}}.\]
If the order in which the indeterminates are added is changed, one obtains isomorphic chain complexes: for example, \(\mathbb{K}_{*}(x_{j},x_{k},x_{\ell};V_{q})(w)\) is isomorphic to \(\mathbb{K}_{*}(x_{j},x_{\ell},x_{k};V_{q})(w)\). At the last step, for any \(m\in[n]\), one obtains \(\mathbb{K}_{*}(x_{1},\dots,x_{n};V_{q})(u)\) as the mapping cone of the chain map \(f^{m}(x_{1},\dots,\hat{x}_{m},\dots,x_{n};V_{q})(u-e_{m})\) between \(\mathbb{K}_{*}(x_{1},\dots,\hat{x}_{m},\dots,x_{n};V_{q})(u-e_{m})\) and \(\mathbb{K}_{*}(x_{1},\dots,\hat{x}_{m},\dots,x_{n};V_{q})(u)\).
Thanks to the iterative nature of the process, we can provide an explicit description of the chain complex \(\mathbb{K}_{*}(x_{j_{1}},\dots,x_{j_{t}};V_{q})(u)\) for any \(u\in\mathbb{N}^{n}\) and any non-empty subset \(J\coloneqq\{j_{1},\dots,j_{t}\}\subseteq[n]\). For each \(i\in\{0,1,\dots,|J|\}\), the chain module in degree \(i\) is
\[\mathbb{K}_{i}(x_{j_{1}},\dots,x_{j_{t}};V_{q})(u)=\bigoplus_{\gamma\subseteq J,\,|\gamma|=i}H_{q}(X^{u-e_{\gamma}}).\]
The modules \(\mathbb{K}_{i}(x_{j_{1}},\dots,x_{j_{t}};V_{q})(u)\) are zero for all \(i\notin\{0,1,\dots,|J|\}\). The differentials of the chain complex \(\mathbb{K}_{*}(x_{j_{1}},\dots,x_{j_{t}};V_{q})(u)\) can be described as follows: the restriction of
\[d_{i}:\mathbb{K}_{i}(x_{j_{1}},\dots,x_{j_{t}};V_{q})(u)\to\mathbb{K}_{i-1}(x_{j_{1}},\dots,x_{j_{t}};V_{q})(u)\]
to each direct summand \(H_{q}(X^{u-e_{\gamma}})\) of its domain, with \(\gamma=\{j_{s(1)},\dots,j_{s(i)}\}\) and \(s(1)<\dots<s(i)\), is
\[d_{i}|=\sum_{r=0}^{i-1}(-1)^{r}\iota_{q}^{u-e_{\gamma},\,u-e_{\gamma}+e_{j_{s( i-r)}}}.\]
For any \(k\in[n]\smallsetminus J\), the Koszul complex \(\mathbb{K}_{*}(x_{j_{1}},\dots,x_{j_{t}},x_{k};V_{q})(u)\) is the mapping cone of the chain map induced by inclusions in direction \(e_{k}\),
\[f^{k}(x_{j_{1}},\dots,x_{j_{t}};V_{q})(u-e_{k}):\mathbb{K}_{*}(x_{j_{1}},\dots, x_{j_{t}};V_{q})(u-e_{k})\to\mathbb{K}_{*}(x_{j_{1}},\dots,x_{j_{t}};V_{q})(u),\]
which for each degree \(i\in\{0,1,\dots,|J|\}\) is defined by
\[f_{i}^{k}(x_{j_{1}},\dots,x_{j_{t}};V_{q})(u-e_{k})=\bigoplus_{\gamma\subseteq J,\,|\gamma|=i}\iota_{q}^{u-e_{\gamma}-e_{k},\,u-e_{\gamma}}.\]
In Section 4, several results will be obtained by showing certain mapping cones to be acyclic, i.e. having vanishing homology in all degrees. We recall the following immediate consequence of [1, Prop. A.3.19] (see also [1, Corollary 1.5.4]), which gives an equivalent condition to the acyclicity of a mapping cone.
**Proposition 3.1**.: _A chain map \(f:B_{*}\to C_{*}\) is a quasi-isomorphism (i.e., it induces isomorphisms \(H_{q}(B_{*})\cong H_{q}(C_{*})\) in homology, for all \(q\in\mathbb{Z}\)) if and only if \(\operatorname{Cone}(f)_{*}\) is acyclic._
**Corollary 3.2**.: _Let \(f:B_{*}\to C_{*}\) be a chain map, and let \(B_{*}\) and \(C_{*}\) be acyclic. Then \(\operatorname{Cone}(f)_{*}\) is acyclic._
Proof.: If \(B_{*}\) and \(C_{*}\) are acyclic, the chain map \(f\) trivially induces isomorphisms between their (vanishing) homology modules, i.e., it is a quasi-isomorphism; hence \(\operatorname{Cone}(f)_{*}\) is acyclic by Proposition 3.1.
## 4 Entrance grades of critical cells and support of Betti tables
In this section, we are interested in relating the set of parameter grades at which new critical cells enter the filtration of the Morse complex, on the one hand, and the set of parameter grades where the Betti tables are nonzero, on the other hand. Additionally, we look for a relation between the dimension of the critical cells and the degree of the persistent homology module whose Koszul complex they impact on.
Our fixed setting for the whole section will be as follows. Let \(\{X^{u}\}_{u\in\mathbb{N}^{n}}\) be an \(n\)-parameter filtration of a cell complex \(X\), let \(\mathcal{V}\) be a fixed discrete gradient vector field consistent with the filtration (see Section 2.3) and let \(\{M^{u}\}_{u\in\mathbb{N}^{n}}\) be the associated \(n\)-parameter filtration of the Morse complex \(M\). We assume the multifiltration \(\{X^{u}\}_{u\in\mathbb{N}^{n}}\) to be _exhaustive_, that is, \(X=\bigcup_{u\in\mathbb{N}^{n}}X^{u}\). Clearly, since \(X\) is also graded by the dimension \(q\) of cells, this means that \(X_{q}=\bigcup_{u\in\mathbb{N}^{n}}X^{u}_{q}\) for all \(q\), and it is immediate to show that, as a consequence, \(M_{q}=\bigcup_{u\in\mathbb{N}^{n}}M^{u}_{q}\), for all \(q\).
Notationally, for any non-empty subset \(A\) of cells of \(X\), we set
\[\mathcal{G}(A)\coloneqq\{\text{entrance grades of the cells of }A\}\subseteq \mathbb{N}^{n}.\]
We recall that each cell has a unique entrance grade because we are assuming multifiltrations to be one-critical (cf. Section 2.2). We denote by \(\overline{G}\) the closure of a non-empty subset \(G\subseteq\mathbb{N}^{n}\) with respect to the least upper bound in \(\mathbb{N}^{n}\), which is the set \(\overline{G}:=\{\bigvee B\mid B\subseteq G,B\neq\emptyset\}\subseteq\mathbb{ N}^{n}\), with \(\bigvee B\) denoting the least upper bound of \(B\) in \((\mathbb{N}^{n},\preceq)\). Moreover, we denote by \(\operatorname{supp}\xi^{q}_{i}:=\{u\in\mathbb{N}^{n}\mid\xi^{q}_{i}(u)\neq 0\}\) the support of the Betti table \(\xi^{q}_{i}:\mathbb{N}^{n}\to\mathbb{N}\). We finally establish the following notation to be used throughout this section.
**Notation 4.1**.: Having fixed a grade \(u\in\mathbb{N}^{n}\), for any \(\alpha\subseteq[n]\) we set \(w(\alpha):=u-e_{\alpha}\), where \(e_{\alpha}:=\sum_{j\in\alpha}e_{j}\).
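For a finite set \(G\) of entrance grades, the closure \(\overline{G}\) can be computed directly from its definition; the sketch below (illustrative only, and exponential in \(|G|\)) enumerates the least upper bounds of all non-empty subsets of \(G\).

```python
from itertools import combinations

def lub_closure(G):
    """Closure of a finite set G of grades in N^n under least upper bounds:
    componentwise maxima over all non-empty subsets of G."""
    G = [tuple(g) for g in G]
    closure = set()
    for r in range(1, len(G) + 1):
        for B in combinations(G, r):
            closure.add(tuple(max(c) for c in zip(*B)))
    return closure

# Example: lub_closure([(1, 0), (0, 2)]) == {(1, 0), (0, 2), (1, 2)}
```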
Our goal for this section is to prove that \(\bigcup_{i=0}^{n}\operatorname{supp}\xi^{q}_{i}\subseteq\overline{\mathcal{G} (M_{q})}\cup\overline{\mathcal{G}(M_{q+1})}\) and, moreover, \(\operatorname{supp}\xi^{q}_{0}\subseteq\overline{\mathcal{G}(M_{q})}\) and \(\operatorname{supp}\xi^{q}_{n}\subseteq\overline{\mathcal{G}(M_{q+1})}\), for all \(q\in\mathbb{N}\) (Theorem 4.8). We observe that the first inclusion is clearly equivalent to the following statement: if \(u\notin\overline{\mathcal{G}(M_{q})}\cup\overline{\mathcal{G}(M_{q+1})}\), then \(\xi^{q}_{i}(u)=0\), for all \(i\in\{0,1,\ldots,n\}\). To start with, we prove a result that allows us to rephrase the hypothesis of this statement.
**Proposition 4.2**.: _Let \(A\) be any subset of cells of \(X\) and let \(u\in\mathbb{N}^{n}\). Then \(u\notin\overline{\mathcal{G}(A)}\) if and only if there exists \(j\in[n]\) such that for any subset \(\alpha_{j}\subseteq[n]\smallsetminus\{j\}\) it holds \((X^{w(\alpha_{j})}\smallsetminus X^{w(\alpha_{j})-e_{j}})\cap A=\emptyset\), where \(w(\alpha_{j})\) is defined as in Notation 4.1._
Proof.: We prove Proposition 4.2 by contraposition, showing the equivalence of the following statements:
1. \(u\in\overline{\mathcal{G}(A)}\).
2. For all \(j\in[n]\), there exists a subset \(\alpha_{j}\subseteq[n]\smallsetminus\{j\}\) such that \(\big{(}X^{w(\alpha_{j})}\smallsetminus X^{w(\alpha_{j})-e_{j}}\big{)}\cap A\neq\emptyset\).
Assume that \(u\in\overline{\mathcal{G}(A)}\). If \(u\in\mathcal{G}(A)\), we are done by taking \(\alpha_{j}=\emptyset\), for all \(j\). If \(u\notin\mathcal{G}(A)\), then \(u=\bigvee\{v_{1},\ldots,v_{r}\}\) with \(r\geq 2\) and \(v_{1},\ldots,v_{r}\in\mathcal{G}(A)\). In this case, by definition of the least upper bound, for all \(j\in[n]\) there exists \(\ell(j)\in[r]\) such that \(u-e_{j}\not\succeq v_{\ell(j)}\). Therefore, taking a cell \(\sigma_{\ell(j)}\in A\) with entrance grade \(v_{\ell(j)}\), we have \(\sigma_{\ell(j)}\in(X^{u}\smallsetminus X^{u-e_{j}})\cap A\), since \(u-e_{j}\not\succeq v_{\ell(j)}\) implies \(\sigma_{\ell(j)}\notin X^{u-e_{j}}\) by one-criticality of the multifiltration (Section 2.2). The second statement follows again by taking \(\alpha_{j}=\emptyset\), for all \(j\).
Conversely, assume that the second statement holds. For each \(j\in[n]\), let \(v(j)\) denote the entrance grade of a cell \(\sigma_{j}\in\left(X^{w(\alpha_{j})}\smallsetminus X^{w(\alpha_{j})-e_{j}}\right)\cap A\), for some \(w(\alpha_{j})=u-\sum_{i\in\alpha_{j}}e_{i}\). Let \(v=\bigvee\{v(1),\ldots,v(n)\}\). From \(v(j)\preceq w(\alpha_{j})\preceq u\), for all \(j\), we see that \(v\preceq u\). Let us show now that \(v=u\), which concludes the proof. If \(v\neq u\), then there exists \(j\in[n]\) such that \(v\preceq u-e_{j}\). Since \(\sigma_{j}\) has entrance grade \(v(j)\) and \(v(j)\preceq v\preceq u-e_{j}\), we have \(\sigma_{j}\in X^{u-e_{j}}\). On the other hand, we are assuming that \(\sigma_{j}\in X^{w(\alpha_{j})}\) with \(w(\alpha_{j})=u-\sum_{i\in\alpha_{j}}e_{i}\) and \(j\notin\alpha_{j}\). The latter condition implies that \(w(\alpha_{j})\) and \(u-e_{j}\) are not comparable. More precisely, the greatest lower bound of \(w(\alpha_{j})\) and \(u-e_{j}\) is \(w(\alpha_{j})-e_{j}\). Hence, the one-criticality assumption on the multifiltration yields a contradiction (see Remark 2.1), since we are assuming that \(\sigma_{j}\notin X^{w(\alpha_{j})-e_{j}}\).
We underline that the one-criticality assumption on the \(n\)-filtration \(\{X^{u}\}_{u\in\mathbb{N}^{n}}\) plays a key role in the proof of Proposition 4.2.
The particular case of Proposition 4.2 where \(A\) is the set \(M_{q}\) of critical \(q\)-cells, for some \(q\), is the most relevant for our purposes.
**Corollary 4.3**.: _For any \(u\in\mathbb{N}^{n}\), we have \(u\notin\overline{\mathcal{G}(M_{q})}\) if and only if there exists \(j\in[n]\) such that \(M_{q}^{w(\alpha_{j})-e_{j}}=M_{q}^{w(\alpha_{j})}\), for all subsets \(\alpha_{j}\subseteq[n]\smallsetminus\{j\}\)._
Proposition 4.2 also yields information on the maps of the persistent homology modules \(\{H_{q}(X^{u}),\iota_{q}^{u,v}\}\) and \(\{H_{q-1}(X^{u}),\iota_{q-1}^{u,v}\}\) in the "vicinity" of a fixed grade \(u\notin\overline{\mathcal{G}(M_{q})}\).
**Corollary 4.4**.: _If \(u\notin\overline{\mathcal{G}(M_{q})}\), then there exists \(j\in[n]\) such that, for all \(\alpha_{j}\subseteq[n]\smallsetminus\{j\}\), the inclusion \(X^{w(\alpha_{j})-e_{j}}\hookrightarrow X^{w(\alpha_{j})}\) induces a surjection_
\[\iota_{q}^{w(\alpha_{j})-e_{j},w(\alpha_{j})}:H_{q}(X^{w(\alpha_{j})-e_{j}}) \to H_{q}(X^{w(\alpha_{j})})\]
_and an injection_
\[\iota_{q-1}^{w(\alpha_{j})-e_{j},w(\alpha_{j})}:H_{q-1}(X^{w(\alpha_{j})-e_{j }})\to H_{q-1}(X^{w(\alpha_{j})}).\]
Proof.: By Proposition 4.2, if \(u\notin\overline{\mathcal{G}(M_{q})}\), then there exists \(j\in[n]\) such that, for all \(\alpha_{j}\subseteq[n]\smallsetminus\{j\}\), we have \(M_{q}^{w(\alpha_{j})-e_{j}}=M_{q}^{w(\alpha_{j})}\), which implies \(H_{q}(M^{w(\alpha_{j})},M^{w(\alpha_{j})-e_{j}})=0\). By the natural isomorphism of the long exact sequences of relative homology of \((X^{w},X^{v})\) and \((M^{w},M^{v})\) for each \(v\preceq w\) (see Proposition 2.4), the sequence
\[H_{q}(X^{w(\alpha_{j})-e_{j}})\longrightarrow H_{q}(X^{w(\alpha_{j})}) \longrightarrow 0\longrightarrow H_{q-1}(X^{w(\alpha_{j})-e_{j}}) \longrightarrow H_{q-1}(X^{w(\alpha_{j})}),\]
where the first and last maps are \(\iota_{q}^{w(\alpha_{j})-e_{j},w(\alpha_{j})}\) and \(\iota_{q-1}^{w(\alpha_{j})-e_{j},w(\alpha_{j})}\), respectively, is exact. Hence, our claim follows.
_Remark 4.5_.: Moving towards the proof of our main result, let us note that the hypothesis \(u\notin\overline{\mathcal{G}(M_{q})}\cup\overline{\mathcal{G}(M_{q+1})}\) implies, applying Corollary 4.3 twice, that the following properties hold simultaneously:
1. there exists \(j\in[n]\) such that \(M_{q}^{w(\alpha_{j})-e_{j}}=M_{q}^{w(\alpha_{j})}\), for all subsets \(\alpha_{j}\subseteq[n]\smallsetminus\{j\}\).
2. there exists \(\ell\in[n]\) such that \(M_{q+1}^{w(\alpha_{\ell})-e_{\ell}}=M_{q+1}^{w(\alpha_{\ell})}\), for all subsets \(\alpha_{\ell}\subseteq[n]\smallsetminus\{\ell\}\).
Clearly, the indices \(j\) and \(\ell\) of properties (i) and (ii) in Remark 4.5 can either coincide or not. We next prove that both cases imply the acyclicity of certain Koszul complexes, addressing the case \(j=\ell\) in Lemma 4.6 and the case \(j\neq\ell\) in Lemma 4.7.
**Lemma 4.6**.: _If properties (i) and (ii) in Remark 4.5 are verified with \(j=\ell\), then the Koszul complex \(\mathbb{K}_{*}(x_{1},\ldots,x_{n};V_{q})(u)\) is acyclic._
Proof.: Corollary 4.4 implies that the maps
\[\iota_{q}^{w(\alpha_{j})-e_{j},w(\alpha_{j})}:H_{q}(X^{w(\alpha_{j})-e_{j}})\to H_{q}(X^{w(\alpha_{j})})\]
are isomorphisms, for all subsets \(\alpha_{j}\subseteq[n]\smallsetminus\{j\}\). Therefore, the induced chain map
\[f^{j}(x_{1},\ldots,\hat{x}_{j},\ldots,x_{n};V_{q})(u-e_{j}):\mathbb{K}_{*}(x_{1}, \ldots,\hat{x}_{j},\ldots,x_{n};V_{q})(u-e_{j})\to\mathbb{K}_{*}(x_{1},\ldots, \hat{x}_{j},\ldots,x_{n};V_{q})(u)\]
is an isomorphism of chain complexes. Hence, the claim follows from Proposition 3.1 because the chain complex \(\mathbb{K}_{*}(x_{1},\ldots,x_{n};V_{q})(u)\) is the mapping cone of \(f^{j}(x_{1},\ldots,\hat{x}_{j},\ldots,x_{n};V_{q})(u-e_{j})\).
**Lemma 4.7**.: _Let \(u\in\mathbb{N}^{n}\) and suppose that properties (i) and (ii) of Remark 4.5 hold with \(j\neq\ell\). Then, for any \(w\coloneqq w(\alpha)=u-\sum_{i\in\alpha}e_{i}\) with \(\alpha\subseteq[n]\smallsetminus\{j,\ell\}\), the Koszul complex \(\mathbb{K}_{*}(x_{j},x_{\ell};V_{q})(w)\) is acyclic._
Proof.: By the isomorphism of Koszul complexes (Section 3), it is enough to show that \(\mathbb{K}_{*}(x_{j},x_{\ell};V^{\prime}_{q})(w)\) is acyclic, with \(V^{\prime}_{q}\coloneqq\bigoplus_{u\in\mathbb{N}^{n}}H_{q}(M^{u})\). In order to apply Proposition 3.1, we regard \(\mathbb{K}_{*}(x_{j},x_{\ell};V^{\prime}_{q})(w)\) as the mapping cone of the chain map
\[f^{\ell}(x_{j};V^{\prime}_{q})(w-e_{\ell}):\mathbb{K}_{*}(x_{j};V^{\prime}_{q} )(w-e_{\ell})\to\mathbb{K}_{*}(x_{j};V^{\prime}_{q})(w).\]
We want to prove that \(f^{\ell}(x_{j};V^{\prime}_{q})(w-e_{\ell})\) induces isomorphisms between the homology modules of
\[\mathbb{K}_{*}(x_{j};V^{\prime}_{q})(w-e_{\ell})=\left(0\to H_{q}(M^{w-e_{j}-e_{\ell}})\xrightarrow{d_{1}=\iota_{q}^{w-e_{j}-e_{\ell},\,w-e_{\ell}}}H_{q}(M^{w-e_{\ell}})\to 0\right)\]
and
\[\mathbb{K}_{*}(x_{j};V^{\prime}_{q})(w)=\left(0\to H_{q}(M^{w-e_{j}})\xrightarrow{d_{1}=\iota_{q}^{w-e_{j},\,w}}H_{q}(M^{w})\to 0\right).\]
Since \(\iota^{w-e_{j}-e_{\ell},w-e_{\ell}}_{q}\) and \(\iota^{w-e_{j},w}_{q}\) are surjective (see proof of Corollary 4.4), homology in degree \(0\) is zero for both Koszul complexes. Hence, we only have to show that
\[f^{\prime}:\ker\iota^{w-e_{j}-e_{\ell},w-e_{\ell}}_{q}\to\ker\iota^{w-e_{j},w}_ {q}\]
is an isomorphism, where \(f^{\prime}\) denotes the restriction of \(\iota^{w-e_{j}-e_{\ell},w-e_{j}}_{q}\) to \(\ker\iota^{w-e_{j}-e_{\ell},w-e_{\ell}}_{q}\). The map \(f^{\prime}\) is injective because \(\iota^{w-e_{j}-e_{\ell},w-e_{j}}_{q}\) is injective (see proof of Corollary 4.4). A quick way to show that \(f^{\prime}\) is an isomorphism is therefore showing that \(\ker\iota^{w-e_{j}-e_{\ell},w-e_{\ell}}_{q}\) and \(\ker\iota^{w-e_{j},w}_{q}\) are vector spaces of the same (finite) dimension. We use here the notations \(Z_{q}(M^{v})\) and \(B_{q}(M^{v})\) respectively for the submodules of cycles and boundaries of \(C_{q}(M^{v})\), for all \(v\in\mathbb{N}^{n}\). By Remark 4.5(i), \(M^{w-e_{j}-e_{\ell}}_{q}=M^{w-e_{\ell}}_{q}\) and \(M^{w-e_{j}}_{q}=M^{w}_{q}\), which implies \(Z_{q}(M^{w-e_{j}-e_{\ell}})=Z_{q}(M^{w-e_{\ell}})\) and \(Z_{q}(M^{w-e_{j}})=Z_{q}(M^{w})\). Similarly, by Remark 4.5(ii), \(M^{w-e_{j}-e_{\ell}}_{q+1}=M^{w-e_{j}}_{q+1}\) and \(M^{w-e_{\ell}}_{q+1}=M^{w}_{q+1}\), which implies \(B_{q}(M^{w-e_{j}-e_{\ell}})=B_{q}(M^{w-e_{j}})\) and \(B_{q}(M^{w-e_{\ell}})=B_{q}(M^{w})\). Expressing the dimension of the kernels of the surjective maps \(\iota^{w-e_{j}-e_{\ell},w-e_{\ell}}_{q}\) and \(\iota^{w-e_{j},w}_{q}\) with respect to the dimension of these submodules, one can easily check that they coincide.
We underline that to conclude the proof we use an argument based on the equality of some subsets of critical cells. This part of the proof cannot be replaced by using only the properties of the induced maps in homology (as in Corollary 4.4). As a counterexample, one can consider a diagram of vector spaces, with \(\dim V\neq 0\), whose rows are two chain complexes with surjective differentials and whose vertical arrows form an injective chain map between them, as in our proof; the mapping cone of such a chain map need not be acyclic.
We can now complete the proof of our main result for this section.
**Theorem 4.8**.: _Let \(\{X^{u}\}_{u\in\mathbb{N}^{n}}\) be an \(n\)-parameter exhaustive filtration of a cell complex \(X\), let \(\mathcal{V}\) be a fixed discrete gradient vector field consistent with the filtration, and let \(\{M^{u}\}_{u\in\mathbb{N}^{n}}\) be the associated \(n\)-parameter filtration of the Morse complex \(M\). Then_
\[\bigcup_{i=0}^{n}\operatorname{supp}\xi^{q}_{i}\subseteq\overline{\mathcal{G}( M_{q})}\cup\overline{\mathcal{G}(M_{q+1})},\]
_for all \(q\in\mathbb{N}\). Furthermore, \(\operatorname{supp}\xi^{q}_{0}\subseteq\overline{\mathcal{G}(M_{q})}\) and \(\operatorname{supp}\xi^{q}_{n}\subseteq\overline{\mathcal{G}(M_{q+1})}\), for all \(q\in\mathbb{N}\)._
Proof.: To prove that \(\bigcup_{i=0}^{n}\operatorname{supp}\xi_{i}^{q}\subseteq\overline{\mathcal{G}(M_{q})}\cup\overline{\mathcal{G}(M_{q+1})}\), let \(u\notin\overline{\mathcal{G}(M_{q})}\cup\overline{\mathcal{G}(M_{q+1})}\). As we have seen, properties (i) and (ii) of Remark 4.5 hold, which involve indices \(j,\ell\in[n]\). If \(j=\ell\), the Koszul complex \(\mathbb{K}_{*}(x_{1},\dots,x_{n};V_{q})(u)\) is acyclic by Lemma 4.6. If \(j\neq\ell\), consider the Koszul complexes \(\mathbb{K}_{*}(x_{j},x_{\ell};V_{q})(w)\), for any \(w\coloneqq w(\alpha)=u-\sum_{i\in\alpha}e_{i}\) with \(\alpha\subseteq[n]\smallsetminus\{j,\ell\}\), which are acyclic by Lemma 4.7. The Koszul complex \(\mathbb{K}_{*}(x_{1},\dots,x_{n};V_{q})(u)\) can be obtained from the chain complexes \(\mathbb{K}_{*}(x_{j},x_{\ell};V_{q})(w)\) by iterating the mapping cone construction (see Section 3). At each iteration of this process, by Corollary 3.2, one obtains acyclic Koszul complexes, hence we can conclude that \(\mathbb{K}_{*}(x_{1},\dots,x_{n};V_{q})(u)\) is acyclic, that is, \(\xi_{i}^{q}(u)=0\) for all \(i\in\{0,\dots,n\}\).
To prove that \(\operatorname{supp}\xi_{0}^{q}\subseteq\overline{\mathcal{G}(M_{q})}\), we observe that if \(u\notin\overline{\mathcal{G}(M_{q})}\) then by Corollary 4.4 there exists \(j\in[n]\) such that \(H_{q}(X^{u-e_{j}})\to H_{q}(X^{u})\) is surjective. This implies that the differential \(d_{1}\) of the Koszul complex \(\mathbb{K}_{*}(x_{1},\dots,x_{n};V_{q})(u)\) is surjective, hence \(\xi_{0}^{q}(u)=\dim(H_{q}(X^{u})/\operatorname{im}d_{1})=0\).
Similarly, to prove that \(\operatorname{supp}\xi_{n}^{q}\subseteq\overline{\mathcal{G}(M_{q+1})}\), we observe that if \(u\notin\overline{\mathcal{G}(M_{q+1})}\) then by Corollary 4.4 there exists \(j\in[n]\) such that \(H_{q}(X^{w-e_{j}})\to H_{q}(X^{w})\) is injective, where \(w:=u-\sum_{i\in[n]\smallsetminus\{j\}}e_{i}\). This implies that the differential \(d_{n}\) of the Koszul complex \(\mathbb{K}_{*}(x_{1},\dots,x_{n};V_{q})(u)\) is injective, hence \(\xi_{n}^{q}(u)=\dim\ker d_{n}=0\).
## 5 Homological critical grades and support of Betti tables for bifiltrations
In this section, we fix \(n=2\) and study the support of the Betti tables of persistent homology modules associated with a bifiltration \(\{X^{u}\}_{u\in\mathbb{N}^{2}}\) of a cell complex \(X\). In what follows, we make use of the notations introduced at the beginning of Section 4. Additionally, for \(q\in\mathbb{N}\), let us set
\[\mathcal{C}_{q}(X):=\{u\in\mathbb{N}^{2}\mid\dim H_{q}(X^{u},X^{u-e_{1}}\cup X ^{u-e_{2}})\neq 0\}\]
and call it the set of \(q\)_-homological critical grades_ (see [1]). For any fixed \(u\in\mathbb{N}^{2}\) and any \(q\in\mathbb{N}\), let us recall the following known inequalities (see [1, Corollary 1], and [1] for a generalization to the case \(n\geq 2\)):
\[\xi_{0}^{q}(u)+\xi_{1}^{q-1}(u)-\xi_{2}^{q-1}(u)\leq\dim H_{q}(X^{u},X^{u-e_{ 1}}\cup X^{u-e_{2}})\leq\xi_{0}^{q}(u)+\xi_{1}^{q-1}(u)+\xi_{2}^{q-2}(u). \tag{5.1}\]
To interpret the results of this section, we remark that, if \(M\) is the Morse complex associated with any discrete gradient vector field consistent with the filtration \(\{X^{u}\}_{u\in\mathbb{N}^{2}}\), by [1, Prop. 1] we have \(\mathcal{C}_{q}(X)\subseteq\mathcal{G}(M_{q})\). As we will show (Proposition 5.5 and Corollary 5.6), for bifiltrations we are able to bound the support of the Betti tables using the sets \(\mathcal{C}_{q}(X)\) instead of the sets \(\mathcal{G}(M_{q})\), thus strengthening our general results of Section 4.
First, we prove a technical result that crucially depends on the one-criticality assumption (Section 2.2) on the bifiltration.
**Lemma 5.1**.: _Let \(v\in\mathbb{N}^{2}\) and let \(j\neq\ell\) in \(\{1,2\}\). Then, there is a short exact sequence of chain complexes_
\[0\xrightarrow{}C_{*}(X^{v-e_{\ell}},X^{v-e_{1}-e_{2}})\xrightarrow{}C_{*}(X^ {v},X^{v-e_{j}})\xrightarrow{}C_{*}(X^{v},X^{v-e_{1}}\cup X^{v-e_{2}}) \xrightarrow{}0.\]
_Remark 5.2_.: The statement has to be interpreted by setting \(X^{v-e_{1}}=\emptyset\) if \(v-e_{1}\) is not in \(\mathbb{N}^{2}\), and similarly for \(X^{v-e_{2}}\) and \(X^{v-e_{1}-e_{2}}\). We use this convention throughout this section.
Proof.: Without loss of generality, we prove the statement for \(j=1\) and \(\ell=2\). The sequence
\[0\xrightarrow{}C_{*}(X^{v-e_{1}}\cup X^{v-e_{2}},X^{v-e_{1}})\xrightarrow{}C _{*}(X^{v},X^{v-e_{1}})\xrightarrow{}C_{*}(X^{v},X^{v-e_{1}}\cup X^{v-e_{2}}) \xrightarrow{}0\]
associated with the triple \(X^{v-e_{1}}\subseteq X^{v-e_{1}}\cup X^{v-e_{2}}\subseteq X^{v}\) is exact. Now we observe that, for any \(q\in\mathbb{N}\), the relative chain modules of the pair \((X^{v-e_{1}}\cup X^{v-e_{2}},X^{v-e_{1}})\) are
\[C_{q}(X^{v-e_{1}}\cup X^{v-e_{2}},X^{v-e_{1}}):=\frac{C_{q}(X^{v-e_{1}}\cup X^{ v-e_{2}})}{C_{q}(X^{v-e_{1}})}=\frac{C_{q}(X^{v-e_{1}})+C_{q}(X^{v-e_{2}})}{C_{q}(X^{ v-e_{1}})}\]
\[\cong\frac{C_{q}(X^{v-e_{2}})}{C_{q}(X^{v-e_{1}})\cap C_{q}(X^{v-e_{2}})}=\frac{C _{q}(X^{v-e_{2}})}{C_{q}(X^{v-e_{1}-e_{2}})}=:C_{q}(X^{v-e_{2}},X^{v-e_{1}-e_{2}}),\]
where we used the classical isomorphism theorem for modules and, in the penultimate equality, the fact that \(C_{q}(X^{v-e_{1}})\cap C_{q}(X^{v-e_{2}})=C_{q}(X^{v-e_{1}-e_{2}})\) as a consequence of the equality \(X^{v-e_{1}}\cap X^{v-e_{2}}=X^{v-e_{1}-e_{2}}\) given by the one-criticality assumption on the filtration (see Remark 2.1). These isomorphisms between chain modules commute with the differentials of the chain complexes \(C_{*}(X^{v-e_{1}}\cup X^{v-e_{2}},X^{v-e_{1}})\) and \(C_{*}(X^{v-e_{2}},X^{v-e_{1}-e_{2}})\), since they are induced by the differential of \(C_{*}(X)\).
**Corollary 5.3**.: _Let \(v\in\mathbb{N}^{2}\), \(q\in\mathbb{N}\) and \(j\neq\ell\) in \(\{1,2\}\), and suppose that \(H_{q}(X^{v},X^{v-e_{1}}\cup X^{v-e_{2}})=0\). Then \(H_{q}(X^{v},X^{v-e_{j}})\neq 0\) implies \(H_{q}(X^{v-e_{\ell}},X^{v-e_{1}-e_{2}})\neq 0\)._
Proof.: By Lemma 5.1, the following is a portion of a long exact sequence in homology:
\[H_{q}(X^{v-e_{\ell}},X^{v-e_{1}-e_{2}})\to H_{q}(X^{v},X^{v-e_{j}})\to H_{q}(X^ {v},X^{v-e_{1}}\cup X^{v-e_{2}}).\]
Since \(H_{q}(X^{v},X^{v-e_{1}}\cup X^{v-e_{2}})=0\), the first map is surjective, and the claim follows immediately.
To prove the final result of this section (Proposition 5.5), we first show directly that the support of the Betti table \(\xi_{2}^{q-1}\) is contained in \(\overline{\mathcal{C}_{q}(X)}\).
**Lemma 5.4**.: _For all \(q\in\mathbb{N}\), we have \(\operatorname{supp}\xi_{2}^{q-1}\subseteq\overline{\mathcal{C}_{q}(X)}\)._
Proof.: Let \(u\in\operatorname{supp}\xi_{2}^{q-1}\). We prove that there exists \(\lambda\in\mathbb{N}\) such that
\[H_{q}(X^{u-\lambda e_{1}},X^{u-(\lambda+1)e_{1}}\cup X^{u-\lambda e_{1}-e_{2} })\neq 0. \tag{5.2}\]
If condition (5.2) holds for \(\lambda=0\), then \(u\in\mathcal{C}_{q}(X)\). Otherwise, since the same property can be proven with the roles of \(e_{1}\) and \(e_{2}\) interchanged, our claim follows by observing that \((u-\lambda e_{1})\vee(u-\mu e_{2})=u\), for every \(\lambda,\mu\in\mathbb{N}\).
Assume that (5.2) fails (i.e., the relative homology module vanishes) for all \(\lambda\in\mathbb{N}\); then \(H_{q}(X^{u-\lambda e_{1}},X^{u-\lambda e_{1}-e_{2}})\neq 0\) implies \(H_{q}(X^{u-(\lambda+1)e_{1}},X^{u-(\lambda+1)e_{1}-e_{2}})\neq 0\) by Corollary 5.3 (applied with \(v:=u-\lambda e_{1}\)), and we can therefore use an inductive argument. The base case of the induction is \(H_{q}(X^{u-e_{1}},X^{u-e_{1}-e_{2}})\neq 0\) for \(\lambda=1\), which holds because the hypothesis \(u\in\operatorname{supp}\xi_{2}^{q-1}\) implies that \(\iota_{q-1}^{u-e_{1}-e_{2},u-e_{1}}:H_{q-1}(X^{u-e_{1}-e_{2}})\to H_{q-1}(X^{u-e_{1}})\) has nonzero kernel (see Section 3.1). Since \(X^{u-\lambda e_{1}}=\emptyset=X^{u-\lambda e_{1}-e_{2}}\) for a sufficiently large \(\lambda\), we see that the induction leads to a contradiction.
**Proposition 5.5**.: _For all \(q\in\mathbb{N}\), we have \(\operatorname{supp}\xi_{0}^{q}\cup\operatorname{supp}\xi_{1}^{q-1}\cup \operatorname{supp}\xi_{2}^{q-1}\subseteq\overline{\mathcal{C}_{q}(X)}\)._
Proof.: Let us assume that \(u\notin\overline{\mathcal{C}_{q}(X)}\). In the first inequality of (5.1), the term \(\dim H_{q}(X^{u},X^{u-e_{1}}\cup X^{u-e_{2}})\) is zero by definition of \(\mathcal{C}_{q}(X)\). By Lemma 5.4, \(\xi_{2}^{q-1}(u)=0\), hence we have \(\xi_{0}^{q}(u)+\xi_{1}^{q-1}(u)=0\), which is equivalent to \(\xi_{0}^{q}(u)=\xi_{1}^{q-1}(u)=0\).
We observe that the inclusion \(\operatorname{supp}\xi_{0}^{q}\subseteq\overline{\mathcal{C}_{q}(X)}\) can be proven directly, in a similar way to the proof of Lemma 5.4. In contrast, a direct proof of the inclusion \(\operatorname{supp}\xi_{1}^{q-1}\subseteq\overline{\mathcal{C}_{q}(X)}\) eludes us.
In conclusion, for bifiltrations, we can bound the support of Betti tables as follows.
**Corollary 5.6**.: _For all \(q\in\mathbb{N}\), the Betti tables of degree \(q\) satisfy_
\[\operatorname{supp}\xi_{0}^{q}\cup\operatorname{supp}\xi_{1}^{q}\cup \operatorname{supp}\xi_{2}^{q}\,\subseteq\,\overline{\mathcal{C}_{q}(X)}\cup \overline{\mathcal{C}_{q+1}(X)}.\]
_Furthermore, the union of the supports of all Betti tables satisfies_
\[\bigcup_{q}\mathcal{C}_{q}(X)\,\subseteq\,\bigcup_{q,i}\operatorname{supp}\xi_{ i}^{q}\,\subseteq\,\bigcup_{q}\overline{\mathcal{C}_{q}(X)}.\]
Proof.: The first statement holds by Proposition 5.5 and implies the second inclusion of the second statement. The first inclusion of the second statement follows from the second inequality of (5.1), which implies that \(\mathcal{C}_{q}(X)\,\subseteq\,\operatorname{supp}\xi_{0}^{q}\cup \operatorname{supp}\xi_{1}^{q-1}\cup\operatorname{supp}\xi_{2}^{q-2}\), for all \(q\in\mathbb{N}\).
We remark that the first statement of Corollary 5.6 is not a consequence of Theorem 4.8, as for \(2\)-parameter persistent homology modules it is known that \(\mathcal{C}_{q}(X)\) can be strictly contained in \(\mathcal{G}(M_{q})\), for any choice of a discrete gradient vector field to determine the latter set of grades (see [13, p. 2369] for an example).
For \(n>2\) parameters, we believe that exact sequences like those of Lemma 5.1, along with those induced in homology, can still be useful to study the relation between Betti tables and homological critical grades. In this case, however, these sequences assemble in much more complicated systems, and appropriately disentangling them would require a different approach.
## Acknowledgments
This work was partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation. This work was partially carried out within the activities of ARCES (University of Bologna) and under the auspices of INdAM-GNSAGA.
|
2307.10408
|
Explaining Autonomous Driving Actions with Visual Question Answering
|
The end-to-end learning ability of self-driving vehicles has achieved
significant milestones over the last decade owing to rapid advances in deep
learning and computer vision algorithms. However, as autonomous driving
technology is a safety-critical application of artificial intelligence (AI),
road accidents and established regulatory principles necessitate the need for
the explainability of intelligent action choices for self-driving vehicles. To
facilitate interpretability of decision-making in autonomous driving, we
present a Visual Question Answering (VQA) framework, which explains driving
actions with question-answering-based causal reasoning. To do so, we first
collect driving videos in a simulation environment using reinforcement learning
(RL) and extract consecutive frames from this log data uniformly for five
selected action categories. Further, we manually annotate the extracted frames
using question-answer pairs as justifications for the actions chosen in each
scenario. Finally, we evaluate the correctness of the VQA-predicted answers for
actions on unseen driving scenes. The empirical results suggest that the VQA
mechanism can provide support to interpret real-time decisions of autonomous
vehicles and help enhance overall driving safety.
|
Shahin Atakishiyev, Mohammad Salameh, Housam Babiker, Randy Goebel
|
2023-07-19T18:37:57Z
|
http://arxiv.org/abs/2307.10408v1
|
# Explaining Autonomous Driving Actions with Visual Question Answering
###### Abstract
The end-to-end learning ability of self-driving vehicles has achieved significant milestones over the last decade owing to rapid advances in deep learning and computer vision algorithms. However, as autonomous driving technology is a safety-critical application of artificial intelligence (AI), road accidents and established regulatory principles necessitate the need for the explainability of intelligent action choices for self-driving vehicles. To facilitate interpretability of decision-making in autonomous driving, we present a Visual Question Answering (VQA) framework, which explains driving actions with question-answering-based causal reasoning. To do so, we first collect driving videos in a simulation environment using reinforcement learning (RL) and extract consecutive frames from this log data uniformly for five selected action categories. Further, we manually annotate the extracted frames using question-answer pairs as justifications for the actions chosen in each scenario. Finally, we evaluate the correctness of the VQA-predicted answers for actions on unseen driving scenes. The empirical results suggest that the VQA mechanism can provide support to interpret real-time decisions of autonomous vehicles and help enhance overall driving safety.
## I Introduction
Urban autonomous driving is one of the most challenging tasks for self-driving vehicles, especially considering the potential interaction with other cars, road-crossing pedestrians, bystanders, traffic lights, and other conditions of dynamically changing environments. As highly automated vehicles increasingly rely on mapping sensory data to control the commands of a vehicle, applicable end-to-end learning techniques should be acceptably safe and computationally transparent. In particular, the remarkable success of deep learning and computer vision algorithms has expedited progress in safe autonomous driving on real roads and urban areas. For example, in February 2023, Waymo reported that their autonomous vehicle drove more than one million rider-only miles across several US cities with no reported injuries or events involving vulnerable road participants [1]. The report also describes that Waymo's vehicle was involved in two accidents, where one of the accidents was caused by the driver of another car being distracted by their phone while approaching a red traffic light, according to Waymo's claim. Moreover, other recently reported traffic accidents with self-driving cars [2] and resulting fatalities call for a scrutinized regulation of vehicle autonomy within a legal framework. Such road mishaps trigger safety, transparency, and other legal culpability issues. Inherently, a self-driving vehicle also needs to justify its temporal decisions with some form of explanation. As self-driving decisions directly impact passengers on board and other road users, consumers and transportation jurisdictions intrinsically expect transparency and rely on the correctness of such decisions. As a concrete example, the European Union (EU) adopted the General Data Protection Regulation (GDPR) [3] that proposed a recital of the _right of explanation_, which entitles consumers to receive an explanation on decisions of autonomous systems. Article 22 of GDPR also describes general principles regarding stakeholders' rights and responsibilities to use automated decision-making systems [4]. Thus, the need for the explainability of autonomous driving decisions has legal, socio-technical, psychological, and philosophical perspectives, in general [5, 6].
The delivery of explanations is another important topic in autonomous driving. As both consumers and engaged technical people have different backgrounds and knowledge about how self-driving cars work, it is necessary that explanations are provided in accordance with an explanation receiver's (i.e., an explainee) relevant identity, as described in the recent surveys of [5, 6, 7]. In this context, self-driving explanations must be correct, sufficiently informative, and intelligible for explainees.
In this study, we propose a VQA-based explanation approach1 to justify RL-based self-driving decisions in a simulation environment. At its core, VQA is a task at the intersection of natural language processing and computer vision.
Footnote 1: The source code, data, and related resources are available at [https://github.com/Shahin-01/VQA-AD](https://github.com/Shahin-01/VQA-AD).
Fig. 1: An example of the most probable answers with softmax probability scores predicted by our VQA framework on the action of an ego vehicle.
It is a question-answering technique intuitively applicable to autonomous driving. When humans drive or ride as passengers, they inherently analyze real-time and upcoming traffic scenes and think about relevant causal, temporal, and descriptive questions, such as "Why is the car turning left?", "What action will the car in the left lane perform at the T-junction?", and "What is the speed of the vehicle in front?". Getting answers to such questions by any means helps us have a reliable and safe trip. In this regard, we leverage the VQA mechanism to pose a question about an autonomous car's chosen action within the driving scene and justify that action with a causal answer reflecting the car's decision-making in that scenario.
Motivated by this point, we build our framework as follows. We train an RL agent (i.e., an ego car) to operate in an autonomous driving environment and record its decisions (_actions_) in correspondence to the video frames (_states_). We then utilize a VQA system to justify actions of the ego car: the VQA framework inputs an _image frame_ with a _question_ reflecting the action of the car in the scene and tries to predict the relevant answer for such an action (e.g., Figure 1).
Overall, the main contributions of our paper can be summarized as follows:
* We present the first empirical study on explaining autonomous driving actions with a VQA approach.
* We release a dataset of image-question-answer triplets justifying an autonomous car's actions in the scene.
* We show that connecting vision and natural language could rationalize an RL agent's decision-making in an intelligible way.
* We propose further directions to develop more rigorous VQA frameworks for explanatory self-driving actions.
The rest of the paper is organized as follows. In Section II, we review state-of-the-art explainability approaches for autonomous driving. We then present details of the data generated by our RL agent and of the visual feature extraction in Section III. Finally, we report the empirical results and their discussion in Section IV and sum up the article with concluding remarks and future directions.
## II Related Work
Since Bojarski et al.'s [9] CNN-based end-to-end learning approach, the autonomous driving community has shown increasing interest in interpreting self-driving decisions. In general, the most commonly explored explanation techniques for autonomous driving are _visual explanations_, _textual explanations_, _feature importance scores_, and _hybrid_ or _multimodal explanations_ comprising two or more of these methods (see Figure 2 for the relevant classification).
Visual explanations in the context of autonomous driving identify which parts of the perceived image (i.e., driving scene) have more influence on the vehicle's decision, as justifications for the performed action [7]. For instance, a visual explanation can show an image of a red traffic light captured by the vehicle's video camera as a _saliency map_ (i.e., a heatmap) pointing out that the perception algorithm classified it as a primary reason for stopping. In this context, Kim and Canny proposed a causal attention-based visualization technique to show which groups of pixel values (i.e., blobs) have a true causal impact on the model's prediction [10]. After analyzing attention maps in a post-hoc manner, they remove more than half of the blobs and analyze the model's output. The empirical results show that the network produces more convincing and correct predictions in driving decisions, just like real drivers do in a realistic environment. Furthermore, as an augmented version of their initial work [9], Bojarski et al. developed _VisualBackProp_, a saliency map-based visual explanation framework highlighting which sets of input pixels have more influence on a vehicle's decisions [11]. They show that the VisualBackProp method correctly identifies the most important traffic elements, such as lane markings and other cars in the scene, as a basis for decision-making. In addition, VisualBackProp has been proven to be an effective interpretable approach to detecting the failure cases [12] in the original vision-based end-to-end learning method of [9].
Another popular vision-based rationalization technique uses the idea of _counterfactual visual explanations_. These explanations aim to identify whether changing some parts of the original image leads to a prediction different from the one made on the original input. Bansal et al. [13] modified hand-crafted inputs by removing some objects in the image to see whether their introduced _ChauffeurNet_ makes different predictions with the altered image. A similar strategy is followed by Li et al. [14], where the goal is to find the _risk objects_ for driving. They show that manipulated
Fig. 2: The most common approaches to explaining autonomous driving actions.
removal of a pedestrian in an intersection changes the driving command from "Stop" to "Go"; thus, the pedestrian is considered a "risk object", which causes the driving decision to change to the contrastive class. Finally, as a more recent counterfactual analysis, Jacob et al. [16] investigated the effect of style modifications of image regions on the driving model's predictions. The experimental study shows that their presented framework, _STEEX_, generates counterfactual explanations under manual interventions to the driving scene. Therefore, visual explanations can enable people to ensure that the intelligent driving system accurately senses the operational environment.
Textual descriptions are another way of conveying rationales to the end-users for driving decisions. This approach generates natural language text that explains driving actions with descriptive, temporal, and causal information. The first successful textual explanation work is Kim et al.'s study [17], where the authors leverage an attention-based video-to-text approach to generate textual explanations for autonomous vehicles. They further extend this work by incorporating human advice [18] and observation-to-action rules [19] to the underlying model and provide text-based explanations on performed actions. In another study, Xu et al. introduce _BDD-OIA_[20], an extension of the BDD100K dataset. Based on the action-inducing objects, they provide 21 explanations for a set of 4 actions (move forward, stop/slow down, turn left, turn right). Lastly, in this context, Ben Younes et al. [21] proposed _BEEF_, an architecture that explains the behavior of trajectory prediction with textual justifications based on features fused from multi-levels, such as late features comprising the system-wise decisions and spatio-temporal features consisting of perceptual driving information.
Feature importance scores, as well-known quantitative evaluation metrics, have also recently been investigated in various autonomous driving tasks. The applications of these methods to autonomous driving include decision trees [22], Shapley values [23], and partial dependence plots [24]. The primary goal of these methods in self-driving is to understand the weights and contributions of scene features used in predictive modeling across the explored self-driving tasks.
Finally, except for visual, textual, and quantitative explanations, recent studies have attempted to use multi-modal explanatory techniques to convey information on the chosen course of actions of self-driving vehicles. For example, in their two studies [25, 26], Schneider et al. propose a combination of visual, textual, audio, light, and vibration feedback to provide retrospective and live explanations on action decisions of autonomous driving. These studies show that while visualization and light-based driving information improve the user experience (UX), multi-modal explanations can enhance perceived control and understanding of a vehicle's decision-making by connecting UX, autonomous driving, and explainable AI. Moreover, real-time driving information delivered via vibration, tactile sensation, or haptic feedback with a relevant degree of an alert may have a crucial role in the smooth and timely transfer of control between a self-driving vehicle and a backup driver.
With the inherent ability to reason about visual information, such as images, videos, and related multimedia data, VQA has recently been explored in several safety-critical and security-concerning domains. These works include applications to the healthcare and medical field [27, 28] and visual surveillance [29]. Interestingly, the topic has not been investigated deeply in autonomous driving. As far as we know, there are only two instances that utilize the VQA mechanism in the transportation domain. The first is the CLEVRER dataset, which describes the collision events with video representation and reasoning [30]. The other contribution is the SUTD-TrafficQA benchmark, which basically predicts traffic situations with question-answer pairs ranging from basic understanding (i.e., _What is the type of the road?_) to reverse reasoning (i.e., _What might have happened moments ago?_) [31]. In our study, we focus on action-based explanations and show that question-answering-based causal event reasoning has significant benefits for explaining real-time decisions of self-driving cars. We describe the details of the framework and the experimental results in the following sections.
## III Experimental Design and Methodology
Our framework is designed in three primary steps. First, we use a deep RL agent to control an autonomous car in a simulation environment and collect a driving video from the simulator. We then convert this recorded video to image
Fig. 4: State space representation of the ego car in its environment. An ideal state is that the car follows the direction of the lane within the lane.
Fig. 3: An aerial view of Town 1 and 2 on the CARLA simulator [15].
sequences uniformly. Finally, we select five specific action categories in the extracted driving frames and annotate them using question-answer pairs that justify the car's action in the scene (Table I). The high-level description of the components and overall architecture is provided in Figure 6. Given such a setup, the objective of our architecture is to predict the correct answer to a posed question about an autonomous car's performed action in an unseen driving scene. The details of the data collection, data annotation, and question-answering steps are described in the following subsections.
#### Iii-B1 Data Collection
To obtain driving data, we trained an RL agent (i.e., a self-driving car) on the CARLA simulator [15]. We used the Deep Deterministic Policy Gradient (DDPG) algorithm [32] to control the self-driving car in the simulation environment. Control commands in automated driving are continuous actions, including braking, acceleration, and steering angle, each of which can take values over a broad range. DDPG, as an extension of the Deep Q-learning algorithm to continuous action spaces, is therefore well suited for driving control tasks. Furthermore, DDPG uses _experience replay_, a memory storing the agent's past experiences (\(s_{t},a_{t},r_{t},s_{t+1}\)), from which the algorithm can sample randomly to train the agent. This ability to reuse samples makes DDPG a computationally efficient learning approach. Moreover, DDPG has an actor-critic architecture, in which the actor learns an observation-to-action mapping and the critic learns to evaluate the quality of the agent's chosen actions. DDPG also uses _target networks_: the target actor network \(\mu^{\prime}\) and the target critic network \(Q^{\prime}\). These networks are time-delayed copies of their original networks that help stabilize the training process. The parameters of the target networks are updated as follows:
\[\theta^{Q^{\prime}}\leftarrow\tau\theta^{Q}+(1-\tau)\theta^{Q^{\prime}} \tag{1}\]
\[\theta^{\mu^{\prime}}\leftarrow\tau\theta^{\mu}+(1-\tau)\theta^{\mu^{\prime}} \tag{2}\]
where \(\tau\ll 1\). For effective action exploration, additive noise is added to the actor policy and the action is selected accordingly:
\[a_{t}=\mu(s_{t}|\theta^{\mu})+\mathcal{N}_{t} \tag{3}\]
Such a learning technique enables the DDPG agent to learn a policy that maximizes its expected reward while also considering the quality of the chosen actions.
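To illustrate equations (1)-(3), a minimal PyTorch-style sketch of the target-network soft update and the noisy action selection is given below. The paper does not specify its implementation framework, so the module interface, the Gaussian form of the exploration noise, and the clipping to the action interval [-1, 1] are assumptions.

```python
import torch

def soft_update(target_net, source_net, tau=0.001):
    """Polyak averaging of target-network parameters, cf. equations (1)-(2):
    theta' <- tau * theta + (1 - tau) * theta'."""
    for tgt, src in zip(target_net.parameters(), source_net.parameters()):
        tgt.data.copy_(tau * src.data + (1.0 - tau) * tgt.data)

def noisy_action(actor, state, noise_std=0.1):
    """Exploration as in equation (3): deterministic actor output plus
    additive noise (Gaussian here, as an assumption), clipped to [-1, 1]."""
    with torch.no_grad():
        a = actor(state)
        a = a + noise_std * torch.randn_like(a)
    return a.clamp(-1.0, 1.0)
```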
_RL Training Details:_ We generated driving data by training the agent on Town 1 within CARLA. Town 1 (see Figure 3, a) is a map containing straight lines, left turns, right turns, T-junctions, traffic lights, speed signs, and various stationary objects around the curbs. We first use the A* motion planning algorithm [33] to generate a route between an initial and a final point of a motion trajectory inside the simulated town, consisting of consecutive waypoints linking these points. In our experiment, we set the number of waypoints to 15. By
Fig. 5: Learning curve of DDPG in Town 1 with the specified parameters. Our VQA framework is further fine-tuned on driving data collected here.
default, the waypoints are referenced to the origin point (0,0,0) of the map. To ensure that they are referenced to the dynamic position of the self-driving car while in motion, we use Perez et al.'s methodology [34] and apply a transformation matrix to represent the state of the agent with these points, the vehicle's yaw angle, and its global position on the map as follows:
\[\begin{bmatrix}\cos\phi_{c}&-\sin\phi_{c}&0&X_{c}\\ \sin\phi_{c}&\cos\phi_{c}&0&Y_{c}\\ 0&0&1&Z_{c}\\ 0&0&0&1\end{bmatrix} \tag{4}\]
The goal of the task is that the ego car follows this predefined route and reaches the final destination by performing the relevant actions along its trip.
As seen from Figure 4, the agent acquires a driving vector \(f_{t}\)= (\(v_{t}\), \(d_{t}\), \(\phi_{t}\)) from the simulation environment, where these parameters reflect the vehicle's velocity, lateral distance, and yaw angle, respectively. Ideally, the goal of driving is to keep moving in the direction of the lane for as long as possible without lane departures and collisions. In this sense, the reward shaping can be conditioned on 1) the vehicle's perfect longitudinal direction, 2) its deviation from the lane direction, measured by the yaw angle, and 3) lane departure and collision. Based on these criteria, we adopt the relevant reward formulation from Perez et al. [34] for an ego car:
\[R=\begin{cases}-200&\text{lane departure or collision,}\\ \sum_{t}|v_{t}\cos\phi_{t}|-|v_{t}\sin\phi_{t}|-|v_{t}|\,|d_{t}|&\text{driving inside the lane,}\\ +100&\text{arriving at the goal position.}\end{cases} \tag{5}\]
Finally, the action space is continuous and takes values in the interval [-1, 1]. With this setting defined, we trained our agent in Town 1. The training parameters of DDPG can be seen in Table II.
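A per-step version of the reward in Eq. (5) can be sketched as follows; the collision/lane-departure and goal flags are assumed to be provided by the simulator, and the summation over \(t\) is carried out by the training loop.

```python
import math

def step_reward(v, phi, d, off_road_or_collision=False, reached_goal=False):
    """Per-step summand of Eq. (5): reward speed along the lane direction,
    penalize lateral speed and lane deviation, with terminal penalty/bonus cases."""
    if off_road_or_collision:
        return -200.0
    if reached_goal:
        return 100.0
    return abs(v * math.cos(phi)) - abs(v * math.sin(phi)) - abs(v) * abs(d)
```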
#### Iii-B2 Data Annotation
As we obtained the driving video with the DDPG agent, we selected 5 action categories (_go straight, turn left, turn right, turn left at T-junction, and turn right at T-junction_) and extracted consecutive frames uniformly (30 frames per second) for 5 video segments. We then chose 10 frames from each segment. We ensured that these frames were extracted from driving segments where the car followed the predefined route and performed the relevant action safely without lane departure or collision. We distinguish left and right turns in the current lane from left and right turns at a T-junction, as in the latter the ego car also has an alternative route. So, our training data includes 5 action categories with 50 high-quality frames per category, denoting a total of 250 driving scenes obtained from the recorded video. We manually annotated the training data with five causal question-answer (QA) pairs (see Table I), ensuring the annotations reflected the scene correctly. Each of the 250 frames has a single, scenario-specific annotation. As test data, we selected a collection of 100 frames from both Town 1 and Town 2 on the CARLA simulator, as the map of Town 2 is similar to Town 1. Similar to the training data annotation, we selected 20 frames for each action category from various segments of Town 1 and Town 2 and annotated each of them with a relevant QA pair. The goal is to assess the generalization ability of the employed VQA framework on these action categories in unseen traffic scenarios.
#### Iii-B3 Question-Answering Framework
On the question-answering side, we fine-tune the original VQA framework [35] trained on the MS COCO dataset [36]. At the highest level, our VQA model takes an encoded driving image and a question embedding as input to predict the answer (i.e., explanation) for a performed action in the scene. The model is composed of two neural networks. The first one is a multilayer feedforward network with 2 hidden layers, each containing 1000 hidden units with a _tanh_ activation function. We apply dropout regularization with a rate of 0.5 in each layer. Finally, a long short-term memory (LSTM) [37] followed by a softmax layer is employed to produce an answer for the asked question about the driving action. On the image encoding part, we eliminate the output layer and use the last hidden layer of the pre-trained VGG-19 architecture [38], producing a 4096-dimensional feature vector. Further, a linear transformation is applied to make the image features 1024-dimensional. The LSTM model for the question encoder has 2 hidden layers with 512 hidden units, and thus its output is a 1024-dimensional vector, the same size as the image features. An interesting aspect is the unification of the question and image vectors from a mathematical perspective. Previous studies have generally either preferred the concatenation or
Fig. 6: A diagram of the proposed VQA architecture for autonomous driving.
multiplication of these vectors, but [35] and [39] have shown that multiplying the image and question encoder usually leads to a better joint representation. Consequently, given the image vector, \(V_{i}\), and question embedding \(V_{q}\), the resulting vector passed to the fully connected layer of the VQA pipeline is represented as their element-wise multiplication, as a fused feature vector:
\[V_{r}=V_{i}\odot V_{q} \tag{6}\]
We use the question and answer vocabularies of the original VQA framework, which have sizes of more than 17K unique tokens and 1000 candidate answers (which are either single tokens such as "yes" and "white," or expressions consisting of two or more strings such as "playing video game"), respectively, obtained from descriptions of the MS COCO [36] images. We customize the candidate answers by adding the answers to our 5 action questions to that answer vocabulary. The expectation is that our VQA model picks the correct answer, i.e., the one with the highest softmax probability score out of the 1K candidates, for the asked "Why" question about the action within the driving scene.
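A rough PyTorch sketch of this pipeline is given below; the embedding dimension, vocabulary sizes, and the way the two LSTM layers are combined into a 1024-dimensional question vector are assumptions for illustration, not details taken from [35].

```python
import torch
import torch.nn as nn

class VQAFusionHead(nn.Module):
    """Project VGG-19 features, encode the question with an LSTM, fuse by
    element-wise multiplication (Eq. 6), and classify over candidate answers."""
    def __init__(self, vocab_size=17000, num_answers=1000, embed_dim=300):
        super().__init__()
        self.img_proj = nn.Linear(4096, 1024)          # VGG-19 fc7 features -> 1024-d
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, 512, num_layers=2, batch_first=True)
        self.classifier = nn.Sequential(               # 2 hidden layers, tanh, dropout 0.5
            nn.Linear(1024, 1000), nn.Tanh(), nn.Dropout(0.5),
            nn.Linear(1000, 1000), nn.Tanh(), nn.Dropout(0.5),
            nn.Linear(1000, num_answers),              # softmax applied in the loss
        )

    def forward(self, img_feat, question_tokens):
        v_i = self.img_proj(img_feat)                  # (B, 1024)
        _, (h, _) = self.lstm(self.embed(question_tokens))
        v_q = torch.cat([h[0], h[1]], dim=-1)          # both LSTM layers -> (B, 1024)
        v_r = v_i * v_q                                # element-wise fusion, Eq. (6)
        return self.classifier(v_r)
```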
## IV Experimental Results and Discussion
On the data collection side, we trained the DDPG agent on CARLA version 0.9.11 for 500 episodes using a TensorFlow backend to obtain a driving video. As described above, we used 250 frames from Town 1 for training our VQA network and evaluated its performance on 100 frames collected from Town 1 and Town 2 (Figure 3). We used the PyTorch backend for training and evaluating our VQA architecture. The experiments were performed on an NVIDIA RTX 3090 GPU machine with a 32 GB memory size. All the frames were set to a size of 640 \(\times\) 480 in both training and testing. As we have ground-truth answers (see Table I) for the asked question about an image, we compare the top prediction of our model on the test data (i.e., the answer with the highest softmax probability score) with these ground-truth answers. Thus, we use accuracy as an evaluation metric, which is defined as follows:
\[\text{Accuracy}=\frac{\#\,\text{frames with correct predictions}}{\text{total number of test frames}} \tag{7}\]
Based on this evaluation criterion, our VQA model predicted 80 correct answers to the asked questions for 100 images. Hence, the accuracy of the prediction is 0.8 or 80%.
#### Iv-B1 Discussion
Except for _turn left_ actions, our model predicted explanatory answers correctly for all remaining action classes. Interestingly, in frames with turn-left scenarios,
Fig. 8: The average softmax probability scores for top predictions in each action category.
Fig. 7: Example scenarios from an ego vehicle’s field of view on CARLA. During the decision-making process of the agent, we are given visual signals and we ask action-related questions and try to find an answer given the current state. The green arrow shows the ego car’s chosen action and the white arrows indicate the other route at T-junction scenarios. We show the top 5 answers predicted by our model. The green-colored text shows the correct answer to the question for the performed action of the car. Except for the _turn left_ scenario, justifications for other actions are predicted correctly by the model.
the VQA framework primarily recognized these actions as _turn right_. In Figure 7, we provide exemplary driving scenes for the five action categories. As seen, the model was able to predict the highest probability scores for all actions in the scenes correctly, except for the misclassified _turn left_ action in the second image. This misclassification could be due to ambiguity in the tested driving frames, the shape of curves in the scene, and road conditions in the training data. Hence, it is important to increase the size of the training data considering the shapes of road lanes and curves, lighting, and other road objects to potentially improve the accuracy of the predictions of the VQA network on self-driving actions.
Another implication of our work is that unifying computer vision with natural language provides an opportunity to explain temporal actions of an RL agent. As explored in a recent study [40], explaining RL in sequential decision-making problems is an important and emerging topic, particularly when explanation receivers do not have a technical background. As autonomous driving is a safety-critical application area, justifying reinforcement learning-based decisions to end users with natural language-based reasoning is an effective and easily understandable approach. A natural foundation for explainable reinforcement learning (XRL) would be to provide reward-based justifications for action decisions. However, as self-driving explanations are intended for a general community, it is essential to ensure that such explanations are intelligible and informative. While [40] has attempted to build an inherently explainable RL architecture, we build our explanations independently of the agent's decisions. We also acknowledge the need to be cautious about providing explanations that are independent of an agent's behavior; it is possible that post-hoc explanations may _not_ always reflect an agent's real decision-making process. For example, in an actual _left turn_ scenario, a model's response to the question "Why is the car turning to the _right?_" as "Because the road is bending to the _right_." may be a hallucination of a VQA architecture. Consequently, it is important to further investigate the topic of generating linguistic explanations for an agent's actions and evaluate such explanations with human-adversarial examples as well.
#### Iv-B2 Limitations
Real roads are more complex and dynamic, with the presence of traffic lights, bystanders, passengers, other vehicles, and adverse weather conditions. In the current version of our framework, the ego car only interacts with the stationary environment and explains actions associated with such interactions. Moreover, our dataset is small in size. Hence, these features are limitations of our present framework; as a next step, we plan to work on explaining self-driving actions in more dynamic and complex scenarios with enriched data. Details are provided in the conclusions section.
#### Iv-B3 Practical use cases
In practice, the VQA mechanism can be leveraged in at least two ways on real autonomous vehicles. First, it can help passengers on board monitor driving safety by "judging" the vehicle's decisions. For instance, a user interface or dashboard set up on a back seat may provide voice-to-text functionality, and a passenger can observe the driving surroundings, ask a question about the vehicle's chosen action, and get an answer. Such a feature can help monitor the reliability of self-driving and instill trust in vehicle autonomy during the trip. Another practical application is to retain a history of action-question-answer triplets (...\(a_{t},q_{t},ans_{t},a_{t+1},q_{t+1},ans_{t+1}\)...) and use it for forensic analysis in possible accident investigations with self-driving vehicles. Such log data can help understand why the self-driving vehicle made a specific decision at a particular time just before being involved in an accident.
## V Conclusions and Future Work
We have presented a preliminary study on explaining autonomous driving actions with a VQA approach. We used driving data generated by an RL agent on the CARLA simulator and developed our question-answering system as an explanatory approach to the agent's decisions. The experimental results show that a simple and straightforward VQA mechanism can help interpret the real-time decisions of an autonomous car and also help understand its correct and incorrect decisions and their safety implications. The results also suggest that unifying VQA with RL-based decision-making will likely do well for actions in a dynamic environment, provided that we have a larger training dataset. In this sense, we plan to explore three potential directions:
_1. Augmenting data and using other VQA architectures:_ We will increase the size of the training data (ideally \(>\)50K driving frames), fine-tune our model using more recent visual backbone architectures, such as the Vision Transformer (ViT) [41], and try out other VQA frameworks as well. By making a comparative analysis of these pre-trained deep neural architectures on driving data, we can observe their empirical performance in terms of accuracy and potentially produce a large-scale and curated benchmark dataset.
_2. Training an RL agent on dynamic environments:_ We will run the RL agent in other towns on the CARLA simulator, which have more vehicles, pedestrians, and complex intersections, and annotate the ego vehicle's interaction with them accordingly to provide more image-question-answer triplets.
_3. Leveraging large language models (LLMs):_ Finally, recent breakthroughs in LLMs give a reason to use such architectures in autonomous driving problems. As our current task combines vision and natural language-based reasoning for explaining self-driving actions, multimodal transformers (e.g., GPT-4 [42] and its variations) could serve a purpose in this context. As multimodal transformers can take an image and text as input and provide contextual information about their joint semantics, it seems promising to fine-tune such state-of-the-art learning architectures for the self-driving domain and generate rigorously structured explanations.
We believe that the empirical work and further directions proposed in this paper can help improve safety, transparency, and trustworthiness of autonomous driving technology.
|
2307.08247
|
PAT: Parallel Attention Transformer for Visual Question Answering in
Vietnamese
|
We present in this paper a novel scheme for multimodal learning named the
Parallel Attention mechanism. In addition, to take into account the advantages
of grammar and context in Vietnamese, we propose the Hierarchical Linguistic
Features Extractor instead of using an LSTM network to extract linguistic
features. Based on these two novel modules, we introduce the Parallel Attention
Transformer (PAT), achieving the best accuracy compared to all baselines on the
benchmark ViVQA dataset and other SOTA methods including SAAA and MCAN.
|
Nghia Hieu Nguyen, Kiet Van Nguyen
|
2023-07-17T05:05:15Z
|
http://arxiv.org/abs/2307.08247v1
|
# PAT: Parallel Attention Transformer for Visual Question Answering in Vietnamese
###### Abstract
We present in this paper a novel scheme for multimodal learning named the Parallel Attention mechanism. In addition, to take into account the advantages of grammar and context in Vietnamese, we propose the Hierarchical Linguistic Features Extractor instead of using an LSTM network to extract linguistic features. Based on these two novel modules, we introduce the Parallel Attention Transformer (PAT), achieving the best accuracy compared to all baselines on the benchmark ViVQA dataset and other SOTA methods including SAAA and MCAN.
Information Fusion, Visual Question Answering, Attention, MultiModal Learning
## I Introduction
Multimodal learning recently attracted lots of attention from the research community because of its exciting and challenging requirements. Multimodal learning aims to explore how to extract and fuse multimodal information effectively. Typical tasks of multimodal learning can be listed as Visual Question Answering (VQA) where a machine is required to answer a given question based on visual information from a given image [2], Image Captioning (IC) where a machine is required to generate natural language captions that describe the content of the given image [2], or Visual Grounding where a machine is required to draw bounding boxes on images that indicate objects mentioned in a given query using natural language [37].
Most attention concentrates on multimodal tasks involving visual-textual information, particularly the VQA task. Current approaches to VQA treat this task as an answer classification task. This has guided the development of VQA methods toward studying the most effective scheme to fuse information from the given image and question in order to select the most accurate candidate among a given set of answers. According to the survey study of Zhang et al. [39], based on the way of performing attention, VQA methods can be grouped into two types: single-hop attention methods and multi-hop attention methods. On large VQA benchmarks for English, various works show that single-hop attention methods do not achieve good results compared to multi-hop attention methods.
In this paper, we present a new multi-hop attention method for fusing information from images. Our experimental results show that single-hop attention methods struggle with the VQA task even on a small dataset such as ViVQA [35].
## II Related works
### _VQA datasets_
Antol et al. [2] first introduced the VQA task by releasing the VQAv1 dataset. This dataset includes 254,721 images with 764,163 questions and 4,598,610 answers. Much of the attention was drawn to the VQAv1 dataset [8, 14, 34, 38], and many attention mechanisms were proposed that still influence the design of later methods [14, 22, 38], such as Co-Attention [22] and Stacked Attention [14].
Former studies on the VQAv1 dataset achieved good results [34] by treating the VQA task as answer selection over a defined set of candidates, or answer classification. However, as in other classification tasks, answer imbalance in the VQAv1 dataset poses a problem, as indicated by Goyal et al. [8]. Goyal et al. [8] showed that former VQA methods obtained good results on the VQAv1 dataset because they exploited language priors. Particularly, when given a question, former VQA methods recognize its pattern and select the most apparent answer belonging to that pattern as the candidate, regardless of the visual information in the images.
To overcome the language prior phenomenon, Goyal et al. [8] balanced the VQAv1 dataset and then proposed the VQAv2 dataset. Goyal et al. [8] conducted extensive experiments and showed that former VQA methods did not perform as well as previously believed. The VQAv2 dataset contains 204,721 images with 1,105,904 questions and 11,059,040 answers, which makes it the largest benchmark for the VQA task in English.
Recent studies constructed VQA datasets that require reading comprehension from VQA methods [24, 25, 30]. Moreover, to develop VQA systems that can incorporate external knowledge while answering the given questions, several datasets were released [23]. On the other hand, former VQA methods were designed to select answers rather than form sentences to answer as humans do. Consequently, some works constructed open-ended VQA datasets [13, 33] to study answer-generation methods instead of answer-selection ones.
In Vietnamese, the first VQA dataset was introduced by Tran et al. [35]. This dataset was constructed based on the COCO-QA dataset [19] using a semi-automatic method. Recently, Nguyen et al. [27] introduced the multilingual VQA dataset, the UIT-EVJVQA dataset, in three languages Vietnamese, English, and Japanese. This dataset is the first open-ended VQA dataset that includes Vietnamese. In addition, Nghia et al. [26] presented a Vietnamese open-ended VQA dataset consisting of 11,000+ images associated with 37,000+ question-answer pairs (QAs).
### _VQA methods_
Former VQA methods were designed based on the attention mechanism [36]. One well-known baseline on the VQAv1 dataset is the Hierarchical Co-Attention Network [22] which used the Convolutional Neural Network (CNN) [28] to extract the n-gram features from questions and used the co-attention to perform attention mechanism over questions and images. Later studies based on this co-attention proposed various methods such as ViLBERT [20], VisualBERT [18], or LXMERT [32].
Another strong baseline on the VQAv1 dataset, proposed by Kazemi et al. [14], introduces Stacked Attention. This kind of attention stacks the visual features and linguistic features together and then yields the attention map over the two kinds of features. Later work proposed methods based on Stacked Attention but using transformers [36], such as VL-BERT [31], Unicoder-VL [17], Uniter [5], X-LXMERT [6], Pixel-BERT [11], or VLMo [3].
## III Our proposed method
Inspired by the success of the transformer [36] and the study of Yu et al. [38], we propose a novel scheme of attention, Parallel Attention, that is a kind of multi-hop attention and differs from recent methods. Moreover, to leverage the linguistic features of Vietnamese, we provide Parallel Attention with the hierarchical feature extractor for questions and hence propose a novel method, the Parallel Attention Transformer (PAT). Our experiments prove that this hierarchical extractor is indeed necessary.
The PAT method includes four main components: the Hierarchical Linguistic Feature Extractor, the Image Embedding module, the Parallel Attention module, and the Answer Selector (Figure 1). The detailed architecture of our method is detailed as follows.
### _Hierarchical Linguistic Feature Extractor_
We apply a pre-trained word embedding for Vietnamese to extract the linguistic features of questions. Each token of a question, after being passed through the pre-trained word embedding, is mapped to its respective embedded vector. Accordingly, the features extracted using the word embedding are unigram features. We aim to make our method fully capture the linguistic context of the sentence, so we propose to construct n-gram linguistic features based on the unigram features (Figure 2).
Particularly, we use 1D convolutional neural networks (CNNs) with kernels of size 1, 2, 3, and 4 to extract the unigram, bigram, trigram, and 4-gram features from the initial unigram features, respectively. We note that the initial unigram features of the pre-trained word embedding might not lie in the latent space of the model, so we use a 1D CNN with a kernel of size 1 to project these features into that latent space. Our ablation study will show that this 1D CNN is important to improve the accuracy of our proposed method. The four n-gram features are finally summed to yield the hierarchical linguistic features of the questions.
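A minimal PyTorch sketch of this extractor is shown below; the embedding and hidden dimensions, and the padding/cropping convention used to keep all n-gram feature maps the same length, are assumptions.

```python
import torch
import torch.nn as nn

class HierarchicalLinguisticExtractor(nn.Module):
    """n-gram features from 1D convolutions with kernel sizes 1-4 over the
    pre-trained word embeddings, summed into one hierarchical representation."""
    def __init__(self, embed_dim=300, hidden_dim=512):
        super().__init__()
        self.convs = nn.ModuleList([
            nn.Conv1d(embed_dim, hidden_dim, kernel_size=k, padding=k - 1)
            for k in (1, 2, 3, 4)                       # unigram ... 4-gram branches
        ])

    def forward(self, word_embeddings):                 # (B, L, embed_dim)
        x = word_embeddings.transpose(1, 2)             # Conv1d expects (B, C, L)
        seq_len = x.size(-1)
        grams = [conv(x)[..., :seq_len] for conv in self.convs]   # crop each branch to length L
        return torch.stack(grams).sum(dim=0).transpose(1, 2)      # (B, L, hidden_dim)
```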
### _Image Embedding module_
Inspired by the study of Anderson et al. [1], we perform the bottom-up attention mechanism on the visual features. Particularly, we used the VinVL pre-trained image model [40] to obtain the region features from images. The VinVL model was trained on large-scale datasets of vision-language tasks and uses detected object tags together with the ROI features of Faster-RCNN-based models; hence its visual features are rich in both visual and linguistic aspects, and Zhang et al. [40] showed that VinVL outperformed previous pre-trained image models on various tasks. The visual features are projected into the latent space of the model by a fully connected layer before being passed to the next components.
### _Parallel Attention module._
Like various VQA models [5, 15, 17, 18, 20, 21, 38], our proposed method has an encoder module containing encoder layers to perform the attention mechanisms. In particular, the Parallel Attention module has four components. Each component has an attention layer [36] and a position-wise feed-forward layer [36].
The attention layer is the multi-head attention module proposed by Vaswani et al. [36]. Given a query \(Q\), key \(K\), and value \(V\) vector, the attention map is specified as follows:
\[A=softmax(\frac{QK^{T}}{\sqrt{d_{k}}}) \tag{1}\]
where \(d_{k}\) is the dimension of the key vector, and we assume that \(Q\), \(K\), and \(V\) have the same dimension. After obtaining the attention map, the encoded features are determined as:
\[Y=AV \tag{2}\]
In the Parallel Attention module, the first two components are used to perform cross-and-parallel attention: vision over language and language over vision, respectively, by exchanging the query, key, and value roles of the visual and linguistic features. The last two components are used to perform self-and-parallel attention: vision over itself and language over itself, by defining the query, key, and value to be all visual features or all linguistic features (Figure 3). Finally, the visual features \(x_{v}\) and linguistic features \(x_{l}\) are produced, which carry advantageous information for selecting an accurate candidate among the defined answers.
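The following sketch illustrates one such layer using PyTorch's `nn.MultiheadAttention`; residual connections and layer normalization are omitted and a single feed-forward network per modality is used for brevity, so this is a simplified reading of Figure 3 rather than the exact module.

```python
import torch
import torch.nn as nn

class ParallelAttentionLayer(nn.Module):
    """Two cross-attention branches (vision over language and language over vision)
    followed by two self-attention branches, one per modality."""
    def __init__(self, d_model=512, n_heads=8):
        super().__init__()
        self.cross_v = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.cross_l = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.self_v = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.self_l = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn_v = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                                   nn.Linear(4 * d_model, d_model))
        self.ffn_l = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                                   nn.Linear(4 * d_model, d_model))

    def forward(self, x_v, x_l):                 # (B, N_regions, d), (B, N_tokens, d)
        v, _ = self.cross_v(x_v, x_l, x_l)       # vision queries attend to language
        l, _ = self.cross_l(x_l, x_v, x_v)       # language queries attend to vision
        v, _ = self.self_v(v, v, v)              # vision over itself
        l, _ = self.self_l(l, l, l)              # language over itself
        return self.ffn_v(v), self.ffn_l(l)
```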
### _Answer Selector_
The Answer Selector module is designed to fuse the information of the visual features \(x_{v}\) and linguistic features \(x_{l}\) to produce the fused features \(x_{f}\). The fused features are then projected into the vocabulary space. Finally, we obtain a probability vector that indicates the most likely candidate answer. We follow the Attribute Reduction and Classifier design of the MCAN [38] method to build the Answer Selector.
In particular, the Answer Selector module includes two phases: attribute reduction and candidate selection (in the context of the study of Yu et al. [38], the latter phase is named Answer Classifier). Given \(x_{v}\) and \(x_{l}\) obtained from the Parallel Attention layers, we use an MLP layer with the softmax function to re-weight these features:
\[attr_{v}=softmax(MLP(x_{v})) \tag{3}\]
\[attr_{l}=softmax(MLP(x_{l})) \tag{4}\]
Then the reduced attributes are applied to denoise and combine the visual features \(x_{v}\) and linguistic features \(x_{l}\):
\[x_{v}=sum(x_{v}*attr_{v}) \tag{5}\]
\[x_{l}=sum(x_{l}*attr_{l}) \tag{6}\]
where \(*\) indicates the element-wise product.
Finally, the fused features \(x_{f}\) are obtained by summing the \(x_{v}\) and \(x_{l}\):
\[x_{f}=W_{v}x_{v}+W_{l}x_{l} \tag{7}\]
The selected candidate \(c\) is determined based on the fused features \(x_{f}\):
\[c=max(W_{vocab}x_{f}) \tag{8}\]
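A compact sketch of Eqs. (3)-(8) is shown below; the hidden size of the MLPs and the fact that they produce one scalar score per region/token are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class AnswerSelector(nn.Module):
    """Attribute reduction over regions/tokens, weighted sums, linear fusion,
    and answer selection over the vocabulary, following Eqs. (3)-(8)."""
    def __init__(self, d_model=512, num_answers=1000):
        super().__init__()
        self.mlp_v = nn.Sequential(nn.Linear(d_model, d_model), nn.GELU(), nn.Linear(d_model, 1))
        self.mlp_l = nn.Sequential(nn.Linear(d_model, d_model), nn.GELU(), nn.Linear(d_model, 1))
        self.w_v = nn.Linear(d_model, d_model, bias=False)
        self.w_l = nn.Linear(d_model, d_model, bias=False)
        self.w_vocab = nn.Linear(d_model, num_answers)

    def forward(self, x_v, x_l):                        # (B, N_v, d), (B, N_l, d)
        attr_v = torch.softmax(self.mlp_v(x_v), dim=1)  # Eq. (3)
        attr_l = torch.softmax(self.mlp_l(x_l), dim=1)  # Eq. (4)
        x_v = (x_v * attr_v).sum(dim=1)                 # Eq. (5)
        x_l = (x_l * attr_l).sum(dim=1)                 # Eq. (6)
        x_f = self.w_v(x_v) + self.w_l(x_l)             # Eq. (7)
        logits = self.w_vocab(x_f)
        return logits, logits.argmax(dim=-1)            # Eq. (8): selected candidate
```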
Fig. 1: Overall structure of the PAT method.
Fig. 3: A Parallel Attention module.
Fig. 2: Hierarchical Linguistic Features Extractor.
## IV Experimental results
### _Dataset_
In this paper, we propose the Hierarchical Linguistic Feature Extractor to leverage the advantages of grammar and context in Vietnamese. Accordingly, we conduct experiments on the ViVQA dataset [35] which is the first visual question answering dataset for Vietnamese.
### _Evaluation Metrics_
We follow the study of Teney et al. [34] that treats the VQA task as a classification task. Accordingly, we use the Accuracy metric, or Exact Match (EM) metric, defined by Antol et al. [2] to measure the ability of the VQA methods in our experiments. Particularly, the EM metric is determined as:
\[EM=\frac{1}{n}\sum_{i=1}^{n}\left(\frac{1}{m}\sum_{j=1}^{m}\left(1-\alpha_{ij} \right)\right) \tag{9}\]
\[\alpha_{ij}=\begin{cases}0&\text{if }\hat{a}_{i}=a_{ij},\\ 1&\text{otherwise}\end{cases} \tag{10}\]
where \(n\) is the total number of questions in the whole dataset, \(m\) is the total number of answers for a given question \(i\), \(\hat{a}_{i}\) is the predicted answer for question \(i\), and \(a_{ij}\) is the \(j\)th ground-truth answer for question \(i\).
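As a quick illustration, the metric of Eqs. (9)-(10) can be computed as follows; the example prediction and ground-truth answers are hypothetical.

```python
def exact_match(predictions, ground_truths):
    """Average over questions of the fraction of ground-truth answers matched,
    i.e. (1/n) sum_i (1/m) sum_j (1 - alpha_ij), with alpha_ij as in Eq. (10)."""
    per_question = [
        sum(pred == answer for answer in answers) / len(answers)
        for pred, answers in zip(predictions, ground_truths)
    ]
    return sum(per_question) / len(per_question)

# hypothetical example: one prediction matching one of two accepted answers -> 0.5
print(exact_match(["hai con meo"], [["hai con meo", "2 con meo"]]))
```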
### _Baselines_
We compare our proposed PAT method with all models implemented in the previous work [35]. In addition, we re-implemented the two baselines on VQAv1 and VQAv2 datasets, which are SAAA [14] and MCAN [38] methods, respectively.
### _Configuration_
All experiments in this paper used the VinVL pre-trained image model [40] to extract region features [1] and grid features [12]. SAAA and MCAN, as well as PAT, use FastText [4] as the pre-trained word embedding to extract features of questions. All experiments were performed on an A100 GPU, with batch size 64 and the learning rate fixed at 0.01. We used Adam [16] as the optimization method. The detailed configuration for each method is listed as follows:
#### Iv-D1 SAAA (Show, Ask, Attend, and Answer)
We followed the configuration of SAAA that made this model obtain the best results on VQAv1 [14]. In particular, the LSTM [10] layer of SAAA has 1024 as its hidden dimension, and the attention size is 512. In the Classifier module of SAAA, features are mapped into 1024-dimensional space before being projected into the vocab space. In our implementation, we used VinVL instead of ResNet152 [9] to achieve the grid features.
#### Iv-D2 MCAN (Deep Modular Co-Attention Network)
We followed the best configuration of MCAN reported in the study [38]. In particular, we used 6 layers for the Co-Attention module. The multi-head attention modules of MCAN have 512 as their hidden size. We used VinVL to extract region features instead of Faster-RCNN [29].
#### Iv-D3 Pat
The Hierarchical Linguistic Feature Extractor contains 4 CNN layers with respectively 1, 2, 3, and 4 as their kernel size to extract unigram, bigram, trigram, and 4-gram features. The Parallel Attention module contains 4 layers. All attention modules of each layer in the Parallel Attention module have 512 as their hidden dimension. We follow [7] to use GeLU as an activation function instead of ReLU as in [36].
### _Results_
As indicated in Table I, SAAA and MCAN achieved significantly better results compared to all implementations of Tran et al. [35]. Straightforward structures, such as the combination of pre-trained word embeddings and an LSTM [10], do not effectively tackle such a complicated task as VQA, while deeper and more carefully designed methods such as SAAA and MCAN handled the ViVQA dataset better.
Notably, our proposed method, PAT, obtained the best results, leaving the other methods far behind. In particular, PAT achieved approximately 6% higher accuracy than SAAA and approximately 3% higher than MCAN, even though these two methods are the SOTA methods on VQAv1 and VQAv2 among models not pre-trained on large-scale datasets.
### _Ablation study_
We conduct an ablation study to comprehensively examine how our two proposed modules, the Hierarchical Linguistic Feature Extractor and the Parallel Attention module, contribute to the overall result of PAT. Results are shown in Table II.
According to Table II, PAT obtained lower accuracy when it used neither an LSTM nor the Hierarchical Linguistic Feature Extractor to extract question features. When equipped with an LSTM or the Hierarchical Linguistic Feature Extractor, PAT achieved better results, and it achieved the best results when
using the Hierarchical Linguistic Extractor. This result proves that the Hierarchical Linguistic Feature Extractor leverages the grammar dependency as well as the context of Vietnamese better than a simple LSTM network.
Moreover, as stated in Section III-A, the Hierarchical Linguistic Feature Extractor uses CNNs to extract up to 4-gram features, including the unigram features. This is necessary, as we assume the kernel-size-1 CNN used to extract unigrams also projects the pre-trained word embedding features into the latent space of PAT, where it is easier to fuse information with features from images. Table III supports this assumption: PAT with an additional kernel-size-1 CNN obtains a better result than the variant that uses unigram features taken directly from the pre-trained word embedding.
## V Conclusion and future works
In this paper, we present the PAT, which achieved the best performance on the benchmark ViVQA dataset. Our ablation study showed that the proposed Hierarchical Linguistic Feature Extractor performed better than LSTM when extracting features from questions.
In future works, we will continue to investigate the impact of using Large Language Models (LLMs) on the results of VQA methods, as well as search for the most effective multimodal structure that yields the best accuracy on the ViVQA dataset. In addition, our proposed method can be evaluated on two benchmark datasets: EVJVQA [27] and OpenViVQA [26].
|
2308.01380
|
Lessons from LHC on the LFV Higgs decays $h \to \ell_a \ell_b$ in the
Two-Higgs Doublet Models
|
The non-conservation of the lepton number has been explored at the LHC
through the Lepton-Flavor Violating (LFV) Higgs decays $h\to\ell_a\ell_b$, with
$\ell_{a,\,b}=e,\,\mu,\,\tau$ $(a \neq b)$. Current limits on these decays are
a source of valuable information on the structure of the Yukawa and Higgs
sectors. The LFV Higgs couplings can arise within the general Two-Higgs Doublet
Model (2HDM); the predicted rates for these decay modes depend on the specific
Yukawa structure being considered, ranging from a vanishing branching ratio at
tree-level for some versions (2HDM-I, II, X, Y), up to large and detectable
ratios within the general 2HDM-III. An attractive scenario is given by the
texturized version of the model (2HDM-Tx), with the Yukawa matrices having some
texture zeros, such as the minimal version with the so-called Cheng-Sher
ansazt. We study the constraints on the parameter space of the 2HDM provided by
experimental and theoretical restrictions, and use them to study the detection
of LFV Higgs modes at LHC. We find several encouraging scenarios to the search
for the decay $h \to\tau\mu$ that could be achieved in the High-Luminosity LHC.
On the other hand, LFV Higgs couplings can also be induced at one-loop level in
the 2HDM with neutrino masses, with the loops being mediated by neutrino
interactions; we find that the resulting branching ratios are of order
$10^{-7}$ at best, which is out of the reach of current and future phases of
the LHC.
|
M. A. Arroyo-Ureña, J. Lorenzo Díaz-Cruz, O. Félix-Beltrán, M. Zeleny-Mora
|
2023-08-02T18:37:21Z
|
http://arxiv.org/abs/2308.01380v2
|
# Lessons from LHC on the LFV Higgs decays \(h\to\ell_{a}\ell_{b}\) in the Two-Higgs Doublet Models
###### Abstract
The non-conservation of the lepton number has been explored at the LHC through the Lepton-Flavor Violating (LFV) Higgs decays \(h\to\ell_{a}\ell_{b}\), with \(\ell_{a,\,b}=e,\,\mu,\,\tau\) (\(a\neq b\)). Current limits on these decays are a source of valuable information on the structure of the Yukawa and Higgs sectors. The LFV Higgs couplings can arise within the general Two-Higgs Doublet Model (2HDM); the predicted rates for these decay modes depend on the specific Yukawa structure being considered, ranging from a vanishing branching ratio at tree-level for some versions (2HDM-I, II, X, Y), up to large and detectable ratios within the general 2HDM-III. An attractive scenario is given by the texturized version of the model (2HDM-Tx), with the Yukawa matrices having some texture zeros, such as the minimal version with the so-called Cheng-Sher ansazt. We study the constraints on the parameter space of the 2HDM provided by experimental and theoretical restrictions, and use them to study the detection of LFV Higgs modes at LHC. We find several encouraging scenarios to the search for the decay \(h\to\tau\mu\) that could be achieved in the High-Luminosity LHC. On the other hand, LFV Higgs couplings can also be induced at one-loop level in the 2HDM with neutrino masses, with the loops being mediated by neutrino interactions; we find that the resulting branching ratios are of order \(10^{-7}\) at best, which is out of the reach of current and future phases of the LHC.
## I Introduction
The Standard Model (SM), with a single Higgs doublet generating all the masses of the model, has been confirmed at the Large Hadron Collider (LHC) thanks to the detection of a light Higgs boson with \(m_{h}=125\) GeV [1; 2]. The LHC program includes measuring the Higgs properties to determine whether they are consistent with the SM predictions or show some deviations that could point in the direction of physics beyond the SM [3]. So far, the LHC has been able to measure the Higgs couplings with \(WW\), \(ZZ\), and \(b\bar{b},\tau^{+}\tau^{-},\mu^{+}\mu^{-}\), and indirectly the coupling with top quarks (from the Higgs couplings with photon and gluon pairs, which arise at one loop).
On the other hand, the open problems of the SM, together with the results in areas such as neutrino physics and dark matter, suggest that some form of new physics should exist, to account for such phenomena. So far, a variety of experimental searches, including the energy, precision and cosmological frontiers, have imposed strong limits on the scale of new physics above \(\mathcal{O}(1)\) TeV [4].
Models of new physics often include a Higgs sector with an extended spectrum and some new features [5]. Bounds on the flavor-conserving Higgs couplings that arise in these models have been derived at the LHC too [6]. Moreover, some of these extensions of the SM include a more flavored Higgs sector [7], which produces distinctive signals, such as Flavor-Violating (FV) Higgs-fermion interactions; in some cases the rates could be so large that some suppression mechanism is required in order to build viable models [8].
In multi-Higgs doublet models, the Glashow-Weinberg theorem [9] states that in order to avoid the presence of FV couplings in the Higgs-Yukawa sector, each fermion type (up- and down-type quarks) must acquire its masses from a single Higgs doublet. Later on, it was found that it is possible to build realistic models with FV Higgs couplings, provided that the Yukawa matrices have a restricted form, _i.e._ with texture zeroes [10]. The extension of these ideas to the lepton sector was presented in Refs. [11; 12].
It is possible to probe these LFV effects at low energies through the LFV decays \(\ell_{i}\to\ell_{j}\gamma\), \(\ell_{i}\to\ell_{j}\ell_{j}\ell_{k}\). Furthermore, Ref. [12] also proved that it is possible to have large rates for the LFV Higgs decays and still satisfy those low-energy LFV bounds. More realistic textures were considered in [13], while the possibility to search for these LFV Higgs decays at hadron colliders was studied in Ref. [14]. The most recent search for LFV Higgs decays at the LHC, with center-of-mass energy \(\sqrt{s}=13\) TeV and an integrated luminosity of \(138\) fb\({}^{-1}\), has provided stronger bounds: in particular, ATLAS reports both \(\mathcal{BR}(h\to e\tau)<0.20\%\) and \(\mathcal{BR}(h\to\tau\mu)<0.18\%\)[15], while the CMS collaboration concludes that \(\mathcal{BR}(h\to e\tau)<0.22\%\) and \(\mathcal{BR}(h\to\tau\mu)<0.15\%\) for \(137\) fb\({}^{-1}\) of accumulated data [16], which are consistent with a zero value. The fact that the LHC has searched for these LFV Higgs boson decay modes, and has already presented strong bounds on the corresponding branching ratios, has motivated great interest from the theoretical side. The LFV Higgs signal is also present in other models of new physics, including the 2HDM [17; 18; 19; 20], models with a low-scale flavon mixing with the SM-like Higgs boson [21], models with 3-3-1 gauge symmetry [22], the see-saw model and its inverse version [23; 24; 25; 26], the low-scale see-saw [27; 28], as well as SUSY models, including models with neutrino masses, such as the see-saw MSSM [29] and the minimal SUSY SM (MSSM) [30; 31] at loop level. Several types of methods have been used to calculate the LFV Higgs decays, for example the Mass Insertion Approximation (MIA) [32; 33; 34] or the effective field theory approach [35], also within the minimal SUSY SM (MSSM) [7]. In the coming phase (with higher luminosity) it will be possible to derive more restrictive bounds and to analyze the consequences for model building.
In this paper we present a general discussion of the implications of the LFV Higgs decay searches at the LHC, within the general 2HDM of type III [8]. We start by studying the limits on \({\cal BR}(h\to\tau\mu)\) that can be achieved at the LHC, as a function of the integrated luminosities that are projected for the future phases of the LHC. Then, we derive constraints on the model parameters, discuss the lessons that the LHC can impose on the model structure, and see how the resulting rates fit into the picture. We also discuss the expected rates for the LFV decay \(h\to\ell_{a}\ell_{b}\) within the \(\nu\)2HDM, where such a coupling is induced at the one-loop level.
The organization of our paper goes as follows. Section II contains our parametrization for the Higgs couplings for the general 2HDM-III. Section III contains the study of the 2HDM-III parameter space which was obtained from low-energies processes and Higgs boson data. Results for the search for LFV Higgs decays at LHC and their future stages, namely, HL-LHC, HE-LHC and FCC-hh, are also presented in Sec. III. Section IV is dedicated to the one-loop calculation for the decay \(h\to\ell_{a}\ell_{b}\) within the 2HDM of type I,II, including massive neutrinos, which are generated using the see-saw mechanism of type I. Finally, conclusions and perspectives are presented in Section V.
## II The LFV Higgs couplings in the 2HDMs
The general two-Higgs doublet model admits the possibility of LFV Higgs couplings, which have already been explored at the LHC through the search for the decays \(h\to\ell_{a}\ell_{b}\). The predicted rates for these decays depend on the specific model realization, ranging from a vanishing branching ratio at tree level for the 2HDM-I, II, X, Y, as well as the MFV implementation, up to large and detectable ratios within the 2HDM-III, which depend on the specific Yukawa structure being considered. For instance, the texturized version of the model (2HDM-Tx) with Yukawa matrices having four texture zeros [10; 13; 36; 37], which has been found to reproduce the Cheng-Sher ansatz [38; 39], provides the largest possible rates and is strongly constrained by the LHC (Run 2), as we shall argue here. Next in importance could be the 2HDM where the LFV Higgs couplings are absent at tree level but could be induced at one loop, as is the case for the \(\nu\)2HDM, which is the 2HDM of type I or II extended with neutrino masses.
Thus, we start by presenting the coupling of the Higgs boson (\(h\)) with vector bosons (\(W^{\pm},\,Z\)). In this case we can write the interaction Lagrangian with terms of dimension-4, consistent with Lorentz symmetry and derivable from a renormalizable model, as follows:
\[{\cal L}_{V}=\kappa_{W}gm_{W}hW^{+\mu}W^{-}_{\mu}+\kappa_{Z}\frac{gm_{Z}}{2c_ {W}}hZ^{\mu}Z_{\mu}. \tag{1}\]
When the Higgs particle corresponds to the minimal SM, one has \(\kappa_{W}=\kappa_{Z}=\kappa_{V}=1\), while values \(\kappa_{W}=\kappa_{Z}\neq 1\) arise in models with several Higgs doublets, which respect the so-called custodial symmetry. In particular, for models that include \(N\) Higgs doublets \(\Phi_{a}\), each one having a vacuum expectation value (vev) \(v_{a}\), one has:
\[\kappa_{V}=\sum_{a}\frac{v_{a}}{v}O_{1a}, \tag{2}\]
where \(O_{1a}\) denotes the element of the matrix that diagonalizes the \({\cal CP}\)-even Higgs mass matrix, with \(h_{1}=h\) denoting the light SM-like Higgs boson; and \(v^{2}=\sum_{a=1}^{N}v_{a}^{2}=246^{2}\) GeV\({}^{2}\).
On the other hand, the interaction of the Higgs with fermion \(f\), with \(f=q\) or \(\ell\) for quarks and leptons, respectively, can also be written in terms of a dimension-4 Lagrangian that respects Lorentz invariance,
namely:
\[\mathcal{L}_{f}=h\bar{f}_{i}(S_{ij}+i\gamma_{5}P_{ij})f_{j}. \tag{3}\]
The \(\mathcal{CP}\)-conserving (\(\mathcal{CP}\)C) and \(\mathcal{CP}\)-violating (\(\mathcal{CP}\)V) factors \(S_{ij}\) and \(P_{ij}\), which include the flavor physics we are interested in, are written as:
\[S_{ij} = \frac{gm_{i}}{2m_{W}}c_{f}\delta_{ij}+\frac{g}{2}d_{f}\eta_{ij}, \tag{4}\] \[P_{ij} = \frac{gm_{i}}{2m_{W}}e_{f}\delta_{ij}+\frac{g}{2}g_{f}\eta^{ \prime}_{ij}. \tag{5}\]
The factors \(c_{f},d_{f},e_{f}\) and \(g_{f}\) depend on new physics in the Higgs sector, where \(h\) is part of an enlarged Higgs sector; they also describe the possibility that the fermion masses may come from more than one Higgs doublet. As far as the factors \(\eta_{ij}\) and \(\eta^{\prime}_{ij}\) are concerned, they represent new physics associated with the origin of flavor, _i.e._ they have an explicit dependence on the Yukawa structure.
Within the SM \(c_{f}=1\) and \(d_{f}=e_{f}=g_{f}=0\), which signals the fact that for the SM the Higgs-fermion couplings are \(\mathcal{CP}\)C and flavor diagonal, _i.e._ there is no Flavor Changing Scalar Interactions (FCSI). Notice that when we have \(e_{f}\neq 0\), but \(d_{f}=g_{f}=0\), it indicates that the Higgs-fermion couplings are \(\mathcal{CP}\)V, but still flavor diagonal. In models with two or more Higgs doublets, it is possible to have both FCSI (\(d_{f},g_{f}\neq 0\)) and \(\mathcal{CP}\)V.
Explicit values for the coefficients \(c_{f},d_{f}\) within the general Two-Higgs doublet model with Yukawa matrix of texture form (2HDM-Tx), are shown in Table 1, for the \(\mathcal{CP}\)C case: \(e_{f}=g_{f}=0\), while the parameters \(\eta_{ij}\) are written as follows:
\[\eta_{ij}=\frac{\sqrt{m_{i}m_{j}}}{m_{W}}\chi_{ij}, \tag{6}\]
where the coefficients \(\chi_{ij}\) parametrize the dependence on the flavor structure of the Yukawa matrices, and they are expected to be \(\mathcal{O}(1)\) in minimal scenarios (such as models with four texture zeroes that reproduce the so-called Cheng-Sher ansatz). However, these parameters could be more suppressed in other settings, such as the so-called Texturized models [40]. In the BGL models [41] it is shown that the size of the FV Higgs couplings is given by the CKM matrix elements, and therefore the resulting branching ratios could be more suppressed.
| Coefficient | \(c_{f}\) | \(d_{f}\) |
| --- | --- | --- |
| \(d\)-type | \(-\frac{\sin\alpha}{\cos\beta}\) | \(\frac{\cos(\alpha-\beta)}{\cos\beta}\) |
| \(u\)-type | \(\frac{\cos\alpha}{\sin\beta}\) | \(-\frac{\cos(\alpha-\beta)}{\sin\beta}\) |
| leptons \(\ell\) | \(-\frac{\sin\alpha}{\cos\beta}\) | \(\frac{\cos(\alpha-\beta)}{\cos\beta}\) |

Table 1: Coefficients for Higgs–fermion couplings in the 2HDM-Tx with a \(\mathcal{CP}\)-conserving Higgs potential.
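As a rough numerical illustration of Eq. (6), the size of the LFV factor \(\eta_{\tau\mu}\) for \(\chi_{\tau\mu}=1\) can be estimated as sketched below (lepton and \(W\) masses in GeV, values rounded).

```python
import math

# eta_ij = sqrt(m_i * m_j) / m_W * chi_ij, Eq. (6), assuming chi_taumu = 1
m_tau, m_mu, m_W = 1.777, 0.10566, 80.38
eta_taumu = math.sqrt(m_tau * m_mu) / m_W
print(f"eta_tau_mu ~ {eta_taumu:.1e}")   # of order 5e-3
```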
## III Search for the \(h\to\tau\mu\) decay at future hadron colliders
Let us start with the analysis of the 2HDM-III parameter space. In this study we consider the most up-to-date experimental data from LHC and low-energy processes, as shown below.
### Constraint on the 2HDM-III parameter space
To evaluate the production cross section of the SM-like Higgs boson and the branching ratio of the \(h\to\tau\mu\) decay, it is necessary to analyze the 2HDM-III parameter space. The relevant 2HDM-III parameters that have a direct impact on the predictions are \(c_{\alpha\beta}\equiv\cos(\alpha-\beta)\) and \(t_{\beta}\equiv\tan\beta\). This is so because both the \(g_{h\tau\mu}\) and \(g_{htt}^{III}\) couplings depend on them (see Fig. 1), namely,
\[g_{h\tau\mu} = \frac{c_{\alpha\beta}t_{\beta}}{\sqrt{2}s_{\beta}}\frac{\sqrt{m_{\mu}m_{\tau}}}{m_{W}}\chi_{\tau\mu}, \tag{7}\] \[g_{htt}^{III} = \frac{gm_{t}}{2m_{W}}\Bigg{(}\frac{c_{\alpha}}{s_{\beta}}-\frac{c_{\alpha\beta}}{s_{\beta}}\chi_{tt}\Bigg{)}, \tag{8}\]
where we have considered \(\chi_{tt}=1\); this value is in accordance with the \(htt\) SM coupling (which depends on \(t_{\beta}\) and \(c_{\alpha\beta}\)), as shown in Fig. 2. In this analysis we include uncertainties coming from direct measurements of the top quark mass [42].
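The scaling of these couplings with \(t_{\beta}\) and \(c_{\alpha\beta}\) can be explored numerically as sketched below; the choice of the SM-like branch for \(\alpha\) and the approximate value of the gauge coupling \(g\) are assumptions made for illustration.

```python
import math

m_mu, m_tau, m_t, m_W = 0.10566, 1.777, 172.5, 80.38   # GeV
g = 0.65                                                # approximate SU(2)_L gauge coupling

def couplings(t_beta, c_ab, chi_taumu=1.0, chi_tt=1.0):
    """Evaluate Eqs. (7)-(8) as written, choosing the SM-like branch for alpha."""
    beta = math.atan(t_beta)
    alpha = beta - math.acos(c_ab)          # branch where cos(alpha)/sin(beta) -> 1 as c_ab -> 0
    s_b, c_a = math.sin(beta), math.cos(alpha)
    g_htaumu = c_ab * t_beta / (math.sqrt(2) * s_b) * math.sqrt(m_mu * m_tau) / m_W * chi_taumu
    g_htt = g * m_t / (2 * m_W) * (c_a / s_b - c_ab / s_b * chi_tt)
    return g_htaumu, g_htt

print(couplings(t_beta=8.0, c_ab=0.01))
```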
We use experimental data in order to constrain the free parameters previously mentioned. The software used for this purpose is a Mathematica package called SpaceMath [43]. The physical observables used to constrain the free model parameters are listed below.
1. LHC Higgs boson data,
2. \(B_{s}^{0}\to\mu^{-}\mu^{+}\) decay [42],
3. Upper limits on the \(\mathcal{BR}(\tau\to 3\mu)\) and \(\mathcal{BR}(\tau\to\mu\gamma)\) decays [42],
4. Upper limit on the \(\mathcal{BR}(h\to\tau\mu)\) decay [44; 45],
5. The muon anomalous magnetic dipole moment \(\delta a_{\mu}\)[46],
6. Direct searches for additional heavy neutral \(\mathcal{CP}\)-even and \(\mathcal{CP}\)-odd scalars through \(gb\to\phi\to\tau\tau\) (\(\phi=H,\,A\)) process are used to constrain their masses [47; 48]. We denote them as \(m_{H}\), \(m_{A}\), respectively.
Figure 1: Feynman diagram of the dominant Higgs boson production at LHC with its subsequent FCSI decay into \(\tau\mu\) pair.
7. As far as the charged scalar mass \(m_{H^{\pm}}\) is concerned, the observables to constrain \(m_{H}^{\pm}\) are i) the upper limit on \(\sigma(pp\to tbH^{\pm})\times\mathcal{BR}(H^{\pm}\to\tau^{\pm}\nu)\)[49] and ii) the decay \(b\to s\gamma\)[50; 51; 52; 53; 54; 55].
#### iii.1.1 Constraint on \(c_{\alpha\beta}\) and \(t_{\beta}\)
We first analyze the impact of LHC Higgs boson data on \(c_{\alpha\beta}\) and \(t_{\beta}\). More precisely, we use the coupling modifiers \(\kappa\)-factors reported by ATLAS and CMS collaborations [56; 57]. In the context of 2HDM-III, this observable is defined as follows
\[\kappa_{pp}^{2}=\frac{\sigma(pp\to h^{\text{2HDM-III}})}{\sigma(pp\to h^{\text{ SM}})}\text{ or }\kappa_{x\bar{x}}^{2}=\frac{\Gamma(h^{\text{2HDM-III}}\to X)}{\Gamma(h^{\text{ SM}}\to X)}. \tag{9}\]
In Eq. (9), \(\Gamma(\phi_{i}\to X)\) is the decay width of \(\phi_{i}=h^{\text{2HDM-III}}\), \(h^{\text{SM}}\) into \(X=b\bar{b},\,\tau^{-}\tau^{+},\,ZZ,\,WW,\,\gamma\gamma,\,gg\). Here \(h^{\text{2HDM-III}}\) corresponds to the SM-like Higgs boson of the 2HDM-III and \(h^{\text{SM}}\) stands for the SM Higgs boson; \(\sigma(pp\to\phi_{i})\) is the Higgs boson production cross section in proton-proton collisions. Figure 3(a) shows colored areas corresponding to the regions allowed by each channel \(X\). As far as the observables \(h\to\tau\mu\), \(\delta a_{\mu}\), \(\tau\to 3\mu\), \(\tau\to\mu\gamma\) and \(B_{s}^{0}\to\mu^{-}\mu^{+}\) are concerned, we present in Figure 3(b) the area in accordance with the experimental bounds. Meanwhile, Figure 3(c) displays the region consistent with
Figure 2: The colored areas in the \(\chi_{tt}-t_{\beta}\) plane represent points that agree with the \(htt\) SM coupling, including uncertainties coming from the top quark mass.
all constraints. We also include the projections for the HL-LHC and HE-LHC for Higgs boson data [58] and for the decay \(B_{s}^{0}\to\mu^{+}\mu^{-}\) [45].
Once we consider all the observables, we find strong restrictions on the 2HDM-III parameter space in the \(c_{\alpha\beta}-t_{\beta}\) plane; in particular, we observe that \(c_{\alpha\beta}=0.05\) admits values of \(t_{\beta}\approx 2\), \(t_{\beta}\approx 8\), \(t_{\beta}\approx 10\) for the HE-LHC, HL-LHC and LHC, respectively. A special case, the alignment limit, _i.e._, \(c_{\alpha\beta}\approx 0\), allows \(t_{\beta}\approx 12\), 11, 8 for the LHC, HL-LHC and HE-LHC, respectively. A fact to highlight is that the 2HDM-III is able to accommodate the current discrepancy between the experimental measurement and the theoretical SM prediction of the muon anomalous magnetic dipole moment \(\delta a_{\mu}\). However, from Figure 3(b), we observe that the region allowed by \(\delta a_{\mu}\) lies outside the intersection of the additional observables. This happens for the choice of parameters shown in Table 2. We find that \(\delta a_{\mu}\) is sensitive to the flavor-changing parameter \(\chi_{\tau\mu}\), which was fixed to unity in order to obtain the best fit of the 2HDM-III parameter space. Under this choice, \(\delta a_{\mu}\) is explained with high values of \(t_{\beta}\).
#### iii.1.2 Constraint on \(m_{H}\) and \(m_{A}\)
The ATLAS and CMS collaborations reported results of a search for hypothetical neutral heavy scalar and pseudoscalar bosons in the ditau decay channel [47; 48]. The search was done through the process \(gb\to\phi\to\tau\tau\), where \(\phi=A,\,H\) (see Fig. 4). Nevertheless, no significant deviation was observed from the predicted SM background, but upper limits on the production cross section \(\sigma(gb\to\phi)\) times branching ratio \(\mathcal{BR}(\phi\to\tau\tau)\) were imposed. Figure 5(a) shows the \(\sigma(gb\to Hb)\times\mathcal{BR}(H\to\tau\tau)\) as a function of \(m_{H}\) for illustrative values of \(t_{\beta}=5\), 8, 40 and \(c_{\alpha\beta}=0.01\), while in Figure 5(b) we present the same process but as a function of \(m_{A}\) for \(t_{\beta}=8\), 30, 40. The red crosses and black points correspond to the observed and expected values at 95% C.L., respectively. The green (yellow) band represents the interval at \(\pm 1\sigma\) (\(\pm 2\sigma\)) according to the expected value. We implement the Feynman rules in CalcHEP[59] to evaluate such a process.
From Figure 5(a) we can notice that \(m_{H}\lesssim 500\) GeV (\(m_{H}\lesssim 700\) GeV) is excluded at \(1\sigma\) (\(2\sigma\)) when \(t_{\beta}=8\), while for \(t_{\beta}\lesssim 5\) the upper limit on \(\sigma(gb\to\phi b)\times\mathcal{BR}(\phi\to\tau\tau)\) is easily accommodated. Although \(t_{\beta}=40\) is ruled out (see Figure 3(c)), we include it to illustrate the consistency (in the high-mass regime) with the results reported in the previous section. As far as the pseudoscalar mass is concerned, from Figure 5(b), we find that \(m_{A}\lesssim 600\) GeV (\(m_{A}\lesssim 700\) GeV) is excluded at \(1\sigma\) (\(2\sigma\)) for \(t_{\beta}=8\).
#### iii.1.3 Constraint on the charged scalar mass \(m_{H^{\pm}}\)
The detection of a charged scalar \(H^{\pm}\) would represent a clear signature of new physics. Constraints were obtained from collider searches for \(H^{\pm}\) production and its subsequent decay into a \(\tau^{\pm}\nu_{\tau}\) pair [49]. More recently, the ATLAS collaboration reported a study of the charged Higgs boson produced either in top-quark decays or in association with a top quark, with the charged Higgs boson subsequently decaying via \(H^{\pm}\to\tau^{\pm}\nu_{\tau}\), at a center-of-mass energy of 13 TeV [49]. However, we find that such processes are not a good way to constrain the charged scalar mass \(m_{H^{\pm}}\) predicted in the 2HDM-III. Nevertheless, the situation is the opposite if one considers the decay \(b\to s\gamma\), which imposes severe lower limits on \(m_{H^{\pm}}\) due to the charged boson contribution [54; 55].
Figure 6 shows the ratio \(R_{quark}\) as a function of the charged scalar boson mass for \(t_{\beta}=2,5,10\). Here,
\(R_{quark}\) is given by:
\[R_{quark}=\frac{\Gamma(b\to X_{s}\gamma)}{\Gamma(b\to X_{c}e\nu_{e})}. \tag{10}\]
When \(t_{\beta}=2\), we notice that a charged scalar boson mass \(m_{H^{\pm}}\lesssim 100\,\mathrm{GeV}\) (\(m_{H^{\pm}}\lesssim 700\,\mathrm{GeV}\)) is ruled out at \(2\sigma\) (\(1\sigma\)). The intermediate value \(t_{\beta}=5\) excludes masses \(m_{H^{\pm}}\lesssim 800\,\mathrm{GeV}\) (\(m_{H^{\pm}}\lesssim 2\,\mathrm{TeV}\)). Meanwhile, \(t_{\beta}=10\) imposes a stringent lower bound \(1.6\,\mathrm{TeV}\lesssim m_{H^{\pm}}\) (\(3.2\,\mathrm{TeV}\lesssim m_{H^{\pm}}\)).
In summary, we present in Table 2 the values used in the subsequent calculations.
Figure 4: Feynman diagram of the production of \(\phi\) in association with a bottom quark at LHC, with a subsequent decay into \(\tau\tau\) pair.
Figure 5: The observed and expected at 95% CL upper limits on the production cross section times ditau branching ratio for a scalar boson produced via \(b\)-associated production as a function of (a) the \(\mathcal{CP}\)-even mass for \(t_{\beta}\) =5, 8, 40 and (b) the \(\mathcal{CP}\)-odd mass for \(t_{\beta}\) = 8, 30, 40. We take \(c_{\alpha\beta}=0.05\).
| Parameter | Value |
| --- | --- |
| \(c_{\alpha\beta}\) | 0.01 |
| \(t_{\beta}\) | 0.1-8 |
| \(\chi_{\tau\mu}\) | 1 |
| \(m_{H}=m_{A}\) | 800 GeV |

Table 2: Values of the parameters used in the calculations.
### Collider analysis for \(h\to\tau\mu\)
In this section we simulate the production of the SM-like Higgs boson \(h\) with its subsequent decay into a \(\tau\mu\) pair at the LHC, HL-LHC, HE-LHC and FCC-hh. Let us first show in Figure 7, the branching ratio of the \(h\to\tau\mu\) decay as a function of \(t_{\beta}\) for \(\chi_{\tau\mu}=0.1\), 0.5, 1 and \(c_{\alpha\beta}=0.01\). We also include the upper limit reported by ATLAS and CMS collaborations [15; 16]. We notice that for \(\chi_{\tau\mu}=1\), values of \(\tan\beta\leq 17\) are allowed, while \(\chi_{\tau\mu}=0.5\) allows \(\tan\beta\leq 35\), values of \(\tan\beta\leq 50\) are admissible when \(\chi_{\tau\mu}=0.1\).
We analyze the LFV Higgs signal for three scenarios corresponding to future hadron colliders
Figure 6: \(R_{quark}\) at NLO in QCD as a function of the charged scalar boson mass for \(t_{\beta}=\)2, 5, 10. The solid line represents the experimental central value, while the shaded areas indicate the theoretical SM prediction around it; the blue and red bands stand for \(1\sigma\) and \(2\sigma\), respectively. \(R_{quark}\) is defined in Eq. 10.
Figure 7: Branching ratio of the decay \(h\to\tau\mu\) as a function of \(t_{\beta}\) for \(\chi_{\tau\mu}=0.1\), 0.5, 1. The horizontal line represents the upper limit on \(\mathcal{BR}(h\to\tau\mu)\).
(FHC), namely: HL-LHC, HE-LHC and FCC-hh, with the following luminosities:
* **FHC1**: HL-LHC at a center-of-mass energy of 14 TeV and integrated luminosities 0.3 through 3 ab\({}^{-1}\),
* **FHC2**: HE-LHC at a center-of-mass energy of 27 TeV and integrated luminosities in the interval 0.3-12 ab\({}^{-1}\),
* **FHC3**: FCC-hh at a center-of-mass energy of 100 TeV and integrated luminosities from 10 to 30 ab\({}^{-1}\).
Having constrained the free model parameters as shown previously, we now turn to estimating the number of signal events produced at the three accelerator options.
Figure 8 shows the product \(\sigma(gg\to h)\mathcal{BR}(h\to\tau\mu)\) as a function of \(t_{\beta}\) (left axis) and the number of events (right axis) for **FHC1**, **FHC2** and **FHC3**. The dark area indicates the region allowed by all observables analyzed previously (see Table 2). Given the center-of-mass energies and integrated luminosities, we find that the maximum numbers of signal events produced are of the order \(\mathcal{N}_{\mathcal{S}}^{\textbf{FHC1}}=\mathcal{O}(10^{5})\), \(\mathcal{N}_{\mathcal{S}}^{\textbf{FHC2}}=\mathcal{O}(10^{6})\), \(\mathcal{N}_{\mathcal{S}}^{\textbf{FHC3}}=\mathcal{O}(10^{7})\). Here, we consider \(t_{\beta}=8\) and \(\chi_{\tau\mu}=1\).
Let us now analyze the signature of the decay \(h\to\tau\mu\), with \(\tau\mu=\tau^{-}\mu^{+}+\tau^{+}\mu^{-}\), and its potential SM background. The CMS and ATLAS collaborations [60; 61] searched for this process through two \(\tau\) decay channels: the electron channel \(\tau\to e\nu_{\tau}\nu_{e}\) and the hadronic channel \(\tau_{h}\mu\). In this work, we study the former. As far as our computational setup is concerned, we first implement the model in LanHEP [62] to export the UFO files required by MadGraph5 [63], which is then interfaced with Pythia8 [64] and Delphes 3 [65] to simulate the detector response. Subsequently, we generate signal and background events, the latter at NLO in QCD, using the NNPDF parton distribution functions [66].
### Signal and SM background processes
The signal and background processes are as follows:
* _SIGNAL_- We search for a final state coming from the process \(gg\to h\to\tau\mu\to e\nu_{\tau}\nu_{e}\mu\). This channel contains exactly two charged leptons, namely one electron (or positron) and one antimuon (or muon), plus missing energy from the undetected neutrinos.
* _BACKGROUND_- The potential SM background for the signal comes from: 1. the Drell-Yan process, followed by the decay \(Z\rightarrow\tau\tau\to e\nu_{\tau}\nu_{e}\mu\nu_{\tau}\nu_{\mu}\); 2. \(WW\) production with the subsequent decays \(W\to e\nu_{e}\) and \(W\rightarrow\mu\nu_{\mu}\); 3. \(ZZ\) production, with \(Z\rightarrow\tau\tau\to e\nu_{\tau}\nu_{e}\mu\nu_{\tau}\nu_{\mu}\) and \(Z\rightarrow\nu\nu\).
### Signal significance
The most restrictive kinematic cuts to separate the signal from the background processes are the collinear and transverse masses, \(m_{\rm col}\) and \(M_{T}^{\ell}\), respectively. They are defined as follows:
\[m_{\rm col}(e\,\mu)=\frac{m_{\rm inv}(e\,\mu)}{\sqrt{x}},\quad{\rm with}\quad x=\frac{|\vec{P}_{T}^{e}|}{|\vec{P}_{T}^{e}|+\vec{E}_{T}^{\rm miss}\cdot\hat{P}_{T}^{e}}, \tag{11}\]
and
\[M_{T}^{\ell}=\sqrt{2|\vec{P}_{T}^{\ell}||\vec{E}_{T}^{\rm miss }|(1-\cos\Delta\phi_{\vec{P}_{T}^{\ell}-\vec{E}_{T}^{\rm miss}})}. \tag{12}\]
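For concreteness, a minimal Python sketch of Eqs. (11)-(12) is given below; it evaluates \(m_{\rm col}\) and \(M_{T}^{\ell}\) for a single event. The numerical inputs are hypothetical, and the collinear approximation projects the missing transverse energy onto the electron direction, as in Eq. (11).

```python
import numpy as np

def four_vector(pt, eta, phi, mass=0.0):
    """Cartesian four-momentum (E, px, py, pz) from collider variables."""
    px, py = pt * np.cos(phi), pt * np.sin(phi)
    pz = pt * np.sinh(eta)
    e = np.sqrt(px**2 + py**2 + pz**2 + mass**2)
    return np.array([e, px, py, pz])

def inv_mass(p1, p2):
    p = p1 + p2
    return np.sqrt(max(p[0]**2 - p[1]**2 - p[2]**2 - p[3]**2, 0.0))

def collinear_mass(pt_e, eta_e, phi_e, pt_mu, eta_mu, phi_mu, met, met_phi):
    """m_col(e, mu) of Eq. (11): neutrinos assumed collinear with the electron."""
    m_vis = inv_mass(four_vector(pt_e, eta_e, phi_e),
                     four_vector(pt_mu, eta_mu, phi_mu))
    met_along_e = met * np.cos(met_phi - phi_e)   # MET component along the electron
    x = pt_e / (pt_e + met_along_e)
    return m_vis / np.sqrt(x)

def transverse_mass(pt_lep, phi_lep, met, met_phi):
    """M_T^ell of Eq. (12)."""
    return np.sqrt(2.0 * pt_lep * met * (1.0 - np.cos(phi_lep - met_phi)))

# Hypothetical event, purely for illustration (GeV, rad)
print(collinear_mass(45.0, 0.3, 0.1, 50.0, -0.5, 2.9, 30.0, 0.2))
print(transverse_mass(45.0, 0.1, 30.0, 0.2))
```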
We present in Figure 9 the distribution of the collinear mass of the signal events for the scenarios (a) **FHC1**, (b) **FHC2** and (c) **FHC3**. We consider integrated luminosities of 3, 12 and 30 ab\({}^{-1}\), respectively, and \(t_{\beta}=5, 8\). For the kinematic analysis of the processes, we use the package MadAnalysis5 [67]. Besides the collinear and transverse masses, we apply additional cuts to both signal and background [60; 61], as shown in Table 3 (scenario **FHC1**); the corresponding tables for scenarios **FHC1**-**FHC3** are available electronically in [68]. The numbers of signal (\(\mathcal{N}_{S}\)) and background (\(\mathcal{N}_{B}\)) events after the kinematic cuts are applied are also included. We consider the signal significance defined as \(\mathcal{N}_{S}/\sqrt{\mathcal{N}_{S}+\mathcal{N}_{B}}\); once the cuts are applied, the efficiencies for signal and background are \(\epsilon_{S}\approx 0.13\) and \(\epsilon_{B}\approx 0.014\), respectively. From this analysis we find that the LHC presents difficulties for an experimental scrutiny of this channel, achieving a statistical significance of only \(\sim 1.4\sigma\). However, more promising expectations arise at future stages of the LHC; namely, at the HL-LHC a prediction of about \(4.6\sigma\) for 3 ab\({}^{-1}\) and \(t_{\beta}=8\) is found. Meanwhile, the HE-LHC (FCC-hh) offers exceptional results: we find a signal significance around \(5.04\sigma\) (\(\sim 5.43\sigma\)) considering 9 ab\({}^{-1}\) and \(t_{\beta}=6\) (15 ab\({}^{-1}\) and \(t_{\beta}=3\)). Finally, we present in Fig. 10 the regions of the model parameters corresponding to potential evidence (\(3\sigma\)) or a possible discovery (\(5\sigma\)) for the three scenarios **FHC1**, **FHC2**, **FHC3**.
Figure 9: Distribution of the collinear mass for scenarios: (a) **FHC1**, (b) **FHC2** and (c) **FHC3**.
## IV One-loop calculation of \(h\to\ell_{a}\ell_{b}\) within 2HDM-I,II, with massive neutrinos
In the context of the 2HDM Type I and II, the interaction \(h\ell_{a}\ell_{b}\) with \(a\neq b\) is not allowed at tree level, because all fermions interact with only one Higgs doublet, as in the SM. However, once neutrino masses are considered, the LFV-HD are induced at one loop by means of charged currents. Here, we consider the Inverse SeeSaw (ISS) mechanism to describe the lightness of neutrino masses [69; 70; 71], which is a natural extension of the type I and II variants of the 2HDM to include neutrino masses. In both versions of the 2HDM considered here, the Dirac mass term for neutrinos arises from the Yukawa term proportional to \(\Phi_{2}\): all fermions interact with \(\Phi_{2}\) in type I, and up-type fermions (in this case, neutrinos) interact with \(\Phi_{2}\) in type II.
In this work, we assume a normal hierarchy for the light neutrino masses and collect the experimental values of the neutrino oscillation parameters in Table 4 [72].
In the normal hierarchy, we can rewrite the masses of \(\nu_{2,3}\) in terms of \(m_{1}\), the mass of the lightest neutrino \(\nu_{1}\), and the square mass differences as follows
\[m_{i}=\sqrt{m_{1}^{2}+\Delta m_{i1}^{2}},\quad i=2,3. \tag{13}\]
\begin{table}
\begin{tabular}{c c c} \hline \hline
Parameter & BFP \(\pm 1\sigma\) & \(3\sigma\) \\ \hline
\(\sin^{2}\left(\theta_{12}\right)\) & \(0.310^{+0.013}_{-0.012}\) & \(0.275-0.350\) \\ \hline
\(\sin^{2}\left(\theta_{23}\right)\) & \(0.582^{+0.015}_{-0.019}\) & \(0.428-0.624\) \\ \hline
\(\sin^{2}\left(\theta_{13}\right)\) & \(0.02240^{+0.00065}_{-0.00066}\) & \(0.02044-0.02437\) \\ \hline
\(\Delta m_{21}^{2}/(10^{-5}\,\mathrm{eV}^{2})\) & \(7.39^{+0.21}_{-0.20}\) & \(6.79-8.01\) \\ \hline
\(\Delta m_{31}^{2}/(10^{-3}\,\mathrm{eV}^{2})\) & \(2.525^{+0.033}_{-0.031}\) & \(2.431-2.622\) \\ \hline
\end{tabular}
\end{table}
Table 4: Neutrino oscillation data for a normal hierarchy. The second column shows the best-fit point (BFP) with its \(1\sigma\) range; the third column shows the \(3\sigma\) range.
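As a quick numerical illustration of Eq. (13), the following snippet evaluates the light-neutrino spectrum from the best-fit values of Table 4, taking the lightest mass equal to the value \(m_{n_{1}}=10^{-12}\) GeV used later in the text.

```python
import numpy as np

# Best-fit squared mass differences from Table 4 (normal hierarchy)
dm21_sq = 7.39e-5   # eV^2
dm31_sq = 2.525e-3  # eV^2

m1 = 1e-3           # eV, i.e. 10^{-12} GeV, the lightest-neutrino mass used in the text

# Eq. (13): m_i = sqrt(m_1^2 + Delta m_{i1}^2), i = 2, 3
m2 = np.sqrt(m1**2 + dm21_sq)
m3 = np.sqrt(m1**2 + dm31_sq)

print(f"m1 = {m1:.4e} eV, m2 = {m2:.4e} eV, m3 = {m3:.4e} eV")
```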
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline Cut number & Cut & \(\mathcal{N}_{S}\) & \(\mathcal{N}_{B}\) & \(\mathcal{N}_{S}/\sqrt{\mathcal{N}_{S}+\mathcal{N}_{B}}\) \\ \hline \hline & Initial (no cuts) & \(5.77\times 10^{4}\) & \(2.00\times 10^{8}\) & 4.10 \\ \hline
1 & \(|\eta^{e}|<2.3\) & \(2.53\times 10^{4}\) & \(1.30\times 10^{8}\) & 2.20 \\ \hline
2 & \(|\eta^{\mu}|<2.1\) & \(1.64\times 10^{4}\) & \(1.07\times 10^{8}\) & 1.59 \\ \hline
3 & \(0.1<\Delta R(e,\,\mu)\) & \(1.63\times 10^{4}\) & \(1.06\times 10^{8}\) & 1.59 \\ \hline
4 & \(10<p_{T}(e)\) & \(1.55\times 10^{4}\) & \(3.90\times 10^{7}\) & 2.49 \\ \hline
5 & \(20<p_{T}(\mu)\) & \(1.21\times 10^{4}\) & \(2.04\times 10^{7}\) & 2.69 \\ \hline
6 & \(10<\) MET & \(1.12\times 10^{4}\) & \(2.01\times 10^{7}\) & 2.50 \\ \hline
7 & \(100<m_{\mathrm{col}}(e,\,\mu)<150\) & \(9.65\times 10^{3}\) & \(9.33\times 10^{6}\) & 3.16 \\ \hline
8 & \(25<M_{T}(e)\) & \(8.67\times 10^{3}\) & \(4.83\times 10^{6}\) & 3.95 \\ \hline
9 & \(15<M_{T}(\mu)\) & \(7.87\times 10^{3}\) & \(2.87\times 10^{6}\) & 4.65 \\ \hline \end{tabular}
\end{table}
Table 3: Kinematic cuts applied to the signal and main SM background for scenario **FHC1**.
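For reference, the significance column and the cut efficiencies quoted above can be reproduced directly from the event counts listed in Table 3; a short Python sketch (not part of the original analysis chain) is given below.

```python
import numpy as np

# Cut flow of Table 3 (scenario FHC1): (label, N_S, N_B) after each cut
cutflow = [
    ("initial",               5.77e4, 2.00e8),
    ("|eta(e)|  < 2.3",       2.53e4, 1.30e8),
    ("|eta(mu)| < 2.1",       1.64e4, 1.07e8),
    ("DeltaR(e,mu) > 0.1",    1.63e4, 1.06e8),
    ("pT(e)  > 10",           1.55e4, 3.90e7),
    ("pT(mu) > 20",           1.21e4, 2.04e7),
    ("MET > 10",              1.12e4, 2.01e7),
    ("100 < m_col < 150",     9.65e3, 9.33e6),
    ("MT(e)  > 25",           8.67e3, 4.83e6),
    ("MT(mu) > 15",           7.87e3, 2.87e6),
]

for label, ns, nb in cutflow:
    significance = ns / np.sqrt(ns + nb)
    print(f"{label:<20s}  S/sqrt(S+B) = {significance:4.2f}")

# Overall cut efficiencies quoted in the text
ns0, nb0 = cutflow[0][1], cutflow[0][2]
ns9, nb9 = cutflow[-1][1], cutflow[-1][2]
print(f"eps_S = {ns9 / ns0:.3f}, eps_B = {nb9 / nb0:.4f}")
```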
Figure 10: Signal significance as a function of the integrated luminosity, the \(\chi_{\tau\mu}\) parameter of LFV coupling \(h\tau\mu\) and \(t_{\beta}\). (a), (b), (d): \(3\sigma\) and (c), (e): \(5\sigma\).
### The model \(\nu\)2HDM-I, II
In the Yukawa sector, we add three pairs of fermionic singlets (\(\nu_{R}\), \(X\)) to the SM [27]. The right-handed neutrinos \(\nu_{R}\) are singlet partners of the SM \(\nu_{L}\) with lepton number \(L=-1\), while the extra singlet \(X\) has lepton number \(L=1\). With these additional fermions, we can write the Lagrangian
\[\mathcal{L}^{\nu}=-Y_{ij}^{2,\nu}\overline{L_{i}}\bar{\Phi}_{2} \nu_{Rj}-M_{ij}^{\nu}\overline{\nu_{Ri}^{C}}X_{j}-\frac{1}{2}\mu_{Xij} \overline{X_{i}^{C}}X_{j}+\text{H. c.}, \tag{14}\]
where \(i,j=1,2,3\), \(Y_{2}^{\nu}\) is the \(3\times 3\) neutrino Yukawa matrix associated with the \(\Phi_{2}\) Higgs doublet, \(M_{R}\) (the matrix \(M^{\nu}\) in Eq. (14)) is a lepton-number-conserving complex \(3\times 3\) matrix, and \(\mu_{X}\) is a complex symmetric \(3\times 3\) Majorana mass matrix that violates lepton number by two units. Hence, the \(9\times 9\) neutrino mass matrix after electroweak symmetry breaking, in the \((\nu_{L},\nu_{Ri}^{C},X_{i}^{C})\) basis, is given by
\[M_{\text{ISS}}^{\nu}=\begin{pmatrix}0&m_{D}^{2}&0\\ m_{D}^{2,\top}&0&M_{R}\\ 0&M_{R}^{\top}&\mu_{X}\end{pmatrix}, \tag{15}\]
where \(m_{D}^{2}=\frac{v_{2}}{\sqrt{2}}Y^{2,\nu}\) is the Dirac mass matrix. Since \(M_{\text{ISS}}^{\nu}\) is symmetric, it can be diagonalized by a \(9\times 9\) unitary matrix \(U_{\nu}\) given by
\[U_{\nu}^{T}M_{\text{ISS}}^{\nu}U_{\nu}=\text{diag}\left(m_{n_{1}},\ldots,m_{n_{9}}\right), \tag{16}\]
where \(m_{n_{i}}\) are the masses of the nine physical Majorana neutrinos \(n_{i}\) and are associated with the electroweak basis by the rotation
\[\begin{pmatrix}\nu_{L}^{C}\\ \nu_{R}\\ X\end{pmatrix}=U_{\nu}P_{R}\begin{pmatrix}n_{1}\\ \vdots\\ n_{9}\end{pmatrix},\quad\begin{pmatrix}\nu_{L}\\ \nu_{R}^{C}\\ X^{C}\end{pmatrix}=U_{\nu}^{*}P_{L}\begin{pmatrix}n_{1}\\ \vdots\\ n_{9}\end{pmatrix}. \tag{17}\]
As a consequence of neutrino masses, and following the notation of Aoki et al. [73] and Diaz [8], the Yukawa Lagrangian in the leptonic sector is given by
\[\mathcal{L}^{2HDM}_{\text{Yuk,ISS}} =-\sum_{\ell}\Big{(}\frac{m_{\ell}}{v}c_{h}^{t}\overline{\ell} \ell h+\frac{m_{\ell}}{v}c_{H}^{t}\overline{\ell}\ell H-i\frac{m_{\ell}}{v}e_ {A}^{\ell}\overline{\ell}\gamma_{5}\ell A\Big{)}\] \[\quad-\frac{1}{v}\sum_{ij}^{9}\big{(}c_{h}^{n}\overline{n_{i}}( \omega_{ij}P_{R}+\omega_{ij}^{*}P_{L})n_{j}h+c_{H}^{n}\overline{n_{i}}(\omega _{ij}P_{R}+\omega_{ij}^{*}P_{L})n_{j}H-ie_{A}^{n}\overline{n_{i}}(\omega_{ij}P_ {R}-\omega_{ij}^{*}P_{L})n_{j}A\big{)}\] \[\quad-\left(\frac{\sqrt{2}U_{n\ell}}{v}\overline{n}(m_{n}e_{A}^{ n}P_{L}+m_{\ell}e_{A}^{\ell}P_{R})\ell H^{+}+\text{H.c.}\right)\] \[\quad-\left(\frac{\sqrt{2}U_{n\ell}}{v}\overline{n}(m_{n}P_{L}+m_ {\ell}P_{R})\ell G^{+}+\text{H.c.}\right). \tag{18}\]
where \(P_{L/R}\) are the projection operators for left-/right-handed fermions. The second row of equation (18) was calculated in analogy with [26] for the \(\nu\)SM model. Also, \(\omega_{ij}=m_{n_{j}}C_{ij}+m_{n_{i}}C_{ij}^{*}\) and \(C_{ij}=\sum_{c=1}^{3}U_{ci}^{\nu}U_{cj}^{\nu*}\). Finally, the \(c_{\phi^{0}}^{f}\) (\(\phi^{0}=h,H\)) and \(e_{A}^{f}\) factors for charged leptons and neutrinos are given in Table 5. The neutral currents in the Lagrangian (18) can be rewritten following the notation of equation (3). For charged leptons, only the following \(\mathcal{CP}\)-conserving and \(\mathcal{CP}\)-violating factors are non-zero,
\[S^{\ell,\phi^{0}}=\frac{gm_{\ell}}{2m_{W}}c^{\ell}_{\phi^{0}};\qquad P^{\ell,A}= -\frac{gm_{\ell}}{2m_{W}}e^{\ell}_{A}\delta. \tag{19}\]
In the case of neutrinos, as a consequence of the ISS, both \(\mathcal{CP}\)-conserving and \(\mathcal{CP}\)-violating factors appear,
\[S^{n,\phi}_{ij}=\frac{1}{2v}c^{n}_{\phi}(\omega_{ij}+\omega^{*}_{ij});\qquad P^{ n,\phi}_{ij}=-\frac{i}{2v}c^{n}_{\phi}(\omega_{ij}-\omega^{*}_{ij}), \tag{20}\]
\[S^{n,A}_{ij}=-\frac{1}{2v}e^{n}_{A}(\omega_{ij}-\omega^{*}_{ij});\qquad P^{n,A} _{ij}=\frac{i}{2v}e^{n}_{A}(\omega_{ij}+\omega^{*}_{ij}). \tag{21}\]
As a result, the interaction of neutral scalars and leptons is given by
\[\mathcal{L}_{\phi ff} = -\sum_{\ell}\left(S^{\ell,h}\overline{\ell}\ell h+S^{\ell,H} \overline{\ell}\ell H+iP^{\ell,A}\overline{\ell}\gamma_{5}\ell A\right) \tag{22}\] \[- \sum_{ij}^{9}\left(\overline{n_{i}}(S^{n,h}_{ij}+i\gamma_{5}P^{n, h}_{ij})n_{j}h+\overline{n_{i}}(S^{n,H}_{ij}+i\gamma_{5}P^{n,H}_{ij})n_{j}H+i \overline{n_{i}}(S^{n,A}_{ij}+i\gamma_{5}P^{n,A}_{ij})n_{j}A\right). \tag{23}\]
The LFV-HD are allowed at one loop by means of charged currents mediated by \(G^{\pm}\), \(W^{\pm}\) and \(H^{\pm}\). Detailed formulae for the LFV-HD at one loop were presented in our previous paper [74], with the decay width written in terms of the form factors \(A_{L,R}\), namely:
\[\Gamma(h\to\ell_{a}\ell_{b}) \equiv \Gamma(h\to\ell_{a}^{+}\ell_{b}^{-})+\Gamma(h\to\ell_{a}^{-}\ell_{b}^{+}) \tag{24}\] \[= \frac{1}{8\pi m_{h}}\left[1-\left(\frac{m_{a}+m_{b}}{m_{h}}\right)^{2}\right]^{1/2}\left[1-\left(\frac{m_{a}-m_{b}}{m_{h}}\right)^{2}\right]^{1/2}\] \[\times \left[(m_{h}^{2}-m_{a}^{2}-m_{b}^{2})(|A_{L}|^{2}+|A_{R}|^{2})-4m_{a}m_{b}\,\text{Re}(A_{L}A_{R}^{*})\right].\]
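A direct transcription of Eq. (24) into Python is shown below; the form-factor values passed in the example call are hypothetical placeholders, since the actual \(A_{L,R}\) follow from the one-loop expressions of Appendix A.

```python
import numpy as np

def gamma_h_to_lalb(mh, ma, mb, AL, AR):
    """Decay width of Eq. (24) in terms of the form factors A_L, A_R (masses in GeV)."""
    kin = np.sqrt(1.0 - ((ma + mb) / mh) ** 2) * np.sqrt(1.0 - ((ma - mb) / mh) ** 2)
    amp2 = (mh**2 - ma**2 - mb**2) * (abs(AL) ** 2 + abs(AR) ** 2) \
           - 4.0 * ma * mb * np.real(AL * np.conj(AR))
    return kin * amp2 / (8.0 * np.pi * mh)

# m_h = 125 GeV, m_tau = 1.777 GeV, m_mu = 0.106 GeV; hypothetical form factors
print(gamma_h_to_lalb(125.0, 1.777, 0.106, 1e-6 + 2e-7j, 5e-7 - 1e-7j))
```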
The general topologies of the Feynman graphs that contribute to the amplitude are shown in Figure 11, where S, F and V denote scalars, fermions and vectors, respectively. In the special case of the \(\nu\)2HDM-I,II, 18 Feynman graphs contribute to the LFV-HD at one loop, summarized in Table 7, and the associated couplings are given in Table 6. We work in the Feynman-'t Hooft gauge and use dimensional regularization to handle the divergences. The analytical expressions for the form factors of each diagram are presented in Appendix A. The result for the form factors is finite, _i.e._, free of divergences, as it should be given that the LFV Higgs couplings do not exist at tree level.
### Numerical analysis and results
The ISS mechanism explains the smallness of neutrino masses, assuming the hierarchy \(|\mu_{X}|\ll|M_{D}|\ll|M_{R}|\), by means of
\[\hat{m}_{\nu}\approx M_{D}^{2}M_{R}^{\top-1}\mu_{X}M_{R}^{-1}M_{D}^{2,\top}, \tag{25}\]
where \(\mathcal{M}=M_{R}^{\top-1}\mu_{X}M_{R}^{-1}\). On the one hand, the mass matrix for the light neutrinos (25) is proportional to \(\mu_{X}\) and inversely proportional to \(M_{R}\); on the other hand, (25) has the usual form of the type-I SeeSaw (SS) mechanism. Working in the flavor basis, where the Yukawa matrix for charged leptons \(Y^{\ell}\), the mass matrices \(M_{R}\) and \(\mu_{X}\), and the gauge interactions are diagonal, the Dirac mass matrix \(M_{D}^{2}\) can be rewritten in terms of the 9 neutrino masses, the leptonic mixing matrix \(U_{\rm PMNS}\) and a \(3\times 3\) orthogonal matrix \(Q\) as follows [75]:
\[M_{D}^{2}=\left(\widehat{\mathcal{M}}\right)^{1/2}Q\left(\widehat{m}_{\nu} \right)^{1/2}U_{\rm PMNS}^{\dagger}, \tag{26}\]
where \(\widehat{m}_{\nu}={\rm diag}(m_{n_{1}},m_{n_{2}},m_{n_{3}})\) and \(\widehat{\mathcal{M}}={\rm diag}(m_{n_{4}},\ldots,m_{n_{9}})\). Although in general the matrix \(Q\) depends on three complex angles, in this work we choose \(Q\) as the identity matrix; we also assume that \(M_{R}={\rm diag}(M_{R1},M_{R2},M_{R3})\) and \(\mu_{X}={\rm diag}(\mu_{X1},\mu_{X2},\mu_{X3})\). As a result, \(M_{D}^{2}\) depends only on the neutrino masses and the mixing matrix \(U_{\rm PMNS}\). The latter can be computed using the BFP values from Table 4. The 9 neutrino masses are reduced to 6 free parameters by means of equation (13) and by fixing \(m_{n_{1}}=10^{-12}\) GeV. To simplify the numerical analysis we take degenerate values \(M_{R1}=M_{R2}=M_{R3}=M_{R}\) and \(\mu_{X1}=\mu_{X2}=\mu_{X3}=\mu_{X}\).
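To make Eq. (26) concrete, the following numpy sketch assembles \(M_{D}^{2}\) from the best-fit angles of Table 4 (with the CP phase set to zero), the light masses of Eq. (13), and \(Q=\mathbb{1}\). The heavy-sector matrix \(\widehat{\mathcal{M}}\) is replaced here by a degenerate placeholder of order \(M_{R}^{2}/\mu_{X}\); this normalization, the chosen values of \(M_{R}\) and \(\mu_{X}\), and the neglected phases are assumptions of the sketch rather than results of the text.

```python
import numpy as np

def pmns_matrix(s12_sq, s23_sq, s13_sq):
    """Standard-parametrization PMNS matrix with the CP phase set to zero."""
    s12, s23, s13 = np.sqrt([s12_sq, s23_sq, s13_sq])
    c12, c23, c13 = np.sqrt([1 - s12_sq, 1 - s23_sq, 1 - s13_sq])
    return np.array([
        [ c12 * c13,                    s12 * c13,                   s13],
        [-s12 * c23 - c12 * s23 * s13,  c12 * c23 - s12 * s23 * s13, s23 * c13],
        [ s12 * s23 - c12 * c23 * s13, -c12 * s23 - s12 * c23 * s13, c23 * c13],
    ])

def dirac_matrix(M_hat, m_hat, U, Q=np.eye(3)):
    """Eq. (26): M_D^2 = M_hat^{1/2} Q m_hat^{1/2} U_PMNS^dagger (diagonal M_hat, m_hat)."""
    return np.sqrt(M_hat) @ Q @ np.sqrt(m_hat) @ U.conj().T

# Light-neutrino masses (GeV) from Eq. (13), with m_{n_1} = 1e-12 GeV as in the text
m1 = 1e-12
m_hat = np.diag([m1,
                 np.sqrt(m1**2 + 7.39e-5 * 1e-18),
                 np.sqrt(m1**2 + 2.525e-3 * 1e-18)])

U = pmns_matrix(0.310, 0.582, 0.02240)            # best-fit angles of Table 4

# Placeholder for the heavy-sector matrix: degenerate, of order M_R^2 / mu_X
# (M_R = 1 TeV, mu_X = 1e-7 GeV); this identification is an assumption of the sketch.
M_hat = (1e3**2 / 1e-7) * np.eye(3)

print(dirac_matrix(M_hat, m_hat, U))              # entries of order tens of GeV
```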
Figure 11: Feynman diagrams that contribute at one-loop level to the \(h\to\ell_{a}\ell_{b}\) decay in the 2HDM-I and II [74]. Here \(F,S,V\) represent fermion, scalar and vector contributions, respectively.
### Constraints on the THDM-I and THDM-II
We now turn to analyze the THDM-I and THDM-II parameter space:
* **Signal strength modifiers:** We consider the signal strength modifiers \(\mathcal{R}_{X}\) to constrain the parameter space. For a production process \(\sigma(pp\to\phi_{i})\) and the decay \(\phi_{i}\to X\), \(\mathcal{R}_{X}\) is defined as follows \[\mathcal{R}_{X}=\frac{\sigma(pp\to h)\mathcal{BR}(h\to X)}{\sigma(pp\to h^{\rm SM})\mathcal{BR}(h^{\rm SM}\to X)},\] (27) with \(\phi_{i}=h,\,h^{\rm SM}\), where \(h\) stands for the SM-like Higgs boson coming from an extension of the SM and \(h^{\rm SM}\) is the SM Higgs boson; \(\mathcal{BR}(\phi_{i}\to X)\) is the rate of the \(\phi_{i}\to X\) decay, with \(X=b\bar{b}\), \(\tau^{-}\tau^{+}\), \(\mu^{-}\mu^{+}\), \(W^{+}W^{-}\), \(ZZ\), \(\gamma\gamma\). Figure 12 shows the \(\cos(\alpha-\beta)-\tan\beta\) plane for the case of 12(a) the THDM-I and 12(b) the THDM-II.
* **Perturbativity:** This bound applies to the Yukawa matrix and requires \(|Y_{ij}^{2,\nu}|^{2}<6\pi\) [26; 27]. If we include the factor \((v_{2}/\sqrt{2})^{2}\), we obtain a bound on the Dirac matrix, given by: \[|(M_{D}^{2})_{ij}|^{2}<3\pi v_{2}^{2}=3\pi v^{2}\sin(\beta)^{2}.\] (28) In the framework considered here, (26) depends on \(M_{R}\), \(\mu_{X}\) and \(\tan\beta\); the perturbativity bound (28) therefore constrains these parameters, as shown in Figure 13 (see also the short numerical check after this list).
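A short check of the bound (28), with a hypothetical entry of the Dirac matrix, might look as follows; it simply shows that the bound becomes restrictive at small \(\tan\beta\), in line with the message of Figure 13.

```python
import numpy as np

def passes_perturbativity(MD2_entry, tan_beta, v=246.0):
    """Check the bound of Eq. (28): |(M_D^2)_{ij}|^2 < 3*pi*v^2*sin^2(beta)."""
    sin_beta_sq = tan_beta**2 / (1.0 + tan_beta**2)
    return abs(MD2_entry) ** 2 < 3.0 * np.pi * v**2 * sin_beta_sq

# Hypothetical Dirac-matrix entry (GeV), purely to illustrate the check
print(passes_perturbativity(MD2_entry=300.0, tan_beta=1.0))   # bound satisfied
print(passes_perturbativity(MD2_entry=300.0, tan_beta=0.3))   # bound violated
```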
### Events for the \(h\to\mu\tau\) decay at future hadron colliders
For the numerical evaluation, we observe that \(\mathcal{BR}(h\to\ell_{a}\ell_{b})\) depends on the scales of the neutrino mass matrices \(M_{R}\) and \(\mu_{X}\), the parameters \(\tan\beta\) and \(\cos(\beta-\alpha)\), the masses \(m_{H^{\pm}}\), \(m_{A}\) and \(\lambda_{5}\). From the signal strength
Figure 12: The \(c_{\beta\alpha}-t_{\beta}\) plane showing the areas allowed by the LHC Higgs boson data. The green (blue) area stands for the 68% (95%) confidence level.
modifiers, we can observe that the parameter space is in agreement with the alignment limit \(\beta-\alpha=\frac{\pi}{2}\), or \(c_{\beta\alpha}=0\). However, in this work we assume a quasi-alignment value of \(c_{\beta\alpha}=0.01\). Also, we use \(M_{R}=1\) TeV, \(m_{A}=587\) GeV, and \(\lambda_{5}=2.167\).
The branching ratio \(\mathcal{BR}(h\to\mu\tau)\) is shown in Figure 14 as a function of \(\tan\beta\) and \(m_{H^{\pm}}\) for the type I and II \(\nu\)2HDM, considering two cases, \(\mu_{X}=10^{-4},10^{-7}\). In Figure 13, for the case of \(\mu_{X}=10^{-4}\) (red dashed line) the perturbativity bound does not constrain the values of \(t_{\beta}\); however, for the case of \(\mu_{X}=10^{-7}\) (blue dashed line), the allowed values are \(t_{\beta}\gtrsim 5\times 10^{-1}\). On the one hand, in Figure 14(a), we observe that
Figure 13: Perturbativity bound on \((M_{D}^{2})_{23}\) in the \(\nu\)2HDM-I,II. The blue region represents the space allowed by perturbativity, see Eq. (28).
\(\mathcal{BR}(h\to\mu\tau)\) is sensitive to the scale of \(m_{H^{\pm}}\) for type II, in contrast to type I, where the dependence on \(m_{H^{\pm}}\) disappears. Only the couplings \(hH^{\pm}G^{\mp}\) and \(hH^{\pm}H^{\mp}\) contain an explicit dependence on \(m_{H^{\pm}}\), see Table 6, which appears in diagrams 14, 15 and 16 of Table 7. However, these contributions are not dominant, and as a consequence of the weak dependence on \(m_{H^{\pm}}\), the difference between type I and II comes from the values of \(\xi_{h}^{\ell}\) and \(\xi_{A}^{\ell}\). On the other hand, in Figure 14(b) we observe a dependence on \(t_{\beta}\) in both versions of the \(\nu\)2HDM. In type I, \(\mathcal{BR}(h\to\mu\tau)\) is small for \(t_{\beta}\) close to 1 and increases with \(t_{\beta}\) up to \(t_{\beta}\approx 5\), where it reaches a constant value. Conversely, for type II the behavior is reversed: when \(t_{\beta}\) is close to 1 we observe a constant value of \(\mathcal{BR}(h\to\mu\tau)\), and for values near \(t_{\beta}=10^{2}\) the \(\mathcal{BR}(h\to\mu\tau)\) decreases. Although in both models the neutrinos couple to the same Higgs doublet (\(\Phi_{2}\)), the charged leptons do not. The shape of \(\mathcal{BR}(h\to\mu\tau)\) in the type I and II models shown in Figure 14 thus rests on the values of \(\xi_{h}^{\ell}\) and \(\xi_{A}^{\ell}\). Lastly, in both models \(\mathcal{BR}(h\to\mu\tau)\) shows a dependence on the \(\mu_{X}\) scale, as a consequence of the Casas-Ibarra-like parametrization (26).
Similarly to Section III.2, we show the Events-\(t_{\beta}\) plane in Figure 15, considering the production cross section \(\sigma(pp\to h\to\tau\mu)=\sigma(gg\to h)\mathcal{BR}(h\to\tau\mu)\) for scenarios **FHC1**, **FHC2** and **FHC3**. The dark area indicates the region allowed by all observables analyzed previously (see Table 2). Given the center-of-mass energies and integrated luminosities, we find that the maximum numbers of signal events produced are of the order \(\mathcal{N}_{\mathcal{S}}^{\bf{FHC1}}=\mathcal{O}(10^{2})\), \(\mathcal{N}_{\mathcal{S}}^{\bf{FHC2}}=\mathcal{O}(10^{3})\), \(\mathcal{N}_{\mathcal{S}}^{\bf{FHC3}}=\mathcal{O}(10^{4})\), for the \(\nu\)2HDM-II and \(\mu_{X}=10^{-7}\) GeV.
## V Conclusions
We have presented a general discussion of the search for the LFV Higgs decays at LHC, including several versions of the 2HDM, namely: the texturized version (2HDM-Tx), which is a particular version of the general model 2HDM-III, as well as the 2HDM with neutrino masses based on the inverse see-saw mechanism (\(\nu\)2HDM). For the 2HDM-Tx we started by deriving the constraints on the model parameters, and the lessons that LHC can give on the model structure. This was followed by the presentation of our study for the limits on \(\mathcal{BR}(h\to\tau\mu)\) that can be achieved at LHC, as a function of the integrated luminosities that are projected for the future phases of the LHC. Then, we also discuss the expected rates for the LFV decay \(h\to\ell_{a}\ell_{b}\)
within the \(\nu\)2HDM, where such a coupling is induced at the one-loop level, with some technical details left for Appendix A. These results are summarized in Figure 16. There we can see that the current limits from ATLAS and CMS on \(\mathcal{BR}(h\to\tau\mu)\) are getting quite close to the value \(10^{-3}\). Thus, values of the parameter \(\chi_{\tau\mu}\) of order 1 (0.5) are excluded for values of \(t_{\beta}\) larger than about 20 (35). However, we can also see that for \(t_{\beta}=8\) and \(c_{\alpha\beta}=0.01\), values of \(\chi_{\tau\mu}=0.5\) imply values of \(\mathcal{BR}(h\to\tau\mu)\) of about \(10^{-4}\), which are still consistent with current limits but could be ruled out in searches at future electron-positron colliders.
On the other hand, for the loop-induced LFV Higgs decay, we find that \(\mathcal{BR}(h\to\tau\mu)\) is about \(10^{-7}\) for the version \(\nu\)2HDM-I, and \(10^{-13}\) for the version \(\nu\)2HDM-II, which are still far from the experimental reach of LHC and future colliders.
Thus, we have learned from LHC that models with large tree-level LFV Higgs couplings, as arising in the 2HDM-III, are being constrained by current data, but even the model version with textures (2HDM-Tx) has regions of parameters that pass current constraints. The coming phases of LHC with higher luminosity will be able to probe models where such LFV Higgs couplings arise at tree-level, but include a suppression mechanism, as in MFV or BGL models [76]. Finally, we conclude that models where such couplings arise at loop-level, as in the MSSM or 2HDM with neutrino masses, seem to be out of the reach of LHC14.
**Acknowledgments**
The work of Marco A. Arroyo-Urena is supported by "Estancias posdoctorales por Mexico (CONAHCYT)" and "Sistema Nacional de Investigadores" (SNI-CONAHCYT). JLD-C and O.F.B. acknowledge the support
Figure 16: Comparison between experimental results (and projections) and our predictions for the \(\mathcal{BR}(h\to\tau\mu)\). We have considered \(t_{\beta}=8\) and \(c_{\beta\alpha}=0.01\) in the numerical evaluation.
Figure 15: Events-\(t_{\beta}\) plane for (a) scenario **FHC1**, (b) scenario **FHC2**, (c) scenario **FHC3**. The dark area corresponds to the allowed region. See Table 2.
of SNI-CONAHCYT and VIEP (BUAP). M. Zeleny thanks Conacyt for the doctoral grant.
## Appendix A Scalar couplings for LFV-HD and form factors
We present the couplings of the SM-like Higgs boson and the charged currents in Table 6. In this case we use the chiral notation in agreement with [74], with the following definitions:
\[\Xi_{h} =\sin(\beta-\alpha), \eta_{h} =\cos(\beta-\alpha),\] \[\mathcal{K}_{h} =4m_{A}^{2}-3m_{h}^{2}-2m_{H^{\pm}}^{2}, \mathcal{Q}_{h} =m_{h}^{2}-2m_{H^{\pm}}^{2}, \tag{11}\] \[\rho_{h} =\cos(\alpha+\beta), \Delta_{h} =\cos(\alpha-3\beta).\]
In the special case of the \(\nu\)2HDM-I,II, 18 Feynman graphs contribute to the LFV-HD at one loop, summarized in Table 7, following the topologies shown in Figure 11.
The associated form factors are as follows,
\[\Delta_{ij}^{ab}=\frac{g^{3}}{64\pi^{2}m_{W}^{3}}U_{bj}^{\nu}U_{ai}^{\nu*}. \tag{12}\]
\[A_{L}^{(1)}(G\overline{n}_{i}n_{j}) =m_{\ell_{a}}c_{\phi}^{n}\sum_{i,j=1}^{K+3}\left[\left(\left(\text {B}_{0}^{(12)}\,+m_{W}^{2}\,\text{C}_{0}\right)m_{n_{j}}^{2}-\left(m_{\ell_{a} }^{2}m_{n_{j}}^{2}+m_{\ell_{b}}^{2}m_{n_{i}}^{2}-2m_{n_{i}}^{2}m_{n_{j}}^{2} \right)\text{C}_{1}\right)C_{ij}\right.\] \[+\left(\text{B}_{0}^{(12)}\,+m_{W}^{2}\,\text{C}_{0}-\left(m_{ \ell_{a}}^{2}+m_{\ell_{b}}^{2}-m_{n_{i}}^{2}-m_{n_{j}}^{2}\right)\text{C}_{1} \right)C^{*}{}_{ij}m_{n_{i}}m_{n_{j}}\Big{]}\,\Delta_{ij}^{ab},\] \[A_{R}^{(1)}(G\overline{n}_{i}n_{j}) =m_{\ell_{b}}c_{\phi}^{n}\sum_{i,j=1}^{K+3}\left[\left(\left( \text{B}_{0}^{(12)}\,+m_{W}^{2}\,\text{C}_{0}\right)m_{n_{i}}^{2}+\left(m_{ \ell_{a}}^{2}m_{n_{j}}^{2}+m_{\ell_{b}}^{2}m_{n_{i}}^{2}-2m_{n_{i}}^{2}m_{n_{j }}^{2}\right)\text{C}_{2}\right)C_{ij}\right.\] \[\left.+\left(\text{B}_{0}^{(12)}\,+m_{W}^{2}\,\text{C}_{0}+\left( m_{\ell_{a}}^{2}+m_{\ell_{b}}^{2}-m_{n_{i}}^{2}-m_{n_{j}}^{2}\right)\text{C}_{2} \right)C^{*}{}_{ij}m_{n_{i}}m_{n_{j}}\right]\Delta_{ij}^{ab}. \tag{13}\]
\begin{table}
\begin{tabular}{c c c c} \hline \hline
**Vertex** & **Coupling** & **Vertex** & **Coupling** \\ \hline \hline \(hW^{+}W^{-}\) & \(igm_{W}\Xi_{h}\) & \(hG^{+}G^{-}\) & \(-i\frac{gm_{h}^{2}\Xi_{h}}{2m_{W}}\) \\ \hline \(hG^{+}W^{-}\) & \(i\frac{g\Xi_{h}}{2}(p_{+}-p_{0})_{\mu}\) & \(hW^{+}G^{-}\) & \(i\frac{g\Xi_{h}}{2}(p_{0}-p_{-})_{\mu}\) \\ \hline \(hH^{+}W^{-}\) & \(i\frac{g\overline{n}_{h}}{2}(k_{+}-p_{0})_{\mu}\) & \(hW^{+}H^{-}\) & \(i\frac{g\overline{\eta}_{h}}{2}(p_{0}-k_{-})_{\mu}\) \\ \hline \(hH^{\pm}G^{\mp}\) & \(i\frac{g\overline{n}_{h}(m_{H^{\pm}}^{2}-m_{h}^{2})}{2m_{W}}\) & \(hH^{\pm}H^{\mp}\) & \(ig\frac{\rho_{h}\mathcal{K}_{h}-\Delta_{h}\mathcal{Q}_{h}}{4m_{W}\sin 2\beta}+i \frac{4\lambda_{5}m_{W}\rho_{h}}{g\sin 2\beta}\) \\ \hline \(h\overline{l}\) & \(-igc_{h}^{l}\frac{m_{l}}{2m_{W}}\) & \(hn_{i}n_{j}\) & \(\frac{-igc_{h}^{n}}{2m_{W}}\left[C_{ij}\left(P_{L}m_{m_{i}}+P_{R}m_{n_{j}} \right)+C_{ij}^{*}\left(P_{L}m_{n_{j}}+P_{R}m_{n_{i}}\right)\right]\) \\ \hline \(\bar{n}_{i}\ell_{a}W_{\mu}^{+}\) & \(\frac{ig}{\sqrt{2}}U_{ai}^{\nu}\gamma^{\mu}P_{L}\) & \(\overline{\ell_{a}}n_{j}W_{\mu}^{-}\) & \(\frac{ig}{\sqrt{2}}U_{\omega j}^{\nu*}\gamma^{\mu}P_{L}\) \\ \hline \(\bar{n}_{i}\ell_{a}G_{W}^{+}\) & \(-\frac{ig}{\sqrt{2}m_{W}}U_{ai}^{\nu}\left(m_{\ell_{a}}P_{R}-m_{n_{i}}P_{L}\right)\) & \(\overline{\ell_{a}}n_{j}G_{W}^{-}\) & \(-\frac{ig}{\sqrt{2}m_{W}}U_{\omega j}^{*}\left(m_{\ell_{a}}P_{L}-m_{n_{i}}P_{R}\right)\) \\ \hline \(\bar{n}_{i}\ell_{a}H^{+}\) & \(-\frac{ig}{\sqrt{2}m_{W}}U_{ai}^{\nu}(e_{A}^{\prime}m_{\ell_{a}}P_{R}-e_{A}^{ \star}m_{n_{i}}P_{L})\) & \(\overline{\ell_{a}}n_{j}H^{-}\) & \(-\frac{ig}{\sqrt{2}m_{W}}U_{aj}^{*}(e_{A}^{\prime}m_{\ell_{a}}P_{L}-e_{A}^{ \star}m_{n_{i}}P_{R})\) \\ \hline \hline \end{tabular}
\end{table}
Table 6: Couplings of \(h\), \(G^{\pm}\), \(W^{\pm}\) and \(H^{\pm}\), for LFV-HD in the \(\nu\)2HDM-I,II.
\[A_{L}^{(2)}(W\overline{n}_{i}n_{j}) =2m_{W}^{2}m_{\ell_{a}}c_{\phi}^{n}\sum_{i,j=1}^{K+3}\left(\left( \left(m_{n_{i}}^{2}+m_{n_{j}}^{2}\right)\mathrm{C}_{1}-\mathrm{C}_{0}\,m_{n_{j} }^{2}\right)C_{ij}-\left(\mathrm{C}_{0}\,-2\,\mathrm{C}_{1}\right)C^{*}{}_{ij} m_{n_{i}}m_{n_{j}}\right)\Delta_{ij}^{ab},\] \[A_{R}^{(2)}(W\overline{n}_{i}n_{j}) =-2m_{W}^{2}m_{\ell_{b}}c_{\phi}^{n}\sum_{i,j=1}^{K+3}\left(\left( \left(m_{n_{i}}^{2}+m_{n_{j}}^{2}\right)\mathrm{C}_{2}+\mathrm{C}_{0}\,m_{n_{i }}^{2}\right)C_{ij}+\left(\mathrm{C}_{0}\,+2\,\mathrm{C}_{2}\right)C^{*}{}_{ ij}m_{n_{i}}m_{n_{j}}\right)\Delta_{ij}^{ab}. \tag{10}\]
\[A_{L}^{(3)}(n_{i}W^{+}W^{-}) =-4m_{W}^{4}m_{\ell_{a}}\Xi_{\phi}\sum_{i=1}^{K+3}\mathrm{C}_{1} \,\Delta_{ii}^{ab},\] \[A_{R}^{(3)}(n_{i}W^{+}W^{-}) =4m_{W}^{4}m_{\ell_{b}}\Xi_{\phi}\sum_{i=1}^{K+3}\mathrm{C}_{2} \,\Delta_{ii}^{ab}. \tag{11}\]
\[A_{L}^{(4)}(n_{i}W^{-}G^{+}) =m_{W}^{2}m_{\ell_{a}}\Xi_{\phi}\sum_{i=1}^{K+3}\left[\left(2m_{ \ell_{b}}^{2}-m_{n_{i}}^{2}\right)\mathrm{C}_{1}-\mathrm{C}_{0}\,m_{n_{i}}^{2 }-\mathrm{C}_{2}\,m_{\ell_{b}}^{2}\right]\Delta_{ii}^{ba},\] \[A_{R}^{(4)}(n_{i}W^{-}G^{+}) =m_{W}^{2}m_{\ell_{b}}\Xi_{\phi}\sum_{i=1}^{K+3}\left[\mathrm{B}_ {0}^{(12)}+\left(2m_{\ell_{b}}^{2}+m_{n_{i}}^{2}\right)\mathrm{C}_{2}-\left(m_ {\ell_{a}}^{2}+2m_{\ell_{b}}^{2}-2m_{h}^{2}\right)\mathrm{C}_{1}\,+3\,\mathrm{ C}_{0}\,m_{n_{i}}^{2}\right]\Delta_{ii}^{ba}, \tag{12}\]
\[A_{L}^{(5)}(n_{i}G^{-}W^{+}) =m_{W}^{2}m_{\ell_{a}}\Xi_{\phi}\sum_{i=1}^{K+3}m_{n_{i}}\left( \mathrm{B}_{0}^{(12)}-\left(2m_{\ell_{a}}^{2}+m_{n_{i}}^{2}\right)\mathrm{C}_{1 }+\left(2m_{\ell_{a}}^{2}+m_{\ell_{b}}^{2}-2m_{h}^{2}\right)\mathrm{C}_{2}\,+3 \,\mathrm{C}_{0}\,m_{n_{i}}^{2}\right)\Delta_{ii}^{ab},\] \[A_{R}^{(5)}(n_{i}G^{-}W^{+}) =m_{W}^{2}m_{\ell_{a}}\Xi_{\phi}\sum_{i=1}^{K+3}m_{n_{i}}\left(- \left(2m_{\ell_{a}}^{2}-m_{n_{i}}^{2}\right)\mathrm{C}_{2}-\mathrm{C}_{0}\,m_{ n_{i}}^{2}+\mathrm{C}_{1}\,m_{\ell_{a}}^{2}\right)\Delta_{ii}^{ab}. \tag{13}\]
\[A_{L}^{(6)}(n_{i}G^{-}G^{+}) =m_{\ell_{a}}m_{\phi}^{2}\Xi_{\phi}\sum_{i=1}^{K+3}\left(\left( \mathrm{C}_{0}-\mathrm{C}_{1}\right)m_{n_{i}}^{2}+\mathrm{C}_{2}\,m_{\ell_{b}}^ {2}\right)\Delta_{ii}^{ab},\] \[A_{R}^{(6)}(n_{i}G^{-}G^{+}) =m_{\ell_{b}}m_{\phi}^{2}\Xi_{\phi}\sum_{i=1}^{K+3}\left(\left( \mathrm{C}_{0}+\mathrm{C}_{2}\right)m_{n_{i}}^{2}-\mathrm{C}_{1}\,m_{\ell_{a}}^{2 }\right)\Delta_{ii}^{ab}. \tag{14}\]
\begin{table}
\begin{tabular}{c c c c c c c c c c c} \hline \hline
**No.** & **Structure** & **Diagram** & \(P_{0}\) & \(P_{1}\) & \(P_{2}\) & **No.** & **Structure** & **Diagram** & \(P_{0}\) & \(P_{1}\) & \(P_{2}\) \\ \hline \hline
1 & SFF & 11(i) & \(G_{W}\) & \(\overline{n}_{i}\) & \(n_{j}\) & 11 & SFF & 11(i) & \(H^{\pm}\) & \(\overline{n}_{i}\) & \(n_{j}\) \\ \hline
2 & VFF & 11(j) & \(W\) & \(\overline{n}_{i}\) & \(n_{j}\) & 12 & FVS & 11(c) & \(n_{i}\) & \(W\) & \(H^{\pm}\) \\ \hline
3 & FSS & 11(a) & \(n_{i}\) & \(G_{W}\) & \(G_{W}\) & 13 & FSV & 11(b) & \(n_{i}\) & \(H^{\pm}\) & \(W\) \\ \hline
4 & FVS & 11(c) & \(n_{i}\) & \(W\) & \(G_{W}\) & 14 & FSS & 11(a) & \(n_{i}\) & \(G_{W}\) & \(H^{\pm}\) \\ \hline
5 & FSV & 11(b) & \(n_{i}\) & \(G_{W}\) & \(W\) & 15 & FSS & 11(a) & \(n_{i}\) & \(H^{\pm}\) & \(G_{W}\) \\ \hline
6 & FVV & 11(d) & \(n_{i}\) & \(W\) & \(W\) & 16 & FSS & 11(a) & \(n_{i}\) & \(H^{\pm}\) & \(H^{\pm}\) \\ \hline
7 & FV & 11(g) & \(n_{i}\) & \(W\) & — & 17 & FS & 11(e) & \(n_{i}\) & \(H^{\pm}\) & — \\ \hline
8 & FS & 11(e) & \(n_{i}\) & \(G_{W}\) & — & 18 & SF & 11(f) & \(n_{i}\) & — & \(H^{\pm}\) \\ \hline
9 & VF & 11(h) & \(n_{i}\) & — & \(W\) & — & — & — & — & — \\ \hline
10 & SF & 11(f) & \(n_{i}\) & — & \(G_{W}\) & — & — & — & — & — \\ \hline \hline \end{tabular}
\end{table}
Table 7: Particles involved in each one loop diagram that contribute to \(\phi\to\ell_{a}^{+}\ell_{b}^{-}\) in the \(\nu\)2HDM.
\[A_{L}^{(7)}(n_{i}W) = -\frac{2m_{W}^{2}m_{\ell_{a}}m_{\ell_{b}}^{2}}{m_{\ell_{a}}^{2}-m_{ \ell_{b}}^{2}}c_{\phi}^{l}\sum_{i=1}^{K+3}\mathrm{B}_{1}^{(1)}\,\Delta_{ii}^{ab},\] \[A_{R}^{(7)}(n_{i}W) = -\frac{2m_{W}^{2}m_{\ell_{a}}^{2}m_{\ell_{b}}}{m_{\ell_{a}}^{2}-m_ {\ell_{b}}^{2}}c_{\phi}^{l}\sum_{i=1}^{K+3}\mathrm{B}_{1}^{(1)}\,\Delta_{ii}^{ ab}. \tag{112}\]
\[A_{L}^{(8)}(n_{i}G) = \frac{m_{\ell_{a}}m_{\ell_{b}}^{2}}{m_{\ell_{a}}^{2}-m_{\ell_{b}} ^{2}}c_{\phi}^{l}\sum_{i=1}^{K+3}\left(-\left(m_{\ell_{a}}^{2}+m_{n_{i}}^{2} \right)\mathrm{B}_{1}^{(1)}\,+2\,\mathrm{B}_{0}^{(1)}\,m_{n_{i}}^{2}\right) \Delta_{ii}^{ab},\] \[A_{R}^{(8)}(n_{i}G) = \frac{m_{\ell_{b}}}{m_{\ell_{a}}^{2}-m_{\ell_{b}}^{2}}c_{\phi}^{ l}\sum_{i=1}^{K+3}\left(\left(m_{\ell_{a}}^{2}+m_{\ell_{b}}^{2}\right)\mathrm{B}_{0}^ {(1)}\,m_{n_{i}}^{2}-\left(m_{\ell_{b}}^{2}+m_{n_{i}}^{2}\right)\mathrm{B}_{1} ^{(1)}\,m_{\ell_{a}}^{2}\right)\Delta_{ii}^{ab}. \tag{113}\]
\[A_{L}^{(9)}(Wn_{i}) = -\frac{2m_{W}^{2}m_{\ell_{a}}m_{\ell_{b}}^{2}}{m_{\ell_{a}}^{2}- m_{\ell_{b}}^{2}}c_{\phi}^{l}\sum_{i=1}^{K+3}\mathrm{B}_{1}^{(2)}\,\Delta_{ii}^{ ab},\] \[A_{R}^{(9)}(Wn_{i}) = -\frac{2m_{W}^{2}m_{\ell_{a}}^{2}m_{\ell_{b}}}{m_{\ell_{a}}^{2}- m_{\ell_{b}}^{2}}c_{\phi}^{l}\sum_{i=1}^{K+3}\mathrm{B}_{1}^{(2)}\,\Delta_{ii}^{ ab}. \tag{114}\]
\[A_{L}^{(10)}(Gn_{i}) = -\frac{m_{\ell_{a}}}{m_{\ell_{a}}^{2}-m_{\ell_{b}}^{2}}c_{\phi}^{ l}\sum_{i=1}^{K+3}\left(\left(m_{\ell_{a}}^{2}+m_{\ell_{b}}^{2}\right) \mathrm{B}_{0}^{(2)}\,m_{n_{i}}^{2}+\left(m_{\ell_{a}}^{2}+m_{n_{i}}^{2}\right) \mathrm{B}_{1}^{(2)}\,m_{\ell_{b}}^{2}\right)\Delta_{ii}^{ab},\] \[A_{R}^{(10)}(Gn_{i}) = -\frac{m_{\ell_{a}}^{2}m_{\ell_{b}}}{m_{\ell_{a}}^{2}-m_{\ell_{b}} ^{2}}c_{\phi}^{l}\sum_{i=1}^{K+3}\left(\left(m_{\ell_{b}}^{2}+m_{n_{i}}^{2} \right)\mathrm{B}_{1}^{(2)}\,+2\,\mathrm{B}_{0}^{(2)}\,m_{n_{i}}^{2}\right) \Delta_{ii}^{ab}. \tag{115}\]
\[A_{L}^{(11)}(H^{\pm}\overline{n}_{i}n_{j}) = -m_{a}c_{\phi}^{n}\sum_{i,j=1}^{K+3}\left[\left(K_{1}^{L}B_{0}^{(1 2)}+K_{2}^{L}\,\mathrm{C}_{0}\,+K_{3}^{L}\,\mathrm{C}_{1}\,+K_{4}^{L}\, \mathrm{C}_{2}\right)C_{ij}\right. \tag{116}\] \[+ \left(Q_{1}^{L}B_{0}^{(12)}+Q_{2}^{L}\,\mathrm{C}_{0}\,+Q_{3}^{L} \,\mathrm{C}_{1}\,+Q_{4}^{L}\,\mathrm{C}_{2}\right)C^{*}{}_{ij}m_{n_{i}}m_{n_{ j}}\right]\Delta_{ij}^{ab},\] \[A_{R}^{(11)}(H^{\pm}\overline{n}_{i}n_{j}) = m_{b}c_{\phi}^{n}\sum_{i,j=1}^{K+3}\left[\left(K_{1}^{R}B_{0}^{(1 2)}+K_{2}^{R}\,\mathrm{C}_{0}\,+K_{3}^{R}\,\mathrm{C}_{1}\,+K_{4}^{R}\, \mathrm{C}_{2}\right)C_{ij}\right.\] (117) \[+ \left(Q_{1}^{R}B_{0}^{(12)}+Q_{2}^{R}\,\mathrm{C}_{0}\,+Q_{3}^{R} \,\mathrm{C}_{1}\,+Q_{4}^{R}\,\mathrm{C}_{2}\right)C^{*}{}_{ij}m_{n_{i}}m_{n_{ j}}\right]\Delta_{ij}^{ab},\]
where \(K_{k}^{L}\) and \(Q_{k}^{L}\) with \(k=1,\ldots,4\), are given by
\[K_{1}^{L} = -e_{A}^{l}e_{A}^{n}m_{n}{}_{j}^{2}, \tag{118}\] \[K_{2}^{L} = \left(e_{A}^{l}\right)^{2}m_{\ell_{b}}^{2}m_{n}{}_{i}^{2}-e_{A}^{ l}e_{A}^{n}\left(m_{H^{\pm}}^{2}m_{n}{}_{j}^{2}+m_{\ell_{b}}^{2}m_{n}{}_{i}^{2}+m_{n}{}_{i}^{2}m _{n}{}_{j}^{2}\right)+\left(e_{A}^{n}\right)^{2}m_{n}{}_{i}^{2}m_{n}{}_{j}^{2},\] (119) \[K_{3}^{L} = e_{A}^{n}\left(e_{A}^{l}\left(m_{\ell_{a}}^{2}m_{n}{}_{j}^{2}+m_{ \ell_{b}}^{2}m_{n}{}_{i}^{2}\right)-2e_{A}^{n}m_{n}{}_{i}^{2}m_{n}{}_{j}^{2} \right),\] (120) \[K_{4}^{L} = e_{A}^{l}\left(e_{A}^{l}-e_{A}^{n}\right)\left(m_{n}{}_{i}^{2}+m_ {n}{}_{j}^{2}\right)m_{\ell_{b}}^{2},\] (121) \[Q_{L}^{1} = -e_{A}^{l}e_{A}^{n},\] (122) \[Q_{L}^{2} = \left(e_{A}^{l}\right)^{2}m_{\ell_{b}}^{2}-e_{A}^{l}e_{A}^{n}\left(m _{H^{\pm}}^{2}+m_{\ell_{b}}^{2}+m_{n}{}_{j}^{2}\right)+\left(e_{A}^{n}\right)^{2}m _{n}{}_{j}^{2},\] (123) \[Q_{L}^{3} = e_{A}^{n}\left(e_{A}^{l}\left(m_{\ell_{a}}^{2}+m_{\ell_{b}}^{2} \right)-e_{A}^{n}\left(m_{n}{}_{i}^{2}+m_{n}{}_{j}^{2}\right)\right),\] (124) \[Q_{L}^{4} = 2e_{A}^{l}\left(e_{A}^{l}-e_{A}^{n}\right)m_{\ell_{b}}^{2}, \tag{125}\]
also, \(K_{k}^{R}\) and \(Q_{k}^{R}\) are
\[K_{1}^{R} =-K_{1}^{L}\frac{m_{n_{i}}{}^{2}}{m_{n_{j}}{}^{2}}, \tag{111}\] \[K_{2}^{R} =-\left(e_{A}^{l}\right)^{2}m_{\ell_{a}}^{2}m_{n_{j}}{}^{2}+e_{A}^ {l}e_{A}^{n}\left(m_{H^{\pm}}^{2}m_{n_{i}}{}^{2}+m_{\ell_{a}}^{2}m_{n_{j}}{}^{2 }+m_{n_{i}}{}^{2}m_{n_{j}}{}^{2}\right)-\left(e_{A}^{n}\right)^{2}m_{n_{i}}{}^{2 }m_{n_{j}}{}^{2},\] (112) \[K_{3}^{R} =K_{4}^{L}\frac{m_{\ell_{a}}^{2}}{m_{\ell_{b}}^{2}},\] (113) \[K_{4}^{R} =K_{3}^{L},\] (114) \[Q_{1}^{R} =-Q_{L}^{1},\] (115) \[Q_{2}^{R} =-\left(e_{A}^{l}\right)^{2}m_{\ell_{a}}^{2}+e_{A}^{l}e_{A}^{n} \left(m_{H^{\pm}}^{2}+m_{\ell_{a}}^{2}+m_{n_{i}}{}^{2}\right)-\left(e_{A}^{n} \right)^{2}m_{n_{i}}{}^{2},\] (116) \[Q_{3}^{R} =Q_{L}^{4}\frac{m_{\ell_{a}}^{2}}{m_{\ell_{b}}^{2}}.\] (117) \[Q_{4}^{R} =Q_{L}^{3}\] (118) \[A_{L}^{(12)}(n_{i}W^{-}H^{+}) =m_{W}^{2}m_{\ell_{a}}\eta_{\phi}\sum_{i=1}^{K+3}\left[e_{A}^{l} \left(-2m_{\phi}^{2}+m_{\ell_{a}}^{2}+2m_{\ell_{b}}^{2}\right)\mathrm{C}_{1}-e _{A}^{l}\,\mathrm{B}_{0}^{(12)}\right.\] \[\left.-\left(e_{A}^{l}+2e_{A}^{n}\right)\mathrm{C}_{0}\,m_{n_{i}}{ }^{2}-\left(2e_{A}^{l}m_{\ell_{b}}^{2}+e_{A}^{n}m_{n_{i}}{}^{2}\right)\mathrm{C }_{2}\right]\Delta_{ii}^{ba}., \tag{119}\]
\[A_{L}^{(13)}(n_{i}H^{-}W^{+}) =m_{W}^{2}m_{\ell_{a}}\eta_{\phi}\sum_{i=1}^{K+3}\left[\left(2e_{ A}^{l}m_{\ell_{a}}^{2}+e_{A}^{n}m_{n_{i}}{}^{2}\right)\mathrm{C}_{1}-e_{A}^{l} \left(-2m_{\phi}^{2}+2m_{\ell_{a}}^{2}+m_{\ell_{b}}^{2}\right)\mathrm{C}_{2}\right.\] \[\left.-e_{A}^{l}\,\mathrm{B}_{0}^{(12)}-\left(e_{A}^{l}+2e_{A}^{n} \right)\mathrm{C}_{0}\,m_{n_{i}}{}^{2}\right]\Delta_{ii}^{ab}, \tag{120}\] \[A_{R}^{(13)}(n_{i}H^{-}W^{+}) =m_{W}^{2}m_{\ell_{b}}\eta_{\phi}\sum_{i=1}^{K+3}\left[e_{A}^{n} \,\mathrm{C}_{0}\,m_{n_{i}}{}^{2}-e_{A}^{l}\,\mathrm{C}_{1}\,m_{\ell_{a}}^{2} +\left(2e_{A}^{l}m_{\ell_{a}}^{2}-e_{A}^{n}m_{n_{i}}{}^{2}\right)\mathrm{C}_{2} \right]\Delta_{ii}^{ab}. \tag{121}\]
\[A_{L}^{(14)}(n_{i}G^{-}H^{+}) =m_{\ell_{a}}\eta_{\phi}\sum_{i=1}^{K+3}\left[e_{A}^{l}\left(m_{ \phi}^{2}-\left(m_{H^{\pm}}\right)^{2}\right)\mathrm{C}_{2}\,m_{\ell_{b}}^{2}\right.\] \[\left.+e_{A}^{n}\left(\left(m_{\phi}^{2}-\left(m_{H^{\pm}}\right) ^{2}\right)\left(\mathrm{C}_{0}-\mathrm{C}_{1}\right)\right)m_{n_{i}}{}^{2} \right]\Delta_{ii}^{ba}, \tag{122}\] \[A_{R}^{(14)}(n_{i}G^{-}H^{+}) =m_{\ell_{b}}\eta_{\phi}\sum_{i=1}^{K+3}\left[-e_{A}^{l}\left(m_{ \phi}^{2}-\left(m_{H^{\pm}}\right)^{2}\right)\mathrm{C}_{1}\,m_{\ell_{a}}^{2}\right.\] \[\left.+\left(\left(e_{A}^{l}\,\mathrm{C}_{0}-e_{A}^{n}\,\mathrm{C }_{1}\right)\left(m_{\phi}^{2}-\left(m_{H^{\pm}}\right)^{2}\right)\right)m_{n_ {i}}{}^{2}\right]\Delta_{ii}^{ba}, \tag{123}\]
\[A_{L}^{(15)}(n_{i}H^{-}G^{+}) =m_{\ell_{a}}\eta_{\phi}\sum_{i=1}^{K+3}\left[e_{A}^{l}\left(m_{ \phi}^{2}-\left(m_{H^{\pm}}\right)^{2}\right)\mathrm{C}_{2}\,m_{\ell_{b}}^{2}\right.\] \[\qquad\qquad\left.+\left(\left(e_{A}^{l}\,\mathrm{C}_{0}-e_{A}^{n }\,\mathrm{C}_{1}\right)\left(m_{\phi}^{2}-\left(m_{H^{\pm}}\right)^{2}\right) \right)m_{n_{i}}{}^{2}\right]\Delta_{ii}^{ab}, \tag{110}\] \[A_{R}^{(15)}(n_{i}H^{-}G^{+}) =m_{\ell_{b}}\eta_{\phi}\sum_{i=1}^{K+3}\left[-e_{A}^{l}\left(m_{ \phi}^{2}-\left(m_{H^{\pm}}\right)^{2}\right)\mathrm{C}_{1}\,m_{\ell_{a}}^{2}\right.\] \[\qquad\qquad\qquad\left.+e_{A}^{n}\left(\left(m_{\phi}^{2}-\left( m_{H^{\pm}}\right)^{2}\right)\left(\mathrm{C}_{0}+\mathrm{C}_{2}\right)\right)m_{n_{ i}}{}^{2}\right]\Delta_{ii}^{ab}, \tag{111}\]
\[A_{L}^{(16)}(n_{i}H^{-}H^{+}) =m_{\ell_{a}}\frac{16\lambda_{5}\rho_{\phi}m_{W}^{2}+g^{2}\left( \rho_{\phi}\mathcal{K}_{\phi}-\Delta_{\phi}\mathcal{Q}_{\phi}\right)}{2g^{2} \sin\left(2\beta\right)}\] \[\times\sum_{i=1}^{K+3}\left[-\left(e_{A}^{l}\right)^{2}\mathrm{C} _{2}\,m_{\ell_{b}}^{2}-e_{A}^{l}e_{A}^{n}\,\mathrm{C}_{0}\,m_{n_{i}}{}^{2}+ \left(e_{A}^{n}\right)^{2}\mathrm{C}_{1}\,m_{n_{i}}{}^{2}\right]\Delta_{ii}^{ab}, \tag{112}\] \[A_{R}^{(16)}(n_{i}H^{-}H^{+}) =m_{\ell_{b}}\frac{16\lambda_{5}\rho_{\phi}m_{W}^{2}+g^{2}\left( \rho_{\phi}\mathcal{K}_{\phi}-\Delta_{\phi}\mathcal{Q}_{\phi}\right)}{2g^{2} \sin\left(2\beta\right)}\] \[\times\sum_{i=1}^{K+3}\left[\left(e_{A}^{l}\right)^{2}\mathrm{C} _{1}\,m_{\ell_{a}}^{2}-e_{A}^{l}e_{A}^{n}\,\mathrm{C}_{0}\,m_{n_{i}}{}^{2}- \left(e_{A}^{n}\right)^{2}\mathrm{C}_{2}\,m_{n_{i}}{}^{2}\right]\Delta_{ii}^{ab}. \tag{113}\]
\[A_{L}^{(17)}(n_{i}H^{\pm}) =\frac{m_{\ell_{a}}m_{\ell_{b}}^{2}}{m_{\ell_{a}}^{2}-m_{\ell_{b} }^{2}}\ell_{\phi}^{l}\sum_{i=1}^{K+3}\left(2e_{A}^{l}e_{A}^{n}\,\mathrm{B}_{0} ^{(1)}\,m_{n_{i}}{}^{2}-\left(\left(e_{A}^{l}\right)^{2}m_{\ell_{a}}^{2}+\left( e_{A}^{n}\right)^{2}m_{n_{i}}{}^{2}\right)\mathrm{B}_{1}^{(1)}\right)\Delta_{ii}^{ab}, \tag{114}\] \[A_{R}^{(17)}(n_{i}H^{\pm}) =\frac{m_{\ell_{b}}}{m_{\ell_{a}}^{2}-m_{\ell_{b}}^{2}}\ell_{\phi }^{l}\sum_{i=1}^{K+3}\left(e_{A}^{l}e_{A}^{n}\left(m_{\ell_{a}}^{2}+m_{\ell_{b} }^{2}\right)\mathrm{B}_{0}^{(1)}\,m_{n_{i}}{}^{2}\right.\] \[\qquad\qquad\qquad\left.-\left(\left(e_{A}^{l}\right)^{2}m_{\ell _{b}}^{2}+\left(e_{A}^{n}\right)^{2}m_{n_{i}}{}^{2}\right)\mathrm{B}_{1}^{(1)} \,m_{\ell_{a}}^{2}\right)\Delta_{ii}^{ab}, \tag{115}\]
\[A_{L}^{(18)}(H^{\pm}n_{i}) =-\frac{m_{\ell_{a}}}{m_{\ell_{a}}^{2}-m_{\ell_{b}}^{2}}\ell_{\phi }^{l}\sum_{i=1}^{K+3}\left(e_{A}^{l}e_{A}^{n}\,\mathrm{B}_{0}^{(2)}\,m_{n_{i}}{} ^{2}\right.\] \[\qquad\qquad\qquad\left.+\left(\left(e_{A}^{l}\right)^{2}m_{\ell _{a}}^{2}+\left(e_{A}^{n}\right)^{2}m_{n_{i}}{}^{2}\right)\mathrm{B}_{1}^{(2)} \,m_{\ell_{b}}^{2}\right)\Delta_{ii}^{ab}, \tag{116}\] \[A_{R}^{(18)}(H^{\pm}n_{i}) =-\frac{m_{\ell_{a}}^{2}m_{\ell_{b}}}{m_{\ell_{a}}^{2}-m_{\ell_{b} }^{2}}\ell_{\phi}^{l}\sum_{i=1}^{K+3}\left(2e_{A}^{l}e_{A}^{n}\,\mathrm{B}_{0} ^{(2)}\,m_{n_{i}}{}^{2}+\left(\left(e_{A}^{l}\right)^{2}m_{\ell_{b}}^{2}+\left( e_{A}^{n}\right)^{2}m_{n_{i}}{}^{2}\right)\mathrm{B}_{1}^{(2)}\right)\Delta_{ii}^{ab}. \tag{117}\]
|
2310.12808
|
Model Merging by Uncertainty-Based Gradient Matching
|
Models trained on different datasets can be merged by a weighted-averaging of
their parameters, but why does it work and when can it fail? Here, we connect
the inaccuracy of weighted-averaging to mismatches in the gradients and propose
a new uncertainty-based scheme to improve the performance by reducing the
mismatch. The connection also reveals implicit assumptions in other schemes
such as averaging, task arithmetic, and Fisher-weighted averaging. Our new
method gives consistent improvements for large language models and vision
transformers, both in terms of performance and robustness to hyperparameters.
Code available here.
|
Nico Daheim, Thomas Möllenhoff, Edoardo Maria Ponti, Iryna Gurevych, Mohammad Emtiyaz Khan
|
2023-10-19T15:02:45Z
|
http://arxiv.org/abs/2310.12808v2
|
# Model Merging by Uncertainty-Based Gradient Matching
###### Abstract
Models trained on different datasets can be merged by a weighted-averaging of their parameters, but why does it work and when can it fail? Here, we connect the inaccuracy of weighted-averaging to mismatches in the gradients and propose a new uncertainty-based scheme to improve the performance by reducing the mismatch. The connection also reveals implicit assumptions in other schemes such as averaging, task arithmetic, and Fisher-weighted averaging. Our new method gives consistent improvements for large language models and vision transformers, both in terms of performance and robustness to hyperparameters.
## 1 Introduction
Merging models through a weighted averaging of their parameters has recently found many applications in deep learning. For example, averaging checkpoints generated during various training runs can improve out-of-distribution generalization (Izmailov et al., 2018; Wortsman et al., 2022b; Gao et al., 2022, _inter alia_), while averaging models trained on different datasets can borrow knowledge from "donor tasks" (Matena & Raffel, 2022) and enforce specific fine-grained behaviors in models (Ilharco et al., 2023; Daheim et al., 2023). The latter is particularly attractive for post-hoc "editing" of large pretrained models without retraining, for instance, to remove toxicity from a large language model (LLM). Simple weighted-averaging appears to tackle many difficult knowledge transfer and adaptation problems that machine learning methods have struggled to solve in the past.
The reasons behind the effectiveness of weighted-averaging methods are not well understood. The diversity in applications has led to a large number of averaging schemes, including arithmetic mean (Wortsman et al., 2022b;a), linear interpolation (Ilharco et al., 2023; Ortiz-Jimenez et al., 2023; Yadav et al., 2023), or individual parameter weighing (Matena & Raffel, 2022; Daheim et al., 2023). A prominent hypothesis, known as 'linear mode connectivity', is that when the models land in relatively few low-loss basins their interpolation again lies in them (Frankle et al., 2020; Neyshabur et al., 2020; Wortsman et al., 2022a; Ainsworth et al., 2023). However, this does not directly tell us why and when one merging scheme should be preferred over the others, nor does it give any hints on how to improve them. Ideally, we would like to understand the effect of averaging schemes on the accuracy of the merged model and use it to design better merging methods.
In this paper, we improve model merging by proposing an uncertainty-based gradient matching method. We make two contributions: we first connect the inaccuracy of weighted-averaging to mismatches in the gradients and then improve its performance by reducing the mismatch with a second-order approximation; see an illustration in Fig. 1. Our new method uses (cheap) Hessian estimates to merge models which scales well to large Transformers (Vaswani et al., 2017). We use the method to reveal several assumptions implicit in existing model merging schemes like averaging, task arithmetic (Ilharco et al., 2023), and Fisher-weighted merging (Matena & Raffel, 2022); see Table 1. Finally, we show connections of our method to Bayesian approaches and discuss how we
can leverage them to further improve model merging. Empirical results on LLMs and ViTs show consistent improvements, both in terms of performance and robustness to hyperparameters.
## 2 Model Merging by Parameter Averaging
We consider merging multiple models that share the same architecture but are trained on different datasets, for example, by fine-tuning a large pretrained model. We denote each of the \(T>1\) models by its parameter vector \(\mathbf{\theta}_{t}\in\mathbb{R}^{d}\). Throughout this section, we will use an LLM, denoted by \(\mathbf{\theta}_{\text{LLM}}\), but the results straightforwardly apply to other pretrained models. Given \(\mathbf{\theta}_{\text{LLM}}\) and different \(\mathbf{\theta}_{t}\), our goal is to understand the inaccuracies in existing parameter-averaging methods and improve them.
We focus on the following simple weighted-averaging scheme: \(\bar{\mathbf{\theta}}=\mathbf{S}_{0}\,\mathbf{\theta}_{\text{LLM}}+\sum_{t=1}^{T} \mathbf{S}_{t}\,\mathbf{\theta}_{t}\), where \(\bar{\mathbf{\theta}}\) is the merged model obtained with scaling matrices \(\mathbf{S}_{t}\in\mathbb{R}^{d\times d}\) for \(t=0,1,\ldots,T\). Since the dimension \(d\) is often large, simple choices of \(\mathbf{S}_{t}\) are used in practice. The simplest one is the arithmetic mean (AM) or its weighted version (WAM; Wortsman et al., 2022b; a):
\[\bar{\mathbf{\theta}}_{\text{AM}}=\frac{1}{T}\sum_{t=1}^{T}\,\mathbf{\theta}_{t}, \qquad\bar{\mathbf{\theta}}_{\text{WAM}}=\alpha_{0}\mathbf{\theta}_{\text{LLM}}+\sum_ {t=1}^{T}\alpha_{t}\mathbf{\theta}_{t}, \tag{1}\]
where \(\alpha_{t}\geq 0\). For large models, different parameters have different scaling and it is better to take this into account, for example, by using the Fisher matrix \(\mathbf{F}_{t}\):
\[\bar{\mathbf{\theta}}_{\text{FA}}=\sum_{t=1}^{T}\mathbf{S}_{t}\mathbf{\theta}_{t},\; \text{where}\;\mathbf{S}_{t}=\alpha_{t}\bar{\mathbf{F}}^{-1}\mathbf{F}_{t}\; \text{with}\;\bar{\mathbf{F}}=\sum_{t=1}^{T}\alpha_{t}\mathbf{F}_{t},\;\text{ for all}\;t\geq 1, \tag{2}\]
giving rise to 'Fisher Averaging' (FA). We could similarly include \(\mathbf{S}_{0}\) by using the Fisher \(\mathbf{F}_{0}\) of the LLM. In practice, to reduce the computation cost, we may only use the diagonal of the Fisher estimated in an online fashion (Matena and Raffel, 2022). This is similar to strategies in continual learning (Kirkpatrick et al., 2017) where the choice of Fisher is justified through Bayesian updating (Huszár, 2018). However, such connections are not yet explored or exploited for model merging.
Using Fisher should improve things a bit but the extent of improvement is unclear. A recent work by Jin et al. (2023) uses insights from linear models to justify some of these choices, but such justification may not hold for nonlinear models. In general, it is also not clear how Fisher-averaging takes care of the commonalities between the fine-tuning \(\mathbf{\theta}_{t}\) of the LLM \(\mathbf{\theta}_{\text{LLM}}\). Should we include \(\mathbf{F}_{0}\) or not, and how should it be combined with the other \(\mathbf{F}_{t}\) so as to avoid double counting of information in the models? The current practice is to simply tune \(\alpha_{t}\) on a validation set which is one way to make up for the errors, but this can quickly become expensive as \(T\) increases.
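For illustration, a minimal PyTorch-style sketch of the weighted and Fisher-weighted averages of Eqs. (1)-(2) is given below; models are treated as plain dictionaries of parameter tensors with diagonal Fisher estimates, and the toy tensors are placeholders, not the actual models used in the experiments.

```python
import torch

def weighted_average(thetas, alphas):
    """Weighted parameter average (WAM of Eq. (1)); the LLM can be one entry of the list."""
    keys = thetas[0].keys()
    return {k: sum(a * th[k] for a, th in zip(alphas, thetas)) for k in keys}

def fisher_average(thetas, fishers, alphas, eps=1e-8):
    """Diagonal Fisher averaging (Eq. (2)): per-parameter weights alpha_t * F_t, normalized by their sum."""
    merged = {}
    for k in thetas[0].keys():
        num = sum(a * f[k] * th[k] for a, f, th in zip(alphas, fishers, thetas))
        den = sum(a * f[k] for a, f in zip(alphas, fishers)) + eps
        merged[k] = num / den
    return merged

# Tiny illustration with two hypothetical 'models' (random parameters and Fishers)
torch.manual_seed(0)
thetas = [{"w": torch.randn(3)} for _ in range(2)]
fishers = [{"w": torch.rand(3)} for _ in range(2)]
print(weighted_average(thetas, alphas=[0.5, 0.5]))
print(fisher_average(thetas, fishers, alphas=[1.0, 1.0]))
```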
Recently, Ilharco et al. (2023) proposed to subtract the contribution of \(\mathbf{\theta}_{\text{LLM}}\) with the following simple 'task arithmetic' (TA): \(\bar{\mathbf{\theta}}_{\text{TA}}=\mathbf{\theta}_{\text{LLM}}+\sum_{t=1}^{T}\alpha_{ t}(\mathbf{\theta}_{t}-\mathbf{\theta}_{\text{LLM}})\). Subtracting \(\mathbf{\theta}_{\text{LLM}}\)
Figure 1: The left panel illustrates our approach. We connect the error \(\Delta\) of the merged model \(\mathbf{\theta}_{\text{merged}}\) to the gradient mismatch over losses \(\ell_{t}\) and propose a new method that reduces the mismatch by using the Hessian \(\mathbf{H}_{t}\) and error \(\Delta_{t}\) of the individual models \(\mathbf{\theta}_{t}\). The right panel shows an example of adding datasets to RoBERTa trained on IMDB. We clearly see that reducing mismatch also reduces test error of task arithmetic. We consider 5 datasets, each indicated by a number on the markers.
from \(\mathbf{\theta}_{t}\) should reduce double-counting of the information, but adding Fisher-style scaling in this scheme can be a bit tricky. A recent work by Daheim et al. (2023) proposes to use \(\bar{\mathbf{\theta}}_{\text{FA1}}=\overline{\mathbf{F}}^{-1}(\mathbf{F}_{\text{LLM}}\mathbf{\theta}_{\text{LLM}}+\sum_{t=1}^{T}\mathbf{F}_{t}(\mathbf{\theta}_{t}-\mathbf{\theta}_{\text{LLM}}))\) for \(\overline{\mathbf{F}}=\mathbf{F}_{\text{LLM}}+\sum_{t=1}^{T}\bar{\mathbf{F}}_{t}\), but \(\bar{\mathbf{F}}_{t}\) is calculated at \((\mathbf{\theta}_{t}-\mathbf{\theta}_{\text{LLM}})\), which lacks theoretical justification, and using a scaling matrix in front of \(\mathbf{\theta}_{\text{LLM}}\) also departs from other approaches. TA also allows for \(\alpha_{t}<0\) to remove the contribution of old knowledge, which differs from other schemes, and it is not clear if those schemes can also use negative weights.
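A corresponding sketch of task arithmetic, again on placeholder parameters, is shown below; as just discussed, a negative \(\alpha_{t}\) subtracts a task's contribution.

```python
import torch

def task_arithmetic(theta_llm, thetas, alphas):
    """Task arithmetic: theta_TA = theta_LLM + sum_t alpha_t * (theta_t - theta_LLM)."""
    return {k: theta_llm[k] + sum(a * (th[k] - theta_llm[k]) for a, th in zip(alphas, thetas))
            for k in theta_llm.keys()}

# Toy illustration with hypothetical parameters; the second task is partially removed
theta_llm = {"w": torch.zeros(3)}
thetas = [{"w": torch.ones(3)}, {"w": -torch.ones(3)}]
print(task_arithmetic(theta_llm, thetas, alphas=[0.5, -0.2]))
```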
To summarize, we want to understand the effect of various choices made in parameter-averaging methods. That is, we want to know the following: (1) how to choose the scaling matrices; (2) what is the effect of these choices on the accuracy of the merged model; and finally (3) how to obtain a new method that reduces the inaccuracies in previous approaches. In what follows, we answer such questions by proposing a new connection with gradient mismatch and a new method inspired by it.
## 3 Model Merging and Connections to Gradient Mismatches
To understand the inaccuracies of parameter averaging, we introduce the idea of a _target model_: it is the model that model merging methods want to estimate. Here is an example: consider two models \(\mathbf{\theta}_{1}\) and \(\mathbf{\theta}_{2}\) trained on two datasets \(\mathcal{D}_{1}\) and \(\mathcal{D}_{2}\), respectively, for example, as follows,
\[\mathbf{\theta}_{1}=\operatorname*{arg\,min}_{\mathbf{\theta}}\ \ \bar{\ell}_{1}(\mathbf{ \theta})+\tfrac{1}{2}\|\mathbf{\theta}\|^{2},\qquad\mathbf{\theta}_{2}=\operatorname* {arg\,min}_{\mathbf{\theta}}\ \ \bar{\ell}_{2}(\mathbf{\theta})+\tfrac{1}{2}\|\mathbf{\theta}\|^{2}. \tag{3}\]
Here, the loss functions on \(\mathcal{D}_{1}\) and \(\mathcal{D}_{2}\) are denoted by \(\bar{\ell}_{1}(\mathbf{\theta})\) and \(\bar{\ell}_{2}(\mathbf{\theta})\) respectively and the regularizer is an \(L_{2}\) regularizer (what follows also holds for other explicit regularizers, also implicit ones). The target model in this case could be a model \(\mathbf{\theta}_{1+2}\) that is trained jointly on the two datasets:
\[\mathbf{\theta}_{1+2}=\operatorname*{arg\,min}_{\mathbf{\theta}}\ \alpha_{1}\bar{\ell}_{1}(\mathbf{ \theta})+\alpha_{2}\bar{\ell}_{2}(\mathbf{\theta})+\tfrac{1}{2}\|\mathbf{\theta}\|^{2}. \tag{4}\]
We have used scalars \(\alpha_{1}\) and \(\alpha_{2}\) which reflect relative weighting of each loss function. We will now connect gradient mismatch to the error between the target \(\mathbf{\theta}_{1+2}\) and a parameter-average \(\alpha_{1}\mathbf{\theta}_{1}+\alpha_{2}\mathbf{\theta}_{2}\). The approach is general and applies to different types of targets and averages. This will be explored extensively in the rest of the paper.
We start with the first-order stationarity conditions of the models in Eqs. 3 and 4,
\[\mathbf{\theta}_{1}=-\nabla\bar{\ell}_{1}(\mathbf{\theta}_{1}),\qquad\mathbf{\theta}_{2}= -\nabla\bar{\ell}_{2}(\mathbf{\theta}_{2}),\qquad\mathbf{\theta}_{1+2}=-\alpha_{1} \nabla\bar{\ell}_{1}(\mathbf{\theta}_{1+2})-\alpha_{2}\nabla\bar{\ell}_{2}(\mathbf{ \theta}_{1+2}), \tag{5}\]
which is obtained by setting the gradient of their objectives to zero. Using these, we can express \(\mathbf{\theta}_{1+2}\) in terms of \(\alpha_{1}\mathbf{\theta}_{1}+\alpha_{2}\mathbf{\theta}_{2}\) and quantify the error made. To do so, we multiply the first and second equations above by \(\alpha_{1}\) and \(\alpha_{2}\) respectively, and add them together. Then, we subtract the resultant from the third equation to get the following expression:
\[\underbrace{\mathbf{\theta}_{1+2}-(\alpha_{1}\mathbf{\theta}_{1}+\alpha_{2}\mathbf{ \theta}_{2})}_{=\Delta,\text{ Error of the merged model}}=-\alpha_{1}\underbrace{\left[ \nabla\bar{\ell}_{1}(\mathbf{\theta}_{1+2})-\nabla\bar{\ell}_{1}(\mathbf{\theta}_{1} )\right]}_{\text{Gradient mismatch for }\mathbf{\theta}_{1}\text{ on }\bar{\ell}_{1}}- \alpha_{2}\underbrace{\left[\nabla\bar{\ell}_{2}(\mathbf{\theta}_{1+2})-\nabla \bar{\ell}_{2}(\mathbf{\theta}_{2})\right]}_{\text{Gradient mismatch for }\mathbf{\theta}_{2}\text{ on }\bar{\ell}_{2}}. \tag{6}\]
The left-hand side is the error \(\Delta=\mathbf{\theta}_{1+2}-(\alpha_{1}\mathbf{\theta}_{1}+\alpha_{2}\mathbf{\theta}_{2})\) which is equal to the weighted-sum of the two gradient-mismatch terms, each measured on the individual losses \(\bar{\ell}_{1}(\mathbf{\theta}_{1})\) and \(\bar{\ell}_{2}(\mathbf{\theta}_{2})\), respectively. The expression shows that if the individual models are already close (in terms of their gradients) to the target model, then parameter averaging should be reasonably accurate. It also tells us that there is room for improvement and mismatch reduction may lead to better schemes.
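Equation (6) is an exact consequence of the stationarity conditions in Eq. (5) and can be checked numerically; the following self-contained sketch does so for two ridge-regularized least-squares problems with random (hypothetical) data and \(\alpha_{1}=\alpha_{2}=1\).

```python
import numpy as np

rng = np.random.default_rng(0)
X1, y1 = rng.normal(size=(20, 3)), rng.normal(size=20)   # hypothetical dataset D1
X2, y2 = rng.normal(size=(30, 3)), rng.normal(size=30)   # hypothetical dataset D2

grad1 = lambda th: X1.T @ (X1 @ th - y1)                 # gradient of the squared loss on D1
grad2 = lambda th: X2.T @ (X2 @ th - y2)                 # gradient of the squared loss on D2

def solve(A, b):
    # minimizer of the L2-regularized quadratic objective: (A + I) theta = b
    return np.linalg.solve(A + np.eye(3), b)

theta1  = solve(X1.T @ X1, X1.T @ y1)                              # Eq. (3), trained on D1
theta2  = solve(X2.T @ X2, X2.T @ y2)                              # Eq. (3), trained on D2
theta12 = solve(X1.T @ X1 + X2.T @ X2, X1.T @ y1 + X2.T @ y2)      # Eq. (4), alpha1 = alpha2 = 1

delta    = theta12 - (theta1 + theta2)                             # error of the merged model
mismatch = -(grad1(theta12) - grad1(theta1)) - (grad2(theta12) - grad2(theta2))
print(np.allclose(delta, mismatch))                                # Eq. (6) holds exactly here
```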
The method above can be used to analyze errors of generic parameter-averaging schemes. For instance, for data removal tasks, say to target \(\mathbf{\theta}_{2}\) by an operation \(\mathbf{\theta}_{1+2}-\alpha_{1}\mathbf{\theta}_{1}\), we can simply rearrange terms in Eq. 6 to express \(\mathbf{\theta}_{2}\) in terms of \(\mathbf{\theta}_{1+2}\) and \(\mathbf{\theta}_{1}\):
\[\mathbf{\theta}_{2}-\left(\mathbf{\theta}_{1+2}-\alpha_{1}\mathbf{\theta}_{1}\right)/ \alpha_{2}=(\alpha_{1}/\alpha_{2})\left[\nabla\bar{\ell}_{1}(\mathbf{\theta}_{1+2} )-\nabla\bar{\ell}_{1}(\mathbf{\theta}_{1})\right]+\left[\nabla\bar{\ell}_{2}(\bm {\theta}_{1+2})-\nabla\bar{\ell}_{2}(\mathbf{\theta}_{2})\right].\]
Generic loss functions can be used, for example, a non-differentiable loss function can be handled through smoothing techniques. Any convex regularizer can be used in which case the error is measured in the dual space of the regularizer. The approach also works in the absence of a regularizer. Test accuracy can also be analyzed in the same fashion. For example, given a test loss \(\bar{\ell}_{\text{Test}}(\mathbf{\theta})\) and a weighted-average \(\bar{\mathbf{\theta}}\), we can write: \(\bar{\ell}_{\text{Test}}(\mathbf{\theta}_{1+2})-\bar{\ell}_{\text{Test}}(\bar{\mathbf{\theta}})\approx\nabla\bar{\ell}_{\text{Test}}(\bar{\mathbf{\theta}})^{\top}(\mathbf{\theta}_{1+2}-\bar{\mathbf{\theta}})\). Large gradient mismatch therefore is expected to correlate with large differences in test performance.
Sources of errors can be analyzed, too. For example, when the test data is more correlated to \(\mathcal{D}_{1}\), then model merging would be effective if gradient mismatch due to \(\mathbf{\theta}_{1}\) is also small. This is similar to linear mode connectivity: when both the target and merged models lie in low-loss basins, we expect gradient mismatch to be low due to flatness. However, gradient-mismatch does not require this and is more general and constructive, that is, it allows us to improve models by actively reducing the mismatch. In what follows, we will use gradient mismatch as a principle to not only study the inaccuracies of various averaging schemes but also to design better ones.
### Analyzing the Inaccuracy of Task Arithmetic on Large Language Models
We will demonstrate the use of the gradient-mismatch principle to analyze the inaccuracy of 'task arithmetic' (Ilharco et al., 2023). Similar analysis can be done for other pretrained models and pretraining procedures but we consider an LLM trained on a large dataset \(\mathcal{D}_{\text{Large}}\),
\[\mathbf{\theta}_{\text{LLM}}=\operatorname*{arg\,min}_{\mathbf{\theta}}\ \ \bar{\ell}_{\text{LLM}}(\mathbf{\theta})+\tfrac{1}{2}\delta\|\mathbf{\theta}\|^{2},\text{ where }\bar{\ell}_{\text{LLM}}(\mathbf{\theta})=\sum_{i\in\mathcal{D}_{\text{Large}}}\ell_{i}(\mathbf{\theta}). \tag{7}\]
Here, \(\ell_{i}(\mathbf{\theta})\) denotes the loss on the \(i\)'th example. For simplicity, we use an \(L_{2}\) regularization with parameter \(\delta>0\) but the choice is not crucial. The loss function can also be normalized. Our goal is to merge models \(\mathbf{\theta}_{t}\) that are finetuned on different datasets \(\mathcal{D}_{t}\) for \(t=1,2,\dots,T\). In what follows, we assume the following fine-tuning procedure with a regularizer,
\[\mathbf{\theta}_{t}=\operatorname*{arg\,min}_{\mathbf{\theta}}\ \ \bar{\ell}_{t}(\mathbf{ \theta})+\tfrac{1}{2}\|\mathbf{\theta}-\mathbf{\theta}_{\text{LLM}}\|_{\mathbf{H}_{0}} ^{2}, \tag{8}\]
where \(\|\mathbf{\theta}\|_{\mathbf{H}_{0}}^{2}=\mathbf{\theta}^{\top}\mathbf{H}_{0}\mathbf{\theta}\) is the Mahalanobis distance with a scaling matrix \(\mathbf{H}_{0}\) which controls how different \(\mathbf{\theta}\) is from \(\mathbf{\theta}_{\text{LLM}}\). We will discuss how to set \(\mathbf{H}_{0}\) later. _The derivation can be easily adopted to other fine-tuning procedures_ as long as we can express the dependence on \(\mathbf{\theta}_{\text{LLM}}\) explicitly.
Task arithmetic (TA) uses \(\bar{\mathbf{\theta}}_{\text{TA}}=\mathbf{\theta}_{\text{LLM}}+\sum_{t}\alpha_{t}( \mathbf{\theta}_{t}-\mathbf{\theta}_{\text{LLM}})\) to merge models. There are two natural questions: what is the target model that such a scheme is trying to approximate and what are the errors made by TA in approximating it? As before, a reasonable choice of the target model is the one obtained by fine-tuning using a similar procedure as Eq. 8 but on all \(\mathcal{D}_{t}\) at once,
\[\mathbf{\theta}_{1:T}=\operatorname*{arg\,min}_{\mathbf{\theta}}\ \sum_{t=1}^{T}\alpha_{t}\bar{\ell}_{t}(\mathbf{ \theta})+\tfrac{1}{2}\|\mathbf{\theta}-\mathbf{\theta}_{\text{LLM}}\|_{\mathbf{H}_{0} }^{2}. \tag{9}\]
We use the weighting with \(\alpha_{t}\) to align the target model to the weighting used in the merging scheme, but this is not required and other targets can be used. Following the same derivation as Eq. 6, we can quantify the error between \(\mathbf{\theta}_{1:T}\) and \(\bar{\mathbf{\theta}}_{\text{TA}}\) (a full derivation is given in App. A.1):
\[\mathbf{\theta}_{1:T}=\underbrace{\mathbf{\theta}_{\text{LLM}}+\sum_{t=1}^{T}\alpha_{ t}(\mathbf{\theta}_{t}-\mathbf{\theta}_{\text{LLM}})}_{=\bar{\mathbf{\theta}}_{\text{ TA}}}-\sum_{t=1}^{T}\alpha_{t}\mathbf{H}_{0}^{-1}\underbrace{\big{[}\nabla \bar{\ell}_{t}(\mathbf{\theta}_{1:T})-\nabla\bar{\ell}_{t}(\mathbf{\theta}_{t})\big{]} }_{\text{Gradient mismatch for }\mathbf{\theta}_{t}\text{ on }\bar{\ell}_{t}}. \tag{10}\]
The derivation can be used to understand the implicit assumptions made in task arithmetic. The increments \(\mathbf{\theta}_{t}-\mathbf{\theta}_{\text{LLM}}\) arise above due to the quadratic regularizer \(\|\mathbf{\theta}-\mathbf{\theta}_{\text{LLM}}\|^{2}\) used in Eqs. 8 and 9. Using the increments avoids double counting. More importantly, the error between the target \(\mathbf{\theta}_{1:T}\) and \(\bar{\mathbf{\theta}}_{\text{TA}}\) is attributed to gradient mismatch between \(\mathbf{\theta}_{t}\) and \(\mathbf{\theta}_{1:T}\). The expression suggests that by reducing the gradient mismatch we may be able to improve task arithmetic. We will now show that a simple method that uses Taylor's approximation to reduce the gradient mismatch justifies combining TA with a Fisher-like weighting scheme.
### A New Method to Reduce the Gradient Mismatch
We now derive a new parameter-averaging method by reducing the gradient mismatch in Eq. 10. Explicit minimization of the mismatch is non-trivial because \(\nabla\bar{\ell}_{t}(\mathbf{\theta}_{1:T})\) depends non-linearly on \(\mathbf{\theta}_{1:T}\) but we can get rid of the term by using a first-order Taylor approximation,
\[\nabla\bar{\ell}_{t}(\mathbf{\theta})\approx\nabla\bar{\ell}_{t}(\mathbf{\theta}_{t})+ \mathbf{H}_{t}(\mathbf{\theta}-\mathbf{\theta}_{t}) \tag{11}\]
where \(\mathbf{H}_{t}=\nabla^{2}\bar{\ell}_{t}(\mathbf{\theta}_{t})\) is the Hessian of the loss \(\bar{\ell}_{t}\) at \(\mathbf{\theta}_{t}\). Using this in Eq. 10 and after some rearrangement, we get the following merging scheme (a full derivation is given in App. A.2),
\[\hat{\mathbf{\theta}}_{\texttt{1:}T}=\mathbf{\theta}_{\texttt{LLM}}+\sum_{t=1}^{T} \alpha_{t}\left(\bar{\mathbf{H}}^{-1}\mathbf{H}_{0+t}\right)\left(\mathbf{\theta} _{t}-\mathbf{\theta}_{\texttt{LLM}}\right), \tag{12}\]
where \(\bar{\mathbf{H}}=\mathbf{H}_{0}+\sum_{t=1}^{T}\alpha_{t}\mathbf{H}_{t}\) accumulates all Hessians and \(\mathbf{H}_{0+t}=\mathbf{H}_{0}+\mathbf{H}_{t}\) is the Hessian plus a regularization matrix. The new merging scheme adds preconditioners \(\bar{\mathbf{H}}^{-1}\mathbf{H}_{0+t}\) to task arithmetic. The preconditioners depend on the Hessians \(\mathbf{H}_{t}\), which is similar to the Fisher-weighting scheme, but here the choice naturally emerges as a consequence of the gradient-mismatch reduction. Nevertheless we can replace \(\mathbf{H}_{t}\) by the diagonal Fisher \(\mathbf{F}_{t}\) of \(\mathbf{\theta}_{t}\), which is often easier to compute and also easier numerically because positive-definiteness is ensured. The matrix \(\mathbf{H}_{0}\) can be set in a similar way, for example, we can use the diagonal Hessian/Fisher of Eq. 7 at \(\mathbf{\theta}_{\texttt{LLM}}\).
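To make the update concrete, below is a minimal sketch of Eq. 12 with diagonal preconditioners, written for flat NumPy parameter vectors. The function name and interface are ours, not an official implementation; the diagonal Fishers are assumed to be precomputed, e.g., with the squared-gradient estimator discussed later.

```python
import numpy as np

def merge(theta_llm, thetas, fishers, f0, alphas):
    """Eq. 12 with diagonal Hessian/Fisher approximations.

    theta_llm : flat parameter vector of the pretrained model
    thetas    : list of fine-tuned parameter vectors theta_t
    fishers   : list of diagonal Fisher/Hessian estimates H_t (same shape)
    f0        : diagonal H_0 (e.g. Fisher of the pretrained model, or ones)
    alphas    : list of scaling factors alpha_t
    """
    hbar = f0 + sum(a * f for a, f in zip(alphas, fishers))      # diagonal H_bar
    delta = np.zeros_like(theta_llm)
    for a, th, f in zip(alphas, thetas, fishers):
        # elementwise H_bar^{-1} H_{0+t} applied to the increment theta_t - theta_llm
        delta += a * (f0 + f) / hbar * (th - theta_llm)
    return theta_llm + delta
```

With `f0` set to ones and all entries of `fishers` set to zero, the preconditioners collapse to the identity and the update reduces to plain task arithmetic, consistent with the special cases listed in Table 1.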
Choosing different settings of \(\alpha_{t}\), \(\mathbf{H}_{0}\), and \(\mathbf{H}_{t}\) can recover many existing schemes as special cases of Eq. 12, as summarized in Table 1. This helps us to understand not only their inaccuracies but also their implicit assumptions. AM and TA can be seen as special cases where the preconditioner \(\mathbf{H}_{t}=\mathbf{0}\). This implies that the gradient mismatch term in Eq. 10 is left as is and the error will be high whenever there is a high gradient mismatch. In contrast, FA and FA1 can be seen as special cases where \(\mathbf{H}_{0}=\mathbf{0}\), which implies that the quadratic regularizer in Eqs. 8 and 9 is not used and therefore the dependence of \(\mathbf{\theta}_{t}\) on \(\mathbf{\theta}_{\texttt{LLM}}\) is ignored. In light of Eq. 12, the Fisher at \(\mathbf{\theta}_{t}-\mathbf{\theta}_{\texttt{LLM}}\) used in FA1 (Daheim et al., 2023) appears to be an odd choice. We note that we can compensate for errors in FA by adding an additional \(\mathbf{S}_{0}\mathbf{\theta}_{\texttt{LLM}}\), similarly to Daheim et al. (2023), but the choice of \(\mathbf{S}_{0}\) is nontrivial: Eq. 12 suggests it to be \(\mathbf{S}_{0}=\mathbf{I}-\sum_{t=1}^{T}\alpha_{t}\bar{\mathbf{H}}^{-1}\mathbf{H}_{0+t}\). Such a choice would compensate for double-counting of information from \(\mathbf{\theta}_{\texttt{LLM}}\) but it is difficult to come up with this choice without the analysis we present here. Finally, Sparse Finetuning (SF) (Ansell et al., 2022) can be seen as a special case with \(\mathbf{H}_{0}=\mathbf{0}\) and \(\mathbf{H}_{t}\) set to a binary sparse mask whose entries are \(1\) only for the parameters with the highest change. Overall, our approach provides a way to understand the effect of such choices and gives a way to improve them by reducing the gradient mismatch.
The principle of gradient matching can be applied to other merging tasks and schemes. For instance, consider removal of a task or dataset from the LLM, which arises when trying to reduce toxic language generation. In such a case, we could fine-tune a model on (hopefully the same) toxic dataset and try to 'subtract' its contribution from the LLM. Formally, we want to remove a dataset \(\mathcal{D}_{t}\subset\mathcal{D}_{\text{Large}}\) and thereby target a model \(\mathbf{\theta}_{\backslash t}\) trained using Eq. 7 but after removing \(\mathcal{D}_{t}\) from \(\mathcal{D}_{\text{Large}}\). Let us denote the corresponding loss by \(\bar{\ell}_{\backslash t}\). We can then fine-tune a model \(\mathbf{\theta}_{t}\) by using Eq. 8, and do task arithmetic: \(\hat{\mathbf{\theta}}_{\backslash t}=\mathbf{\theta}_{\texttt{LLM}}-\alpha_{t}(\mathbf{\theta}_{t}-\mathbf{\theta}_{\texttt{LLM}})\) (Ilharco et al., 2023). As shown in App. A.4, we can use gradient matching to understand and improve this method. We get the following improvement,
\[\hat{\mathbf{\theta}}_{\backslash t}=\mathbf{\theta}_{\texttt{LLM}}-\alpha_{t}\bar{ \mathbf{H}}_{\backslash t}^{-1}\mathbf{H}_{0+t}(\mathbf{\theta}_{t}-\mathbf{\theta}_{ \texttt{LLM}}), \tag{13}\]
where \(\bar{\mathbf{H}}_{\backslash t}=\nabla^{2}\bar{\ell}_{\backslash t}(\mathbf{ \theta}_{\texttt{LLM}})+\delta\mathbf{I}\) is the Hessian of Eq. 7 at \(\mathbf{\theta}_{\texttt{LLM}}\) but without \(\mathcal{D}_{t}\). The expression includes a preconditioner which is expected to improve the performance of task arithmetic. Intriguingly, when applied to data removal in a linear model, this update recovers the celebrated influence function (Jaeckel, 1972; Cook, 1977; Koh and Liang, 2017). We formalize this as follows.
\begin{table}
\begin{tabular}{l l l l l} \hline & \(\alpha_{t}\) & \(\mathbf{H}_{0}\) & \(\mathbf{H}_{t}\) & Drawback \\ \hline Arithmetic Mean (AM) (Wortsman et al., 2022b) & \(1/T\) & \(\mathbf{I}\) & \(\mathbf{0}\) & Uniform weighting \\ Task Arithmetic (TA) (Ilharco et al., 2023) & \(\alpha_{t}\) & \(\mathbf{I}\) & \(\mathbf{0}\) & No preconditioning \\ Fisher averaging (FA) (Matena and Raffel, 2022) & \(\alpha_{t}\) & \(\mathbf{0}\) & \(\text{diag}(\mathbf{F}_{t})\) & \(\mathbf{\theta}_{\texttt{LLM}}\) is ignored \\ FA1 (Daheim et al., 2023) & \(\alpha_{t}\) & \(\mathbf{0}\) & \(\text{diag}(\hat{\mathbf{F}}_{t})\) & Fisher lacks justification \\ Sparse Finetuning (SF) (Ansell et al., 2022) & \(\alpha_{t}\) & \(\mathbf{0}\) & Binary mask & \(\mathbf{H}_{t}\) is a 0-1 matrix \\ \hline \end{tabular}
\end{table}
Table 1: Our new connection reveals implicit assumptions made in existing weighted-averaging schemes: AM uses uniform weighting while TA lacks preconditioning matrices (because \(\mathbf{H}_{t}=0\)). We expect high gradient mismatch for both. Both Fisher averaging methods FA and FA1 use preconditioning but ignore the dependence of \(\mathbf{\theta}_{t}\) on \(\mathbf{\theta}_{\texttt{LLM}}\) (because \(\mathbf{H}_{0}=\mathbf{0}\)).
**Theorem 1**: _For linear regression models with loss \(\bar{\ell}_{t}(\mathbf{\theta})=\frac{1}{2}\|\mathbf{y}_{t}-\mathbf{X}_{t}\mathbf{\theta}\|^{2}\), where \(\mathbf{y}_{t}\) is the output vector and \(\mathbf{X}_{t}\) is the feature matrix, the update in Eq. 13 with \(\alpha_{t}=1\) reduces to the well-known expression of influence by Cook (1977, Eq. 5), which is shown below:_
\[\hat{\mathbf{\theta}}_{\backslash t}-\mathbf{\theta}_{\textit{LLM}}=\bar{\mathbf{H}}_{ \backslash t}^{-1}\mathbf{X}_{t}^{\top}(\mathbf{X}_{t}\mathbf{\theta}_{\textit{LLM} }-\mathbf{y}_{t}). \tag{14}\]
A proof of the result is given in App. A.5. Our result also applies to generic nonlinear models, where the step (13) can be seen as a Newton-like step in a direction \(\bar{\mathbf{H}}_{\backslash t}^{-1}\mathbf{H}_{0+t}(\mathbf{\theta}_{t}-\mathbf{\theta}_{\textit{LLM}})\). We note that there are several ways to rearrange the gradient mismatch term which give rise to different kinds of approximation. It is up to the designer to choose the preconditioner that works best in their setup. The derivation in App. A.4 shows an example in the context of task removal (see Eq. 25).
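As a concrete illustration of Theorem 1, the following NumPy sketch (ours, with synthetic data) compares the influence-function update of Eq. 14 against explicitly retraining a ridge-regularized linear model without \(\mathcal{D}_{t}\); for this model class the removal is exact.

```python
import numpy as np

rng = np.random.default_rng(1)
d, delta = 4, 1.0
X, y = rng.normal(size=(50, d)), rng.normal(size=50)
Xt, yt = X[:10], y[:10]            # the data D_t to be removed
Xr, yr = X[10:], y[10:]            # the remaining data

def ridge(X, y, delta):
    return np.linalg.solve(X.T @ X + delta * np.eye(X.shape[1]), X.T @ y)

theta_llm = ridge(X, y, delta)     # trained on all data (Eq. 7)
theta_wo  = ridge(Xr, yr, delta)   # retrained without D_t (the target)

# Influence-function update of Eq. 14: no retraining needed
H_wo = Xr.T @ Xr + delta * np.eye(d)                  # Hessian without D_t
theta_hat = theta_llm + np.linalg.solve(H_wo, Xt.T @ (Xt @ theta_llm - yt))

print(np.allclose(theta_hat, theta_wo))   # True: removal is exact here
```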
### Gradient Mismatch Reduction as Uncertainty Estimation
Both the gradient-mismatch connection and the new method are closely related to uncertainty estimation via approximate Bayesian methods. We show that Eq. 10 is equivalent to a maximum-a-posteriori (MAP) estimate of the posterior over all data \(\mathcal{D}_{1:T}\) while Eq. 12 is the same but for a posterior approximation obtained with Laplace's method (Laplace, 1774; Tierney and Kadane, 1986; MacKay, 1992). Based on these, we discuss some possible future directions for improvements.
We start by defining the posteriors. Assuming \(p(\mathbf{\theta})=\mathcal{N}(\mathbf{\theta}|\mathbf{\theta}_{\textit{LLM}},\mathbf{H} _{0}^{-1})\) to be the Gaussian prior and \(p(\mathcal{D}_{t}|\mathbf{\theta})\propto e^{-\ell_{t}(\mathbf{\theta})}\) to be a valid likelihood function, we can define a weighted-posterior \(p_{\alpha}(\mathbf{\theta}|\mathcal{D}_{1:T})\) over all datasets, shown below, along with an approximation obtained by Laplace's method,
\[p_{\alpha}(\mathbf{\theta}|\mathcal{D}_{1:T})\propto\ p(\mathbf{\theta})\prod_{t=1}^{ T}e^{-\alpha_{t}\bar{\ell}_{t}(\mathbf{\theta})}\ \approx\ p(\mathbf{\theta})\prod_{t=1}^{T}e^{-\frac{1}{ 2}\alpha_{t}\|\mathbf{\theta}-\mathbf{\theta}_{t}\|^{2}_{\mathbf{H}_{t}}}\propto q_{ \alpha}(\mathbf{\theta}|\mathcal{D}_{1:T}). \tag{15}\]
Here, we use a second-order approximation at \(\mathbf{\theta}_{t}\) to get \(\bar{\ell}_{t}(\mathbf{\theta})\approx\bar{\ell}_{t}(\mathbf{\theta}_{t})+\frac{1}{2 }\|\mathbf{\theta}-\mathbf{\theta}_{t}\|^{2}_{\mathbf{H}_{t}}\). The term \(\bar{\ell}_{t}(\mathbf{\theta}_{t})\) is an irrelevant constant and we get the approximation \(q_{\alpha}(\mathbf{\theta}|\mathcal{D}_{1:T})\). The result below shows that the merged model is the MAP estimate corresponding to the approximate posterior.
**Theorem 2**: _The gradient mismatch equation in Eq. 10 is the stationarity condition of a MAP problem written in terms of the posteriors \(p(\mathbf{\theta}|\mathcal{D}_{t})\) (the equation on the left), while the merged model \(\hat{\mathbf{\theta}}_{1:T}\) in Eq. 12 is the MAP estimate of the Laplace approximation (equation on the right)._
\[\mathbf{\theta}_{1:T}=\arg\max_{\mathbf{\theta}}\ p(\mathbf{\theta})\prod_{t=1}^{T}\left[\frac{p(\mathbf{\theta}|\mathcal{D}_{t})}{p(\mathbf{\theta})}\right]^{\alpha_{t}},\qquad\qquad\hat{\mathbf{\theta}}_{1:T}=\arg\max_{\mathbf{\theta}}\ q_{\alpha}(\mathbf{\theta}|\mathcal{D}_{1:T}). \tag{16}\]
A detailed proof is given in App. A.3. The result relates the gradient-mismatch approach to the posterior distribution and its approximation. The first equation expresses model merging as merging of posteriors \(p(\mathbf{\theta}|\mathcal{D}_{t})\) that are computed on different datasets. With a Bayesian approach, an exact solution can be recovered even when training on separate datasets. This is an instance of the Bayesian committee machine (Tresp, 2000) or Bayesian data fusion (Mutambara, 1998; Durrant-Whyte, 2001; Wu et al., 2022) which are extensively used for Gaussian processes (Deisenroth and Ng, 2015) and which should also be useful when using Neural Tangent Kernel for model merging (Ortiz-Jimenez et al., 2023). The second equation connects existing methods to a Gaussian approximation obtained using Laplace's method. Table 1 therefore suggests that these methods make crude approximations to uncertainty estimates where either the likelihood or the prior in \(q_{\alpha}\) is ignored.
The gradient mismatch term in Eq. 10 arises due to the ratio \(p(\mathbf{\theta}|\mathcal{D}_{t})/p(\mathbf{\theta})\). To understand this, consider the simple case of linear regression. Suppose we learn two separate linear models with loss function \(\bar{\ell}_{t}(\mathbf{\theta})=\frac{1}{2}\|\mathbf{y}_{t}-\mathbf{X}_{t}\mathbf{\theta}\|^{2}\). The gradient and Hessian are \(\nabla\bar{\ell}_{t}(\mathbf{\theta})=\mathbf{X}_{t}^{\top}(\mathbf{X}_{t}\mathbf{\theta}-\mathbf{y}_{t})\) and \(\mathbf{H}_{t}=\mathbf{X}_{t}^{\top}\mathbf{X}_{t}\) respectively. Therefore, the gradient mismatch term can be written as,
\[\nabla\bar{\ell}_{t}(\mathbf{\theta}_{1:T})-\nabla\bar{\ell}_{t}(\mathbf{\theta}_{t})= \mathbf{X}_{t}^{\top}(\mathbf{X}_{t}\mathbf{\theta}_{1:T}-\mathbf{X}_{t}\mathbf{ \theta}_{t})=\mathbf{H}_{t}(\mathbf{\theta}_{1:T}-\mathbf{\theta}_{t})=\left.\nabla \log\frac{p(\mathbf{\theta}|\mathcal{D}_{t})}{p(\mathbf{\theta})}\right|_{\mathbf{\theta}= \mathbf{\theta}_{1:T}}.\]
For linear models, \(p_{\alpha}(\mathbf{\theta}|\mathcal{D}_{t})=q_{\alpha}(\mathbf{\theta}|\mathcal{D}_{t})\) and therefore Taylor's approximation in Eq. 11 is exact. The equation matches Jin et al. (2023, Eq. 1) who use this objective to merge linear parts of a transformer. Our approach extends such efforts to nonlinear problems.
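The exactness of posterior merging in the linear-Gaussian case can be checked directly: multiplying the per-dataset posteriors and dividing out the extra copies of the prior (the left-hand objective in Eq. 16 with \(\alpha_{t}=1\)) recovers the all-data posterior. The sketch below uses our own synthetic data and notation.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 3
H0 = 2.0 * np.eye(d)                       # prior precision, prior mean = 0
data = [(rng.normal(size=(15, d)), rng.normal(size=15)) for _ in range(3)]

# Per-dataset posteriors in natural parameters (precision, precision * mean)
precisions = [H0 + X.T @ X for X, _ in data]
etas       = [X.T @ y for X, y in data]

# Combine: p(theta) * prod_t [ p(theta|D_t) / p(theta) ]
prec_merged = H0 + sum(P - H0 for P in precisions)
mean_merged = np.linalg.solve(prec_merged, sum(etas))

# Posterior computed on all data at once
Xall = np.vstack([X for X, _ in data])
yall = np.concatenate([y for _, y in data])
mean_all = np.linalg.solve(H0 + Xall.T @ Xall, Xall.T @ yall)

print(np.allclose(mean_merged, mean_all))   # True: merging is exact here
```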
The Bayesian connection also gives direct ways to improve model merging and also reduce the computational burden. For example, one way to improve would be to take a few optimization steps
aiming for the MAP estimate of the exact posterior, and then use the current iterate for Taylor's approximation in Eq. 10. Solutions obtained this way will provably get better as the number of steps is increased. This is in contrast with other approaches, for example, by Ortiz-Jimenez et al. (2023) who propose to train in the linearized tangent space which may not always converge to the right solution. Another way to improve is to use better posterior approximations, for example, using variational inference (Graves, 2011; Blundell et al., 2015; Osawa et al., 2019) which is known to yield a more global approximation (Opper and Archambeau, 2009). Nevertheless, in this work we focus on improving merging without retraining and with computationally cheap estimates and leave the iterative optimization as future work.
The Bayesian view also connects to similar efforts in continual learning to avoid catastrophic forgetting (Kirkpatrick et al., 2017) where a Bayesian motivation is used to justify the choice of Fisher-based regularizer (Huszar, 2018). Our contribution essentially gives an extension of such ideas to model merging. Our approach is also connected to Knowledge-Adaptation priors (Khan and Swaroop, 2021) where a variety of adaptation tasks are solved by gradient reconstruction. The connection also justifies the choice of diagonal Fisher in place of the Hessian, which essentially forms a Generalized Gauss-Newton approximation (Schraudolph, 2002; Pascanu and Bengio, 2013; Martens, 2020) of it. In our experiments, we use a Monte-Carlo estimator \(\sum_{i}\left[\nabla_{\mathbf{\theta}}\ell_{i}(\mathbf{\theta})\right]^{2}\) of the diagonal Fisher where \(i\) is summed over all examples in the data. Such estimates can also be obtained during training with Adam (Kingma and Ba, 2015) and provide a good estimate of the Hessian for small minibatch sizes (Khan et al., 2018, Thm. 1). The estimate can be normalized or unnormalized, and it is also possible to use another Fisher estimate. However, our derivation suggests to estimate it on the training data and not a held-out set as mentioned in Yadav et al. (2023).
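For completeness, a plausible PyTorch implementation of the squared-gradient estimator of the diagonal Fisher is sketched below. It is not code released with the paper; the model, loss function, and data loader are placeholders, and looping over minibatches instead of single examples is a further approximation.

```python
import torch

def diagonal_fisher(model, loss_fn, data_loader, device="cpu"):
    """Accumulate sum_i [grad_theta loss_i(theta)]^2 for every parameter."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()
              if p.requires_grad}
    model.eval()
    for inputs, targets in data_loader:
        inputs, targets = inputs.to(device), targets.to(device)
        model.zero_grad()
        loss = loss_fn(model(inputs), targets)   # ideally one example at a time
        loss.backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
    return fisher
```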
## 4 Experiments & Results
We first show the relationship between gradient mismatch and test error for language models in Sec. 4.1. Then, we consider the setting of task addition, and add tasks to a pretrained ViT (Dosovitskiy et al., 2021) (Sec. 4.2) and an LLM (Sec. 4.3). Finally, we consider data removal and remove toxicity and hallucinations from language models (Sec. 4.4). In all experiments, we approximate Hessians using the squared-gradient approximation of the Fisher unless otherwise stated. All models are trained using AdamW (Loshchilov and Hutter, 2019) or a modified version of Adam (Kingma and Ba, 2015) with a decoupled quadratic penalty. Further experimental details can be found in App. C.
\begin{table}
\begin{tabular}{l l l l l l l l} \hline \hline & IMDB & Yelp & RT & SST2 & Amazon & Avg. & True Avg. \\ Parametrization & & & & Accuracy (\(\uparrow\)) & & & \\ \hline TA (\(\alpha_{t}=1\)) & 90.5 & 95.6 & 86.4 & 91.6 & 94.9 & 91.8 & 94.7 \\ \hline Ours & **94.7**(\(\uparrow\)**42) & **97.3**(\(\uparrow\)**17) & **90.2**(\(\uparrow\)**38) & **93.7**(\(\uparrow\)**22) & **96.7**(\(\uparrow\)**13) & **94.5**(\(\uparrow\)**27) & **96.6**(\(\uparrow\)**13) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Reducing gradient mismatch in Eq. 10 when \(\alpha_{t}=1\) is crucial for merging, here outlined for adding four sentiment analysis tasks to RoBERTa trained on IMDB. Avg. denotes the average over individual dataset accuracies and True Avg. denotes the accuracy calculated over all predictions.
Figure 2: Left: We merge models trained on eight image classification tasks with a pretrained ViT and vary \(\alpha_{t}\). Our method performs similarly to TA for smaller \(\alpha_{t}\) but significantly better for higher \(\alpha_{t}\), improving over the best \(\alpha_{t}\) for TA. Right: We add four sentiment analysis tasks to RoBERTa trained on IMDB. Our merging function dominates TA and requires no tuning of scaling factors. We plot the average over individual dataset accuracies.
### Gradient Mismatch & Test Performance
We measure the mismatch between the gradients of a model trained on all data and those of models merged with task arithmetic or with our method (Eq. 12) in the experiment of Sec. 4.3, by computing the norm of the difference of their gradients on the training data. Fig. 1 shows that there is a clear correlation between the test error and gradient mismatch. Reducing the mismatch leads to models with less prediction error, confirming our intuition that it plays a key role in successful model merging. Similarly, Table 2 shows that accounting for the gradient mismatch in Eq. 10 provides large improvements.
### Adding Tasks to Pretrained Vision Transformers
We use a pretrained ViT for image classification and add eight datasets to it: Cars (Krause et al., 2013), DTD (Cimpoi et al., 2014), EuroSAT (Helber et al., 2018), GTSRB (Houben et al., 2013), MNIST (LeCun, 1998), RESISC45 (Cheng et al., 2017), SUN397 (Xiao et al., 2010), and SVHN (Yuval, 2011), replicating the method and datasets used in Ilharco et al. (2023). We use the identity matrix to approximate the Hessian of the pretrained ViT, because the training data is not public, but one might also use squared gradients on similarly distributed data. All task models are trained by fine-tuning the ViT. The results are outlined in the leftmost panel of Fig. 2. Our proposed merging function is much more robust to the choice of scaling factors. For larger factors, task arithmetic even falls below the zero-shot baseline and even though performance also drops for our method, it stays well above this baseline and improves slightly over the best \(\alpha_{t}\) found for task arithmetic.
### Sentiment Classification in NLP
We repeat a similar experiment using RoBERTa (Liu et al., 2019) which follows the BERT architecture (Devlin et al., 2019) and is an encoder-only language model. We first train on IMDB (Maas et al., 2011) (arbitrarily chosen, and any other of the datasets would work, too) to obtain the required task-specific classification layer. We approximate the Hessian of this model using the squared gradients on the training data for the quadratic regularizer. We then use this model to initialize all other models which we train on the polarity version of the Amazon (Zhang et al., 2015), RottenTomatoes (Pang and Lee, 2005), SST2 (Socher et al., 2013), and Yelp (Zhang et al., 2015) datasets respectively.
Table 3 shows that our new method gets closer to the "all-data" target model \(\mathbf{\theta}_{1:T}\) than other merging functions, like averaging and FA. Furthermore, our proposed merging improves over TA even when we tune scaling factors on the test set for TA and not at all for our method. Fig. 2 (right) shows a plot over scaling factors where our method dominates TA, which also falls below the zero-shot baseline of the IMDB model. We further find that not averaging the squared gradients performs better on average for both FA and our method, but for small datasets (SST2) it can be beneficial to average the squared gradients to weight each dataset the same. An important choice in our experiments for FA was how to lower-bound or add a small \(\delta\) to the Fishers to prevent numerical instability. For instance, for \(\mathbf{F}_{\text{avg.}}\) we have found that adding a small delta (e.g., on the order of \(10^{-10}\)) performs multiple points better than clipping to a larger value, such as \(10^{-6}\). To summarize: 1) reducing gradient mismatch improves performance and 2) is crucial for correct scaling to overcome the need for manual tuning of scales. Furthermore, 3) merging with increments of \(\boldsymbol{\theta}_{t}-\boldsymbol{\theta}_{\text{LLM}}\) instead of just \(\boldsymbol{\theta}_{t}\) gives slight improvements and 4) so does scaling by Fisher. 5) Seemingly small details can have a large impact.
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline & IMDB & Yelp & RT & SST2 & Amazon & Avg. & True Avg. \\ Parametrization & & & & Accuracy (\(\uparrow\)) & & & \\ \hline All-data & 94.8 & 97.6 & 91.2 & 94.7 & 96.9 & 95.0 & 96.8 \\ \hline Averaging & 94.4 & 97.0 & 89.1 & 93.6 & 96.2 & 94.1 & 96.1 \\ Fisher Averaging (\(\mathbf{F}_{\text{avg.}}\)) & 94.5 & 97.0 & 89.6 & 93.9 & 96.1 & 94.3 & 96.1 \\ Fisher Averaging (\(\mathbf{F}_{\text{sum}}\)) & **94.8** & 97.2 & 89.9 & 93.1 & 96.6 & 94.3 & 96.5 \\ Task Arithmetic (\(\alpha_{t}=1\)) & 90.5 & 95.6 & 86.4 & 91.6 & 94.9 & 91.8 & 94.7 \\ Task Arithmetic (tuned \(\alpha_{t}\))\({}^{\dagger}\) & 94.3 & 97.2 & 89.6 & 94.5 & 96.4 & 94.4 & 96.3 \\ \hline Ours (\(\mathbf{F}_{\text{avg.}}\)) & 94.4 (\(\uparrow\)0.1) & 97.2 (\(\uparrow\)0.0) & **90.2** (\(\uparrow\)0.0) & **94.6** (\(\uparrow\)0.1) & 96.3 (\(\downarrow\)0.1) & **94.5** (\(\uparrow\)0.1) & 96.3 (\(\downarrow\)0.1) \\ Ours (\(\mathbf{F}_{\text{sum}}\)) & 94.7 (\(\uparrow\)0.4) & **97.3** (\(\uparrow\)0.1) & **90.2** (\(\uparrow\)0.0) & 93.7 (\(\uparrow\)0.0) & **96.7** (\(\uparrow\)0.3) & **94.5** (\(\uparrow\)0.1) & **96.6** (\(\uparrow\)0.3) \\ \hline \hline \end{tabular}
\end{table}
Table 3: We merge four tasks with RoBERTa trained on IMDB. Our merging function shows how reducing gradient mismatch improves performance over previously proposed functions. Optimizing the scaling factors of TA on test data (\({}^{\dagger}\)) cannot recover the performance of our method, indicating that scalar weighting is insufficient. \(\mathbf{F}_{\text{sum}}\) denotes summing squared gradients, \(\mathbf{F}_{\text{avg.}}\) averaging. Changes in brackets are wrt. TA (tuned \(\alpha_{t}\)). Avg. denotes the average of per-dataset accuracies and True Avg. is calculated over all predictions.
### Editing Language Generation Models By Removing Data
We study two settings for removing harmful examples from LLMs: removing data with hallucinations from dialogue models to improve their faithfulness, and removing toxic data.
We first replicate the set-up from Daheim et al. (2023) and train a dialogue model on Wizard-of-Wikipedia (Dinan et al., 2019) which we then train further on synthetic data containing hallucinations using a quadratic penalty. This model is subsequently used for removal from the first one. We found \(1.0\) to be the best scaling factor for both merging functions. We evaluate faithfulness using a critic (Dziri et al., 2022) and \(Q^{2}\) (Honovich et al., 2021), and fluency using BLEU (Papineni et al., 2002). Table 4 shows improvements in terms of both faithfulness metrics with no or minor degradation in BLEU when compared to task arithmetic for two sizes of FlanT5 (Chung et al., 2022).
For unlearning toxicity, we use the set-up from Ilharco et al. (2023). We use GPT2 (Radford et al., 2019) and GPT-J 1.3B (Wang and Komatsuzaki, 2021) and fine-tune each model on the comments with toxicity score \(\geq 0.8\) from Civil Comments (Borkan et al., 2019). For both task arithmetic and our method we introduce a scaling factor because initial experiments with \(\alpha_{t}=1\) showed degenerate generations. Potentially, \(\mathbf{H}_{0}\) is underestimated, as we cannot run the squared-gradient approximation on all training data because it is either unavailable or too computationally expensive; instead we estimate it on WikiText103. We then prompt all models with "I don't care if this is controversial" and a prompt from RealToxicityPrompts (Gehman et al., 2020). Finally, we evaluate the toxicity using Detoxify (Hanu and Unitary team, 2020), and perplexity on WikiText103 (Merity et al., 2017). We classify all generations with score \(\geq 0.2\) as toxic. Table 4 shows that our method reduces toxicity in comparison to TA for both models, and strongly reduces perplexity for GPT2.
## 5 Conclusion
In this paper, we connect the error of the merged model to the gradient mismatch between the individual models that are merged and the 'target model' that merging aims to recover. We use this insight not only to propose new methods for model merging but also to understand existing ones. We also show deep connections to Bayesian inference which point to new directions for further improvements. Since the target model is not available during merging by definition, our proposed merging method reduces the gradient mismatch by a second-order approximation and is therefore tied to the uncertainty of the models, which determines their scaling.
\begin{table}
\begin{tabular}{l l l l l|l l l l} \hline Model & \(\boldsymbol{\theta}\) & \multicolumn{3}{c|}{Toxicity} & Fluency & Model & Fluency & \multicolumn{2}{c}{Hallucination \%} \\ & & 100-Avg. & Num. Toxic & PPL(\(\downarrow\)) & & BLEU (\(\uparrow\)) & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \hline GPT2\({}_{117\text{M}}\) & \(\boldsymbol{\theta}_{\text{LIM}}\) & 11.2 & 15.4 \% & 24.9 & \multicolumn{1}{c}{} & FlanT5\({}_{250\text{M}}\) & 17.3 & 27.5 & 11.7 \\ & TA & 9.8 & 13.1 \% & 30.3 & \multicolumn{1}{c}{} & 18.2 & 13.8 & 7.4 \\ \hline & ours & \(\boldsymbol{9.6}\) (\(\pm\)0.2) & \(\boldsymbol{12.8}\) \% (\(\pm\)0.3) & \(\boldsymbol{26.9}\) (\(\pm\)3.4) & & \(\boldsymbol{18.2}\) (\(\pm\)) & \(\boldsymbol{12.8}\) (\(\pm\)1.0) & \(\boldsymbol{7.0}\) (\(\pm\)0.0) \\ \hline GPT-J\({}_{13\text{B}}\) & \(\boldsymbol{\theta}_{\text{LIM}}\) & 11.9 & 16.6 \% & 12.6 & FlanT5\({}_{250\text{M}}\) & 18.4 & 31.5 & 12.8 \\ & TA & 10.7 & 14.5 \% & \(\boldsymbol{12.7}\) & & \(\boldsymbol{18.6}\) & 11.8 & 7.7 \\ & ours & \(\boldsymbol{10.2}\) (\(\pm\)0.5) & \(\boldsymbol{14.0}\) \% (\(\pm\)0.5) & 12.8 (\(\pm\)0.1) & & \(\boldsymbol{18.0}\) (\(\pm\)0.6) & \(\boldsymbol{8.8}\) (\(\pm\)1.30) & \(\boldsymbol{5.0}\) (\(\pm\)2.7) \\ \hline \end{tabular}
\end{table}
Table 4: Reducing gradient mismatch also improves removal of undesirable behaviour from LLMs.
Our merging method shows improvements over previously proposed methods, such as task arithmetic, averaging, and Fisher-weighted averaging on CV and NLP tasks, both for task addition, where it reduces the gap to the target model trained on all data, and removal, for example, for removing toxicity or hallucinations from LLMs. Notably, the proposed method is much more robust to the choice of scaling factors as scaling naturally appears in its derivation without the need for hyper-parameter tuning. We find our approach to scale well to different Transformer architectures.
By providing a new interpretation of model merging as gradient matching, we hope to contribute to a better understanding of model merging and improve its performance.
## Acknowledgements
This project has received funding by the German Federal Ministry of Education and Research and the Hessian Ministry of Higher Education, Research, Science and the Arts within their joint support of the National Research Center for Applied Cybersecurity ATHENE. This work is supported by the Bayes duality project, JST CREST Grant Number JPMJCR2112.
|
2303.14399
|
A numeric study of power expansions around singular points of algebraic
functions, their radii of convergence, and accuracy profiles
|
An efficient method of computing power expansions of algebraic functions is
the method of Kung and Traub and is based on exact arithmetic. This paper shows
a numeric approach is both feasible and accurate while also introducing a
performance improvement to Kung and Traub's method based on the ramification
extent of the expansions. A new method is then described for computing radii of
convergence using a series comparison test. Series accuracies are then fitted
to a simple log-linear function in their domain of convergence and found to
have low variance. Algebraic functions up to degree 50 were analyzed and timed.
A consequence of this work provided a simple method of computing the Riemann
surface genus and was used as a cycle check-sum. Mathematica ver. 13.2 was used
to acquire and analyze the data on a 4.0 GHz quad-core desktop computer.
|
Dominic C. Milioto
|
2023-03-25T08:37:07Z
|
http://arxiv.org/abs/2303.14399v2
|
# A numeric study of power expansions around singular points of algebraic functions, their radii of convergence, and accuracy profiles
###### Abstract.
An efficient method of computing power expansions of algebraic functions is the method of Kung and Traub [4] and is based on exact arithmetic. This paper shows a numeric approach is both feasible and accurate while also introducing a performance improvement to Kung and Traub's method based on the ramification extent of the expansions. A new method is then described for computing radii of convergence using a series comparison test. Series accuracies are then fitted to a simple log-linear function in their domain of convergence and found to have low variance. Algebraic functions up to degree 50 were analyzed and timed. A consequence of this work provided a simple method of computing the Riemann surface genus and was used as a cycle check-sum. Mathematica ver. 13.2 was used to acquire and analyze the data on a 4.0 GHz quad-core desktop computer.
Key words and phrases: Puiseux series, fractional power series, algebraic functions, radius of convergence, Newton polygon. 2010 Mathematics Subject Classification: Primary 1401; Secondary 1404
## 1. Introduction
The objective of this paper is five-fold:
1. Provide a simple method of analyzing the branching geometry of algebraic functions,
2. Describe a new method of determining radii of convergence,
3. Construct accuracy and order functions of series expansions around singular points,
4. Analyze test cases and summarize convergence, accuracy, and timing results,
5. Provide an accessible teaching aid of this subject to interested readers.
The functions studied in this paper are algebraic functions \(w(z)\) defined implicitly by the irreducible \(n\)-degree expression in \(w\):
\[f(z,w)=a_{0}(z)+a_{1}(z)w+a_{2}(z)w^{2}+\cdots+a_{n}(z)w^{n}=0 \tag{1}\]
with \(z\) and \(w\) complex variables and the coefficients, \(a_{i}(z)\), polynomials in \(z\) with rational coefficients. By the Implicit Function Theorem, (1) defines, locally, an analytic function \(w(z)\) when \(\dfrac{\partial f}{\partial w}\neq 0\). The solution set of (1) defines an algebraic curve, \(w(z)\), and it is known from the general theory of algebraic functions that \(w(z)\) can be described in a disk centered at \(z_{0}\) by \(n\) fractional power series called Puiseux series with radii of convergence extending at least to the distance to the nearest singular point. In an earlier paper by this author [7], the method of Kung and Traub was used to compute the power expansions, and a numeric integration method was described to compute their radii of convergence. In this paper, a new method to compute radii of convergence is described, and the accuracy of the series is studied in their domain of convergence.
A Puiseux expansion of (1) at a point \(z_{0}\) is a set of \(n\) fractional power expansion in \(z\) given by
\[\{P_{i}(z)\}=\{w_{i}(z)\}=\sum_{k=r}^{\infty}a_{k}(z-z_{0})^{\frac{m_{k}}{c}},\quad i=1,2,\cdots,n \tag{2}\]
where \(z\) lies in the domain of convergence of the series. For finite \(z_{0}\), \(f\) is translated via \(f(z+z_{0},w)\) then \(w(z)\) expanded as the set \(\{w_{i}(z)\}=\sum_{k=r}^{\infty}a_{k}z^{\frac{m_{k}}{c}}\) with \(z^{\frac{m_{k}}{c}}\) interpreted as principal-valued, and the series evaluated at the relative coordinate \(z_{r}=z-z_{0}\).
In the case of an expansion at infinity, \(f\) is translated via \(g(z,w)=z^{\delta}f\left(\frac{1}{z},w\right)\) where \(\delta\) is the largest exponent of \(z\) in \(f(z,w)\). Then an expansion of \(w(z)\) at the origin in terms of \(g(z,w)\) is an expansion of \(w(z)\) at infinity with the series evaluated at \(z_{r}=\frac{1}{z}\). Section 14.4 is an example of an expansion at infinity.
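The translation used for expansions at infinity is mechanical and can be reproduced with any computer algebra system. The SymPy sketch below is our own illustration on an arbitrary example curve; the computations in this paper are done in Mathematica.

```python
import sympy as sp

z, w = sp.symbols("z w")
f = (1 - z**3) + z * w + (2 + z**2) * w**2       # an illustrative f(z, w)

delta = sp.Poly(f, z).degree()                    # largest power of z in f(z, w)
g = sp.expand(z**delta * f.subs(z, 1 / z))        # g(z, w) = z^delta * f(1/z, w)
print(g)   # expansions of w at z = 0 for g are expansions of w at infinity for f
```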
The derivative of \(w(z)\) at a point \((z,w)=(p,q)\) can be computed as
\[\frac{dw}{dz}=\lim_{(z,w)\to(p,q)}\left(-\frac{f_{z}(z,w)}{f_{w}(z,w)}\right) \tag{3}\]
when this limit exist. A singularity \(\{z_{s},w_{s}\}\) of \(w(z)\) is a point where the limit does not exist and in this paper, the term "singular point" refers to the \(z\)-component of the singularity.
## 2. Conventions used in this paper
1. Computations are performed with a working precision of 1000 digits. Exceptions to this are the accuracy and order functions, which do not need this level of precision and are computed only to machine precision. All reported data, however, are shown with six or fewer digits for brevity.
2. Accuracy is the number of accurate digits to the right of a decimal point. Precision is the total number of accurate digits in a number. For a number \(x\), \(p=a+\log_{10}|x|\) with \(p\) the precision and \(a\) the accuracy.
3. Finite singular points are the zeros of the resultant of \(f\) with \(\frac{\partial f}{\partial w}\) and are arranged into three lists:
 1. **The singular list:** This is a list of the finite singular points in order of increasing distance from the origin. Conjugate singular points are ordered real part first then imaginary part. The singular points are labeled \(s_{1}\) through \(s_{T}\) with \(T\) the total number in the list,
 2. **The singular sequence:** For each singular point \(s_{b}\), the remaining singular points are ordered in increasing distance from \(s_{b}\),
 3. **The comparison sequence:** This is a truncated singular sequence which is used to identify convergence-limiting singular points (CLSPs).
4. Each singular point is assigned a circular perimeter with radius equal to \(1/3\) the distance to the nearest singular point. This circle is called the singular perimeter.
5. A \(k\)-cycled branch refers to a part of \(w(z)\), \(k\)-valued and represented by \(k\) Puiseux series with radius of convergence \(R\).
6. The expansions of (1) at a singular point \(s_{b}\) is a set of \(n\) Puiseux expansions, \(\{P_{i}(z)\}\) in terms of \(z^{1/c}\) where \(c\) is a positive integer and can be different for different series in the set. \(c\) is both the cycle size of the series and cycle size of the branch represented by the series. For example, the power series \(\sum_{k=r}^{\infty}a_{k}z^{\frac{m_{k}}{\beta}}\) has a cycle size of \(3\) and represents a \(3\)-cycle branch: The branch has three coverings over a deleted neighborhood of the expansion center and is represented by three such Puiseux expansions making a \(3\)-cycle conjugate set of power expansions. The geometry of the branch is similar to the geometry of \(z^{\pm 1/3}\).
The set of \(n\) expansions are numbered \(1\) through \(n\) with conjugate members sequentially numbered. For example, a \(15\)-degree function may have a \(5\)-cycle branch with series numbers \(\{7,8,9,10,11\}\), a \(1\)-cycle branch with series number \(\{12\}\), a \(3\)-cycle branch with members \(\{13,14,15\}\), and a \(6\)-cycle with members \(\{1,2,3,4,5,6\}\).
7. Branches of algebraic functions are categorized according to their algebraic and geometric morphologies into six types: \(T,E,F_{p}^{q},V_{p}^{q},P_{p}^{q},L^{q}\). The \(T,E\) and \(L^{q}\) branches are \(1\)-cycle branches, and \(F_{p}^{q},V_{p}^{q}\) and \(P_{p}^{q}\) are \(p\)-cycle branches. Appendix 16 describes each branch type.
8. Reference is made to a base singular point \(s_{b}\). This refers to a center of expansion of a Puiseux series with \(s_{b}\) a singular point. In the procedure described below, the branch surfaces about \(s_{b}\) are analytically continued over other singular points \(s_{n}\) in order of increasing distance from the base singular point until the nearest convergence-limiting singular point (CLSP) is encountered. Another closely related term is the impinging singular point or ISP of a branch sheet. The ISP of a single-valued sheet of a multivalued branch is the nearest singular point impeding the analytic continuity of the branch surface. The nearest ISP of all branch sheets in a conjugate set is the CLSP for the branch and establishes the radius of convergence of their power expansions. The CLSP of a conjugate set is not unique as multiple (conjugate) singular points may impinge the analytic continuity of a branch sheet. In these cases, the first member in the singular sequence is selected as the CLSP.
9. \(R\) is a positive real number representing the radius of convergence of a power series centered at a singular point \(s_{b}\). The value of \(R\) is expressed in terms of the associated CLSP. For example, if a power expansion has a center at the tenth singular point \(s_{10}\), and its CLSP was found to be \(s_{25}\), then \(R=|s_{10}-s_{25}|\). This notation is presented as the exact symbolic expression for radius of convergence.
10. The Puiseux expansions of \(w(z)\) at a point \(s_{b}\) are grouped into conjugate classes. For example, a \(5\)-cycle branch of \(w(z)\) is expanded into five Puiseux series in powers of \(z^{1/5}\), one series for each single-valued sheet of the branch. These five series make up a single \(5\)-cycle conjugate class. The sum of the conjugate classes at a point \(z\) is always equal to the degree of the function in \(w\). A power expansion of a \(10\)-degree function consists of the set \(\{P_{i}\}\) such that the sum of the conjugate types is \(10\). This could consist of a single \(10\)-cycle conjugate class containing ten series, or three different \(3\)-cycle conjugate classes and a single \(1\)-cycle conjugate class, or some other combination of conjugate classes adding up to \(10\). One member of each conjugate class is selected as the class generator. Each series member in a conjugate class can be generated by conjugation of a member of the class as follows: Let \[P_{k}(z)=\sum_{i=r}^{\infty}a_{i}z^{\frac{m_{i}}{c}}\] (4) be the \(k\)-th member of a \(c\)-cycle conjugate class of Puiseux series where all \(\frac{m_{i}}{c}\) exponents are placed under a least common denominator \(c\) and \(z^{\frac{m_{i}}{c}}\) is the principal-valued root. Then the \(c\) members of this conjugate class are generated via conjugation of (4) as follows: \[P_{j}(z)=\sum_{k=r}^{\infty}a_{k}\left(e^{\frac{2\pi ij}{c}}\right)^{m_{k}}z^{\frac{m_{k}}{c}};\quad j=0,1,\cdots,c-1.\] (5)
11. The order of an \(n\)-cycle series of length \(l\) is denoted by \(\mathcal{O}\) and is the highest integer power of \(z\) in the series nearest to the exponent of the \(l\)'th term. Accuracy measurements in this study are done
relative to a series order and not to a specific number of series terms and therefore accuracy results of multiple series will often include series of different lengths. For example, 500 terms of a 1-cycle series can have an order of 500 whereas 500 terms of a 20-cycle series may only attain an order of 30 if the expansion has many fractional exponents in the series.
12. If an expansion of an \(n\)-degree function produces \(n\) series in terms of \(z^{1/n}\), the function fully-ramifies into a single \(n\)-cycle branch producing \(n\) series belonging to an \(n\)-cycle conjugate class. The branch is morphologically similar to \(z^{\pm 1/n}\). An \(n\)-degree function minimally-ramifies at a singular point if it ramifies into a single 2-cycle branch and \((n-2)\) single-cycle branches.
13. Absolute coordinates and relative coordinates in the z-plane are used. An absolute point \(z_{a}\) is a point in the z-plane. A relative point \(z_{r}\) is a point relative to a finite singular point \(s_{b}\) given by \[z_{a}=s_{b}+z_{r},\] (6) and in the case of an expansion at infinity, \[z_{a}=\frac{1}{z_{r}}.\] This is necessary for the following reasons:
 1. Power series are generated relative to an expansion center, \(s_{b}\). For example, if the expansion center is \(s_{b}=1+i\) and the series is evaluated at \(z_{r}=0.25\), the accuracy is determined by computing a higher precision branch value \(v_{b}\) by first solving for the roots \(\{w_{i}\}\) of \(f(1.25+i,w)=0\) and identifying which root \(v_{b}\) corresponds to the series value at \(z_{r}\).
 2. The point \(D\) in Figure 3 is an absolute point. If \(s_{b}=2+2i\) and \(D=1.7-1.8i\), in order to evaluate a series expansion at \(D\), the absolute point \(D=1.7-1.8i\) is first converted to the relative point \(z_{r}=1.7-1.8i-(2+2i)=-0.3-3.8i\).
 3. A series is evaluated over a list of points \(C=\{z_{i}\}\) on a circle around the singular point \(s_{b}\) to compute accuracy profiles. The points \(C=\{z_{i}\}\) are relative coordinates to \(s_{b}\). The points in \(C\) are translated to absolute coordinates as \(C_{a}=\{s_{b}+z_{i}\}\), then the roots \(\{w_{i}\}\) of \(f(w,C_{a})=0\) are computed to a higher precision and corresponding branch values identified to determine series accuracies.
 4. An expansion at infinity is generated relative to an expansion at zero, and the associated power expansions use relative coordinates \(z_{r}=\frac{1}{z_{a}}\). See Section 14.4.
14. The series comparison test used to determine CLSPs relies on comparing a base series value \(v_{s}\) of a branch expansion at a point \(D\) in Figure 3 to a list of series values \(\{u_{i}\}\) computed at the next nearest singular point \(s_{n}\) at point \(D\). Since this involves comparing numeric values at finite precision, a separation tolerance \(s_{t}\) is used to identify a match. \(s_{t}\) is 1/10'th the minimum separation of the members in \(\{u_{i}\}\). If \(|v_{s}-u_{k}|<s_{t}\), then the \(k\)'th series value at \(s_{n}\) is a match for \(v_{s}\). If this tolerance is exceeded, as in the case of branch values being very close to one another, an insufficient number of terms in the series, or multiple matches, the series comparison test halts and the analysis reverts to the numerical integration method. This is described further in Section 10.2.
15. An accuracy profile of a conjugate set of series is generated by computing the accuracy of generator series over a region in their domain of convergence. For each value \(|z|<R\), the accuracy is determined by comparing the series results to more precise roots \(\{w_{i}\}\) of \(f(z,w)=0\). These roots are computed with Mathematica's NSolve function. This accuracy is then fitted to an accuracy function \(A(r_{f},o)\).
16. Table 1 is a list of symbols used in this paper.
For example, let \(z=r_{f}Re^{it}\) with \(R\) the radius of convergence of a series. \(v_{s}\) is the value of the series at \(z\). \(v_{b}\) is the value of the corresponding branch at \(z\) computed to a higher precision than the series precision. \(c_{e}=|v_{b}-v_{s}|\) is the comparison error. The accuracy of the series then becomes the negative of the exponent of \(c_{e}\).
Generator series are evaluated in their domain of convergence along circular domains \(|z|<R\) and the accuracy determined by comparing to the corresponding member in the set \(\{w_{i}\}\). The accuracies are then fitted to an accuracy function \(A(r_{f},o)\) which gives the expected accuracy, \(e_{a}\), of the series as a function of the radial ratio \(r_{f}\) and order \(o\) of the series. Solving for \(o\) in \(e_{a}=A(r_{f},o)\) gives \(o=O(r_{f},e_{a})\) which is the order function for estimating the order of a series needed for a desired accuracy \(e_{a}\) at \(z=r_{f}Re^{it}\).
The genus, \(\mathcal{G}\), of \(w(z)\) is easily calculated via the Riemann-Hurwitz formula once conjugate classes at all singular points are found. The Riemann-Hurwitz sum \(\mathcal{K}\) must be an even number and serves as a necessary (but not sufficient) check-sum of the overall cycle geometry. See Section 13.
\begin{table}
\begin{tabular}{|c|l|} \hline Symbol & Description \\ \hline \(m\) & Number of series terms used in a calculation \\ \(M\) & Maximum number of terms in a series \\ \(R\) & Radius of convergence \\ \(r_{f}\) & A positive rational number between 0 and 1 \\ \(r\) & Radius of \(z=re^{it}\) \\ \(s_{a}\) & Series accuracy \\ \(A(r_{f},o)\) & Accuracy function. (Section 12) \\ \(e_{a}\) & Expected accuracy at \(|z|<R\) given by \(A(r_{f},o)\) \\ \(\mathcal{O}(r_{f},e_{a})\) & Order function (Section 11) \\ \((a,b,c,d)\) & Coefficients of accuracy profile function \\ \(s_{b}\) & Singular point at center of expansion \\ \(s_{n}\) & Singular point \(n\) in the singular list \\ \(z_{a}\) & Absolute coordinate value of \(z\) \\ \(z_{r}\) & Relative coordinate of \(z_{a}\) with respect to a singular point \\ \(\{w_{i}\}\) & \(n\) values of \(w(z_{a})\) computed to high precision \\ \(\{v_{b}\}\) & Particular set of branch values with \(\{v_{b}\}\subseteq\{w_{i}\}\) \\ \(\{v_{s}\}\) & Branch series values at \(z_{r}\) corresponding to the set \(\{v_{b}\}\) \\ \(c_{e}\) & Comparison error given by \(c_{e}=|w_{i}-v_{s}|\) \\ \(r_{t}\) & Residual tolerance. (Section 5) \\ \(s_{t}\) & Separation tolerance. (Section 10.2) \\ \(m_{s}\) & Minimum separation. (Section 10.2) \\ \(s_{f}\) & Separation factor. (Section 10.2) \\ \(c_{z}\) & Coefficient zero. (Section 3) \\ \(\{c_{i}\}\) & Roots to the polygonal characteristic equation \\ \(N_{zm}\) & Maximum number of zero modular terms. (Section 6) \\ \(\mathcal{G}\) & Riemann surface genus. (Section 13) \\ \(\mathcal{K}\) & Riemann-Hurwitz sum (Section 13) \\ \hline \end{tabular}
\end{table}
Table 1. Symbols
17. Power expansions are generated by the Newton Polygon method [4]. This algorithm has two types of function iterations:
1. **Polygon iteration:** The first step in the algorithm is to create a Newton polygon establishing the initial terms of each expansion. If there are multiple roots in the resulting characteristic equation, a new Newton polygon is created by iteration via the expression
\[f_{2}=z^{-\beta_{i}}f\left(z,z^{\lambda_{i}}(w+c)\right) \tag{7}\]
and a second Newton polygon is created for \(f_{2}\). If the resulting characteristic equation has multiple roots, a polygon iterate \(f_{3}\) of \(f_{2}\) is created, and so on, until the characteristic roots are simple. The substitution \(f\left(z,z^{\lambda_{i}}(w+c)\right)\) can cause numerical errors if not pre-processed beforehand. See Section 3.
2. **Newton Iteration:** Upon obtaining simple characteristic roots, the final polygon iterate \(f_{n}\), after two additional transformations, is iterated by a Newton-like iteration to produce the desired number of expansion terms via the expression \[w_{j+1}=w_{j}-\mathrm{mod}\left(\frac{\overline{f}(z,w_{j})}{\overline{f}_{w}(z,w_{j})},z^{2^{(j+1)}}\right);\quad w_{0}=c_{i}.\] (8) In the case of fractional polynomial solutions, the modular function continues to return zero after reaching the polynomial. Finite polynomial solutions however have to be distinguished from infinite solutions with extremely large gaps in exponents between successive series terms, which would also return zero for a (often small) number of iterations. This is done by setting the maximum number of modular zeros, \(N_{zm}\), to a large but manageable number. In this case, \(N_{zm}\) was set to 15. See _Determining radii of convergence of fractional power expansions around singular points of algebraic functions_ for additional information about these concepts.
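The doubling behaviour of the Newton-like iteration (8) is easy to reproduce in the unramified case. The SymPy sketch below is our own toy example (it omits the polygon iteration and the modular bookkeeping for fractional exponents); each pass doubles the number of correct series terms.

```python
import sympy as sp

z, w = sp.symbols("z w")
f = w**2 - (1 + z)                  # simple example; characteristic root c = 1
fw = sp.diff(f, w)

wj = sp.Integer(1)                  # w_0 = c
for j in range(4):
    order = 2 ** (j + 1)
    corr = f.subs(w, wj) / fw.subs(w, wj)
    # truncate the update modulo z^(2^(j+1)), mimicking (8)
    wj = sp.series(wj - corr, z, 0, order).removeO()

print(sp.expand(wj))                          # truncated series for sqrt(1 + z)
print(sp.series(sp.sqrt(1 + z), z, 0, 8))     # reference expansion for comparison
```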
This study is organized as follows:
1. Compute the singular points to 1000 digits of precision,
2. Compute conjugate classes at all singular points,
3. Compute at least 1000 terms of each branch expansion at a base singular point \(s_{b}\) with a working precision sufficient to obtain series with at least 900 digits of precision,
4. Compute expansions for all singular points in the comparison sequence to at least 100 terms,
5. Estimate radius of convergence for each branch around \(s_{b}\) using the Root Test,
6. Compute CLSPs via the comparison test and integration test,
7. Compute the accuracy function \(A(r_{f},o)\) and order function \(O(r_{f},e_{a})\),
8. Generate convergence, accuracy, and timing data for six test functions.
## 3. Precision of computations
The precision of a series is limited by the precision of the singular points as well as the reduction in precision incurred by various steps in the Newton polygon procedure.
Figure 1 is a precision plot of the five generator series for Test Case 1 and exhibits a reduction in precision after each Newton iteration. For example, the 1-cycle \(T\) branch shown as the purple line begins with 1000 digits of precision dropping to 918 digits at the 1024'th term. The precision of each series is therefore dependent on the the number of terms used. If the \(1T\) series was evaluated very close to its center of expansion, the accuracy could rapidly increase as additional terms are added to the computation. At some point, the precision of the results could approach the maximum precision of the series and no longer increase as more terms are used leading to a case in which the precision of the comparison error \(c_{e}=|w_{i}-v_{s}|\) drops to zero. This would skew the accuracy results and subsequently the fit functions \(A(r_{f},o)\) and \(O(r_{f},e_{a})\). To avoid this possibility, \(c_{e}\) values with zero precision are omitted from the accuracy results.
The Newton polygon algorithm produces translated functions \(f(z+s_{i},w)\) and \(f(z,w+c_{i})\). These substitutions can create coefficients which are actually zero but due to finite numerical precisions, result in very small residual values. Two residual conditions arise which are processed in the following order:
1. **R1:** A singular point can be a zero to a coefficient of the transformed function \(f(z+s_{i},w)\). These are called coefficient zeros denoted by the symbol \(c_{z}\). Since the singular points are computed to a finite number of digits, coefficients which are actually zero can have very small non-zero residues. If not eliminated, this would cause incorrect polygon calculations. The two sets \(\{p_{i}\}\) and \(\{q_{i}\}\), described below, are such singular points. There may however exist other singular point zeros. \(f(z,w)\) is pre-processed to remove these zero coefficients before the substitution \(f(z+s_{i},w)\).
2. **R2:** Roots \(\{c_{i}\}\), of the polygonal characteristic equations are substituted into the polygonal iterates as \(f\left(z,z^{\lambda}(w+c_{i})\right)\) and likewise results in zero coefficients which may have very small residues and must be removed. Similar to the **R1** pre-processing, these coefficients are first identified and removed prior to the substitution \(w\to w+c_{i}\).
### Additional precision issues
1. Singular points with very small absolute values can lead to very small coefficients in the translated function \(f(z+s_{i},w)\) if the function has high powers of \(z\). For example, if \(s_{i}=1/1000\), then the substitution \(z\to z+s_{i}\) into \(z^{50}\) leads to a coefficient on the order of \(10^{-150}\), and this small value must not be lost due to inadequate numerical precision, as doing so would adversely affect the polygonal iteration step of the Newton polygon algorithm and produce incorrect initial segments. Since the test cases below were run with an average precision of 1000, this particular case would be correctly processed. However, there exist functions with arbitrarily small singular sizes which would be mis-handled. The singular size is therefore carefully monitored, and if it exceeds the precision limitation of the calculation, the procedure is halted.
2. **Multiple polygon iterations:** Each polygon iterate of (7) can reduce the precision of the resulting function iterate \(f_{i}\).
3. **Limitations of Mathematica's NSolve function:** Roots returned by NSolve are limited in precision to the precision of the input equations. If the equations are at 1000 digits of precision, then the maximum precision of the roots is 1000. However, the precision of the roots returned by NSolve may be less than the precision of the equations. For example, the roots of \(1/2+x^{2}\) are returned to 1000 digits when the precision of the equation is set to 1000, but if the precision of \(1/4+x+x^{2}\) is set to 1000, NSolve returns roots to only 500 digits of precision, since its double root at \(x=-1/2\) halves the attainable precision.
## 4. Computing the singular points
The finite singular points are computed by solving for the zeros of the resultant of \(f\) with \(\dfrac{\partial f}{\partial w}\) using Mathematica's NSolve function. This computation is CPU-intensive when \(f\) is of high degree and the coefficients \(c_{i}(z)\) are non-sparse and of high degree. The 50-degree function studied in Test Case 5 took 3.8 hours to compute 4584 singular points to 1000 digits of precision, whereas a random 20-degree function with 156 singular points and low-degree coefficients takes about one second.
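A minimal sketch of this step in Mathematica is given below; the function name and argument list are illustrative rather than the code used for the test cases.
```
(* Finite singular points as zeros of the resultant of f with its
   w-derivative, computed to the requested working precision. *)
singularPoints[f_, z_, w_, prec_: 1000] :=
  z /. NSolve[Resultant[f, D[f, w], w] == 0, z, WorkingPrecision -> prec]
```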
Figure 1. Precision Profile Plot
There are two sets of singular points by inspection:
1. Roots of \(a_{0}(z)\) if \(a_{1}(z)=0\): These are the set \(\{q_{i}\}\),
2. Roots of \(a_{n}(z)\): These are the set of poles \(\{p_{i}\}\).
## 5. Computing initial terms and identifying conjugate class membership
The method of Kung and Traub [4] implements Newton polygons to generate the initial terms (segments) of each power expansion. See also [7] for more information about Newton polygons. For a generic polynomial, computing the initial terms requires only a few iterations of the Newton polygon and is executed quickly. Even the 50-degree polynomial in Test Case 5 required at most two seconds to compute the initial segments at a singular point.
However, in many cases, not all initial segments require further expanding. Rather, in this paper the initial terms are first used to identify the cycle size of each expansion and their conjugate class membership. Class membership is obvious from the initial segment exponents when there are distinct multi-cycles. In the case of multiple \(k\)-cycle segments, membership is determined by conjugating the initial terms in order to determine which expansions belong to each conjugate set. One member of each class is selected as the class generator and further expanded via Kung and Traub's method of iteration.
Consider an expansion at the origin of the following 12-degree function:
\[f(z,w)=w^{4}z\left(a\left(1-w^{2}\left(a^{*}\right)^{2}\right)\right)^{4}-\left(a^{*}\left(a^{2}-w^{2}\right)\right)^{4};\quad a=2/3+i/4, \tag{9}\]
and the initial terms obtained from the Newton Polygon step given by (10). Since there are 12 expansions all of which are 4-cycle, there are three 4-cycle conjugate classes of branch expansions:
\[P_{1}(z) =-0.667-0.25i-(0.28+0.244i)z^{1/4} \tag{10}\] \[P_{2}(z) =-0.667-0.25i-(0.244-0.28i)z^{1/4}\] \[P_{3}(z) =-0.667-0.25i+(0.244-0.28i)z^{1/4}\] \[P_{4}(z) =-0.667-0.25i+(0.28+0.244i)z^{1/4}\] \[P_{5}(z) =0.667+0.25i-(0.28+0.244i)z^{1/4}\] \[P_{6}(z) =0.667+0.25i-(0.244-0.28i)z^{1/4}\] \[P_{7}(z) =0.667+0.25i+(0.244-0.28i)z^{1/4}\] \[P_{8}(z) =0.667+0.25i+(0.28+0.244i)z^{1/4}\] \[P_{9}(z) =-\frac{1.973}{z^{1/4}}\] \[P_{10}(z) =-\frac{1.973i}{z^{1/4}}\] \[P_{11}(z) =\frac{1.973i}{z^{1/4}}\] \[P_{12}(z) =\frac{1.973}{z^{1/4}}.\]
In order to determine which expansions belong to each 4-cycle conjugate class, segment members are conjugated. Consider:
\[P_{1}(z)=(-0.666667-0.25i)-(0.2799+0.244276i)z^{1/4}.\]
Conjugation of \(P_{1}\) produces the following list of members:
\[\begin{array}{l}(-0.666667-0.25i)-(0.2799+0.244276i)z^{1/4}\\ (-0.666667-0.25i)+(0.244276-0.2799i)z^{1/4}\\ (-0.666667-0.25i)+(0.2799+0.244276i)z^{1/4}\\ (-0.666667-0.25i)-(0.244276-0.2799i)z^{1/4},\end{array} \tag{11}\]
and these are \(P_{1}\) through \(P_{4}\). Therefore, \(P_{1}\) through \(P_{4}\) are the four members of a 4-cycle conjugate class \(1V_{4}^{1}\), and the series numbers for this set are \(\{1,2,3,4\}\). Conjugating expression \(P_{5}(z)\) in (10) gives the next four series, \(\{5,6,7,8\}\), making up a second 4-cycle class \(2V_{4}^{1}\), and conjugating \(P_{9}\) gives the next four series, \(\{9,10,11,12\}\), making up a third 4-cycle conjugate set \(3P_{4}^{-1}\). Series 1, 5, and 9 are selected as the generators of the three conjugate classes and further expanded via Newton iteration. Once the desired number of terms for each generator series has been computed, the full 12 expansions around this singular point can be generated by conjugating the generator series. Since conjugation is much faster than Newton iteration, computing the 12 series this way is faster than generating each member separately.
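Conjugation itself is a simple coefficient rotation: replacing \(z^{1/d}\) by \(e^{2\pi i/d}z^{1/d}\) multiplies the coefficient of \(z^{m/d}\) by \(e^{2\pi i m/d}\). The following is a minimal sketch, with illustrative names and a segment represented as exponent-coefficient pairs; it is not the author's code.
```
(* Conjugates of a d-cycle segment given as {exponent, coefficient} pairs. *)
conjugates[segment_, d_] := Table[
   {#[[1]], #[[2]] Exp[2 Pi I #[[1]] k]} & /@ segment, {k, 0, d - 1}];

(* The P1 segment of (10); its conjugates reproduce the four members in (11). *)
p1 = {{0, -2/3 - I/4}, {1/4, -(0.2799 + 0.244276 I)}};
conjugates[p1, 4]
```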
Computing 1024 terms of each generator series at 1000 digits of precision took 3.4 minutes. Conjugation of all generators took one second. Compared to expanding all initial segments, this represents a four-fold reduction in execution time. The performance gain is dependent upon the ramification extent at the expansion center with minimal gain obtained with minimal ramification.
## 6. Expanding the initial segments via Newton-like iteration
The following is a brief summary of the Kung and Traub method of iterating the initial series segments. For more information, interested readers are referred to the author's website, Examples of power expansions around singular points of algebraic functions, which includes worked examples.
Consider the following function from the website:
\[f(z,w)=(z+z^{2})+(1+z)w+w^{2}=0. \tag{12}\]
Processing this function through the Newton Polygons twice produces the first two terms (initial segments) of each series
\[P_{1}(z) =-2/3-iz^{1/2}\] \[P_{2}(z) =-2/3+iz^{1/2},\]
and the second function iterate \(f_{2}(z,w)\).
A critical part of a numerical Newton polygon algorithm is accurately identifying multiple roots of the characteristic equation. This is accomplished by first setting the roots to the same precision: when two equal numbers accurate to a set precision are subtracted in Mathematica, the precision of the difference is zero. Thus, the roots are first set to the minimum precision of the set, the differences between roots are checked, and those differences having zero precision identify multiple roots. Although the roots are computed with a default precision of 1000 digits, the actual precision can be significantly lower, as described above. In these cases, the singular points are generated with sufficient precision to obtain roots near 1000 digits of precision.
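A minimal sketch of this check is shown below; the function name is illustrative and the routine is a simplification of the detection actually used.
```
(* Flag root pairs as multiple roots when their difference loses all
   precision after both are set to the minimum precision of the set. *)
multiplePairs[roots_] := Module[
  {r = SetPrecision[roots, Min[Precision /@ roots]]},
  Select[Subsets[r, {2}], Precision[#[[1]] - #[[2]]] == 0 &]]
```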
As the Newton polygon algorithm will often lead to fractional polynomials, \(f_{2}\) is transformed into a polynomial with integer powers by the following two transformations:
\[\begin{split}\widehat{f}(z,w)&=\frac{1}{z^{\beta_{k}} }f_{k}(z,z^{\lambda_{k}}w)\\ \overline{f}(z,w)&=\widehat{f}(z^{d},w)\end{split} \tag{13}\]
where \(k\) is the index of the last polygonal function which did not produce a characteristic equation with multiple roots (\(k\) is \(2\) in this case). Let \(d\) be the lowest common denominator of the exponents \(\{\lambda_{j}\}\) for \(j=1,2,\cdots,k\) for each of the segments above. In this case, \(k=2\) and \(d=2\) with \(\lambda_{2}=1/2\) and \(\beta_{2}=1\). Therefore we have:
\[\begin{split}\widehat{f}(z,w)&=\frac{1}{z^{\beta_{k }}}f_{k}(z,z^{\lambda_{k}}w)\\ &=z^{-1}\left[(z+z^{2})+z(z^{1/2}w)+zw^{2}\right]\\ &=1+z+z^{1/2}w+w^{2}=0\end{split} \tag{14}\]
and
\[\begin{split}\overline{f}(z,w)&=\widehat{f}(z^{d},w)\\ &=1+z^{2}+zw+w^{2}.\end{split}\]
In order to generate more terms of the series, Kung and Traub implements a Newton-like iteration on \(\overline{f}\):
\[w_{j+1}=w_{j}-\operatorname{mod}\left(\frac{\overline{f}(z,w_{j})}{\overline {f}_{w}(z,w_{j})},z^{2^{(j+1)}}\right);\quad w_{0}=\{i,-i\} \tag{15}\]
where the mod function extracts all terms of the Taylor expansion of the quotient with power less than \(2^{(j+1)}\). Listing 1 is the modulus step in (15) implemented in Mathematica.
```
(* mod step of (15): keep the Taylor terms of fBar/fBarDeriv below order 2^(j+1) *)
newtonIterate = Normal[Series[fBar/fBarDeriv, {z, 0, 2^(j + 1) - 1}]];
```
Listing 1. Mathematica code
Fractional polynomial solutions return sequential zero modular results after a finite number of iterations. In this study, the maximum number of zero modular values, \(N_{zm}\), is set to 15. See Test Case 2 (Section 14.2) for an example of a polynomial solution.
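Putting the mod step and the \(N_{zm}\) stop rule together, the iteration can be sketched as follows; the function and argument names are illustrative, not the author's code.
```
(* Newton-like iteration (15) with the polynomial stop rule N_zm. *)
newtonExpand[fBar_, z_, w_, w0_, maxIter_, nzm_: 15] :=
 Module[{wj = w0, corr, zeros = 0, j = 0},
  While[j < maxIter && zeros < nzm,
   corr = Normal[Series[(fBar /. w -> wj)/(D[fBar, w] /. w -> wj),
      {z, 0, 2^(j + 1) - 1}]];
   zeros = If[PossibleZeroQ[corr], zeros + 1, 0];
   wj = wj - corr; j++];
  wj]
```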
## 7. Approximating radii of convergence via the Root Test
The Root Test is used to approximate the radius of convergence of a branch expansion in order to estimate the number of additional power expansions needed to identify the CLSP of the branch series. The Root Test however is not applicable to polynomial solutions which have infinite radii of convergence.
The standard definition is modified to include the branch cycle size:
\[R=\frac{1}{\liminf_{k\to\infty}\;|a_{k}|^{\frac{c}{m_{k}}}} \tag{16}\]
where \(c\) is the cycle size of the series, and the set \(\{m_{i}\}\) is the set of exponent numerators under a least common denominator. For example, the terms \(a_{0}+a_{1}z+a_{2}z^{3/2}+a_{3}z^{9/4}+a_{4}z^{3}\) would have the set
\(\{0,4,6,9,12\}\). Then the radius of convergence of each branch expansion can be approximated by forming the set:
\[S=\bigg{\{}\left(\frac{1}{m_{k}},\frac{1}{|a_{k}|^{\frac{c}{m_{k}}}}\right)\bigg{\}}\]
and extrapolating \(\lim\limits_{k\to\infty}S\) using a sufficient number of trailing points of \(S\). For example, consider 512 terms of a 2-cycle series:
\[a_{1}z^{1/2}+a_{2}z^{2}+a_{3}z^{5/2}+a_{4}z^{3}+\cdots+a_{512}z^{512}.\]
Therefore
\[S= \bigg{\{}\left(\frac{1}{1},\frac{1}{|a_{1}|^{2}}\right),\left(\frac{1}{4},\frac{1}{|a_{2}|^{1/2}}\right),\left(\frac{1}{5},\frac{1}{|a_{3}|^{2/5}}\right),\left(\frac{1}{6},\frac{1}{|a_{4}|^{1/3}}\right),\cdots,\left(\frac{1}{1024},\frac{1}{|a_{512}|^{1/512}}\right)\bigg{\}}.\]
The radius of convergence, \(R\), is approximated by fitting a suitable curve to the greatest lower bound of \(S\) using Mathematica's Minimize function and then extrapolating to zero. Linear, quadratic, and cubic curves were used as test fit functions, with the smallest residual error selecting the best fit. Figure 2 shows the generator series points for the \(F_{5}^{16}\) branch of Test Case 1 as blue points. The dashed red curve is the best fit of the greatest lower bound of the points, and the black point is the extrapolated value of 0.645 for the radius of convergence. The actual radius of convergence for this branch is \(|s_{27}|\approx 0.641\). The CLSP for the power expansion is approximated by selecting the singular point with absolute value closest to the extrapolated value, and this estimate is then used to determine the minimum number of singular points in the comparison sequence.
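A minimal sketch of this estimate is given below. It uses an ordinary least-squares line over the trailing points rather than the lower-bound Minimize fit described above, and the names are illustrative.
```
(* Points of S from exponent numerators ms, coefficients as, and cycle size c. *)
rootTestPoints[ms_, as_, c_] :=
  MapThread[{1/#1, 1/Abs[#2]^(c/#1)} &, {ms, as}];

(* Fit a line to the trailing points and extrapolate to 1/m -> 0. *)
estimateR[pts_, nTail_: 50] := Fit[Take[pts, -nTail], {1, x}, x] /. x -> 0
```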
Figure 2. Root Test results of \(F_{5}^{16}\) branch of Test Case 1
## 8. Computing the comparison sequence
The most distant estimated CLSP determined by the Root Test determines the minimum length of the comparison sequence used by the series comparison and integration tests to identify CLSPs at a base singular point \(s_{b}\). Table 2 gives the Root Test results for Test Case 1, and since the expansion center is the origin, \(s_{118}\) is the most distant estimated CLSP giving a minimum comparison sequence \(s_{2}\) through \(s_{118}\).
Expansions must be generated at all singular points in the comparison sequence but do not require a large number of terms since these series are evaluated at their singular perimeter with good convergence. Since the Root Test is only an approximation for the most distant CLSP, a reasonable comparison sequence in this case is \(s_{2}\) through \(s_{125}\).
## 9. Computation of convergence-limiting singular points
### Necessary and sufficient conditions for analytically-continuing a branch across a singular point:
1. In order to analytically continue a \(k\)-cycle branch from one singular point to the next nearest singular point, the next singular point must have at least \(k\) single-cycle analytic branches to support continuity, i.e., 1-cycle branches which do not have poles,
2. A \(k\)-cycle branch is continuous across a singular point if all of its branch sheets continue onto analytic 1-cycle branches.
Individual single-valued branch sheets of a \(k\)-cycle branch may continue across different singular points, but the analytic region of the branch, as well as the convergence domain of its power expansion, is established by analytic continuity onto the nearest multi-cycle branch sheet or branch sheet with a pole. The first singular point at which this occurs is the CLSP for the associated set of conjugate Puiseux series and establishes their radii of convergence.
## 10. Methods to compute CLSPs
Two methods are used to find CLSPs:
### Constructing an analytically-continuous route between singular points by numerical integration
Since the Puiseux series for (1) converge at least up to the next nearest singular point, an analytically-continuous route can be created between the perimeter of the base singular point and perimeter of the next nearest singular point. This is shown in Figure 3. In the diagram the circles are the singular perimeters. A straight line path between point \(A\) and \(D\) is given by the expression
\[z(t)=A(1-t)+Dt;\quad 0\leq t\leq 1, \tag{17}\]
and then each branch sheet of \(s_{b}\) is numerically integrated over the path from point \(A\) to point \(D\) onto a branch sheet of \(s_{n}\) via the following set of initial value problems:
\[\frac{dw}{dt}=-\frac{f_{z}}{f_{\rm w}}\frac{dz}{dt},\quad\ w_{i}(0)=P_{i}(A-s_ {b});\quad i=1,2,\cdots,n \tag{18}\]
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline Index & Sheet & Type & Cycle Size & est. R & est. CLSP & Series size \\ \hline
1 & 1 & \(F_{5}^{16}\) & 5 & 0.64571 & 31 & 1987 \\
2 & 6 & \(F_{4}^{9}\) & 4 & 0.50751 & 7 & 2022 \\
3 & 10 & \(F_{3}^{4}\) & 3 & 0.16797 & 2 & 1017 \\
4 & 13 & \(V_{2}^{1}\) & 2 & 0.16772 & 2 & 1024 \\
5 & 15 & T & 1 & 1.0966 & 118 & 1024 \\ \hline \end{tabular}
\end{table}
Table 2. Test Case 1 Root Test results at the origin
with \(z(t)\) defined by \((17),0\leq t\leq 1\), and each \(P_{i}(z)\) is a Puiseux series at \(s_{b}\). Each \(s_{b}\) branch sheet is then checked for analytic continuity over \(s_{n}\) as per Section 9.1. See also [9] for further background about this technique.
After \(s_{n}\) has been checked and branch sheets have been found to be continuous, a path to the next sequential singular point is created and the previously-continued branches tested for further continuity. However there is the possibility of attempting to continue a branch sheet to another singular point when a removable singular point is in the path of integration. Numerical integration will fail over a removable singularity even though the function is analytic because Equation (2) is used for the derivative and at a removable singular point, this quotient is indeterminate. In this case, the integration path is split into paths \(\beta_{1}\), around half the perimeter of the removable singular point \(E\), and over \(\beta_{2}\) shown in the second diagram of Figure 3.
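A minimal sketch of a single continuation step is given below; the function name, arguments, and defaults are illustrative, and in practice the NDSolve options (for example the StiffnessSwitching method used in Test Case 3) may need adjustment.
```
(* Integrate one branch sheet of the base expansion from point A to dEnd
   along the straight path (17), using the IVP (18). f is an expression in
   the global symbols z and w; w0 = P_i(A - s_b); inputs should carry at
   least prec digits of precision. *)
continueSheet[f_, A_, dEnd_, w0_, prec_: 40] := Module[{zt, sol},
  zt = A (1 - t) + dEnd t;
  sol = NDSolve[
    {u'[t] == -((D[f, z]/D[f, w]) /. {z -> zt, w -> u[t]}) (dEnd - A),
     u[0] == w0},
    u, {t, 0, 1}, WorkingPrecision -> prec];
  u[1] /. First[sol]]
```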
### Identifying CLSPs by the series comparison test
This section introduces a simpler method for computing a CLSP based on a series comparison test.
In the series comparison test, the value of a base branch sheet at point \(D\) in Figure 3 is compared to the values of the expansions centered at \(s_{n}\) at point \(D\) using the separation threshold \(s_{t}\). Convergence of both series is guaranteed: the series centered at \(s_{n}\) converges on its perimeter, and the analytically-continued base series converge at \(D\) since they converge at the previous \(n-1\) singular points. However, the convergence rate of the base sheets will decrease as successive singular points approach the CLSP of a base expansion sheet. This necessitates computing the base series at a sufficiently high precision and number of terms.
Using the data from Section 5, with three 4-cycle branches and three singular points, consider the series 1 expansion at the origin with a value of \(v_{s}=-0.92827-0.314806i\) at point \(D\) in Figure 3, and compare it with the 12 expansion values at \(s_{2}\) at point \(D\) given in Table 3. In this case, there are no poles at \(s_{1}\) and \(s_{2}\). The minimum distance between the points in Table 3 is 0.04, so the separation threshold is \(s_{t}=0.04/10=0.004\), and clearly \(v_{s}\) is within this threshold for the third entry in the list. The actual difference between the two values in this test was \(10^{-40}\). Therefore, the branch sheet corresponding to series 1 continues onto the branch sheet associated with the third series at \(s_{2}\).
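A minimal sketch of this matching step is shown below; the names are illustrative, and the routine returns the index of the uniquely matching sheet or flags the branch for the integration test.
```
(* Comparison test at one singular point: vs is the base-sheet value at point
   D, sheetVals are the expansion values at s_n at the same point. *)
matchSheet[vs_, sheetVals_] := Module[{st, d},
  st = Min[Abs[Subtract @@@ Subsets[sheetVals, {2}]]]/10;   (* threshold s_t *)
  d = Abs[sheetVals - vs];
  If[Count[d, x_ /; x < st] == 1, First[Ordering[d, 1]], $Failed]]
```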
As the analysis moves farther away from the base expansion and nearer to the CLSP, the accuracy of the base expansions decreases and may fail to satisfy the separation threshold or may produce multiple matches. This condition is checked in the algorithms; when it is encountered, the comparison test is halted and the branch is flagged for the numerical integration test.
Figure 3. Continuation path used by numerical integration method
## 11. Illustrative example of Radius of convergence computation using series comparison test
Consider the function from Test Case 1:
\[f_{1}(z,w)=\left(z^{30}+z^{32}\right)+\left(z^{14}+z^{20}\right)w^{5}+\left(z^{ 5}+z^{9}\right)w^{9}+\left(z+z^{3}\right)w^{12}+\left(6\right)w^{14}+\left(2+z ^{2}\right)w^{15}. \tag{19}\]
Figure 4 graphically illustrates the analytic continuation of all branch sheets at the expansion center \(s_{1}\), into the branch sheets of \(s_{2}\). For example, the first series at \(s_{1}\) is a 5-cycle with the value at point \(D\) of \(-0.000855-0.000229i\). Following the arrow from this point into the list of \(s_{2}\) values, it continues onto sheet 6 of \(s_{2}\) which is a 1-cycle. The remaining four sheets of this 5-cycle also continue onto 1-cycles of \(s_{2}\). Therefore, the 5-cycle at \(s_{1}\) is analytically-continuous over \(s_{2}\).
Consider now series 10 of \(s_{1}\) which is a 3-cycle branch. It continues onto series 14 of \(s_{2}\) which is a 2-cycle branch. Therefore, the 3-cycle of \(s_{1}\) is not analytically continuous over \(s_{2}\) and so \(s_{2}\) is the CLSP for this branch. Likewise the 2-cycle of series 14 of \(s_{1}\) continues onto the second sheet of the 2-cycle of \(s_{2}\) and therefore establishes the CLSP for this branch as well. Therefore the CLSP for the 3 and 2-cycle branches is \(s_{2}\). Next, analytic continuity of the 1, 4, and 5-cycle branches are checked over \(s_{3}\) and if any are continuous, further singular points are checked in this way until all CLSPs are found.
## 12. Creating accuracy profile functions
Once the radii of convergence of a branch generator is determined, the accuracy of the series can be studied as a function of \(|z|<R\) and series order \(\mathcal{O}\). The accuracy of a series depends on four factors:
1. **Precision of series:** The precision of the series is limited to the precision of the expansion center, i.e., the singular points. For the test cases below, singular points were computed to about 1000 digits of precision. Also, the precision of the associated power expansions decrease after each iteration of the Newton iteration step as shown in Figure 1. For example, terms of the \(T\) series of Test Case 1 gradually drop to 918 digits of precision near the end of 1000 terms. This trend limits the maximum accuracy of a particular value of the series to the precision of the terms used.
2. **Number of terms:** The accuracy of an expansion was found to have a linear relation to the number of terms used in the expansion as illustrated in Figure 7.
3. **The absolute value of \(z\) relative to \(R\):** The accuracy of an expansion exhibited a logarithmic dependence on \(|z|<R\) as shown in Figure 6.
\begin{table}
\begin{tabular}{|l|} \hline \(-2.44964-1.22342i\) \\ \(-1.0006+1.0348i\) \\ \(-0.92827-0.314806i\) \\ \(-0.783318+0.21872i\) \\ \(-0.696357-0.639337i\) \\ \(-0.371485-0.178941i\) \\ \(0.371485+0.178941i\) \\ \(0.696357+0.639337i\) \\ \(0.783318-0.21872i\) \\ \(0.92827+0.314806i\) \\ \(1.0006-1.0348i\) \\ \(2.44964+1.22342i\) \\ \hline \end{tabular}
\end{table}
Table 3. Branch sheet values at \(s_{2}\)
4. **Presence of nearby singular points**. The accuracy of a series at \(z\) is affected by the presence of nearby impinging singular points. This is further explained below.
An accuracy function \(A(r_{f},o)\) gives the expected accuracy of a series as a function of the radial ratio \(0<r_{f}<1\) and the order \(o\) of the series. An order function \(O(r_{f},e_{a})\), for a given \(r_{f}\) and desired accuracy \(e_{a}\), returns the estimated order of a series needed to reach that accuracy. The series is then searched for the term \(m\) corresponding to this order, and terms 1 through \(m\) are used to evaluate the series. To build these functions, generator series are evaluated in their convergence domains, compared to more precise values of the function, and the resulting accuracy data are fitted to the accuracy model. Letting the expected accuracy \(e_{a}=A(r_{f},o)\) and solving for \(o\) gives the order function.
However, the accuracy of a power expansion is not constant along a circle \(z=r_{f}Re^{it}\) but varies slightly along the outermost region of convergence depending on the presence of impinging singular points. The \(F_{5}^{16}\) branch of Test Case 1 has conjugate series \((1,2,3,4,5)\) with \(R=|s_{27}|\approx 0.6413\). The impinging singular point of series 5 is \(s_{27}\). Figure 5 is a plot of the log of the difference between the actual value of sheet 5 and 1987 terms of the series along a circle with \(r_{f}=15/26\), that is, \(r=\frac{15}{26}R\). Notice how the difference is not constant around the circle but reaches a minimum accuracy at an argument of about 0.4. The red line in the plot is the argument of this branch sheet's ISP; notice the peak matches this line at \(\log(\Delta)=-238.59\). This and other cases suggest accuracy is affected by nearby impinging singular points. The effect of the ISP on accuracy is, however, small: over this circle the difference stays within \((4.6\times 10^{-105},2.39\times 10^{-104})\). This study does not include this variation in accuracy; rather, random points along the circle \(z=r_{f}Re^{it}\) are used as test points, although including this effect would improve the accuracy of the fit functions.
Figure 4. Test Case 1 analytic continuation between \(s_{1}\) and \(s_{2}\)
Figure 5. Minimum arc accuracy diagram
Consider the accuracy results of 1287 terms of this branch for \(1/25\leq r_{f}\leq 24/25\). The results of this test are shown in Figure 6A and clearly show a logarithmic trend.
To this end, the data points in Figure 6A are fitted to \(L(r)=a+b\log(r)\). Figure 6B shows this fit as the dashed black line \(L(r)=3.3333-118.986\log(r)\).
Consider next the accuracy trend at \(r_{f}=1/5\) for \(20\leq o\leq 400\) shown in Figure 7A. The accuracy data follows a linear trend and is fitted to \(G(o)=9.55371+0.0931429o\) in Figure 7B.
These accuracy trends are observed for all test cases studied in this paper.
When the accuracy data are plotted as points over \((r_{f},\mathcal{O})\), the set of points shown in Figure 8A is obtained. Note in this figure the logarithmic trend over \(r_{f}\) and the linear trend over the series order \(\mathcal{O}\). It is therefore reasonable to construct a log-linear accuracy function given by
\[A(r_{f},o)=a+b\log(r_{f})+o(c+d\log(r_{f})) \tag{20}\]
with \(r_{f}\) the radial ratio and \(o\) the order of the series, using Mathematica's NonlinearModelFit function. When this is done, the fitted surface shown in Figure 8B is generated, with the accuracy data superimposed as black points on the surface.
Figure 6. \(F_{5}^{16}\) accuracy profile as a function of \(r\), 1549 terms
Figure 7. \(F_{5}^{16}\) accuracy profile as a function of order with \(r_{f}=1/5\)
Letting \(e_{a}=A(r_{f},o)\) and solving for \(o\) gives the order function
\[O(r_{f},e_{a})=\left\lceil\frac{e_{a}-a-b\log(r_{f})}{c+d\log(r_{f})}\right\rceil \tag{21}\]
which estimates the order needed to achieve a particular accuracy \(e_{a}\) at \(r_{f}\).
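A minimal sketch of this fit and its inversion is given below; accData and the symbol names are assumptions for illustration, not the author's code.
```
(* Fit the log-linear accuracy model (20) to {rf, o, accuracy} triples and
   invert it to the order estimate (21). *)
fit = NonlinearModelFit[accData,
   a + b Log[rf] + o (c + d Log[rf]), {a, b, c, d}, {rf, o}];
{aF, bF, cF, dF} = {a, b, c, d} /. fit["BestFitParameters"];
orderEstimate[rf_, ea_] := Ceiling[(ea - aF - bF Log[rf])/(cF + dF Log[rf])]
```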
## 13. Computing the Riemann surface genus using the Riemann-Hurwitz sum
Once initial segments are computed for all singular points including the point at infinity, the Riemann surface genus is easily computed using the Riemann-Hurwitz formula in the form of
\[\begin{split}\mathcal{G}&=1+1/2\sum_{S}\sum_{i=1}^ {T}(c_{i}-1)-D\\ &=1+1/2\;\mathcal{K}-D\end{split} \tag{22}\]
where \(S\) is the set of singular points, the inner sum runs over the \(T\) conjugate classes at each singular point with \(c_{i}\) the cycle size, and \(D\) is the degree of the function in \(w\). Note the Riemann-Hurwitz sum \(\mathcal{K}\) must be an even number and serves as a cycle check-sum: if it is odd, a cycle error has occurred. However, this is only a necessary condition for an error-free ramification profile. As an example, Test Case 1 with 179 singular points has the ramification profile shown in Table 6. A total of 174 singular points each ramify minimally, contributing 174 to \(\mathcal{K}\); \(s_{1}\) contributes 10, and \(s_{110}\) and \(s_{111}\) together contribute 16, giving \(\mathcal{K}=200\). Then \(\mathcal{G}=200/2+1-15=86\). The time required for this calculation is predominantly the time to compute the singular points and initial segments.
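A minimal sketch of this computation is shown below, with the ramification profile assumed to be supplied as a list of cycle-size lists, one per singular point including the point at infinity. Applied to the Table 6 profile it reproduces \(\mathcal{K}=200\) and \(\mathcal{G}=86\).
```
(* Genus from the Riemann-Hurwitz sum (22); dDeg is the degree of f in w. *)
genus[profile_, dDeg_] := 1 + Total[Total[# - 1] & /@ profile]/2 - dDeg
```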
Figure 8. \(F_{5}^{16}\) accuracy profile as a function of \(r_{f}\) and \(\mathcal{O}\)
## 14. Test results
The following test cases include three data sets for each function:
1. Timing data to compute the singular points, segments, power expansions, CLSPs and accuracy results,
2. Summary report of radii of convergence and accuracy results at a selected singular point,
3. Ramification profile summarizing the cycle geometry at all singular points.
### Test Case 1: \(15\)-degree function expanded at the origin, \(\mathcal{G}=86\)
\[f_{1}(z,w)=\left(z^{30}+z^{32}\right)+\left(z^{14}+z^{20}\right)w^{5}+\left(z ^{5}+z^{9}\right)w^{9}+\left(z+z^{3}\right)w^{12}+6w^{14}+\left(2+z^{2}\right) w^{15} \tag{23}\]
Timing data is shown in Table 4. The base expansions are done to at least \(1000\) terms and the comparison series to at least \(100\) terms at \(1000\) digits of precision.
Table 5 summarizes the CLSP and accuracy results.
The accuracy constants \((a,b,c,d)\) can be used to determine the approximate order needed to achieve a desired accuracy. For example, if a value of the \(F_{5}^{16}\) branch at \(z=1/3Re^{3\pi i/4}\) to \(20\) digits of accuracy is desired, that is, \(e_{a}=20\), solve \(O(1/3,20)\) where
\[O(r_{f},e_{a})=\left\lceil\frac{e_{a}-3.19696+0.329926\log(r_{f})}{0.00567716-0.433846\log(r_{f})}\right\rceil.\]
This gives an order estimate of \(35\). A simple scan of the expansion exponents can determine the series term corresponding to an order of \(35\). This turns out to be term \(99\). The first \(99\) terms of the generator series at \(z\) leads to a value accurate to approximately \(20\) digits. In this case, the accuracy is \(20\) digits as shown by comparing the value to the corresponding root of \(f_{1}(z,w)=0\). The estimated accuracy however will often differ from the actual error by a small amount as shown by the variance in Table 5.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline Singular & Initial & Base gen. & Comparison & CT & IT \\ points & segments & expansions & expansions & expansions & \\ \hline (179,1.9 s) & 8 s & (5,13.4 m) & (125,37.5 m) & 1.3 m & 2.7 m \\ \hline \end{tabular}
\begin{tabular}{|c|c|c|c|} \hline \multicolumn{5}{|c|}{**Singular points:** (Total points, time), **Initial segments:** time} \\ \multicolumn{5}{|c|}{**Base expansion:** (Total generators,time), **Comparison expansions:** (Total sing. pts.,time)} \\ \multicolumn{5}{|c|}{**CT:** comparison test time, **IT:** Integration Test time} \\ \multicolumn{5}{|c|}{**s:** Seconds, **m:** Minutes **h:** hours} \\ \end{tabular}
\end{table}
Table 4. Test Case 1 Timing Data
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Type & CLSP & R & Terms & a & b & c & d & Var \\ \hline \(F_{5}^{16}\) & \(27\) & \(0.641328\) & \(1987\) & \(3.19696\) & \(-0.329926\) & \(0.00567716\) & \(-0.433846\) & \(0.23481\) \\ \(F_{4}^{9}\) & \(7\) & \(0.504901\) & \(2022\) & \(3.4831\) & \(-0.219358\) & \(0.00355264\) & \(-0.434293\) & \(0.228403\) \\ \(F_{3}^{4}\) & \(2\) & \(0.166817\) & \(1017\) & \(3.42982\) & \(-0.287641\) & \(0.00466601\) & \(-0.434467\) & \(0.173804\) \\ \(V_{2}^{1}\) & \(2\) & \(0.166817\) & \(1024\) & \(3.5265\) & \(-0.224677\) & \(0.00418632\) & \(-0.434702\) & \(0.259237\) \\ T & \(118\) & \(1.09352\) & \(1024\) & \(2.83953\) & \(-0.287592\) & \(0.00193844\) & \(-0.434512\) & \(0.565284\) \\ \hline \end{tabular}
\end{table}
Table 5. Test Case 1 Summary Report at the origin
The constants \((a,b,c,d)\) for each branch in Table 5 are similar in size. This means that a single average accuracy function could be generated for all the branches in this case. However in other cases, the accuracy constants can be quite different.
Table 6 summarizes the ramification profile describing the cycle geometry at all singular points. The set of singular points minimally-ramified is \(\{\bar{s}_{n}\}\) with ramification \((2,[13,1])\) signifying a single \(2\)-cycle branch and \(13\) single-cycle branches. Singular points with higher ramifications are listed separately or \(\{u_{i}\}_{n}\) for multiple singular points with the same ramification.
### Test Case \(2\): \(5\)-degree function with polynomial solution at the origin, \(\mathcal{G}=0\)
\[f_{2}(z,w)=(z^{4})+(2z^{2}+z^{4})w+(1+z^{2}+z^{3})w^{2}+(z)w^{3}+(1/4-z/2)w^{4 }+(-(1/2))w^{5} \tag{24}\]
This function has two singular points \(\{0,1\}\) and a \(4\)-cycle polynomial solution at the origin:
\[\begin{split} w_{1}(z)&=1-z^{1/4}+z^{1/2}-z^{3/4} \\ w_{2}(z)&=1-iz^{1/4}-z^{1/2}+iz^{3/4}\\ w_{3}(z)&=1+z^{1/4}+z^{1/2}+z^{3/4}\\ w_{4}(z)&=1+iz^{1/4}-z^{1/2}-iz^{3/4}.\end{split} \tag{25}\]
In this case, the modular operation of the Newton iteration step (15) returns zero for \(N_{zm}\) consecutive iterations, identifying a finite polynomial. Since the solutions are finite polynomials, \(R=\infty\). However, this can only be true if the function neither ramifies nor has a pole at \(z=1\) on these branches; that is, the singularity at \(z=1\) is removable. We can show this as follows:
The expansions at \(z=1\) are four \(1\)-cycles with starting terms:
\[\begin{split} w_{1}(z)&=-0.5z+0.0625z^{2}-0.03125z^{3}+0.0205078z^{4}+\cdots\\ w_{2}(z)&=(-0.5-0.5i)z+(0.125+0.i)z^{2}-(0.0625-0.015625i)z^{3}+\cdots\\ w_{3}(z)&=(-0.5+0.5i)z+(0.125+0.i)z^{2}-(0.0625+0.015625i)z^{3}+\cdots\\ w_{4}(z)&=4+1.5z-0.3125z^{2}+0.15625z^{3}+\cdots. \end{split} \tag{26}\]
\begin{table}
\begin{tabular}{|c|c|} \hline Singular point & Cycles \\ \hline \(s_{1}\) & \((1,2,3,4,5)\) \\ \(s_{110}\) & \((9,[6,1])\) \\ \(s_{111}\) & \((9,[6,1])\) \\ \(\{p_{i}\}_{2}\) & \([15,1]\) \\ \(\{\bar{s}\}_{174}\) & \((2,[13,1])\) \\ \(s_{\infty}\) & [15,1] \\ \hline \end{tabular} \(\{p_{i}\}_{2}:\) set of two poles
**[m,n]:**\(n\)-cycle branches, \(m\) total
\(\{\bar{s}\}_{n}:\) remaining \(n\) singular points minimally ramified
\end{table}
Table 6. Test Case \(1\) Ramification profile, \(\mathcal{K}=200\)
The solutions to \(f_{2}(1,w)=0\) are \(\{0,0,0,4\}\) and these are the values of (26) at \(z=0\) and note
\[\begin{split}\frac{dw}{dz}&=\lim_{(z,w)\to(1,0)} \left(-\frac{f_{z}(z,w)}{f_{w}(z,w)}\right)\to\frac{0}{0}\\ \frac{dw}{dz}&=\lim_{(z,w)\to(1,4)}\left(-\frac{f_{z} (z,w)}{f_{w}(z,w)}\right)=-\frac{3}{2}.\end{split} \tag{27}\]
However the derivatives of (26) at \(z_{r}=0\) are finite so that we can immediately solve for the indeterminate limits
\[\frac{dw}{dz}=\lim_{(z,w)\to(1,0)}\left(-\frac{f_{z}(z,w)}{f_{w}(z,w)}\right)= \{-0.5,-0.5-0.5i,-0.5+0.5i\}. \tag{28}\]
And since the function fully ramifies at the origin, the four expansions at \(z_{a}=1\) all have \(R=1\), the distance to that singularity. This is confirmed by the Root Test and by both the series comparison and integration tests; accuracy results are given in Table 7. Finally, as this branch is equivalent to the function \(f(z)=z^{1/4}\), the ramification at infinity is also \(4\)-cycle, and thus the genus is \((3+3)/2+1-4=0\). Timing for this case was minimal.
### Test Case \(3\): \(34\)-degree function expanded at \(s_{487}\), \(\mathcal{G}=264\)
\[\begin{split} f_{3}(z,w)&=\left(-\frac{1043}{60}- \frac{5}{3}z^{2}+2z^{3}-4z^{4}-\frac{6}{5}z^{5}-\frac{2}{3}z^{9}+\frac{8}{3}z^ {14}+\frac{25}{4}z^{15}+4z^{16}\right)\\ &+\left(\frac{11}{3}\right)w^{5}+\left(-\frac{8}{3}\right)w^{12} +\left(-\frac{38}{5}-\frac{1}{2}z\right)w^{34}\\ &=0\end{split} \tag{29}\]
This function was selected to stress-test the methods of finding CLSPs and also to study the fully-ramified branch at infinity. The function has \(493\) singular points with \(s_{487}\) ramifying into a \(22\)-cycle and \(12\) single cycles and having \(13\) nearest neighbors with an average separation distance of \(10^{-35}\).
Root Test results were not precise enough to distinguish individual singular points near the expansion center; they returned \(1.056\times 10^{-35}\) as the radius estimate for the single-cycle branches and \(1.09\times 10^{-35}\) for the \(22\)-cycle branch, placing the estimated CLSPs among the immediate neighbors of \(s_{487}\). This in itself is not an error but rather a consequence of applying the Root Test at an extremely small tolerance. The comparison sequence, however, can be set to a reasonable number of singular points and in this case was set to the first \(25\) singular points in the singular sequence.
However, the comparison test failed to identify CLSPs at the default settings as the base series were not accurate enough to meet the comparison tolerances of \(10^{-6}\). The Integration Test likewise failed to identify CLSPs with a working precision of \(40\) and default integration method. However, increasing the integration working precision of the integration test from \(40\) to \(80\) with "StiffnessSwitching" method met the tolerances and found all CLSPs in \(11.5\) minutes. These results are shown in Table 9.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Type & CLSP & R & Terms & a & b & c & d & Var \\ \hline
1E & 1 & 1. & 1024 & 3.07174 & \(-0.302067\) & \(0.00142759\) & \(-0.434367\) & \(0.156447\) \\
2E & 1 & 1. & 1024 & 2.83926 & \(-0.321447\) & \(0.00161223\) & \(-0.434362\) & \(0.164684\) \\
3E & 1 & 1. & 1024 & 2.83926 & \(-0.321447\) & \(0.00161223\) & \(-0.434362\) & \(0.164684\) \\ T & 1 & 1. & 1024 & 2.59935 & \(-0.324381\) & \(0.00175722\) & \(-0.434358\) & \(0.195594\) \\ \hline \end{tabular}
\end{table}
Table 7. Test Case \(2\) Summary Report at \(s_{2}=1\)
The function has a fully-ramified \(34\)-cycle branch at infinity, which means the CLSP is \(s_{2}\) relative to infinity. The default settings for the comparison test were not sufficient to identify this CLSP. The integration test using default integration methods also failed, but it succeeded in identifying \(s_{2}\) as the CLSP using Mathematica's NDSolve StiffnessSwitching method.
### Test Case 4: \(35\)-degree function expanded at infinity, \(\mathcal{G}=32\)
\[\begin{split} f_{4}(z,w)&=\left(-\frac{31}{10}+ \frac{179}{30}z+\frac{1}{4}z^{2}\right)+\left(-\frac{7}{4}\right)w^{2}+(4)w^{3} +\left(-\frac{1}{2}-\frac{5}{2}z\right)w^{8}+\left(\frac{11}{3}\right)w^{10}+ \left(6+\frac{5}{2}z\right)w^{14}\\ &+(5)w^{18}+\left(-\frac{64}{15}\right)w^{20}+\left(\frac{11}{2} -\frac{1}{2}z^{2}\right)w^{22}+\left(-\frac{9}{2}+\frac{7}{3}z\right)w^{25}+ \left(\frac{18}{5}-\frac{3}{4}z^{2}\right)w^{28}\\ &+\left(-\frac{3}{2}-1z\right)w^{33}+\left(-\frac{8}{3}\right)w^{ 35}\\ &=0\end{split} \tag{30}\]
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline Type & CLSP & R & Terms & a & b & c & d & Var \\ \hline
1T & 493 & \(1.0511\times 10^{-35}\) & 1024 & \(2.57488\) & \(-0.408571\) & \(0.00198314\) & \(-0.434325\) & \(0.246471\) \\
2T & 488 & \(1.0511\times 10^{-35}\) & 1024 & \(2.43652\) & \(-0.456495\) & \(0.00189591\) & \(-0.434416\) & \(0.221283\) \\
3T & 489 & \(1.0511\times 10^{-35}\) & 1024 & \(2.42991\) & \(-0.493089\) & \(0.00181696\) & \(-0.434467\) & \(0.24568\) \\
4T & 485 & \(1.0511\times 10^{-35}\) & 1024 & \(2.58246\) & \(-0.391823\) & \(0.00188475\) & \(-0.434426\) & \(0.304147\) \\
5T & 486 & \(1.0511\times 10^{-35}\) & 1024 & \(2.49944\) & \(-0.447571\) & \(0.00205355\) & \(-0.434356\) & \(0.244369\) \\
6T & 482 & \(1.0511\times 10^{-35}\) & 1024 & \(2.60673\) & \(-0.384546\) & \(0.0019365\) & \(-0.434366\) & \(0.241232\) \\
7T & 481 & \(1.0511\times 10^{-35}\) & 1024 & \(2.52156\) & \(-0.458827\) & \(0.00198745\) & \(-0.434316\) & \(0.238367\) \\
8T & 484 & \(1.0511\times 10^{-35}\) & 1024 & \(2.64395\) & \(-0.365494\) & \(0.00186476\) & \(-0.434422\) & \(0.242922\) \\
9T & 483 & \(1.0511\times 10^{-35}\) & 1024 & \(2.61553\) & \(-0.379034\) & \(0.00196351\) & \(-0.434371\) & \(0.247183\) \\
10T & 491 & \(1.0511\times 10^{-35}\) & 1024 & \(2.66054\) & \(-0.378256\) & \(0.00185278\) & \(-0.434409\) & \(0.229304\) \\
11T & 490 & \(1.0511\times 10^{-35}\) & 1024 & \(2.56919\) & \(-0.427104\) & \(0.00188795\) & \(-0.434372\) & \(0.248727\) \\
12T & 492 & \(1.0511\times 10^{-35}\) & 1024 & \(2.60019\) & \(-0.386299\) & \(0.00190533\) & \(-0.434368\) & \(0.246803\) \\ \(P_{22}^{-1}\) & 492 & \(1.0511\times 10^{-35}\) & 2021 & \(1.64383\) & \(-0.210138\) & \(0.0150419\) & \(-0.433515\) & \(0.183742\) \\ \hline \end{tabular}
\end{table}
Table 9. Test Case 3 Summary Report at \(s_{487}\)
\begin{table}
\begin{tabular}{|c|c|} \hline Singular point & Cycles \\ \hline \(\{u_{i}\}_{16}\) & \((5,[29,1])\) \\ \(s_{487}\) & \((22,[12,1])\) \\ \(\{\bar{s}\}_{476}\) & \((2,[32,1])\) \\ \(s_{\infty}\) & \((34)\) \\ \hline \end{tabular}
\end{table}
Table 10. Test Case 3 Ramification profile, \(\mathcal{K}=594\)
First consider \(f_{4}(z,w)\) which has 127 finite singular points all of which are minimally-ramified. In order to obtain the ramification at infinity, \(f_{4}(z,w)\) is transformed to \(g_{4}(z,w)=z^{\delta}f_{4}\left(\frac{1}{z},w\right)\) with \(\delta\) the largest power of \(z\) in \(f_{4}(z,w)\). In this case \(\delta=2\) giving:
\[\begin{split} g_{4}(z,w)&=\left(\frac{1}{4}+\frac{ 179}{30}z-\frac{31}{10}z^{2}\right)+\left(-\frac{7}{4}z^{2}\right)w^{2}+\left( 4z^{2}\right)w^{3}+\left(-\frac{5}{2}z-\frac{1}{2}z^{2}\right)w^{8}+\left( \frac{11}{3}z^{2}\right)w^{10}\\ &=+\left(\frac{5}{2}z+6z^{2}\right)w^{14}+\left(5z^{2}\right)w^{1 8}+\left(-\frac{64}{15}z^{2}\right)w^{20}+\left(-\frac{1}{2}+\frac{11}{2}z^{2 }\right)w^{22}\\ &+\left(\frac{7}{3}z-\frac{9}{2}z^{2}\right)w^{25}+\left(-\frac{ 3}{4}+\frac{18}{5}z^{2}\right)w^{28}+\left(-z-\frac{3}{2}z^{2}\right)w^{33}+ \left(-\frac{8}{3}z^{2}\right)w^{35}\\ &=0\end{split} \tag{31}\]
Then an expansion of \(w(z)\) at \(z_{a}=0\) through \(g_{4}(z,w)\) is the expansion of \(w(z)\) defined by \(f_{4}(z,w)\) at infinity. The base series and comparison series were next computed relative to \(g_{4}(z,w)\). Table 11 is the timing summary.
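A minimal sketch of this transformation is given below; the function name is illustrative.
```
(* g(z, w) = z^delta f(1/z, w), with delta the largest power of z in f. *)
toInfinity[f_, z_] := Module[{delta = Exponent[f, z]},
  Together[z^delta (f /. z -> 1/z)]]
```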
The expansion at infinity ramified as \((5,2,[28,1])\). Table 12 is a Summary Report. Consider the 5-cycle branch expansions \(P_{5}^{-1}\) of \(g_{4}(z,w)\) with \(R=0.04874\). These are expansions of \(w(z)\) centered at infinity, which means the five values of \(w(z)\) associated with this branch at \(z_{a}=\frac{1}{z_{r}}\) are the same five branch values \(\{v_{s}\}\) at \(z_{r}\). For example, let \(z_{a}=200+100i\), which is outside the domain of finite singular points of \(f_{4}\); therefore \(z_{r}=\frac{1}{200+100i}=0.004-0.002i\) lies within the radius of convergence of the expansions at infinity, and the \(P_{5}^{-1}\) expansions will converge at \(z_{r}\). The 35 values of \(w(z_{a})\) are easily found by solving for the roots \(\{w_{i}\}\) of \(f_{4}(z_{a},w)=0\). Among these roots are the five values of \(P_{5}^{-1}\) at \(z_{r}\):
\[-2.83979-0.469744i,-1.3576+2.63902i,-0.501188-2.89321i,2.05904+2.02105i,2.62398- 1.28185i.\]
The following are the roots \(\{w_{i}\}\) with the above branch values highlighted in red.
\[\begin{split}-3.92407+9.53357i,-2.83979-0.469744i,-1.3576+2.639 02i,-0.940335+0.00182458i,-0.922871+0.235934i,\\ -0.922161-0.232486i,-0.888861-0.448904i,-0.886022+0.452484i,-0.783866 -0.586343i,-0.780327+0.586443i,\\ -0.604223-0.72708i,-0.601147+0.730563i,-0.501188-2.89321i,-0.403707-0.8 50957i,-0.399549+0.853659i,\\ -0.187928-0.945784i,-0.181958+0.946171i,-0.000537808-0.999928i,-0.0001 28033+1.00055i,0.182496-0.945415i,\\ 0.187148\,+0.94525i,0.399708-0.85302i,0.403067+0.850813i,0.601121-0. 730076i,0.603749+0.727117i,\\ 0.781083\,-0.585674i,0.783141\,+0.585525i,0.887975\,-0.453228i,0.889622+0.446953i,0.92263+0.231865i,\\ 0.923506\,-0.236407i,0.940601\,-0.00209421i,2.05904\,+2.02105i,2.62398 -1.28185i,3.9374-9.5466i\end{split}\]
Another way to visualize the expansions at infinity is to consider the ramification of \(f_{4}(z,w)\) outside a disc containing all the finite singular points, that is, outside the disc \(r\leq|s_{127}|\approx 27.15\). This ramification is the same \((5,2,[28,1])\) ramification as that at infinity and is in fact the same set of branches. That is, we can compute the ramification at infinity for \(f_{4}(z,w)\) by simply computing the ramification of \(f_{4}(z,w)\) at, say, \(r=28\). See On the branching geometry of algebraic functions for a method of doing this.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline Singular & Initial & Base & Comparison & CT & IT \\ points & segments & expansion & expansions & & \\ \hline (128,2 s) & (15 s) & (30,3.4 h) & (32,68 m) & 2.6 m & 7.4 m \\ \hline \end{tabular}
\end{table}
Table 11. Test Case 4 Timing Data at infinity
\begin{table}
\begin{tabular}{|c|c|} \hline Singular point & Cycles \\ \hline \(\{\bar{s}\}_{127}\) & \((2,[33,1])\) \\ \(s_{\infty}\) & \((5,2,[28,1])\) \\ \hline \end{tabular}
\end{table}
Table 13. Test Case 4 ramification profile, \(\mathcal{K}=132\)
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline Type & CLSP & R & Terms & a & b & c & d & Var \\ \hline
[MISSING_PAGE_POST]
\(P_{5}^{-1}\) & 27 & 0.0487394 & 1020 & 3.27798 & \(-0.0326398\) & 0.0077994 & \(-0.434404\) & 0.111103 \\ \(P_{2}^{-1}\) & 27 & 0.0487394 & 1023 & 2.81247 & \(-0.33028\) & 0.0036736 & \(-0.434123\) & 0.207571 \\ \hline \end{tabular}
\end{table}
Table 12. Test Case 4 summary report at the point of infinity, i.e., \(g_{4}\) expanded at \(z=0\)
### Test Case 5: \(50\)-degree function expanded at the origin, \(\mathcal{G}=2268\)
\[\begin{split} f_{5}(z,w)&=\left(2z^{6}+\frac{1}{2}z^ {7}-\frac{5}{4}z^{11}+4z^{22}+\frac{29}{10}z^{34}-1z^{40}-\frac{13}{2}z^{43} \right)+\left(\frac{3}{5}z^{10}+\frac{7}{4}z^{24}-\frac{1}{4}z^{50}\right)w^{ 2}\\ &+\left(2z^{17}+\frac{7}{2}z^{34}\right)w^{3}+\left(-\frac{3}{2} z^{30}+\frac{4}{3}z^{38}+\frac{8}{5}z^{42}\right)w^{4}+\left(-\frac{6}{5}z^{2}- \frac{1}{2}z^{6}+\frac{7}{3}z^{31}\right)w^{9}\\ &+\left(-\frac{2}{5}z^{11}-\frac{3}{2}z^{26}+1z^{45}\right)w^{10 }+\left(\frac{7}{5}z^{24}-6z^{32}-6z^{49}\right)w^{14}\\ &+\left(-\frac{3}{4}z^{5}+\frac{7}{3}z^{21}-\frac{1}{4}z^{26}+ \frac{4}{5}z^{27}+\frac{4}{3}z^{32}-2z^{36}+\frac{1}{3}z^{39}-\frac{3}{4}z^{4 1}-z^{43}\right)w^{16}\\ &+\left(-6z^{14}-2z^{31}-z^{33}\right)w^{18}+\left(-2z^{27}-\frac {8}{3}z^{50}\right)w^{22}+\left(4z^{8}+\frac{4}{5}z^{25}-\frac{3}{2}z^{27} \right)w^{24}\\ &+\left(-3z^{4}+\frac{8}{3}z^{22}-\frac{8}{5}z^{43}\right)w^{33}+ \left(\frac{7}{3}z^{14}-\frac{3}{2}z^{18}\right)w^{34}+\left(-4+8z^{13}-\frac{ 7}{4}z^{47}\right)w^{36}\\ &+\left(z^{2}-\frac{1}{4}z^{7}\right)w^{38}+\left(-\frac{1}{2}z^ {20}-z^{29}+z^{46}\right)w^{40}+\left(\frac{1}{3}z^{10}+\frac{7}{4}z^{11}+ \frac{8}{5}z^{21}\right)w^{47}\\ &+\left(\frac{2}{3}z^{2}+6z^{26}+\frac{3}{5}z^{43}\right)w^{48}+ \left(-z^{9}+\frac{1}{4}z^{13}+2z^{14}+2z^{18}+z^{36}-2z^{44}\right)w^{49}\\ &+\left(-\frac{1}{3}z^{23}-\frac{7}{2}z^{40}+z^{42}\right)w^{50} \\ &=0\end{split} \tag{32}\]
This function has \(4584\) singular points. The singular point at the origin was selected as \(s_{b}\) and the Root Test indicated a comparison sequence of \(s_{2}\) through \(s_{100}\). Table 14 is the timing data, and Table 15, the Summary Report.
### Test Case 6: \(25\)-degree function with complex coefficients expanded at \(s_{29}\), \(\mathcal{G}=326\)
\[f_{6}(z,w) =-\left(\frac{311}{20}i+\frac{467}{30}\right)-\left(\frac{16}{3}i+ \frac{3}{2}\right)z+\left(\frac{1}{4}-\frac{3}{4}i\right)z^{3}+\left(\frac{9}{ 4}i+3\right)z^{4}-\left(\frac{3}{10}-\frac{21}{5}i\right)z^{5}\] \[-\left(1-\frac{7}{3}i\right)z^{7}-\frac{5}{3}z^{8}-\left(\frac{12 }{5}i+1\right)z^{9}-\left(2i+\frac{5}{4}\right)z^{11}-\left(\frac{1}{4}-3i \right)z^{12}\] \[-\left(\frac{27}{5}i+3\right)z^{13}+\left(\frac{19}{6}i+7\right) z^{14}+(3i-2)z^{15}\] \[+\biggr{(}-\left(\frac{59}{60}i+\frac{13}{10}\right)+(1-3i)z^{12 }+\left(4i+\frac{4}{5}\right)z^{13}\biggr{)}w\] \[+\biggr{(}-\left(\frac{71}{10}-\frac{1}{6}i\right)-\left(\frac{8 }{3}-6i\right)z^{2}+(-i-8)z^{8}\biggr{)}w^{6}\] \[+\biggr{(}\frac{15}{2}i+\frac{116}{15})w^{8}+(\left(\frac{12}{5} -\frac{15}{4}i\right)+\frac{2iz^{3}}{5}+\left(-7i-\frac{5}{2}\right)z^{10} \biggr{)}w^{9}\] \[+\biggr{(}\frac{13}{2}-\frac{32}{5}i\biggr{)}w^{10} \tag{33}\] \[+\biggr{(}-\left(\frac{19}{4}i+\frac{5}{3}\right)-\left(3-\frac{6 }{5}i\right)z^{3}+\left(\frac{3}{2}i-\frac{3}{4}\right)z^{13}\biggr{)}w^{13}\] \[+\biggr{(}\left(\frac{47}{15}i+\frac{155}{12}\right)-\left(\frac{ 1}{6}-\frac{1}{2}i\right)z^{2}-\left(\frac{1}{5}-\frac{5}{3}i\right)z^{10}+ \left(\frac{3}{2}i-2\right)z^{14}\biggr{)}w^{14}\] \[+\biggr{(}-\left(\frac{109}{12}-\frac{37}{12}i\right)-\frac{iz^{5 }}{2}\biggr{)}w^{18}\] \[+\biggr{(}\left(5i+\frac{6}{5}\right)-\left(\frac{1}{5}i+\frac{7 }{4}\right)z^{12}+\left(\frac{8}{5}i+7\right)z^{14}\biggr{)}w^{21}\] \[+\biggr{(}\left(\frac{2}{5}-\frac{13}{10}i\right)-\frac{8}{3}z^{ 7}+(-2i-1)z^{10}\biggr{)}w^{25}\]
This function has \(660\) finite singular points and was expanded at pole \(s_{29}\). Table 17 gives the timing data and Tables 18 and 19 detail the accuracy and ramification data.
\begin{table}
\begin{tabular}{|c|c|} \hline Singular point & Cycles \\ \hline \(s_{1}\) & \((9,27,[2,6],[2,1])\) \\ \(\{p_{i}\}_{19}\) & \([50,1]\) \\ \(\{\bar{s}\}_{4564}\) & \((2,[48,1])\) \\ \(s_{\infty}\) & \((14,13,2,[21,1]))\) \\ \hline \end{tabular}
\end{table}
Table 16. Test Case \(5\) Ramification profile, \(\mathcal{K}=4634\)
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline Singular points & Initial segments & Base gen. expansions & Comparison expansions & CT & IT \\ \hline (660,30 m) & (660,3.3 m) & (22,2 h) & (27,29 m) & 1.3 m & 3.4 m \\ \hline \end{tabular}
\end{table}
Table 17. Test Case \(6\) Timing Data at \(s_{29}\)
## 15. Conclusions
1. A numerical approach to the Newton polygon initially seems problematic. However, this work includes several error checking algorithms:
    1. **Cycle check-sum:** The sum of the conjugate class cycle sizes at any expansion center must equal the degree of the function in \(w\),
    2. **Global cycle check-sum:** The Riemann-Hurwitz sum must be a positive, even number,
    3. **Removal of coefficient zeros:** In order to minimize residual errors, coefficient zeros are removed from \(f\) and its iterates before processing by the Newton polygon algorithm,
    4. **Precision monitoring:** The precision of the calculations is monitored throughout the algorithms and the analysis is terminated if it drops below 900 digits,
    5. **Accuracy results:** The accuracy results of a set of expansions would not follow the log-linear trend with low variance if a branch was incorrectly computed.
These measures reduce the potential for errors. However, there exist functions which can usurp this numeric approach. These include the following scenarios:
    1. **Singular size limits:** Functions with very small singular sizes coupled with very large exponents sufficient to compromise a realistic level of precision achievable in a reasonable amount of time,
    2. **High polygon iterates:** Functions which entail enough polygon iterations to reduce the precision of the characteristic equations below a reasonable level of numeric precision; it is likely there are functions with arbitrarily many polygon iterates which would decrease the precision of the calculations beyond any effort to keep the results above a minimum level,
    3. **Mod function limits:** Functions with extremely large gaps between successive expansion terms would cause the modular operation of (15) to exceed the (arbitrary) maximum number of zero term iterations. In this case, an infinite power expansion would be identified as a fractional polynomial.
2. Identifying conjugate classes and expanding only a generator series for each class is an improvement over the standard approach of expanding all initial segments of a Newton polygon expansion. The greatest time saving occurs when the conjugate set at an expansion center is highly ramified. An example of this is the 50-degree function of Test Case 5, which required iterating only six generator series in 50 minutes rather than an estimated seven hours to generate the full set of 50.
3. The test cases were designed to stress-test the series comparison and integration tests with complex functions. With minor tuning of the methods, CLSPs and corresponding radii of convergence results agreed well with the estimated values determined by the Root Test when the separation tolerance was set to 1/10 the separation distance of the branch values. In cases where the comparison test and integration test succeeded in identifying CLSPs, both identified the same CLSP for each branch. In Test Case 3 where the comparison test failed, this was due to extremely small singular perimeters on the order of \(10^{-35}\) causing the base expansions to be evaluated extremely close to their radii of convergence. However the integration test succeeded with proper adjustments to the integration method.
4. The accuracy and order functions were found to agree well with the actual accuracies of the series as shown by the accompanying low variances of the fit function showing \(A(r_{f},o)\) to be a robust predictor of accuracy in the testing range.
5. The close agreement of the Root Test estimates with the computed CLSPs shows the Root Test to be a reliable means of estimating radii of convergence.
6. This work opens the subject to further research:
    1. Finding and analyzing functions which entail more polygon iterations,
    2. Finding and analyzing functions with fractional polynomial solutions,
    3. Further fine-tuning the algorithm as problem cases arise,
    4. Reducing execution time,
    5. Identifying problem functions and updating the method to accommodate them.
## Appendix A Branch types used in this paper
In the following branch descriptors, all exponents \(\dfrac{q}{p}\) of a series are presumed placed under a least common denominator \(p\).
**Type \(T\):** Power series with positive integer powers (Taylor series). These are 1-cycle branches.
**Type \(E\):** 1-cycle \(T\) branch with a removable singular point at its center.
**Type \(F_{p}^{q}\):**\(p\)-cycle branch with \(p>1\) of order \(q\) with non-negative exponents and lowest non-zero exponent \(\dfrac{q}{p}\) with \(q>p\). These branches are multi-valued consisting of \(p\) single-valued sheets with a finite tangent at the singular point. An example \(F_{2}^{3}\) series is \(z^{3/2}+z^{2}+\cdots\).
**Type \(V_{p}^{q}\):**\(p\)-cycle branch with \(p>1\) of order \(q\) with non-negative exponents and lowest non-zero exponent \(\dfrac{q}{p}\) with \(q<p\) and vertical tangent at center of expansion. An example \(V_{4}^{3}\) series is \(z^{3/4}+z^{2}+\cdots\).
**Type \(P_{p}^{q}\):**: \(p\)-cycle branch unbounded at center with \(p>1\) of order \(q\) having negative exponents with lowest negative exponent \(\dfrac{q}{p}\). An example 3-cycle \(P\) series of order \(-1\) is \(z^{-1/3}+z^{2}+\cdots\). An example 3-cycle \(P\) series of order \(-5\) is \(z^{-5/3}+z^{-1/3}+\cdots\).
**Type \(L^{q}\):** Branch with a Laurent series of order \(q\) as the Puiseux series. An example \(L^{-2}\) series is \(1/z^{2}+1/z+z^{2}+\cdots\).
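These descriptors can be checked mechanically. The following Python sketch (illustrative only; the function name and input convention are ours, not part of this work) infers the cycle number from the exponents of a series placed under a common denominator and returns the corresponding branch type.

```python
from fractions import Fraction
from functools import reduce
from math import lcm

def classify_branch(exponents):
    """Classify a Puiseux-series branch from the exponents of its non-zero terms.

    exponents : iterable of ints/Fractions, e.g. [Fraction(3, 2), 2] for
                z^(3/2) + z^2 + ...
    Returns a descriptor in the Type T / F_p^q / V_p^q / P_p^q / L^q notation.
    (Type E, a Taylor branch with a removable singular point at its center,
    cannot be told apart from Type T by exponents alone.)
    """
    exps = sorted(Fraction(e) for e in exponents)
    p = reduce(lcm, (e.denominator for e in exps), 1)   # cycle number
    if p == 1:                                          # single-valued branches
        q = int(exps[0])
        return "T" if q >= 0 else f"L^{q}"
    if exps[0] < 0:                                     # branch unbounded at its center
        q = int(exps[0] * p)
        return f"P_{p}^{q}"
    lead = next(e for e in exps if e != 0)              # lowest non-zero exponent q/p
    q = int(lead * p)
    return f"F_{p}^{q}" if q > p else f"V_{p}^{q}"

# Examples matching the descriptions above:
print(classify_branch([Fraction(3, 2), 2]))                  # F_2^3
print(classify_branch([Fraction(3, 4), 2]))                  # V_4^3
print(classify_branch([Fraction(-5, 3), Fraction(-1, 3)]))   # P_3^-5
print(classify_branch([-2, -1, 2]))                          # L^-2
```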
|
2305.10271
|
Epitaxial growth and electronic structure of Ruddlesden-Popper
nickelates ($ \mathrm{La}_{n+1}\mathrm{Ni}_{n}\mathrm{O}_{3n+1}, n=1-5 $)
|
We report the epitaxial growth of Ruddlesden-Popper nickelates, $
\mathrm{La}_{n+1}\mathrm{Ni}_{n}\mathrm{O}_{3n+1} $, with $ n $ up to 5 by
reactive molecular beam epitaxy (MBE). X-ray diffractions indicate high
crystalline quality of these films and transport measurements show strong
dependence on the $ n $ values. Angle-resolved photoemission spectroscopy
(ARPES) reveals the electronic structure of $
\mathrm{La}_{5}\mathrm{Ni}_{4}\mathrm{O}_{13} $, showing a large hole-like
pocket centered around the Brillouin zone corner with a $ (\pi, \pi) $
back-folded copy.
|
Zi Li, Wei Guo, Tingting Zhang, Jianhui Song, Tianyi Gao, Zhengbin Gu, Yuefeng Nie
|
2023-05-17T15:02:57Z
|
http://arxiv.org/abs/2305.10271v1
|
Epitaxial growth and electronic structure of Ruddlesden-Popper nickelates (La\({}_{n+1}\)Ni\({}_{n}\)O\({}_{3n+1}\), \(n\)=1-5)
###### Abstract
We report the epitaxial growth of Ruddlesden-Popper nickelates, La\({}_{n+1}\)Ni\({}_{n}\)O\({}_{3n+1}\), with \(n\) up to 5 by reactive molecular beam epitaxy (MBE). X-ray diffractions indicate high crystalline quality of these films and transport measurements show strong dependence on the \(n\) values. Angle-resolved photoemission spectroscopy (ARPES) reveals the electronic structure of La\({}_{5}\)Ni\({}_{4}\)O\({}_{13}\), showing a large hole-like pocket centered around the Brillouin zone corner with a (\(\pi\), \(\pi\)) back-folded copy.
## I Introduction
Layered perovskite nickelates have long been considered close analogs of high T\({}_{\rm c}\) cuprates and have been extensively studied in theoretical and experimental investigations in search of potential new superconductors [1, 2, 3]. Recently, remarkable progress has been achieved by Li and collaborators with the synthesis and observation of superconductivity in the Sr doped infinite-layer nickelate NdNiO\({}_{2}\)[4, 5], which has been reproduced by other groups [6, 7]. NdNiO\({}_{2}\) is synthesized by removing the apical oxygen atoms of the perovskite NdNiO\({}_{3}\) through a careful reduction process employing CaH\({}_{2}\)[4, 6].
In addition to the infinite-layer compound, the \(n\)=\(\infty\) member of the Ruddlesden-Popper (RP) phases (\(Ln_{n+1}\)Ni\({}_{n}\)O\({}_{3n+1}\), \(Ln\): rare earth elements), the other RP members are natural candidates to be explored for high T\({}_{\rm c}\) superconductivity. The homologous series of high T\({}_{\rm c}\) cuprates exhibits a clear dependence of the superconducting transition temperature on the number of layers in their layered structure [8, 9, 10], making it all the more tempting to investigate RP nickelates with higher \(n\) values. In the RP structure (Fig. 1), each block of \(n\) layers of corner-shared perovskite octahedra is separated by an \(R\)O-\(R\)O rock-salt structure, which breaks the bonding between adjacent perovskite slabs. By varying the
\(n\) value, the crystalline structure of \(R_{n+1}\)Ni\({}_{n}\)O\({}_{3n+1}\) can be tuned from two-dimensional (\(n\)=1) to three-dimensional (\(n\)=\(\infty\)), with the nominal valence state of Ni going from Ni\({}^{2+}\) to Ni\({}^{3+}\) and the \(d\) orbital occupation from \(d^{7}\) (\(n\)=\(\infty\)) to \(d^{8}\) (\(n\)=1). However, these are far from the nearly \(d^{9}\) state in cuprates. By using reduction methods to completely remove the oxygen atoms from the LaO layers in the perovskite blocks and rearrange the oxygen atoms in the (LaO)\({}_{2}\) blocks, the RP phases can be turned into the square-planar phases (\(R_{n+1}\)Ni\({}_{n}\)O\({}_{2n+2}\)) with average \(d\) orbital occupation ranging from \(d^{8}\) (\(n\)=1) to \(d^{9}\) (\(n\)=\(\infty\)). Starting from NdNiO\({}_{2}\), the reduction product of the \(n\)=\(\infty\) RP member, a superconducting dome spanning \(d^{8.75}\)-\(d^{8.875}\) was found by substituting Nd\({}^{3+}\) with Sr\({}^{2+}\). Since the valence state and \(d\) orbital occupation can also be tuned by changing the \(n\) value alone, this gives the opportunity to realize the optimal doping even without cation substitutions (Fig. 1). The RP phases with high \(n\) values are believed to be more promising as cuprate analogs since they have large orbital polarization and the T\({}_{\rm c}\) may be enhanced at certain \(n\) values like that in cuprates. Moreover, these reduced RP phase nickelates have many other interesting properties such as a dimensionality-controlled insulator-metal transition, stripe order, magnetic order, spin stripe order, charge order, magnetic order-disorder transitions, _etc_.
To date, however, the RP phase with the highest \(n\) value (except the perovskite, the \(n\)=\(\infty\) member) is La\({}_{4}\)Ni\({}_{3}\)O\({}_{10}\) (\(n\)=3). The corresponding square-planar phase (La\({}_{4}\)Ni\({}_{3}\)O\({}_{8}\)) has a Ni\({}^{1.33+}\) (\(d^{8.67}\)) configuration, which is still outside the superconducting dome of cuprates and nickelates. As such, synthesizing RP nickelates with high \(n\) values to explore the potential superconductivity in layered nickelates is highly desired, but it is rather challenging since these structures are metastable and only achievable by using a layer-by-layer deposition technique.
Here we report the synthesis of RP nickelate series (La\({}_{n+1}\)Ni\({}_{n}\)O\({}_{3n+1}\)) with \(n\) values up to 5 using MBE. X-ray diffraction indicates the high crystalline quality of these films and transport measurements show a strong dependence on the \(n\) value. The electronic structure of the \(n\)=4 member is also investigated.
## II MBE Growth, Structure and Transport Measurements of La\({}_{n+1}\)Ni\({}_{n}\)O\({}_{3n+1}\) Thin Films
RP La\({}_{n+1}\)Ni\({}_{n}\)O\({}_{3n+1}\) thin films were epitaxially grown on (001) LaAlO\({}_{3}\) (LAO) single-crystalline substrates from MTI at 550 \({}^{\circ}\)C.
To obtain high quality layered RP epitaxial films, especially the non-thermal-equilibrium phases with high \(n\) values (\(n>\)3), a layer-by-layer deposition using a shutter-control method is needed. In addition to the requirement of correct stoichiometry (La:Ni ratio), precise monolayer dosages of the LaO and NiO\({}_{2}\) layers are also extremely critical for the synthesis of high crystalline quality RP films, especially for high \(n\)-value RP members, for which an imperfect layer dosage cannot be accommodated through a self-assembled process. Typically, a shutter-control layer-by-layer deposition of SrTiO\({}_{3}\), BaTiO\({}_{3}\) and many other perovskite oxides exhibits a nearly perfect sinusoidal curve pattern. The diffraction intensity saturates near the end of the growth of a full atomic monolayer, providing clear signatures for optimizing the shutter time for each source. However, the RHEED patterns of LaNiO\({}_{3}\), LaAlO\({}_{3}\) and many other La-related perovskites do not show ideal sinusoidal curve patterns in a shutter-control growth mode, which precludes the calibration of monolayer dosages with high precision. Instead, precise shutter times for the La and Ni sources were calibrated by optimizing the growth parameters of the simplest RP member, LaNiO\({}_{3}\), using a co-deposition method [25]. After a rough calibration of the La and Ni flux ratio to be 1:1 using a quartz crystal microbalance (QCM), the shutters of the La and Ni sources were opened to deposit LaO and NiO\({}_{2}\) layers simultaneously.
During the calibration process, the intensity of the (11) diffractions in the RHEED patterns shows a clear oscillation pattern (Fig. 2a) and each period of the oscillations corresponds to the growth of about one unit cell. In addition to the oscillations, the overall average intensity is sensitive to the film stoichiometry and drifts to higher (lower) intensity when Ni (La) is rich. By adjusting the source temperatures to optimize the beam flux ratio to be 1:1, a stable oscillation pattern can be obtained, yielding precise monolayer dosages for the LaO and NiO\({}_{2}\) layers. By extracting the oscillation period from more than 10 oscillation cycles, precise shutter times for the La and Ni sources were obtained. Based on these monolayer dosages calibrated by the co-deposition method, RP nickelates of any \(n\) value can be synthesized by depositing LaO and NiO\({}_{2}\) layers alternately in the desired sequences using a shutter-control mode. During the shutter-control growth process, the RHEED electron beam was turned off and the substrates were rotated to increase the uniformity of the films.
As shown in Fig. 2b, high resolution XRD measurements confirm the crystalline quality of the nickelate films. All films exhibit clear and sharp (00L) diffraction peaks in the 2\(\theta\)-\(\omega\) scans for all superlattice peaks even though the films are rather thin (only 20 formula units for LaNiO\({}_{3}\) and 5 formula units for all other RP films). This indicates that the combination of co-deposition calibration and shuttered growth is an effective and precise method to grow RP phase nickelates. All the films are fully strained to the substrate and a representative reciprocal space mapping (RSM) of the \(n\)=4 compound is shown in Fig. 2d. The extracted \(c\) lattice constants are listed in Table 1. The LaNiO\({}_{3}\) films are atomically smooth, showing a clear step-and-terrace feature in the atomic force microscopy (AFM) image, while the RP phases with (LaO)\({}_{2}\) layers are less smooth and only show a weak step-and-terrace feature (Fig. 2c). Nonetheless, the RP films are also of high crystalline quality. Although we only demonstrate here the growth of RP
phases with \(n\) up to 5, members with higher \(n\) values should also be synthesizable using this method.
Transport properties of the La\({}_{n+1}\)Ni\({}_{n}\)O\({}_{3n+1}\) films show a clear dependence on the \(n\) value (Fig. 3). The \(n\)=1 member is fully insulating at room temperature, which is consistent with the literature. The \(n\)=2 compound is sensitive to the detailed crystalline structure or the oxygen stoichiometry, and whether it is metallic or insulating is still controversial [26-34]. In our epitaxial thin film case, the higher crystalline quality \(n\)=2 compound exhibits an insulating temperature dependence. We also notice that the crystalline quality has a strong impact on the transport properties: some \(n\)=2 films exhibit broad x-ray diffraction peaks and metallic behavior (not shown). For the \(n\)=3 compound, our films show metallic behavior down to the lowest measured temperature and do not show any upturn or kink (signature of ordering) as reported previously [26, 28, 30, 32, 33, 34]. It is most likely that the compressive epitaxial strain imposed by the substrate increases the electron hopping integral and suppresses the gap opening. For \(n\)\(\geq\)3, La\({}_{n+1}\)Ni\({}_{n}\)O\({}_{3n+1}\) are also metallic and the resistivity monotonically decreases as \(n\) increases, which could be explained by the higher electron hopping integral due to the increase of dimensionality and the compressive strain imposed by the substrate.
## III Electronic Structure of La\({}_{5}\)Ni\({}_{4}\)O\({}_{13}\) films
Following the growth, the La\({}_{5}\)Ni\({}_{4}\)O\({}_{13}\) films were transferred through ultrahigh vacuum (\(<\)1\(\times\)10\({}^{-10}\) Torr) into a high-resolution ARPES system with a VG Scienta R8000 electron analyzer and a VUV5000 helium plasma discharge lamp. ARPES measurements were performed on La\({}_{5}\)Ni\({}_{4}\)O\({}_{13}\) at 9 K using He I\(\alpha\) (h\(\nu\) = 21.2 eV) photons with an energy resolution \(\Delta\)E of 11 meV, as shown in Fig. 4. The Fermi surface shows a gapless large hole-like pocket centered at the Brillouin zone corner, which is similar to LaNiO\({}_{3}\)[25], La\({}_{4}\)Ni\({}_{3}\)O\({}_{10}\)[35] and other metallic nickelates. The E vs K maps taken along cut I and cut II (Fig. 4c and 4d)
show no gap around the whole momentum space, similar to that in LaNiO\({}_{3}\)[25] and La\({}_{4}\)Ni\({}_{3}\)O\({}_{10}\)[35] but distinct from the pseudogap feature observed in \(R_{2-x}\)Sr\({}_{x}\)NiO\({}_{4}\) (R=Nd, Eu)[37]. Moreover, there exists a clear (\(\pi\), \(\pi\)) back-folded copy of the hole-like band (marked by the dotted green line), which is similar to cuprates[38, 39, 40, 41] and La\({}_{4}\)Ni\({}_{3}\)O\({}_{10}\)[35] but different from LaNiO\({}_{3}\)[25]. This band folding is consistent with the superlattice diffractions observed in the low energy electron diffraction (LEED) pattern (Fig. 4a) and is most likely due to the lattice reconstruction driven by NiO\({}_{6}\) octahedral rotations. Also, the energy gap and kink feature observed in the \(n\)=3 compound[35] are absent in our ARPES and transport measurements on the \(n\)=4 compound, which may be due to the enhancement of the electron hopping integral caused by the increase of dimensionality and the compressive strain imposed by the substrate.
## IV Discussion and Conclusion
One of the main interests in RP nickelates is to engineer the \(d\) orbital occupation for high T\({}_{\text{c}}\) superconductivity. By removing the oxygen atoms from the LaO layers in the perovskite blocks and rearranging the oxygen atoms in the (LaO)\({}_{2}\) blocks of La\({}_{5}\)Ni\({}_{4}\)O\({}_{13}\) and La\({}_{6}\)Ni\({}_{5}\)O\({}_{16}\), the average Ni valence states in the reduced phases La\({}_{5}\)Ni\({}_{4}\)O\({}_{10}\) and La\({}_{6}\)Ni\({}_{5}\)O\({}_{12}\) are Ni\({}^{1.25+}\) and Ni\({}^{1.2+}\) with an average 3\(d\) filling of \(d^{8.75}\) and \(d^{8.8}\), respectively. With no need for cation substitution, these layered nickelates are within the superconducting dome of cuprates[36] and infinite-layer nickelates[4, 6]. Such a reduction process has been applied in the synthesis of LaNiO\({}_{2}\), NdNiO\({}_{2}\) and La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) by annealing in an Ar/H\({}_{2}\) atmosphere[16] or vacuum sealed with CaH\({}_{2}\) powder[42]. However, the reduction process is sensitive to many factors and it is very challenging to obtain superconducting phases. The reduction of these films needs to be further explored in the future and is out of the scope of this work. Also, as Sr doped LaNiO\({}_{2}\) is not superconducting [4], whether the other La-based RP series would be superconducting is an interesting question to be investigated. Moreover, the synthesis method for the La-based RP series demonstrated in this work can be applied to Nd-based RP series.
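As a quick arithmetic check of the fillings quoted above, the nominal Ni valence and 3\(d\) occupation follow from simple charge counting with La\({}^{3+}\) and O\({}^{2-}\). The short Python sketch below (illustrative only, not part of the experimental work) reproduces the \(d^{8.75}\) and \(d^{8.8}\) values for the reduced \(n\)=4 and \(n\)=5 phases.

```python
def nickel_filling(n, oxygen_per_formula):
    """Nominal Ni valence and 3d occupation for La_{n+1}Ni_nO_x (La 3+, O 2-)."""
    charge = 2 * oxygen_per_formula - 3 * (n + 1)   # positive charge shared by n Ni
    valence = charge / n
    d_filling = 10 - valence                        # Ni: 3d^(10 - valence)
    return valence, d_filling

for n in (1, 3, 4, 5):
    v_rp, d_rp = nickel_filling(n, 3 * n + 1)       # Ruddlesden-Popper phase O_{3n+1}
    v_red, d_red = nickel_filling(n, 2 * n + 2)     # reduced square-planar phase O_{2n+2}
    print(f"n={n}: RP Ni^{v_rp:.2f}+ (d^{d_rp:.3g}), reduced Ni^{v_red:.2f}+ (d^{d_red:.3g})")
```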
In summary, we synthesize a series of RP lanthanum nickelate thin films with \(n\) up to 5 by using reactive MBE. In particular, the metastable \(n\)=4 and \(n\)=5 compounds are synthesized for the first time and are only achievable by using a layer-by-layer deposition method. A combination of co-deposition and shuttered growth methods was employed to achieve precise monolayer control, allowing the synthesis of high quality RP members with high \(n\) values. Transport measurements show an increase of the conductivity with the \(n\) value. The La\({}_{5}\)Ni\({}_{4}\)O\({}_{13}\) films show a large hole-like pocket centered at the zone corner with a (\(\pi\), \(\pi\)) back-folded copy, similar to high T\({}_{\text{c}}\) cuprates and other layered nickelates. The synthesis of RP nickelates with high \(n\) values provides a great platform to explore high T\({}_{\text{c}}\) superconductivity upon further reduction and charge carrier doping.
**ACKNOWLEDGMENTS**
This work was supported by the National Natural Science Foundation of China (Grant Nos. 11774153, 11861161004, 51672125, and 51772143) and the Fundamental Research Funds for the Central Universities (Grant No. 0213-14380167).
**Availability of data:** The data that support the findings of this study are available from the corresponding author upon reasonable request.
Figure 1: Crystal structures of La\({}_{n+1}\)Ni\({}_{n}\)O\({}_{3n+1}\) and La\({}_{n+1}\)Ni\({}_{n}\)O\({}_{2n+2}\). The inset shows the phase diagram of Sr doped infinite-layer nickelates reported in Refs. [5, 6] and the \(3d\) orbital occupation of the square-planar phase La\({}_{n+1}\)Ni\({}_{n}\)O\({}_{2n+2}\) with different \(n\) values.
Figure 2: (a) RHEED patterns and RHEED intensity oscillations obtained by monitoring the (11) diffraction (red square) during the growth of LaNiO\({}_{3}\). The average RHEED intensity increases when Ni is rich and stabilizes with a flux ratio of La:Ni=1:1. (b) XRD 2\(\theta\)-\(\omega\) scan patterns of Ruddlesden-Popper LNO films grown on (001) LaAlO\({}_{3}\) substrates. The thickness of LaNiO\({}_{3}\) is 20 formula units and the thicknesses of the other RP films are all 5 formula units. The asterisks denote diffraction peaks from the substrate. (c) AFM image of the surface morphology of a 5 u.c. La\({}_{5}\)Ni\({}_{4}\)O\({}_{13}\)/LaAlO\({}_{3}\) (001) sample. (d) Reciprocal space map around the (103) diffraction peak of the substrate.
Figure 3: Resistivity vs. temperature curves of La\({}_{n+1}\)Ni\({}_{n}\)O\({}_{3n+1}\) films. The \(n\)=1 films are insulating and beyond the measurement limit.
Figure 4: (a) LEED pattern taken with an incident electron beam energy of 100 eV for La\({}_{5}\)Ni\({}_{4}\)O\({}_{13}\) films showing superlattice diffractions (pointed by yellow arrows). (b) Symmetrized Fermi surface of first Brillouin zone showing (\(\pi\), \(\pi\)) back-folded large hole-like pocket centered at the zone corner. The dashed lines represent the folded Brillouin zone. (c, d) E vs K maps taken along cut I and cut II shown in panel (b).
|
2306.16350
|
Low-ground/High ground capacity regions analysis for Bosonic Gaussian
Channels
|
We present a comprehensive characterization of the interconnections between
single-mode, phaseinsensitive Gaussian Bosonic Channels resulting from channel
concatenation. This characterization enables us to identify, in the parameter
space of these maps, two distinct regions: low-ground and high-ground. In the
low-ground region, the information capacities are smaller than a designated
reference value, while in the high-ground region, they are provably greater. As
a direct consequence, we systematically outline an explicit set of upper bounds
for the quantum and private capacity of these maps, which combine known upper
bounds and composition rules, improving upon existing results.
|
Farzad Kianvash, Marco Fanizza, Vittorio Giovannetti
|
2023-06-28T16:26:42Z
|
http://arxiv.org/abs/2306.16350v2
|
# Low-ground/High ground capacity regions analysis for Bosonic Gaussian Channels
###### Abstract
We present a comprehensive characterization of the interconnections between single-mode, phase-insensitive Gaussian Bosonic Channels resulting from channel concatenation. This characterization enables us to identify, in the parameter space of these maps, two distinct regions: low-ground and high-ground. In the low-ground region, the information capacities are smaller than a designated reference value, while in the high-ground region, they are provably greater. As a direct consequence, we systematically outline an explicit set of upper bounds for the quantum and private capacity of these maps, which combine known upper bounds and composition rules, improving upon existing results.
pacs: 03.67.-a, 03.67.Ac, 03.65.Ta.
## I Introduction
The efficiency of classical communication lines can be expressed using a single, simple formula [1; 2]. However, when it comes to quantum communication lines (quantum channels) that utilize quantum systems as information carriers instead of classical signals [3; 4; 5; 6], this simplification no longer holds. Instead, a multitude of different and computationally challenging capacity functionals are required to fully assess the quality of these transmission lines. For instance, the classical capacity of a quantum channel characterizes the optimal rate of classical bits that can be reliably transferred per channel use; the quantum capacity instead provides the optimal rate of transmitted qubits, and the private capacity, the optimal rate of bits that can be transmitted privately along the channel. In our study, we specifically focus on a special class of quantum communication lines known as Gaussian Bosonic Channels (GBCs), which are commonly employed to model communication procedures utilizing the electromagnetic field as the carrier of transmitted messages [7; 8; 9; 10]. Despite significant progress made in recent years, the analysis of GBCs still presents complex challenges. Specifically, computing the exact values of certain information capacities for these maps requires optimization techniques that remain difficult to tackle. In principle, calculating these quantities necessitates taking the limit of properly regularized entropic functionals, considering the potential utilization of entanglement across multiple channel uses [11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29]. Given these complexities, the derivation of upper and lower bounds for the capacities of significant channels represents crucial progress in the field.
An established strategy for upper bounding information capacities involves utilizing data processing inequalities. For instance, it is possible to obtain an upper bound for the capacity of a specific channel by expressing it as a concatenation of channels whose capacities are already known or upper bounded [30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41]. In this article, we employ this method to enhance the previously established bounds in the literature [39; 40; 41; 42] for the quantum and private capacity of single-mode, phase-insensitive Gaussian Bosonic Channels (PI-GBCs). To achieve this result, we present a detailed decomposition of the parameter space of PI-GBC maps into regions encompassing all channels that can simulate a given channel through concatenation with other PI-GBC elements.
The structure of the article is as follows: In Sec. II we introduce the fundamental concepts and notation used in this article. We start by giving an overview of continuous variable quantum channels and establish crucial notation. Then, we briefly discuss various quantum capacities of a quantum channel, namely quantum capacity, private capacity, two-way quantum capacity, and secret-key capacity. Following that, we introduce phase-insensitive one-mode GBCs and establish specific notation to aid us throughout the manuscript. In Sec. III we provide a concise review of the current state-of-the-art bounds for the quantum capacity of GBCs. We discuss the main techniques used to derive these bounds, including the use of data processing inequalities and channel concatenation. We also highlight the key challenges that remain in computing the exact quantum capacity for these channels. In Sec. IV we study the parameter space of PI-GBCs in terms of channel concatenation. We present a detailed decomposition of the parameter space of PI-GBC maps into regions encompassing all channels that can simulate a given channel through concatenation with other PI-GBC elements. This analysis allows us to identify the channels that can be used to upper bound the quantum capacity of a given PI-GBC. In Sec. VI we derive new upper bounds for the quantum and private capacity of single-mode, phase-insensitive Gaussian Bosonic Channels (PI-GBCs) by employing the channel concatenation method discussed in Sec. IV. We demonstrate that our new bounds improve upon the previously established bounds in the literature, providing a more accurate estimation of the capacities for these channels. Sec. VII concludes the manuscript.
## II Preliminaries
A quantum communication line connecting two distant parties can be seen as a physical transformation that associates the states of a system \(A\), representing the input messages of the model, with the states of a second system \(B\), representing the associated output messages. At the mathematical level such an object is described as a linear completely positive trace preserving (LCPTP) map \(\Lambda:\mathcal{B}_{1}(\mathcal{H}_{A})\mapsto\mathcal{B}_{1}(\mathcal{H}_{B})\) that links the sets of trace-class operators of the (possibly infinite dimensional) Hilbert spaces \(\mathcal{H}_{A}\), \(\mathcal{H}_{B}\) associated with \(A\) and \(B\) respectively [5; 6]. By the Stinespring representation we can always express \(\Lambda\) as a reduction of an isometry \(\hat{V}\) that connects \(\mathcal{H}_{A}\) to an extension \(\mathcal{H}_{BE}\) of \(\mathcal{H}_{B}\), i.e. \(\Lambda(\cdots)=\mathrm{Tr}_{E}(\hat{V}\cdots\hat{V}^{\dagger})\,,\) with \(\mathrm{Tr}_{E}\) being the partial trace with respect to \(E\). Such a construction allows us to introduce the notion of the complementary channel \(\tilde{\Lambda}:\mathcal{B}_{1}(\mathcal{H}_{A})\mapsto\mathcal{B}_{1}( \mathcal{H}_{E})\) defined as \(\tilde{\Lambda}(\cdots):=\mathrm{Tr}_{B}(\hat{V}\cdots\hat{V}^{\dagger})\), which can be interpreted as the transformation induced on the environment of the communication line by the signaling process [43].
Similarly to what happens in classical information theory, the efficiency of a quantum channel \(\Lambda\) can be gauged in terms of a series of figures of merit (the quantum capacities of the channel) that evaluate the optimal ratio between the amount of data which can be sent reliably through the channel and the total amount of redundancy needed to achieve such a goal [3; 4; 5; 6]. In this paper we focus on special instances of such quantities which, in the context of continuous variable quantum information processing (see next section), admit optimal finite values even when allowing unbounded energy resources, i.e. the quantum capacity \(Q(\Lambda)\), the private capacity \(P(\Lambda)\), the two-way quantum capacity \(Q_{2}(\Lambda)\), and the secret-key capacity \(K(\Lambda)\)[6]. They are hierarchically ordered via the inequalities
\[K(\Lambda)\geq Q_{2}(\Lambda),P(\Lambda)\geq Q(\Lambda)\;. \tag{1}\]
The smallest among such terms, i.e. \(Q(\Lambda)\), measures the maximum rate at which the communication line can transmit quantum information reliably over asymptotically many uses of the channel [11; 12; 13]; \(P(\Lambda)\) is instead the maximum rate at which we can transmit classical messages through the channel \(\Lambda\) in such a way that an external party monitoring the line will not be able to read such messages [13]; \(Q_{2}(\Lambda)\) represents the maximum quantum information transmission rate attainable by allowing the communicating parties to use (arbitrary) distillation protocols through the use of a classical side-channel [44]; and finally the largest of these quantities, i.e. \(K(\Lambda)\), describes the maximum rate at which two parties can use the channel to distill secret random strings of bits.
Despite being operationally well defined, no universal formula is known that allows one to explicitly compute the values of \(Q_{2}(\Lambda)\) and \(K(\Lambda)\) as entropic functionals. On the contrary, such characterizations exist for \(Q(\Lambda)\) and for \(P(\Lambda)\), based on regularized optimizations of, respectively, the output coherent information for \(Q\), and the Holevo information gap between \(\Lambda\) and its complementary map \(\tilde{\Lambda}\), for \(P\). Even in these cases, however, the explicit computation of \(Q(\Lambda)\) and \(P(\Lambda)\) is typically rather challenging and has been carried out only for a very limited set of models (in particular for the special classes of degradable and anti-degradable maps). A possible way to circumvent this problem is to make use of data-processing inequalities. Specifically, a simple resource counting argument can be invoked to observe that, if a quantum channel \(\Lambda\) can be expressed in terms of an LCPTP map \(\Lambda^{\prime}\) via the concatenated action of two other LCPTP linear maps \(\Lambda_{1}\), \(\Lambda_{2}\), then the following relations hold
\[\Lambda=\Lambda_{2}\circ\Lambda^{\prime}\circ\Lambda_{1}\quad \Longrightarrow\quad\mathcal{K}(\Lambda)\leq\mathcal{K}(\Lambda^{\prime})\;, \tag{2}\]
where hereafter we shall use the symbol \(\mathcal{K}\) to represent an arbitrary capacity (e.g. \(Q\), \(P\), \(Q_{2}\), or \(K\)), see e.g. [4; 5; 6]. Accordingly, if the capacity values of \(\Lambda_{1}\) or \(\Lambda_{2}\) are known, or if upper bounds for those quantities are available, we can then use (2) to constrain the performance of \(\Lambda\). Alternatively, if instead the capacity of \(\Lambda\) is known or if a lower bound for it is available, we can use (2) to provide lower bounds for those of \(\Lambda_{1}\) and \(\Lambda_{2}\). In what follows, we shall make use of this simple idea to improve the capacity analysis of a special class of quantum maps which plays an important role in quantum information theory, that is the Bosonic Gaussian Channels set, whose properties are briefly reviewed in the next subsection.
### Bosonic Gaussian Channels
Bosonic Gaussian Channels (BGCs) model a vast collection of noise processes that tamper with communication schemes relying on the use of e.m. signals [4; 9]. Formally they can be introduced as a special set of LCPTP transformations which act on the Hilbert space \(L^{2}(\mathbb{R}^{n})\) of the square integrable functions, representing the states of \(n\) independent harmonic oscillators, each corresponding to an individual mode of the field. Indicating with \(\mathbf{\hat{r}}:=(\hat{x}_{1},\hat{p}_{1},...,\hat{x}_{n},\hat{p}_{n})^{ \mathrm{T}}\) the set of canonical position and momentum operators of the modes, the action of a BGC map can be assigned in terms of the linear mappings it induces on the first and second canonical momenta of the quantum states \(\hat{\rho}\in\mathfrak{S}(L^{2}(\mathbb{R}^{n}))\) of the model, i.e. the \(2n\)-dimensional real vector \(\mathbf{m}:=\mathrm{Tr}(\mathbf{\hat{r}}\hat{\rho})\) and the \(2n\times 2n\) real matrix \(V:=\mathrm{Tr}\Big{(}\{(\mathbf{\hat{r}}-\mathbf{m}),(\mathbf{\hat{r}}- \mathbf{m})^{\mathrm{T}}\}\hat{\rho}\Big{)}\).
As far as the present work is concerned, we shall limit the analysis to a special subset of single-mode (\(n=1\)) phase-insensitive GBCs (or PI-GBCs in brief) formed by the maps \(\Phi_{x,M}\) characterized by two non-negative noise parameters \(x,M\geq 0\), whose action on the system is fully
determined by the transformations
\[\begin{cases}&\mathbf{m}\xrightarrow{\Phi_{x,M}}\mathbf{m}^{\prime}=\sqrt{x}\; \mathbf{m}\;,\\ \\ &V\xrightarrow{\Phi_{x,M}}V^{\prime}=xV+(2M+|1-x|)I_{2}\;,\end{cases} \tag{3}\]
with \(\mathbf{m}^{\prime}\) and \(V^{\prime}\) being respectively the first and second momenta of the output state \(\Phi_{x,M}(\hat{\rho})\), and with \(I_{2}\) the \(2\times 2\) identity matrix. For \(x=\eta\in[0,1]\), and \(M=(1-\eta)N\) with \(N\geq 0\), the mapping (3) corresponds to the thermal attenuator channel \(\mathcal{E}_{\eta,N}\) which describes the interaction of a single mode of the e.m. field with an external thermal reservoir with \(N\) mean photon number, mediated by a beam-splitter coupling of transmissivity \(\eta\); for \(x=g\geq 1\) and \(M=(g-1)N\) with \(N\geq 0\) instead, \(\Phi_{x,M}\) reduces to a thermal amplifier \(\mathcal{A}_{g,N}\) which describes the interaction of the input mode with a thermal bath of mean photon number \(N\), through a two-mode squeezing operator with gain parameter \(g\); finally for \(x=1\) and \(M=N\geq 0\), \(\Phi_{x,M}\) reduces to the additive classical noise GBC \(\mathcal{N}_{N}\), i.e.
\[\begin{cases}&\mathcal{E}_{\eta,N}:=\Phi_{x=\eta,M=(1-\eta)N},\quad\eta\in[0,1],N\geq 0\;,\\ &\mathcal{N}_{N}:=\Phi_{x=1,M=N},\qquad\qquad\qquad N\geq 0\;.\\ &\mathcal{A}_{g,N}:=\Phi_{x=g,M=(g-1)N},\qquad g\geq 1,N\geq 0\;.\end{cases} \tag{4}\]
It is easy to check that the maps \(\Phi_{x,M}\) are closed under concatenation, specifically given \((x_{1},M_{1}),(x_{2},M_{2})\in\mathbb{R}_{+}^{2}\) we have that
\[\Phi_{x_{3},M_{3}}=\Phi_{x_{2},M_{2}}\circ\Phi_{x_{1},M_{1}}\;, \tag{5}\]
is also a channel of the model with noise parameters \((x_{3},M_{3})\in\left(\mathbb{R}^{+}\right)^{2}\) fulfilling the identities
\[\begin{cases}&x_{3}=x_{2}x_{1}\;,\\ &\\ &M_{3}=M_{2}+x_{2}M_{1}+\frac{|x_{2}-1|+x_{2}|x_{1}-1|-|x_{2}x_{1}-1|}{2}\;, \end{cases} \tag{6}\]
which we express in terms of attenuators and amplifiers in Table 1.
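For numerical bookkeeping, the parametrization (4) and the composition rule (6) are straightforward to implement; the following Python sketch (function names are ours, used only for illustration) converts attenuators and amplifiers to their \((x,M)\) parameters and composes two maps.

```python
def attenuator(eta, N):
    """(x, M) parameters of the thermal attenuator E_{eta,N}, cf. Eq. (4)."""
    return eta, (1.0 - eta) * N

def amplifier(g, N):
    """(x, M) parameters of the thermal amplifier A_{g,N}, cf. Eq. (4)."""
    return g, (g - 1.0) * N

def compose(map2, map1):
    """Parameters of Phi_{x2,M2} o Phi_{x1,M1} (map1 acts first), Eqs. (5)-(6)."""
    x2, M2 = map2
    x1, M1 = map1
    x3 = x2 * x1
    M3 = M2 + x2 * M1 + (abs(x2 - 1) + x2 * abs(x1 - 1) - abs(x2 * x1 - 1)) / 2
    return x3, M3

# Example (rule C_1 of Table 1): two attenuators compose into an attenuator
# with eta3 = eta2*eta1 and (1-eta3)N3 = (1-eta2)N2 + (1-eta1)eta2*N1.
x3, M3 = compose(attenuator(0.8, 0.5), attenuator(0.9, 1.0))
eta3, N3 = x3, M3 / (1.0 - x3)
print(eta3, N3)   # 0.72 and 0.18/0.28
```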
## III A brief review on PI-GBC capacities
We start by recalling that for \(N\) sufficiently large the channels \(\mathcal{E}_{\eta,N}\), \(\mathcal{N}_{N}\), and \(\mathcal{A}_{g,N}\) are Entanglement-Breaking (EB) [5; 9], specifically
\[\left\{\begin{array}{l}\mathcal{E}_{\eta,N}\equiv\text{EB}\iff\eta\in[0,1] \;,N\geq\frac{\eta}{1-\eta}\;,\\ \mathcal{N}_{N}\equiv\text{EB}\iff N\geq 1\;,\\ \mathcal{A}_{g,N}\equiv\text{EB}\iff g\geq 1\;,N\geq\frac{1}{g-1}\;.\end{array}\right. \tag{7}\]
In the notation (3) this translates into the condition
\[\Phi_{x,M}\equiv\text{EB}\qquad\Longleftrightarrow\qquad(x,M)\in\mathbb{EB }\;, \tag{8}\]
with the set
\[\mathbb{EB}:=\{(x,M)\in\left(\mathbb{R}^{+}\right)^{2}:M\geq M_{\text{EB}}(x) \}\;, \tag{9}\]
defined by the threshold function
\[M_{\text{EB}}(x):=\min\{1,x\}\;, \tag{10}\]
(see Fig. 1). By construction, EB maps have zero values for all capacities, i.e.
\[(x,M)\in\mathbb{EB}\quad\Longrightarrow\quad\mathcal{K}(\Phi_{x,M})=0\;, \tag{11}\]
where the implication can be reversed for the two-way capacity and for the secret-key capacity [45], meaning that \(\mathbb{EB}\) corresponds to the largest parameter region for which \(Q_{2}\) and \(K\) are null. The case of \(Q\) and \(P\) is different, as it is known that these capacities also vanish for channels which do not belong to \(\mathbb{EB}\). In particular one has that
\[\begin{cases}Q(\mathcal{E}_{\eta,N})=P(\mathcal{E}_{\eta,N})=0\;,\quad\eta\in [0,1]\;,N\geq\frac{2\eta-1}{2(1-\eta)}\;,\\ \\ Q(\mathcal{A}_{g,N})=P(\mathcal{A}_{g,N})=0\;,\quad g\geq 1\;,N\geq\frac{1}{2(g-1)}\;, \end{cases} \tag{12}\]
or
\[(x,M)\in\mathbb{AD}\quad\Longrightarrow\quad Q(\Phi_{x,M})=P(\Phi_{x,M})=0\;, \tag{13}\]
with
\[\mathbb{AD}\;:=\;\{(x,M)\in\left(\mathbb{R}^{+}\right)^{2}:M\geq M_{\text{AD }}(x)\}\;, \tag{14}\]
\[M_{\text{AD}}(x)\;:=\;\min\{x-1/2,1/2\}\;, \tag{15}\]
corresponding to the Anti-Degradability (AD) region for the single-mode PI-GBCs [46; 47; 48; 49] (see Fig. 1). Notice that at present it is still not clear whether or not \(\mathbb{AD}\) is the largest set where \(Q\) and/or \(P\) vanish. What is known are the exact values of these quantities for the special case where \(M=0\). Specifically in the case of the quantum and private capacities we have [50]
\[P(\Phi_{x,0})=Q(\Phi_{x,0})=\max\{0,\log_{2}\frac{x}{|1-x|}\}\;, \tag{16}\]
while for the secret-key and two-way capacities it holds [42]
\[K(\Phi_{x,0})=Q_{2}(\Phi_{x,0})=\log_{2}(\tfrac{\max\{1,x\}}{|1-x|})\;, \tag{17}\]
(notice that for amplifiers, i.e. \(x\geq 1\), Eqs. (16) and (17) coincide).
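The thresholds (10) and (15) and the zero-noise capacities (16)-(17) translate directly into code; the sketch below (an illustrative helper, not part of the original analysis) tests membership in the EB and AD regions and evaluates the exact \(M=0\) capacities.

```python
from math import log2

def is_entanglement_breaking(x, M):
    """EB condition (8)-(10): M >= M_EB(x) = min(1, x)."""
    return M >= min(1.0, x)

def is_antidegradable(x, M):
    """AD condition (13)-(15): M >= M_AD(x) = min(x - 1/2, 1/2); there Q = P = 0."""
    return M >= min(x - 0.5, 0.5)

def capacities_zero_noise(x):
    """Exact capacities of Phi_{x,0}, Eqs. (16)-(17)."""
    if x == 1.0:                     # identity channel: formally unbounded
        return dict.fromkeys(("Q", "P", "Q2", "K"), float("inf"))
    q = max(0.0, log2(x / abs(1.0 - x)))
    k = log2(max(1.0, x) / abs(1.0 - x))
    return {"Q": q, "P": q, "Q2": k, "K": k}

print(is_entanglement_breaking(0.6, 0.7))   # True: attenuator with N >= eta/(1-eta)
print(is_antidegradable(0.6, 0.2))          # True: Q = P = 0 although the map is not EB
print(capacities_zero_noise(0.8))           # Q = P = 2, Q2 = K = log2(5)
```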
### Upper bounds
State-of-the-art upper bounds for the capacities of thermal attenuators and amplifiers are given in Refs. [39; 40; 41; 42]. Specifically, in [42] it has been shown that, outside the EB region (8), all the quantum capacities \(\mathcal{K}\) of thermal attenuators and amplifiers can be bounded as follows
\[\begin{cases}\mathcal{K}(\mathcal{E}_{\eta,N})\leq\mathcal{K}^{\rm att}_{\rm PLOB }(\eta,N):=-h(N)-\log_{2}((1-\eta)\eta^{N})\;,\\ \\ \mathcal{K}(\mathcal{A}_{g,N})\leq\mathcal{K}^{\rm amp}_{\rm PLOB}(g,N):=-h(N) +\log_{2}(\frac{g^{N+1}}{g-1})\;,\end{cases} \tag{18}\]
with
\[h(x):=(x+1)\log_{2}(x+1)-x\log_{2}(x)\;, \tag{19}\]
(see also Ref. [51] for a strong-converse extension of these inequalities). Partial improvements w.r.t. the above inequalities for the quantum and private capacities have been reported in Refs. [39; 40], where special instances of the decomposition rules (\(\mathbf{C_{3.1}}\)) and (\(\mathbf{C_{3.2}}\)) were employed to observe that outside the AD region (12) (i.e. for \(N\leq\frac{2\eta-1}{2(1-\eta)}\) for \(\mathcal{E}_{\eta,N}\) and \(N\leq\frac{1}{2(g-1)}\) for \(\mathcal{A}_{g,N}\)) one can write \(\mathcal{E}_{\eta,N}=\mathcal{E}_{\eta-N(1-\eta),0}\circ\mathcal{A}_{\frac{\eta}{\eta-N(1-\eta)},0}\) and \(\mathcal{A}_{g,N}=\mathcal{E}_{1-(g-1)N,0}\circ\mathcal{A}_{\frac{g}{1-(g-1)N},0}\), which ultimately leads to
\[\begin{cases}Q(\mathcal{E}_{\eta,N}),P(\mathcal{E}_{\eta,N})\leq Q(\mathcal{E }_{\eta-N(1-\eta),0})=\log_{2}(\frac{\eta-N(1-\eta)}{1-\eta+N(1-\eta)})\\ \\ Q(\mathcal{A}_{g,N}),P(\mathcal{A}_{g,N})\leq Q(\mathcal{E}_{1-(g-1)N,0})= \log_{2}(\frac{1-(g-1)N}{(g-1)N})\end{cases} \tag{20}\]
(notice that the bounds vanish at the border with the AD region). In Ref. [41] instead, using degradable extensions of thermal attenuators, it was proven that
\[\begin{cases}Q(\mathcal{E}_{\eta,N}),P(\mathcal{E}_{\eta,N})\leq Q^{\rm att}_ {\rm FKG}(\eta,N)\;,\\ \\ Q(\mathcal{A}_{g,N}),P(\mathcal{A}_{g,N})\leq Q^{\rm amp}_{\rm FKG}((g-1)N)\;, \end{cases} \tag{21}\]
with
\[Q^{\rm att}_{\rm FKG}(\eta,N) := \log_{2}(\frac{\eta}{1-\eta})+h((1-\eta)N)-h(\eta N)\;,\] \[Q^{\rm amp}_{\rm FKG}(M) := -\log_{2}(eM)+2h\left(\frac{\sqrt{M^{2}+1}-1}{2}\right)\;.\]
Notice that the first is a function which, for fixed \(\eta\), is monotonically decreasing in \(N\) and, for fixed \(N\), monotonically increasing in \(\eta\) (being null for \(\eta\leq 0.5\)).
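For comparisons such as the one reported in Fig. 2, the upper bounds (18), (20) and (21) can be evaluated numerically; a minimal Python sketch follows (function names are ours).

```python
from math import log2, e, sqrt

def h(x):
    """Bosonic entropy function, Eq. (19)."""
    return 0.0 if x <= 0 else (x + 1) * log2(x + 1) - x * log2(x)

def plob_attenuator(eta, N):
    """Bound (18) on all capacities of E_{eta,N}, valid outside the EB region."""
    return -h(N) - log2((1 - eta) * eta ** N)

def plob_amplifier(g, N):
    """Bound (18) on all capacities of A_{g,N}, valid outside the EB region."""
    return -h(N) + log2(g ** (N + 1) / (g - 1))

def decomposition_bound_attenuator(eta, N):
    """Bound (20) on Q and P of E_{eta,N}, valid outside the AD region."""
    t = eta - N * (1 - eta)
    return log2(t / (1 - t))

def decomposition_bound_amplifier(g, N):
    """Bound (20) on Q and P of A_{g,N}, valid outside the AD region."""
    t = 1 - (g - 1) * N
    return log2(t / (1 - t))

def fkg_attenuator(eta, N):
    """Bound (21) on Q and P of E_{eta,N}."""
    return log2(eta / (1 - eta)) + h((1 - eta) * N) - h(eta * N)

def fkg_amplifier(M):
    """Bound (21) on Q and P of A_{g,N}, with M = (g-1)N in [0, 1/2]."""
    return -log2(e * M) + 2 * h((sqrt(M ** 2 + 1) - 1) / 2)

eta, N = 0.95, 0.1
print(plob_attenuator(eta, N), decomposition_bound_attenuator(eta, N), fkg_attenuator(eta, N))
```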
Figure 1: Left panel: zero capacity regions for the PI-GBCs \(\Phi_{x,M}\); Right panel: same plot expressed in terms of the parametrization (4). In both plots, the greenish areas represent the regions where all the capacities (\(Q_{2}\), \(K\), \(Q\), and \(P\)) are zero. The bluish areas represent the region where \(Q\) and \(P\) are zero but \(Q_{2}\) and \(K\) are not. Whether \(Q\) and \(P\) can be zero also for points in the white region is still an open problem.
\begin{table}
\begin{tabular}{|c|l|l|} \hline \(\mathbf{(C_{1})}\) & \(\mathcal{E}_{\eta_{2},N_{2}}\circ\mathcal{E}_{\eta_{1},N_{1}}=\mathcal{E}_{ \eta_{3},N_{3}}\) & \(\left\{\begin{array}{l}\eta_{3}=\eta_{2}\eta_{1}\\ (1-\eta_{3})N_{3}=(1-\eta_{2})N_{2}+(1-\eta_{1})\eta_{2}N_{1}\end{array}\right.\) \\ \hline \(\mathbf{(C_{2})}\) & \(\mathcal{A}_{g_{2},N_{2}}\circ\mathcal{A}_{g_{1},N_{1}}=\mathcal{A}_{g_{3},N_{3}}\) & \(\left\{\begin{array}{l}g_{3}=g_{2}g_{1}\\ (g_{3}-1)N_{3}=(g_{2}-1)N_{2}+(g_{1}-1)g_{2}N_{1}\end{array}\right.\) \\ \hline \(\mathbf{(C_{3.1})}\) & \(\mathcal{E}_{\eta_{2},N_{2}}\circ\mathcal{A}_{g_{1},N_{1}}=\left\{\begin{array}{l} \mathcal{E}_{\eta_{3},N_{3}}\\ (1-\eta_{3})(2N_{3}+1)=(1-\eta_{2})(2N_{2}+1)+(\eta_{3}-\eta_{2})(2N_{1}+1) \end{array}\right.\) \\ \(\mathbf{(C_{3.2})}\) & \(\mathcal{A}_{g_{3},N_{3}}\) & \(\left\{\begin{array}{l}g_{3}=\eta_{2}g_{1}\\ (g_{3}-1)(2N_{3}+1)=(1-\eta_{2})(2N_{2}+1)+(g_{3}-\eta_{2})(2N_{1}+1)\end{array}\right.\) \\ \hline \(\mathbf{(C_{4.1})}\) & \(\mathcal{A}_{g_{2},N_{2}}\circ\mathcal{E}_{\eta_{1},N_{1}}=\left\{\begin{array}{l} \mathcal{E}_{\eta_{3},N_{3}}\\ (1-\eta_{3})(2N_{3}+1)=(g_{2}-1)(2N_{2}+1)+(g_{2}-\eta_{3})(2N_{1}+1)\end{array} \right.\) \\ \(\mathbf{(C_{4.2})}\) & \(\mathcal{A}_{g_{3},N_{3}}\) & \(\left\{\begin{array}{l}g_{3}=g_{2}g_{1}\\ (g_{3}-1)(2N_{3}+1)=(g_{2}-1)(2N_{2}+1)+(g_{2}-g_{3})(2N_{1}+1)\end{array} \right.\) \\ \hline \end{tabular}
\end{table}
Table 1: Composition rules (5) and (6) expressed in terms of thermal attenuators and amplifiers via the identities (4).
On the contrary, for \(M=(g-1)N\in[0,1/2]\) (i.e. the only region where, in view of Eq. (12), it makes sense to consider (21)) \(Q^{\rm amp}_{\rm FKG}(M)\) is a positive, monotonically increasing function. As explicitly shown in the left part of Fig. 2, one may notice that while for low values of \(\eta\) the upper bound (20) outperforms the others, as \(\eta\) approaches 1, (18) and (21) win (in particular \(Q^{\rm att}_{\rm FKG}\) provides the best bound in the low noise regime \(N\ll 1\), while \(Q^{\rm att}_{\rm PLOB}\) does it for higher \(N\)).
### Lower bounds
Lower bounds for the two-way and secret capacities are provided by the inequalities [8; 52]
\[\begin{cases}K(\mathcal{E}_{\eta,N})\geq Q_{2}(\mathcal{E}_{\eta,N})\geq\max \{0,-h(N)-\log_{2}(1-\eta)\}\;,\\ \\ K(\mathcal{A}_{g,N})\geq Q_{2}(\mathcal{A}_{g,N})\geq\max\{0,-h(N)+\log_{2}( \frac{g}{g-1})\}\,,\end{cases} \tag{22}\]
which have been improved in [45; 53]. Ref. [53] improved the lower bound on the secret-key capacity by adopting Gaussian protocols based on suitable trusted-noise detectors, and Ref. [45] showed that the region where \(Q_{2}\) and \(K\) are non-zero extends beyond what is predicted by the above equations, including the whole non-EB region. A lower bound on \(Q\) and \(P\) is given instead by the coherent information for one use of the channel, evaluated on an infinite temperature state
\[Q(\mathcal{E}_{\eta,N})\geq Q^{\rm att}_{\rm low}(\eta,N)=\max\{0,\log_{2}( \frac{\eta}{1-\eta})-h(N)\}\,. \tag{23}\]
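The lower bounds (22)-(23) can be coded in the same spirit; the following self-contained sketch (again illustrative only) evaluates them for attenuators and amplifiers.

```python
from math import log2

def h(x):
    """Bosonic entropy function, Eq. (19)."""
    return 0.0 if x <= 0 else (x + 1) * log2(x + 1) - x * log2(x)

def q2_lower_attenuator(eta, N):
    """Lower bound (22) on Q2 (and hence K) of E_{eta,N}."""
    return max(0.0, -h(N) - log2(1 - eta))

def q2_lower_amplifier(g, N):
    """Lower bound (22) on Q2 (and hence K) of A_{g,N}."""
    return max(0.0, -h(N) + log2(g / (g - 1)))

def q_lower_attenuator(eta, N):
    """Coherent-information lower bound (23) on Q (and P) of E_{eta,N}."""
    return max(0.0, log2(eta / (1 - eta)) - h(N))

print(q_lower_attenuator(0.95, 0.1), q2_lower_attenuator(0.95, 0.1))
```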
## IV Low ground/high ground capacity regions analysis
Using the composition rules (5), (6) together with the data-processing inequality (2), in the parameter space of the maps (3) we can identify regions where the capacities are provably smaller or larger than an assigned reference value. For this purpose, given \((x,M)\in\left(\mathbb{R}^{+}\right)^{2}\) we define \(\mathbb{L}_{x,M}:=\mathbb{L}(\Phi_{x,M})\) as the collection of points \((x^{\prime},M^{\prime})\in\left(\mathbb{R}^{+}\right)^{2}\) such that the channel \(\Phi_{x^{\prime},M^{\prime}}\) can be decomposed as a three-element concatenation \(\Phi_{x^{\prime},M^{\prime}}=\Phi_{\bar{x}_{1},\bar{M}_{1}}\circ\Phi_{x,M}\circ\Phi_{\bar{x}_{2},\bar{M}_{2}}\) that involves \(\Phi_{x,M}\) together with two other maps \(\Phi_{\bar{x}_{1},\bar{M}_{1}}\) and \(\Phi_{\bar{x}_{2},\bar{M}_{2}}\), i.e. [54]
\[\mathbb{L}_{x,M}:=\left\{(x^{\prime},M^{\prime})\in\left(\mathbb{R}^{+} \right)^{2}:\exists(\bar{x}_{1},\bar{M}_{1}),(\bar{x}_{2},\bar{M}_{2})\in \left(\mathbb{R}^{+}\right)^{2}\right.\]
\[\left.\Phi_{x^{\prime},M^{\prime}}=\Phi_{\bar{x}_{1},\bar{M}_{1}}\circ\Phi_{x,M}\circ\Phi_{\bar{x}_{2},\bar{M}_{2}}\right\}\,. \tag{24}\]
From (2) it turns out that
\[\mathcal{K}(\Phi_{x^{\prime},M^{\prime}})\leq\mathcal{K}(\Phi_{x,M})\;,\qquad \forall(x^{\prime},M^{\prime})\in\mathbb{L}_{x,M}\;. \tag{25}\]
Figure 2: **Left:** The top panel shows a numerical comparison between the upper bounds (18), (20), and (21) for the quantum capacity \(Q\) of the channels \(\mathcal{E}_{\eta,N}\) and \(\mathcal{A}_{g,N}\). Each region with different colour indicates which result is the best upper bound. Purple region is where the quantum capacity is zero according to Eq. (15). The bottom panel presents the same comparison expressed in terms of the \(x,M\) parametrization (3). **Right:** updated version of previous figure which includes the improved bounds \(Q^{(1)}_{\rm FKG}(x,M)\) and \(\underline{Q}^{(2)}_{\rm FKG}(x,M)\) of Eq. (103): the orange and yellow regions (marked with the script _NEW_) are where they provide better constraints than the upper bounds of Sec. III.1 (notice that no improvement is obtained for amplifiers). The top panel reports the result in terms of the \(\eta,N\) and \(g,N\) parametrization, while the bottom panel reports the same result in the \(x,M\) parametrization.
so we dub \(\mathbb{L}_{x,M}\) the _low-ground capacity region_ of the channel \(\Phi_{x,M}\). Notice in particular that, since \(\Phi_{0,M^{\prime}}\circ\Phi_{x,M}=\Phi_{0,M^{\prime}}\) for all \((x,M)\) and \(M^{\prime}\geq 0\), one has that all the points \((0,M^{\prime})\) are included into \(\mathbb{L}_{x,M}\), i.e.
\[(0,M^{\prime})\in\mathbb{L}_{x,M}\;,\quad\forall M^{\prime}\geq 0\;,\forall(x,M) \in\left(\mathbb{R}^{+}\right)^{2}\;. \tag{26}\]
By reversing the ordering of the concatenations in Eq. (24) we also introduce the _high-ground capacity region_\(\mathbb{H}_{x,M}:=\mathbb{H}(\Phi_{x,M})\) of the channel \(\Phi_{x,M}\), i.e.
\[\mathbb{H}_{x,M}:=\left\{(x^{\prime},M^{\prime})\in\left(\mathbb{ R}^{+}\right)^{2}:\exists(\bar{x}_{1},\bar{M}_{1}),(\bar{x}_{2},\bar{M}_{2}) \in\left(\mathbb{R}^{+}\right)^{2}\right.\] \[\qquad\qquad\left.\Phi_{x,M}=\Phi_{\bar{x}_{1},\bar{M}_{1}}\circ \Phi_{x^{\prime},M^{\prime}}\circ\Phi_{\bar{x}_{2},\bar{M}_{2}}\right\}, \tag{27}\]
which fulfils the condition
\[\mathcal{K}(\Phi_{x^{\prime},M^{\prime}})\geq\mathcal{K}(\Phi_{x,M})\;,\qquad \forall(x^{\prime},M^{\prime})\in\mathbb{H}_{x,M}\;. \tag{28}\]
Notice that by construction \(\mathbb{L}_{x,M}\) and \(\mathbb{H}_{x,M}\) obey a natural ordering
\[\mathbb{L}_{x^{\prime},M^{\prime}} \subseteq \mathbb{L}_{x,M} \forall(x^{\prime},M^{\prime})\in\mathbb{L}_{x,M}\;, \tag{29}\] \[\mathbb{H}_{x^{\prime},M^{\prime}} \subseteq \mathbb{H}_{x,M} \forall(x^{\prime},M^{\prime})\in\mathbb{H}_{x,M}\;, \tag{30}\]
and fulfil the complementary relation
\[(x^{\prime},M^{\prime})\in\mathbb{H}_{x,M}\quad\Longleftrightarrow\quad(x,M) \in\mathbb{L}_{x^{\prime},M^{\prime}}\;. \tag{31}\]
In the next subsections we shall provide an analytic characterization of \(\mathbb{L}_{x,M}\) and \(\mathbb{H}_{x,M}\). As we shall see one can identify two different regimes ruled by the function \(M_{\mathrm{EB}}(x)\) of Eq. (10) which identifies the EB sector. Indeed for points \((x,M)\) with
\[M\leq M_{\mathrm{EB}}(x)=\min\{1,x\}\;, \tag{32}\]
that is for channels which are non-EB and for the EB ones which are on the border line of the region \(\mathbb{EB}\), the sets \(\mathbb{L}_{x,M}\) and \(\mathbb{H}_{x,M}\) are defined by the functions
\[f^{(1)}_{x,M}(x^{\prime}) := M+(1-x)\Theta(1-x)+(x^{\prime}-1)\Theta(1-x^{\prime})\;,\] \[f^{(2)}_{x,M}(x^{\prime}) := (x^{\prime}/x)\big{[}M+(x-1)\Theta(x-1)\big{]} \tag{33}\] \[\qquad\qquad-(x^{\prime}-1)\Theta(x^{\prime}-1)\;,\]
with \(\Theta(x)\) being the Heaviside step-function. Specifically we shall prove that under the condition (32), \(\mathbb{L}_{x,M}\) is formed by all points which are above \(f^{(1)}_{x,M}(x^{\prime})\) and \(f^{(2)}_{x,M}(x^{\prime})\), i.e.
\[\mathbb{L}_{x,M} = \left\{(x^{\prime},M^{\prime})\in\left(\mathbb{R}^{+}\right)^{2}:\right.\] \[\qquad\left.M^{\prime}\geq\max\{f^{(1)}_{x,M}(x^{\prime}),f^{(2)}_ {x,M}(x^{\prime})\}\right\},\]
while \(\mathbb{H}_{x,M}\) is given by the polytope formed by the points below such curves, i.e.
\[\mathbb{H}_{x,M} = \left\{(x^{\prime},M^{\prime})\in\left(\mathbb{R}^{+}\right)^{2}:\right.\] \[\qquad\left.M^{\prime}\leq\min\{f^{(1)}_{x,M}(x^{\prime}),f^{(2)}_ {x,M}(x^{\prime})\}\right\}.\]
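Under condition (32) the membership tests implied by Eqs. (33)-(35) are elementary to evaluate; the sketch below (illustrative, using the reference attenuator of Fig. 3a) checks whether a candidate \((x^{\prime},M^{\prime})\) lies in the low-ground or high-ground region of a reference channel \((x,M)\).

```python
def f1(x, M, xp):
    """f^(1)_{x,M}(x') of Eq. (33); boolean factors play the role of Theta."""
    return M + (1 - x) * (x < 1) + (xp - 1) * (xp < 1)

def f2(x, M, xp):
    """f^(2)_{x,M}(x') of Eq. (33)."""
    return (xp / x) * (M + (x - 1) * (x > 1)) - (xp - 1) * (xp > 1)

def in_low_ground(x, M, xp, Mp):
    """(x', M') in L_{x,M}, Eq. (34): capacities of Phi_{x',M'} <= those of Phi_{x,M}."""
    return Mp >= max(f1(x, M, xp), f2(x, M, xp))

def in_high_ground(x, M, xp, Mp):
    """(x', M') in H_{x,M}, Eq. (35): capacities of Phi_{x',M'} >= those of Phi_{x,M}."""
    return Mp <= min(f1(x, M, xp), f2(x, M, xp))

# Reference attenuator of Fig. 3a: x = 0.6, M = 0.1 (so M <= min(1, x) holds).
x, M = 0.6, 0.1
print(in_low_ground(x, M, 0.3, 0.2))    # True: a noisier, more attenuating channel
print(in_high_ground(x, M, 0.8, 0.05))  # True: a less noisy, more transmitting channel
```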
As evident from Fig. 3, the domains identified by Eqs. (34) and (35) admit \((x,M)\) as their unique contact point, implying that for a relatively large portion of the phase space \(\left(\mathbb{R}^{+}\right)^{2}\) we cannot assign a definite ordering w.r.t. \(\mathcal{K}(\Phi_{x,M})\) (white regions of the plots). The situation changes, however, when the map \(\Phi_{x,M}\) is deep inside the EB region, i.e. for
\[M>M_{\mathrm{EB}}(x)=\min\{1,x\}\;. \tag{36}\]
Under the constraint (36) the sets \(\mathbb{H}_{x,M}\) and \(\mathbb{L}_{x,M}\) provide a complete covering of the phase space and have a non-trivial overlap \(\mathbb{O}_{x,M}:=\mathbb{H}_{x,M}\bigcap\mathbb{L}_{x,M}\). Indeed, one can show that irrespective of the specific choice of \((x,M)\), \(\mathbb{H}_{x,M}\) coincides with the entire space \(\left(\mathbb{R}^{+}\right)^{2}\), i.e.
\[\mathbb{H}_{x,M}=\left(\mathbb{R}^{+}\right)^{2}\;, \tag{37}\]
while \(\mathbb{L}_{x,M}\) (and hence \(\mathbb{O}_{x,M}\)) corresponds to the \(x\to 0\), \(M\to 0\) limit of Eq. (34), i.e. [55]
\[\mathbb{L}_{x,M} = \mathbb{O}_{x,M}\] \[= \mathbb{L}_{0,0}:=\left\{(x^{\prime},M^{\prime})\in\left( \mathbb{R}^{+}\right)^{2}:M^{\prime}>M_{\mathrm{EB}}(x^{\prime})\right\}\,, \tag{38}\]
which coincides with the subset identified by Eq. (36). This implies that given any two points \((x_{1},M_{1})\) and \((x_{2},M_{2})\) in \(\mathbb{L}_{0,0}\), their associated channels are equivalent up to concatenation with extra GBCs (3), i.e. there exist proper choices of the maps \(\Phi_{\bar{x}_{1},\bar{M}_{1}}\), \(\Phi_{\bar{x}_{2},\bar{M}_{2}}\), \(\Phi_{\bar{x}_{3},\bar{M}_{3}}\), and \(\Phi_{\bar{x}_{4},\bar{M}_{4}}\), such that we can write
\[\begin{cases}\Phi_{x_{1},M_{1}}=\Phi_{\bar{x}_{1},\bar{M}_{1}}\circ\Phi_{x_{2},M_{2}}\circ\Phi_{\bar{x}_{2},\bar{M}_{2}}\;,\\ \Phi_{x_{2},M_{2}}=\Phi_{\bar{x}_{3},\bar{M}_{3}}\circ\Phi_{x_{1},M_{1}}\circ\Phi _{\bar{x}_{4},\bar{M}_{4}}\;,\end{cases} \tag{39}\]
which in turn imposes \(\mathcal{K}(\Phi_{x_{1},M_{1}})=\mathcal{K}(\Phi_{x_{2},M_{2}})\) in agreement with the property (11).
## V Analytical characterization of \(\mathbb{L}_{x,M}\) and \(\mathbb{H}_{x,M}\)
In this section we give an analytical characterization of the low-ground and high-ground regions for an arbitrary channel \(\Phi_{x,M}\). We start in Sec. V.1 by focusing on a simplified version of the concatenation rules entering the definitions (24) and (27), where \(\Phi_{x,M}\) is connected with the elements \(\Phi_{x^{\prime},M^{\prime}}\) via only a single extra PI-GBC element \(\Phi_{\bar{x},\bar{M}}\). This will allow us to identify two regions
\[\mathbb{L}^{(0)}_{x,M}:=\left\{(x^{\prime},M^{\prime})\in\left( \mathbb{R}^{+}\right)^{2}:\exists(\bar{x},\bar{M})\in\left(\mathbb{R}^{+} \right)^{2}\right. \tag{40}\] \[\left.\Phi_{x^{\prime},M^{\prime}}=\Phi_{\bar{x},\bar{M}}\circ\Phi_{ x,M}\;\text{or}\;\;\Phi_{x^{\prime},M^{\prime}}=\Phi_{x,M}\circ\Phi_{\bar{x},\bar{M}} \right\}\,,\]
and
\[\mathbb{H}^{(0)}_{x,M}:=\left\{(x^{\prime},M^{\prime})\in\left( \mathbb{R}^{+}\right)^{2}:\exists(\bar{x},\bar{M})\in\left(\mathbb{R}^{+} \right)^{2}\right. \tag{41}\] \[\left.\Phi_{x,M}=\Phi_{\bar{x},\bar{M}}\circ\Phi_{x^{\prime},M^{ \prime}}\;\text{or}\;\;\Phi_{x,M}=\Phi_{x^{\prime},M^{\prime}}\circ\Phi_{\bar{x}, \bar{M}}\right\}.\]
which by construction are subsets of \(\mathbb{L}_{x,M}\) and \(\mathbb{H}_{x,M}\), i.e.
\[\mathbb{L}_{x,M}^{(0)}\subseteq\mathbb{L}_{x,M}\;,\qquad\mathbb{H}_{x,M}^{(0)} \subseteq\mathbb{H}_{x,M}\;. \tag{42}\]
In Sec. V.2 we shall prove that for \((x,M)\) fulfilling the constraint (32), \(\mathbb{L}_{x,M}^{(0)}\) and \(\mathbb{H}_{x,M}^{(0)}\) indeed coincide with \(\mathbb{L}_{x,M}\) and \(\mathbb{H}_{x,M}\) leading to Eqs. (34) and (35). The derivation of Eqs. (37) and (38) for maps not fulfilling (32) will instead be given in Sec. V.3.
### Two-elements concatenations
To determine \(\mathbb{L}_{x,M}^{(0)}\) and \(\mathbb{H}_{x,M}^{(0)}\) we adopt the parametrization (4) to better underline the role played by amplifiers and attenuators. Given hence \(\eta\in[0,1]\) and \(N\geq 0\), let us introduce the following functions
\[N_{\eta,N}^{(1)}(\eta^{\prime}) := N\left(\frac{1-\eta}{\eta}\right)\left(\frac{\eta^{\prime}}{1- \eta^{\prime}}\right)\;, \tag{43}\] \[N_{\eta,N}^{(2)}(\eta^{\prime}) := (N+1)\left(\frac{1-\eta}{1-\eta^{\prime}}\right)-1\;. \tag{44}\]
It then turns out that
**Property 1**.: _The attenuator map \(\mathcal{E}_{\eta,N}\) admits_
\[\begin{cases}\mathbb{L}_{\eta,N}^{(\text{att},1)}&:=\left\{(\eta^{\prime},N^{ \prime}):0\leq\eta^{\prime}\leq\eta\,,N^{\prime}\geq\max\{N_{\eta,N}^{(1)}( \eta^{\prime}),0\}\right\},\\ \mathbb{L}_{\eta,N}^{(\text{att},2)}&:=\left\{(\eta^{\prime},N^{\prime}):1\geq \eta^{\prime}\geq\eta\,,N^{\prime}\geq\max\{N_{\eta,N}^{(2)}(\eta^{\prime}),0 \}\right\},\\ \mathbb{L}_{\eta,N}^{(\text{att})}&:=\mathbb{L}_{\eta,N}^{(\text{att},1)}\cup \mathbb{L}_{\eta,N}^{(\text{att},2)}\;,\end{cases} \tag{45}\]
_(light blue area on the left-hand-side of the top panel of Fig. 4), as subset of the corresponding two-element concatenation, low-ground capacity region \(\mathbb{L}^{(0)}(\mathcal{E}_{\eta,N})\), and_
\[\begin{cases}\mathbb{H}_{\eta,N}^{(\text{att},1)}&:=\left\{(\eta^{\prime},N^{\prime}):1\geq\eta^{\prime}\geq\eta\,,0\leq N^{\prime}\leq N_{\eta,N}^{(1)}(\eta^{\prime})\right\}\;,\\ \mathbb{H}_{\eta,N}^{(\text{att},2)}&:=\left\{(\eta^{\prime},N^{\prime}):0\leq\eta^{\prime}\leq\eta\,,0\leq N^{\prime}\leq N_{\eta,N}^{(2)}(\eta^{\prime})\right\}\;,\\ \mathbb{H}_{\eta,N}^{(\text{att})}&:=\mathbb{H}_{\eta,N}^{(\text{att},1)}\cup\mathbb{H}_{\eta,N}^{(\text{att},2)}\;,\end{cases} \tag{46}\]
_(yellow area on the left-hand-side of top panel of Fig. 4) as subset of \(\mathbb{H}^{(0)}(\mathcal{E}_{\eta,N})\), i.e._
\[\mathbb{L}_{\eta,N}^{(\text{att})}\subseteq\mathbb{L}^{(0)}(\mathcal{E}_{\eta, N})\;,\qquad\mathbb{H}_{\eta,N}^{(\text{att})}\subseteq\mathbb{H}^{(0)}( \mathcal{E}_{\eta,N})\;. \tag{47}\]
Proof.: To derive the first inclusion of Eq. (47) we set \((\eta_{3},N_{3})=(\eta^{\prime},N^{\prime})\) and \((\eta_{1},N_{1})=(\eta,N)\) in Eq. (\(\mathbf{C_{1}}\)) of Tab. 1 to observe that for all \(\eta_{2}\in[0,1]\) and \(N_{2}\geq 0\),
\[\mathcal{E}_{\eta^{\prime},N^{\prime}}=\mathcal{E}_{\eta_{2},N_{2}}\circ \mathcal{E}_{\eta,N}\;, \tag{48}\]
is also a thermal channel with parameters
\[\begin{cases}\eta^{\prime}=\eta\eta_{2}\leq\eta\;,\\ N^{\prime}=\frac{(1-\eta)\eta_{2}N+(1-\eta_{2})N_{2}}{1-\eta\eta_{2}}\geq\frac{(1 -\eta)\eta_{2}}{1-\eta\eta_{2}}N=N_{\eta,N}^{(1)}(\eta^{\prime})\;,\end{cases} \tag{49}\]
that span the entire set \(\mathbb{L}_{\eta,N}^{(\text{att},1)}\), as the inequality is saturated whenever \(N_{2}=0\). Therefore, in view of the definition (40), we can conclude that
\[\mathbb{L}_{\eta,N}^{(\text{att},1)}\subseteq\mathbb{L}^{(0)}(\mathcal{E}_{ \eta,N})\;. \tag{50}\]
Figure 3: Low-ground capacity region \(\mathbb{L}_{x,M}\) (24) (light blue area) and high-ground capacity region \(\mathbb{H}_{x,M}\) (27) (yellow) for the channel \(\Phi_{x,M}\) (red dot element). The first three top panels refer to the regime (32) where \(\mathbb{L}_{x,M}\) and \(\mathbb{H}_{x,M}\) are expressed respectively by Eqs. (34) and (35): specifically in panel a) the reference map is an attenuator (\(x=0.6\), \(M=0.1\)), in panel b) it is an additive classical noise map (\(x=1\), \(M=0.5\)), and in panel c) it is an amplifier (\(x=1.15\), \(M=0.1\)). Panel d) instead presents a case where \(\Phi_{x,M}\) fulfils the EB condition (36); here \(\mathbb{H}_{x,M}\) coincides with the full plane \(\left(\mathbb{R}^{+}\right)^{2}\), while \(\mathbb{L}_{x,M}\) (and hence the overlap \(\mathbb{O}_{x,M}=\mathbb{H}_{x,M}\bigcap\mathbb{L}_{x,M}\)), are given by the set \(\mathbb{L}_{0,0}\) of Eq. (38): by construction any two points in this area are equivalent under GBC concatenation, see Eq. (39). The border lines which identify the various regions are the functions \(f_{x,M}^{(1)}(x^{\prime})\) (blue curve) and \(f_{x,M}^{(2)}(x^{\prime})\) (orange curve) defined in Eq. (33) – for panel d) the border is given by \(f_{0,0}^{(1)}(x^{\prime})=\min\{1,x^{\prime}\}\)[55]. The arrows in the plots show the direction where the capacities have to decrease (or at most remain constant) and the white regions describe portions of the parameter space where the composition rules cannot be used to determine a specific capacity ordering with respect to the reference channel. In panel a) we have \(\mathbf{A}=(x-M,0)\), \(\mathbf{B}=(1,M-x+1)\), \(\mathbf{C}=(1,M/x)\), \(\mathbf{D}=(x/(x-M),0)\); in panel b) and c) instead \(\mathbf{A}=(1-M,0)\), \(\mathbf{B}=(1,M)\), \(\mathbf{C}=(1,(M+x-1)/x)\), \(\mathbf{D}=(x/(1-M),0)\) (notice that for b), \(\mathbf{B}=\mathbf{C}=(x,M)\)).
We next invoke Eq. (\(\mathbf{C_{3.1}}\)) of Tab. 1 to observe that
\[\mathcal{E}_{\eta^{\prime},N^{\prime}}=\mathcal{E}_{\eta,N}\circ\mathcal{A}_{g_{1},N_{1}}\;, \tag{51}\]
with
\[\begin{cases}\eta^{\prime}=g_{1}\eta\geq\eta\;,\\ \\ N^{\prime}=\frac{(1-\eta)(2N+1)+\eta(g_{1}-1)(2N_{1}+1)-(1-\eta^{\prime})}{2(1 -\eta^{\prime})}\\ \qquad\geq\frac{\eta^{\prime}-\eta+(1-\eta)N}{1-\eta^{\prime}}=N^{(2)}_{\eta,N }(\eta^{\prime})\;,\end{cases} \tag{52}\]
is a thermal map too which spans the entire subset \(\mathbb{L}^{(\text{att},2)}_{\eta,N}\), as the inequality is saturated whenever \(N_{1}=0\). Accordingly we can claim that
\[\mathbb{L}^{(\text{att},2)}_{\eta,N}\subseteq\mathbb{L}^{(0)}(\mathcal{E}_{ \eta,N})\;, \tag{53}\]
which together with (50) yields the first inclusion of Eq. (47). The derivation of the second inclusion of Eq. (47) follows along the same lines by simply inverting the roles of \((\eta^{\prime},N^{\prime})\) and \((\eta,N)\) in the previous passages.
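The inclusion \(\mathbb{L}_{\eta,N}^{(\text{att},1)}\subseteq\mathbb{L}^{(0)}(\mathcal{E}_{\eta,N})\) can also be checked numerically from the composition rule of Eq. (49) alone. Below is a small Python sanity check (the sampling ranges and seed are arbitrary choices of ours): every attenuator obtained by appending a second attenuator lands above the border \(N^{(1)}_{\eta,N}\) of Eq. (43).

```python
import numpy as np

def N1(eta, N, eta_p):
    """Border function of Eq. (43): N^{(1)}_{eta,N}(eta')."""
    return N * ((1 - eta) / eta) * (eta_p / (1 - eta_p))

def compose_att_att(eta, N, eta2, N2):
    """Parameters of E_{eta2,N2} o E_{eta,N}, Eq. (49)."""
    eta_p = eta * eta2
    N_p = ((1 - eta) * eta2 * N + (1 - eta2) * N2) / (1 - eta_p)
    return eta_p, N_p

rng = np.random.default_rng(0)
eta, N = 0.6, 0.1
for _ in range(5):
    eta2, N2 = rng.uniform(0, 1), rng.uniform(0, 2)
    eta_p, N_p = compose_att_att(eta, N, eta2, N2)
    # the composed channel falls inside L^{(att,1)}_{eta,N}: eta' <= eta and N' >= N^{(1)}
    assert eta_p <= eta and N_p >= N1(eta, N, eta_p) - 1e-12
print("all composed attenuators fall inside L^(att,1)")
```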
Given next \(g\geq 1\) and \(N\geq 0\) and the functions
\[N^{(1)}_{g,N}(g^{\prime}) := N\left(\tfrac{g-1}{g^{\prime}-1}\right)\;, \tag{54}\] \[N^{(2)}_{g,N}(g^{\prime}) := (N+1)\left(\tfrac{g-1}{g}\right)\left(\tfrac{g^{\prime}}{g^{ \prime}-1}\right)-1\;. \tag{55}\]
We can show that
**Property 2**.: _The amplifier channel \(\mathcal{A}_{g,N}\) admits_
\[\begin{cases}\mathbb{L}^{(\text{amp},1)}_{g,N}&:=\left\{(g^{\prime},N^{\prime }):g^{\prime}\geq g\,,N^{\prime}\geq\max\{N^{(1)}_{g,N}(g^{\prime}),0\}\right\},\\ \mathbb{L}^{(\text{amp},2)}_{g,N}&:=\left\{(g^{\prime},N^{\prime}):g^{\prime} \leq g\,,N^{\prime}\geq\max\{N^{(2)}_{g,N}(g^{\prime}),0\}\right\},\\ \mathbb{L}^{(\text{amp})}_{g,N}&:=\mathbb{L}^{(\text{amp},1)}_{g,N}\cup \mathbb{L}^{(\text{amp},2)}_{g,N}\;,\end{cases} \tag{56}\]
_(light blue area on the right-hand-side of the bottom panel of Fig. 4), as a subset of the corresponding two-element concatenation low-ground capacity region \(\mathbb{L}^{(0)}(\mathcal{A}_{g,N})\), and_
\[\begin{cases}\mathbb{H}^{(\text{amp},1)}_{g,N}&:=\left\{(g^{\prime},N^{\prime}):1\leq g^{\prime}\leq g\,,0\leq N^{\prime}\leq N^{(1)}_{g,N}(g^{\prime})\right\},\\ \mathbb{H}^{(\text{amp},2)}_{g,N}&:=\left\{(g^{\prime},N^{\prime}):g^{\prime}\geq g\,,0\leq N^{\prime}\leq N^{(2)}_{g,N}(g^{\prime})\right\},\\ \mathbb{H}^{(\text{amp})}_{g,N}&:=\mathbb{H}^{(\text{amp},1)}_{g,N}\cup\mathbb{H}^{(\text{amp},2)}_{g,N}\;,\end{cases} \tag{57}\]
_(yellow area on the right-hand-side of the bottom panel of Fig. 4), as a subset of \(\mathbb{H}^{(0)}(\mathcal{A}_{g,N})\), i.e._
\[\mathbb{L}^{(\text{amp})}_{g,N}\subseteq\mathbb{L}^{(0)}(\mathcal{A}_{g,N}) \;,\qquad\mathbb{H}^{(\text{amp})}_{g,N}\subseteq\mathbb{H}^{(0)}(\mathcal{A} _{g,N})\;. \tag{58}\]
Proof.: To derive the first inclusion we set \((g_{3},N_{3})=(g^{\prime},N^{\prime})\) and \((g_{2},N_{2})=(g,N)\) in Eq. \((\mathbf{C_{2}})\) of Tab. 1 to observe that for all \(g_{1}\geq 1\) and \(N_{1}\geq 0\),
\[\mathcal{A}_{g^{\prime},N^{\prime}}=\mathcal{A}_{g,N}\circ\mathcal{A}_{g_{1},N _{1}} \tag{59}\]
is also a thermal channel with parameters
\[\begin{cases}g^{\prime}=g_{1}g\geq g\;,\\ \\ N^{\prime}=\frac{(g_{1}-1)gN_{1}+(g-1)N}{g^{\prime}-1}\geq N^{(1)}_{g,N}(g^{ \prime})\;,\end{cases} \tag{60}\]
which span the entire area \(\mathbb{L}^{(\text{amp},1)}_{g,N}\), as the inequality is saturated whenever \(N_{1}=0\), leading to
\[\mathbb{L}^{(\text{amp},1)}_{g,N}\subseteq\mathbb{L}^{(0)}(\mathcal{A}_{g,N})\;. \tag{61}\]
We next invoke (\(\mathbf{C_{3.2}}\)) of Tab. 1 to observe that
\[\mathcal{A}_{g^{\prime},N^{\prime}}=\mathcal{E}_{\eta_{2},N_{2}}\circ\mathcal{A }_{g,N}\;, \tag{62}\]
is an amplifier too with parameters
\[\begin{cases}&g^{\prime}=\eta_{2}g\leq g\;,\\ \\ &N^{\prime}=\frac{(1-\eta_{2})(2N_{2}+1)+\eta_{2}(g-1)(2N+1)-(g^{\prime}-1)}{2(g ^{\prime}-1)}\geq N^{(2)}_{g,N}(g^{\prime})\;,\end{cases} \tag{63}\]
that cover the entire subset \(\mathbb{L}^{(\text{amp},2)}_{g,N}\), as the inequality is
Figure 4: Two-element concatenation analysis of the low-ground and high-ground capacity regions for thermal attenuators and amplifiers (for all the examples reported in the plots the channels are non-EB, i.e. fulfil the condition (32)). Top panel: given a thermal attenuator \(\mathcal{E}_{\eta,N}\) (red dot element), the light blue areas correspond respectively to \(\mathbb{L}^{(\text{att})}_{\eta,N}\) and \(\mathbb{X}^{(\text{att},-)}_{\eta,N}\); the yellow areas describe instead \(\mathbb{H}^{(\text{att})}_{\eta,N}\) and \(\mathbb{X}^{(\text{att},+)}_{\eta,N}\). Plot realized using \(N=0.1\) and \(\eta=0.6\). Bottom panel: given a thermal amplifier \(\mathcal{A}_{g,N}\) (red dot element), the light blue area describe the sets \(\mathbb{L}^{(\text{amp})}_{g,N}\) and \(\mathbb{X}^{(\text{amp},-)}_{g,N}\) while the yellow area \(\mathbb{H}^{(\text{amp})}_{g,N}\) and \(\mathbb{X}^{(\text{amp},+)}_{g,N}\). Plot realized using \(N=0.2\) and \(g=1.5\). The orange and blue curves represent the border lines of the low ground/high ground regions defined by the functions \(N^{(1,2)}_{\eta,N}(\eta^{\prime})\) of Eqs. (43), (44), \(N^{(3,4)}_{\eta,N}(g^{\prime})\) of Eqs. (65), (66), \(N^{(1,2)}_{g,N}(g^{\prime})\) of Eqs. (54), (55), and \(N^{(3,4)}_{g,N}(\eta^{\prime})\) of Eqs. (71), (72): along these curves the arrows show the direction where the capacities have to decrease (or at most remain constant) – see Corollaries 4.2 and 4.3 of App. A. For the points in the white regions the composition rules cannot be used to determine a specific capacity ordering with respect to the capacity of the red dot element.
saturated whenever \(N_{2}=0\). Hence we have
\[\mathbb{L}_{g,N}^{(\mathrm{amp},2)}\subseteq\mathbb{L}^{(0)}(\mathcal{A}_{g,N})\;, \tag{64}\]
which together with (61) gives us the thesis. The derivation of the second inclusion of Eq. (58) follows along the same lines by simply inverting the roles of \((g^{\prime},N^{\prime})\) and \((g,N)\) in the previous passages.
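An analogous numerical sanity check can be run for the amplifier case, using the composition parameters of Eq. (60) and the border function \(N^{(1)}_{g,N}\) of Eq. (54); again the sampled ranges and seed are arbitrary choices of ours.

```python
import numpy as np

def N1_amp(g, N, g_p):
    """Border function of Eq. (54): N^{(1)}_{g,N}(g') = N (g-1)/(g'-1)."""
    return N * (g - 1) / (g_p - 1)

def compose_amp_amp(g, N, g1, N1_in):
    """Parameters of A_{g,N} o A_{g1,N1}, Eq. (60)."""
    g_p = g1 * g
    N_p = ((g1 - 1) * g * N1_in + (g - 1) * N) / (g_p - 1)
    return g_p, N_p

rng = np.random.default_rng(1)
g, N = 1.5, 0.2
for _ in range(5):
    g1, N1_in = 1 + rng.uniform(0.01, 2), rng.uniform(0, 2)
    g_p, N_p = compose_amp_amp(g, N, g1, N1_in)
    # the composition lands in L^{(amp,1)}_{g,N}: g' >= g and N' >= N^{(1)}_{g,N}(g')
    assert g_p >= g and N_p >= N1_amp(g, N, g_p) - 1e-12
print("all composed amplifiers fall inside L^(amp,1)")
```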
It is finally possible to establish a partial ordering between the capacities of attenuators and those of the amplifiers. Specifically given \(\eta\in[0,1]\) and \(N\geq 0\) and the functions
\[N_{\eta,N}^{(3)}(g) := N\left(\tfrac{1-\eta}{\eta}\right)\left(\tfrac{g}{g-1}\right)-1\;, \tag{65}\] \[N_{\eta,N}^{(4)}(g) := (N+1)\left(\tfrac{1-\eta}{g-1}\right)\;, \tag{66}\]
it follows that
**Property 3**.: _The attenuator map \(\mathcal{E}_{\eta,N}\) admits_
\[\mathbb{X}_{\eta,N}^{(\mathrm{att},+)} := \left\{(g,N^{\prime}):g\geq 1\,,0\leq N^{\prime}\leq N_{\eta,N}^{(3)}(g)\right\}, \tag{67}\] \[\mathbb{X}_{\eta,N}^{(\mathrm{att},-)} := \left\{(g,N^{\prime}):g\geq 1\,,N^{\prime}\geq\max\{N_{\eta,N}^{(4)}(g),0\}\right\},\]
_(yellow and light blue areas on the right-hand-side of the top panel of Fig. 4) as subsets of \(\mathbb{H}^{(0)}(\mathcal{E}_{\eta,N})\) and \(\mathbb{L}^{(0)}(\mathcal{E}_{\eta,N})\) respectively, i.e._
\[\mathbb{X}_{\eta,N}^{(\mathrm{att},+)}\subseteq\mathbb{H}^{(0)}(\mathcal{E}_{\eta,N})\;,\qquad\mathbb{X}_{\eta,N}^{(\mathrm{att},-)}\subseteq\mathbb{L}^{(0)}(\mathcal{E}_{\eta,N})\;. \tag{68}\]
Proof.: To prove the first inclusion we use the decomposition rule \((\mathbf{C_{3.1}})\) of Tab. 1, which setting \((\eta_{3},N_{3})=(\eta,N)\), \((g_{1},N_{1})=(g,N^{\prime})\), and arbitrary \(\eta_{2}\in[0,1]\), \(N_{2}\geq 0\), allows us to write
\[\mathcal{E}_{\eta,N}=\mathcal{E}_{\eta_{2},N_{2}}\circ\mathcal{A}_{g,N^{ \prime}}\;, \tag{69}\]
for all \(g\geq 1\) and \(N^{\prime}\leq N_{\eta,N}^{(3)}(g)\), i.e. for the entire set \(\mathbb{X}_{\eta,N}^{(\mathrm{att},+)}\). The second inclusion follows instead from Eq. \((\mathbf{C_{3.2}})\) of Tab. 1, which, setting \((\eta_{2},N_{2})=(\eta,N)\), \((g_{3},N_{3})=(g,N^{\prime})\), and arbitrary \(g_{1}\geq 1\), \(N_{1}\geq 0\), allows us to write
\[\mathcal{A}_{g,N^{\prime}}=\mathcal{E}_{\eta,N}\circ\mathcal{A}_{g_{1},N_{1}}\;, \tag{70}\]
for all \(g\geq 1\) and \(N^{\prime}\geq N_{\eta,N}^{(4)}(g)\), i.e. for the entire set \(\mathbb{X}_{\eta,N}^{(\mathrm{att},-)}\).
In a similar way, given
\[N_{g,N}^{(3)}(\eta) := N\left(\tfrac{g-1}{1-\eta}\right)-1\;, \tag{71}\] \[N_{g,N}^{(4)}(\eta) := (N+1)\left(\tfrac{g-1}{g}\right)\left(\tfrac{\eta}{1-\eta} \right)\;, \tag{72}\]
we have that
**Property 4**.: _The amplifier map \(\mathcal{A}_{g,N}\) admits_
\[\mathbb{X}_{g,N}^{(\mathrm{amp},+)} := \left\{(\eta,N^{\prime}):\eta\in[0,1]\,,0\leq N^{\prime}\leq N_{g,N}^{(3)}(\eta)\right\}, \tag{73}\] \[\mathbb{X}_{g,N}^{(\mathrm{amp},-)} := \left\{(\eta,N^{\prime}):\eta\in[0,1]\,,N^{\prime}\geq\max\{N_{g,N}^{(4)}(\eta),0\}\right\},\]
_(yellow and light blue areas on the left-hand-side of the bottom panel of Fig. 4) as subsets of \(\mathbb{H}^{(0)}(\mathcal{A}_{g,N})\) and \(\mathbb{L}^{(0)}(\mathcal{A}_{g,N})\) respectively, i.e._
\[\mathbb{X}_{g,N}^{(\mathrm{amp},+)}\subseteq\mathbb{H}^{(0)}(\mathcal{A}_{g,N})\;,\qquad\mathbb{X}_{g,N}^{(\mathrm{amp},-)}\subseteq\mathbb{L}^{(0)}(\mathcal{A}_{g,N})\;. \tag{74}\]
Proof.: The above inclusions are just an alternative way to cast the results of Property 3. For the sake of symmetry we provide however an independent derivation. The first can be proven by using the decomposition rule \((\mathbf{C_{3.2}})\) of Tab. 1, which setting \((g_{3},N_{3})=(g,N)\), \((\eta_{2},N_{2})=(\eta,N^{\prime})\), and \(g_{1}\geq 1\), \(N_{1}\geq 0\), allows us to write
\[\mathcal{A}_{g,N}=\mathcal{E}_{\eta,N^{\prime}}\circ\mathcal{A}_{g_{1},N_{1}}\;, \tag{75}\]
for all \(\eta\in[0,1]\) and \(0\leq N^{\prime}\leq N_{g,N}^{(3)}(\eta)\), i.e. for all the points of \(\mathbb{X}_{g,N}^{(\mathrm{amp},+)}\). The second inclusion of Eq. (74) follows instead from Eq. \((\mathbf{C_{3.1}})\) of Tab. 1, which, setting \((\eta_{3},N_{3})=(\eta,N^{\prime})\), \((g_{1},N_{1})=(g,N)\), and \(\eta_{2}\in[0,1]\), \(N_{2}\geq 0\), gives us
\[\mathcal{E}_{\eta,N^{\prime}}=\mathcal{E}_{\eta_{2},N_{2}}\circ\mathcal{A}_{g,N }\;, \tag{76}\]
for all \(\eta\in[0,1]\) and \(N^{\prime}\geq N_{g,N}^{(4)}(\eta)\), i.e. for all the points of \(\mathbb{X}_{g,N}^{(\mathrm{amp},-)}\).
Inclusions different from those reported in the Properties can be obtained by reversing the order of the decompositions employed in the proofs. Such inequalities however are provably less performant than those presented. For instance setting \((\eta_{3},N_{3})=(\eta^{\prime},N^{\prime})\) and \((\eta_{2},N_{2})=(\eta,N)\) in Eq. \((\mathbf{C_{1}})\) allows us to determine that \((\eta^{\prime},N^{\prime})\in\mathbb{L}(\mathcal{E}_{\eta,N})\) for all
\[\eta^{\prime}\leq\eta\;,\qquad N^{\prime}\geq\left(\frac{1-\eta}{1-\eta^{ \prime}}\right)N\;, \tag{77}\]
a result which is implied by (50) since the set \(\mathbb{L}_{\eta,N}^{(\mathrm{att},1)}\) includes all points fulfilling the condition (77). Similarly using Eq. \((\mathbf{C_{4.1}})\) instead of \((\mathbf{C_{3.1}})\) allows one to show that i) \((\eta^{\prime},N^{\prime})\in\mathbb{L}(\mathcal{E}_{\eta,N})\) for all \(\eta^{\prime}\geq\eta\) and \(N^{\prime}\geq\frac{\eta^{\prime}-\eta+\eta^{\prime}(1-\eta)N}{\eta(1-\eta^{\prime})}\) (a condition that is already implied by (47)); ii) \((\eta^{\prime},N^{\prime})\in\mathbb{H}(\mathcal{E}_{\eta,N})\) for all \(\eta^{\prime}\leq\eta\) and \(N^{\prime}\leq\frac{\eta^{\prime}-\eta+\eta^{\prime}(1-\eta)N}{\eta(1-\eta^{\prime})}\) (which again is implied by (47)); iii) \((g,N^{\prime})\in\mathbb{H}(\mathcal{E}_{\eta,N})\) for all \(g\geq 1\) and \(N^{\prime}\leq N\left(\frac{1-\eta}{g-1}\right)-1\) (implied by the first inclusion of Eq. (68)); _iv)_ \((g,N^{\prime})\in\mathbb{L}(\mathcal{E}_{\eta,N})\) for all \(g\geq 1\) and \(N^{\prime}\geq(N+1)\left(\frac{g}{g-1}\right)\left(\frac{1-\eta}{\eta}\right)\) (implied by the second inclusion of Eq. (68)). Putting together these results we can hence conclude that the two-elements concatenation
regions \(\mathbb{L}^{(0)}(\mathcal{E}_{\eta,N})\) and \(\mathbb{L}^{(0)}(\mathcal{A}_{g,N})\) coincide respectively with \(\mathbb{L}^{(\text{att})}_{\eta,N}\bigcup\mathbb{X}^{(\text{att},-)}_{\eta,N}\) and \(\mathbb{L}^{(\text{amp})}_{g,N}\bigcup\mathbb{X}^{(\text{amp},-)}_{g,N}\), a condition which translated into the parametrization (3) can be expressed as
\[\mathbb{L}^{(0)}_{x,M} = \left\{\begin{array}{ll}\mathbb{L}^{(\text{att})}_{x,M/(1-x)} \bigcup\mathbb{X}^{(\text{att},-)}_{x,M/(1-x)}&\text{for $x\in[0,1]$}\;,\\ \\ \mathbb{L}^{(\text{amp})}_{x,M/(x-1)}\bigcup\mathbb{X}^{(\text{amp},-)}_{x,M/( x-1)}&\text{for $x\geq 1$}\;.\end{array}\right. \tag{78}\]
Analogously we have that \(\mathbb{H}^{(0)}(\mathcal{E}_{\eta,N})\) and \(\mathbb{H}^{(0)}(\mathcal{A}_{g,N})\) correspond respectively to \(\mathbb{H}^{(\text{att})}_{\eta,N}\bigcup\mathbb{X}^{(\text{att},+)}_{\eta,N}\) and \(\mathbb{H}^{(\text{amp})}_{g,N}\bigcup\mathbb{X}^{(\text{amp},+)}_{g,N}\), so that
\[\mathbb{H}^{(0)}_{x,M} = \left\{\begin{array}{ll}\mathbb{H}^{(\text{att})}_{x,M/(1-x)} \bigcup\mathbb{X}^{(\text{att},+)}_{x,M/(1-x)}&\text{for $x\in[0,1]$}\;,\\ \\ \mathbb{H}^{(\text{amp})}_{x,M/(x-1)}\bigcup\mathbb{X}^{(\text{amp},+)}_{x,M/( x-1)}&\text{for $x\geq 1$}\;.\end{array}\right. \tag{79}\]
The border lines of these regions are provided by the curves \(M^{(j)}_{x,M}(x^{\prime})\) of Tab. 2 obtained from \(N^{(1,2)}_{\eta,N}(\eta^{\prime})\), \(N^{(1,2)}_{g,N}(g^{\prime})\), \(N^{(3,4)}_{\eta,N}(g^{\prime})\), and \(N^{(3,4)}_{g,N}(\eta^{\prime})\) via the substitutions
\[M^{(j)}_{x,M}(x^{\prime}):=|1-x^{\prime}|N^{(j)}_{x,M/|1-x|}(x^{\prime})\;. \tag{80}\]
For instance one has that \(\mathbb{L}^{(0)}_{x,M}\) is given by the points \((x^{\prime},M^{\prime})\) such that
\[M^{\prime}\geq\left\{\begin{array}{ll}M^{(1)}_{x,M}(x^{\prime})=M\frac{x^{ \prime}}{x},&(0\leq x^{\prime}\leq x),\\ M^{(2)}_{x,M}(x^{\prime})=M-x+x^{\prime},&(x\leq x^{\prime}\leq 1),\\ M^{(4)}_{x,M}(x^{\prime})=M+1-x,&(1\leq x^{\prime}),\end{array}\right. \tag{81}\]
for \(x\leq 1\), and
\[M^{\prime}\geq\left\{\begin{array}{ll}M^{(4)}_{x,M}(x^{\prime})=(M+x-1)\frac {x^{\prime}}{x},&(0\leq x^{\prime}\leq 1),\\ M^{(2)}_{x,M}(x^{\prime})=(M-1)\frac{x^{\prime}}{x}+1,&(1\leq x^{\prime}\leq x ),\\ M^{(1)}_{x,M}(x^{\prime})=M,&(x\leq x^{\prime}),\end{array}\right. \tag{82}\]
for \(x\geq 1\). Viceversa we have that \(\mathbb{H}^{(0)}_{x,M}\) includes all points \((x^{\prime},M^{\prime})\) such that
\[0\leq M^{\prime}\leq\left\{\begin{array}{ll}M^{(2)}_{x,M}(x^{\prime})=M-x+x^ {\prime},&(0\leq x^{\prime}\leq x),\\ M^{(1)}_{x,M}(x^{\prime})=M\frac{x^{\prime}}{x},&(x\leq x^{\prime}\leq 1),\\ M^{(3)}_{x,M}(x^{\prime})=(M-x)\frac{x^{\prime}}{x}+1,&(1\leq x^{\prime}), \end{array}\right. \tag{83}\]
for \(x\leq 1\), and
\[0\leq M^{\prime}\leq\left\{\begin{array}{ll}M^{(3)}_{x,M}(x^{\prime})=M-1+x ^{\prime},&(0\leq x^{\prime}\leq 1),\\ M^{(1)}_{x,M}(x^{\prime})=M,&(1\leq x^{\prime}\leq x),\\ M^{(2)}_{x,M}(x^{\prime})=(M-1)\frac{x^{\prime}}{x}+1,&(x\leq x^{\prime}),\end{array}\right. \tag{84}\]
for \(x\geq 1\).
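Since the borders in Eqs. (81)-(84) are explicit piecewise functions, membership in the two-element regions is easy to test numerically. The following Python sketch (function names are ours) encodes those borders verbatim.

```python
def L0_border(x, M, xp):
    """Lower border of L^{(0)}_{x,M} at x', piecewise as in Eqs. (81)-(82)."""
    if x <= 1:                      # attenuator / additive-noise reference channel
        if xp <= x:
            return M * xp / x
        elif xp <= 1:
            return M - x + xp
        else:
            return M + 1 - x
    else:                           # amplifier reference channel
        if xp <= 1:
            return (M + x - 1) * xp / x
        elif xp <= x:
            return (M - 1) * xp / x + 1
        else:
            return M

def H0_border(x, M, xp):
    """Upper border of H^{(0)}_{x,M} at x', piecewise as in Eqs. (83)-(84)."""
    if x <= 1:
        if xp <= x:
            return M - x + xp
        elif xp <= 1:
            return M * xp / x
        else:
            return (M - x) * xp / x + 1
    else:
        if xp <= 1:
            return M - 1 + xp
        elif xp <= x:
            return M
        else:
            return (M - 1) * xp / x + 1

def in_L0(x, M, xp, Mp):
    return Mp >= max(L0_border(x, M, xp), 0.0)

def in_H0(x, M, xp, Mp):
    return 0.0 <= Mp <= H0_border(x, M, xp)

# Example: the attenuator of Fig. 3a (x = 0.6, M = 0.1)
print(in_L0(0.6, 0.1, 0.8, 0.5), in_H0(0.6, 0.1, 0.8, 0.05))
```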
### Three-elements concatenation regions for channels \(\Phi_{x,M}\) fulfilling Eq. (32)
Comparing the r.h.s. of Eqs. (81)-(84) with the functions \(f^{(1)}_{x,M}(x^{\prime})\) and \(f^{(2)}_{x,M}(x^{\prime})\) of Eq. (33), one can easily check that for channels \(\Phi_{x,M}\) which fulfil Eq. (32), the two-element concatenation regions \(\mathbb{L}^{(0)}_{x,M}\) and \(\mathbb{H}^{(0)}_{x,M}\) can be expressed as
\[\mathbb{L}^{(0)}_{x,M} = \left\{(x^{\prime},M^{\prime})\in\left(\mathbb{R}^{+}\right)^{2}:\right.\] \[\left.M^{\prime}\geq\max\{f^{(1)}_{x,M}(x^{\prime}),f^{(2)}_{x,M} (x^{\prime})\}\right\}\,,\] \[\mathbb{H}^{(0)}_{x,M} = \left\{(x^{\prime},M^{\prime})\in\left(\mathbb{R}^{+}\right)^{2}:\right.\] \[\left.M^{\prime}\leq\min\{f^{(1)}_{x,M}(x^{\prime}),f^{(2)}_{x,M} (x^{\prime})\}\right\}\,.\]
Accordingly, under the condition (32), the proof of the identities (34) and (35) reduces to showing that the two-element concatenation regions \(\mathbb{L}^{(0)}_{x,M}\) and \(\mathbb{H}^{(0)}_{x,M}\) correspond to the three-elements concatenation sets \(\mathbb{L}_{x,M}\) and \(\mathbb{H}_{x,M}\), i.e.
**Corollary 4.1**.: _Given \((x,M)\) such that \(M\leq M_{EB}(x)\) we have that_
\[\mathbb{L}^{(0)}_{x,M}=\mathbb{L}_{x,M}\;,\qquad\mathbb{H}^{(0)}_{x,M}=\mathbb{H }_{x,M}\;. \tag{87}\]
Proof.: This result can be derived by noticing that for channels \(\Phi_{x,M}\) which are non-EB, \(\mathbb{L}^{(0)}_{x,M}\) and \(\mathbb{H}^{(0)}_{x,M}\) fulfil the same ordering rules of \(\mathbb{L}_{x,M}\) and \(\mathbb{H}_{x,M}\) given in Eqs. (29) and (30), i.e.
\[M<M_{\text{EB}}(x)\Longrightarrow\left\{\begin{array}{ll}\mathbb{L}^{(0)}_{x^{ \prime},M^{\prime}}\subseteq\mathbb{L}^{(0)}_{x,M}&,\forall(x^{\prime},M^{ \prime})\in\mathbb{L}^{(0)}_{x,M}\\ \\ \mathbb{H}^{(0)}_{x^{\prime},M^{\prime}}\subseteq\mathbb{H}^{(0)}_{x,M}&,\forall(x^{ \prime},M^{\prime})\in\mathbb{H}^{(0)}_{x,M}\\ \end{array}\right. \tag{88}\]
The derivation of these identities relies on a series of geometric relations in which one has to compare the relative size and position of the two-dimensional polytopes defined in Eqs. (93) and (94). It suffices to show this for the high ground region. In fact, since \((x^{\prime},M^{\prime})\in\mathbb{L}^{(0)}_{x,M}\) if and only if \((x,M)\in\mathbb{H}^{(0)}_{x^{\prime},M^{\prime}}\), we have that \((x^{\prime\prime},M^{\prime\prime})\in\mathbb{L}^{(0)}_{x,M}\), \((x^{\prime\prime},M^{\prime\prime})\in\mathbb{L}^{(0)}_{x^{\prime},M^{\prime}}\), \((x^{\prime},M^{\prime})\in\mathbb{L}^{(0)}_{x,M}\) if and only if \((x,M)\in\mathbb{H}^{(0)}_{x^{\prime\prime},M^{\prime\prime}}\), \((x^{\prime},M^{\prime})\in\mathbb{H}^{(0)}_{x^{\prime\prime},M^{\prime\prime}}\), \((x,M)\in\mathbb{H}^{(0)}_{x^{\prime},M^{\prime}}\). Therefore it suffices to prove only that \((x,M)\in\mathbb{H}^{(0)}_{x^{\prime\prime},M^{\prime\prime}}\) whenever \((x^{\prime},M^{\prime})\in\mathbb{H}^{(0)}_{x^{\prime\prime},M^{\prime\prime}}\) and \((x,M)\in\mathbb{H}^{(0)}_{x^{\prime},M^{\prime}}\); this follows from the geometric considerations illustrated in Fig. 5.

Having established Eq. (88), consider then \((x^{\prime},M^{\prime})\in\mathbb{L}_{x,M}\): from (24) it follows that we can write \(\Phi_{x^{\prime},M^{\prime}}=\Phi_{\bar{x}_{1},\bar{M}_{1}}\circ\Phi_{x,M}\circ\Phi_{\bar{x}_{2},\bar{M}_{2}}\) for some proper choice of \((\bar{x}_{1},\bar{M}_{1}),(\bar{x}_{2},\bar{M}_{2})\in\left(\mathbb{R}^{+}\right)^{2}\). Setting then \(\Phi_{x_{2},M_{2}}:=\Phi_{x,M}\circ\Phi_{\bar{x}_{2},\bar{M}_{2}}\), we can claim that \((x_{2},M_{2})\) is an element of \(\mathbb{L}^{(0)}_{x,M}\), and \((x^{\prime},M^{\prime})\) an element
of \(\mathbb{L}_{x_{2},M_{2}}^{(0)}\) (indeed \(\Phi_{x^{\prime},M^{\prime}}=\Phi_{\bar{x}_{1},\bar{M}_{1}}\circ\Phi_{x_{2},M_{2}}\)). However, because of Eq. (88) the latter is a subset of \(\mathbb{L}_{x,M}^{(0)}\), so we can conclude that
\[(x^{\prime},M^{\prime})\in\mathbb{L}_{x,M}\Longrightarrow(x^{\prime},M^{\prime })\in\mathbb{L}_{x,M}^{(0)}\;, \tag{90}\]
which together with Eq. (42) gives the first of the identities Eq. (87).
By the same token, let \((x^{\prime},M^{\prime})\in\mathbb{H}_{x,M}\): from (27) it follows that we can write
\[\Phi_{x,M}=\Phi_{\bar{x}_{1},\bar{M}_{1}}\circ\Phi_{x^{\prime},M^{\prime}}\circ \Phi_{\bar{x}_{2},\bar{M}_{2}}\;, \tag{91}\]
for some proper choice of \((\bar{x}_{1},\bar{M}_{1}),(\bar{x}_{2},\bar{M}_{2})\in\left(\mathbb{R}^{+} \right)^{2}\). Setting then \(\Phi_{x_{2},M_{2}}:=\Phi_{x^{\prime},M^{\prime}}\circ\Phi_{\bar{x}_{2},\bar{M }_{2}}\), we can claim that \((x^{\prime},M^{\prime})\) is an element of \(\mathbb{H}_{x_{2},M_{2}}^{(0)}\), and \((x_{2},M_{2})\) an element of \(\mathbb{H}_{x,M}^{(0)}\) (indeed \(\Phi_{x,M}=\Phi_{\bar{x}_{1},\bar{M}_{1}}\circ\Phi_{x_{2},M_{2}}\)). However, because of Eq. (88) it follows that \(\mathbb{H}_{x_{2},M_{2}}^{(0)}\) is included into \(\mathbb{H}_{x,M}^{(0)}\), so that
\[(x^{\prime},M^{\prime})\in\mathbb{H}_{x,M}\Longrightarrow(x^{\prime},M^{\prime })\in\mathbb{H}_{x,M}^{(0)}\;, \tag{92}\]
which gives the second of the identities Eq. (87).
### Three-elements concatenation regions for channels \(\Phi_{x,M}\) fulfilling (36)
Here we show that for channels \(\Phi_{x,M}\) which are deep in the EB region (i.e. such that (36) holds true), the low-ground/high-ground regions are determined by Eqs. (38) and (37). To begin with let us observe that, for \(M\geq M_{\rm EB}(x)\), Eqs. (81)-(84) lead to express the two-elements concatenation sets \(\mathbb{L}_{x,M}^{(0)}\) and \(\mathbb{H}_{x,M}^{(0)}\) as
\[\mathbb{L}_{x,M}^{(0)} = \Big{\{}(x^{\prime},M^{\prime})\in\left(\mathbb{R}^{+}\right)^{2}:\] \[\quad M^{\prime}\geq\min\{f_{x,M}^{(1)}(x^{\prime}),f_{x,M}^{(2)} (x^{\prime})\}\Big{\}}\;,\] \[\mathbb{H}_{x,M}^{(0)} = \Big{\{}(x^{\prime},M^{\prime})\in\left(\mathbb{R}^{+}\right)^{2}:\] \[\quad M^{\prime}\leq\max\{f_{x,M}^{(1)}(x^{\prime}),f_{x,M}^{(2)} (x^{\prime})\}\Big{\}}\;,\]
with \(f_{x,M}^{(1,2)}(x^{\prime})\) the functions defined in Eq. (33). It turns out that such regions always admit a non trivial overlap \(\mathbb{O}_{x,M}^{(0)}:=\mathbb{L}_{x,M}^{(0)}\bigcap\mathbb{H}_{x,M}^{(0)}\) that includes a finite portion of the plane \(\left(\mathbb{R}^{+}\right)^{2}\) in the neighbourhood of the origin - see Fig. 6. As a matter of fact \((0,0)\) itself can be considered as an element of \(\mathbb{O}_{x,M}^{(0)}\) (to be precise, it is an element of the closure of \(\mathbb{O}_{x,M}^{(0)}\) - see App. B for a refinement of the following argument, taking this into account), i.e.
\[M>M_{\rm EB}(x)\Longrightarrow(0,0)\in\mathbb{O}_{x,M}^{(0)}\subseteq\mathbb{ H}_{x,M}^{(0)}\subseteq\mathbb{H}_{x,M}\;. \tag{95}\]
Now from (26) and (31) we know that
\[(x^{\prime},M^{\prime})\in\mathbb{H}_{0,0}\;,\qquad\forall(x^{\prime},M^{ \prime})\in\left(\mathbb{R}^{+}\right)^{2}\;, \tag{96}\]
which using (29) gives
\[(x^{\prime},M^{\prime})\in\mathbb{H}_{x,M}\;,\qquad\forall(x^{\prime},M^{ \prime})\in\left(\mathbb{R}^{+}\right)^{2}\;, \tag{97}\]
hence proving Eq. (37). Observe next that if \((x^{\prime},M^{\prime})\) is also EB, then from (97) we also have \((x,M)\in\mathbb{H}_{x^{\prime},M^{\prime}}\), which leads to Eq. (38) via the complementarity rule (31).
## VI Stabilizing bounds
Building up from the low-ground/high-ground analysis presented in the previous sections we can now detail a general technique that allows one to potentially improve existing upper and lower bounds for the capacities of single-mode PI-GBCs. Indeed suppose that \(\mathcal{K}^{(+)}(x,M)\), \(\mathcal{K}^{(-)}(x,M)\) are two functions which bound the capacity \(\mathcal{K}(x,M):=\mathcal{K}(\Phi_{x,M})\) of the channel \(\Phi_{x,M}\), i.e.
\[\mathcal{K}^{(+)}(x,M)\geq\mathcal{K}(x,M)\geq\mathcal{K}^{(-)}(x,M)\;, \tag{98}\]
for all \((x,M)\in\left(\mathbb{R}^{+}\right)^{2}\). From (25) and (28) it follows that the quantities
\[\begin{cases}\underline{\mathcal{K}}^{(+)}(x,M)&:=\min_{(x^{\prime},M^{\prime })\in\mathbb{H}_{x,M}}\mathcal{K}^{(+)}(x^{\prime},M^{\prime})\;,\\ \overline{\mathcal{K}}^{(-)}(x,M)&:=\max_{(x^{\prime},M^{\prime})\in \mathbb{L}_{x,M}}\mathcal{K}^{(-)}(x^{\prime},M^{\prime})\;,\end{cases} \tag{99}\]
can potentially improve the inequalities (98), i.e.
\[\begin{cases}&\mathcal{K}^{(+)}(x,M)\geq\underline{\mathcal{K}}^{(+)}(x,M)\;, \\ &\\ &\underline{\mathcal{K}}^{(+)}(x,M)\geq\mathcal{K}(x,M)\geq\overline{\mathcal{ K}}^{(-)}(x,M)\;,\\ &\\ &\overline{\mathcal{K}}^{(-)}(x,M)\geq\mathcal{K}^{(-)}(x,M)\;.\end{cases} \tag{100}\]
Of course it is very possible that the functions \(\underline{\mathcal{K}}^{(+)}(x,M)\) and \(\overline{\mathcal{K}}^{(-)}(x,M)\) will coincide with \(\mathcal{K}^{(+)}(x,M)\) and \(\mathcal{K}^{(-)}(x,M)\), respectively: this happens for instance for all bounding functions (98) which arise from operational procedures that automatically incorporate data processing, e.g. the bounds (18), (22), and (23). There are however examples where the construction (99) leads to non trivial overall improvements. In what follows we shall detail one such case.
### Improving the upper bounds for the quantum capacity of thermal attenuators
Expressing the functions (22) in terms of the parametrization (3) we can claim that the quantum and
Figure 5: Graphical explanation of the inclusion rules of Eq. (88). The border of the high ground region can have two different shapes, depending on whether \(x_{i}\) is smaller or larger than \(1\) (the degenerate case \(x_{i}=1\) is not plotted but it is analogous). In the case \(x_{i}<1\), the region is obtained as the intersection of \(\left(\mathbb{R}^{+}\right)^{2}\) with the half-planes delimited by a \(45\)-degree line passing through \((x_{i},M_{i})\), a line passing through the origin and \((x_{i},M_{i})\) and intersecting the \(x=1\) line at \(\mathbf{C_{i}}\), and a line passing through \(\mathbf{C_{i}}\) and \((0,1)\). In the case \(x_{i}>1\), the region is obtained as the intersection of \(\left(\mathbb{R}^{+}\right)^{2}\) with the half-planes delimited by a horizontal line passing through \((x_{i},M_{i})\) and intersecting the \(x=1\) line at \(\mathbf{B_{i}}\), a \(45\)-degree line intersecting \(\mathbf{B_{i}}\), and a line passing through \((x_{i},M_{i})\) and \((0,1)\). If \(x_{0}<1\) and \(x_{1}<1\), all the intersections of \(\left(\mathbb{R}^{+}\right)^{2}\) and the half-planes individuated by the segments \(\mathbf{A_{1}}-(x,M)\), \((x,M)-\mathbf{C_{1}}\) and \(\mathbf{C_{1}}-\mathbf{D_{1}}\) are contained in the corresponding region individuated by \(x_{0}\), and therefore their intersection also satisfies the same inclusions. If \(x_{0}>1\) and \(x_{1}>1\), all the intersections of \(\left(\mathbb{R}^{+}\right)^{2}\) and the half-planes individuated by the segments \(\mathbf{A_{1}}-\mathbf{B_{1}}\), \(\mathbf{B_{1}}-(x,M)\), and \((x,M)-\mathbf{D_{1}}\) are contained in the corresponding region individuated by \(x_{0}\), and therefore their intersection also satisfies the same inclusions. If \(x_{0}<1\) and \(x_{1}>1\), a calculation shows that in the non EB region the region on the left is convex, therefore it contains the triangle \(\mathbf{A_{1}}-\mathbf{B_{1}}-(1,0)\), while the inclusion of the triangle \(\mathbf{C_{1}}-\mathbf{D_{1}}-(1,0)\) follows from the same half-plane argument as before. If \(x_{0}>1\) and \(x_{1}<1\), a calculation shows that in the non EB region the region on the right is convex, therefore it contains the triangle \(\mathbf{C_{1}}-\mathbf{D_{1}}-(1,0)\), while the inclusion of the triangle \(\mathbf{A_{1}}-\mathbf{C_{1}}-(1,0)\) follows from the same half-plane argument as before.
Figure 6: Examples of the two-elements compositions low-ground/high-ground regions \(\mathbb{L}_{x,M}^{(0)}\) and \(\mathbb{H}_{x,M}^{(0)}\) for EB channels \(\Phi_{x,M}\). Notice that such sets admits a non trivial overlap \(\mathbb{O}_{x,M}^{(0)}:=\mathbb{L}_{x,M}^{(0)}\bigcap\mathbb{H}_{x,M}^{(0)}\) (pale yellow region) which always includes the origin point \((0,0)\).
private capacities of the channel \(\Phi_{x,M}\) are always smaller than or equal to
\[Q_{\rm FKG}(x,M):=\left\{\begin{array}{ll}Q_{\rm FKG}^{\rm att}(x,\frac{M}{1-x}) \;,&\forall x\in[0,1]\;,\\ \\ Q_{\rm FKG}^{\rm amp}(M)\;,&\forall x\geq 1\;.\end{array}\right. \tag{101}\]
Following Eq. (99) we can produce an upper bound \(\underline{Q}_{\rm FKG}(x,M)\) by taking the minimum of \(Q_{\rm FKG}(x^{\prime},M^{\prime})\) over the set \(\mathbb{H}_{x,M}\). This strategy had been suggested and shown to give improvements in [41], and it will be fully explored here. Without loss of generality we assume \((x,M)\) not to belong to the AD domain \(\mathbb{A}\), i.e.
\[\begin{cases}x\geq 1/2,\\ M\leq M_{\rm AD}(x)=\min\{x-1/2,1/2\}\;,\end{cases} \tag{102}\]
a condition which, via Eq. (35), allows us to identify the high-ground set of the model with the yellow regions of Fig. 3. We next compute the value \(\underline{Q}_{\rm FKG}^{(1)}(x,M)\) which represents the minimum of \(Q_{\rm FKG}(x^{\prime},M^{\prime})\) for points of \(\mathbb{H}_{x,M}\) which have \(x^{\prime}\in[0,1]\), and the value \(\underline{Q}_{\rm FKG}^{(2)}(x,M)\) which instead involves points of \(\mathbb{H}_{x,M}\) with \(x^{\prime}\geq 1\), i.e.
\[\underline{Q}_{\rm FKG}^{(1)}(x,M) := \min_{(x^{\prime},M^{\prime})\in\mathbb{H}_{x,M};x^{\prime}\in[0,1]}Q_{\rm FKG}^{\rm att}(x^{\prime},\frac{M^{\prime}}{1-x^{\prime}})\;,\] \[\underline{Q}_{\rm FKG}^{(2)}(x,M) := \min_{(x^{\prime},M^{\prime})\in\mathbb{H}_{x,M};x^{\prime}\geq 1 }Q_{\rm FKG}^{\rm amp}(M^{\prime})\;. \tag{103}\]
Once we have these terms we can then write the global minimum of \(Q_{\rm FKG}(x,M)\) over \(\mathbb{H}_{x,M}\) as
\[\underline{Q}_{\rm FKG}(x,M)=\min\{\underline{Q}_{\rm FKG}^{(1)}(x,M), \underline{Q}_{\rm FKG}^{(2)}(x,M)\}\;. \tag{104}\]
Consider first the evaluation of \(\underline{Q}_{\rm FKG}^{(2)}(x,M)\). Let us start observing that from Eqs. (83) and (84), the maximum value of \(M^{\prime}\) we can get for points \((x^{\prime},M^{\prime})\) of \(\mathbb{H}_{x,M}\) with \(x^{\prime}\geq 1\) is
\[M_{\rm max}^{(>)}(x):=\left\{\begin{array}{ll}M/x\;,&\forall x\in[\frac{1}{ 2},1]\\ M\;,&\forall x\geq 1\end{array}\right.=\frac{M}{M_{\rm EB}(x)}\;, \tag{105}\]
which, in virtue of Eq. (102), is always smaller than or equal to \(1/2\). Recalling hence that on the interval \(\kappa\in[0,1/2]\) the function \(Q_{\rm FKG}^{\rm amp}(\kappa)\) is monotonically decreasing, we can write
\[\underline{Q}_{\rm FKG}^{(2)}(x,M)=Q_{\rm FKG}^{\rm amp}(M_{\rm max }^{(>)}(x))\] \[=-\log_{2}(\frac{\epsilon M}{M_{\rm EB}(x)})+2h\left(\frac{\sqrt{M ^{2}+M_{\rm EB}^{2}(x)}-M_{\rm EB}^{2}(x)}{2M_{\rm EB}(x)}\right).\]
In the case of amplifiers (i.e. for \(x\geq 1\)) this implies that \(\underline{Q}_{\rm FKG}^{(2)}(x,M)\) always coincides with the old bound (101), so no improvement can be obtained.
Consider next \(\underline{Q}_{\rm FKG}^{(1)}(x,M)\). Here the key observation is that for a fixed value of \(x^{\prime}\), \(Q_{\rm FKG}^{\rm att}(x^{\prime},\frac{M^{\prime}}{1-x^{\prime}})\) is a decreasing function of \(M^{\prime}\). Observe also that for \(x\in[\frac{1}{2},1]\), Eq. (83) implies that the maximum value of \(M^{\prime}\) we can get for points \((x^{\prime},M^{\prime})\) of \(\mathbb{H}_{x,M}\) which have \(x^{\prime}\leq 1\) is
\[M_{\rm max}^{(<)}(x^{\prime},x):=\left\{\begin{array}{ll}M+x^{\prime}-x\;,& \forall x^{\prime}\in[x-M,x],\\ \frac{x^{\prime}}{x}M\;,&\forall x^{\prime}\in[x,1].\end{array}\right. \tag{107}\]
Therefore we can write
\[\underline{Q}_{\rm FKG}^{(1)}(x,M) = \min_{x^{\prime}\in[x-M,1]}Q_{\rm FKG}^{\rm att}\Big(x^{\prime},\frac{M_{\rm max}^{(<)}(x^{\prime},x)}{1-x^{\prime}}\Big)\] \[= \min\{\underline{Q}_{\rm FKG}^{(1.1)}(x,M),\underline{Q}_{\rm FKG}^{(1.2)}(x,M)\}\;,\]
with
\[\underline{Q}_{\rm FKG}^{(1.1)}(x,M) := \min_{x^{\prime}\in[x-M,x]}Q_{\rm FKG}^{\rm att}(x^{\prime},\frac {M+x^{\prime}-x}{1-x^{\prime}})\] \[= \min_{\epsilon\in[0,1]}Q_{\rm FKG}^{\rm att}(x-\epsilon M,\frac{(1 -\epsilon)M}{1-x+\epsilon M})\;,\]
where the second identity simply follows from a proper parametrization of \(x^{\prime}\), and
\[\underline{Q}_{\rm FKG}^{(1.2)}(x,M) := \min_{x^{\prime}\in[x,1]}Q_{\rm FKG}^{\rm att}\Big(x^{\prime},\frac{x^{\prime}M}{(1-x^{\prime})x}\Big)\] \[= \min_{\epsilon\in[0,1]}Q_{\rm FKG}^{\rm att}\left(1-\epsilon(1-x),\frac{(1-\epsilon(1-x))M}{(1-x)x\,\epsilon}\right)\;.\]
Notice that for \(\epsilon=0\) the function \(Q_{\rm FKG}^{\rm att}(x-\epsilon M,\frac{(1-\epsilon)M}{1-x+\epsilon M})\) corresponds to \(Q_{\rm FKG}^{\rm att}(x,M)\) in Eq. (22) while for \(\epsilon=1\) we recover the bound \(Q(\mathcal{E}_{x-M,0})\) of Eq. (20). Therefore for the attenuators we have that \(\underline{Q}_{\rm FKG}^{(1.1)}(x,M)\) (and hence \(\underline{Q}_{\rm FKG}^{(1)}(x,M)\)) is always guaranteed to provide bounds which are at least as tight as those reported in Eqs. (22) and (20), i.e.
\[\underline{Q}_{\rm FKG}^{(1)}(x,M)\leq\min\{Q(\mathcal{E}_{x-M,0}),Q_{\rm FKG}^{ \rm att}(x,M)\}\;. \tag{111}\]
On the contrary, for \(x\geq 1\), Eq. (107) gets replaced by
\[M_{\rm max}^{(<)}(x^{\prime},x):=M+x^{\prime}-1\;,\forall x^{\prime}\in[1-M,1], \tag{112}\]
leading to
\[\underline{Q}_{\rm FKG}^{(1)}(x,M) := \min_{x^{\prime}\in[1-M,1]}Q_{\rm FKG}^{\rm att}(x^{\prime},\frac {M+x^{\prime}-1}{1-x^{\prime}}) \tag{113}\] \[= \min_{\epsilon\in[0,1]}Q_{\rm FKG}^{\rm att}\left(1-\epsilon M, \frac{1-\epsilon}{\epsilon}\right)\;.\]
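The minimizations in Eqs. (109) and (113) are one-dimensional searches over \(\epsilon\in[0,1]\). A minimal numerical sketch for the attenuator case of Eq. (109) is given below; note that the explicit form of \(Q^{\rm att}_{\rm FKG}\) (Eq. (22)) is not reproduced in this section, so the routine takes it as an input, and the `toy_bound` used in the example is only a stand-in to show the call, not the actual FKG bound.

```python
import numpy as np

def stabilized_Q11(x, M, Q_att, num=2001):
    """Evaluate the r.h.s. of Eq. (109):
       min over eps in [0,1] of Q_att(x - eps*M, (1-eps)*M / (1 - x + eps*M)).
    `Q_att(eta, N)` must implement the attenuator upper bound of Eq. (22).
    Assumes x < 1 (attenuator case)."""
    eps = np.linspace(0.0, 1.0, num)
    etas = x - eps * M
    Ns = (1.0 - eps) * M / (1.0 - x + eps * M)
    return float(np.min([Q_att(e, n) for e, n in zip(etas, Ns)]))

# Toy usage with a stand-in bound (NOT the FKG bound), only to illustrate the call:
toy_bound = lambda eta, N: np.log2(max(eta, 1e-12)) - np.log2(1 - max(eta, 1e-12)) - N
print(stabilized_Q11(0.8, 0.1, toy_bound))
```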
#### vi.2.1 Numerical analysis
Numerical analysis shows that \(\underline{Q}^{(1.2)}_{\rm FKG}(x,M)\) is always less performant than \(\underline{Q}^{(1.1)}_{\rm FKG}(x,M)\) so we can drop it from Eq. (108). Accordingly we can claim that for an attenuator channel (\(x\leq 1\)) the quantum capacity \(Q(\Phi_{x,M})\) must fulfil two new inequalities, i.e.
\[Q(\Phi_{x,M})\leq\underline{Q}_{\rm FKG}^{(2)}(x,M)=Q_{\rm FKG}^{\rm amp}(M/x)\;, \tag{114}\]
which follows from (106) and (105), and
\[Q(\Phi_{x,M})\leq\underline{Q}^{(1)}_{\rm FKG}(x,M)=\min_{\epsilon\in[0,1]}Q^{ \rm att}_{\rm FKG}(x-\epsilon M,\tfrac{(1-\epsilon)M}{1-x+\epsilon M})\;, \tag{115}\]
which instead follows from (109). As shown in Fig. 7, for some values of the channel parameters these two functions provide better upper bounds than those reported in Sec. III.1.
For amplifiers (i.e. \(x\geq 1\)) we have already observed that \(\underline{Q}^{(2)}_{\rm FKG}(x,M)\) always coincides with the old bound (101). Accordingly new results can only come from \(\underline{Q}^{(1)}_{\rm FKG}(x,M)\) (i.e. from \(\underline{Q}^{(1.1)}_{\rm FKG}(x,M)\)): unfortunately numerical study reveals that such a function is always less performant than the bounds of Sec. III.1. A comprehensive comparison between the old bounds and the improved versions derived in this section is presented in the right part of Fig. 2.
## VII Conclusion
In the present manuscript, we sought to better understand the behavior of the capacities of single-mode phase-insensitive Gaussian bosonic channels (PI-GBCs) under concatenation. For each point in the parameter space of PI-GBCs we established an analytical characterization of two regions of higher and lower capacity, i.e. regions whose points have capacities respectively higher or lower than that of the reference channel. Using these regions, we were able to improve upon previously known upper bounds. This structure of the parameter space of PI-GBCs can be used to potentially improve any new upper or lower bound.
We thank F. A. Mele and L. Lami for comments and suggestions. We also acknowledge financial support by MUR (Ministero dell'Università e della Ricerca) through the PNRR MUR project PE0000023-NQSTI. MF is supported by Juan de la Cierva - Formación (Spanish MICIN project FJC2021-047404-I), with funding from MCIN/AEI/10.13039/501100011033 and European Union "NextGenerationEU"/PRTR.
|
2306.02325
|
Random Feedback Alignment Algorithms to train Neural Networks: Why do
they Align?
|
Feedback alignment algorithms are an alternative to backpropagation to train
neural networks, whereby some of the partial derivatives that are required to
compute the gradient are replaced by random terms. This essentially transforms
the update rule into a random walk in weight space. Surprisingly, learning
still works with those algorithms, including training of deep neural networks.
This is generally attributed to an alignment of the update of the random walker
with the true gradient - the eponymous gradient alignment -- which drives an
approximate gradient descend. The mechanism that leads to this alignment
remains unclear, however. In this paper, we use mathematical reasoning and
simulations to investigate gradient alignment. We observe that the feedback
alignment update rule has fixed points, which correspond to extrema of the loss
function. We show that gradient alignment is a stability criterion for those
fixed points. It is only a necessary criterion for algorithm performance.
Experimentally, we demonstrate that high levels of gradient alignment can lead
to poor algorithm performance and that the alignment is not always driving the
gradient descend.
|
Dominique Chu, Florian Bacho
|
2023-06-04T10:50:13Z
|
http://arxiv.org/abs/2306.02325v1
|
# Random Feedback Alignment Algorithms to train Neural Networks: Why do they Align?
###### Abstract
Feedback alignment algorithms are an alternative to backpropagation to train neural networks, whereby some of the partial derivatives that are required to compute the gradient are replaced by random terms. This essentially transforms the update rule into a random walk in weight space. Surprisingly, learning still works with those algorithms, including training of deep neural networks. This is generally attributed to an alignment of the update of the random walker with the true gradient -- the eponymous gradient alignment -- which drives an approximate gradient descent. The mechanism that leads to this alignment remains unclear, however. In this paper, we use mathematical reasoning and simulations to investigate gradient alignment. We observe that the feedback alignment update rule has fixed points, which correspond to extrema of the loss function. We show that gradient alignment is a stability criterion for those fixed points. It is only a necessary criterion for algorithm performance. Experimentally, we demonstrate that high levels of gradient alignment can lead to poor algorithm performance and that the alignment is not always driving the gradient descent.
keywords: Neural networks, Feedback alignment, Random walk
## 1 Introduction
The backpropagation algorithm (BP) [1] underpins a good part of modern neural network (NN) based AI. BP-based training algorithms continue to be the state of the art in many areas of machine learning ranging from benchmark problems such as the MNIST dataset[2] to the most recent transformer-based architectures [3]. While its success is undeniable, BP has some disadvantages. The main one is that BP is computationally expensive. It is so in two ways. Firstly, BP has high requirements on pure processing power. It also requires sequential processing of layers during both the forward and backward pass, limiting its scope for parallelisation. This is sometimes called _backward locking_[4].
Secondly, BP is biologically implausible. One issue is that for every neuronal feedforward connection, BP would require neurons to also have a symmetric feedback connection with the same weight. This has never been observed. From a purely machine learning point of view, the lack of biological plausibility may not be too concerning, since the aim of applied AI is more often performance, rather than neuroscientific realism. However, there is a sense in which biological plausibility becomes a real concern after all: The update of any particular connection weight in a neural network requires global information about the entire network. This entails intense data processing needs [5], which in turn leads to high energy consumption [6; 7]. The electricity consumption required for the training of large scale NNs is a barrier to adoption and environmentally unsustainable and is widely recognised as problematic [8; 9]. BP is also not compatible with neuromorphic hardware platforms [10], such as Loihi [11] or SpiNNaker [12].
In the light of this, there has been some recent interest in alternatives to BP that alleviate these issues [13]. One particularly intriguing example is the family of _random feedback alignment_ (FA) algorithms [14]. The basic FA algorithm is just BP with the symmetric feedback weights replaced by a randomly chosen, but fixed, feedback matrix. A variant of FA is _direct feedback alignment_ (DFA) [15], which bypasses backpropagation through layers and transmits the error directly to weights from the output layer via appropriately chosen feedback matrices. This enables layer-wise parallel updates of NNs. Furthermore, training no longer requires global knowledge of the entire network, which makes it amenable to implementation on neuromorphic hardware. FA and DFA have been found to perform surprisingly well on a number of benchmark problems [16, 17]. Recently, it has been reported that they even work on large-scale architectures such as transformers [18], often reaching performances that are comparable to, albeit not exceeding, those of BP-based algorithms. Neither algorithm works well on convolutional neural networks [18].
FA algorithms replace partial derivatives in the gradient computation by random matrices. Mathematically, the resulting update will no longer be a gradient of the loss function, but must be expected to be orthogonal to the gradient. It is therefore, at a first glance, surprising that DFA and FA work at all. A key insight into why they work was already given in the original paper by Lillicrap [14] who showed that the update direction of FA is not orthogonal to the gradient after all. They observed so-called _weight alignment_, whereby the weights of the network align with (i.e. point into approximately the same direction as) the feedback matrices and _gradient alignment_ where the updates of the FA algorithm align with the gradient as computed by BP. They conjectured that this alignment drives the approximate gradient descent of FA.
A mechanism that could lead to this alignment was suggested by Refinetti and co-workers [19]. They modelled a linear two-layer network using a student-teacher setup based on an approach by Saad and Solla [20]. This showed that, at least in their setup, when starting from initially zero weights, the weight update is in the direction of the feedback matrix, leading to weight alignment and consequently gradient alignment. A corollary of their results is the prediction that alignment is particularly strong when the weights are initially vanishing. Another important theoretical contribution is by Nokland [15] who formulated a stability criterion for DFA.
The above results were obtained using mathematically rigorous methods, but also rely on restrictive simplifying assumptions (e.g. linear networks in a student-teacher setup), which may or may not be relevant for realistic NNs. There is therefore a need to understand how FA operates in unrestricted NN, and whether the insights derived from simplified setups remain valid. The aim of the present contribution is to shed more light onto why FA works. To do this, we will complement existing approaches and view FA as a random walk [21], or more specifically a spatially inhomogeneous random walk in continuous weight space where the distribution of jump lengths and directions varies according to the position of the walker. In the present case, the update is entirely determined by the FA update rule and the distribution of training examples. The latter acts as the source of randomness.
We will show below that across weight space there are particular points, that is specific choices of weights, where the jump length vanishes. In a slight abuse of notation, we will refer to those as _fixed points_ of the random walk. As will become clear, these correspond to local extrema of the loss function, and as such correspond to valid solutions of the BP algorithm. If the random walker landed exactly on one of those, then it would remain there. However, typically these fixed points are not stable under the FA update rule, that is they are not attractors of the random walker. In this case, a walker initialised in the neighbourhood of the fixed point would move away from the fixed point. As one of the main contributions of this paper we will show that gradient alignment is the condition for fixed points to be stable. This stability criterion is different from the one derived by Nokland [15] who showed that under certain conditions gradient alignment can lead to loss minimisation. We show here that gradient alignment **is** the stability criterion to first order approximation and that it can be derived in a general way without any simplifying assumptions.
Furthermore, we will also show that gradient alignment, while necessary for FA to find good solutions, is not a sufficient criterion. Based on simulation results, we will conjecture that alignment is not driving the approximate gradient descent, but rather is a side-product of a random walk that is attracted by local extrema of the loss function. Finally, based on extensive simulations, we will also propose a model of how NN learning under FA works.
## 2 Results
### Notation and basic setup
We will start by introducing the notation and the basic setup on which the remainder of this paper is based. Throughout, we will consider a feedforward neural network (multi-layer perceptron) parametrised by some weights \(\mathbf{w}\). The network takes the vectorised input \(\mathbf{x}\) and returns the output vector \(\mathbf{m}(\mathbf{x};\mathbf{w})\). When the input is irrelevant, we will use the shorthand notation \(\mathbf{m}(\mathbf{w})\) to describe the neural network. We consider a network of \(L\) layers, where each layer \(1\leq l\leq L\) comprises \(n_{l}\) artificial neurons, whose output is a scalar non-linear function \(f_{i}^{(l)}(\cdot)\), where the index \(1\leq i\leq n_{l}\) labels the neuron to which this output belongs. The argument to those activation functions is the pre-activation function
\[h_{j}^{(l)}:=\sum_{i=1}^{n_{l-1}}w_{ji}^{(l)}f_{i}^{(l-1)}\]
with \(w_{ji}^{(l)}\in\mathbb{R}\) denoting the parameters (or "weights") of \(h_{j}^{(l)}\). For convenience, we will write \(x_{i}^{(l)}:=f_{i}^{(l)}\) and in particular \(f_{i}^{(0)}:=x_{i}\) is the input to the network. Throughout this manuscript, we denote the loss function by \(\mathcal{L}(\mathbf{m}(\mathbf{x}))\), and assume that it is to be minimised via gradient descend, although all our conclusions will remain valid for gradient ascend problems.
Finally, we define the _alignment measure_ of two vectors or two matrices \(\mathbf{a}\) and \(\mathbf{b}\). In the case of two matrices, this is computed by flattening the matrices in some way, for example by stacking the columns to obtain \(\mathbf{a}^{\prime}\) and \(\mathbf{b}^{\prime}\). The alignment measure is then computed as the inner product of the vectors divided by their norms,
\[\frac{\mathbf{a}^{\prime}\cdot\mathbf{b}^{\prime}}{\|\mathbf{a}^{\prime}\|\| \mathbf{b}^{\prime}\|}.\] (alignment measure)
The maximal value of the alignment is \(1\). This can be interpreted as \(\mathbf{a}^{\prime}\) and \(\mathbf{b}^{\prime}\) being completely parallel. The minimal value is \(-1\), which indicates that they are anti-parallel. In high dimensional spaces, two randomly chosen matrices/vectors will typically have an alignment of \(0\), indicating that they are orthogonal to one another.
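For concreteness, the alignment measure can be computed as follows (a short Python sketch; the array shapes in the example are arbitrary).

```python
import numpy as np

def alignment(a, b):
    """Alignment measure of two arrays: flatten, then normalized inner product.
    Returns a value in [-1, 1]; typically close to 0 for random high-dimensional arrays."""
    a, b = np.ravel(np.asarray(a, float)), np.ravel(np.asarray(b, float))
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
A = rng.standard_normal((256, 128))
print(alignment(A, 2.0 * A))                       # 1.0 (parallel up to a constant factor)
print(alignment(A, -A))                            # -1.0 (anti-parallel)
print(alignment(A, rng.standard_normal(A.shape)))  # close to 0 (nearly orthogonal)
```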
### BP and FA algorithms
Using this notation, we can formulate the BP update rule for layer \(l\) of a feedforward multi-layer perceptron as
\[\Delta^{\text{BP}}w_{pq}^{(l)}:=\frac{\partial\mathcal{L}}{\partial f_{i}^{(L) }}\frac{\partial f_{i}^{(L)}}{\partial h_{j}^{(L)}}\frac{\partial h_{j}^{(L)}} {\partial f_{l}^{(L-1)}}\frac{\partial f_{l}^{(L-1)}}{\cdots}\cdots\frac{ \cdots}{\partial f_{k}^{(l)}}\frac{\partial f_{k}^{(l)}}{\partial h_{s}^{(l)} }\frac{\partial h_{s}^{(l)}}{\partial w_{pq}^{(l)}}. \tag{1}\]
Here (and in the following), we use the convention that repeated indices are summed over, that is \(a_{i}b_{i}:=\sum_{i}a_{i}b_{i}\); note that this convention does not apply to the superscripts in parenthesis that indicate the layer.
Equation 1 can be evaluated, by noting that the function \(f_{i}^{(l)}\) only depends on \(h_{i}^{(l)}\), and furthermore for the common type of neural network, \(\frac{\partial h_{i}^{(l)}}{\partial w_{jk}^{(l)}}=\delta_{ij}f_{k}^{(l-1)}\), where \(\delta_{ij}\) is \(1\) if \(i=j\) and \(0\) otherwise. Thus, eq. 1 reduces to
\[\Delta w_{pq}^{(l)} =\partial\mathcal{L}_{i}\widetilde{B}_{ik}^{(l)}\frac{\partial f_ {k}^{(l)}}{\partial h_{s}^{(l)}}\frac{\partial h_{s}^{(l)}}{\partial w_{pq}^{ (l)}}\] \[=\partial\mathcal{L}_{i}\widetilde{B}_{ik}^{(l)}\partial f_{ks}^{ (l)}\delta_{ps}f_{q}^{(l-1)}\] \[=\partial\mathcal{L}_{i}\widetilde{B}_{ik}^{(l)}\partial f_{kp}^{ (l)}f_{q}^{(l-1)}. \tag{2}\]
Here we abbreviated the partial derivative of the loss by \(\partial\mathcal{L}_{i}\), \(\partial f_{ks}^{(l)}:=\partial f_{k}^{(l)}/\partial h_{s}^{(l)}\), and \(\widetilde{B}_{ik}^{(l)}\) is a shorthand for the middle terms of the chain rule of eq. 1. Note that, \(\partial f_{ks}^{(l)}\) is zero for \(k\neq s\).
FA is the same as BP, except that the terms \(\partial h_{i}^{(\cdot)}/\partial f_{j}^{(\cdot)}\) appearing in \(\widetilde{B}_{ik}^{(l)}\) are replaced by randomly chosen (but fixed) numbers \(R_{ij}\), drawn from some user-determined distribution. This leads to the partially random feedback matrices \(\mathbf{B}\) with elements \(B_{ik}^{(l)}\). The _FA pseudo-gradient_ in layer \(l\) is defined by
\[\Delta^{\text{FA}}w_{pq}^{(l)}:=\partial\mathcal{L}_{i}B_{ik}^{(l)}\partial f_ {kp}^{(l)}x_{q}^{(l)}. \tag{3}\]
Note that the rhs of the equation is not the gradient of a particular function.
DFA is the same as FA, except that the entire matrix \(B_{ik}^{(l)}\) is chosen randomly (and kept fixed), rather than only the factors \(R_{ij}\) appearing inside \(\widetilde{B}_{ik}^{(l)}\).
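To make the replacement concrete, the following minimal sketch compares the BP gradient and the FA pseudo-gradient of the hidden-layer weights in a toy two-layer network (tanh hidden layer, linear output, squared-error loss; these choices, as well as the layer sizes, are ours and only for illustration) and reports their alignment.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hid, d_out = 8, 16, 4

W1 = rng.standard_normal((d_hid, d_in)) * 0.1
W2 = rng.standard_normal((d_out, d_hid)) * 0.1
R = rng.standard_normal((d_hid, d_out))        # fixed random feedback matrix (replaces W2.T)

def updates(x, y):
    """Return (BP, FA) weight updates for the hidden-layer weights W1."""
    f1 = np.tanh(W1 @ x)
    out = W2 @ f1                               # linear output layer
    dL_dout = out - y                           # derivative of 0.5*||out - y||^2
    # backpropagated vs. randomly fed-back error at the hidden layer
    e_bp = (W2.T @ dL_dout) * (1 - f1 ** 2)
    e_fa = (R @ dL_dout) * (1 - f1 ** 2)
    return np.outer(e_bp, x), np.outer(e_fa, x)

def alignment(a, b):
    a, b = a.ravel(), b.ravel()
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

x, y = rng.standard_normal(d_in), rng.standard_normal(d_out)
dW1_bp, dW1_fa = updates(x, y)
print("gradient alignment of the hidden-layer update:", alignment(dW1_bp, dW1_fa))
```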
### Deriving the stability criterion for FA
The dynamics induced by the FA-pseudo-gradient (eq. 3) constitutes a random walk in weight space. The randomness is introduced via the choice of the particular training example for the next weight update. Therefore, FA is not a gradient descent/ascent algorithm, except of course in the output layer which follows a gradient to a local extremum of the loss function (in exactly the same way as BP).
We will now show that as a consequence of this, the FA pseudo-gradient update shares with BP a number of fixed points in weight space. These correspond to local extrema of the loss function. Under certain conditions, FA will converge to those. In order to understand the difference between FA and BP it is instructive to consider the update in the penultimate layer of the network (which has the simplest form). In the case of BP, this is:
\[\Delta^{\text{BP}}w_{pq}^{(l)} =\frac{\partial\mathcal{L}}{\partial f_{i}^{(L)}}\frac{\partial f _{i}^{(L)}}{\partial h_{j}^{(L)}}\frac{\partial h_{j}^{(L)}}{\partial f_{l}^{ (L-1)}}\frac{\partial f_{l}^{(L-1)}}{\partial h_{k}^{(L-1)}}\frac{\partial h_{ k}^{(L-1)}}{\partial w_{pq}^{(L-1)}}\] \[=\frac{\partial\mathcal{L}}{\partial f_{i}^{(L)}}\frac{\partial f _{i}^{(L)}}{\partial h_{j}^{(L)}}w_{jl}^{(L)}\frac{\partial f_{l}^{(L-1)}}{ \partial h_{k}^{(L-1)}}\frac{\partial h_{k}^{(L-1)}}{\partial w_{pq}^{(L-1)}}\] (gradient)
The corresponding expression in the case of FA is then:
\[\Delta^{\text{FA}}w_{pq}^{(l)}=\frac{\partial\mathcal{L}}{\partial f_{i}^{(L) }}\frac{\partial f_{i}^{(L)}}{\partial h_{j}^{(L)}}R_{jl}^{(L)}\frac{\partial f _{l}^{(L-1)}}{\partial h_{k}^{(L-1)}}\frac{\partial h_{k}^{(L-1)}}{\partial w_ {pq}^{(L-1)}}\] (FA pseudo-gradient)
where \(\mathbf{R}\) is a randomly chosen, but fixed matrix.
We see from the above equations that for BP the gradient vanishes at each layer when the derivative of the loss \(\left(\partial\mathcal{L}/\partial f_{i}^{(L)}\right)\left(\partial f_{i}^{(L)}/\partial h_{j}^{(L)}\right)\) vanishes for all indices \(j\). If the weight matrices are full rank, then this is indeed the only way for the gradient to vanish. We observe that the random matrix \(\mathbf{R}\) will be of maximal rank as long as its elements are chosen _iid_. Its nullspace vanishes and local extrema of the loss function are therefore also points where the update of the FA pseudo-gradient vanishes. As long as \(\mathbf{R}\) is full rank, all fixed points of the FA pseudo-gradient will be local extrema of the loss function. As a consequence, the fixed points of FA will be local extrema of the loss function, and local attractors under BP. This means that the optimiser in the output layer pushes the entire network to fixed points of the FA pseudo-gradient. Note that these fixed points need not be attractors of the FA pseudo-gradient. When initialised close to such a fixed point, it is conceivable that FA moves away from the neighbourhood of the fixed point. Indeed, it can be easily seen that this happens (see fig. 1 for an example).
The question is now whether or not there are fixed points that are attractive under the FA pseudo-gradient. The condition for stability is that under the FA update \(\Delta^{\text{FA}}\mathbf{w}\) the loss does not increase, such that
\[\mathcal{L}\left(\mathbf{m}\left(\mathbf{w}\right)\right)-\mathcal{L}\left( \mathbf{m}\left(\mathbf{w}-\Delta^{\text{FA}}\mathbf{w}^{(l)}\right)\right) \geq 0. \tag{4}\]
Here, we suppressed the label superscripts for clarity and wrote \(\mathbf{w}\) instead of \(\mathbf{w}^{(l)}\). Assuming the weight update is a small one, we can now expand to first order and obtain
\[\mathbf{m}\left(\mathbf{w}-\Delta^{\text{FA}}\mathbf{w}\right)\approx\mathbf{ m}\left(\mathbf{w}\right)-\mathbf{m}^{\prime}\left(\mathbf{w}\right)\Delta^{ \text{FA}}\mathbf{w}, \tag{5}\]
where \(\mathbf{m}^{\prime}\) is a three-dimensional matrix with elements \(m^{\prime}_{ijk}:=\partial m_{i}/\partial w_{jk}\). We can now further expand the loss function to first order to obtain
\[\mathcal{L}\left(\mathbf{m}\left(\mathbf{w}\right)\right)-\mathcal{L}\left(\mathbf{m} \left(\mathbf{w}-\Delta^{\mathrm{FA}}\mathbf{w}\right)\right)\approx\frac{ \partial\mathcal{L}}{\partial\mathbf{m}}\mathbf{m}^{\prime}(\mathbf{w})\Delta^ {\mathrm{FA}}\mathbf{w}. \tag{6}\]
Thus the stability criterion becomes, in first order approximation
\[\frac{\partial\mathcal{L}}{\partial\mathbf{m}}\mathbf{m}^{\prime}(\mathbf{w}) \Delta^{\mathrm{FA}}\mathbf{w}\geq 0 \tag{7}\]
We observe that \(\frac{\partial\mathcal{L}}{\partial\mathbf{m}}\mathbf{m}^{\prime}(\mathbf{w})\) is just \(\Delta^{\mathrm{BP}}\mathbf{w}\), and \(\Delta^{\mathrm{FA}}\mathbf{w}\) is the FA pseudo-gradient. Furthermore, we see that the left-hand side of eq. 7 is, up to normalisation, the alignment measure of the FA update with the BP gradient. Thus, eq. 7 formulates a necessary condition for the stability of a local extremum with respect to the update by the FA pseudo-gradient. This stability criterion states that the alignment measure needs to be non-negative.
\[\frac{\left(\Delta\mathbf{w}\right)^{\mathrm{BP}}\cdot\left(\Delta\mathbf{w} \right)^{\mathrm{FA}}}{\left\|\left(\Delta\mathbf{w}\right)^{\mathrm{BP}}\right\|\, \left\|\left(\Delta\mathbf{w}\right)^{\mathrm{FA}}\right\|}\geq 0 \tag{8}\]
Put differently, we can now say that gradient alignment is a stability criterion for FA.
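A minimal sketch of this alignment measure (eq. 8), assuming the two updates are available as arrays of equal shape, e.g. `delta_bp` and `delta_fa` from the earlier sketch:

```python
import numpy as np

def alignment(delta_bp, delta_fa):
    """Cosine of the angle between the flattened BP gradient and FA pseudo-gradient (eq. 8)."""
    a, b = np.ravel(delta_bp), np.ravel(delta_fa)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# A non-negative value is the (first-order) stability criterion for a fixed point under FA.
```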
### A conjectured mechanism
The FA pseudo-gradient update rule is a spatially inhomogeneous random walk, whose update steps depend both on the input \(\mathbf{x}\) to the network and the current weights \(\mathbf{w}\). A priori, there cannot be an expectation that the walker moves along a trajectory that reduces the loss, because the FA pseudo-gradient is not a gradient, except in the output layer.
Based on the form of the FA pseudo-gradient, we can still conjecture a mechanism for the walker to find a local extremum of the loss function.
* Initially, the walker moves randomly through parameter space in the input and hidden layers.
* In the output layer, the error is minimised, driving the derivative of the loss function \(\partial\mathcal{L}_{i}\) to zero, and hence (as discussed above) towards a fixed point of the FA pseudo-gradient.
Figure 1: FA and BP were trained on the same sequence of examples from MNIST starting from the same initial conditions. The graphs show the weight alignment over time between FA and BP for the hidden and output layer. A value of 1 means that FA and BP have the same weights up to a constant factor. (a) Initially, the two algorithms diverge. After 25000 update steps the weights of the FA network were transferred to the network trained using BP. Following this, they remain aligned, demonstrating that the solution found by FA remains stable under BP. The inset shows the accuracy for both for reference. (b) Same, but at 25000 update steps FA takes the weights found by BP. This is not stable and the two solutions diverge quickly following the swap. Note how the accuracy of FA drops immediately following the weight swap, indicating that it is repelled from the loss extremum.
* The relevant fixed point may be unstable under the FA pseudo-gradient update rule. In this case, the FA random walker in the input and hidden layers will have a systematic drift, making it impossible for the gradient descent process in the output layer to settle on its local extremum.
* Convergence to a fixed point will only happen when the trajectory towards this fixed point is compatible with the stability criterion for all layers.
There is no guarantee that such a compatible extremum is found. For example, the relevant loss extrema may not be accessible from the initial state, or there may be no extrema for which the stability criterion holds. It could also be that the random walk gets stuck in a region with near-zero update step sizes. For example, the local derivative \(\partial f_{i}^{(l)}/\partial h_{j}^{(l)}\) vanishes as the weights go to infinity (see Appendix A for a discussion of this). This prevents the random walker from exploring the weight space.
### Experiments
In the following, we will try to understand the behaviour of the random walker experimentally by focussing on the particular example of a 3 layer feed-forward neural network trained on the MNIST problem. As we will see below, for this problem and our parameter settings, FA works reasonably well, which makes this a suitable example to gain some insights into how FA works. Note that in what follows the focus is on understanding FA. The NN used and the MNIST benchmark are chosen as illustrative examples. We are not trying to optimise the performance of FA on this task, nor are we attempting to give novel approaches to solve MNIST. Instead, the sole aim of the experiments below is to gain some new insights about FA by observing how it works. The example itself (MNIST) is convenient, because it is easy to solve, but the precise choice of example is to some extent irrelevant, and others could have been used to obtain the same insights.
#### 2.5.1 Alignment
There are two meanings to alignment in FA. (\(i\)) Weight alignment and (\(ii\)) gradient alignment. The consensus view in the literature on FA is that FA (somehow) brings about weight alignment, which implies gradient alignment. The latter then drives the FA towards extrema of the loss function. In that sense, FA approximates BP. In this section, we will show some examples where the approach to the local extremum is not driven by weight alignment, at least not during some stages of the simulations.
One of the key results by Refinetti _et al_ is that when starting from initially vanishing weights, the update of weights is in the direction of the feedback matrices. If this is true and gradient alignment drives FA learning, then one would expect that small initial weights lead to initially larger alignment than non-vanishing initial weights and that low initial weights lead to faster convergence of FA to a good solution. Our simulations are consistent with this. Fig. 2 shows that there is higher weight alignment (fig. 2c) and gradient alignment (fig. 2d) when initial weights are small. Interestingly, weight alignment remains rather modest in comparison to the gradient alignment. Within 10 epochs, it does not even reach a value of 0.1. Still, as expected, FA finds good solutions faster when starting with lower weights (fig. 2a). This conclusion also holds in the long run. Even after 50 epochs, the initial conditions matter for the achieved accuracy. The higher the initial weights, the lower the accuracy (see fig. 2b).
These results are consistent with the view that rapid feedback alignment during early updates is important for the eventual performance of the algorithm. A closer examination, however, reveals some additional complexities, which provide further insight. The first one is highlighted by fig. 3, which shows the norm of the gradient for the input, hidden and output layers after the first update step, so after the algorithm has been presented with the first example of the training set (figs. 3a-3c). The main observation to be made from these figures is that the norm reduces rapidly as the initial weights increase. If we take the norm as an indicator for the step size of the random walker, then this suggests that walkers initialised with high weights suffer from slow speed as a result of small update steps. High weights therefore mean an effectively reduced learning rate at the beginning of learning. Note the dramatic decrease of the learning rate as the weights start to differ from 0.
Figure 2: The upper panel shows (a) the accuracy as a function of the update step for altogether 10 epochs. (b) The accuracy after 50 epochs as a function of the initial weights (see Methods for an explanation). An approximate linear dependence is discernible. The lower panel shows the (c) weight alignment and (d) gradient alignment for various initial weights as a function of updates. All results are averaged over 3 independent repetitions and used standard parameters (see Methods).
Figure 4: (a) We trained a network using BP. For the input and hidden layer we perturbed the gradient so that it had an angle relative to the actual gradient. The graph shows the accuracy after one epoch relative to BP. A value of 1 indicates that the perturbed network performs equally well as BP. The red line indicates the performance of a network where only the last layer is trained. The accuracy after 5 updates against the mean gradient alignment for the (b) input and (c) hidden layer. The average is taken over 500 repetitions. The error bars show the standard deviations. For comparison, the perturbed BP is also shown. Clearly, FA does better than the perturbed BP in those examples.
Figure 3: (a)-(c) The infinity norm of the gradient at the first update step as a function of the initial value scale. (d)-(f) Same, but the norm of the gradient after 3 epochs. Note the different scale on the vertical axis, showing how the norm of the gradients has reduced over time. Each point corresponds to a single simulation.
This suggests a new explanation for the improved performance of networks initialised with low weights: FA with initially vanishing weights may perform better because the initial speed of exploration is faster, and hence accuracy can increase over fewer update steps. This means that gradient alignment is not necessarily the only reason why FA performs. At least, we have shown one example which suggests a different explanation. We hasten to add that the two explanations are not mutually exclusive.
This raises the question of whether gradient alignment drives accuracy, or whether gradient alignment is merely a by-product of the FA dynamics. This is best explored during the earliest stages of learning, before a substantial alignment has formed. In order to investigate this, we first need to understand the relationship between gradient alignment and loss. To this end, we generated a baseline curve as follows: We used the BP algorithm to train the network for 1 epoch on the MNIST dataset. However, each time the gradient was computed in the hidden and input layer, we randomly perturbed it, such that the actual gradient used for updating the weights was different from the gradient determined by BP. We could then, for each experiment, determine the alignment between the true gradient and the perturbed gradient. We did this systematically in fig. 4a, which shows the accuracy after 1 epoch as a function of the average alignment. By design, this set of experiments isolates the effect of the gradient alignment, while all other details of the algorithm are left the same. The figure also shows, in red, the baseline of a multi-layer network where only the last layer is trained, whereas all other layers remain at the initial weights.
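The text does not spell out how the perturbed gradient was generated; before turning to the observations, here is one simple construction that yields a prescribed alignment with the true gradient (the function name and interface are our own, not the recipe actually used for fig. 4a).

```python
import numpy as np

def perturb_to_alignment(grad, target_cos, rng):
    """Return a vector with the same norm as `grad` whose cosine with `grad` equals `target_cos`.
    One possible construction; the exact recipe used in the experiments is not given in the text."""
    g = grad.ravel()
    g_hat = g / np.linalg.norm(g)
    r = rng.normal(size=g.shape)
    r_perp = r - (r @ g_hat) * g_hat            # random direction orthogonal to the gradient
    r_perp /= np.linalg.norm(r_perp)
    sin = np.sqrt(max(0.0, 1.0 - target_cos**2))
    out = (target_cos * g_hat + sin * r_perp) * np.linalg.norm(g)
    return out.reshape(grad.shape)
```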
A number of observations can be made. Firstly, the accuracy of the network intersects the red line for an alignment of 0. In this case, the perturbed gradients are orthogonal to the actual gradients one would obtain from BP which means that weight updates do not have an overall drift either into the direction of better accuracy or away from it. Consistently, the accuracy of the network then corresponds to simulations where only the output layer is trained (indicated by the red line in fig. 4a). Further increasing the alignment, only improves the performance a little bit, reflecting the well known fact that training only the output layer leads to high accuracies in many cases. On the other hand, for negative alignments, the performance quickly drops to random guessing, which corresponds to an accuracy of 0.1. (Note that the effect of updating against the gradient is a randomisation of the performance, rather than guiding the network to an accuracy below 0.1, which would require updating the weights of the output layer against the gradient as well.) A further observation to be made from this is that the gradient descent process operating at the output layer cannot settle on a minimum when the weights in the lower layers have a systematic drift in one direction.
Fig. 4a shows how alignment (or rather misalignment) with the BP gradient impacts learning performance. We can now use this to see whether the FA performance is driven solely by the alignment of the pseudo-gradient or whether there is more to it. If FA performs exactly as well as the perturbed BP with the same alignment, then we know that gradient alignment is driving performance. If, on the other hand, it performs better, then there is some additional driver.
Fig. 4b gives some relevant insight. It shows the performance as a function of the average alignment for FA, gradient-perturbed BP, and a network where only the top layer is trained. Figs. 4b and 4c compare the performance of the three algorithms after 5 updates against the alignment of the input and hidden layer. This shows that, for similar alignment, FA does much better than the other two algorithms. This suggests two things: \((i)\) Gradient alignment is not the only driver of performance of FA, at least not during early stages. \((ii)\) The performance of FA during early stages is not entirely driven by the gradient descent in the output layer; the updates of the input and hidden layer also add to performance.
#### 2.5.2 Alignment can reduce performance
So far, we have established that alignment between the gradient and pseudo-gradient is not driving performance during early stages of learning. We will now show that alignment is not sufficient for performance, even during later stages, and indeed can be outright detrimental.
Fig. 5 shows, as an example, a set of three different simulations of FA with identical hyper-parameters. The only difference between them is the initialisation of the weights. The blue line is just a standard simulation, with weights being initially drawn from a normal distribution before being scaled by 0.05. The green simulation is the same, but the signs of the initial weights were set equal to the entries of the corresponding feedback matrices. As a consequence, there is a large initial alignment between the weights and the feedback matrices. Finally, the red points show the results for a simulation where the initial weights
were set to be identical to the feedback matrices before being scaled by \(0.05\). For all three simulations, we drew the elements of the feedback matrices from a normal distribution, rather than from the set \(\{-1,1\}\). We did this so that the initial weights in the blue simulation are statistically indistinguishable from the other two simulations, while the initial alignment between the weights is different in the three cases. By construction, the blue simulation is unaligned initially, the red simulation is perfectly aligned and the green simulation is somewhere in between those two cases.
We find from the simulations presented in fig. 5, as expected, that good weight alignment translates to high gradient alignment. The key message of the graph is that the initially unaligned simulation performs best, amongst the three runs. While we are not claiming that high gradient alignment is always detrimental, from this single example we can still draw two conclusions: (_i_) Weight alignment is not a necessary condition for performance of the FA algorithm. (_ii_) High gradient alignment is not sufficient for performance of the FA algorithm.
While the performance of the red simulation is still good in the sense that it is apparently learning something, it is possible to construct examples of FA networks that have almost perfect alignment throughout, but learn nothing (data not shown). The simplest way to do this is to use feedback matrices that are initialised by randomly drawing elements from the set \(\{-1,1\}\), and set the initial weights equal to the feedback matrices. These initial conditions are not conducive to algorithm performance, and the network does not train well. Altogether, we find, that gradient alignment is not always sufficient to explain the performance of FA algorithms. We could show at least one example, where other mechanisms are required.
## 3 Discussion
At a first glance, FA and DFA should not work. By replacing a key term in the update equation, the gradient descent of BP is effectively transformed into a random walk in weight-space. The key observation that had been made early on was that this random walk aligns with the "true" BP gradient. In the literature this alignment is commonly assumed to be the driver for the performance of the FA algorithm. It remains unclear, however, why FA aligns. Existing mathematical models only cover special simplified cases of neural networks. They suggest that the update dynamics of FA leads to weight alignment, which then implies gradient alignment. In that sense, FA approximates the true gradient. However, weight alignment is typically much weaker than gradient alignment. This suggests that some other explanations are required.
Our theoretical results suggest a somewhat different view: FA is not approximating BP at all, and indeed does not descend or ascend the gradient of a loss function or an approximation thereof. It is not "learning" in the sense one normally understands this term. Instead, it performs a random walk in weight space. It works based on the following conjectured mechanism: At the output layer, the gradient descent process drives the network as a whole towards a fixed point, corresponding to a vanishing gradient of the loss function. Many
Figure 5: Feedback alignment, with initial weights chosen at random (blue), randomly but with the sign of each element equal to that of the corresponding feedback-matrix entry (green), and exactly the same as the feedback matrix (red). We show (a) the angle between the true gradient and the feedback gradient of the hidden layer, (b) the alignment between the weights of the hidden layer and the feedback matrix, (c) the accuracy over time. There is no trend between higher alignment and better accuracy, or better increase of accuracy. Each point represents a value taken after an update step. Altogether, the simulations here represent 10 epochs.
of those fixed points are unstable for the random walker and the updates in the hidden and input layer will drive the network away from the fixed point, until a fixed point is found for which the stability criterion is valid. Then, the network will converge towards this fixed point.
There is no guarantee that FA finds extrema that are compatible with the alignment criterion. Failure to do so could be either because there are no compatible extrema, or because FA is initialised in a part of parameter space from which it cannot find a route to compatible fixed points. The latter scenario can arise when extrema of the loss function are sparse in parameter space or when the weights are initialised in a region with small update steps, for example for large initial weights. The often cited failure of FA for convolutional neural networks is likely a consequence of the sparseness of loss extrema for those networks.
Throughout this article we concentrated on FA, but clearly the conclusions can be transferred directly to DFA. The only difference between FA and DFA is that in the latter each layer performs an independent walk, whereas in FA the training process is a single random walker in a higher dimensional space. The basic mechanism of how DFA finds extrema remains the same as in the case of FA.
Some open questions remain. The first one relates to the distribution of local extrema of the loss function in parameter space. In particular, it may be that for certain types of problems there are conditions that guarantee that FA and DFA find local extrema or alternatively, that they do not find such extrema (as seems to be the case for convolutional neural networks). There is also a lack of theoretical understanding of spatially inhomogeneous random walks. In order to come to a complete theoretical description of FA, we need a sound justification of how and under which conditions random walks approach fixed points.
## 4 Methods
Unless otherwise stated, we used a feed forward multi-layer perceptron with an input/hidden layer of \(700/1000\) neurons. The size of the network was chosen to be large, while still being fast to execute. Our results are not sensitive to variations of the size of the network, although classification performance clearly is. Throughout, we used \(\tanh\) as the activation function; we also experimented with relu which led to qualitatively the same results (data not shown). For the final layer we used the softmax function and as a loss function we used the cross-entropy. The batch size was chosen to be \(100\) and the learning rate was set to \(0.05\). Again, our results are not sensitive to those parameters.
Unless stated otherwise, feedback matrices were chosen randomly by drawing matrix elements from the set \(\{-1,1\}\). We found FA not to be overly sensitive to the particular choice of this set. However, we found algorithm performance to depend on it to some extent. The particular choice we made gives good performance, but we made no attempt to choose the optimal one.
For all layers, initial weights were drawn from a normal distribution and then scaled with a weight scaling factor, resulting in both negative and positive weights. If the factor is zero, then all weights were initially zero. The larger the factor, the larger the (absolute value of) initial weights.
All simulations were done using Julia (1.8.5), with Flux for gradient computations and network construction, and CUDA for simulations on GPUs. No optimisers were used. Weight updates were made by adding the gradient, scaled by the learning rate, to the weights. The networks were trained on the MNIST dataset as included with the Flux library. Whenever an accuracy is reported, it was computed on the test set of the MNIST dataset.
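For reference, the setup described above translates into a short feedback-alignment training step. The sketch below is a NumPy re-implementation under the stated settings (784-700-1000-10 architecture, \(\tanh\) activations, softmax with cross-entropy, feedback matrices drawn from \(\{-1,1\}\), learning rate 0.05, batch size 100, plain gradient updates). It is an illustrative reconstruction, not the authors' Julia/Flux code, and `load_mnist_batches` is a placeholder for whatever data loader is at hand.

```python
import numpy as np

rng = np.random.default_rng(0)
lr, scale = 0.05, 0.05                         # learning rate and initial-weight scale

def init(shape):                               # initial weights: normal, scaled by `scale`
    return scale * rng.normal(size=shape)

W1, W2, W3 = init((700, 784)), init((1000, 700)), init((10, 1000))
R2 = rng.choice([-1.0, 1.0], size=W2.shape)    # fixed feedback matrices, elements from {-1, 1}
R3 = rng.choice([-1.0, 1.0], size=W3.shape)

def softmax(z):
    z = z - z.max(axis=0, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=0, keepdims=True)

def fa_gradients(x, y):                        # x: (784, batch), y: one-hot (10, batch)
    h1 = W1 @ x;  f1 = np.tanh(h1)
    h2 = W2 @ f1; f2 = np.tanh(h2)
    p = softmax(W3 @ f2)
    e3 = p - y                                 # exact output-layer error (softmax + cross-entropy)
    e2 = (R3.T @ e3) * (1 - f2**2)             # FA: fixed random feedback instead of W3.T
    e1 = (R2.T @ e2) * (1 - f1**2)
    n = x.shape[1]
    return e3 @ f2.T / n, e2 @ f1.T / n, e1 @ x.T / n

# Plain updates scaled by the learning rate, no optimiser:
# for x, y in load_mnist_batches(batch_size=100):    # placeholder data loader
#     g3, g2, g1 = fa_gradients(x, y)
#     W3 -= lr * g3; W2 -= lr * g2; W1 -= lr * g1
```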
|
2302.08499
|
Non-Fermi liquids from kinetic constraints in tilted optical lattices
|
We study Fermi-Hubbard models with kinetically constrained dynamics that
conserves both total particle number and total center of mass, a situation that
arises when interacting fermions are placed in strongly tilted optical
lattices. Through a combination of analytics and numerics, we show how the
kinetic constraints stabilize an exotic non-Fermi liquid phase described by
fermions coupled to a gapless bosonic field, which in many respects mimics a
dynamical gauge field. This offers a novel route towards the study of non-Fermi
liquid phases in the precision environments afforded by ultracold atom
platforms.
|
Ethan Lake, T. Senthil
|
2023-02-16T18:54:43Z
|
http://arxiv.org/abs/2302.08499v2
|
# Non-Fermi liquids from kinetic constraints in tilted optical lattices
###### Abstract
We study Fermi-Hubbard models with kinetically constrained dynamics that conserves both total particle number and total center of mass, a situation that arises when interacting fermions are placed in strongly tilted optical lattices. Through a combination of analytics and numerics, we show how the kinetic constraints stabilize an exotic non-Fermi liquid phase described by fermions coupled to a gapless bosonic field, which in several respects mimics a dynamical gauge field. This offers a novel route towards the study of non-Fermi liquid phases in the precision environments afforded by ultracold atom platforms.
Introduction:A major ongoing program in quantum many body physics is the characterization of phases of matter in which the quasiparticle paradigm breaks down. The most striking examples where this occurs are non-Fermi liquids (NFLs), believed to describe the observed strange metal behavior in a number of quantum materials. The low energy excitations in NFLs typically admit no quasiparticle-like description, with their ground states instead being more aptly thought of as a strongly-interacting quantum soup. Our understanding of such states of matter, as well as the conditions in which they may be expected to occur, is very much in its infancy. To this end, it is extremely valuable to have examples of simple microscopic models in which NFLs can be shown to arise, especially so when these models are amenable to experimental realization.
In this work we propose just such a model, by demonstrating the emergence of a NFL in a kinetically constrained 2d Fermi-Hubbard model. This model is interesting in its own right, but our interest derives mainly from the fact that it finds a natural realization in strongly tilted optical lattices, a setup which has received recent experimental attention as a platform for studying ergodicity breaking and anomalous diffusion [1; 2; 3]. The key physics afforded by the strong tilt is that it provides a way of obtaining dynamics that conserves both total particle number _and_ total dipole moment (for us 'dipole moment' is synonymous with 'center of mass'), with the latter conserved over a prethermal timescale which as we will see can be made extremely long.
In different settings, the kinetic constraints provided by dipole conservation are well-known to arrest thermalization and produce a variety of interesting dynamical phenomena [4; 5; 6; 7; 8]. More recently, it has been realized that dipole conservation also has profound consequences for the nature of quantum ground states [9; 10; 11; 12; 13; 14; 15] and the patterns of symmetry breaking that occur therein [16; 17].
In this work, we show that when these constraints arise in the context of Fermi-Hubbard models, they produce an exotic NFL state in an experimentally-accessible region of parameter space. The low energy theory of this NFL is closely analogous to a famous model in condensed matter physics, namely that of a Fermi surface coupled to a dynamical \(U(1)\) gauge field [18; 19; 20; 21; 22]. We leverage this analogy to derive a number of striking features of the NFL state, chief among these being the absence of quasiparticles despite the presence of a sharp Fermi surface, and a vanishing conductivity despite the presence of a nonzero compressibility.
Fermions in strongly tilted optical lattices:We begin by considering a model of spinless fermions on a tilted 2d square optical lattice, interacting through repulsive nearest-neighbor interactions (spinful fermions are similar and will be briefly discussed later). Writing the fermion annihilation operators as \(f_{\bf r}\) and letting \(n_{\bf r}\equiv f_{\bf r}^{\dagger}f_{\bf r}\), we consider the microscopic Hamiltonian \(H=H_{FH}+H_{\Delta}\), with the lattice tilt captured by \(H_{\Delta}=\sum_{{\bf r},{\bf a}}\Delta_{\bf r}r^{a}n_{\bf r}\), and the Fermi-Hubbard part given by
\[H_{FH}=\sum_{{\bf r},{\bf a}}[-t_{a}(f_{\bf r}^{\dagger}f_{{\bf r }+{\bf a}}+h.c.)+V_{0,a}n_{\bf r}n_{{\bf r}+{\bf a}}], \tag{1}\]
where \({\bf a}=\hat{x},\hat{y}\) label the unit vectors of the square lattice. The bare nearest-neighbor repulsion \(V_{0,a}\) can be engineered by employing atoms with strong dipolar interactions [23; 24] or by using Rydberg dressing [25; 26; 27]. To simplify the notation we will let both \(t_{a}/\Delta_{a}\) and \(V_{0}\equiv V_{0,a}\) be independent of \(a\), with \(\Delta_{x}/\Delta_{y}=t_{x}/t_{y}\) unconstrained.
We will be interested in the large-tilt regime \(t_{a}/\Delta_{a},V_{0}/\Delta_{a}\ll 1\), with \(t_{a}/V_{0}\) arbitrary. Here it is helpful to pass to a rotating frame which eliminates \(H_{\Delta}\) via the time-dependent gauge transformation \(e^{itH_{\Delta}}\). In this frame, the Hamiltonian is
\[H_{rot}(t)=\sum_{{\bf r},{\bf a}}[-t_{a}(e^{-i\Delta_{a}t}f_{\bf r}^{\dagger }f_{{\bf r}+{\bf a}}+h.c.)+V_{0}n_{\bf r}n_{{\bf r}+{\bf a}}]. \tag{2}\]
We then perform a standard high-frequency expansion [28; 29; 30; 31; 32; 2] to perturbatively remove the quickly oscillating phases in the first term. The time-independent part of the resulting expansion conserves the total dipole moments \(\sum_{r}r^{a}n_{\bf r}\) because dipoles--being charge neutral objects--can hop freely without picking up any \(e^{-i\Delta_{a}t}\) phases. As shown in App. A, the result of this expansion is the static Hamiltonian
\[H_{DFH}=-\sum_{\bf r,a}t_{d}[d^{a\dagger}_{\bf r}(d^{a}_{{\bf r}+2{ \bf a}}+d^{a}_{{\bf r}+{\bf a}+{\bf\bar{a}}}+d^{a}_{{\bf r}+{\bf a}-{\bf\bar{a}}} +h.c.)] \tag{3}\] \[+\sum_{\bf r,a}n_{\bf r}(Vn_{{\bf r}+{\bf a}}+V^{\prime}n_{{\bf r}+ 2{\bf a}}+V^{\prime\prime}(n_{{\bf r}+{\bf a}+{\bf\bar{a}}}+n_{{\bf r}+{\bf a}-{ \bf\bar{a}}}))\]
where we have defined the dipole operators \(d^{a}_{\bf r}\equiv f^{\dagger}_{\bf r}f_{{\bf r}+{\bf a}}\) and let \({\bf\bar{a}}\) denote the lattice direction perpendicular to \({\bf a}\). The coupling constants in \(H_{DFH}\) are given by
\[t_{d}=V_{0}{\bf t}^{2},\,V=V_{0}(1-6{\bf t}^{2}),\,V^{\prime}=2t_{d},\,V^{ \prime\prime}=4t_{d}, \tag{4}\]
with the dimensionless hopping strength \({\bf t}\equiv t_{a}/\Delta_{a}\). We will refer to the model (3) as the _dipolar Fermi-Hubbard model_ (DFHM).
As a time-independent theory, the DFHM only captures the system's dynamics over a (long) prethermal timescale. For (yet longer) times the fermions can exchange energy between \(H_{FH}\) and \(H_{\Delta}\), and a system initially prepared in the ground state of \(H_{DFH}\) will begin to heat up. We will see later that this is actually not an issue, as the relevant time scale can (in principle) be made arbitrarily long. Before explaining this however, we first turn our attention to understanding the low-energy physics of \(H_{DFH}\).
Theory of the dipolar Fermi-Hubbard model:In the DFHM, dipole conservation fixes the center of mass of the fermions, which cannot change under time evolution. This precludes a net flow of particles in any many-body ground state, implying a particle number conductivity which is strictly zero at all frequencies and guaranteeing that \(H_{DFH}\) always describes an insulating state [9]. In clean systems, a vanishing conductivity almost always comes hand-in-hand with a vanishing compressibility \(dn/d\mu=0\). We will however see that for a wide range of \({\bf t}\) the natural ground state of the DFHM is in fact _compressible_. In this regime the system has a sharp Fermi surface but lacks well-defined Landau quasiparticles, and is therefore an example of a NFL.
To understand the claims in the previous paragraph, we start by considering the limit of small \({\bf t}\). Here the repulsive interactions dominate, and various crystalline states may form in a manner dependent on the fermion density. As \({\bf t}\) is increased, the system can lower its energy by letting dipolar bound states delocalize across the system, by virtue of the dipole hoppings on the first line of (3). For large enough \({\bf t}\) the dipoles will lower their kinetic energy by condensing, producing a phase where \(D^{a}\equiv\langle d^{a}\rangle\neq 0\) and spontaneously breaking the dipole symmetry. When applied to the Hamiltonian (3) at half-filling, a mean-field treatment (see App. B) predicts a condensation transition at \({\bf t}=1/4\), a value small enough that the perturbative analysis leading to \(H_{DFH}\) should remain qualitatively correct.
For simplicity we consider an isotropic dipole condensate, with \(|D^{x}|=|D^{y}|\equiv D\). As also happens in the bosonic version of this model [9; 10; 11], the dipole condensate liberates the motion of single fermions, as they can move by absorbing dipoles from the condensate. Writing \(D^{a}\simeq De^{i\phi_{a}}\) and taking \(D\) to be constant (as amplitude fluctuations are gapped in the condensate), the first line of (3) becomes the _single_-fermion hopping term
\[H_{hop}=-t_{d}D\sum_{{\bf r},{\bf a}}(f^{\dagger}_{\bf r}e^{i\phi_{a}({\bf r} )}f_{{\bf r}+{\bf a}}+h.c.). \tag{5}\]
If we freeze out the dynamics of \(\phi_{a}\), \(H_{hop}\) will lead the fermions to form a Fermi surface, with an area set by their density as per Luttinger's theorem. The important question is then to ask what happens when one accounts for the dynamics of the \(\phi_{a}\) fields. As soon as we introduce these dynamics, the system loses its ability to respond to uniform electric fields, and is rendered insulating. Indeed, turning on a background vector potential \(A_{a}\) in \(H_{hop}\) simply amounts to replacing \(\phi_{a}\) by \(\phi_{a}+A_{a}\). We can then completely eliminate the coupling of the fermions to \(A_{a}\) through a shift of \(\phi_{a}\) (see also [33]). Since \(\phi_{a}\) is the Goldstone mode for the broken dipole symmetry, all other terms in the effective Hamiltonian can only involve gradients of \(\phi_{a}\), and thus after the shift, the Hamiltonian can only depend on gradients of \(A_{a}\). This then leads to a particle conductivity \(\sigma(\omega,{\bf q})\) that vanishes for all \(\omega\) as \({\bf q}\to 0\).
To deepen our understanding of this phase, we
Figure 1: _Tilted lattice_: We consider an extended Hubbard model in a tilted optical lattice, with single particle hoppings \(t_{x,y}\), nearest neighbor repulsions \(V_{x,y}\), and tilts along both directions with strengths \(\Delta_{x,y}\). _DFHM_: In the large tilt limit the system is described by a dipole-conserving Fermi-Hubbard model, whose dynamics is such that only dipolar bound states—rather than individual fermions—are allowed to move. _NFL:_ As the dipole hopping strength \(t_{d}\) increases the dipoles condense, with \(f^{\dagger}_{\bf r}f_{{\bf r}+{\bf a}}\sim e^{i\phi_{a}({\bf r})}\) developing an expectation value, and \(\phi_{a}\) effectively playing the role of the spatial components of a dynamical \(U(1)\) gauge field. The condensate liberates the motion of individual fermions, which form a Fermi surface. As described in the main text, in this regime fluctuations in \(\phi_{a}\) turn the system into a non-Fermi liquid.
pass to a field theory description by writing \(f_{\mathbf{r}}\simeq\int d\theta\,e^{i\mathbf{K}_{F}(\theta)\cdot\mathbf{r}}\psi_{ \theta}(\mathbf{r})\), where \(\mathbf{K}_{F}(\theta)\) is the Fermi momentum at an angle \(\theta\) on the Fermi surface. Standard arguments then lead to the imaginary-time Lagrangian
\[\begin{split}\mathcal{L}_{DFH}&=\int d\theta\,\psi_{ \theta}^{\dagger}\Big{(}\partial_{\tau}-i\mathbf{v}_{\theta}\cdot\nabla+\frac{ \varkappa}{2}\nabla_{\parallel}^{2}\,+\sum_{a}g_{a}(\theta)\phi_{a}\Big{)} \psi_{\theta}\\ &\qquad\qquad+\sum_{a}\kappa_{D}(\partial_{\tau}\phi_{a})^{2}+ \sum_{a,b}K_{a,b}(\nabla_{a}\phi_{b})^{2}.\end{split} \tag{6}\]
In writing the above we have approximated the dispersion of \(\psi_{\theta}\) to include only the leading terms in the momentum deviation from \(\mathbf{K}_{F}(\theta)\), written the Fermi velocity as \(\mathbf{v}_{\theta}\), let \(\varkappa\) denote the Fermi surface curvature, and taken \(\nabla_{\parallel}\) as the derivative along the Fermi surface.
In the important 'Yukawa' term \(g_{a}(\theta)\phi_{a}\psi_{\theta}^{\dagger}\psi_{\theta}\), the coupling function \(g_{a}(\theta)\) is strongly constrained by dipole symmetry, which sends \(\psi_{\theta}\to e^{i\alpha\cdot\mathbf{r}}\psi_{\theta}\), \(\phi_{a}\to\phi_{a}+\alpha_{a}\) for any constant vector \(\alpha_{a}\). The requirement that (6) be invariant then gives the constraint
\[g_{a}(\theta)=-v_{a}(\theta). \tag{7}\]
This implies that \(\phi_{a}\) couples to the fermions in _exactly_ the same way as the spatial part of a \(U(1)\) gauge field. This draws a connection between the DFHM and a Fermi surface coupled to a dynamical \(U(1)\) gauge field, a system with a long history in condensed matter physics. In both models the modes that couple to the fermions are vector fields that are guaranteed to be gapless--by gauge invariance in the gauge field case, and by their origin as Goldstone modes in the DFHM. Crucially, the coupling between \(\phi_{a}\) and the fermions is not "soft", remaining nonzero even at zero momentum (soft couplings are irrelevant under RG, and fail to induce NFL behavior). In line with the general framework of Ref. [34], this is made possible by the fact that the dipole charge and the total momentum \(P^{b}\) satisfy \(\langle[\sum_{\mathbf{r}}r^{a}n_{\mathbf{r}},P^{b}]\rangle=i\delta^{a,b}\sum_ {\mathbf{r}}\langle n_{\mathbf{r}}\rangle\neq 0\), the non-vanishing of which is necessary to avoid obtaining a soft coupling.
An important difference compared to the Fermi surface + gauge field problem is that in the DFHM, there is no analog of a time component of the gauge field. For fermions coupled to a dynamical gauge field \(a_{\mu}\), the coupling to \(a_{0}\) renders the theory incompressible: an external probe potential \(A_{0}\) (the susceptibility to which the compressibility corresponds) evokes no response, as \(A_{0}\) can be absorbed into \(a_{0}\). Since there is no analogue of \(a_{0}\) in the DFHM this argument does not apply, and indeed it is well-known that a Fermi surface coupled to a gapless boson is generically compressible [35; 36; 37; 21; 38]. Remarkably, we thus manage to obtain a system with both vanishing conductivity _and_ nonzero compressibility (as also occurs in the "Bose-Einstein insulator" phase of the dipole-conserving Bose-Hubbard model [9]).
To demonstrate the NFL nature of \(\mathcal{L}_{DFH}\), we note that it is essentially the same as the action that arises at the 'Hertz-Millis' theory [39; 40] of the quantum critical point associated with the onset of loop current order in a metal [33] (but with the crucial restriction (7) coming from dipole conservation). Like in that case, fluctuations of \(\phi_{a}\) turn the system into an NFL. Indeed, standard calculations show that at the Fermi surface, the fermion self energy has the form \(\Sigma_{f}(\mathbf{K},i\omega)=i\,\text{sgn}(\omega)|\omega|^{\delta}\) with the exponent \(\delta<1\) (a variety of theoretical approximations all converge on \(\delta=2/3\)[35; 36; 41; 21; 42], which is also the exponent of the low-temperature specific heat, \(C\sim T^{\delta}\)). This shows that there are no sharply-defined quasiparticles in this model, despite the existence of a sharply-defined Fermi surface. We also note that this model has no weak-coupling pairing instability, due to the strong repulsive interaction between fermions on antipodal patches mediated by \(\phi_{a}\)[43].
_Numerics:_ We now provide a first step towards testing the above theoretical predictions by performing DMRG on small cylinders with the DFHM Hamiltonian (3). We focus on the case of half-filling so as to compare with predictions from mean field, which predicts a dipole condensation transition at \(\mathbf{t}=1/4\). Fig. 2 shows the DMRG results for a cylinder of modest size \((L_{x},L_{y})=(20,6)\). For a range of \(\mathbf{t}\) we compute the expectation values of the dipole operators and the inverse participation ratio \(\mathcal{I}_{n}\equiv L_{x}L_{y}(\sum_{i}\langle n_{i}\rangle^{2})/(\sum_{i}\langle n_{i} \rangle)^{2}\) (Fig. 2 left; \(\langle d^{x}\rangle\) is similar to \(\langle d^{y}\rangle\) but smaller in magnitude, presumably due to finite-size effects). We find \(\mathcal{I}_{n}\approx 2\), \(\langle d^{y}\rangle=0\) for \(\mathbf{t}\leq\mathbf{t}_{*}\) (as expected from a charge-ordered state) and \(\mathcal{I}_{n}\approx 1\), \(\langle d^{y}\rangle>0\) for \(\mathbf{t}>\mathbf{t}_{*}\) (as expected from a dipole condensate), where the critical value \(\mathbf{t}_{*}\approx 0.275\) is respectably close to the mean-field estimate.
Figure 2: DMRG results for the dipolar Fermi-Hubbard Hamiltonian \(H_{DFH}\) at half-filling on a cylinder of size \((L_{x},L_{y})=(20,6)\) and at bond dimension \(\chi=400\). a) the inverse participation ratio \(\mathcal{I}_{n}\) of the density and the expectation value \(\langle d^{y}_{\mathbf{r}}\rangle=\langle f_{\mathbf{r}}^{\dagger}f_{\mathbf{r} +\mathbf{y}}\rangle\) averaged over lattice sites. As judged by \(\mathcal{I}_{n}\) charge order occurs at small \(\mathbf{t}\) but melts at \(\mathbf{t}_{*}\approx 0.275\); the plot of \(\langle d^{y}\rangle\) shows this is also where dipole condensation occurs. b) The energy cost to add or remove a fermion, \(\mu_{\pm}\equiv\pm E(N\pm 1)\mp E(N)\), with \(E(N)\) the ground state energy in the sector with total charge \(N\). The charge gap \(\mu_{+}-\mu_{-}\) closes at the same location where dipole condensation occurs, suggesting the onset of the NFL.
To investigate the state at \(\mathbf{t}>\mathbf{t}_{*}\) we compute the chemical potentials \(\mu_{\pm}\equiv\pm E(N\pm 1)\mp E(N)\), where \(E(N)\) is the ground state energy in the symmetry sector with total charge \(N\). The gap to charged excitations is given by \(\mu_{+}-\mu_{-}\), which is seen to approximately close at \(\mathbf{t}_{*}\) (Fig. 2, right). This suggests that at \(\mathbf{t}_{*}\) the system undergoes a (presumably first-order) transition into the NFL described by (6). While this is all in accordance with our theoretical analysis, these numerics do not answer questions about the doping dependence of \(\mathbf{t}_{*}\), or reveal the nature of the correlations present in the NFL phase. A proper treatment of these questions is left to future work.
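As a small illustration of the diagnostics used above, the following sketch evaluates the inverse participation ratio and the site-averaged dipole expectation value from arrays of local expectation values; the arrays themselves would come from the DMRG ground state, and the function name is ours.

```python
import numpy as np

def dfhm_diagnostics(n_exp, dy_exp):
    """n_exp: <n_i> on an (Lx, Ly) grid; dy_exp: <f_r^dag f_{r+y}> on the same grid."""
    Lx, Ly = n_exp.shape
    ipr = Lx * Ly * np.sum(n_exp**2) / np.sum(n_exp)**2   # ~2 for the charge-ordered state, ~1 when uniform
    d_y = np.mean(dy_exp)                                 # averaged over lattice sites, as in Fig. 2a
    return ipr, d_y
```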
_Experimental considerations:_ The most obvious experimental signature of the NFL state is the simultaneous presence of both a nonzero compressibility and a vanishing particle conductivity, both of which can be directly measured from density snapshots taken in quantum gas microscopes [44; 45]. The dipole condensate can also be directly detected by measuring [46; 47] correlation functions of the dipole operators \(f_{\mathbf{r}}^{\dagger}f_{\mathbf{r}+\mathbf{a}}\); these correlation functions are long-ranged in the NFL but short-ranged in the charge-ordered state. The Fermi surface itself can be detected in principle by looking for Friedel oscillations in the density-density correlation function [48], which are even stronger [36] than in a conventional Fermi liquid.
We now discuss issues relating to the experimental preparation of the NFL state. In one possible protocol, the system is prepared in a uniform density product state at zero single particle hopping \(t_{a}=0\) and zero tilt \(\Delta_{a}=0\). \(\Delta_{a}\) is then diabatically switched on to a value much larger than the Hubbard interaction and the dimensionless hopping strength \(\mathbf{t}\equiv t_{a}/\Delta_{a}\) is slowly increased, with the goal of reaching the NFL regime while keeping the dipole-conserving system at an effective temperature \(T\lesssim T_{F}\), with \(T_{F}\sim t_{d}=V_{0}\mathbf{t}^{2}\) the Fermi temperature.
At this point in the discussion, the prethermal nature of \(H_{DFH}\) becomes important. In going from (2) to (3) we only kept the time-independent part of the effective Hamiltonian; a more complete analysis shows that in fact \(H=H_{DFH}+\mathcal{V}(t)\), with the most important part of \(\mathcal{V}(t)\) being \(\mathcal{V}(t)=V_{0}\mathbf{t}^{2}\sum_{\mathbf{r};s=\pm 1}e^{i(\Delta_{x}+s \Delta_{y})t}\mathcal{O}_{\mathbf{r}}^{s}+h.c.\), where \(\mathcal{O}_{\mathbf{r}}^{s}\) is a rather complicated four-fermion interaction with a net dipole moment of 1 (\(s\)) in the \(x\) (\(y\)) direction (see App. A for the full expression). \(\mathcal{V}(t)\) causes a system initially prepared in the ground state of \(H_{DFH}\) to heat up. Furthermore, if \(|\Delta_{x}|=|\Delta_{y}|\), \(\mathcal{V}(t)\) contains time-_independent_ terms which break one linear combination of the two components of the dipole moment symmetry (if \(|\Delta_{x}|/|\Delta_{y}|=p/q\) is rational, such terms will arise at \(q\)th order in perturbation theory). Breaking the symmetry in this way will generically yield a nonzero conductivity along one spatial direction and produce a crossover to an anisotropic phase that preempts the NFL at large scales. Fortunately, we now argue that these problems are not as severe as they might appear.
The issue of \(\mathcal{V}(t)\) containing time-independent dipole-violating terms can be circumvented simply by taking \(|\Delta_{x}|/|\Delta_{y}|\) to be irrational [5]. However even when \(|\Delta_{x}|=|\Delta_{y}|\), the time-independent part of \(\mathcal{V}(t)\) is highly irrelevant and only produces a violation of (7) through 2-loop diagrams that are suppressed by further powers of \(\mathbf{t}\). In practice, these symmetry-breaking terms may thus only lead to a crossover out of the NFL at length scales larger than experimentally relevant system sizes.
To assess the effects of heating, we estimate the heating rate \(r\) of a state initially prepared in the ground state of \(H_{DFH}\) and then time-evolved with \(H_{DFH}+\mathcal{V}(t)\). \(r\) can be bounded using the theory of Floquet prethermalization [49; 50; 51; 52] as
\[r<C^{\prime}V_{0}^{2}e^{-C|\Delta_{x}|/J}, \tag{8}\]
where \(C,C^{\prime}\) are dimensionless constants depending on \(|\Delta_{x}|/|\Delta_{y}|\), and where \(J\) is an energy scale determined by the maximum amount of energy locally absorbable by \(H_{DFH}\) (if \(|\Delta_{x}|/|\Delta_{y}|\) is irrational, \(|\Delta_{x}|/J\) in the exponent is replaced by \(\sqrt{|\Delta_{x}|/J}\)[52]).
From the couplings given in (4), we see that _all_ of the terms in \(H_{DFH}\) are proportional to \(V_{0}\), meaning that \(J=C^{\prime\prime}V_{0}\) for another dimensionless constant \(C^{\prime\prime}\). Crucially though, the parameter \(\mathbf{t}\) which tunes between the different phases of \(H_{DFH}\) is _independent_ of \(V_{0}\). This implies that by decreasing \(V_{0}\) and keeping \(\mathbf{t}\) fixed we can make \(|\Delta_{x}|/J\) arbitrarily large--and hence \(r\) arbitrarily small--all while remaining at a _fixed_ point in the phase diagram. This parametric suppression of \(r\) means that the issue of prethermal heating can in principle be sidestepped simply by working at weak bare interactions.
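To see this parametric suppression concretely, one can evaluate the bound (8) with \(J=C''V_{0}\) while holding the tilt and \(\mathbf{t}\) fixed; the numerical constants below are placeholders chosen only to expose the scaling, not values from the text.

```python
import numpy as np

C, Cp, Cpp = 1.0, 1.0, 1.0                  # placeholder dimensionless constants
Delta_x = 10.0                              # fixed tilt, in the same (arbitrary) units as V0

for V0 in [2.0, 1.0, 0.5, 0.25]:            # weaker bare interaction at fixed t = t_a / Delta_a
    J = Cpp * V0
    r_bound = Cp * V0**2 * np.exp(-C * Delta_x / J)
    print(f"V0 = {V0:5.2f}  ->  heating-rate bound ~ {r_bound:.3e}")
```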
_Discussion:_ We have demonstrated the emergence of a rather exotic non-Fermi liquid (NFL) from a simple dipole-conserving Fermi-Hubbard model. This model has a natural realization in strongly tilted optical lattices, and in the NFL regime is described by fermions coupled to an emergent bosonic mode which plays the role of a spatial gauge field. This provides an ultracold-atoms path towards the study of strongly interacting fermions and gauge fields in a manner rather distinct from approaches that build in gauge fields at a more microscopic level [53; 54; 55].
As always with ultracold atoms, the experimental crux is likely to be whether or not one can access the temperatures \(T\lesssim T_{F}\) required to probe the physics of the NFL ground state. With this in mind it is natural to wonder if the kinetic constraints imposed by dipole conservation lead to any interesting _dynamical_ signatures of the NFL regime, which could be more readily identified in experiments unable to perform a sufficiently adiabatic parameter sweep.
While our focus so far has been on systems of spinless fermions, similar physics is also realizable in spinful models. A natural place to look is the tilted Fermi-Hubbard model
\[H=\sum_{\mathbf{r},\mathbf{a},\sigma}[-t_{a}(f_{\mathbf{r},\sigma}^{\dagger}f_{ \mathbf{r}+\mathbf{a},\sigma}+h.c.)+\Delta_{a}r^{a}n_{\mathbf{r},\sigma}]+V_{0 }\sum_{\mathbf{r}}n_{\mathbf{r},\uparrow}n_{\mathbf{r},\downarrow}. \tag{9}\]
In the large \(\Delta_{a}\) limit, this Hamiltonian yields a dipole-conserving model with nearest-neighbor interactions and a Heisenberg exchange proportional to \(V_{0}\). At half filling and with _attractive_ bare interactions, mean-field calculations predict a transition at \(t_{a}/\Delta_{a}\gtrsim 0.26\), where the dipole operators \(f^{\dagger}_{\mathbf{r},\sigma}f_{\mathbf{r}+\mathbf{a},\sigma}\) condense and subsequently produce an NFL. We leave a more thorough investigation of this physics to future work.
_Acknowledgements:_ We thank Monika Aidelsburger, Zhen Bi, Soonwon Choi, and Byungmin Kang for discussions, and Jung-Hoon Han, Hyun-Yong Lee, and Michael Hermele for collaborations on related work. T.S. was supported by US Department of Energy Grant No. DE-SC0008739, and partially through a Simons Investigator Award from the Simons Foundation. This work was also partly supported by the Simons Collaboration on Ultra-Quantum Matter, which is a grant from the Simons Foundation (Grant No. 651446, T.S.). DMRG simulations were performed with the help of the Julia iTensor library [56].
_Note added:_ While preparing this preprint we learned of a related and soon-to-appear work by A. Anakru and Z. Bi.
|
2301.11498
|
CLASSY VI: Density, Structure and Size of Galactic Outflows
|
Galaxy formation and evolution are regulated by the feedback from galactic
winds. Absorption lines provide the most widely available probe of winds.
However, since most data only provide information integrated along the
line-of-sight, they do not directly constrain the radial structure of the
outflows. In this paper, we present a method to directly measure the gas
electron density in outflows (ne), which in turn yields estimates of outflow
cloud properties (e.g., density, volume filling-factor, and sizes/masses). We
also estimate the distance (r) from the starburst at which the observed
densities are found. We focus on 22 local star-forming galaxies primarily from
the COS Legacy Archive Spectroscopic SurveY (CLASSY). In half of them, we
detect absorption lines from fine structure excited transitions of Si II (i.e.,
Si II*). We determine ne from relative column densities of Si II and Si II*,
given Si II* originates from collisional excitation by free electrons. We find
that the derived ne correlates well with the galaxy's star-formation rate per
unit area. From photoionization models or assuming the outflow is in pressure
equilibrium with the wind fluid, we get r ~ 1 to 2 * rstar or ~ 5 * rstar,
respectively, where rstar is the starburst radius. Based on comparisons to
theoretical models of multi-phase outflows, nearly all of the outflows have
cloud sizes large enough for the clouds to survive their interaction with the
hot wind fluid. Most of these measurements are the first-ever for galactic
winds detected in absorption lines and, thus, will provide important
constraints for future models of galactic winds.
|
Xinfeng Xu, Timothy Heckman, Alaina Henry, Danielle A. Berg, John Chisholm, Bethan L. James, Crystal L. Martin, Daniel P. Stark, Matthew Hayes, Karla Z. Arellano-Cordova, Cody Carr, Mason Huberty, Matilde Mingozzi, Claudia Scarlata, Yuma Sugahara
|
2023-01-27T02:16:51Z
|
http://arxiv.org/abs/2301.11498v1
|
# CLASSY VI: The Density, Structure and Size of Absorption-Line Outflows in Starburst Galaxies1
###### Abstract
Galaxy formation and evolution are regulated by the feedback from galactic winds. Absorption lines provide the most widely available probe of winds. However, since most data only provide information integrated along the line-of-sight, they do not directly constrain the radial structure of the outflows. In this paper, we present a method to directly measure the gas electron density in outflows (\(n_{\rm e}\)), which in turn yields estimates of outflow cloud properties (e.g., density, volume filling-factor, and sizes/masses). We also estimate the distance (\(r_{n}\)) from the starburst at which the observed densities are found. We focus on 22 local star-forming galaxies primarily from the COS Legacy Archive Spectroscopic SurveY (CLASSY). In half of them, we detect absorption lines from fine structure excited transitions of Si ii (i.e., Si ii*). We determine \(n_{\rm e}\) from relative column densities of Si ii and Si ii*, given Si ii* originates from collisional excitation by free electrons. We find that the derived \(n_{\rm e}\) correlates well with the galaxy's star-formation rate per unit area. From photoionization models or assuming the outflow is in pressure equilibrium with the wind fluid, we get \(r_{n}\sim 1\) to \(2r_{\rm s}\) or \(\sim 5r_{s}\), respectively, where \(r_{\rm s}\) is the starburst radius. Based on comparisons to theoretical models of multi-phase outflows, nearly all of the outflows have cloud sizes large enough for the clouds to survive their interaction with the hot wind fluid. Most of these measurements are the first-ever for galactic winds detected in absorption lines and, thus, will provide important constraints for future models of galactic winds.
Galactic Winds (572), Galaxy evolution (1052), Galaxy kinematics and dynamics(602), Starburst galaxies (1570), Ultraviolet astronomy (1736), Galaxy spectroscopy (2171) +
Footnote †: journal: AASJournal ApJ
## 1 Introduction
Galactic winds are essential to the evolution of galaxies and the intergalactic medium (IGM). In star-forming galaxies (without accreting black holes), these winds are driven by mass, energy, and momentum supplied by star-formation, in the form of radiation, stellar winds, and supernovae (e.g., Veilleux et al., 2005). The latter two result in the creation of a tenuous and energetic wind fluid that flows out and accelerates existing gas clouds, which are observable as warm to cold outflows (e.g., Xu et al., 2022). Galac
|
2304.03642
|
Polarized images of charged particles in vortical motions around a
magnetized Kerr black hole
|
In this work, we study the images of a Kerr black hole (BH) immersed in
uniform magnetic fields, illuminated by the synchrotron radiation of charged
particles in the jet. We particularly focus on the spontaneously vortical
motions (SVMs) of charged particles in the jet region and investigate the
polarized images of electromagnetic radiations from the trajectories along
SVMs. We notice that there is a critical value $\omega_c$ for charged particle
released at a given initial position and subjected an outward force, and once
$|qB_0/m|=|\omega_B|>|\omega_c|$ charged particles can move along SVMs in the
jet region. We obtain the polarized images of the electromagnetic radiations
from the trajectories along SVMs. Our simplified model suggests that the SVM
radiations can act as the light source to illuminate the BH and form a photon
ring structure.
|
Zhenyu Zhang, Yehui Hou, Zezhou Hu, Minyong Guo, Bin Chen
|
2023-04-07T13:46:36Z
|
http://arxiv.org/abs/2304.03642v2
|
# Polarized images of charged particles in vortical motions around a magnetized Kerr black hole
###### Abstract
In this work, we study the images of a Kerr black hole (BH) immersed in uniform magnetic fields, illuminated by the synchrotron radiation of charged particles in the jet. We particularly focus on the spontaneously vortical motions (SVMs) of charged particles in the jet region and investigate the polarized images of electromagnetic radiations from the trajectories along SVMs. We notice that there is a critical value \(\omega_{c}\) for a charged particle released at a given initial position and subjected to an outward force, and once \(|qB_{0}/m|=|\omega_{B}|>|\omega_{c}|\) charged particles can move along SVMs in the jet region. We obtain the polarized images of the electromagnetic radiations from the trajectories along SVMs. Our simplified model suggests that the SVM radiations can act as the light source to illuminate the BH and form a photon ring structure.
\(*\) Corresponding author: [email protected]
Introduction
The Event Horizon Telescope (EHT) Collaboration successfully photographed the supermassive black holes (BHs) at the center of the elliptical galaxy M87 and at the Galactic center in 2017 [1, 2]. The physics encoded in BH images has attracted more and more attention since their first release in 2019. The formation of the ring structure surrounding an inner BH shadow in each BH picture is a subtle but essential question for understanding BH physics. In particular, from the photon ring structure and inner shadow [3], we can catch a glimpse of the physical environment around a BH, since a BH is not isolated in the real universe and needs a light source to illuminate it in order to form a picture.
In previous studies, many works have focused on the view that the light source for the ring structure mainly comes from the accretion disk around BHs or BH mimickers [4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27]. Among these, the EHT Collaboration considered Standard and Normal Evolution (SANE) and Magnetically Arrested Disk (MAD) models in their general relativistic magnetohydrodynamic (GRMHD) simulations [28]. It was found that in all MAD models, much of the emission arises in disk regions. Nevertheless, in large \(R_{\rm high}\) SANE models, where \(R_{\rm high}\simeq T_{i}/T_{e}\) in the midplane of the disk with \(T_{i}\) and \(T_{e}\) being the temperatures of ions and electrons, the inner ring emission arises from the funnel wall. The funnel is understood as the strongly magnetized region near the pole where the Poynting-flux jets play a leading role [29], and the funnel wall denotes the boundary between the funnel and the corona [30], which is associated with the unbound mass flux in the jets. In other words, the light that makes up the photon ring probably comes mainly from the radiation at the boundary of the jet around a BH. In addition, exciting works concentrating on the images of the whole jet in the vicinity of BHs can be found in [31, 32, 33, 34, 35, 36].
However, the mechanism of jet formation and jet morphology are still open questions even though past research has shown that energies can be extracted from a BH by Blandford-Znajek mechanism [37], gas-pressure acceleration [38], magnetic reconnection [39] and the like. In the well-known Blandford-Znajek process, the energy of the central BH is extracted by the torsion of the magnetic field and transmitted along magnetic lines in the form of Poynting flux, which in turn accelerates charged particles in the distance. Moreover, Ruffini et al. proposed a theoretical framework of the "inner engine" [40, 41, 42], which states that the gravitomagnetic interaction of a Kerr BH with a surrounding magnetic field would induce a large-scale electric field that can accelerate charged particles near the event horizon directly to form the SVMs, in which particles are accelerated to ultra-relativistic energies and thus create a powerful electrically charged jet.
In this work, we would like to reexamine the mechanism proposed by Ruffini et al. In particular, employing the effective potential method [43, 44, 45], we plot the effective potentials that visually show how charged particles are accelerated and the configurations capable of ejecting particles. Also, we introduce an effective force along the axis and identify sufficient conditions for forming the SVMs of charged particles in this mechanism. Based on the knowledge of the SVMs, we then study the polarized synchrotron radiations of these SVM particles. By using the numerical backward ray-tracing method, we further explore their polarized images received by a distant observer. In our simplified model, we show that there exists a ring structure produced by the multiple images of the SVMs.
The paper is organized as follows. In Sec. 2, we review the framework of the inner engine and study the vortical motions of charged particles in the jet region. In Sec. 3, we set up our model for imaging the SVM radiations and show the results. We close our work with a summary in Sec. 4. In this paper, we work in the geometrized unit with \(G=c=1\).
## 2 SVMs of charged particles in the jet region
In this section, we focus on SVMs of charged particles in the spacetime containing a Kerr BH with a uniform magnetic field. In particular, we assume that the uniform magnetic field of interest is not strong, so the backreaction to the spacetime can be ignored. Precisely, we assume that the uniform magnetic field is described by the Wald solution, which satisfies the Einstein-Maxwell equations [46]. In [40, 41, 42], it has been shown that the system containing a Kerr BH and a Wald magnetic field can be provided as an inner engine to accelerate charged particles to ultra-relativistic energies along vortical motions in the jet region of the BH.
### Kerr BH and Wald electromagnetic field
In this subsection, we would like to review the basic framework of the inner engine and present some important results that appeared in [42]. We begin with the background spacetime, which is described by the Kerr metric. The line element takes the following form
\[\mathrm{d}s^{2} = -\bigg{(}1-\frac{2Mr}{\Sigma}\bigg{)}\mathrm{d}t^{2}+\frac{ \Sigma}{\Delta}\mathrm{d}r^{2}+\Sigma\mathrm{d}\theta^{2}+\bigg{(}r^{2}+a^{2} +\frac{2Mra^{2}}{\Sigma}\mathrm{sin}^{2}\theta\bigg{)}\mathrm{d}\phi^{2}- \frac{4Mra}{\Sigma}\mathrm{sin}^{2}\theta\mathrm{d}t\mathrm{d}\phi\,, \tag{2.1}\] \[= g_{\mu\nu}\mathrm{d}x^{\mu}\mathrm{d}x^{\nu}\]
in the Boyer-Lindquist (BL) coordinates, and
\[\Delta=r^{2}-2Mr+a^{2}\,,\quad\Sigma=r^{2}+a^{2}\cos^{2}\theta\,, \tag{2.2}\]
where \(M\) and \(a\) are the mass and the spin parameters of the Kerr BH, respectively. For simplicity and without loss of generality, we set \(M=1\) hereafter. The horizons can be found by solving the equation \(\Delta=0\) which gives \(r_{\pm}=1\pm\sqrt{1-a^{2}}\).
Since the Kerr BH is stationary and axisymmetric, we may consider a stationary uniform magnetic field, which was first found by R. Wald in [46]. In the Wald solution, the electromagnetic 4-potential is given by
\[A_{t} = -aB_{0}\Big{[}1-\frac{r}{\Sigma}\big{(}1+\cos^{2}\theta\big{)}\Big{]}\,,\] \[A_{\phi} = \frac{1}{2}B_{0}\sin^{2}\theta\bigg{[}r^{2}+a^{2}-\frac{2ra^{2}}{ \Sigma}\big{(}1+\cos^{2}\theta\big{)}\bigg{]}\,, \tag{2.3}\]
where the parameter \(B_{0}\) denotes the magnetic strength at infinity. We disregard the backreaction of the magnetic field to the background spacetime. As shown in the work [42], due to the interaction of the magnetic field and the gravitational field, an electric field is induced in the locally non-rotating frame (LNRF). The electric field is sourced by a quadrupolar distribution of surface charge density on the horizon [47]
\[\sigma(\theta)=\frac{B_{0}ar_{+}(r_{+}-1)\big{(}r_{+}\sin^{4} \theta-\cos^{4}\theta-\cos^{2}\theta\big{)}}{4\pi\big{(}r_{+}^{2}+a^{2}\cos^{2 }\theta\big{)}}\,, \tag{2.4}\]
leading to a zero net charge of the BH. The tetrad basis of the LNRF takes the form
\[e_{(0)}=\frac{g_{\phi\phi}\partial_{t}-g_{\phi t}\partial_{\phi} }{\sqrt{g_{\phi\phi}\Big{(}g_{\phi t}^{2}-g_{\phi\phi}g_{tt}\Big{)}}}\,,\quad e _{(1)}=\frac{\partial_{r}}{\sqrt{g_{rr}}}\,,\quad e_{(2)}=\frac{\partial_{ \theta}}{\sqrt{g_{\theta\theta}}}\,,\quad e_{(3)}=\frac{\partial_{\phi}}{ \sqrt{g_{\phi\phi}}}\,. \tag{2.5}\]
The components of the electric and magnetic field measured by the locally non-rotating observer (LNRO) \(u^{\mu}\equiv e_{(0)}^{\mu}\) in the LNRF are then given by
\[E_{(i)}=F_{\mu\nu}e_{(i)}^{\mu}u^{\nu}=F_{\mu\nu}e_{(i)}^{\mu}e _{(0)}^{\nu}\,,\quad B_{(i)}=\frac{1}{2}\epsilon_{\mu\nu\sigma\rho}F^{\sigma \rho}u^{\mu}e_{(i)}^{\nu}=\frac{1}{2}\epsilon_{\mu\nu\sigma\rho}F^{\sigma \rho}e_{(0)}^{\mu}e_{(i)}^{\nu}\,, \tag{2.6}\]
where \(F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}\) is the electromagnetic field tensor in the BL coordinates and \(\epsilon_{\mu\nu\sigma\rho}\) denotes the Levi-Civita tensor. After some tedious calculations, the expressions of the non-zero components of the electric and magnetic field in the LNRF are
\[E_{(1)} = -\frac{B_{0}a}{\Sigma^{2}A^{1/2}}\big{[}\big{(}r^{2}+a^{2}\big{)} \big{(}r^{2}-a^{2}\cos^{2}\theta\big{)}\big{(}1+\cos^{2}\theta\big{)}-2r^{2} \sin^{2}\theta\Sigma\big{]}\,,\] \[E_{(2)} = \frac{B_{0}a\Delta^{1/2}}{\Sigma^{2}A^{1/2}}2ra^{2}\sin\theta \cos\theta\big{(}1+\cos^{2}\theta\big{)}\,,\] \[B_{(1)} = -\frac{B_{0}\cos\theta}{\Sigma^{2}A^{1/2}}\big{[}2ra^{2}\big{(}2 r^{2}\cos^{2}\theta+a^{2}+a^{2}\cos^{4}\theta\big{)}-\big{(}r^{2}+a^{2}\big{)} \Sigma^{2}\big{]}\,,\] \[B_{(2)} = -\frac{\Delta^{1/2}B_{0}\sin\theta}{\Sigma^{2}A^{1/2}}\big{[}a^ {2}\big{(}r^{2}-a^{2}\cos^{2}\theta\big{)}\big{(}1+\cos^{2}\theta\big{)}+r \Sigma^{2}\big{]}\,, \tag{2.7}\]
where \(A=(r^{2}+a^{2})^{2}-\Delta a^{2}\sin^{2}\theta\). Combining with Eq. (2.5), in the BL coordinate basis the electric and magnetic field are expressed as
\[E^{r}=\frac{\Delta^{1/2}}{\Sigma^{1/2}}E_{(1)}\,,\quad E^{\theta }=\frac{E_{(2)}}{\Sigma^{1/2}}\,,\quad B^{r}=\frac{\Delta^{1/2}}{\Sigma^{1/2} }B_{(1)}\,,\quad B^{\theta}=\frac{B_{(2)}}{\Sigma^{1/2}}\,. \tag{2.8}\]
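As an illustrative aside, Eqs. (2.7)-(2.8) can be transcribed directly into a short numerical routine that could be used to reproduce the field structure discussed next; the minimal Python sketch below assumes \(M=1\), and the printed check relies on the far-field limit, where the Wald field reduces to a uniform field along \(z\) so that \(B_{(1)}\to B_{0}\cos\theta\).

```python
import numpy as np

def wald_fields_lnrf(r, th, a=0.94, B0=1.0):
    """Non-zero LNRF components of the Wald electric and magnetic fields,
    Eq. (2.7), together with their BL counterparts, Eq. (2.8) (M = 1)."""
    s, c = np.sin(th), np.cos(th)
    Sig = r**2 + a**2*c**2
    Del = r**2 - 2*r + a**2
    A = (r**2 + a**2)**2 - Del*a**2*s**2
    pref = 1.0/(Sig**2*np.sqrt(A))
    E1 = -B0*a*pref*((r**2 + a**2)*(r**2 - a**2*c**2)*(1 + c**2) - 2*r**2*s**2*Sig)
    E2 = B0*a*np.sqrt(Del)*pref*2*r*a**2*s*c*(1 + c**2)
    B1 = -B0*c*pref*(2*r*a**2*(2*r**2*c**2 + a**2 + a**2*c**4) - (r**2 + a**2)*Sig**2)
    B2 = -B0*s*np.sqrt(Del)*pref*(a**2*(r**2 - a**2*c**2)*(1 + c**2) + r*Sig**2)
    # BL-coordinate components, Eq. (2.8)
    bl = (np.sqrt(Del/Sig)*E1, E2/np.sqrt(Sig), np.sqrt(Del/Sig)*B1, B2/np.sqrt(Sig))
    return (E1, E2, B1, B2), bl

# far-field check: B_(1) should approach B0*cos(theta)
print(wald_fields_lnrf(1e4, np.pi/3)[0][2], np.cos(np.pi/3))
```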
In Fig. 1, we show the electric and magnetic fields measured by the LNRO in the \(\rho\)-\(z\) plane, where \(\rho=r\sin\theta,z=r\cos\theta\). Our results are consistent with Fig. 3 and Fig. 4 in [42], where their electromagnetic field lines are presented in the Kerr-Schild coordinates1. Considering the \(\mathcal{Z}_{2}\) symmetry in the Kerr BH spacetime, we discuss the behaviors of the electromagnetic field in terms of the northern hemisphere. On the one hand, for the electric field we can see that there exists a critical angle \(\theta_{c}\), at which the electric field vanishes and reverses direction in the upper panel of Fig. 1. In fact, we can
Figure 1: Electromagnetic field configuration. Upper: Electric field lines of the Wald electromagnetic field in the \(\rho\)-\(z\) plane with \(\rho=r\sin\theta,z=r\cos\theta\). Lower: Magnetic field lines in the \(\rho\)-\(z\) plane. Left: We set \(a=0.94,B_{0}=1\). Right: \(a=0.94,B_{0}=-1\). The black filled disk denotes the BH horizon.
find the value of \(\theta_{c}\) from
\[\sigma(\theta_{c})=0\,. \tag{2.9}\]
Note that the electric field force on electrons is opposite to the direction of the electric field, and it is just the other way around for protons. Therefore, when \(B_{0}>0\) we have a polar electronic jet region within the critical angle, that is, \(0\leq\theta\leq\theta_{c}\). On the contrary, a polar protonic jet region occurs at \(0\leq\theta\leq\theta_{c}\) when \(B_{0}<0\). On the other hand, in the lower panel of Fig. 1 we can see that the magnetic field is asymptotically directed along the \(z\)-axis, since \(|B^{z}|\gg|B^{x}|\) always holds.
As a result, within the jet region, one expects that charged particles could be accelerated outward to high energies by the electric field and move vortically under the influence of the magnetic field. If a charged particle is released without radial and polar velocities in the LNRF and then moves along an outward vortical motion in the jet region, we refer to the motion of the particle as a spontaneously vortical motion (SVM). The conical surface \(\theta=\theta_{c}\) serves as the boundary of the SVM cluster.
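As a concrete check, the critical angle follows numerically from Eq. (2.9); the short Python sketch below (assuming \(M=1\)) brackets the root of the angular factor of \(\sigma(\theta)\) on \((0,\pi/2)\) and, for \(a=0.94\), returns \(\theta_{c}\simeq 52.1^{\circ}\), consistent with the value quoted later in the text.

```python
import numpy as np
from scipy.optimize import brentq

a = 0.94
r_plus = 1 + np.sqrt(1 - a**2)                 # outer horizon radius, M = 1

# angular factor of the horizon surface charge density, Eq. (2.4)
sigma_factor = lambda th: r_plus*np.sin(th)**4 - np.cos(th)**4 - np.cos(th)**2

theta_c = brentq(sigma_factor, 1e-8, np.pi/2)
print(np.degrees(theta_c))                     # about 52.1 degrees for a = 0.94
```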
### SVMs in the jet region
In this subsection, we study SVMs of charged particles in the jet region and present the corresponding trajectories. We consider a charged particle of mass \(m\) and charge \(q\) in the Kerr BH spacetime with a Wald magnetic field. The Hamiltonian for the dynamics of the charged particle can be written in the form
\[H=\frac{1}{2}g_{\mu\nu}\dot{x}^{\mu}\dot{x}^{\nu}=\frac{1}{2}g^{\mu\nu}\Big{(} \frac{\pi_{\mu}}{m}-\frac{q}{m}A_{\mu}\Big{)}\Big{(}\frac{\pi_{\nu}}{m}- \frac{q}{m}A_{\nu}\Big{)}\,, \tag{2.10}\]
where \(\dot{x}^{\mu}\equiv dx^{\mu}/d\tau\) with \(\tau\) the proper time and \(\pi_{\mu}=p_{\mu}+qA_{\mu}=m\dot{x}_{\mu}+qA_{\mu}\) is the canonical momentum with \(p^{\mu}\) the 4-momentum. Then the motions of charged particles are governed by the Hamilton canonical equation, that is
\[\dot{x}^{\mu}=\frac{\partial H}{\partial\pi_{\mu}}\,,\quad\dot{\pi}_{\mu}=- \frac{\partial H}{\partial x^{\mu}}\,. \tag{2.11}\]
Due to the presence of the Killing vectors \(\partial_{t}\) and \(\partial_{\phi}\) in the system, we can define the generalized energy and angular momentum as
\[E=-\frac{\pi_{t}}{m}=-\frac{p_{t}+qA_{t}}{m}\,,\quad L=\frac{\pi_{\phi}}{m}= \frac{p_{\phi}+qA_{\phi}}{m}\,, \tag{2.12}\]
which are constants along the motions of charged particles. In addition, in some of the discussions below, we have scaled these two conserved quantities by the mass, that is, \(E\) and \(L\) are understood as the generalized energy and momentum per unit mass. Combined with the normalization condition \(g_{\mu\nu}\dot{x}^{\mu}\dot{x}^{\nu}=-1\), Eq. (2.12) implies
\[E=\frac{-\beta+\sqrt{\beta^{2}-4\alpha X}}{2\alpha}\,, \tag{2.13}\]
where we introduce
\[\alpha = -g^{tt}\,,\] \[\beta = 2\Big{[}g^{t\phi}\Big{(}L-\frac{q}{m}A_{\phi}\Big{)}-g^{tt} \frac{q}{m}A_{t}\Big{]}\,,\] \[X = -g^{\phi\phi}\Big{(}L-\frac{q}{m}A_{\phi}\Big{)}^{2}-g^{tt}\frac{ q^{2}}{m^{2}}A_{t}^{2}+2g^{t\phi}\frac{q}{m}A_{t}\Big{(}L-\frac{q}{m}A_{\phi} \Big{)}-1-g^{\theta\theta}\frac{p_{\theta}^{2}}{m^{2}}-g^{rr}\frac{p_{r}^{2}}{ m^{2}}\,. \tag{2.14}\]
Considering that we focus on the SVMs in the jet region, we should set the initial radial and polar velocities of the particle to vanish in the LNRF, that is, \(p_{(1)}=p_{(2)}=0\) initially, or equivalently, \(p_{r}(\tau=0)=p_{\theta}(\tau=0)=0\)2. Hence, it is helpful to introduce an effective potential to analyze the SVMs of charged particles, which is defined as
Footnote 2: An analytical study on the vortical motions in the near horizon region of an extreme Kerr BH can be found in a recent work [48].
\[V_{\rm eff}=E|_{p_{\theta}=p_{r}=0}=\frac{-\beta+\sqrt{\beta^{2}-4\alpha \gamma}}{2\alpha}\,, \tag{2.15}\]
where
\[\gamma\equiv X|_{p_{\theta}=p_{r}=0}=-g^{\phi\phi}\Big{(}L-\frac{q}{m}A_{\phi }\Big{)}^{2}-g^{tt}\frac{q^{2}}{m^{2}}A_{t}^{2}+2g^{t\phi}\frac{q}{m}A_{t} \Big{(}L-\frac{q}{m}A_{\phi}\Big{)}-1\,. \tag{2.16}\]
Then, with the expression of the Wald solution Eq. (2.3), we find that \(V_{\rm eff}\) depends on the spin \(a\), the generalized momentum \(L\), and the parameter \(\omega_{B}\equiv qB_{0}/m\), where \(\omega_{B}\) is introduced as the coupling factor describing the interaction between charged particles and the electromagnetic field. In this work, we focus on \(\omega_{B}<0\), namely, the charge and the magnetic field have the opposite sign. From Fig. 1, we can see that charged particles always have an outward electric field force in the jet region at \(\omega_{B}<0\) so that the SVMs become possible when the electric field force is large enough to break through the gravitational barrier no matter whether the charge is positive or negative. In order to see the magnitudes of \(\omega_{B}\) for electrons and protons, we recover the dimension of \(\omega_{B}\) in Gaussian units, that is,
\[\omega_{B}=\frac{|e|}{m_{e}}B_{0}\bigg{(}\frac{q}{|e|}\bigg{)}\Big{(}\frac{m_{e}}{m}\Big{)}\frac{GM}{c^{3}}\simeq 86\bigg{(}\frac{q}{|e|}\bigg{)}\Big{(}\frac{m_{e}}{m}\Big{)}\bigg{(}\frac{M}{M_{\odot}}\bigg{)}\bigg{(}\frac{B_{0}}{1{\rm Gauss}}\bigg{)}\,, \tag{2.17}\]
where \(|e|\) is the unit charge, \(m_{e}\) is the mass of an electron, \(M_{\odot}\) is the solar mass. For the supermassive black hole in the centre of the M87 galaxy, the mass is about \(6.5\times 10^{9}M_{\odot}\) and the magnetic field strength is about \(1-30\) Gauss according to the estimation by the EHT Collaboration [49]. So let
us take \(B_{0}=10\) Gauss in our analysis. Thus, for an electron we have \(|\omega_{B}|\simeq 5.59\times 10^{12}\gg 1\) considering \(|e|/m_{e}\simeq 1.76\times 10^{11}\) C/kg, as a result, electromagnetic field interactions are much larger than gravitational interactions. In addition, we are also interested in the case of \(|\omega_{B}|\sim 1\) in terms of theoretical research, since the electromagnetic force is about as strong as the gravitational force in this case. Now the specific charge could be \(|q|/m\sim 10^{-2}\) C/kg, which corresponds to charged dust or cloud of charged particles around the BH. In particular, considering that the electromagnetic field interactions vary with the specific charge and the gravitational force on a charged particle depends only on the mass, there might be a critical value \(\omega_{B}=\omega_{c}\), at which the gravitational and electromagnetic forces just cancel out along the \(z\) direction when we release the charged particle. In order to determine \(\omega_{c}\), we define an effective force as
\[F_{z}^{\text{eff}}\equiv-\partial_{z}V_{\text{eff}}\,, \tag{2.18}\]
along the \(z\) direction. Thus, \(\omega_{c}\) can be obtained by solving \(F_{z}^{\text{eff}}=0\) at the initial position (\(t_{i}=0,r_{i},\theta_{i},\phi_{i}\)). Note that the system of interest is axisymmetric and we can set \(\phi_{i}=0\) for simplicity.
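A possible numerical implementation of this procedure is sketched below in Python (\(M=1\), \(B_{0}\) absorbed into \(\omega_{B}\)): the effective potential of Eqs. (2.15)-(2.16) is evaluated with \(L=\omega_{B}A_{\phi}/B_{0}\) fixed at the release point (vanishing kinetic momenta), \(F_{z}^{\text{eff}}\) is approximated by a central difference, and the root of \(F_{z}^{\text{eff}}(\omega_{B})=0\) is bracketed; for the parameters of the next paragraph one expects a value close to \(-1.48\).

```python
import numpy as np
from scipy.optimize import brentq

a = 0.94                                  # BH spin, M = 1

def metric_inv(r, th):
    """Inverse Kerr metric components g^tt, g^tphi, g^phiphi in BL coordinates."""
    s2, c2 = np.sin(th)**2, np.cos(th)**2
    Sig = r**2 + a**2*c2
    Del = r**2 - 2*r + a**2
    A = (r**2 + a**2)**2 - Del*a**2*s2
    return -A/(Del*Sig), -2*a*r/(Del*Sig), (Del - a**2*s2)/(Del*Sig*s2)

def wald(r, th):                          # A_t/B0 and A_phi/B0, Eq. (2.3)
    s2, c2 = np.sin(th)**2, np.cos(th)**2
    Sig = r**2 + a**2*c2
    return -a*(1 - r*(1 + c2)/Sig), 0.5*s2*(r**2 + a**2 - 2*r*a**2*(1 + c2)/Sig)

def V_eff(r, th, wB, L):                  # Eqs. (2.14)-(2.16)
    gtt, gtp, gpp = metric_inv(r, th)
    At, Ap = wald(r, th)
    alpha = -gtt
    beta = 2*(gtp*(L - wB*Ap) - gtt*wB*At)
    gamma = -gpp*(L - wB*Ap)**2 - gtt*(wB*At)**2 + 2*gtp*wB*At*(L - wB*Ap) - 1
    return (-beta + np.sqrt(beta**2 - 4*alpha*gamma))/(2*alpha)

def Fz_eff(wB, ri=2.0, thi=np.pi/9, dz=1e-5):
    L = wB*wald(ri, thi)[1]               # vanishing kinetic momenta at release
    rho, z = ri*np.sin(thi), ri*np.cos(thi)
    V = lambda zz: V_eff(np.hypot(rho, zz), np.arctan2(rho, zz), wB, L)
    return -(V(z + dz) - V(z - dz))/(2*dz)    # Eq. (2.18)

omega_c = brentq(Fz_eff, -3.0, -0.5)      # expected close to -1.48
print(omega_c)
```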
Next, we would like to consider some specific situations. We set the spin of the Kerr BH to \(a=0.94\), which is in the scope of the estimated values provided by the EHT Collaboration [28]. In this case, we set \(r_{i}=2>r_{+}\simeq 1.34\) and \(\theta_{i}=\pi/9<\theta_{c}\simeq 52.10^{\circ}\). Then we can solve \(\partial_{z}V_{\text{eff}}=0\) at \((r,\theta)=(2,\pi/9)\), and find \(\omega_{c}\simeq-1.48\). Thus, for a charged particle released at \((r,\theta)=(2,\pi/9)\), we know that when \(|\omega_{B}|>1.48\) the particle can be ejected from around the BH, while \(|\omega_{B}|<1.48\) the charged particle would fall into the BH. In Fig. 2, we show the effective potentials \(V_{\text{eff}}\) at \(\omega_{B}=\omega_{1}=-5.59\times 10^{12}\) and \(\omega_{B}=\omega_{2}=-1.7\). The red cross marks the initial position of the charged particle, and the grey dashed
Figure 2: The effective potential of charged particles along SVMs with \(p_{\phi}(\tau=0)=0\). The horizontal and vertical coordinates are \(\rho=r\sin\theta\) and \(z=r\cos\theta\), respectively. The red cross marks the initial position of the charged particle in each plot. Left: \(\omega_{B}=\omega_{1}=-5.59\times 10^{12}\); Right: \(\omega_{B}=\omega_{2}=-1.7\).
line is the contour of the effective potential where the charged particle lies at the initial time in each plot of Fig. 2. In addition, the values of \(V_{\rm eff}\) are not shown in the white region since the values near the axis are too large. In the left plot of Fig. 2, we set \(\omega_{B}=\omega_{1}\) and see that the effective potential above the initial position is lower and the motions of electrons are confined in the narrow stripe enclosed by the dashed line. Hence, electrons cannot be captured by the BH. Instead, they could spray out. Also, noting the axial symmetry of spacetime, the two stripes in the hemisphere are connected when considering the \(\phi\)-direction. In the right one of Fig. 2, we set \(\omega_{B}=\omega_{2}\) and find that apart from the contours of the effective potential similar to the case at \(\omega_{B}=\omega_{1}\) there are other contours of the effective potential around the BH. However, although the effective potentials inside the contours are all lower, the charged particles still cannot move toward the BH since there are barriers between the two lower potential areas. As a result, we can see that in both cases discussed above, charged particles have to go outward under the influence of the Lorentz force.
For comparison, we also show two examples in Fig. 3, where we set \(\omega_{B}=-1\) in the left plot and \(\omega_{B}=+1.7\) in the right one. One can easily see that in the left plot although \(\omega_{B}<0\), the charged particle has to fall into the BH since \(|\omega_{B}|<1.48\) and the outward Lorentz force is not sufficient to resist the attraction of gravity. As for the right plot, obviously the charged particle cannot go outward since \(\omega_{B}>0\) and the Lorentz force is inwardly directed.
In Fig. 4 and Fig. 5, we present the trajectories of the motions of charged particles at \(\omega_{B}=\omega_{1}\) and \(\omega_{B}=\omega_{2}\) in the Kerr-Schild coordinates. Recall that the relation of spatial coordinates between
Figure 3: The effective potential of falling charged particles with \(p_{\phi}(\tau=0)=0\). The horizontal and vertical coordinates are \(\rho=r\sin\theta\) and \(z=r\cos\theta\). The red cross marks the initial position of the charged particle in each plot. Left: \(\omega_{B}=-1\); Right: \(\omega_{B}=+1.7\)
the Kerr-Schild coordinates and the BL ones is
\[x=\left(r\cos\phi-a\sin\phi\right),\quad y=\left(r\sin\phi+a\cos\phi\right), \quad z=r\cos\theta\,. \tag{2.19}\]
Also, note that the charged particle is released at \((0,2,\pi/9,0)\) in BL coordinates with vanishing 3-velocity in the LNRF. We can see that each trajectory of charged particles for \(\omega_{B}=\omega_{1}\) and \(\omega_{B}=\omega_{2}\) starts with a curve and turns to a straight line at first glance. And the straight line is basically along the direction of the magnetic field. However, when we zoom in on the straight trajectory, we find it is actually spiral because of the magnetic field. The main reason why the degree of the spiral is small is that the velocity of the particle perpendicular to the direction of the magnetic field is small throughout the trajectory. In addition, as the specific charge \(|q/m|\) increases, the Lorentz force on the charged particle increases, so that the spiral effect of the magnetic field will be more suppressed. Comparing Fig. 4 with Fig. 5, one can easily see that the spiral motion is more significant for \(\omega_{B}=\omega_{2}\). In order to see a trajectory of which the magnitude of the spiral is large enough to be comparable with the horizon of the BH, we set \(\omega_{B}=-3\) and release the charged particle with an initial momentum \(p_{\phi}=1\) at \((0,2,\pi/9,0)\) in the BL coordinates. The trajectory is shown in Fig. 6. One can see that the spiral motion upwards at the scale of the horizon of the BH and the rotation axis is very near the spin axis of the BH. Now we are convinced that the SVMs as expected can occur in the jet region under practical
Figure 4: Trajectory of charged particles along a SVM with \(p_{\phi}(\tau=0)=0\) for \(\omega_{B}=\omega_{1}\). Left: After a short curvilinear motion, the electron almost follows the direction of the magnetic field from a macro perspective. Right: Zoom in on the trajectory: it appears obviously spiraling upward when \(z\) is large.
conditions.
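For completeness, the SVM trajectories can also be reproduced by integrating the canonical equations (2.11) numerically. The following is a minimal Python sketch (\(M=1\), \(B_{0}\) absorbed into \(\omega_{B}\), finite-difference gradients, and an illustrative integration span) for the \(\omega_{B}=\omega_{2}\) release of Fig. 5 with vanishing initial kinetic momenta; it is meant as a starting point rather than the code used for the figures.

```python
import numpy as np
from scipy.integrate import solve_ivp

a, wB = 0.94, -1.7                        # spin and coupling q*B0/m (omega_2 case)

def g_inv(r, th):
    """Inverse Kerr metric components in BL coordinates (M = 1)."""
    s2, c2 = np.sin(th)**2, np.cos(th)**2
    Sig, Del = r**2 + a**2*c2, r**2 - 2*r + a**2
    A = (r**2 + a**2)**2 - Del*a**2*s2
    return (-A/(Del*Sig), -2*a*r/(Del*Sig), Del/Sig, 1.0/Sig,
            (Del - a**2*s2)/(Del*Sig*s2))     # g^tt, g^tphi, g^rr, g^thth, g^phiphi

def A_pot(r, th):
    """Wald 4-potential divided by B0, Eq. (2.3)."""
    s2, c2 = np.sin(th)**2, np.cos(th)**2
    Sig = r**2 + a**2*c2
    return (-a*(1 - r*(1 + c2)/Sig),
            0.5*s2*(r**2 + a**2 - 2*r*a**2*(1 + c2)/Sig))

def H(x, p):
    """Minimally coupled super-Hamiltonian per unit mass; 2H = -1 on shell."""
    r, th = x[1], x[2]
    gtt, gtp, grr, gqq, gpp = g_inv(r, th)
    At, Ap = A_pot(r, th)
    Pt, Pr, Pq, Pp = p[0] - wB*At, p[1], p[2], p[3] - wB*Ap
    return 0.5*(gtt*Pt**2 + 2*gtp*Pt*Pp + gpp*Pp**2 + grr*Pr**2 + gqq*Pq**2)

def rhs(tau, y, eps=1e-6):
    """Hamilton's canonical equations, Eq. (2.11), via central differences."""
    x, p = y[:4], y[4:]
    dx, dp = np.zeros(4), np.zeros(4)
    for i in range(4):
        e = np.zeros(4); e[i] = eps
        dx[i] = (H(x, p + e) - H(x, p - e))/(2*eps)
        dp[i] = -(H(x + e, p) - H(x - e, p))/(2*eps)
    return np.concatenate([dx, dp])

# release at (t, r, theta, phi) = (0, 2, pi/9, 0) with vanishing kinetic momenta
r0, th0 = 2.0, np.pi/9
At0, Ap0 = A_pot(r0, th0)
L = wB*Ap0                                # canonical p_phi when kinetic p_phi = 0
E0 = -wB*At0 + 1.0/np.sqrt(-g_inv(r0, th0)[0])   # Eq. (2.13) for this release
y0 = np.array([0.0, r0, th0, 0.0, -E0, 0.0, 0.0, L])
print("mass-shell check, 2H =", 2*H(y0[:4], y0[4:]))     # should be close to -1

sol = solve_ivp(rhs, (0.0, 60.0), y0, rtol=1e-8, atol=1e-10)
r, th, ph = sol.y[1], sol.y[2], sol.y[3]
x_ks = r*np.cos(ph) - a*np.sin(ph)        # Kerr-Schild coordinates, Eq. (2.19)
y_ks = r*np.sin(ph) + a*np.cos(ph)
z_ks = r*np.cos(th)
print("final (rho, z) =", float(np.hypot(x_ks[-1], y_ks[-1])), float(z_ks[-1]))
```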
## 3 Polarized images of SVM radiations
Considering that the accelerated charged particles in the magnetic field emit radiations synchronously, the SVMs in the jet region can provide polarized radiation profiles which can be observed by the EHT. Thus, in this section we move forward to study the polarized images of SVM radiations. A covariant method to evaluate the directions and intensities of electromagnetic radiations in a curved spacetime has been developed in our previous work [50], which has been used to calculate the polarized images of synchrotron radiations emitted from circular motions in [51, 52], combined with the numerical backward ray-tracing method detailed in another of our works [53]. In this work, we extend our previous analysis in [50, 53] to obtain the polarized images of the SVM radiations.
### Polarization, intensity and numerical method
Now, we set up our problems and briefly introduce the numerical techniques that we need to use later. For simplicity, we assume the source comprises one or several trajectories. Each trajectory is
Figure 5: Trajectory of charged particles along a SVM with \(p_{\phi}(\tau=0)=0\) for \(\omega_{B}=\omega_{2}\). Left: After a period of curvilinear motion, the electron also almost follows the direction of the magnetic field from a macro perspective. Right: Zoom in on the trajectory: it appears obviously spiraling and moving obliquely upward.
formed by a charged ball-like object with a small constant radius \(b\) in the BL coordinates moving along a SVM. We assume the radiations are emitted from the surface of the ball-like object, and the four-velocity of each point on the surface is approximately described by the four-velocity of the center of the object. The images generated by our treatment would be similar to time-integrated images of a hotspot [54, 55, 56, 57, 58, 59, 60]. In addition, in the limit \(b\to 0\), the charged ball-like object reduces to a point-like charged particle.
Following the results in [50], an accelerated point-like charged particle in a magnetic field would produce electromagnetic radiations, whose polarization vector is given by
\[f^{\mu}=N^{-1}(K_{\nu}u^{\nu}a^{\mu}-K_{\nu}a^{\nu}u^{\mu})\,, \tag{3.1}\]
where \(K^{\mu}\) is the wave vector of the electromagnetic radiation, \(N\) is a normalization factor, which is not important here since \(f^{\mu}\) only encodes the directional information of the electromagnetic radiations. And \(a^{\mu}\) is the 4-acceleration of the charged particle, which takes
\[a^{\mu}=\frac{q}{m}F^{\mu\nu}u_{\nu}=u^{\nu}\nabla_{\nu}u^{\mu}\,. \tag{3.2}\]
In addition, different from the model in [50] where the source is considered as a spot, in this paper the source is taken to be the trajectories of a charged body, which can be regarded as an extended light
Figure 6: Trajectory of charged particles along a SVM for \(\omega_{B}=-3\). The charged particle is released with an initial momentum \(p_{\phi}=1\). The magnitude of the spiral is comparable with the horizon of the BH.
source. Thus, the expression of the emission intensity is slightly different from the one that appeared in [50], and approximately takes
\[I_{s}=\frac{4q^{2}}{b^{2}}\Bigg{[}\bigg{(}\frac{K_{\mu}a^{\mu}}{K_{ \nu}u^{\nu}}\bigg{)}^{2}+a^{\mu}a_{\mu}\Bigg{]}\,. \tag{3.3}\]
Note that in Eqs. (3.1) and (3.3) we have reparameterized the wave vector \(K^{\alpha}\) here compared to the wave vector \(k^{\alpha}\) defined in [50]. More precisely, the relation between \(K^{\mu}\) and \(k^{\mu}\) reads
\[k^{\mu}=-\frac{K^{\mu}}{K_{\nu}u^{\nu}}\,, \tag{3.4}\]
which follows the gauge \(k^{\mu}u_{\mu}=-1\). Then, when we apply the numerical backward ray-tracing method to obtain the image of the electromagnetic radiations from the trajectories, we are able to set \(K_{\mu}Z^{\mu}=1\) without loss of generality, where \(Z^{\mu}\) is the 4-velocity of the so-called zero-angular-momentum-observer (ZAMO) with the coordinates \((t_{o},r_{o},\theta_{o},\phi_{o})\). Here we want to stress that \(K^{\mu}\) is a past-directed vector in the backward ray-tracing procedure. The "redshift" factor between the source and the ZAMO is given by
\[g=\frac{K_{\mu}Z^{\mu}}{K_{\mu}u^{\mu}}=\frac{1}{K_{\mu}u^{\mu}}\,, \tag{3.5}\]
and then the intensity at the ZAMO is
\[I_{o}=g^{4}I_{s}\,. \tag{3.6}\]
Note that we have integrated over the observed frequency, which gives rise to \(g^{4}\) in the above equation. Then, we normalize \(I_{o}\) as
\[T=\frac{1}{\log_{2}\!\left(1+\frac{1}{I_{o}/I_{o}^{\rm max}} \right)}\,, \tag{3.7}\]
where \(I_{o}^{\rm max}\) is defined as the maximum value of \(I_{o}\), so that \(T\) falls between 0 and 1. It is worth stressing that when defining \(T\) we refer to the expression of the brightness temperature [61], which is defined as the measured temperature of the emissions from a black body. To image the source on the screen of the ZAMO, we employ the stereographic projection by introducing standard Cartesian coordinates \((\mathcal{X},\mathcal{Y})\) on the screen, which is also called the fisheye camera model. To begin with, we define the frame of the ZAMO, whose tetrad \(\{\hat{e}^{\nu}_{(\mu)}\}\) can be found from Eqs. (3.5) to (3.8) in [62]
\[\hat{e}_{(0)}=\frac{g_{\phi\phi}\partial_{t}-g_{\phi t}\partial_ {\phi}}{\sqrt{g_{\phi\phi}\Big{(}g_{\phi t}^{2}-g_{\phi\phi}g_{tt}\Big{)}}} \,,\qquad\hat{e}_{(1)}=-\frac{\partial_{r}}{\sqrt{g_{rr}}}\,,\quad\hat{e}_{(2 )}=\frac{\partial_{\theta}}{\sqrt{g_{\theta\theta}}}\,,\quad\hat{e}_{(3)}=- \frac{\partial_{\phi}}{\sqrt{g_{\phi\phi}}}\,, \tag{3.8}\]
which are very similar to the tetrad of the LNRF in Eq. (2.5). Note that the minus signs in \(\hat{e}_{(1)}\) and \(\hat{e}_{(3)}\) are added to facilitate the backward ray-tracing method, that is, \(\hat{e}_{(1)}=-e_{(1)}\) and \(\hat{e}_{(3)}=-e_{(3)}\)
The details of the backward ray-tracing method used in this work can be found in the Appendix. B of [53]. Following the convention in [53], the celestial coordinates are defined as
\[\cos\Theta=\frac{K_{\mu}\hat{e}_{(1)}^{\mu}}{|K_{\mu}\hat{e}_{(0)}^{ \mu}|}\,,\quad\tan\Psi=\frac{K_{\mu}\hat{e}_{(3)}^{\mu}}{K_{\mu}\hat{e}_{(2)}^{ \mu}}\,, \tag{3.9}\]
and the relation between the celestial coordinates and the standard Cartesian coordinates \((\mathcal{X},\mathcal{Y})\) is given by [63]
\[\mathcal{X}=-2\tan\frac{\Theta}{2}\sin\Psi\,,\quad\mathcal{Y}=-2 \tan\frac{\Theta}{2}\cos\Psi\,. \tag{3.10}\]
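In practice, Eqs. (3.9)-(3.10) amount to a short mapping from the ZAMO-frame projections of the wave vector to the screen; a minimal Python sketch of this mapping reads:

```python
import numpy as np

def screen_coords(K0, K1, K2, K3):
    """Map the ZAMO-frame projections K_(i) = K_mu e-hat_(i)^mu of the
    (past-directed) wave vector to fisheye-camera screen coordinates,
    Eqs. (3.9)-(3.10)."""
    Theta = np.arccos(K1/abs(K0))
    Psi = np.arctan2(K3, K2)
    return -2*np.tan(Theta/2)*np.sin(Psi), -2*np.tan(Theta/2)*np.cos(Psi)
```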
In addition, the information of \(f^{\mu}\) given at the source can be decoded on the screen of the ZAMO with the aid of the Penrose-Walker (PW) constant \(\kappa\)[64] in the Kerr spacetime, which takes
\[\vec{\mathcal{E}}=(\mathcal{E}_{\mathcal{X}},\mathcal{E}_{ \mathcal{Y}})=\frac{1}{K_{t}(\mathcal{Y}_{o}^{2}+\mathcal{Z}_{o}^{2})}(\kappa _{2}\mathcal{Y}_{o}-\kappa_{1}\mathcal{Z}_{o},\kappa_{1}\mathcal{Y}_{o}+ \kappa_{2}\mathcal{Z}_{o})\,,\quad\mathcal{Z}_{o}=-(\mathcal{X}_{o}+a\sin \theta_{o})\,, \tag{3.11}\]
and
\[\kappa_{1}+i\kappa_{2}\equiv\kappa=-2K^{\mu}f^{\nu}\Big{(}\hat{l}_ {[\mu}\hat{n}_{\nu]}-\hat{m}_{[\mu}\hat{\bar{m}}_{\nu]}\Big{)}(r-ia\cos\theta )\,, \tag{3.12}\]
where \(\{\hat{l},\hat{n},\hat{m},\hat{\bar{m}}\}\) are the Newman-Penrose tetrads whose explicit expressions can be found from Eq. (10) in [64]. Obviously, Eq. (3.12) can be calculated at the source and then one can obtain the direction of the polarization on the screen from Eq. (3.11) considering that \(\kappa\) is a constant along a null geodesic in Kerr spacetime.
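Once \(\kappa_{1}\) and \(\kappa_{2}\) are known at the source, Eq. (3.11) is a purely algebraic map to the screen polarization; a minimal Python sketch, with the screen coordinates \(\mathcal{X}_{o},\mathcal{Y}_{o}\) of the image point as inputs, reads:

```python
import numpy as np

def screen_polarization(kappa1, kappa2, Kt, Xo, Yo, a, theta_o):
    """Polarization direction on the observer's screen from the Penrose-Walker
    constant, Eq. (3.11); (Xo, Yo) are the screen coordinates of the image point."""
    Zo = -(Xo + a*np.sin(theta_o))
    norm = Kt*(Yo**2 + Zo**2)
    return (kappa2*Yo - kappa1*Zo)/norm, (kappa1*Yo + kappa2*Zo)/norm
```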
### Results
Now, we are ready to explore the images of the charged source in the jet region. Since the BH is very far from us in the universe, we set \(r_{o}=300\gg r_{h}\) in our numerical program. In terms of the Killing vectors \(\partial_{t}\) and \(\partial_{\phi}\) in the spacetime, we fix \(t_{o}=\phi_{o}=0\) in the following. As for the observational angle, we focus on two situations: one is that the observer is located on the equatorial plane, that is \(\theta_{o}=90^{\circ}\), and the other one is that we set \(\theta_{o}=17^{\circ}\), which corresponds to the observational angle of viewing the supermassive BH in the center of the M87 galaxy. Recall that the spin of the Kerr BH and the magnitude of the magnetic field were chosen as \(a=0.94\) and \(B_{0}=10\) Gauss in studying the SVMs in subsection 2.2; here we set \(b=0.15\ll r_{h}\simeq 1.34\). The remaining parameter that is not determined is the specific charge, or rather the parameter \(\omega_{B}=qB_{0}/m\), which was defined in subsection 2.2. In this work, our main interest is to see the images of the trajectories along the SVMs of electrons, so we will investigate the images for \(\omega_{B}=\omega_{1}\) with a vanishing initial velocity later. In addition, considering that the SVM becomes visible at the scale of the horizon of the BH when \(\omega_{B}=-3\) with \(p_{r}(\tau=0)=p_{\theta}(\tau=0)=0,\,p_{\phi}(\tau=0)=1\), we would like to first make some explorations in this simple case.
In Fig. 7, we show the images for the case of the apparent spiral motion observed at \(\theta_{o}=90^{\circ}\) and \(\theta_{o}=17^{\circ}\). More precisely, we have \(\theta_{o}=90^{\circ}\) for the upper panel of Fig. 7 and \(\theta_{o}=17^{\circ}\) for the lower one. On the left panel, we show the BH shadows corresponding to null geodesics falling into the horizon and the shapes of the images of the source. On the middle panel, we show the polarization directions of the electromagnetic radiations along the trajectories. On the right panel, we show the intensities of the images. The source is a single trajectory along the SVM presented in Fig. 6, where the charged particles with \(\omega_{B}=-3\) are released at \(r_{s}=2,\theta_{s}=\pi/9,\phi_{s}=0\) with a nonvanishing initial momentum \(p_{\phi}=1\). From Fig. 7, we can see the primary image of the source in the northern hemisphere of each plot both for \(\theta_{o}=90^{\circ}\) and \(\theta_{o}=17^{\circ}\). A demagnified secondary image can be seen clearly in the southern hemisphere, and higher order images are barely visible around the shadow curve at \(\theta_{o}=90^{\circ}\). At \(\theta_{o}=17^{\circ}\), by contrast, higher order images are visible, although the secondary image is less clear. As a result, we can roughly observe a photon ring structure in the photo at \(\theta_{o}=17^{\circ}\), which implies that the photon rings might be universally observed when the SVM radiation serves as the light source, without involving the accretion disc. In addition, we can see a big difference between the
Figure 7: Images of a single trajectory along the SVM with \(\omega_{B}=-3\). Charged particles are released at \(r_{s}=2,\theta_{s}=\pi/9,\phi_{s}=0\) with an initial momentum \(p_{\phi}=1\) in the northern hemisphere. Left: Images with the BH shadow; Middle: Polarization directions; Right: The intensity of the image; Upper: \(\theta_{o}=90^{\circ}\); Lower: \(\theta_{o}=17^{\circ}\).
shapes of the primary images observed at \(\theta_{o}=90^{\circ}\) and \(\theta_{o}=17^{\circ}\) due to the strong gravitational lensing around the BH. Furthermore, we can see that the intensity on the left side of the trajectory is more significant than that on the other side because the object moves in a counterclockwise spiral as viewed from the north pole. When the source reaches the left side, its velocity points toward the observer. On the contrary, when it arrives at the right side, its velocity points away from the observer. Due to the Doppler effect, the intensity on the left side is larger than the one on the other side. Moreover, the polarization directions along the trajectories change periodically for the primary images.
In Fig. 8 we show the images for the radiations from the electrons in SVMs observed at \(\theta_{o}=90^{\circ}\) and \(\theta_{o}=17^{\circ}\). In this case we have \(\omega_{B}=\omega_{1}\). Recall that the motion trail of a SVM electron is almost a straight line, essentially unchanged along the \(x\) and \(y\) directions on a macro scale, as presented in Fig. 4; in order to have a better imaging result we set the light source to multiple trajectories along the corresponding SVMs in this situation. In the northern and southern hemispheres, we place sixteen
Figure 8: Images of sixteen trajectories along the SVM with \(\omega_{B}=\omega_{1}\). The sixteen trajectories are symmetric pairwise about the equatorial plane. In the northern hemisphere, four charged particles of the trajectories in red are released at \(r_{s}=2,\theta_{s}=\pi/9,\phi_{s}=0\), \(\pi/2\), \(\pi\), \(3\pi/2\) and the other four of the trajectories in blue are released at \(r_{s}=3,\theta_{s}=\pi/9,\phi_{s}=\pi/4,\,3\pi/4,\,5\pi/4,\,7\pi/4\). Left: Images with the BH shadow; Middle: Polarization directions; Right: Intensity of the images; Upper: \(\theta_{o}=90^{\circ}\); Lower: \(\theta_{o}=17^{\circ}\).
trajectories pairwise-symmetrically as the light source. And the eight trajectories in the northern hemisphere start at a conical surface with the half-angle of projection \(\theta_{s}=\pi/9\). Furthermore, we set half of the starting points at \(r_{s}=2\) with \(\phi_{s}=0,\,\pi/2,\,\pi,\,3\pi/2\), and the other half at \(r_{s}=3\) with \(\phi_{s}=\pi/4,\,3\pi/4,\,5\pi/4,\,7\pi/4\), so that their images can be distinguished as much as possible on the screen of the observers. Then considering the \(Z_{2}\) symmetry of the Kerr BH spacetime, the trajectories in the southern hemisphere can be understood easily. Similar to Fig. 7, we also show the images observed at \(\theta_{o}=90^{\circ}\) on the upper panel and the lower panel gives the results for \(\theta_{o}=17^{\circ}\). The left two plots illustrate the BH shadows and the images of trajectories, where the red colour signifies the results for \(r_{s}=2\) and the blue one denotes the results for \(r_{s}=3\). In the middle plots we show the polarization directions along the images, and in the right plots we present the intensities of the images.
According to Fig. 8, the images obtained by the equatorial observers remain \(Z_{2}\) symmetric, and the intensity of primary images decreases as the particles move farther away from the BH. Furthermore, the high-order images for equatorial observers cannot form a complete ring structure because of the limitation of \(\theta\)-oscillation of null geodesics [65]. In contrast, a photon ring structure can be formed by the primary and high-order images of the SVM source in the screen of an observer at \(\theta_{o}=17^{\circ}\). Due to the Doppler effect, the radiations from the northern hemisphere are much brighter than those in the southern hemisphere. In addition, we draw two circles \(C_{1,2}\) in the lower right plot in Fig. 8, which cross the starting points of four trajectories for \(r_{s}=2\) and \(r_{s}=3\), respectively. One can infer that all the starting points of trajectories for \(r_{s}=2\) and \(\phi_{s}\in[0,2\pi]\) could form a closed curve close to \(C_{1}\) and something similar is true for \(C_{2}\). Thus we can imagine that the trajectories starting at a larger value of \(r_{s}\) lead to a larger circle, and they may occupy the whole BH shadow region if charged particles are ejected at each \(r_{s}\in(r_{h},\infty)\). As a result, we may lose the so-called BH shadow. The main reason for the disappearance of BH shadow is that our source is assumed to be opaque. One can expect a significant intensity gap if the SVM source becomes optically thin. Moreover, for the observers at \(\theta_{o}=90^{\circ}\) and \(\theta_{o}=17^{\circ}\), the polarization of high-order images is indistinct and in disorder while the primary images have specific approximately parallel polarization directions.
## 4 Summary and discussion
In the present paper, we studied the polarized images of SVM radiations near the Kerr BH with a vertical and uniform magnetic field, which is described by the Wald solution. Our work mainly includes two aspects. In the first part, we focused on the SVMs of charged particles and found a critical value \(\omega_{c}\) of the parameter \(\omega_{B}=qB_{0}/m\) when charged particles were subjected to an outward electric field force in the LNRF. In the jet region, that is \(0<\theta<\theta_{c}\), the charged particles at rest in the LNRF would be ejected when \(|\omega_{B}|>|\omega_{c}|\); otherwise they would fall into the BH. Then, we presented the
trajectories of SVMs for charged particles with various \(\omega_{B}\), including the electrons.
In the second part, we investigated the polarized images of the radiations from the trajectories along the SVM at \(\theta_{o}=90^{\circ}\) and \(\theta_{o}=17^{\circ}\). As a warm up, we showed the image of a single trajectory along the SVM with \(\omega_{B}=-3\), which was released at \(r_{s}=2,\theta_{s}=\pi/9,\phi_{s}=0\) with an initial momentum \(p_{\phi}=1\). Then, we considered sixteen trajectories along the SVMs of charged particles with \(\omega_{B}=\omega_{1}\), which were symmetric about the equatorial plane. From the images observed at \(\theta_{o}=90^{\circ}\) and \(\theta_{o}=17^{\circ}\), we found that the SVM radiations could form the photon ring structure without the radiations from the accretion disk.
In our model, the images are quite different from the pictures of the supermassive BHs in the M87 and Milky Way galaxies except for the photon ring structure. There are two main reasons, as follows. First, the covariant method we used to calculate intensity in [50] is independent of photon frequency, while a real observation must include the effects of the radiation frequency. Hence, the contributions of SVM radiations may not be significant when observed at 230 GHz, which is used in the EHT observations. Secondly, the electromagnetic radiation of charged particles is likely weaker than the thermal radiations of the electrons in the fluid model; that is, the signals of SVM radiations may be overwhelmed by the thermal radiations. Nevertheless, our results are still of theoretical and potential astrophysical interest when considering the images of the radiations from objects along SVMs.
## Acknowledgments
We thank Jiewei Huang, Changkai Luo and Ye Shen for helpful discussions. The work is partly supported by NSFC Grant No. 12275004, 12205013 and 11873044. MG is also supported by "the Fundamental Research Funds for the Central Universities" with Grant No. 2021NTST13.
|
2304.12997
|
Automatic selfadjoint-ideal semigroups for finite matrices
|
The notion of automatic selfadjointness of all ideals in a multiplicative
semigroup of the bounded linear operators on a separable Hilbert space B(H)
arose in a 2015 discussion with Heydar Radjavi who pointed out that B(H) and
the finite rank operators F(H) possessed this unitary invariant property which
category we named SI semigroups (for automatic selfadjoint ideal semigroups).
Equivalent to the SI property is the solvability, for each A in the semigroup,
of the bilinear operator equation A^* = XAY which we believe is a new
connection relating the semigroup theory with the theory of operator equations.
We found in our earlier works in the subject that even at the basic level of
singly generated semigroups, the investigation of SI semigroups led to
interesting algebraic and analytic phenomena when generated by rank one
operators, normal operators, partial and power partial isometries,
subnormal-hyponormal-essentially normal operators, and weighted shift
operators; and generated by commuting families of normal operators. In this
paper, we focus on a separate M_n(C) treatment for singly generated SI
semigroups that requires studying the solvability of the bilinear matrix
equation A^* = XAY in a multiplicative semigroup of finite matrices. This
separate focus is needed because the techniques employed in our earlier works
we could not adapt to finite matrices. In this paper we find that for certain
classes of generators, being a partial isometry is equivalent to generating an
SI semigroup. Such classes are: degree 2 nilpotent matrices, weighted shifts,
and non-normal Jordan matrices. For the key tools used to establish these
equivalences, we developed a number of necessary conditions for singly
generated semigroups to be SI for the very general classes: nonselfadjoint
matrices, nonzero nilpotent matrices, nonselfadjoint invertible matrices, and
Jordan blocks.
|
Sasmita Patnaik, Sanehlata, Gary Weiss
|
2023-04-25T17:11:04Z
|
http://arxiv.org/abs/2304.12997v1
|
# Automatic selfadjoint-ideal semigroups for finite matrices
###### Abstract.
The notion of automatic selfadjointness of all ideals in a multiplicative semigroup of the bounded linear operators on a separable Hilbert space \(B(\mathcal{H})\) arose in a 2015 discussion with Heydar Radjavi who pointed out that \(B(\mathcal{H})\) and the finite rank operators \(F(\mathcal{H})\) possessed this unitary invariant property which category we named SI semigroups (for automatic selfadjoint ideal semigroups). Equivalent to the SI property is the solvability, for each \(A\) in the semigroup, of the _bilinear_ operator equation \(A^{*}=XAY\) which we believe is a new connection relating the semigroup theory with the theory of operator equations.
We found in our earlier works in the subject that even at the basic level of singly generated semigroups, the investigation of SI semigroups led to interesting algebraic and analytic phenomena when generated by rank one operators, normal operators, partial and power partial isometries, subnormal-hyponormal-essentially normal operators, and weighted shift operators; and generated by commuting families of normal operators.
In this paper, we focus on a separate \(M_{n}(\mathbb{C})\) treatment for singly generated SI semigroups that requires studying the solvability of the bilinear matrix equation \(A^{*}=XAY\) in a multiplicative semigroup of finite matrices. This separate focus is needed because the techniques employed in our earlier works we could not adapt to finite matrices. In this paper we find that for certain classes of generators, being a partial isometry is equivalent to generating an SI semigroup. Such classes are: degree 2 nilpotent matrices, weighted shifts, and non-normal Jordan matrices. For the key tools used to establish these equivalences, we developed a number of necessary conditions for singly generated semigroups to be SI for the very general classes: nonselfadjoint matrices, nonzero nilpotent matrices, nonselfadjoint invertible matrices, and Jordan blocks. We also show, for a nonselfadjoint matrix generator in an SI semigroup, the matrix being a partial isometry is equivalent to having norm one. And as an aside, we also prove necessary generator conditions for the SI property when generated by matrices with nonnegative entries.
2020 Mathematics Subject Classification: Primary: 47D03, 20M12, 20M05, 15A20, 15A24
Secondary: 47A05, 47A65, 20M10, 15A06, 15A18 *Supported by Science and Engineering Research Board, Core Research Grant 002514.
**Partially supported by Simons Foundation collaboration grants 245014 and 636554.
## 1. Introduction
bilinear matrix equations \(A^{*}=XAY\) in \(\mathcal{S}(A,A^{*})\) seemed intractable. So despite the fact that Jordan matrices are not totally general (in the unitarily equivalent sense), they are direct sums of Jordan blocks which are special triangular matrices, and with obtaining some preliminary somewhat general results, we were able to completely characterize those Jordan matrices that singly generate an SI semigroup.
To summarize, in this paper we investigate singly generated SI semigroups of finite matrices by offering a new perspective on the interplay between the study of a particular matrix equation with the structure of the generating matrix of the SI semigroup. And in some cases, this leads to a classification of non-simple SI semigroups and simple semigroups. The main result of this paper is Theorem 3.14 where we provide a characterization of those singly generated SI semigroups \(\mathcal{S}(T,T^{*})\) generated by a Jordan matrix (see the definition in Section 3, third paragraph). This also provides an alternate characterization of power partial isometry. (A power partial isometry is a partial isometry with all its powers also partial isometries.) When \(T\) belongs to either the class of nilpotent matrices of degree 2 or the class of weighted shift matrices, here also we provide an alternate SI characterization of partial isometry: \(T\) is a partial isometry if and only if \(\mathcal{S}(T,T^{*})\) is SI (see Corollaries 2.10-2.11). Moreover, as we know partial isometries have norm one but not all norm one matrices are partial isometries, in the SI environment they are equivalent. That is, we prove that under the SI property of \(\mathcal{S}(T,T^{*})\) generated by a nonselfadjoint matrix \(T\), if \(||T||=1\) then \(T\) is a partial isometry (Theorem 3.18).
We found that the general problem of characterizing SI semigroups \(\mathcal{S}(T,T^{*})\) for a general matrix \(T\) is still open, but the complications we encountered in dealing with the case of Jordan matrices illustrate the value of sparsifying a matrix via unitary equivalence (because of the unitary invariance of SI semigroups) and the motivation for focusing first on Jordan matrices.
### Preliminaries. Definitions (1.1-1.5) and Terminology
We recall below the general \(B(\mathcal{H})\) definitions and terminology from [7], but instead for finite matrices.
_Definition 1.1_.: A semigroup \(\mathcal{S}\) in \(M_{n}(\mathbb{C})\) is a subset closed under multiplication. A selfadjoint semigroup \(\mathcal{S}\) is a semigroup also closed under adjoints, i.e., \(\mathcal{S}^{*}:=\{T^{*}\mid T\in\mathcal{S}\}\subset\mathcal{S}\).
_Definition 1.2_.: An ideal \(J\) of a semigroup \(\mathcal{S}\) in \(M_{n}(\mathbb{C})\) is a subset of \(\mathcal{S}\) closed under products of operators in \(\mathcal{S}\) and \(J\). That is, \(XT,TY\in J\) for \(T\in J\) and \(X,Y\in\mathcal{S}\). And so also \(XTY\in J\).
The next definition is new to the field of multiplicative \(B(\mathcal{H})\)-semigroups, motivated by Radjavi and first published in [7].
_Definition 1.3_.: A selfadjoint-ideal (SI) semigroup \(\mathcal{S}\) in \(M_{n}(\mathbb{C})\) is a semigroup for which every ideal \(J\) of \(\mathcal{S}\) is closed under adjoints, i.e., \(J^{*}:=\{T^{*}\mid T\in J\}\subset J.\)
Because this selfadjoint ideal property in Definition 1.3 concerns selfadjointness of all ideals in a semigroup, we call these semigroups selfadjoint-ideal semigroups (SI semigroups for short).
_Semigroups generated by \(\mathcal{A}\subset M_{n}(\mathbb{C})\)_
_Definition 1.4_.: The semigroup generated by a set \(\mathcal{A}\subset M_{n}(\mathbb{C})\), denoted by \(\mathcal{S}(\mathcal{A})\), is the intersection of all semigroups containing \(\mathcal{A}.\) Also define \(\mathcal{A}^{*}:=\{A^{*}|A\in\mathcal{A}\}.\)
For short we denote by \(\mathcal{S}(T)\) the semigroup generated by \(\{T\}\) (called generated by \(T\) for short). It should be clear for the semigroup \(\mathcal{S}(\mathcal{A})\) that Definition 1.4 is equivalent to the semigroup consisting of all possible words of the form \(A_{1}A_{2}\cdots A_{k}\) where \(k\in\mathbb{N}\) and \(A_{i}\in\mathcal{A}\) for each \(1\leq i\leq k\).
_Definition 1.5_.: The selfadjoint semigroup generated by a set \(\mathcal{A}\subset M_{n}(\mathbb{C})\) denoted by \(\mathcal{S}(\mathcal{A}\cup\mathcal{A}^{*})\) or \(\mathcal{S}(\mathcal{A},\mathcal{A}^{*})\), is the intersection of all selfadjoint semigroups containing \(\mathcal{A}\cup\mathcal{A}^{*}\). Let \(\mathcal{S}(T,T^{*})\) denote for short \(\mathcal{S}(\{T\},\{T^{*}\})\) and call it the singly generated selfadjoint semigroup generated by \(T\).
It is clear that \(\mathcal{S}(\mathcal{A},\mathcal{A}^{*})\) is a selfadjoint semigroup. Moreover, it is clear that Definition 1.5 conforms to the meaning of \(\mathcal{S}(\mathcal{A}\cup\mathcal{A}^{*})\) in terms of words discussed above. That is, it consists of all words of the form \(A_{1}A_{2}\cdots A_{k}\) where \(k\in\mathbb{N}\) and \(A_{i}\in\mathcal{A}\cup\mathcal{A}^{*}\) for each \(1\leq i\leq k\).
The focus of this paper is the investigation of the singly generated SI semigroups \(\mathcal{S}(T,T^{*})\) generated by \(T\in M_{n}(\mathbb{C})\). So, we provide a description of the elements of \(\mathcal{S}(T,T^{*})\) here ([7, Proposition 1.6]).
For \(T\in B(H)\), the semigroup \(S(T,T^{*})\) generated by the set \(\{T,T^{*}\}\) is given by
\[S(T,T^{*})=\{T^{n},T^{*n},\Pi_{j=1}^{k}T^{n_{j}}T^{*m_{j}},(\Pi_{j=1}^{k}T^{n_{ j}}T^{*m_{j}})T^{n_{k+1}},\Pi_{j=1}^{k}T^{*m_{j}}T^{n_{j}},(\Pi_{j=1}^{k}T^{*m_{j}} T^{n_{j}})T^{*m_{k+1}}\} \tag{1}\]
where \(n\geq 1\), \(k\geq 1\), \(n_{j},m_{j}\geq 1\,\text{for}\,1\leq j\leq k\), and \(n_{k+1},m_{k+1}\geq 1\). The product \(\Pi_{j=1}^{k}\) in the semigroup list is meant to denote an ordered product. Indeed, this follows directly by taking \(\mathcal{A}=\{T\}\).
Alternatively, \(\mathcal{S}(T,T^{*})\) consists of: words only in \(T\), words only in \(T^{*}\), words that begin and end in \(T\), words that begin with \(T\) and end with \(T^{*}\), and words that begin with \(T^{*}\) and end with \(T\) and words that begin and end with \(T^{*}\).
## 2. \(\mathcal{S}(T,T^{*})\) characterizations and necessary conditions for the SI property and simplicity
In [7, Section 3] we obtained a characterization (i.e., a set of necessary and sufficient conditions depending on the class in which \(T\) resides) for the SI property of semigroups \(\mathcal{S}(T,T^{*})\) generated by a rank-one operator \(T\); and in some cases the SI property characterized the simplicity of \(\mathcal{S}(T,T^{*})\). (A summary of this complete classification is provided in [7, before Remark 3.21].) The various levels of difficulty and limited techniques at our disposal complicated there our approach to this case by case characterization for the SI semigroup \(\mathcal{S}(T,T^{*})\) in this simplest case of rank-ones. We began our study of general rank-one operators (instead of considering matrix forms) with the hope that we could extend our results to higher ranks, which turned out not to be the case. So for us to make progress in the study of SI semigroups \(\mathcal{S}(T,T^{*})\) generated by finite rank operators, here we reduce the study of SI semigroups \(\mathcal{S}(T,T^{*})\) generated by finite ranks to the study of SI semigroups \(\mathcal{S}(T,T^{*})\) generated by finite matrices using the following observation. And then we develop differing methods for various classes of generating matrices.
Observe that for \(T\in\mathcal{F}(\mathcal{H})\) (the set of finite rank operators on a Hilbert space \(\mathcal{H}\)), \(T^{*}\) is also finite rank (via an argument using the polar decomposition), and hence the subspace \(\mathcal{H}_{n}=T\mathcal{H}+T^{*}\mathcal{H}\) is a finite dimensional reducing subspace for \(T\) with \(T\) unitarily equivalent to \(T_{n}\oplus 0\) for some \(T_{n}\in\mathcal{B}(\mathcal{H}_{n})\). Furthermore, as every operator on a finite dimensional space is unitarily equivalent to an upper triangular matrix, \(T_{n}\) is unitarily equivalent to an operator in \(B(\mathcal{H}_{n})\) whose matrix is upper triangular and hence \(T\) has a basis in which its matrix has form \(T_{n}\oplus 0\) and is upper triangular. In other words, \(T\in\mathcal{F}(\mathcal{H})\) is unitarily equivalent to \(T_{n}\oplus 0\) where the matrix representation of \(T_{n}\) with respect to some orthonormal basis in \(\mathcal{H}_{n}\) is upper triangular.
On unitary invariance of the SI property, recall that if \(T\) and \(S\) are unitarily equivalent, then \(\mathcal{S}(T,T^{*})\) is SI if and only if \(\mathcal{S}(S,S^{*})\) is SI (a special case of [7, Theorem 1.21]). In particular, for \(T\) a finite rank operator, as discussed in the previous paragraph, \(T\) is unitarily equivalent to \(T_{n}\oplus 0\) where the finite matrix representation of \(T_{n}\) with respect to some orthonormal basis in \(\mathcal{H}_{n}\) is upper triangular. Hence we have that \(\mathcal{S}(T,T^{*})\) is SI if and only if \(\mathcal{S}(T_{n},T_{n}^{*})\) is SI. Recall also [7, Lemma 1.9] that in general, \(\mathcal{S}(T,T^{*})\) having the SI property is equivalent to solving the equation \(W^{*}=XWY\) for every word \(W\) in \(T\) and \(T^{*}\) for \(X,Y\in\mathcal{S}(T,T^{*})\cup\{I\}\). So here we reduce our study of the SI property of \(\mathcal{S}(T,T^{*})\) to the study of the SI property of \(\mathcal{S}(T_{n},T_{n}^{*})\). That is, we consider upper triangular matrix representations of \(T_{n}\) to study the SI property of \(\mathcal{S}(T_{n},T_{n}^{*})\) and the solvability of the aforementioned bilinear equation.
So the study of when \(\mathcal{S}(T,T^{*})\) possesses the SI property for \(T\) finite rank is reduced to the case when \(T\in M_{n}(\mathbb{C})\) and our first result in this direction is a necessary condition for \(\mathcal{S}(T,T^{*})\) to be an SI semigroup when \(T\in M_{n}(\mathbb{C})\) is nonselfadjoint. (When \(T\) is selfadjoint, \(\mathcal{S}(T,T^{*})=\{T^{n}\mid n\geq 1\}\) is clearly automatically SI. See [7, Remark 1.13(i)-(ii)] for a detailed discussion of this case.)
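To make the word criterion of [7, Lemma 1.9] concrete, the following is a small brute-force sketch in Python/NumPy (an illustration of ours, not taken from [7]) that enumerates words in \(T\) and \(T^{*}\) up to a length bound and searches for solutions of \(W^{*}=XWY\); failing to find a solution within the bound is only evidence rather than a proof, but the two \(2\times 2\) nilpotent examples behave exactly as the necessary conditions developed below predict: the partial isometry passes and the non-partial-isometry does not.

```python
import itertools
import numpy as np

def words(T, max_len):
    """Distinct matrices arising as words in {T, T*} of length <= max_len."""
    letters = (T, T.conj().T)
    out = {}
    for n in range(1, max_len + 1):
        for w in itertools.product(letters, repeat=n):
            M = w[0] if n == 1 else np.linalg.multi_dot(w)
            out.setdefault((np.round(M, 10) + 0.0).tobytes(), M)
    return list(out.values())

def si_obstructions(T, max_len=5):
    """Words W (up to max_len) for which no X, Y in S(T,T*) union {I}
    with XWY = W* is found within the same length bound."""
    S = words(T, max_len)
    cands = S + [np.eye(T.shape[0], dtype=T.dtype)]
    return [W for W in S
            if not any(np.allclose(X @ W @ Y, W.conj().T)
                       for X in cands for Y in cands)]

T = np.array([[0., 1.], [0., 0.]])        # degree-2 nilpotent partial isometry
print(len(si_obstructions(T)))            # 0: every checked word satisfies W* = XWY

T2 = np.array([[0., 2.], [0., 0.]])       # nilpotent, not a partial isometry
print(len(si_obstructions(T2)))           # > 0: already W = T2 has no solution found
```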
In summary, going forward here and in the last section we continue our investigation of SI semigroups \(\mathcal{S}(T,T^{*})\) generated by a finite rank operator \(T\) beyond rank-one by focusing on finite matrix cases. In this section, we first provide a necessary condition for \(\mathcal{S}(T,T^{*})\) to be an SI semigroup when \(T\) is nonselfadjoint (Theorem 2.5 below). Then for the class of nilpotent matrices (equivalently those with unitarily equivalent strictly upper triangular matrix representations), we provide some new connections between the SI property of \(\mathcal{S}(T,T^{*})\) and partial isometries. In particular, we give a necessary condition for \(\mathcal{S}(T,T^{*})\) to be SI when generated by a nilpotent matrix (Corollary 2.6); and as a consequence, we provide partial answers in Corollaries 2.10-2.11 to [7, Question 2.7]: Characterize which partial isometries \(T\) have their generated semigroups \(\mathcal{S}(T,T^{*})\) possessing the SI property, and among those determine which possess the stronger property of simpleness. Earlier Popov-Radjavi had proved the alternate (to definition) characterization of power partial isometry: that an operator \(T\) is a power partial isometry if and only if \(\mathcal{S}(T,T^{*})\) consists of _only_ partial isometries [5, Proposition 2.2]. By combining this result with [7, Corollary 1.15], Patnaik-Weiss in [7, Remark 2.3] proved that if \(T\) is a power partial isometry, then \(\mathcal{S}(T,T^{*})\) is SI. And regarding the converse, in Remark 2.9 below, using Corollary 2.11, we obtain that the converse also holds if \(T\) is a unilateral weighted shift matrix.
_Remark 2.1_.: We note here that the next Propositions 2.2-2.4 are proved for \(M_{n}(\mathbb{C})\), but a simple argument shows they also hold for finite rank operators in \(B(\mathcal{H})\) because, as said earlier, every finite rank operator is unitarily equivalent to a finite rank operator whose matrix representation is the direct sum of a finite-dimensional upper triangular matrix and an infinite-dimensional zero matrix. We believe that Propositions 2.2-2.4 are well known, but we have presented them here for completeness and because they will be employed repeatedly in some later parts of this section.
**Proposition 2.2**.: _For \(T\in M_{n}(\mathbb{C})\) a nonzero nilpotent matrix, one has_
\[\operatorname{ran}T^{*}\not\subset\operatorname{ran}T\quad\text{and}\quad \operatorname{ran}T\not\subset\operatorname{ran}T^{*}.\]
Proof.: It is well-known that every square matrix \(T\) is unitarily equivalent to an upper (or lower) triangular matrix \(A\), and nilpotency being a unitary invariant, it is easily verified that the diagonal of \(A\) must be zero. And it is straightforward to verify that the range non-inclusions in the statement of the proposition are also unitarily invariant. Hence it suffices to prove the proposition for \(A=[a_{ij}]\) a nilpotent upper triangular matrix, that is, \(a_{ij}=0\) for all \(i\geq j\).
To obtain \(\operatorname{ran}A^{*}\not\subset\operatorname{ran}A\), let \(k\) be the maximum index such that the column \(A^{*}e_{k}\neq 0\) (such a \(k\) exists as \(A\neq 0\)). Since \(A\) has diagonal \(0\), \(A^{*}e_{n}=\bar{a}_{nn}e_{n}=0\) and hence \(k<n\). Then also \(a_{ij}=0\) for each \(i>k\) and for all \(j\). This implies that the span of the \(A\) columns, that is, \(\operatorname{ran}A\subset\operatorname{span}\{e_{1},\cdots,e_{k}\}\). Since \(A^{*}\) is strictly lower triangular and \(A^{*}e_{k}\neq 0\), \(A^{*}e_{k}=\bar{a}_{k,k+1}e_{k+1}+\bar{a}_{k,k+2}e_{k+2}+...+\bar{a}_{k,n}e_{n}\) where at least one of the coefficients is nonzero. Therefore \(\operatorname{ran}A^{*}\not\subset\operatorname{ran}A\) or equivalently, \(\operatorname{ran}T^{*}\not\subset\operatorname{ran}T\).
To obtain \(\operatorname{ran}T\not\subset\operatorname{ran}T^{*}\), since the adjoint of a nilpotent matrix is nilpotent, apply the previous case to \(T^{*}\).
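As a minimal illustration of Proposition 2.2 (recorded only for orientation), take the \(2\times 2\) nilpotent matrix \(T=\begin{bmatrix}0&1\\ 0&0\end{bmatrix}\). Then \(\operatorname{ran}T=\operatorname{span}\{e_{1}\}\) while \(\operatorname{ran}T^{*}=\operatorname{span}\{e_{2}\}\), so neither range contains the other.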
**Proposition 2.3**.: _For \(A,B\in M_{n}(\mathbb{C})\), \(\operatorname{rank}(AB)\leq\min\{\operatorname{rank}A,\operatorname{rank}B\}\). More generally, for \(A_{1},\cdots,A_{k}\in M_{n}(\mathbb{C})\), \(\operatorname{rank}(A_{1}\cdots A_{k})\leq\min\{\operatorname{rank}A_{1}, \cdots,\operatorname{rank}A_{k}\}\)._
Proof.: Recall that for \(T\in M_{n}(\mathbb{C}),\operatorname{rank}T=\operatorname{rank}T^{*}\). Since \(\operatorname{ran}AB\subset\operatorname{ran}A\), \(\operatorname{rank}(AB)\leq\operatorname{rank}A\). Also, \(\operatorname{ran}(AB)^{*}=\operatorname{ran}(B^{*}A^{*})\subset\operatorname {ran}B^{*}\), which implies that \(\operatorname{rank}(AB)\leq\operatorname{rank}(B)\). Therefore, \(\operatorname{rank}(AB)\leq\min\{\operatorname{rank}A,\operatorname{rank}B\}\). By induction this holds for any finite product of matrices in \(M_{n}(\mathbb{C})\).
**Proposition 2.4**.: _For \(A\) a nonzero nilpotent finite matrix, one has_
\[\operatorname{rank}A^{2}<\operatorname{rank}A.\]
Proof.: Since \(\operatorname{ran}A^{2}\subset\operatorname{ran}A\) one has \(\operatorname{rank}A^{2}\leq\operatorname{rank}A\). But if \(\operatorname{rank}A^{2}=\operatorname{rank}A\) then \(\operatorname{ran}A^{2}=\operatorname{ran}A\) implying by a simple induction that \(\operatorname{ran}A^{n}=\operatorname{ran}A\) for all \(n\geq 1\), contradicting nonzero nilpotency.
We are now ready to prove a necessary condition involving kernels and partial isometries for \(\mathcal{S}(T,T^{*})\) to be an SI semigroup.
**Theorem 2.5**.: _Let \(T\in M_{n}(\mathbb{C})\) be a nonselfadjoint matrix. If \(\mathcal{S}(T,T^{*})\) is an SI semigroup, then_
\[\text{either }\ker T=\ker T^{2}\text{ or }T\text{ is a partial isometry.}\]
Proof.: Suppose \(\mathcal{S}(T,T^{*})\) is an SI semigroup. Then \((T)_{\mathcal{S}(T,T^{*})}\) is selfadjoint. Therefore, \(T^{*}=XTY\) for some \(X,Y\in\mathcal{S}(T,T^{*})\cup\{I\}\) where either \(X\) or \(Y\neq I\) (since \(T\) is nonselfadjoint). Recalling by Definition 1.4 that all members of \(\mathcal{S}(T,T^{*})\) are words in \(T\) and \(T^{*}\), if \(XTY\) contains any powers higher than one of \(T\) or \(T^{*}\), then by Proposition 2.3, one obtains \(\operatorname{rank}(XTY)\leq\operatorname{rank}T^{2}\) or \(\operatorname{rank}T^{*2}\). So \(T^{*}=XTY\) together with the fact that \(\operatorname{rank}T^{*}=\operatorname{rank}T\) implies that \(\operatorname{rank}T=\operatorname{rank}(XTY)\leq\operatorname{rank}(T^{2})\). Also, \(\operatorname{rank}(T^{2})\leq\operatorname{rank}T\) (also Proposition 2.3). Hence \(\operatorname{rank}(T^{2})=\operatorname{rank}T\). By the rank-nullity theorem, \(\dim(\ker T)=\dim(\ker T^{2})\). Since \(\ker T\subset\ker T^{2}\) and \(\dim(\ker T)=\dim(\ker T^{2})\), one has \(\ker T=\ker T^{2}\).
If \(XTY\) does not contain any higher powers of \(T\) or \(T^{*}\) and recalling not both \(X,Y\) are the identity operator, then by Equation (1) and avoiding all cases where \(T\) or \(T^{*}\) have powers higher than one,
\[T^{*}=XTY\in\{T,T^{*},(TT^{*})^{k},(TT^{*})^{k}T,(T^{*}T)^{k},(T^{*}T)^{k}T^{*}\}\]
for some \(k\geq 1\). Note that \(XTY\) cannot be of the first form since \(T\) is not selfadjoint; it cannot be of the second form since the word \(XTY\) contains a \(T\); and it cannot be of the third or the fifth form since \((TT^{*})^{k}\) and \((T^{*}T)^{k}\) are selfadjoint, so \(T^{*}=XTY\) would again force \(T\) to be selfadjoint. So either \(T^{*}=(TT^{*})^{k}T\) or \(T^{*}=(T^{*}T)^{k}T^{*}\) for some \(k\geq 1\). In the fourth case \(XTY=(TT^{*})^{k}T\), one has \(T^{*}=XTY=(TT^{*})^{k}T\) and by right multiplying by \(T\) on both sides, one obtains \(T^{*}T=(TT^{*})^{k}T^{2}\), which implies that \(\ker T^{2}\subset\ker T^{*}T\). But as is well-known and obvious to prove, \(\ker T^{*}T=\ker T\). Therefore \(\ker T^{2}\subset\ker T\) which, along with the obvious reverse inclusion, implies that \(\ker T=\ker T^{2}\), proving the theorem in this case. On the other hand in the sixth case, if \(XTY=(T^{*}T)^{k}T^{*}\), then \(T^{*}=(T^{*}T)^{k}T^{*}\), and so by right multiplying by \(T\) on both sides, one obtains \(T^{*}T=(T^{*}T)^{k+1}\). Therefore by the spectral theorem, \(T^{*}T\) is a projection, or equivalently, \(T\) is a partial isometry.
Recalling again that all square matrices are unitarily equivalent to some upper triangular matrix, the next result about nilpotent matrices interests us because these are precisely the matrices whose unitarily equivalent upper triangular forms, by a direct computation, are strictly upper triangular.
**Corollary 2.6**.: _Let \(T\in M_{n}(\mathbb{C})\) be a nonzero nilpotent matrix. If \(\mathcal{S}(T,T^{*})\) is an SI semigroup, then \(T\) is a partial isometry._
Proof.: Since \(T\) is a nonzero nilpotent matrix, \(T\) is not selfadjoint. From Proposition 2.4, \(\operatorname{rank}T^{2}<\operatorname{rank}T\). Therefore, by the rank-nullity theorem, \(\dim(\ker T)<\dim(\ker T^{2})\) and so \(\ker T\neq\ker T^{2}\). Since \(\mathcal{S}(T,T^{*})\) is an SI semigroup, by Theorem 2.5, \(T\) is a partial isometry.
The converse of Corollary 2.6 does not hold in general; see Example 2.7 below, where we provide a class of nilpotent matrices for which the converse fails. But if the nilpotency degree is \(2\), then Corollary 2.10 below proves the converse and so yields a characterization of those semigroups \(\mathcal{S}(T,T^{*})\) generated by a nilpotent matrix of degree \(2\) that are SI.
_Example 2.7_.: Let \(T\in M_{n}(\mathbb{C})\) be a nilpotent partial isometry with nilpotency degree equal to \(3\) (hence nonselfadjoint). If \(||T^{2}||<1\) (for instance \(\begin{pmatrix}0&1/\sqrt{2}&0&1/\sqrt{2}\\ 0&0&1&0\\ 0&0&0&0\\ 0&0&0&0\end{pmatrix}\) is such a nilpotent of degree \(3\) partial
isometry), then \(\mathcal{S}(T,T^{*})\) is not SI. Indeed, without loss of generality we may assume that \(T\) is a strictly upper triangular matrix and it suffices to show that \((T^{2})_{\mathcal{S}(T,T^{*})}\) is not selfadjoint. Suppose otherwise that \((T^{2})_{\mathcal{S}(T,T^{*})}\) is selfadjoint. Then \({T^{*}}^{2}=XT^{2}Y\) for some \(X,Y\in\mathcal{S}(T,T^{*})\cup\{I\}\) but not both can be the identity (as \({T^{*}}^{2}\neq T^{2}\) because \(T^{2}\) is also strictly upper triangular). Moreover, Proposition 2.2 applied to \({T^{*}}^{2}=XT^{2}Y\) and its adjoint equation \(T^{2}=Y^{*}{T^{*}}^{2}X^{*}\) implies via its contrapositive that \(X\neq I\) and \(Y\neq I\) so both \(X,Y\in\mathcal{S}(T,T^{*})\). And since \(T^{3}=0\), so \(X,Y\neq T\). Also note that neither \(X\) nor \(Y\) has any higher than one power of \(T\) or \(T^{*}\), since otherwise from the assumptions \(||T||=1\) (as \(T\) is a partial isometry), and \(||T^{2}||<1\) and the assumption \({T^{*}}^{2}=XT^{2}Y\), one obtains either \(||X||<1\) or \(||Y||<1\). This then implies that \(||T^{2}||=||{T^{*}}^{2}||=||XT^{2}Y||<||T^{2}||\), a contradiction.
Now use the hypothesis that \(T\) is a partial isometry (with equivalences: \(T=TT^{*}T\), \(T^{*}=T^{*}TT^{*}\), \(T^{*}T\) is a projection, or \(TT^{*}\) is a projection [4, Corollary 3 of Problem 127]). Since \(\mathcal{S}(T,T^{*})\) consists of all words in \(T\) and \(T^{*}\), those words without higher powers than one fall into the four categories of alternating \(T\) and \(T^{*}\): starting and ending with each of \(T\) or \(T^{*}\). Those starting and ending with \(T\), clearly via the identity \(T=TT^{*}T\), reduce to \(T\), except the case \(X\) or \(Y=T\) which was ruled out above; those starting and ending with \(T^{*}\), clearly via the identity \(T^{*}=T^{*}TT^{*}\), reduce to \(T^{*}\); those starting with \(T\) and ending with \(T^{*}\), clearly via \(TT^{*}\) being a projection, reduce to \(TT^{*}\); and those starting with \(T^{*}\) and ending with \(T\), clearly via \(T^{*}T\) being a projection, reduce to \(T^{*}T\). That is,
\[X,Y\in\{T^{*},TT^{*},T^{*}T\}.\]
And moreover since \(T^{3}=0\), it becomes clear via the contrapositive, because \(T^{2},{T^{*}}^{2}\neq 0\) with \({T^{*}}^{2}=XT^{2}Y\), that
\[X\in\{T^{*},TT^{*}\}\text{ and }Y\in\{T^{*},T^{*}T\}.\]
By considering these four cases \((X,Y)=(T^{*}\) or \(TT^{*},T^{*}\) or \(T^{*}T)\) we can now prove the unsolvability of \({T^{*}}^{2}=XT^{2}Y\). Indeed, when \(X=T^{*}\), one has \({T^{*}}^{2}=XT^{2}Y=T^{*}T^{2}Y\), which by left multiplying by \(T^{*}\) yields \(0={T^{*}}^{3}={T^{*}}^{2}T^{2}Y\). So in the case \(X=T^{*}\), \(Y=T^{*}\), one has \(0={T^{*}}^{2}T^{2}T^{*}\). Then right multiplying by \(T\) and substituting \(T=TT^{*}T\) (an equivalent characterization of partial isometry) we obtain \(0=({T^{*}}^{2}T^{2}T^{*})T=({T^{*}}^{2}T)(TT^{*}T)={T^{*}}^{2}T^{2}\), and since \(||{T^{*}}^{2}T^{2}||=||T^{2}||^{2}\), this implies \(T^{2}=0\), contradicting the \(T\) nilpotency of degree \(3\). On the other hand in the case \(Y=T^{*}T\), substituting \(T=TT^{*}T\) one has \({T^{*}}^{2}=XT^{2}Y=T^{*}T^{2}T^{*}T=T^{*}T^{2}\) and taking adjoints yields \(T^{2}={T^{*}}^{2}T\) implying that \(\operatorname{ran}T^{2}\subset\operatorname{ran}{T^{*}}^{2}\), a contradiction to Proposition 2.2 applied to the nonzero nilpotent operator \(T^{2}\). Therefore, when \(X=T^{*}\) and \(Y\in\{T^{*},T^{*}T\}\), then \({T^{*}}^{2}\neq XT^{2}Y\).
For the cases when \(X=TT^{*}\) and \(Y\in\{T^{*},T^{*}T\}\), substituting again \(T=TT^{*}T\), either \({T^{*}}^{2}=XT^{2}Y=TT^{*}T^{2}T^{*}=T^{2}T^{*}\) or \({T^{*}}^{2}=(TT^{*}T)(TT^{*}T)=T^{2}\). The former equation \({T^{*}}^{2}=T^{2}T^{*}\), after right multiplying by \(T^{*}\), as before implies that \(T^{2}=0\), against \(T\) being nilpotent of degree \(3\) (or one can apply Proposition 2.2 to the nonzero nilpotent operator \(T^{2}\) for a contradiction); and the latter equation implies that \(T^{2}\) is selfadjoint, against \(T^{2}\) being a nonzero nilpotent matrix. Therefore, neither of the required equations holds. This completes the proof of the nonsolvability of the equation \({T^{*}}^{2}=XT^{2}Y\) in \(\mathcal{S}(T,T^{*})\cup\{I\}\), thereby proving that \(\mathcal{S}(T,T^{*})\) is not SI.
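For completeness, here is a direct check, recorded only for the reader's convenience, that the matrix \(T\) displayed at the start of Example 2.7 has the claimed properties:

\[T^{*}T=\begin{pmatrix}0&0&0&0\\ 0&1/2&0&1/2\\ 0&0&1&0\\ 0&1/2&0&1/2\end{pmatrix}=(T^{*}T)^{2},\]

so \(T^{*}T\) is a projection and \(T\) is a partial isometry; moreover \(T^{2}e_{3}=(1/\sqrt{2})e_{1}\) is the only nonzero column of \(T^{2}\), so \(T^{2}\neq 0\), \(T^{3}=0\) and \(||T^{2}||=1/\sqrt{2}<1\).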
_Remark 2.8_.: The condition \(||T^{2}||<1\) in the above example cannot be dropped. For instance, consider \(T:=\begin{bmatrix}0&1&0\\ 0&0&1\\ 0&0&0\end{bmatrix}\). Then \(\mathcal{S}(T,T^{*})\) is SI because \(T\) is a power partial isometry [7, Corollary 1.15], but its square has norm one.
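Indeed, for this \(T\) a direct computation gives \(T^{*}T=\operatorname{diag}(0,1,1)\), \({T^{*}}^{2}T^{2}=\operatorname{diag}(0,0,1)\) and \(T^{3}=0\), so every power of \(T\) is a partial isometry while \(||T^{2}||=1\).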
_Remark 2.9_.: In Example 2.7, it took a \(4\times 4\) matrix example to produce a degree \(3\) nilpotent partial isometry whose square has norm strictly less than one. In fact, after reducing the problem to a strictly upper triangular matrix, with some work it can be shown that there is no such \(3\times 3\) example.
For the nilpotent degree \(2\) case, we have a partial isometry characterization for \(\mathcal{S}(T,T^{*})\) being SI.
**Corollary 2.10**.: _Let \(T\in M_{n}(\mathbb{C})\) be a nilpotent matrix of degree \(2\). Then \(\mathcal{S}(T,T^{*})\) is SI if and only if \(T\) is a partial isometry._
Proof.: \(\Rightarrow\): This is Corollary 2.6.
\(\Leftarrow\): Suppose \(T\) is a partial isometry. Since \(T^{2}=0\), \(T\) is a power partial isometry. Therefore \(\mathcal{S}(T,T^{*})\) is SI by [7, Corollary 1.15].
We next turn to weighted shifts. That is, \(T\in M_{n}(\mathbb{C})\) for which \(Te_{j}=\alpha_{j}e_{j+1}\) for \(1\leq j\leq n-1\) and \(Te_{n}=0\).
**Corollary 2.11**.: _Let \(T\in M_{n}(\mathbb{C})\) be a weighted shift matrix with weights \(\{\alpha_{j}\}_{j=1}^{n-1}\). Then \(\mathcal{S}(T,T^{*})\) is SI if and only if \(T\) is a partial isometry._
Proof.: Clearly SI and partial isometry are unitary invariant properties. Since \(T\) is unitarily equivalent to the matrix with weights \(\{|\alpha_{j}|\}_{j=1}^{n-1}\) ([4, Problem 89]), without loss of generality we may assume \(\alpha_{j}\geq 0\). Suppose \(\mathcal{S}(T,T^{*})\) is SI. Then by Corollary 2.6, it follows that \(T\) is a partial isometry. Conversely, if \(T\) is a partial isometry, this is equivalent to \(T^{*}T=\operatorname{diag}(\alpha_{1}^{2},\cdots,\alpha_{n-1}^{2},0)\) being a projection. This further implies that the nonzero \(\alpha_{j}\)'s must be equal to \(1\). Then by a straightforward computation one sees that \(T^{*k}\) and \(T^{k}\) are respectively upper and lower \(k\) diagonals with weights products of the \(\alpha_{j}\)'s and that the diagonal matrix \(T^{*k}T^{k}\) is also a projection for each \(k\geq 2\). Therefore, \(T\) is a power partial isometry and hence by [7, Corollary 1.15], \(\mathcal{S}(T,T^{*})\) is SI.
**Theorem 2.12**.: _For weighted shifts, the following are equivalent._
1. \(\mathcal{S}(T,T^{*})\) _is SI._
2. \(T\) _is a partial isometry._
3. \(T\) _is a power partial isometry._

These equivalences are immediate from Corollary 2.11 and its proof: (1)\(\Leftrightarrow\)(2) is that corollary, its proof shows that a weighted shift partial isometry is automatically a power partial isometry, giving (2)\(\Rightarrow\)(3), and (3)\(\Rightarrow\)(2) is trivial.
We conclude this section with a theorem on the simplicity of semigroups when generated by an invertible matrix, whose applications are seen in Corollary 3.4 and Theorem 3.5. In general, a group is always simple (since having inverses, every nonzero multiplicative ideal contains the identity, hence is the whole group), but a semigroup need not be simple ([7] abounds with examples, and likewise herein none of the non-SI semigroups can be simple). In Theorem 2.13, we prove that under the SI property of \(\mathcal{S}(T,T^{*})\), the invertibility of a nonselfadjoint \(T\) implies that \(\mathcal{S}(T,T^{*})\) is a group and hence simple. That is, for the class of semigroups \(\mathcal{S}(T,T^{*})\) with \(T\) nonselfadjoint and invertible, SI and simplicity are equivalent.
**Theorem 2.13**.: _Let \(T\in M_{n}(\mathbb{C})\) be a nonselfadjoint invertible matrix. Then \(\mathcal{S}(T,T^{*})\) is an SI semigroup if and only if \(\mathcal{S}(T,T^{*})\) is simple._
Proof.: If \(\mathcal{S}(T,T^{*})\) is simple, then it has no proper ideals, so every principal ideal is the whole selfadjoint semigroup \(\mathcal{S}(T,T^{*})\) and hence the semigroup is trivially SI.
But to prove that SI semigroups \(\mathcal{S}(T,T^{*})\) are simple requires work. First, we recall the semigroup list Equation (1):
\(\mathcal{S}(T,T^{*})=\{T^{n},T^{*n},\Pi_{j=1}^{k}T^{n_{j}}T^{*m_{j}},(\Pi_{j= 1}^{k}T^{n_{j}}T^{*m_{j}})T^{n_{k+1}},\Pi_{j=1}^{k}T^{*m_{j}}T^{n_{j}},(\Pi_{ j=1}^{k}T^{*m_{j}}T^{n_{j}})T^{*m_{k+1}}\},\)
where \(n\geq 1\), \(k\geq 1\), \(n_{j},m_{j}\geq 1\) for \(1\leq j\leq k\) and \(m_{k+1}\geq 1,n_{k+1}\geq 1\).
Now suppose \(\mathcal{S}(T,T^{*})\) is SI. Then \((T)_{\mathcal{S}(T,T^{*})}\) is a selfadjoint ideal, in particular. So \(T^{*}=XTY\) for some \(X,Y\in\mathcal{S}(T,T^{*})\cup\{I\}\), where not both \(X,Y\) are the identity because \(T\) is nonselfadjoint. Observe that the only words in the above semigroup list without a \(T\) in them are those of the second form \(T^{*n}\), which \(XTY\) cannot be since the word \(XTY\) contains a \(T\); and because \(T\) is nonselfadjoint, \(XTY=T^{*}\) cannot take the form \(T^{n}\) with \(n=1\). Therefore \(XTY\) must take at least one of the remaining forms, all of which by observation take one of the following four possible forms.
\[\text{(i)}\,XTY=T^{*}WT,\quad\text{(ii)}\,XTY=TWT^{*},\quad\text{(iii)}\,XTY=T ^{*}WT^{*},\quad\text{ and}\quad\text{(iv)}\,XTY=TWT,\]
where \(W\in\mathcal{S}(T,T^{*})\cup\{I\}\).
Case (i). If \(XTY=T^{*}WT\), then \(T^{*}=T^{*}WT\). Since \(T\) is not selfadjoint, \(W\neq I\). Left multiplying equation \(T^{*}=T^{*}WT\) by the inverse \(T^{*}{}^{-1}\), one obtains \(I=WT\in\mathcal{S}(T,T^{*})\) and so \(T^{-1}=W\in\mathcal{S}(T,T^{*})\).
Hence in this case, since the inverse of the generator is also in the semigroup, every word in \(T\) and \(T^{*}\) has an inverse in the semigroup, that is, every member of \(\mathcal{S}(T,T^{*})\) has its inverse in the semigroup and so the semigroup contains the identity \(I\). And so the semigroup is a group, hence it is simple. Case (ii) can be handled similarly as in Case (i).
Case (iii). If \(XTY=T^{*}WT^{*}\), then \(T^{*}=T^{*}WT^{*}\). Here also, \(W\neq I\). Otherwise, \(T^{*}=T^{*2}\) and the invertibility of \(T^{*}\) implies that \(T^{*}=I\) which is selfadjoint contradicting the nonselfadjointness of \(T\). Then since \(W\neq I\), again left multiplying by the inverse \(T^{*-1}\) in the equation \(T^{*}=T^{*}WT^{*}\), one obtains, \(I=WT^{*}\). So, \(T^{*-1}=W\in\mathcal{S}(T,T^{*})\). Therefore, \(\mathcal{S}(T,T^{*})\) forms a group and hence is simple.
Case (iv). If \(XTY=TWT\), then \(T^{*}=TWT\). We first claim that if \(T^{*}=TWT\), then \(I\in\mathcal{S}(T,T^{*})\). Since \(T^{*}=TWT\), one has \(T^{*}T^{-1}=TW\) and \(T^{-1}T^{*}=WT\). Therefore, \(T^{*}T^{-1}\) and \(T^{-1}T^{*}\in\mathcal{S}(T,T^{*})\). Then \(TT^{*-1}=(T^{-1}T^{*})^{*}\in\mathcal{S}(T,T^{*})\), since \(\mathcal{S}(T,T^{*})\) is a selfadjoint semigroup. Hence, \(I=(T^{*}T^{-1})(TT^{*-1})\in\mathcal{S}(T,T^{*})\). And since \(T^{*}=TWT\), so \(T^{-1}T^{*}T^{-1}=W\in\mathcal{S}(T,T^{*})\). But then also one has \(T=T^{*}W^{*}T^{*}\) implying \(TT^{*-1}=T^{*}W^{*}\in\mathcal{S}(T,T^{*})\), where the latter inclusion holds because \(W\in\mathcal{S}(T,T^{*})\cup\{I\}\) so \(W^{*}\in\mathcal{S}(T,T^{*})\cup\{I\}\) again by the selfadjointness of \(\mathcal{S}(T,T^{*})\), hence \(TT^{*-1}=T^{*}W^{*}\in\mathcal{S}(T,T^{*})\). Therefore \((T^{*}T^{-1})^{-1}=TT^{*-1}\in\mathcal{S}(T,T^{*})\). And finally because \(T^{-1}T^{*}T^{-1}\in\mathcal{S}(T,T^{*})\) one has \(T^{-1}=(T^{-1}T^{*}T^{-1})((T^{*}T^{-1})^{-1})\in\mathcal{S}(T,T^{*})\) so all words in \(T\) and \(T^{*}\) have inverses in \(\mathcal{S}(T,T^{*})\) and \(I\in\mathcal{S}(T,T^{*})\). So again \(\mathcal{S}(T,T^{*})\) is a group and hence is simple.
An immediate application of this Theorem 2.13 combined with our first paper on the subject [7, Summary preceding Remark 3.21] classifies all SI semigroups \(\mathcal{S}(T,T^{*})\) generated by \(T\in M_{2}(\mathbb{C})\). We caution that this result may not hold for rank two operators in \(F(\mathcal{H})\).
**Corollary 2.14**.: _Every nonzero matrix in \(M_{2}(\mathbb{C})\) has rank one or two. For a complete classification of when \(\mathcal{S}(T,T^{*})\) is SI:_
_In the rank one case, see [7, Summary preceding Remark 3.21] for a complete classification._
_In the rank two case, the nonselfadjoint matrix is invertible and hence its generated semigroup is SI if and only if it is simple._
## 3. Characterization of SI semigroups \(\mathcal{S}(T,T^{*})\) generated by a Jordan matrix \(T\)
In our earlier study of the SI property of semigroups, we recall from [7, Theorem 1.21] that the SI property of a semigroup is unitary invariant, but not similarity invariant. In particular, if \(A\) and \(B\) are similar matrices (but not unitarily similar) and \(\mathcal{S}(A,A^{*})\) is an SI semigroup, then \(\mathcal{S}(B,B^{*})\) may not be an SI semigroup (see [7, Remark 3.22]). From linear algebra, we know that every finite matrix \(A\) is unitarily equivalent to an upper triangular matrix \(B\). But when \(A\) and \(B\) are unitarily equivalent, \(\mathcal{S}(A,A^{*})\) is an SI semigroup if and only if \(\mathcal{S}(B,B^{*})\) is an SI semigroup, so one would naturally be inclined instead to investigate the SI property of \(\mathcal{S}(B,B^{*})\).
For nonselfadjoint invertible matrices \(B\), \(\mathcal{S}(B,B^{*})\) is SI if and only if it is simple (Theorem 2.13). But for a non-invertible \(B\), the study of the SI property of \(\mathcal{S}(B,B^{*})\) where \(B\) is an upper triangular matrix, even in the case of a \(3\times 3\) matrix, seemed to us computationally complicated, and in higher dimensions, even more intractable.
Nevertheless, we know from linear algebra that every matrix is similar (via an invertible, not necessarily unitary, transformation) to a matrix in Jordan form and to one in rational canonical form (i.e., a direct sum of Jordan and rational canonical blocks, respectively). These similarity equivalent matrices are central to the study of linear algebra in part because some of the crucial properties of matrices are similarity invariant, as for instance, rank, nullity, determinant, and trace. But because rational canonical matrices are much more complicated than Jordan matrices to compute with in solving the necessary bilinear equations, in this section we investigate the SI nature of Jordan matrices. We found Jordan matrices much more approachable albeit nontrivial.
In this section, we focus on a characterization of SI semigroups \(\mathcal{S}(T,T^{*})\) generated by a Jordan matrix \(T\). By a Jordan matrix \(T\) (i.e., Jordan form) we will always mean a block diagonal matrix \(T=\oplus_{i=1}^{k}J_{m_{i}}(\lambda_{i})\), where \(k\geq 1\) and \(J_{m_{i}}(\lambda_{i})\) is a Jordan block (the \(m_{i}\times m_{i}\) block corresponding to the eigenvalue \(\lambda_{i}\), which has all diagonal entries equal to \(\lambda_{i}\) and all superdiagonal entries equal to one; see Equation (2) below). Note that a Jordan matrix is a nonnormal matrix whenever at least one of its blocks is larger than \(1\times 1\). And a Jordan block is invertible if and only if its \(\lambda\neq 0\).
Our focus in this section is to obtain a necessary and sufficient condition on the Jordan matrix \(A\) for \(\mathcal{S}(A,A^{*})\) to be SI. Theorem 2.5 provides in the nonselfadjoint case an either/or necessary condition (\(\ker A=\ker A^{2}\) or \(A\) is a partial isometry) for \(\mathcal{S}(A,A^{*})\) to be SI. Also, since \(\mathcal{S}(A,A^{*})\) being SI implies each \(\mathcal{S}(J_{m_{i}}(\lambda_{i}),J_{m_{i}}^{*}(\lambda_{i}))\) is SI for \(1\leq i\leq k\) (Proposition 3.9), then using Proposition 3.11, a necessary condition for each nonselfadjoint invertible Jordan block to generate an SI semigroup is that \(|\lambda_{i}|=1\) for each nonzero \(\lambda_{i}\). So in order to obtain a sufficient condition on \(A\) for \(\mathcal{S}(A,A^{*})\) to be SI, we must restrict to the unit circle the nonzero \(\lambda\)'s that appear in the Jordan blocks of the Jordan matrix \(A\). We first investigate the selfadjoint semigroup generated by each Jordan block and establish several results that lead to a sufficient condition for \(\mathcal{S}(A,A^{*})\) to be an SI semigroup. We then prove in Theorem 3.14 the necessary and sufficient condition that Jordan matrix \(A\) be a power partial isometry in order for \(\mathcal{S}(A,A^{*})\) to be an SI semigroup.
For the study of the SI property of \(\mathcal{S}(T,T^{*})\) generated by a Jordan matrix \(T\), motivated by Theorem 2.5, we split the class of Jordan matrices into two cases: (i) \(\ker T\neq\ker T^{2}\) and (ii) \(\ker T=\ker T^{2}\). Since a Jordan matrix is the direct sum of Jordan blocks and its SI status is determined by the SI status of its Jordan blocks (Proposition 3.9), we will first consider the special case when \(T\) is a single nonselfadjoint Jordan block, i.e., \(T=J_{m}(\lambda)\) with \(m\geq 2\); and in the special case when \(|\lambda|=1\) we will prove that for such a matrix \(T\), \(\mathcal{S}(T,T^{*})\) is not an SI semigroup. Later on, we will use this special case to obtain a characterization of SI semigroups \(\mathcal{S}(T,T^{*})\) generated by a Jordan matrix \(T\). We may occasionally write \(J_{m}\) for \(J_{m}(\lambda)\) and \(\mathcal{W}\) for a word \(\mathcal{W}(T,T^{*})\) in \(T,T^{*}\) whenever \(\lambda\) and the matrix \(T\) are clear from the context, respectively. First we make a few observations about \(J_{N}(\lambda)\). Firstly, \(1\times 1\) Jordan blocks are simply those that have entry \(\lambda\), the only normal Jordan blocks. For higher dimensions nonnormality is easily verified.
**Observations:** For any \(N\geq 2\) the Jordan block matrix \(J_{N}(\lambda)\) is given by:
\[J_{N}(\lambda)=\begin{bmatrix}\lambda&1&0&\cdots&0\\ 0&\lambda&1&\cdots&0\\ \vdots&\vdots&\vdots&\ddots&1\\ 0&0&0&\cdots&\lambda\end{bmatrix}_{N\times N} \tag{2}\]
Let \(W\) be the constant weighted shift matrix for \(\lambda\neq 0\) defined as:
\[We_{i}:=\begin{cases}(1/\bar{\lambda})\,e_{i+1}\ \text{ if }1\leq i\leq N-1\\ 0&\text{ if }i=N\end{cases} \tag{3}\]
Then, one can rewrite \(J_{N}(\lambda)\) as
\[J_{N}(\lambda)=\begin{bmatrix}\lambda&1&0&\cdots&0\\ 0&\lambda&1&\cdots&0\\ \vdots&\vdots&\vdots&\ddots&1\\ 0&0&0&\cdots&\lambda\end{bmatrix}_{N\times N}=\lambda(I+W^{*}) \tag{4}\]
Note that \(W^{N}=W^{*N}=0\), while \(W^{N-1}\neq 0\) and \(W^{*N-1}\neq 0\). Let \(T:=I+W^{*}\); then \(J_{N}(\lambda)=\lambda T\) from Equation (4). We rewrite \(T\) this way and work with \(W\) instead because it is computationally more convenient and useful for characterizing the SI semigroup \(\mathcal{S}(T,T^{*})\).
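For instance, in the smallest case \(N=2\) (recorded only to fix the notation), \(We_{1}=(1/\bar{\lambda})e_{2}\), \(We_{2}=0\), \(W^{*}e_{2}=(1/\lambda)e_{1}\), and

\[\lambda(I+W^{*})=\lambda\begin{bmatrix}1&1/\lambda\\ 0&1\end{bmatrix}=\begin{bmatrix}\lambda&1\\ 0&\lambda\end{bmatrix}=J_{2}(\lambda),\]

with \(W^{2}=0\) and \(W\neq 0\), as claimed.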
We next show via the binomial expansion that an arbitrary word in \(T\) and \(T^{*}\) has a certain form in terms of a certain kind of polynomial in \(W\) and \(W^{*}\).
\[T^{n}=(I+W^{*})^{n}=I+\tbinom{n}{1}W^{*}+\tbinom{n}{2}W^{*2}+\cdots+\tbinom{n}{k_ {n}}W^{*k_{n}}\quad\text{ for }n\geq 1 \tag{5}\]
where \(k_{n}\) is a positive integer depending on \(n\) such that \(k_{n}=n\) for \(n<N-1\) and \(k_{n}=N-1\) for \(n\geq N-1\) (because \(W^{*N}=0\)). So also,
\[T^{*m}=(I+W)^{m}=I+\tbinom{m}{1}W+\tbinom{m}{2}W^{2}+\cdots+\tbinom{m}{k_{m}}W ^{k_{m}}\quad\text{ for }m\geq 1 \tag{6}\]
where \(k_{m}=m\) for \(m<N-1\) and \(k_{m}=N-1\) for \(m\geq N-1\) (as \(W^{N}=0\)). Note that all the coefficients of the powers of \(W^{*}\) and \(W\) in Equations (5)-(6) are positive integers, respectively. From these Equations (5)-(6) it clearly follows that any word \(\mathcal{W}(T,T^{*})\) in \(T\) and \(T^{*}\) is given by:
\[\mathcal{W}(T,T^{*})=I+\mathcal{P}(W,W^{*}) \tag{7}\]
where \(I\) is the \(N\times N\) identity matrix and \(\mathcal{P}(W,W^{*})\) is a polynomial in \(W,W^{*}\) with no constant term and the coefficient of each monomial that appears in the polynomial is a positive integer.
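For example, the shortest genuinely mixed words already exhibit the form of Equation (7): by Equations (5)-(6),

\[TT^{*}=(I+W^{*})(I+W)=I+\big(W+W^{*}+W^{*}W\big)\quad\text{and}\quad T^{*}T=I+\big(W+W^{*}+WW^{*}\big),\]

where the bracketed terms are polynomials in \(W,W^{*}\) with no constant term and coefficient \(1\) on every monomial.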
For the constant weighted shift \(W\) defined in Equation (3) (the case \(|\lambda|=1\) being the one needed for this paper), we next prove the following proposition about words in \(W,W^{*}\):
**Proposition 3.1**.: _Let \(W\) be the constant weighted shift defined in Equation (3) with \(\lambda\neq 0\). For \(\mathcal{W}\) a word in \(W,W^{*}\) with nonzero diagonal, its nonzero diagonal entries are all equal to \({|\lambda|}^{-2p}\) for some \(p\geq 1\), and hence all equal to \(1\) in the case \(|\lambda|=1\)._
Proof.: It follows from the definition given in Equation (3) that for any powers \(n,m\geq 1\) one has:
\[W^{n}e_{i}=(1/\bar{\lambda}^{n})e_{i+n}\quad\text{if}\quad W^{n}e_{i}\neq 0 \quad\text{(i.e., $1\leq n\leq N-1$ and $1\leq i\leq N-n$)}. \tag{8}\]
Also for any power \(m\geq 1\) one has:
\[W^{*m}e_{i}=(1/\lambda^{m})e_{i-m}\quad\text{if}\quad W^{*m}e_{i}\neq 0\quad \text{(i.e., $1\leq m\leq N-1$ and $m+1\leq i\leq N$)}. \tag{9}\]
Also any word in \(W,W^{*}\) has at least one of the following possible forms by Equation (1):
\[\{W^{n},W^{*n},\Pi^{k}_{j=1}W^{*m_{j}}W^{n_{j}},(\Pi^{k}_{j=1}W^{*m_{j}}W^{n_ {j}})W^{*m_{k+1}},\Pi^{k}_{j=1}W^{n_{j}}W^{*m_{j}},(\Pi^{k}_{j=1}W^{n_{j}}W^{* m_{j}})W^{n_{k+1}}\} \tag{10}\]
where \(n\geq 1\), \(k\geq 1\), \(n_{j},m_{j}\geq 1\) for \(1\leq j\leq k\) and \(n_{k+1},m_{k+1}\geq 1\). It is clear, as these are weighted shifts or their adjoints, or from Equations (8)-(9), that for \(n\geq 1\), \(W^{n}\) and \(W^{*n}\) have their main diagonal entries all equal to zero. By hypothesis \(\mathcal{W}\) has its main diagonal entries not all zero, so \(\mathcal{W}\) must have one of the last four forms in Equation (10) above. Suppose \(\mathcal{W}\) has the third form, i.e., \(\mathcal{W}=\Pi^{k}_{j=1}W^{*m_{j}}W^{n_{j}}\) with at least some \(1\leq i\leq N\) with \(\langle\mathcal{W}e_{i},e_{i}\rangle\neq 0\). Then
\[0\neq\mathcal{W}e_{i}=\Pi^{k}_{j=1}W^{*m_{j}}W^{n_{j}}e_{i} \tag{11}\]
Since \(\mathcal{W}e_{i}\neq 0\), one has \(W^{*m_{k}}W^{n_{k}}e_{i}\neq 0\), therefore \(W^{n_{k}}e_{i}\neq 0\) and \(W^{*m_{k}}e_{i+n_{k}}\neq 0\), so by Equations (8)-(9) one obtains,
\[0\neq W^{*m_{k}}W^{n_{k}}e_{i}=(1/\bar{\lambda}^{n_{k}})W^{*m_{k}}e_{i+n_{k}}=( 1/\bar{\lambda}^{n_{k}}\lambda^{m_{k}})e_{i+(n_{k}-m_{k})}\]
with the nonzeroness dictating \(1\leq i+(n_{k}-m_{k})\leq N\) and the other three constraints in Equations (8)-(9).
And notice the coefficient remains fixed at \((1/\bar{\lambda}^{n_{k}}\lambda^{m_{k}})\) for all such \(i\). Then proceeding to peel off terms, Equation (11) becomes, for the first equality when \(k\geq 2\) and for the second equality when \(k\geq 3\),
\[0\neq\mathcal{W}e_{i} =(1/\bar{\lambda}^{n_{k}}\lambda^{m_{k}})\Pi^{k-1}_{j=1}W^{*m_{j} }W^{n_{j}}e_{i+(n_{k}-m_{k})}\] \[=(1/\bar{\lambda}^{n_{k}}\lambda^{m_{k}})\Pi^{k-2}_{j=1}W^{*m_{j} }W^{n_{j}}(W^{*m_{k-1}}W^{n_{k-1}}e_{i+(n_{k}-m_{k})}).\]
That is,
\[0\neq\mathcal{W}e_{i}=(1/\bar{\lambda}^{n_{k}}\lambda^{m_{k}})\Pi^{k-2}_{j=1}W^ {*m_{j}}W^{n_{j}}(W^{*m_{k-1}}W^{n_{k-1}}e_{i+(n_{k}-m_{k})}) \tag{12}\]
and again \({W^{*}}^{m_{k-1}}{W^{n_{k-1}}}e_{i+(n_{k}-m_{k})}\neq 0\), so using again Equations (8)-(9) one obtains,
\[{W^{*}}^{m_{k-1}}{W^{n_{k-1}}}e_{i+(n_{k}-m_{k})}=(1/\bar{\lambda}^{n_{k-1}} \lambda^{m_{k-1}})e_{i+(n_{k}-m_{k})+(n_{k-1}-m_{k-1})}\]
and as before these subscripts from the nonzeroness are dictated by the constraints in Equations (8)-(9). Then substituting this value in Equation (12) one has:
\[0\neq\mathcal{W}e_{i}=(1/\bar{\lambda}^{(n_{k}+n_{k-1})}\lambda^{(m_{k}+m_{k- 1})})\Pi_{j=1}^{k-2}{W^{*}}^{m_{j}}{W^{n_{j}}}e_{i+(n_{k}-m_{k})+(n_{k-1}-m_{k-1 })}.\]
Continuing this successively, after \(k\)-steps, one finally obtains,
\[\mathcal{W}e_{i}=(1/\bar{\lambda}^{p}\lambda^{q})e_{i+p-q} \tag{13}\]
where \(p:=\sum_{j=1}^{k}n_{j}\) and \(q:=\sum_{j=1}^{k}m_{j}\) are the sum of powers of \(W\) and the sum of powers of \(W^{*}\) that appear in \(\mathcal{W}\), respectively. Since \(\langle\mathcal{W}e_{i},e_{i}\rangle\neq 0\), it follows that
\[0\neq\langle\mathcal{W}e_{i},e_{i}\rangle=(1/\bar{\lambda}^{p}\lambda^{q}) \left\langle e_{i+p-q},e_{i}\right\rangle,\]
hence \(\left\langle e_{i+p-q},e_{i}\right\rangle\neq 0\). Therefore, \(i+p-q=i\), or equivalently, \(p=q\) and hence for all such \(i\),
\[\langle\mathcal{W}e_{i},e_{i}\rangle=\left|\lambda\right|^{-2p}\]
This completes the proof for the case when \(\mathcal{W}\) has the third form.

For the cases when \(\mathcal{W}\) has one of the other forms in the list given by Equation (10), one can similarly compute the expressions for \(\mathcal{W}e_{i}\) using Equations (8)-(9) to obtain the same conclusion given in Equation (13). Therefore, all the nonzero diagonal entries of the main diagonal of \(\mathcal{W}\) are equal to \(\left|\lambda\right|^{-2p}\).
_Remark 3.2_.: Let \(\mathcal{P}(W,W^{*})\) be a polynomial in \(W,W^{*}\) with positive integer coefficients and with no constant term. Then when \(\lambda\in\mathbb{S}^{1}\), it follows from Proposition 3.1 that the entries of the main diagonal of \(\mathcal{P}(W,W^{*})\) are nonnegative integers. More generally, again from Proposition 3.1, for \(\lambda\neq 0\), the diagonal entries of the main diagonal of \(\mathcal{P}(W,W^{*})\) are nonnegative numbers.
Recall that our goal is to prove that for \(N\geq 2\) and \(\lambda\in\mathbb{S}^{1}\), the semigroup \(\mathcal{S}(T,T^{*})\) generated by \(J_{N}(\lambda)\) is not an SI semigroup (see Theorem 3.5 below). Towards proving this, we first show that for \(T:=I+W^{*}\) where \(W\) is defined in Equation (3), the \((1,1)\)-diagonal entry of each member in \(({T^{n}}{T^{*}}^{m})_{\mathcal{S}(T,T^{*})}\), the principal ideal in \(\mathcal{S}(T,T^{*})\) generated by \({T^{n}}{T^{*}}^{m}\), for \(n,m\geq 1\), is a number strictly greater than one. As a consequence, in Corollary 3.4, we prove that the semigroup \(\mathcal{S}(T,T^{*})\) is not SI when \(T=I+W^{*}\).
**Lemma 3.3**.: _For \(T=I+W^{*}\) where \(W\) is defined in Equation (3) with \(\lambda\neq 0\) and \(n,m\geq 1\), each member of \(({T^{n}}{T^{*}}^{m})_{\mathcal{S}(T,T^{*})}\) has its \((1,1)\)-diagonal entry greater than or equal to \(1+nm|\lambda|^{-2}\)._
Proof.: Using Equations (5)-(6), we express \({T^{n}}{T^{*}}^{m}\) in terms of \(W,W^{*}\) as follows:
\[{T^{n}}{T^{*}}^{m}= \{I+{n\choose 1}W^{*}+{n\choose 2}{W^{*}}^{2}+\cdots{n\choose k _{n}}{W^{*}}^{k_{n}}\}\{I+{m\choose 1}W+{m\choose 2}W^{2}+\cdots+{m\choose k _{m}}W^{k_{m}}\}\] \[=\{I+n_{1}W^{*}+n_{2}{W^{*}}^{2}+\cdots{n_{k_{n}}}{W^{*}}^{k_{n}} \}\{I+m_{1}W+m_{2}W^{2}+\cdots+m_{k_{m}}W^{k_{m}}\}\] \[(n_{i},m_{j}\text{ positive integers depending on }n,m\text{ with }1\leq i\leq k_{n},1\leq j \leq k_{m}\text{ and }n_{1},m_{1}=n,m)\] \[=I+nmW^{*}W+\mathcal{Q}(W,W^{*})\]
where \(\mathcal{Q}(W,W^{*})\) is some polynomial in \(W,W^{*}\) with no constant term and the coefficients of all nonzero terms in \(\mathcal{Q}(W,W^{*})\) are positive integers. Observe that since \(W^{*}W=|\lambda|^{-2}I_{N-1}\oplus 0\), one has for \(I+nmW^{*}W\), its \((1,1)\)-diagonal entry equal to \(1+nm|\lambda|^{-2}\).
For every \(B\in({T^{n}}{T^{*}}^{m})_{\mathcal{S}(T,T^{*})}\) one has \(B=X({T^{n}}{T^{*}}^{m})Y\) for some \(X,Y\in\mathcal{S}(T,T^{*})\cup\{I\}\) [7, Lemma 1.7]. If \(X\) (or \(Y\)) \(\in\mathcal{S}(T,T^{*})\), then \(X\) (or \(Y\)) is a word in \(T,T^{*}\) and so from Equation (7), \(X\) (or \(Y\)) is of the form \(I+\mathcal{P}(W,W^{*})\), where \(\mathcal{P}(W,W^{*})\) is a polynomial in \(W,W^{*}\) with no constant term and with positive integer coefficients. For the cases \(X\) (or \(Y\)) \(=I\), we take \(\mathcal{P}(W,W^{*})\) to be the zero polynomial. With this convention, one can rewrite \(X(T^{n}T^{*m})Y\) as:
\[X(T^{n}T^{*m})Y=(I+\mathcal{P}_{1}(W,W^{*}))(I+nmW^{*}W+\mathcal{Q}(W,W^{*}))(I+ \mathcal{P}_{2}(W,W^{*})) \tag{14}\]
where \(\mathcal{P}_{i}(W,W^{*})\) is either the zero polynomial or a polynomial in \(W,W^{*}\) with no constant term and with positive integer coefficients for \(i=1,2\), and \(\mathcal{Q}(W,W^{*})\) is a polynomial in \(W,W^{*}\) with no constant term and which also has positive integer coefficients. Therefore, one can further simplify the expression in Equation (14) and rewrite it as
\[X(T^{n}T^{*m})Y=I+nmW^{*}W+\mathcal{P}(W,W^{*}),\]
where \(\mathcal{P}(W,W^{*})\) is a polynomial in \(W,W^{*}\) with no constant term and with positive integer coefficients. From Remark 3.2, \(\mathcal{P}(W,W^{*})\) has all the diagonal entries of its main diagonal as nonnegative numbers. Also, we discussed earlier that \(I+nmW^{*}W\) has its \((1,1)\)-diagonal entry \(1+nm|\lambda|^{-2}\), and hence \(I+nmW^{*}W+\mathcal{P}(W,W^{*})\) has its \((1,1)\)-diagonal entry greater than or equal to \(1+nm|\lambda|^{-2}\). Therefore \(X(T^{n}T^{*m})Y\) has its \((1,1)\)-diagonal entry greater than or equal to \(1+nm|\lambda|^{-2}\). Since \(B\) is arbitrary, this completes the proof.
**Corollary 3.4**.: _For \(T=I+W^{*}\) where \(W\) is defined in Equation (3) with \(\lambda\neq 0\), the semigroup \(\mathcal{S}(T,T^{*})\) is not an SI semigroup._
Proof.: Suppose \(\mathcal{S}(T,T^{*})\) is an SI semigroup. Since \(T\) is a nonselfadjoint invertible matrix, it follows from the SI equivalence Theorem 2.13 that \(\mathcal{S}(T,T^{*})\) is simple. This implies that \(T\) is contained in every nonzero principal ideal, in particular, in the principal ideal \((TT^{*})_{\mathcal{S}(T,T^{*})}\). But by Lemma 3.3, every element of this principal ideal \((XTT^{*}Y\) with \(X,Y\in\mathcal{S}(T,T^{*})\cup\{I\})\) has its \((1,1)\)-diagonal entry bigger than one and the \((1,1)\)-diagonal entry of \(T\) is precisely one. So, \(T\) is not in this principal ideal, contradicting the simplicity. Therefore, \(\mathcal{S}(T,T^{*})\) is not an SI semigroup.
We now prove the nonsimplicity of the semigroup \(\mathcal{S}(A,A^{*})\) generated by \(A=J_{N}(\lambda)\), where \(N\geq 2\) and \(|\lambda|\geq 1\).
**Theorem 3.5**.: _For any \(N\geq 2,|\lambda|\geq 1\) and \(A:=J_{N}(\lambda)\), the semigroup \(\mathcal{S}(A,A^{*})\) is not an SI semigroup._
Proof.: As mentioned in Equation (4), \(A=\lambda(I+W^{*})=\lambda T\), where \(T=I+W^{*}\). By Theorem 2.13, since \(A\) is invertible, \(\mathcal{S}(A,A^{*})\) possesses the SI property if and only if it is simple. So it suffices to prove nonsimplicity. For proving the non-simplicity of \(\mathcal{S}(A,A^{*})\), it suffices to show that \(A\notin(AA^{*})_{\mathcal{S}(A,A^{*})}\). Suppose \(A\in(AA^{*})_{\mathcal{S}(A,A^{*})}\). Then
\[A=X(AA^{*})Y\]
for some \(X,Y\in\mathcal{S}(A,A^{*})\cup\{I\}\) at least one of which is not the identity since \(A\) is nonselfadjoint. Replacing \(A\) by \(\lambda T\) in the above display one obtains:
\[\lambda T=\lambda^{r}\overline{\lambda}^{k}(X^{\prime}TT^{*}Y^{\prime}) \tag{15}\]
where \(r,k\geq 1,r+k\geq 3\) and \(X^{\prime},Y^{\prime}\in\mathcal{S}(T,T^{*})\cup\{I\}\). Note that \(X^{\prime}TT^{*}Y^{\prime}\in(TT^{*})_{\mathcal{S}(T,T^{*})}\) and hence by Lemma 3.3 (the case \(n,m=1\)), the \((1,1)\)-diagonal entry of \(X^{\prime}TT^{*}Y^{\prime}\) is greater than or equal to \(1+|\lambda|^{-2}\). Hence, it follows that the matrix \(\lambda^{r}\overline{\lambda}^{k}(X^{\prime}TT^{*}Y^{\prime})\) in Equation (15) has its \((1,1)\)-diagonal entry with modulus greater than or equal to \(|\lambda|^{p}(1+|\lambda|^{-2})\) for some \(p\geq 3\), which itself is strictly greater than \(|\lambda|\) as \(|\lambda|\geq 1\). So we arrive at a contradiction as \(\lambda T\) has all its diagonal entries equal to \(\lambda\). Hence, \(A\notin(AA^{*})_{\mathcal{S}(A,A^{*})}\) and so \(\mathcal{S}(A,A^{*})\) is not simple.
We are now ready to investigate the general case of nonselfadjoint Jordan matrices and obtain a power partial isometry characterization of all SI semigroups \(\mathcal{S}(A,A^{*})\) generated by nonselfadjoint Jordan matrices \(A\) (Theorem 3.14). As all but \(1\times 1\) Jordan blocks are obviously nonselfadjoint (even nonnormal, as a direct computation shows), motivated by Theorem 2.5 we divide our approach into studying three
cases: \(\ker A\neq\ker A^{2}\), \(\ker A=\ker A^{2}\) and the partial isometry case. Towards this, we first characterize partially isometric Jordan matrices in the following proposition. (A partially isometric Jordan matrix is a Jordan matrix which is also a partial isometry.) Following that we investigate the SI property of \(\mathcal{S}(A,A^{*})\) by considering the two cases separately for the Jordan matrix \(A\): \(\ker A\neq\ker A^{2}\) and \(\ker A=\ker A^{2}\).
**Proposition 3.6**.: _A partially isometric Jordan matrix \(A\) is unitarily equivalent to \(U\oplus B\) (either summand may be absent), where \(U\) is a diagonal unitary matrix and \(B\) is a direct sum of shifts._
Proof.: Any direct sum is partially isometric if and only if each of its blocks is partially isometric. This is clear using the facts that partial isometries are those operators whose absolute values are projections, and projections are selfadjoint idempotents. Therefore Jordan matrix \(A\) is a partial isometry if and only if each of its Jordan blocks is. It is clear (from these characterizations for instance) that a \(1\times 1\) Jordan block is a partial isometry if and only if \(|\lambda|=0\) or \(1\), and for \(N\geq 2\), the larger shifts are partial isometries (the cases \(\lambda=0\)), but for \(\lambda\neq 0\), those Jordan blocks have norms bounded below by some column norms \(\sqrt{1+|\lambda|^{2}}\) exceeding the norm of nonzero partial isometries which is \(1\). In short, reordering the basis, the direct sum of all nonzero \(1\times 1\) Jordan matrices, if any, forms \(U\), and the direct sum of the rest (the \(1\times 1\) zero blocks and the larger shifts), if any, forms \(B\).
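A minimal illustration of Proposition 3.6: the partially isometric Jordan matrix \(A:=J_{1}(e^{i\theta})\oplus J_{2}(0)\) is already in the stated form, with \(U=[e^{i\theta}]\) a diagonal unitary and \(B=J_{2}(0)=\begin{bmatrix}0&1\\ 0&0\end{bmatrix}\) a shift; here \(A^{*}A=\operatorname{diag}(1,0,1)\) is a projection, confirming that \(A\) is a partial isometry.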
The following corollary may be known, but we will need it going forward.
**Corollary 3.7**.: _A partially isometric Jordan matrix \(A\) is always a power partial isometry._
Proof.: By Proposition 3.6, it suffices to show all powers of \(U\) and \(B\) are partial isometries. Clearly all powers of unitary operators are unitary and hence partial isometries, as well as powers of zero matrices. And a straightforward computation shows powers of shifts are partial isometries. And then powers of direct sums of power partial isometries are partial isometries as mentioned in the previous proof.
**Corollary 3.8**.: _For a Jordan matrix \(A\) with \(\ker A\neq\ker A^{2}\), \(\mathcal{S}(A,A^{*})\) is an SI semigroup if and only if \(A\) is a partial isometry._
Proof.: Suppose \(\mathcal{S}(A,A^{*})\) is an SI semigroup. Since \(\ker A\neq\ker A^{2}\), by Theorem 2.5, \(A\) is a partial isometry. Conversely, if \(A\) is a partial isometry, then by Corollary 3.7, \(A\) is a power partial isometry. Therefore, as proved in [7, Remark 2.4], \(\mathcal{S}(A,A^{*})\) is an SI semigroup.
We next consider the Jordan matrix case when \(\ker A=\ker A^{2}\). In this case, \(A\) may or may not be invertible. Suppose \(A\) is not invertible and \(\ker A=\ker A^{2}\). Noninvertibility implies at least one Jordan block has zero eigenvalue. Then the size of a Jordan block corresponding to the zero eigenvalue of \(A\) must be one because the presence of a Jordan block \(J_{m}(0)\) (where \(m\geq 2\)) violates the condition \(\ker A=\ker A^{2}\). Indeed, any Jordan matrix \(A\) with the Jordan block \(J_{m}(0)\) (where \(m\geq 2\)) must have a column \(Ae_{i}\) such that \(Ae_{i}=e_{i-1}\) and \(Ae_{i-1}=0\) for some \(i\geq 2\). Therefore, \(e_{i}\in\ker A^{2}\) but \(e_{i}\notin\ker A\). Therefore, by rearranging the Jordan blocks corresponding to zero and nonzero eigenvalues together, \(A\) must be unitarily equivalent to \(A_{1}\oplus 0\), where \(A_{1}\) is an invertible Jordan matrix and \(0\) is a matrix of size at least one. Then by a straightforward argument one can prove that \(\mathcal{S}(A,A^{*})\) is an SI semigroup if and only if \(\mathcal{S}(A_{1},A_{1}^{*})\) is an SI semigroup. Based on these observations, our SI investigation for the case when the Jordan matrix \(A\) is not invertible reduces to the SI investigation for the invertible corner of the Jordan matrix \(A\). Therefore, when \(\ker A=\ker A^{2}\), we will consider only the invertible Jordan matrices to study the SI property of \(\mathcal{S}(A,A^{*})\), starting with Proposition 3.11. But first some preliminaries.
**Proposition 3.9**.: _Let \(A=A_{1}\oplus A_{2}\) be a block diagonal matrix. If \(\mathcal{S}(A,A^{*})\) is an SI semigroup, then \(\mathcal{S}(A_{i},A_{i}^{*})\) is an SI semigroup for each \(i=1,2\)._
Proof.: Suppose \(\mathcal{S}(A,A^{*})\) is an SI semigroup. Multiplication in the semigroup \(\mathcal{S}(A,A^{*})\) is defined in [7, Section 4, first paragraph] as componentwise products. For proving that \(\mathcal{S}(A_{1},A_{1}^{*})\) forms an SI semigroup, it suffices to show that \((T_{1})_{\mathcal{S}(A_{1},A_{1}^{*})}\) is a selfadjoint ideal for each \(T_{1}\in\mathcal{S}(A_{1},A_{1}^{*})\)[7, Lemma
1.9(i)\(\Leftrightarrow\)(ii)]. For \(T_{1}\in\mathcal{S}(A_{1},A_{1}^{*})\) (i.e., any word in \(A_{1},A_{1}^{*}\)), choose \(T_{2}\in\mathcal{S}(A_{2},A_{2}^{*})\) in that same form, so that \(T:=T_{1}\oplus T_{2}\in\mathcal{S}(A,A^{*})\) (for instance, if \(T_{1}=A_{1}^{2}A_{1}^{*}A_{1}^{3}\) then take \(T_{2}=A_{2}^{2}A_{2}^{*}A_{2}^{3}\)). Since \(\mathcal{S}(A,A^{*})\) is an SI semigroup, one has \(T^{*}=XTY\) for some \(X,Y\in\mathcal{S}(A,A^{*})\cup\{I\}\). Using the explicit forms for \(X\) and \(Y\), i.e., \(X=X_{1}\oplus X_{2}\) and \(Y=Y_{1}\oplus Y_{2}\) for some \(X_{1},Y_{1}\in\mathcal{S}(A_{1},A_{1}^{*})\cup\{I_{1}\}\) and for some \(X_{2},Y_{2}\in\mathcal{S}(A_{2},A_{2}^{*})\cup\{I_{2}\}\), where \(I=I_{1}\oplus I_{2}\), we rewrite the matrix equation \(T^{*}=XTY\) in block diagonal form as:
\[T_{1}^{*}\oplus T_{2}^{*}=(T_{1}\oplus T_{2})^{*}=(X_{1}T_{1}Y_{1})\oplus(X_{2 }T_{2}Y_{2}).\]
Therefore, from the above display one obtains:
\[T_{1}^{*}=X_{1}T_{1}Y_{1}\quad\text{for some }X_{1},Y_{1}\in\mathcal{S}(A_{1},A_{1 }^{*})\cup\{I_{1}\}.\]
This proves the selfadjointness of the ideal \((T_{1})_{\mathcal{S}(A_{1},A_{1}^{*})}\). Since \(T_{1}\) was chosen arbitrarily, \(\mathcal{S}(A_{1},A_{1}^{*})\) is an SI semigroup. Likewise \(\mathcal{S}(A_{2},A_{2}^{*})\) is also an SI semigroup.
Interestingly the converse of Proposition 3.9 can fail, i.e., if both \(\mathcal{S}(A_{1},A_{1}^{*})\), \(\mathcal{S}(A_{2},A_{2}^{*})\) are SI semigroups, then \(\mathcal{S}((A_{1}\oplus A_{2}),(A_{1}\oplus A_{2})^{*})\) may not be an SI semigroup. For instance,
_Example 3.10_.: Consider \(A_{1}=\begin{bmatrix}0&1&0\\ 0&0&1\\ 0&0&0\end{bmatrix}\) and \(A_{2}=\begin{bmatrix}2&0\\ 0&2\end{bmatrix}\). Since \(A_{1}\) is a power partial isometry, \(\mathcal{S}(A_{1},A_{1}^{*})\) is an SI semigroup [7, Corollary 1.15]; and since \(A_{2}\) is a selfadjoint matrix, \(\mathcal{S}(A_{2},A_{2}^{*})\) is automatically an SI semigroup [7, Remark 1.13]. But for \(A:=A_{1}\oplus A_{2}\), \(\mathcal{S}(A,A^{*})\) is not an SI semigroup. Indeed, suppose otherwise that \(\mathcal{S}(A,A^{*})\) forms an SI semigroup. Then \((A)_{\mathcal{S}(A,A^{*})}\) is a selfadjoint ideal, in particular. Therefore, \(A^{*}=XAY\) for some \(X,Y\in\mathcal{S}(A,A^{*})\cup\{I\}\) with not both \(X\) and \(Y\) equal to the identity matrix \(I=I_{1}\oplus I_{2}\) as \(A\) is not selfadjoint. Without loss of generality suppose \(X\neq I\) (the case \(Y\neq I\) is handled symmetrically). Since \(X\) is a word in \(A,A^{*}\), \(X\) is a direct sum of that same word in \(A_{1},A_{1}^{*}\) and \(A_{2},A_{2}^{*}\). Let \(X=X_{1}\oplus X_{2}\) and \(Y=Y_{1}\oplus Y_{2}\); and rewriting the matrix equation \(A^{*}=XAY\) in block diagonal form one obtains:
\[A_{1}^{*}\oplus A_{2}^{*}=X_{1}A_{1}Y_{1}\oplus X_{2}A_{2}Y_{2}\]
for some \(X_{i},Y_{i}\in\mathcal{S}(A_{i},A_{i}^{*})\cup\{I_{i}\}\). Since \(X\neq I\), then \(X\) is a word in \(\mathcal{S}(A,A^{*})\) which is a direct sum of that same word in \(A_{i},A_{i}^{*}\) for \(i=1,2\), and \(X_{2}\neq I_{2}\) as \(X_{2}\) is that same word in \(A_{2},A_{2}^{*}\). So from the above display one further obtains:
\[A_{2}^{*}=X_{2}A_{2}Y_{2}\]
where \(X_{2},Y_{2}\in\mathcal{S}(A_{2},A_{2}^{*})\cup\{I_{2}\}\) and not both \(X_{2},Y_{2}\) equal to the identity \(I_{2}\). But also \(\mathcal{S}(A_{2},A_{2}^{*})=\{A_{2}^{k}:k\geq 1\}\), since \(A_{2}\) is selfadjoint [7, Remark 1.13]. Hence, \(X_{2}A_{2}Y_{2}=A_{2}^{k}\) for some \(k\geq 2\) and then from above display one obtains \(A_{2}=A_{2}^{*}=A_{2}^{k}\) where \(k\geq 2\), which is not possible for our choice of \(A_{2}\). Hence \(\mathcal{S}(A,A^{*})\) is not an SI semigroup.
Continuing the strategy discussed in the paragraph preceding Proposition 3.9, we turn to the invertible case, for which we have
**Proposition 3.11**.: _For \(A\in M_{n}(\mathbb{C})\) a nonselfadjoint invertible matrix, if \(\mathcal{S}(A,A^{*})\) is an SI semigroup then \(|\det A|=1\)._
Proof.: Suppose \(\mathcal{S}(A,A^{*})\) is an SI semigroup. Then \((A)_{\mathcal{S}(A,A^{*})}\) is a selfadjoint ideal. Therefore, \(A^{*}=XAY\) for some \(X,Y\in\mathcal{S}(A,A^{*})\cup\{I\}\), where \(X,Y\) cannot both be the identity because \(A\) is nonselfadjoint. Applying the determinant one obtains
\[0\neq\overline{\det A}=(\overline{\det A})^{m}(\det A)^{n}\]
for some \(m\geq 0,n\geq 1\). For \(m=0\) (this happens when the words \(X\) and \(Y\) have no \(A^{*}\) term) one must have \(n\geq 2\) otherwise \(A^{*}=A\) contradicting nonselfadjointness of \(A\). Then one obtains
\[|\det A|=|\det A|^{m+n}\]
where \(m+n\geq 2\). And since \(\det A\neq 0\), one obtains \(|\det A|=1\).
Hence we have
**Corollary 3.12**.: _For \(A=\oplus_{i=1}^{k}J_{m_{i}}(\lambda_{i})\) a nonselfadjoint invertible Jordan matrix, \(\mathcal{S}(A,A^{*})\) is an SI semigroup if and only if \(A\) is a unitary matrix (equivalently, all \(m_{i}=1\) with \(|\lambda_{i}|=1\))._
Proof.: If \(A=\oplus_{i=1}^{k}J_{m_{i}}(\lambda_{i})\) is a unitary matrix, then \(\mathcal{S}(A,A^{*})\) is a group hence simple and so trivially SI (see paragraph preceding Theorem 2.13 for why group implies simple). Conversely, suppose \(\mathcal{S}(A,A^{*})\) is an SI semigroup. We will first show that \(m_{i}=1\) for all \(1\leq i\leq k\). Indeed, since \(A=\oplus_{i=1}^{k}J_{m_{i}}(\lambda_{i})\) and \(\mathcal{S}(A,A^{*})\) is an SI semigroup, it follows from Proposition 3.9 that \(\mathcal{S}(J_{m_{i}}(\lambda_{i}),J_{m_{i}}^{*}(\lambda_{i}))\) is an SI semigroup for each \(m_{i}\). Suppose there exists \(i\geq 1\) for which \(m_{i}\geq 2\). Then \(J_{m_{i}}(\lambda_{i})\) is a nonselfadjoint invertible matrix. Nonselfadjointness is clear. Invertibility holds because \(\det J_{m_{i}}(\lambda_{i})\neq 0\) which follows from the invertibility of \(A\) via \(0\neq\det A=\Pi_{j=1}^{k}\det J_{m_{j}}(\lambda_{j})\). Since \(\mathcal{S}(J_{m_{i}}(\lambda_{i}),J_{m_{i}}^{*}(\lambda_{i}))\) is an SI semigroup, by Proposition 3.11, \(|\det(J_{m_{i}}(\lambda_{i}))|=1\) and so \(|\lambda_{i}|=1\). But by Theorem 3.5, \(\mathcal{S}(J_{m_{i}}(\lambda_{i}),J_{m_{i}}^{*}(\lambda_{i}))\) is not an SI semigroup whenever \(|\lambda_{i}|=1\), contradicting \(\mathcal{S}(J_{m_{i}}(\lambda_{i}),J_{m_{i}}^{*}(\lambda_{i}))\) possessing the SI property. Therefore, for each \(i\geq 1\), \(m_{i}=1\). This implies that \(A\) is a diagonal matrix with diagonal Jordan blocks of size one each equal to its eigenvalue \(\lambda_{i}\), and hence \(A\) is a nonselfadjoint normal matrix which is invertible. Additionally, as \(\mathcal{S}(A,A^{*})\) is also SI, so it follows from [7, Theorem 2.1] that \(A\) is unitary.
The conclusion in Corollary 3.12 may not hold if we drop the hypothesis that \(A\) is a Jordan matrix, as seen from the following example of a nonselfadjoint invertible _nonunitary_ matrix whose generated selfadjoint semigroup is SI.
_Example 3.13_.: For \(A=\begin{bmatrix}0&1/2\\ 2&0\end{bmatrix}\) a nonselfadjoint invertible matrix (which is not a Jordan matrix), \(\mathcal{S}(A,A^{*})\) is an SI semigroup, but \(A\) is not a unitary matrix. Indeed since \((A^{*}A)(AA^{*})=I=(AA^{*})(A^{*}A)\), one has that \(A^{*}\) and \(A\) have their inverses in \(\mathcal{S}(A,A^{*})\) and hence all its elements (all words in \(A,A^{*}\)) have their inverses in \(\mathcal{S}(A,A^{*})\) which makes it a group, hence simple (see first line of previous proof), and hence SI.
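Explicitly, a two-line check of the identity used above:

\[A^{*}A=\begin{bmatrix}4&0\\ 0&1/4\end{bmatrix},\qquad AA^{*}=\begin{bmatrix}1/4&0\\ 0&4\end{bmatrix},\qquad(A^{*}A)(AA^{*})=(AA^{*})(A^{*}A)=I,\]

while \(A^{*}A\) is not a projection, so \(A\) is not even a partial isometry, let alone unitary.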
We can now characterize SI semigroups \(\mathcal{S}(A,A^{*})\) generated by a nonselfadjoint Jordan matrix \(A\). This can be viewed as: among nonselfadjoint Jordan matrices, an alternate SI characterization of power partial isometries, or equivalently by Corollary 3.7, of partial isometries (because for Jordan matrices, they are the same class).
In [7, Remark 1.13(i)-(ii) and Theorem 2.1] we characterized SI semigroups \(\mathcal{S}(A,A^{*})\) for \(A\) normal. Here we do so for non-normal Jordan matrices \(A\).
**Theorem 3.14**.: _(A characterization of SI semigroups generated by Jordan matrices.) For non-normal Jordan matrices \(A\),_
\[\mathcal{S}(A,A^{*})\text{ is an SI semigroup if and only if }A\text{ is a partial isometry.}\]
Proof.: Suppose \(\mathcal{S}(A,A^{*})\) is an SI semigroup. Since \(A\) is a nonselfadjoint matrix and \(\mathcal{S}(A,A^{*})\) is SI, by Theorem 2.5, one has either \(\ker A=\ker A^{2}\) or \(A\) is a partial isometry. So if \(\ker A\neq\ker A^{2}\), then \(A\) is a partial isometry and then by Corollary 3.7, \(A\) is a power partial isometry.
Next we show that under the SI assumption on \(\mathcal{S}(A,A^{*})\) for a non-normal Jordan matrix \(A\), \(\ker A\neq\ker A^{2}\). Indeed, suppose \(\ker A=\ker A^{2}\). We consider the invertible and noninvertible cases separately. If \(A\) is not invertible, then the discussion in the paragraph after Corollary 3.8 proves that \(A\) is unitarily equivalent to \(A_{1}\oplus 0\), where \(A_{1}\) is an invertible Jordan matrix and \(0\) is a matrix of size at least one. Since \(\mathcal{S}(A,A^{*})\) is an SI semigroup, \(\mathcal{S}(A_{1},A_{1}^{*})\) is an SI semigroup (as mentioned in that paragraph, this follows by a straightforward argument). Moreover \(A_{1}\) is a nonselfadjoint invertible Jordan matrix (nonselfadjointness follows from the non-normality of \(A\), since \(A_{1}\oplus 0\) is non-normal), so the SI property of \(\mathcal{S}(A_{1},A_{1}^{*})\) implies that \(A_{1}\) is a unitary matrix by Corollary 3.12. Therefore \(A\) is unitarily
equivalent to \(U\oplus 0\) which is a contradiction to the non-normality of \(A\). In the case of invertible \(A\), it follows again from Corollary 3.12 that \(A=U\) where \(U\) is a unitary matrix, hence normal which again is a contradiction. Therefore in both the invertible and noninvertible case, the SI property of \(\mathcal{S}(A,A^{*})\) implies that \(\ker A\neq\ker A^{2}\). So, \(A\) must be a partial isometry.
Conversely, if \(A\) is a partial isometry, then by Corollary 3.7, \(A\) is a power partial isometry and then \(\mathcal{S}(A,A^{*})\) is an SI semigroup by [7, Corollary 1.15].
We have now established all the results that are required to obtain a characterization for the simplicity of semigroups \(\mathcal{S}(A,A^{*})\) generated by \(A\) from the class of nonselfadjoint Jordan matrices:
**Corollary 3.15**.: _For \(A\in M_{n}(\mathbb{C})\) a nonselfadjoint Jordan matrix, one has_

\[\mathcal{S}(A,A^{*})\text{ is simple if and only if }A\text{ is unitarily equivalent to }U\oplus 0,\]

_where \(U\) is a unitary matrix and \(0\) is a zero matrix (the second summand may be absent)._
Proof.: Suppose \(\mathcal{S}(A,A^{*})\) is simple. Then \(\ker A=\ker A^{2}\). Indeed, if \(\ker A\neq\ker A^{2}\), then \(\ker A\subsetneq\ker A^{2}\). This implies that \(\dim\ker A<\dim\ker A^{2}\), so by the rank nullity theorem, \(\operatorname{rank}A>\operatorname{rank}A^{2}\). We next show that \(\operatorname{rank}A>\operatorname{rank}A^{2}\) implies that \(A\notin(A^{2})_{\mathcal{S}(A,A^{*})}\). Suppose \(A\in(A^{2})_{\mathcal{S}(A,A^{*})}\). Then \(A=XA^{2}Y\) for some \(X,Y\in\mathcal{S}(A,A^{*})\cup\{I\}\). Using Proposition 2.3 and the fact that \(\operatorname{rank}A>\operatorname{rank}A^{2}\), we obtain
\[\operatorname{rank}A=\operatorname{rank}(XA^{2}Y)\leq\operatorname{rank}A^{2 }<\operatorname{rank}A,\]
which is absurd. This implies nonsimplicity of \(\mathcal{S}(A,A^{*})\), contradicting the simplicity assumption on \(\mathcal{S}(A,A^{*})\). So \(\ker A=\ker A^{2}\). We next consider the invertible and the noninvertible cases separately. If \(A\) is invertible, then it follows from Corollary 3.12 that \(A\) is a unitary matrix (simple semigroups are automatically SI semigroups). If \(A\) is not invertible, then the discussion in the paragraph after Corollary 3.8 proves that \(A\) is unitarily equivalent to \(A_{1}\oplus 0\), where \(A_{1}\) is an invertible Jordan matrix and \(0\) is a matrix of size at least one. And a straightforward argument proves that \(\mathcal{S}(A,A^{*})\) is simple if and only if \(\mathcal{S}(A_{1},A_{1}^{*})\) is simple. Therefore, when \(A\) is not invertible, the simplicity of \(\mathcal{S}(A,A^{*})\) reduces to the simplicity of \(\mathcal{S}(A_{1},A_{1}^{*})\) where \(A_{1}\) is the invertible corner of the Jordan matrix \(A\). Also, since \(A_{1}\) is a nonselfadjoint invertible matrix, \(\mathcal{S}(A_{1},A_{1}^{*})\) is simple if and only if \(\mathcal{S}(A_{1},A_{1}^{*})\) is SI (follows from Theorem 2.13). Furthermore, by Corollary 3.12, \(\mathcal{S}(A_{1},A_{1}^{*})\) being SI implies that \(A_{1}\) is a unitary matrix. Therefore, if \(\mathcal{S}(A,A^{*})\) is simple, it follows that \(A\) is unitarily equivalent to \(U\oplus 0\).
Conversely, if \(A\) is unitarily equivalent to \(U\oplus 0\), then \(\mathcal{S}(U\oplus 0,U^{*}\oplus 0)\) forms a group and so also \(\mathcal{S}(A,A^{*})\) forms a group, and hence \(\mathcal{S}(A,A^{*})\) is simple.
We end this section by providing a characterization, solely via its norm, of a partial isometry when \(\mathcal{S}(A,A^{*})\) is an SI semigroup generated by a nonselfadjoint \(A\). This is for more general matrices (not necessarily Jordan matrices) (Theorem 3.18 below). To prove this, we require the concept of \(s\)-numbers (singular number sequence) of a matrix. The \(s\)-numbers of a matrix \(A\in M_{n}(\mathbb{C})\) are defined as the \(n\)-tuple of eigenvalues of \(|A|:=(A^{*}A)^{1/2}\) arranged in decreasing order. So for instance the first \(s\)-number, \(s_{1}(A)=||A||\).
In general, if \(A\) is a nonzero partial isometry, then \(||A||=1\), but the converse need not be true. For instance,

\[A=\begin{bmatrix}0&1/2\\ 1&0\end{bmatrix}\]

has \(||A||=1\), but \(A\) is not a partial isometry because \(A^{*}A\) is not a projection.
But for SI semigroups \(\mathcal{S}(A,A^{*})\), we prove in Theorem 3.18 that if \(||A||=1\), then \(A\) must be a partial isometry. This result can fail in infinite dimensions (see Example 3.19 below).
Let \(\{s_{j}(A)\}_{j=1}^{n}\) denote the \(s\)-numbers of \(A\in M_{n}(\mathbb{C})\).
**Proposition 3.16**.: _For \(A\in M_{n}(\mathbb{C})\), \(A\) is a partial isometry if and only if \(s_{j}(A)\in\{0,1\}\) for all \(1\leq j\leq n\)._
Proof.: If \(A\) is a partial isometry, then \(A^{*}A\) is a projection, so the eigenvalues of \(A^{*}A\) lie in the set \(\{0,1\}\), and their square roots are then the \(s\)-numbers of \(A\). Conversely, if \(s_{j}(A)\in\{0,1\}\) for all \(1\leq j\leq n\), then \(|A|\) has all its eigenvalues contained in the set \(\{0,1\}\). Moreover, since \(|A|\geq 0\), \(|A|\) is unitarily diagonalizable. Therefore there exists a unitary matrix \(U\) and a diagonal matrix \(D\) such that \(|A|=UDU^{*}\), where \(D\) has all its diagonal entries in the set \(\{0,1\}\). Since \(D\) is a projection, \(|A|\) is also a projection and hence its square \(A^{*}A\) is also a projection, or equivalently, \(A\) is a partial isometry.
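The dichotomy of Proposition 3.16 is easy to check numerically. The snippet below is an illustrative sketch (the matrices are our own examples, not part of the argument): it computes the \(s\)-numbers of the \(2\times 2\) example above via the singular value decomposition and contrasts it with a genuine partial isometry.

```python
import numpy as np

# The 2x2 example from the text: norm one, but not a partial isometry.
A = np.array([[0.0, 0.5],
              [1.0, 0.0]])
print(np.linalg.svd(A, compute_uv=False))   # s-numbers [1.0, 0.5]: not all in {0, 1}

# A genuine partial isometry: A*A is an orthogonal projection.
V = np.array([[1.0, 0.0],
              [0.0, 0.0]])
print(np.linalg.svd(V, compute_uv=False))    # [1.0, 0.0]
P = V.conj().T @ V
print(np.allclose(P, P @ P))                 # True: V*V is a projection
```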
We recall here the matrix version of a set of \(s\)-number inequalities of Gohberg and Krein [3] which we need for the following lemma.
_[3, Corollary 4.1] For any two matrices \(A,B\in M_{n}(\mathbb{C})\) one has_
\[\sum_{j=1}^{k}\!s_{j}(AB)\leq\sum_{j=1}^{k}\!s_{j}(A)s_{j}(B),\qquad(k=1, \cdots,n).\]
_And as the authors indicate, the inequality can naturally be generalized to the case of \(m\) matrices \(A_{1},\cdots,A_{m}\)._
\[\sum_{j=1}^{k}\!s_{j}(A_{1}\cdots A_{m})\leq\sum_{j=1}^{k}\!s_{j}(A_{1})\cdots s _{j}(A_{m}),\qquad(k=1,\cdots,n). \tag{16}\]
Nonzero partial isometries always have norm one, but the converse clearly does not hold. However, in the SI environment, it does.
**Lemma 3.17**.: _For \(A\in M_{n}(\mathbb{C})\) a nonselfadjoint matrix with \(\mathcal{S}(A,A^{*})\) an SI semigroup, if \(||A||=1\) then \(A\) is a partial isometry._
Proof.: Suppose \(\mathcal{S}(A,A^{*})\) is an SI semigroup. Then the principal ideal \((A)_{\mathcal{S}(A,A^{*})}\) is selfadjoint, or equivalently as usual, \(A^{*}=XAY\) for some \(X,Y\in\mathcal{S}(A,A^{*})\cup\{I\}\), where not both \(X,Y\) are equal to the identity matrix because of the nonselfadjointness of \(A\). Then the word \(XAY\) is a word in powers of \(A\) and \(A^{*}\) with at least two terms. So by applying \(s\)-number inequality Equation (16) to \(XAY\) which is a finite product of powers of \(A\) and \(A^{*}\), and using the fact that \(s_{j}(A^{*})=s_{j}(A)\) for \(1\leq j\leq n\), one obtains for some \(m\geq 2\),
\[\sum_{j=1}^{k}s_{j}(A)=\sum_{j=1}^{k}s_{j}(A^{*})=\sum_{j=1}^{k}s_{j}(XAY) \leq\sum_{j=1}^{k}s_{j}^{m}(A),\quad\text{for each $1\leq k\leq n$}.\]
Since \(||A||=1\), \(s_{1}(A)=1\), and \(0\leq s_{j}(A)\leq 1\) for all \(1\leq j\leq n\) because the \(s\)-numbers are arranged in decreasing order. We will next show that \(s_{j}(A)\in\{0,1\}\) for all \(1\leq j\leq n\). Suppose there exists some \(1\leq j\leq n\) such that \(0<s_{j}(A)<1\), and choose \(r\) to be the smallest such index, so that \(0<s_{r}(A)<1\). Then from the above display, for \(k=r\), one has
\[\sum_{j=1}^{r}s_{j}(A)\leq\sum_{j=1}^{r}s_{j}^{m}(A).\]
Since \(s_{j}(A)=1\) for \(1\leq j\leq r-1\), this further implies that
\[s_{r}(A)\leq s_{r}^{m}(A)\]
contradicting \(s_{r}^{m}(A)<s_{r}(A)\) because \(0<s_{r}(A)<1\) and \(m\geq 2\). Hence, \(s_{j}(A)\in\{0,1\}\) for all \(1\leq j\leq n\) and by Proposition 3.16, \(A\) is a partial isometry.
Lemma 3.17 immediately provides the following norm characterization of a partial isometry \(A\) under the SI property of \(\mathcal{S}(A,A^{*})\):
**Theorem 3.18**.: _For \(A\in M_{n}(\mathbb{C})\) a nonselfadjoint matrix, let \(\mathcal{S}(A,A^{*})\) be an SI semigroup. Then \(A\) is a partial isometry if and only if \(||A||=1\)._
We now give the example promised prior to Proposition 3.16, showing that the above lemma may fail for infinite rank operators.
_Example 3.19_.: Let \(W\) be a weighted shift operator on the Hilbert space \(l^{2}(\mathbb{N})\) with the weight sequence \((1/2,1,1,1,\ldots)\). Then \(||W||=1\) and one can check that \(W\) satisfies the relation \(W^{*}W^{2}=W\). Therefore, it follows from Proposition 3.20 proved below that \(\mathcal{S}(W,W^{*})\) is simple and hence trivially an SI semigroup. But \(W\) is not a partial isometry because \(W^{*}W\) is not a projection (as \(W^{*}We_{1}=1/4e_{1}\)).
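The relation \(W^{*}W^{2}=W\) and the failure of \(W\) to be a partial isometry can be verified numerically on a finite truncation of the weighted shift. The snippet below is a rough sketch: the truncation size \(N\) is an arbitrary choice, and the identity is only exact away from the last columns of the truncated matrix.

```python
import numpy as np

N = 200                                    # truncation size (finite stand-in for l^2(N))
w = np.ones(N - 1)
w[0] = 0.5                                 # weight sequence (1/2, 1, 1, ...)
W = np.diag(w, k=-1)                       # weighted shift: W e_i = w_i e_{i+1}

lhs = W.conj().T @ W @ W                   # W* W^2
# W* W^2 = W holds on e_1, ..., e_{N-2}; the remaining column differs only
# because of the finite truncation of the infinite matrix.
print(np.allclose(lhs[:, :N - 2], W[:, :N - 2]))   # True
print(np.linalg.norm(W, 2))                        # operator norm = 1
WstarW = W.conj().T @ W
print(np.allclose(WstarW, WstarW @ WstarW))        # False: W*W is not a projection
print(WstarW[0, 0])                                # 0.25, i.e. W*W e_1 = (1/4) e_1
```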
We note that the following proposition with essentially the same proof also holds for operators \(T\in B(\mathcal{H})\).
**Proposition 3.20**.: _Let \(T\in M_{n}(\mathbb{C})\) satisfy \((T^{*}T)T=T\). Then \(\mathcal{S}(T,T^{*})\) is simple._
Proof.: Since \((T^{*}T)T=T\), by induction one obtains, for all \(n\geq 1\),
\[{T^{*}}^{n}T^{n+1}=T \tag{17}\]
Hence also, for all \(n\geq 1\),
\[{T^{*}}^{n+1}T^{n}=T^{*} \tag{18}\]
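For instance, assuming \({T^{*}}^{n}T^{n+1}=T\) for some \(n\geq 1\), the induction step is short:
\[{T^{*}}^{n+1}T^{n+2}=T^{*}\big({T^{*}}^{n}T^{n+1}\big)T=T^{*}T\,T=(T^{*}T)T=T,\]
and Equation (18) follows from Equation (17) by taking adjoints.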
Recall the semigroup list \(\mathcal{S}(T,T^{*})=\{T^{n},T^{*}{}^{n},\Pi_{j=1}^{k}{T^{*}}^{m_{j}}T^{n_{j}},(\Pi_{j=1}^{k}{T^{*}}^{m_{j}}T^{n_{j}})T^{*}{}^{m_{k+1}},\Pi_{j=1}^{k}{T^{n_{ j}}}T^{*}{}^{m_{j}}\), \((\Pi_{j=1}^{k}{T^{n_{j}}}T^{*}{}^{m_{j}})T^{n_{k+1}}\}\) where \(n\geq 1,\,k\geq 1,\,n_{j},m_{j}\geq 1\) for \(1\leq j\leq k\) and \(n_{k+1},m_{k+1}\geq 1\). To prove \(\mathcal{S}(T,T^{*})\) is simple, it suffices to show that the principal ideal generated by each form in the semigroup list coincides with the entire semigroup \(\mathcal{S}(T,T^{*})\). Furthermore, it suffices to show that the principal ideals generated by all the fourth and sixth forms coincide with \(\mathcal{S}(T^{*},T)\) because each principal ideal generated by each of the other forms contains a fourth and a sixth form.
Consider a matrix \(A\) in the fourth form. So \(A=(\Pi_{j=1}^{k}{T^{*}}^{m_{j}}T^{n_{j}}){T^{*}}^{m_{k+1}}\) for some \(m_{j},n_{j}\geq 1\) and \(m_{k+1}\geq 1\). Let \(s=\sum_{j=1}^{k}n_{j}\) and \(r=\sum_{j=1}^{k+1}m_{j}\). Then,
\[\begin{aligned} {T^{*}}^{s}A&={T^{*}}^{s}({T^{*}}^{m_{1}}T^{n_{1}})\Big(\Pi_{j=2}^{k}{T^{*}}^{m_{j}}T^{n_{j}}\Big){T^{*}}^{m_{k+1}}\\ &={T^{*}}^{s+m_{1}-n_{1}-1}({T^{*}}^{n_{1}+1}T^{n_{1}})\Big(\Pi_{j=2}^{k}{T^{*}}^{m_{j}}T^{n_{j}}\Big){T^{*}}^{m_{k+1}}\quad\text{(add and subtract \(n_{1}+1\) in the power of \(T^{*}\))}\\ &={T^{*}}^{s+m_{1}-n_{1}}\Big(\Pi_{j=2}^{k}{T^{*}}^{m_{j}}T^{n_{j}}\Big){T^{*}}^{m_{k+1}}\quad\text{(by Equation (18), \({T^{*}}^{n_{1}+1}T^{n_{1}}=T^{*}\))}\\ &={T^{*}}^{s+(m_{1}-n_{1})+(m_{2}-n_{2})-1}({T^{*}}^{n_{2}+1}T^{n_{2}})\Big(\Pi_{j=3}^{k}{T^{*}}^{m_{j}}T^{n_{j}}\Big){T^{*}}^{m_{k+1}}\\ &\ \ \vdots\\ &={T^{*}}^{\sum_{j=1}^{k}m_{j}}\,{T^{*}}^{m_{k+1}}\\ &={T^{*}}^{r}\qquad\text{(applying Equation (18) repeatedly)} \end{aligned}\]
Since \({T^{*}}^{s+1}AT^{r}\in(A)_{\mathcal{S}(T,T^{*})}\) and \({T^{*}}^{s+1}AT^{r}={T^{*}}({T^{*}}^{s}A)T^{r}={T^{*}}^{r+1}T^{r}={T^{*}}\) (from Equation 18), one obtains \({T^{*}}\in(A)_{\mathcal{S}(T,T^{*})}\). Also note that \(({T^{*}}^{s}A)T^{r+1}={T^{*}}^{r}T^{r+1}=T\) (from Equation (17)), so \(T\in(A)_{\mathcal{S}(T,T^{*})}\). And since \(T,T^{*}\in(A)_{\mathcal{S}(T,T^{*})}\), \((A)_{\mathcal{S}(T,T^{*})}=\mathcal{S}(T,T^{*})\).
We next consider the sixth form. So \(A=(\Pi_{j=1}^{k}{T^{n_{j}}}{T^{*}}^{m_{j}}){T^{n_{k+1}}}\) for some \(n_{j},m_{j}\geq 1\), \(1\leq j\leq k\), and \(n_{k+1}\geq 1\). The matrix \({T^{*}}^{n_{1}}AT^{*}{}^{n_{k+1}}\in(A)_{\mathcal{S}(T,T^{*})}\). Note that \({T^{*}}^{n_{1}}AT^{*}{}^{n_{k+1}}\) is back in the fourth form. Hence \(({T^{*}}^{n_{1}}AT^{*}{}^{n_{k+1}})_{\mathcal{S}(T,T^{*})}=\mathcal{S}(T,T^{*})\). But \(({T^{*}}^{n_{1}}AT^{*}{}^{n_{k+1}})_{\mathcal{S}(T,T^{*})}\subset(A)_{ \mathcal{S}(T,T^{*})}\) so \((A)_{\mathcal{S}(T,T^{*})}=\mathcal{S}(T,T^{*})\).
## 4. SI semigroups \(\mathcal{S}(A,A^{*})\) generated by matrices \(A\) with nonnegative entries
Our first attempts to progress beyond the characterizations regarding SI for rank one operators in [7] were to investigate matrices (finite and infinite) with nonnegative entries. The results we obtain are more elementary than the results obtained earlier here and in [8], so we present them now to complete this paper.
**Proposition 4.1**.: _Let \(A=[a_{ij}]\) and \(B=[b_{ij}]\) denote nonzero matrices in \(M_{n}(\mathbb{C})\) such that the nonzero entries of \(A\) and \(B\) are greater than \(1\). Let \(a=\min\limits_{1\leq i,j\leq n}\{a_{ij}:a_{ij}\neq 0\}\) and \(b=\min\limits_{1\leq i,j\leq n}\{b_{ij}:b_{ij}\neq 0\}.\) If \(AB=[c_{ij}]\) is nonzero, then_
\[\min\limits_{1\leq i,j\leq n}\{c_{ij}:c_{ij}\neq 0\}\geq ab.\]
_Consequently, \(\min\limits_{1\leq i,j\leq n}\{c_{ij}:c_{ij}\neq 0\}>a\) and \(\min\limits_{1\leq i,j\leq n}\{c_{ij}:c_{ij}\neq 0\}>b.\)_
Proof.: Write \(C:=AB=[c_{ij}]\), so that \(c_{ij}=\sum_{k=1}^{n}a_{ik}b_{kj}\). Since the nonzero entries of \(A\) and \(B\) are greater than \(1\), every nonzero entry of \(C\) is greater than \(1\). Since \(C\) is nonzero, \(c_{i_{0}j_{0}}\neq 0\) for some \(1\leq i_{0},j_{0}\leq n\), and so there exists \(1\leq s\leq n\) such that \(a_{i_{0}s}b_{sj_{0}}\neq 0\). Since \(a_{ij},b_{ij}\geq 0\) for \(1\leq i,j\leq n\),
\[c_{i_{0}j_{0}}=\sum_{k=1}^{n}a_{i_{0}k}b_{kj_{0}}\geq a_{i_{0}s}b_{sj_{0}} \geq ab.\]
Therefore, \(\min\limits_{1\leq i,j\leq n}\{c_{ij}:c_{ij}\neq 0\}\geq ab\). Since the nonzero entries of \(A\) and \(B\) are greater than \(1\), we have \(a,b>1\), and hence \(ab>a\) and \(ab>b\).
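A quick randomized check of the bound (an illustrative sketch only; the size, sparsity pattern and entry range below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def min_nonzero(M):
    # minimum over the nonzero entries of a nonnegative matrix
    return M[M > 0].min()

# Nonnegative matrices whose nonzero entries exceed 1.
A = rng.uniform(1.0, 5.0, (4, 4)) * (rng.random((4, 4)) < 0.6)
B = rng.uniform(1.0, 5.0, (4, 4)) * (rng.random((4, 4)) < 0.6)
C = A @ B
if A.any() and B.any() and C.any():
    print(min_nonzero(C) >= min_nonzero(A) * min_nonzero(B))   # True, as in Proposition 4.1
```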
One can easily extend Proposition 4.1 to finite products of finite matrices.
**Corollary 4.2**.: _For each \(1\leq k\leq m\), let \(A_{k}=[a_{ij}^{(k)}]\) denote a matrix in \(M_{n}(\mathbb{C})\) with the nonzero entries of each \(A_{k}\) greater than \(1\). If \(A=\prod_{k=1}^{m}A_{k}=[a_{ij}]\) is nonzero, then for each \(1\leq k\leq m\),_
\[\min\limits_{1\leq i,j\leq n}\{a_{ij}:a_{ij}\neq 0\}\geq\Pi_{l=1}^{m}\min \limits_{1\leq i,j\leq n}\{a_{ij}^{(l)}:a_{ij}^{(l)}\neq 0\}>\min\limits_{1 \leq i,j\leq n}\{a_{ij}^{(k)}:a_{ij}^{(k)}\neq 0\}.\]
_Remark 4.1_.: Proposition 4.1 may not hold for infinite matrices. Consider \(A=B=\operatorname{diag}(1+1/n)_{n=1}^{\infty}\). Then \(1=\inf\{a_{ii}^{2}:i\in\mathbb{N}\}=\inf\{(1+1/n)^{2}:n\in\mathbb{N}\}=\inf\{(1+1/n):n\in\mathbb{N}\}=\inf\{a_{ii}:i\in\mathbb{N}\}.\)
**Corollary 4.3**.: _If \(A=[a_{ij}]\) is a matrix in \(M_{n}(\mathbb{C})\) with all nonzero entries greater than 1 (or all less than \(-1\)), then \(\mathcal{S}(A,A^{*})\) is a non-SI semigroup._
Proof.: If \(\mathcal{S}(A,A^{*})\) is an SI semigroup, then the principal ideal \((A)_{\mathcal{S}(A,A^{*})}\) of \(\mathcal{S}(A,A^{*})\) is selfadjoint. That is, \(A^{*}=XAY\) where \(X,Y\in\mathcal{S}(A,A^{*})\cup\{I\}\) and \(X\) and \(Y\) cannot both be the identity (as \(A^{*}\neq A\)). By Corollary 4.2 applied to the product \(XAY\), the minimum nonzero entry of \(XAY\) is greater than the minimum nonzero entry of \(A\) (or \(A^{*}\)). This contradicts the fact that the minimum nonzero entry of \(XAY\) must be equal to the minimum nonzero entry of \(A^{*}\) because of the equality \(A^{*}=XAY\). Therefore the ideal \((A)_{\mathcal{S}(A,A^{*})}\) is not selfadjoint. Hence, \(\mathcal{S}(A,A^{*})\) is a non-SI semigroup. And similarly if all entries are less than \(-1\).
The proof of the following version combining Proposition 4.1 and Corollary 4.2 but for infinite matrices is straightforward and is left to the reader. And likewise for Corollary 4.3.
**Proposition 4.4**.: _For each \(1\leq k\leq m\), let \(A_{k}=[a_{ij}^{(k)}]\) denote the matrix representation of an operator in \(\mathcal{B}(\mathcal{H})\) in a common orthonormal basis such that the nonzero entries of each \(A_{k}\) are greater than \(1\). If \(A:=\prod_{k=1}^{m}A_{k}=[a_{ij}]\) is nonzero, then \(\inf\limits_{i,j}\{a_{ij}:a_{ij}\neq 0\}\geq\prod_{k=1}^{m}\{\inf\limits_{i,j}\{a_{ij}^{(k)}:a_{ij}^{(k)}\neq 0\}\}.\) Moreover, if \(\inf\limits_{i,j}\{a_{ij}^{(k)}:a_{ij}^{(k)}\neq 0\}>1\) for all \(1\leq k\leq m\), then_
\[\inf\limits_{i,j}\{a_{ij}:a_{ij}\neq 0\}\geq\Pi_{l=1}^{m}\inf\limits_{i,j}\{a_{ ij}^{(l)}:a_{ij}^{(l)}\neq 0\}>\inf\limits_{i,j}\{a_{ij}^{(k)}:a_{ij}^{(k)}\neq 0\}\ ;\ \text{for all}\ 1\leq k\leq m.\]
**Corollary 4.5**.: _Let \(A=[a_{ij}]\) be a matrix representation of an operator in \(\mathcal{B}(\mathcal{H})\) with respect to some orthonormal basis such that the nonzero entries of \(A\) are greater than \(1\). If \(\inf_{i,j}\{a_{ij}:a_{ij}\neq 0\}>1\), then \(\mathcal{S}(A,A^{*})\) is a non-SI semigroup. And similarly for the \(-1\) case._
## 5. Data availability
Data sharing not applicable to this article as no datasets were generated or analyzed during the current study.
## 6. Declarations
The first author was supported by Science and Engineering Research Board, Core Research Grant 002514. The last author was partially supported by Simons Foundation collaboration grants 245014 and 636554. The authors have no other conflicts of interest to report.
|
2306.15659
|
Electromagnetic Cascade Emission from Neutrino-Coincident Tidal
Disruption Events
|
The potential association between Tidal Disruption Events (TDEs) and
high-energy astrophysical neutrinos implies the acceleration of cosmic rays.
These accelerated particles will initiate electromagnetic (EM) cascades
spanning from keV to GeV energies by the processes related to neutrino
production. We model the EM cascade and neutrino emissions by numerically
solving the time-dependent transport equations and discuss the implications for
AT2019dsg and AT2019fdr in the X-ray and $\gamma$-ray bands. We show that the
$\gamma$-ray constraints from \emph{Fermi} can constrain the size of the
radiation zone and the maximum energy of injected protons, and that the
corresponding expected neutrino event numbers in follow-up searches are limited
to be less than about 0.1. Depending on the efficiency of $p\gamma$
interactions, the X-ray and $\gamma$-ray signals can be expected closer to the
peak of the optical-ultraviolet (OUV) luminosity, or to the time of the
neutrino production.
|
Chengchao Yuan, Walter Winter
|
2023-06-27T17:54:29Z
|
http://arxiv.org/abs/2306.15659v2
|
# Electromagnetic Cascade Emission from Neutrino-Coincident Tidal Disruption Events
###### Abstract
The potential association between Tidal Disruption Events (TDEs) and high-energy astrophysical neutrinos implies the acceleration of cosmic rays. These accelerated particles will initiate electromagnetic (EM) cascades spanning from keV to GeV energies by the processes related to neutrino production. We model the EM cascade and neutrino emissions by numerically solving the time-dependent transport equations and discuss the implications for AT2019dsg and AT2019fdr in the X-ray and \(\gamma\)-ray bands. We show that the \(\gamma\)-ray constraints from _Fermi_ can constrain the size of the radiation zone and the maximum energy of injected protons, and that the corresponding expected neutrino event numbers in follow-up searches are limited to be less than about 0.1. Depending on the efficiency of \(p\gamma\) interactions and the time at which the target photons peak, the X-ray and \(\gamma\)-ray signals can be expected closer to the peak of the optical-ultraviolet (OUV) luminosity, or to the time of the neutrino production.
Tidal disruption; Radiative processes; Neutrino astronomy
## 1 Introduction
Recent observations have revealed that TDEs can produce intense flares of radiation lasting from months to years, powered by infalling material from a tidally disrupted star. Observationally, TDEs commonly exhibit thermal emissions predominantly in the OUV range, with sub-populations also observed in the X-ray and radio bands (see, e.g., Saxton et al., 2020; Alexander et al., 2020; van Velzen et al., 2021). The TDE catalog keeps rapidly expanding as more and more TDEs have been continuously identified by the Zwicky Transient Facility (ZTF, e.g., Hammerstein et al., 2023).
Three TDE candidates, AT2019dsg (Stein et al., 2021), AT2019fdr (Reusch et al., 2022) and AT2019aalc (van Velzen et al., 2021), are potentially correlated with the three IceCube astrophysical neutrino events, IC191001A, IC200530A and IC191119A, respectively. Two of these neutrino associations (AT2019dsg and AT2019fdr) have been found by follow-up searches. It has been pointed out in van Velzen et al. (2021) that these associations have a strong dust echo in common - which is delayed re-processed radiation from the OUV and X-ray ranges into the infrared (IR) range by surrounding dust; this has led to the identification of the third candidate (AT2019aalc). Remarkably, these three TDEs are located in the 90% containment boxes of the corresponding neutrino events, confirmed in the recent catalog of IceCube neutrino tracks (Abbasi et al., 2023), which consolidates the multi-messenger correlations. Apart from the strong dust echoes and OUV luminosities, AT2019dsg and AT2019fdr have been observed in X-rays, and the neutrinos arrived delayed by half a year to a year with respect to the OUV peak. The neutrino counterparts of these three TDEs have a profound impact on TDE models and bring significant momentum to related research, since astrophysical neutrinos are primarily generated in hadronic processes requiring the acceleration of cosmic-ray primaries.
Extensive efforts have been made on theoretical modeling, simulations and observations to establish a unified physical picture for TDEs. In this picture, the accretion disks, sub-relativistic outflows or winds, and possibly a jet and a dust torus are typically included (see, e.g., Dai et al., 2018). Motivated by the observation of the luminous jetted TDE Swift J1644+57 in 2011 (Burrows et al., 2011), relativistic jets have been proposed as one promising origin of the non-thermal emissions of TDEs
(Wang et al., 2011; Wang and Liu, 2016; Dai and Fang, 2017; Lunardini and Winter, 2017; Senno et al., 2017). In the jetted models, the energy released is beamed in a narrow region where the power is sufficiently high to explain the intense emissions. Another jetted TDE candidate, AT2022cmc (Andreoni et al., 2022), has been recently reported to exhibit a very high non-thermal X-ray luminosity \(\gtrsim 10^{47}-10^{48}\) erg s\({}^{-1}\), which is likely produced by a collimated jet with an opening angle \(\theta_{j}\lesssim 1^{\circ}\)(Pasham et al., 2023). It was also pointed out that only around one percent of TDEs are expected to have relativistic jets; however, this small fraction could be consistent with the fraction of neutrino-emitting TDEs from diffuse flux considerations (Winter and Lunardini, 2023).
Apart from jets, quasi-isotropic emissions from accretion disks (Hayasaki and Yamazaki, 2019), wide-angle outflows/winds (Fang et al., 2020) or tidal stream interactions (Dai et al., 2015; Hayasaki and Yamazaki, 2019) have been considered, particularly for TDEs that do not exhibit direct jet signatures, such as radio signals. The TDE jets, disks, and winds can be efficient cosmic-ray (CR) accelerators, potentially producing ultra-high-energy cosmic rays (UHECRs, Farrar and Gruzinov, 2009; Farrar and Piran, 2014; Zhang et al., 2017; Guepin et al., 2018; Biehl et al., 2018). Consequently, these sites may also serve as promising neutrino emitters. For AT2019dsg, various models such as jets, disks, corona and hidden winds and outflow-cloud interactions have been proposed to explain the neutrino counterparts (Liu et al., 2020; Winter and Lunardini, 2021; Wu et al., 2022; Murase et al., 2020; Hayasaki, 2021). Similarly, for AT2019fdr, Reusch et al. (2022) proposed the corona, hidden wind and jet models, whereas a disk model for all three neutrino-coincident TDEs has been proposed by van Velzen et al. (2021). Motivated by the absence of definitive evidence for jet signatures (e.g., Mohan et al., 2022), Winter and Lunardini (2023) have recently presented unified time-dependent interpretations of the neutrino emissions by considering three models (named M-IR, M-OUV, and M-X) where thermal IR, OUV and X-ray photons respectively dominate the neutrino productions depending on the maximally available proton acceleration energy. An alternative model, proposed by Zheng et al. (2022), has considered that the jets are choked inside the ejecta.
High-energy astrophysical neutrinos typically originate from charged pions produced in cosmic ray interactions with matter or radiation. While the charged pion decay chain also leads to electrons and positrons as decay products, neutral pions co-produced with the charged pions directly decay into gamma-rays. Therefore, a similar amount of energy is expected to be injected into neutrinos and the electromagnetic (EM) cascade in source, where it is a priori not clear where in energy the EM signatures dominate, and what the time-dependent behavior of the EM cascade is - these questions require dedicated theoretical modeling, which we perform in this study. A joint analysis of the non-thermal X-ray and \(\gamma\)-ray observations with the neutrino detection will then provide further insights: The X-ray light curve for AT2019dsg was measured using the Swift X-ray Telescope (XRT, Burrows et al., 2005), XMM-Newton (XMM, Jansen et al., 2001), and NICER (Stein et al., 2021; Cannizzaro et al., 2021). As for AT2019fdr, upper limits were obtained from XRT and eROSITA (Saxton et al., 2020; Reusch et al., 2022; Predehl et al., 2021). The _Fermi_ Large Area Telescope (Atwood et al., 2009) did not detect significant \(\gamma\)-ray signals from either of them, resulting in a time-dependent upper limit of \(10^{-9}-10^{-8}\) GeV cm\({}^{-2}\) s\({}^{-1}\)(Stein et al., 2021). Given the relatively incomplete X-ray and \(\gamma\)-ray datasets for AT2019aalc (van Velzen et al., 2021), we choose AT2019dsg and AT2019fdr as two prototypes to study the implications of the EM cascades.
We adopt the quasi-isotropic model proposed by Winter and Lunardini (2023) and specifically study the time-dependent EM cascade emissions in radiation zones characterized generically by their sizes and the maximum energies of injected protons. Compared to Winter and Lunardini (2023), where the simulations were based on the NeuCosmA (Neutrinos from Cosmic Accelerators) code (Hummer et al., 2010; Biehl et al., 2018), we change technology to self-consistently compute the electromagnetic cascade in-source. We use the AM\({}^{3}\)(Astrophysical Modeling with Multiple Messengers) software (Gao et al., 2017), which has been successfully used for lepto-hadronic models of Active Galactic Nuclei blazars (see, e.g., Gao et al., 2019; Rodrigues et al., 2019, 2021) and Gamma-Ray Bursts (Rudolph et al., 2022, 2023), and apply it for the first time to TDEs. We address several key questions regarding EM cascade emissions in TDEs, e.g., what is the spectral shape of EM cascade emissions? What kind of observational features can be attributed to EM cascades? At what timescales and in which energy ranges can we expect to observe these EM cascades? What insights can be derived from X-ray and \(\gamma\)-ray observations, combined with neutrino counterparts, to constrain the parameter space of the model? For most of the results, we focus on the cases M-IR and M-OUV, where the thermal IR and OUV photons, respectively, dominate the neutrino production. The M-X case, for which X-ray targets dominate, will only be included in some of the results as the qualitative conclusions are similar to M-IR.
The paper is organized as follows: in Sec. 2 we provide a brief description of the TDE models, particle interactions and numerical methods. EM cascade spectra and light curves for each TDE for the cases M-IR and M-OUV will be presented in Sec. 3. In Sec. 4, we show the \(\gamma\)-ray constraints on model parameters and predicted neutrino numbers for IceCube. We discuss our results and summarize our conclusions in Sec. 5 & 6.
## 2 Model Description
In this section, we describe the TDE model in this work, and the numerical approaches employed for computing neutrino and EM cascade emissions.
### Overview
We focus on the description of a spherical radiation zone of radius \(R\) leading to quasi-isotropic emission of neutrinos and photons, following Winter & Lunardini (2023). The acceleration is assumed to occur inside this region at a radius \(R_{\rm acc}\lesssim R\).1 We do not simulate the accelerator explicitly, we instead parameterize the acceleration zone by the maximal proton energy \(E_{p,\rm max}\) and the dissipation efficiency \(\varepsilon_{\rm diss}\), which describes the conversion efficiency from the mass accretion rate into non-thermal protons. Because of the quasi-isotropic emission, large values of \(\varepsilon_{\rm diss}\simeq 0.05-0.2\) are required to obtain reasonable neutrino event rates; we will use a value on the upper end of this range to saturate the EM cascade bounds. Note that its precise value is not so relevant for the conclusions of this work, as it scales the neutrino and EM emissions in the same way. The accelerated protons are assumed to be magnetically confined in magnetic fields of strength \(B\) as long as the Larmor radius is smaller than the size of the region, but they can diffuse out in the Bohm limit.
Footnote 1: This relationship applies to models M-OUV and M-IR (see below), whereas \(R_{\rm acc}\sim R\) for model M-X.
Protons in the radiation zone may interact with IR, OUV, and X-ray radiation parameterized as thermal spectra with temperatures \(T_{\rm IR}\), \(T_{\rm OUV}\), and \(T_{\rm X}\), respectively; the higher the target photon energy is, the lower is the required \(E_{p,\rm max}\). Consequently, the models are labeled M-X (lowest \(E_{p,\rm max}\), interactions with X-ray target only), M-OUV (intermediate \(E_{p,\rm max}\), interactions dominated by OUV target), and M-IR (highest \(E_{p,\rm max}\), interactions with IR target possible). Our interaction targets are based solely on observations, which means that we do not add any other theoretically motivated ingredients which have not been directly observed. For the scenarios M-OUV and M-IR, the radii of radiation zones are determined by the regions of respective isotropized thermal target photons, whereas the acceleration is expected to take place somewhere inside these regions. For instance, we choose \(R\sim 10^{15}\) cm (which is close to OUV photons) and \(R\sim 10^{17}\) cm (which is close to IR photons originating from the dust torus) for the M-OUV and M-IR scenarios, respectively. Our models are fully time-dependent, taking into account the time-dependence of the proton injections assumed to follow the OUV light curves, and the time-dependence of the target photons. This is illustrated in Fig. 1, where the time-dependent proton injection luminosities and the time evolutions of the photon targets are shown for AT2019dsg and AT2019fdr; further model details are listed in Tab. 1. It is especially noteworthy that the IR photons are assumed to be produced by the dust as so-called "dust echoes", where dust heated by X-ray and OUV radiation may re-emit IR photons (van Velzen et al., 2016, 2021; Reusch et al., 2022). Further on, we use the dust echo IR light curves obtained by Winter & Lunardini (2023).
We focus on two scenarios, M-IR and M-OUV first, as they are representative for our results. However, we will later consider extended radiation zones (e.g., located close to the dust radius) and compact radiation zones (e.g., located close to the OUV-BB), respectively, as we will use \(R\) as a free parameter in Sec. 4. Further model details are given in the following subsections.
### Proton injection
Possible locations for the acceleration zone include the accretion disk corona, relativistic jets, or the disk winds; correspondingly, \(R_{\rm acc}\) may vary significantly. Near the X-ray photosphere or the disk corona, the size can be \(\sim(10-30)R_{s}\)(e.g., Murase et al., 2020; van Velzen et al., 2021), where \(R_{s}=2GM/c^{2}\) is the Schwarzschild radius of a SMBH with mass \(M\). Inside the jet, particle acceleration is likely to occur at a radius of a few hundred to 1000 \(R_{s}\)(e.g., Dai et al., 2018), depending on the location of the shocks. Alternatively, if particle acceleration in the wind is considered, \(R_{\rm acc}\) can extend to the boundary of the wind envelope, e.g., \(R_{\rm acc}\sim 10^{16}-10^{17}\) cm (Stein et al., 2021), in proximity to the dust radius.
In this work, we focus on the description of the radiation zone characterized by the radius \(R\gtrsim R_{\rm acc}\) without explicitly specifying the particle acceleration site, indicating that the physical conditions of the radiation and acceleration regions, such as magnetic field strength, may not necessarily be identical. Note that even though the proton emission from the accelerator may be anisotropic, the magnetic field in the radiation zone may isotropize the protons in the calorimetric limit; an example would be an off-axis jet as accelerator in a large radiation zone (model M-IR). Using an acceleration
rate \(t_{\rm acc}^{-1}=\eta_{\rm acc}(c/R_{L,{\rm acc}})\lesssim c/R_{L,{\rm acc}}\) (Hillas, 1984), the protons could be accelerated to \(10^{8}-10^{9}\) GeV near the accretion disk corona (\(R_{\rm acc}\lesssim 10^{14}\) cm) with a strong magnetic field strength or in the extended disk wind (\(R_{\rm acc}\sim 10^{17}\) cm), where \(R_{L,{\rm acc}}=E_{p}/(eB_{\rm acc})\) is the Larmor radius of protons in the acceleration zone characterized by the magnetic field strength \(B_{\rm acc}\), \(\eta_{\rm acc}\lesssim 1\) is the acceleration efficiency and \(t_{\rm acc}\) is the acceleration time required to energize protons to \(E_{p}\) (see discussion in Winter & Lunardini, 2023).
We adopt an energy-differential power-law proton injection spectrum (per unit volume per unit time) \(Q_{p}(E_{p},t)\propto E_{p}^{-2}\exp(-E_{p}/E_{p,{\rm max}})\) where \(E_{p,{\rm max}}\) is the maximum energy of the injected proton spectrum and is treated as a free parameter. The spectrum can be normalized via \(\int E_{p}Q_{p}dE_{p}=L_{p}/V\), where \(L_{p}\) denotes the time-dependent proton luminosity and \(V=4\pi R^{3}/3\) is the volume of the radiation region. Typically, the minimum proton energy can be written as \(E_{p,{\rm min}}=\Gamma_{\rm rel}m_{p}c^{2}\), where \(\Gamma_{\rm rel}\) is the relative Lorentz factor between the acceleration site and the radiation zone. In the absence of a specification of the acceleration site, the conservative value of \(E_{p,{\rm min}}=1\) GeV is used. As depicted by the red curves in Fig. 1, OUV observations
| | **AT2019dsg** | **AT2019fdr** |
| --- | --- | --- |
| \(z\), \(M\), \(t_{\rm dyn}\) | 0.051, \(5\times 10^{6}M_{\odot}\), 670 d | 0.267, \(1.3\times 10^{7}M_{\odot}\), 1730 d |
| \(k_{B}T_{\rm X,\ OUV,\ IR}\) | 72 eV, 3.4 eV, 0.16 eV | 56 eV, 1.2 eV, 0.14 eV |
| \(E_{\nu}\) | 217 TeV (IC191001A) | 82 TeV (IC200530A) |
| \(t_{\nu}-t_{\rm pk}\) | 154 d | 324 d |
| \(N_{\nu}({\rm GFU})\) | \(0.008-0.76\) | \(0.007-0.13\) |
| \(R\) [cm] (M-IR / M-OUV) | \(5.0\times 10^{16}\) / \(5.0\times 10^{14}\) | \(2.5\times 10^{17}\) / \(5.0\times 10^{15}\) |
| \(E_{p,{\rm max}}\) [GeV] (M-IR / M-OUV) | \(5.0\times 10^{9}\) / \(1.0\times 10^{8}\) | \(5.0\times 10^{9}\) / \(1.0\times 10^{8}\) |
Table 1: Observational and TDE modeling parameters for AT2019dsg and AT2019fdr. In all scenarios, the universal values of energy dissipation efficiency \(\varepsilon_{\rm diss}=0.2\) and magnetic field strength \(B=0.1\) G are used.
Figure 1: Time-dependent evolution of IR, OUV, X-ray, and injected proton luminosities, shown as the orange, red, blue, and black solid curves respectively. The dissipation efficiency \(\varepsilon_{\rm diss}=L_{p}/(\dot{M}c^{2})\simeq 0.2\) is used to obtain the proton injection luminosity \(L_{p}\) for AT2019dsg (left panel) and AT2019fdr (right panel). The horizontal green dashed curves represents the Eddington luminosity \(L_{\rm Edd}\). The OUV luminosity peaks (\(t_{\rm pk}\)) and neutrino detection times (\(t_{\nu}-t_{\rm pk}\)) are illustrated as the vertical gray solid and magenta dotted curves, respectively.
reveal that TDEs generically exhibit characteristic light curves, with an initial rapid increase and a subsequent slower decay (\(\propto t^{-5/3}\)) that is consistent with the predicted mass fallback rate. This behavior suggests the OUV luminosity (\(L_{\rm OUV}\)) reflects the accretion history. Henceforth, following the methodology in Winter and Lunardini (2023), we assume that a fraction of the accreted energy onto the SMBH is dissipated into protons and the accretion rate aligns with the observed temporal evolution of OUV light curves. We write down \(L_{p}\) in terms of the Eddington accretion rate \(\dot{M}_{\rm Edd}\), which represents the accretion rate required for a black hole to radiate at the Eddington luminosity \(L_{\rm Edd}\simeq 1.3\times 10^{45}\) erg s\({}^{-1}M/(10^{7}M_{\odot})\), as
\[L_{p}=\varepsilon_{\rm diss}\dot{M}c^{2}=\varepsilon_{\rm diss}\,\zeta\dot{M}_{\rm Edd}c^{2}\,\frac{L_{\rm OUV}}{L_{\rm OUV}(t_{\rm pk})}. \tag{1}\]
In this expression, \(t_{\rm pk}\) is the time when the OUV light curve reaches its peak \(L_{\rm OUV}(t_{\rm pk})\), and \(\zeta\equiv\dot{M}(t_{\rm pk})/\dot{M}_{\rm Edd}\) is the ratio of the peak accretion rate to the Eddington accretion rate \(\dot{M}_{\rm Edd}\). Generally, \(\dot{M}_{\rm Edd}\) can be estimated from the radiation efficiency \(\eta_{\rm rad}\sim 0.01-0.1\) (e.g., McKinney et al., 2015), which represents the fraction of accreted energy reprocessed into radiation energy at the level of \(L_{\rm Edd}\), via \(\dot{M}_{\rm Edd}=L_{\rm Edd}/(\eta_{\rm rad}c^{2})\). Considering the super-Eddington accretion nature of TDEs during the peak accretion phase, e.g., \(\zeta\sim 10-100\) (e.g., Dai et al., 2018), we establish the relationship \(\dot{M}(t_{\rm pk})=\zeta\dot{M}_{\rm Edd}=(\zeta/\eta_{\rm rad})L_{\rm Edd}/c^{2}\). We adopt \(\zeta/\eta_{\rm rad}=100\) in the following analysis; this value can also be justified by comparing the time-integrated accretion mass with the mass of the disrupted stars (Winter and Lunardini, 2023). Under these parameterizations, the thick black curves in the left and right panels of Fig. 1 depict the injected proton luminosity for AT2019dsg and AT2019fdr. We also show the Eddington luminosity as green dashed lines for reference.
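As a quick sanity check of the normalization implied by Eq. (1), the following back-of-the-envelope sketch uses only the quoted values for AT2019dsg (the result is approximate and only meant for orientation):

```python
# Peak proton luminosity implied by Eq. (1), using the relations stated above.
M_BH = 5e6                      # SMBH mass in solar masses (AT2019dsg, Tab. 1)
eps_diss = 0.2                  # dissipation efficiency
zeta_over_eta = 100.0           # zeta / eta_rad adopted in the text

L_Edd = 1.3e45 * (M_BH / 1e7)                 # Eddington luminosity in erg/s
Mdot_pk_c2 = zeta_over_eta * L_Edd            # \dot{M}(t_pk) c^2 = (zeta/eta_rad) L_Edd
L_p_peak = eps_diss * Mdot_pk_c2              # Eq. (1) evaluated at t = t_pk
print(f"L_p(t_pk) ~ {L_p_peak:.1e} erg/s")    # ~ 1.3e46 erg/s
```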
### Interactions and numerical methods
In the radiation zone, neutrinos can be generated through photomeson (\(p\gamma\)) and hadronuclear (\(pp\)) interactions. In the \(p\gamma\) process, the quasi-thermal X-ray, OUV, and IR emissions serve as the target photon fields. With the time-dependent luminosity \(L\) obtained from direct observations of the IR/OUV/X-ray components, we calculate the target photon density through
\[\int\varepsilon_{\gamma}n(\varepsilon_{\gamma})d\varepsilon_{\gamma}=\frac{L}{ 4\pi R^{2}c}, \tag{2}\]
where
\[n\propto\frac{\varepsilon_{\gamma}^{2}}{\exp\left(\frac{\varepsilon_{\gamma}} {k_{B}T}\right)-1} \tag{3}\]
is the black body spectrum (in the units of eV\({}^{-1}\) cm\({}^{-3}\)) and \(T\) is the characteristic temperature determined by fitting the observed spectra. For OUV components, the photon density can be straightforwardly obtained from Eqs. (2) and (3) using the observed \(L_{\rm OUV}\) and \(T_{\rm OUV}\). In the context of X-ray photons originated from accretion disks, we assume that the in-source X-ray luminosity remains relatively stable (e.g., Wen et al., 2020). The time-dependent fluctuations observed in X-ray observations could be attributed to absorption and geometrical obscuration. Therefore, we consider the high-level observed luminosity as the representative X-ray luminosity within the radiation zone, as described by Winter and Lunardini (2023). Note that this assumption is only relevant in Sec. 4 for small enough proton energies. Regarding the IR emissions measured subsequent to the OUV peak, the dust echo interpretation suggests that echo IR photons are generated by dust located at a distance of approximately \(R_{\rm IR}\sim 10^{16}-10^{17}\) cm. The dust heated by the X-ray and OUV radiation will re-emit IR photons with a temperature of approximately \(k_{B}T_{\rm IR}\simeq 0.16\) eV (van Velzen et al., 2016, 2021; Reusch et al., 2022). Additionally, the deflection of these photons with respect to the line of sight explains the observed time delay. Reusch et al. (2022) have demonstrated that the IR light curve can be explained by convolving the OUV luminosity with a normalized box function. Using this method, Winter and Lunardini (2023) performed a least-square fitting of the IR observations and obtained the IR light curves for AT2019dsg and AT2019fdr. Building upon the work of Winter and Lunardini (2023), we incorporate the IR light curves presented in their study directly into our modeling of the target photon density. Figure 1 illustrates the time dependence of IR (\(L_{\rm IR}\); orange lines), OUV (\(L_{\rm OUV}\); red lines), and X-ray (\(L_{X}\); blue lines) luminosities. The vertical purple dotted lines represent the time of neutrino detection, \(t_{\nu}-t_{\rm pk}\). The time axes in this figure are defined with the OUV luminosity peak time \(t_{\rm pk}\) as the origin. The IR, OUV, and X-ray target photon densities are determined by Eq. (2) and Eq. (3), using the respective luminosities. The measured temperatures of the IR, OUV, and X-ray black body emissions are provided in Tab. 1.
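A minimal sketch of this normalization, assuming an illustrative OUV-like target (the specific values of \(L\), \(R\), and \(k_{B}T\) below are examples, not fitted quantities):

```python
import numpy as np

# Normalize the thermal target spectrum of Eq. (3) to the energy density of Eq. (2).
kT = 3.4                       # black-body temperature in eV (OUV-like)
L = 1e44                       # luminosity in erg/s (illustrative value)
R = 5e14                       # radiation-zone radius in cm (compact zone)
c = 3e10                       # cm/s
erg_per_eV = 1.602e-12

u_target = L / (4 * np.pi * R**2 * c) / erg_per_eV        # Eq. (2), in eV cm^-3
# For n = N0 * eps^2 / (exp(eps/kT) - 1):  int eps * n deps = N0 * (pi^4/15) * kT^4
N0 = u_target / ((np.pi**4 / 15.0) * kT**4)
n_at_kT = N0 * kT**2 / (np.e - 1.0)                       # photon density at eps = kT
print(f"n(eps = kT) ~ {n_at_kT:.2e} eV^-1 cm^-3")
```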
While propagating inside the radiation zone, the injected protons will also interact with the thermal protons in the sub-relativistic outflow or wind. Given the wind mass rate \(\dot{M}_{w}=\eta_{w}\dot{M}\)(Dai et al., 2018; Yuan et al., 2021), where \(\eta_{w}\equiv\dot{M}_{w}/\dot{M}\) is the fraction of accreted mass converted to wind, we estimate the average target proton number density \(n_{w}\sim\eta_{w}\dot{M}/(4\pi m_{p}v_{w}R^{2})\) and
the \(pp\) interaction rate
\[t_{pp}^{-1}\simeq cn_{w}\sigma_{pp}\sim\frac{1}{4\pi}\sigma_{pp}\beta_{w}^{-1} \eta_{w}\frac{\dot{M}}{R^{2}m_{p}}\,. \tag{4}\]
Here \(v_{w}=\beta_{w}c\sim\mathcal{O}(0.1c)\) is the wind velocity and \(\sigma_{pp}\) is the cross section for inelastic \(pp\) collisions. For super-Eddington accretors, \(\eta_{w}\) typically ranges from \(10^{-2}\) to \(10^{-1}\) in the case of the efficient magnetically arrested disks (see e.g., Ohsuga et al., 2009; Jiang et al., 2019; Yuan et al., 2020). We will demonstrate that the \(p\gamma\) process is significantly more efficient than \(pp\), even if an optimistic value of \(\eta_{w}\sim 0.1-0.2\) is used.
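For orientation, a rough numerical evaluation of Eq. (4) for M-IR-like parameters of AT2019dsg; the cross section, wind speed and accretion rate below are assumed typical values:

```python
import numpy as np

c = 3e10                                   # cm/s
m_p = 1.67e-24                             # proton mass in g
sigma_pp = 3e-26                           # cm^2 (~30 mb, assumed typical)
beta_w, eta_w = 0.1, 0.2                   # wind speed in units of c; optimistic eta_w
R = 5e16                                   # cm (M-IR radiation zone)
L_Edd = 1.3e45 * (5e6 / 1e7)               # erg/s for M = 5e6 Msun
Mdot = 100.0 * L_Edd / c**2                # g/s, peak accretion rate (zeta/eta_rad = 100)

t_pp_inv = c * eta_w * Mdot / (4 * np.pi * m_p * beta_w * c * R**2) * sigma_pp
t_fs_inv = c / R                           # free-streaming rate for comparison
print(f"t_pp^-1 ~ {t_pp_inv:.1e} s^-1  vs  t_fs^-1 ~ {t_fs_inv:.1e} s^-1")
```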
We also consider the electromagnetic cascade emission resulting from the secondary particles generated by the \(p\gamma\) and \(pp\) processes. Charged pions produced from \(p\gamma\) and \(pp\) interactions decay, yielding neutrinos, electrons, and positrons, while neutral pions directly decay into photons (denoted as '\(\pi^{0}\) decay'). The energy deposited in the secondary electrons and positrons can then be redistributed to the radiation field through synchrotron (SY) and inverse Compton (IC) radiation in the presence of magnetic fields. We denote this process as '\(pp\)/\(p\gamma\)-SY/IC'. As a competing channel to the \(p\gamma\) process, protons can also lose energy through Bethe-Heitler (BH) interactions, leading to the production of electron-positron (\(e^{\pm}\)) pairs. Additionally, the \(\gamma\gamma\) attenuations between the EM cascade emissions and the abundant thermal IR, OUV and X-ray photons give rise to \(e^{\pm}\) pairs. These \(e^{\pm}\) pairs from BH and \(\gamma\gamma\) processes are expected to contribute to the EM cascade radiation field, represented as 'BH-SY/IC' and '\(\gamma\gamma\)-SY/IC' respectively.
To obtain the neutrino and EM cascade spectra, we numerically solve the coupled time-dependent transport equations for all relevant particle species
\[\frac{\partial n_{i}}{\partial t}=Q_{i,\text{ext}}+\sum_{k}Q_{k\to i}- \frac{\partial}{\partial E}(\dot{E}\cdot n_{i})-\frac{n_{i}}{t_{i,\text{esc}}} \tag{5}\]
in a self-consistent way using AM\({}^{3}\) software (e.g., Gao et al., 2017). In Eq. (5), \(i\) is the label of the particle, \(n_{i}\equiv dN_{i}/dVdE\) is the in-source particle density differential in energy and volume, \(N_{i}\) is the total number, \(Q_{i,\text{ext}}\) is the external particle injection rate, e.g., \(Q_{p,\text{ext}}=Q_{p}(E_{p},t)\), \(\sum_{k}Q_{k\to i}\) represents the in-source particle injection from other radiation processes, \(\dot{E}=dE/dt\) is the total energy loss rate, and \(t_{i,\text{esc}}\) is the particle escape time. Since we focus on the EM cascade emission induced by accelerated protons and do not consider the primary leptonic loading in the acceleration zone, the external electron/positron injection rate is set to zero, e.g., \(Q_{e,\text{ext}}=0\). The influence of primary electron injections will be discussed in Sec. 5.
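To illustrate the structure of Eq. (5), the toy snippet below evolves a single species with only injection and escape (no losses or feed-down terms); it is a schematic sketch, not the AM\({}^{3}\) implementation, and all numerical values are arbitrary:

```python
import numpy as np

# Toy version of Eq. (5) for one species: dn/dt = Q - n / t_esc on an energy grid.
E = np.logspace(0, 9, 200)                     # proton energy grid [GeV]
E_max = 1e8
Q = E**-2 * np.exp(-E / E_max)                 # injection rate per unit E (arbitrary units)
t_esc = 1e6                                    # escape time [s], illustrative constant

n = np.zeros_like(E)
dt = 1e4
for _ in range(2000):                          # explicit time stepping
    n += dt * (Q - n / t_esc)

print(np.allclose(n, Q * t_esc, rtol=1e-2))    # relaxes to the steady state n = Q * t_esc
```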
Regarding the particle escape times, the photons, neutrinos, and neutrons produced in \(p\gamma\) processes escape the radiation zone with the free-streaming time \(t_{\text{esc}}=t_{\text{fs}}=R/c\), whereas the escape rate of charged particles is determined by the diffusion rate. We use as the proton escape rate \(t_{p,\text{esc}}^{-1}=\text{min}[t_{\text{fs}}^{-1},t_{p,\text{diff}}^{-1}]\) (Baerwald et al., 2013), where \(t_{p,\text{diff}}^{-1}\equiv D/R^{2}\) is the diffusion rate, \(D\sim R_{L}c\) is the diffusion coefficient in the Bohm limit, and \(R_{L}=E_{p}/(eB)\) is the Larmor radius of protons with energy \(E_{p}\) in the magnetic field \(B\). For simplicity, we choose the magnetic field strength \(B=0.1\) G as the fiducial value for both AT2019dsg and AT2019fdr as in Winter and Lunardini (2023). This choice of magnetic field is consistent with the case of outflows from AGNs and is sufficiently strong to confine protons with energies \(E_{p}\gtrsim 1\) PeV in the radiation zone. We will investigate the impacts on neutrino and EM cascade emissions by varying \(B\) from 0.1 G to 1.0 G in Sec. 4.
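A quick check of this confinement argument, using the quoted \(B=0.1\) G and an illustrative M-IR radius (the numbers are only order-of-magnitude estimates):

```python
import numpy as np

c = 3e10                          # cm/s
e_esu = 4.803e-10                 # elementary charge in statC
erg_per_eV = 1.602e-12

E_p = 1e15 * erg_per_eV           # 1 PeV proton energy in erg
B = 0.1                           # G
R = 5e16                          # cm (illustrative M-IR radius)

R_L = E_p / (e_esu * B)           # Larmor radius ~ 3e7 cm
t_diff_inv = (R_L * c) / R**2     # Bohm-limit diffusion rate D / R^2
t_fs_inv = c / R
print(f"t_diff^-1 ~ {t_diff_inv:.1e} s^-1 << t_fs^-1 ~ {t_fs_inv:.1e} s^-1")
```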
We have checked the consistency of our results with the earlier computations in Winter and Lunardini (2023) using NeuCosmA. While our results overall agree very well, we find small differences e.g. in the shapes of the neutrino spectra at the level expected from different softwares using different solver and particle interaction implementations (see e.g. Cerruti et al., 2022); for example, these come from the different implementations of photohadronic interactions: while AM\({}^{3}\) uses Hummer et al. (2010), NeuCosmA is based on Biehl et al. (2018).
### M-IR and M-OUV scenarios
Based on the formulation described before, the physical properties of the radiation zone can be characterized by two parameters: \(R\) and \(E_{p,\text{max}}\), assuming a fixed magnetic field strength of \(B=0.1\) G. We investigate the EM cascade emissions of two scenarios, M-IR and M-OUV, for AT2019dsg and AT2019fdr. The corresponding parameter sets are listed in Tab. 1.
_M-IR scenario._ For the \(p\gamma\) process, the proton threshold energy is
\[E_{p,\text{th}}=\frac{m_{\pi}(2m_{p}+m_{\pi})c^{4}}{4E_{\gamma}}\simeq 7.1\times 1 0^{8}\;\text{GeV}\left(\frac{E_{\gamma}}{0.1\;\text{eV}}\right)^{-1}, \tag{6}\]
where \(E_{\gamma}\) is target photon energy and \(m_{\pi}\simeq 140\;\text{MeV}/\text{c}^{2}\) is the pion mass. We choose \(R=5\times 10^{16}\;\text{cm}\lesssim\text{R}_{\text{IR}}\) and \(E_{p,\text{max}}=5.0\times 10^{9}\;\text{GeV}\gtrsim E_{p,\text{th}}\) for the M-IR scenario of AT2019dsg to ensure that \(p\gamma\) interactions with thermal IR photons with energy \(E_{\gamma,\text{IR}}=k_{B}T_{\text{IR}}\simeq 0.16\) eV dominate the neutrino production. The left panel of Fig. 2 shows the proton interaction rates at neutrino detection time \(t_{\nu}\) due to \(p\gamma\) (\(t_{p\gamma}^{-1}\), blue curve), \(pp\) (\(t_{pp}^{-1}\) with \(\eta_{w}=0.2\), green line) and BH (\(t_{\text{BH}}^{-1}\), orange curve) interactions. The horizontal black line is the particle
free-streaming rate \(t_{\rm fs}^{-1}\), whereas the magenta horizontal line depicts the reciprocal of the dynamic time, e.g., \(t_{\rm dyn}^{-1}\). We define the dynamic time (\(t_{\rm dyn}\)) as the duration of the super-Eddington phase, e.g., \(\dot{M}c^{2}>L_{\rm Edd}\). The energy-dependent escape rate is illustrated as the red line. The vertical gray line and the shaded region describe the maximum proton energy \(E_{p,\rm max}\).
From this figure, we can deduce that the system is \(p\gamma\) optically thin, indicated by \(\tau_{p\gamma}^{\rm fs}=t_{p\gamma}^{-1}/t_{\rm fs}^{-1}<1\), but nearly calorimetric at ultra-high energies, as \(\tau_{p\gamma}^{\rm cal}=t_{p\gamma}^{-1}/t_{\rm esc}^{-1}\sim 1\) around \(10^{9}\) GeV. This means that in the M-IR scenario, the TDE can be a promising very-high-energy neutrino emitter, but a time delay in the scale of \(t_{p\gamma}\) of the \(p\gamma/pp-\)SY/IC cascade channel is typically expected. Furthermore, the \(p\gamma\) process dominates the neutrino production as \(t_{p\gamma}^{-1}\gg t_{pp}^{-1}\).
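Plugging the temperatures of Tab. 1 into Eq. (6) gives the threshold energies that motivate the choices of \(E_{p,\rm max}\) in the two scenarios (a simple evaluation of the formula):

```python
# Photomeson threshold, Eq. (6), for the three thermal targets of AT2019dsg (Tab. 1).
m_pi, m_p = 0.140, 0.938                        # pion and proton masses in GeV
for name, E_gamma_eV in [("IR", 0.16), ("OUV", 3.4), ("X", 72.0)]:
    E_gamma = E_gamma_eV * 1e-9                 # target photon energy in GeV
    E_p_th = m_pi * (2 * m_p + m_pi) / (4 * E_gamma)
    print(f"{name:>3}: E_p,th ~ {E_p_th:.1e} GeV")
# IR ~ 4e8 GeV, OUV ~ 2e7 GeV, X ~ 1e6 GeV -> consistent with the adopted E_p,max values
```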
_M-OUV scenario._ According to Eq. (6), the proton threshold energy decreases with increasing energy of the target photon. We therefore consider a compact radiation zone with \(R=5\times 10^{14}\) cm \(\ll\) R\({}_{\rm IR}\) and a relatively lower maximum proton energy \(E_{p,\rm max}=1.0\times 10^{8}\) GeV for the M-OUV scenario of AT2019dsg. In this case, \(p\gamma\) interactions with OUV target photons dominate the neutrino flux. As shown in the right panel of Fig. 2, the IR bump in the \(p\gamma\) interaction rate (blue curve) is suppressed due to the higher density of OUV photons (\(n_{\rm OUV}\propto R^{-2}\)) compared to IR photons (\(n_{\rm IR}\propto R_{\rm IR}^{-2}\)). Notably, in this scenario, the source can be \(p\gamma\) optically thick from \(t_{\rm pk}\) to \(t_{\nu}\), as indicated by the right panel of Fig. 2, where \(\tau_{p\gamma}^{\rm fs}=t_{p\gamma}^{-1}/t_{\rm fs}^{-1}>1\) is satisfied at \(t_{\nu}\). The \(p\gamma\) interactions can be much faster at \(t_{\rm pk}\) where the OUV luminosity is approximately one order of magnitude higher (see the red curve in the left panel of Fig. 1).
We use AT2019dsg as an example to illustrate the specifications of the M-IR and M-OUV scenarios. Similar conclusions can be drawn for AT2019fdr using the parameters listed in Tab. 1.
## 3 Neutrino and EM cascade emissions
In this section, we present the spectral energy distributions (SEDs), i.e., \(E_{i}^{2}\mathcal{F}_{i}=t_{\rm fs}^{-1}E_{i}^{2}dN_{i}/dE_{i}/(4\pi d_{L}^{2})\), of neutrino and electromagnetic cascade emissions for the M-IR (Sec. 3.1) and M-OUV (Sec. 3.2) scenarios of AT2019dsg and AT2019fdr, where \(d_{L}\) is the luminosity distance to the TDE and the redshift corrections to fluxes and energies are not explicitly shown in this equation for simplicity. To establish a connection between our models and observations, we also provide light curves in the relevant X-ray and gamma-ray energy bands, and compare our predictions with measured data points or upper limits.
### M-IR: extended radiation zone (\(R\simeq R_{\rm IR}\))
Using the methodology described in Section 2 and the parameter values for M-IR scenario provided in Tab. 1, we present in Fig. 3 the SEDs and light curves for AT2019dsg (upper row) and AT2019fdr (lower row) in the observer's frame. The left column shows the SEDs at neutrino detection time \(t_{\nu}\), depicting the thermal IR, OUV and X-ray spectra (in blue dotted curves denoted as 'BB'), as well as the single-flavor neutrino (e.g., \(\nu_{\mu}\), in magenta curves) and EM cascade spectra spanning from MeV to 100 PeV. The black curves represent
Figure 2: Proton interaction rates at neutrino detection time \(t_{\nu}\) for AT2019dsg, as a function of proton energy (in the observer’s frame). The left and right panels correspond to the M-IR and M-OUV scenarios. In each figure, the \(p\gamma\), \(pp\) with \(\eta_{w}=0.2\), Bethe-Heitler (BH), escape, and dynamical rates are shown as the blue, green, orange, red, and magenta curves. The horizontal curves depict the free streaming rate \(t_{\rm fs}^{-1}=(R/c)^{-1}\), whereas the vertical gray curves show the maximum injected proton energies \(E_{p,\rm max}\).
the cascade spectra without taking into account the \(\gamma\gamma\) attenuation with extragalactic background light (EBL) and cosmic microwave background (CMB). The different components of the electromagnetic cascades, including \(pp/p\gamma-\)SY/IC', '\(\gamma\gamma-\)SY/IC', 'BH-SY/IC', and '\(\pi^{0}\) decay', are depicted as the blue dashed, orange dash-dotted, green dotted, and red solid curves, respectively. The gray shaded areas in the figure highlight the energy ranges of cosmic \(\gamma\gamma\) attenuation with EBL and CMB. One notable feature of the EM cascade spectra is the pronounced dips in the energy range \(\sim 1~{}\mathrm{GeV}-1~{}\mathrm{PeV}\). These dips can be attributed to the in-source \(\gamma\gamma\) absorption with thermal photons. Considering the characteristic energy of thermal photons \(E_{\mathrm{BB}}\), the \(\gamma\gamma\) attenuation occurs at \(E_{\gamma\gamma}\sim(m_{e}c^{2})^{2}/E_{\mathrm{BB}}\simeq 2.6~{}\mathrm{GeV}\) (\(E_{\mathrm{BB}}/100~{}\mathrm{eV})^{-1}\) in the laboratory frame. For instance, X-ray photons with energy of \(k_{B}T_{X}=72~{}\mathrm{eV}\) will lead to the first absorption dip at \(\sim 4~{}\mathrm{GeV}\) in the black curve of the upper left panel.
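Evaluating \(E_{\gamma\gamma}\sim(m_{e}c^{2})^{2}/E_{\rm BB}\) for the three thermal targets locates these dips (a simple plug-in of the estimate above, using the temperatures of Tab. 1):

```python
# Gamma-gamma attenuation estimate E_gg ~ (m_e c^2)^2 / E_BB for AT2019dsg targets.
m_e_c2 = 0.511e6                                 # electron rest energy in eV
for name, E_BB in [("X (72 eV)", 72.0), ("OUV (3.4 eV)", 3.4), ("IR (0.16 eV)", 0.16)]:
    print(f"{name:>13}: E_gg ~ {m_e_c2**2 / E_BB / 1e9:.1f} GeV")
# X ~ 3.6 GeV, OUV ~ 77 GeV, IR ~ 1.6e3 GeV -> dips spanning ~GeV to ~TeV energies
```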
Figure 3: **Left column:** Electromagnetic cascade and neutrino spectra at the time of the neutrino observation \(t_{\nu}\). The black and magenta curves show the spectra of EM cascade and single-flavor neutrino emissions. The black body IR, OUV and X-ray spectra are depicted as the thin cyan dotted curves. The thin blue dashed, orange dash-dotted, green dotted, and red curves represent the components contributing to the EM cascade. The blue, red, and green shaded areas depict the energy ranges of XRT, _Fermi_-LAT, and HAWC observations, whereas the gray-shaded areas indicate where \(\gamma\gamma\) attenuation with CMB and EBL prohibits the observation of the emitted gamma rays. **Right column:** Neutrino, \(\gamma\)-ray and X-ray light curves. The magenta, green, orange, and blue solid curves illustrate the single-flavor neutrino (\(L_{\nu}/4\pi d_{L}^{2}\)), very-high-energy (VHE) \(\gamma\)-ray (0.3 - 100 TeV), \(\gamma\)-ray (0.1 - 800 GeV), and X-ray light curves, respectively. The flux upper limits from HAWC, _Fermi_, and XRT/XMM/NICER or eROSITA (Stein et al., 2021; Cannizzaro et al., 2021; Predehl et al., 2021; Reusch et al., 2022) are shown as the green, orange, and blue points, respectively. Furthermore, the neutrino detection times are marked as the vertical magenta dotted curves. The OUV fluxes are also plotted as the gray dashed curves for reference. Light blue shaded areas and curves correspond to XRT (0.3 - 10 keV), whereas dark blue shaded areas and curves correspond to eROSITA (0.3 - 2 keV). The upper and lower rows correspond to AT2019dsg and AT2019fdr, respectively. All panels are shown for model M-IR in the observer's frame.
For AT2019dsg, the blue, red, and green shaded areas in the upper left panel illustrate the energy ranges for the observations or limits using XRT/XMM/NICER (0.3-10 keV), _Fermi_ LAT (0.1-800 GeV) and High-Altitude Water Cherenkov Observatory (HAWC, 0.3-100 TeV) (see Ayala & HAWC Collaboration, 2019; Stein et al., 2021; Cannizzaro et al., 2021; Reusch et al., 2022; Predehl et al., 2021, and references therein). The corresponding light curves are shown in the upper right panel. The orange and green data points are the energy-integrated upper limits obtained by _Fermi_ and HAWC, whereas the XRT/XMM/NICER observations are plotted as blue points. The _Fermi_ \(\gamma\)-ray upper limits are obtained in three time slots, \(t-t_{\rm pk}\sim-20-100~{}{\rm d},~{}100-200~{}{\rm d}\) and \(200-500~{}{\rm d}\). The upper limit in the first time slot is more stringent according to the upper right panel of Fig. 3 and will be used in Sec. 4 for the constraints on the \(R-E_{p,\rm max}\) plane. The multiwavelength light curves indicate that the \(\gamma\)-ray upper limits are satisfied in the M-IR scenario, although the limits are not far away from the predictions. This also implies that much higher predicted neutrino fluxes would contradict the \(\gamma\)-ray limits.
For reference purposes, we include the thermal OUV light curve, e.g., \(L_{\rm OUV}/4\pi d_{L}^{2}\), as the black dashed curve. The peaks of EM cascade light curves appear roughly \(30-50\) days after the OUV peak, which is commonly referred to as the time delay. In the M-IR scenario, where the radiation zone is calorimetric but optically thin to \(p\gamma\) interactions (\(\tau_{p\gamma}^{\rm fs}<1\)), the time delay observed in the EM cascade emissions, primarily initiated via \(p\gamma\) processes, can be interpreted as the \(p\gamma\) time scale, e.g., \(t_{p\gamma}\sim 30-50\) days2. This reflects the time-dependent nature of EM cascade as a redistribution of energy via a series of successive radiation processes. Hence, if future observed TDEs exhibit time delays in their X-ray and \(\gamma\)-ray light curves, one can employ \(p\gamma\) optically-thin models to interpret and understand such temporal signatures. Furthermore, we observe that the orange light curve is additionally delayed compared to the X-ray and VHE \(\gamma\)-ray light curves. In the energy range of 0.1 - 800 GeV, \(\gamma\gamma\) attenuation is predominantly influenced by OUV as well as X-ray photons. Close to the peak time of the OUV blackbody light curve, where the OUV photon density is higher, the resulting broadband integrated \(\gamma\)-ray flux is more heavily attenuated. This could lead to the later emergence of the \(\gamma\)-ray peak.
Footnote 2: Typically, one should take into account the rates of \(\gamma\gamma\) attenuation, pion/muon decay, and secondary electron synchrotron/inverse Compton to estimate cascade time scale. In this calculation, the interactions mentioned here are much faster than \(p\gamma\) process and do not contribute significantly to the time delay. We show the interaction rates for secondary processes in Appendix A to support this statement.
The rapid (exponential) decay of observed X-ray light curve in the time interval \(0\lesssim t-t_{\rm pk}\lesssim 50~{}{\rm d}\) (illustrated as the blue dashed line in the upper right panel of Fig. 3) might be caused by the cooling of the accretion disk or by dust obscuration. Interestingly, there is an additional X-ray (Swift-XRT) data point around \(t-t_{\rm pk}=100~{}{\rm d}\) which was identified in Cannizzaro et al. (2021) and does not fit this rapid decay picture. Notably, the X-ray cascade emission can describe that data point, which means that Swift may have actually seen an electromagnetic cascade signature there. One may speculate that if the populations of neutrino-emitting TDEs and X-ray bright TDEs have overlap, the often complicated and puzzling behavior of X-ray observations may in some cases be explained by the contribution of the EM cascade, which can be substantially delayed with respect to the BB peak (see e.g. example for AT2019fdr below). The identification of a spectral hardening in the X-ray range (see upper left panel) may provide future evidence.
Our calculations also provide the neutrino spectra and light curves, as shown in the magenta curves in Fig. 3. Since \(p\gamma\) interactions dominate the neutrino production compared to the \(pp\) process, we show only the single-flavor neutrino spectra obtained from the \(p\gamma\) channel in the left column. The black arrows indicate the most-likely energies of the coincident neutrinos. From the upper right panel, we observe that the energy-integrated neutrino flux, \(L_{\nu}/4\pi d_{L}^{2}\), exhibits a similar time delay compared to the X-ray light curve. The reason is that, in the M-IR scenario of AT2019dsg, despite the radiation zone being nearly calorimetric to UHECRs, i.e., \(\tau_{p\gamma}^{\rm cal}=t_{p\gamma}^{-1}/t_{p,\rm esc}^{-1}\sim 1\), the \(p\gamma\) interaction rate is lower than the free streaming rate (see the left panel of Fig. 2).
Correspondingly, the results for AT2019fdr are displayed in the lower row of Fig. 3. The areas, data points, and curves have the same interpretation as in the AT2019dsg cases. However, in this case, the deep blue areas, data points, and curves specifically represent the observations and predictions in the energy range of 0.3\(-\)2 keV, associated with the eROSITA upper limits. The upper limit on \(\gamma\)-ray emissions is only measured within the time interval \(-20~{}{\rm d}\lesssim{\rm t}-{\rm t}_{\rm pk}\lesssim 300~{}{\rm d}\). Additionally, X-ray follow-up observations using Swift XRT provide an upper bound around the time of the OUV peak, while the later time constraints (represented by the green points) are measured by eROSITA. The
M-IR scenario of AT2019fdr is consistent with the X-ray and \(\gamma\)-ray observational limits. However, it is challenging for the EM cascade models to explain the late-time X-ray peak observed at \(t-t_{\rm pk}\simeq 600\) days by eROSITA. This particular signature may be produced by other mechanisms or regions that are distinct from the neutrino-emitting zones discussed here, especially considering that it appears roughly 300 days after the detection of neutrinos.
Time delays in the X-ray and \(\gamma\)-ray light curves are also observed, and can be attributed jointly to the \(p\gamma\) interaction times and the peak time of the IR light curve. Since a larger radius of the radiation zone, e.g., \(R=2.5\times 10^{17}\) cm, is used for AT2019fdr, compared to that of AT2019dsg (\(R=5\times 10^{16}\) cm), a prolonged time delay of approximately 100\(-\)200 days is expected due to the decreasing \(p\gamma\) interaction rate3. Moreover, the IR light curve of AT2019fdr exhibits a strong peak at around 300 days, compared to the relatively flat IR light curve of AT2019dsg. This peak in the photon distribution, combined with the \(t_{p\gamma}\) value, contributes to the time delays observed in X-ray and \(\gamma\)-ray emissions. Unlike the case of AT2019dsg, the neutrino light curve of AT2019fdr no longer simply follows the time dependence of OUV luminosity. The reason is that the neutrino production is determined not only by the \(p\gamma\) interaction rate but also the in-source proton distributions. For AT2019fdr, the protons are more strongly confined within the radiation zone in comparison to AT2019dsg. As a consequence, there is a buildup or "piling-up" of
Figure 4: Electromagnetic cascade spectra and neutrino spectra at \(t_{\nu}\) (left column), and neutrino, \(\gamma\)-ray and X-ray light curves (right column), for the M-OUV model. The curves, areas, and data points have the same meaning as in Fig. 3.
protons, which enhances the late-time neutrino production and leads to the time delay4.
Footnote 4: Here, we only provide a qualitative explanation for the time dependence of neutrino emission, as this paper primarily focuses on the EM cascades. A more quantitative discussion for the neutrino light curves of AT2019dsg and AT2019fdr can be found in Winter and Lunardini (2023).
In summary, in the M-IR scenario where the radiation zone can extend to the dust radius, the source is typically \(p\gamma\) optically thin. This leads to time delays in the EM cascade light curves. One advantage of this scenario is that the predicted X-ray and \(\gamma\)-ray emissions comply well with the observational upper limits, as shown in the right column of Fig. 3. Remarkably, the M-IR scenario of AT2019dsg is capable of explaining the X-ray observations during the time interval \(50\) d \(\lesssim t-t_{\rm pk}\lesssim 100\) d.
### M-OUV: compact radiation zone (\(R\ll R_{\rm IR}\))
Here we consider more compact radiation zones in the M-OUV scenarios and discuss the EM cascade signatures in the \(p\gamma\) optically thick cases, for AT2019dsg and AT2019fdr. Similar to Fig. 3, the SEDs and light curves for the M-OUV scenarios are demonstrated in Fig. 4. The parameters used here are summarized in Tab. 1. We find that the neutrino peak fluxes in the M-OUV scenarios for both AT2019dsg and AT2019fdr are significantly higher compared to the M-IR scenarios. This enhancement is attributed to the efficient \(p\gamma\) interactions in the M-OUV scenarios, as indicated in the right panel of Fig. 2, facilitated by the much denser target photons. As a result, the neutrino fluxes in the M-OUV scenarios are enhanced by a factor of \(2-3\) compared to those in the M-IR scenarios. Additionally, the peaks of the neutrino spectra closely align with the energies of the coincident neutrinos. This, in combination with the high rate of efficient \(p\gamma\) interactions, makes TDEs in M-OUV scenarios promising neutrino emitters. From the right column, we find that the neutrino light curves exhibit the same shape as the OUV luminosities (\(L_{\rm OUV}/4\pi d_{L}^{2}\) is not shown for simplicity). This can be straightforwardly interpreted by the optically thick nature of the radiation zones, i.e., the protons interact very quickly before they can escape even compared to the size of the region, where the condition \(\tau_{p\gamma}^{\rm fs}=t_{p\gamma}^{-1}/t_{p,{\rm fs}}^{-1}>1\) is satisfied, as depicted in the right panel of Fig. 2. Consequently, no significant time delays are observed.
The EM cascade fluxes, similar to the neutrinos, are enhanced as well, particularly at the peak time \(t_{\rm pk}\), and there are no significant time delays with respect to the OUV curve. For AT2019dsg, the upper right panel of Fig. 4 demonstrates that the _Fermi_\(\gamma\)-ray upper limits are violated in the M-OUV scenarios with the radius of \(R=5\times 10^{14}\) cm. It is worth noting that when the radius is increased to \(10^{15}-10^{16}\) cm, the peak flux shows a significant drop, and the upper limits from _Fermi_ observations are no longer violated. This is observed in the M-OUV scenario of AT2019fdr, as shown in the lower right panel. In this discussion, we provide an intuitive conclusion regarding the _Fermi_ constraints. A more detailed analysis of the implications of these limits on the \(R-E_{p,{\rm max}}\) plane will be presented in Sec. 4.
One common feature of the EM cascade SEDs in the M-IR and M-OUV scenarios is that the '\(pp\)/\(p\gamma-\)SY/IC' (blue dashed curves) and '\(\gamma\gamma-\)SY/IC' (orange dashed-dotted curves) components in the left columns of Figs. 3 and 4 give rise to the low-energy (\(\sim 0.1-10\) MeV) and high-energy (\(\sim 0.1-10\) GeV) humps, respectively. Noting that in \(p\gamma\) channel around 5% of the proton energy is inherited by secondary leptons, we estimate the characteristic Lorentz factor of electrons and positrons to be \(\gamma_{e,p\gamma}\simeq(E_{p,{\rm cal}}/20)/(m_{e}c^{2})\sim 10^{8}(E_{p,{\rm cal }}/(10^{6}\,{\rm GeV}))\), where \(E_{p,{\rm cal}}\simeq 10^{5}-10^{6}\) GeV is the proton energy at which the radiation zone becomes calorimetric, e.g., \(t_{p\gamma}^{-1}\sim t_{\rm esc}^{-1}\). We then infer the peak energy of synchrotron radiation from \(p\gamma\) electrons and positrons
\[E_{p\gamma,{\rm sy}}=\frac{3}{4\pi}h\gamma_{e,p\gamma}^{2}\frac{eB}{m_{e}c} \simeq 17\ {\rm MeV}\left(\frac{E_{p,{\rm cal}}}{10^{6}\ {\rm GeV}}\right)^{2}, \tag{7}\]
where the magnetic field strength \(B=0.1\) G is used. The corresponding inverse Compton components are typically invisible due to \(\gamma\gamma\) absorption. Regarding the '\(\gamma\gamma-\)SY/IC' channel, the interpretation is more complicated since the \(\gamma\gamma\) cross section exhibits slow decay beyond the threshold, suggesting a wide energy range of absorption. From the EM cascade spectra, we observe that the strongest absorption occurs close to the peak energies of \(\gamma\)-rays from \(\pi_{0}\) decays, as predicted by \(E_{\gamma\gamma,{\rm max}}\sim E_{p,{\rm th}}/10\), where \(E_{p,{\rm th}}\) is defined in Eq. (6) and we consider that approximately 10% of the proton energy is transferred to \(\gamma\)-rays via the \(\pi_{0}\rightarrow\gamma\gamma\) channel. The peak energy of the resulting synchrotron emission, produced by electrons of \(E_{\gamma\gamma,{\rm max}}/2\), is given by \(E_{\gamma\gamma,{\rm sy}}\sim{\cal O}(1\ {\rm TeV})(E_{\gamma}/0.1{\rm eV})^{-2}\), where \(E_{\gamma}\) is the energy of target photons. In this case, the synchrotron emission in the '\(\gamma\gamma-\)SY/IC' channel suffers from attenuation before reaching its maximum \(E_{\gamma\gamma,{\rm sy}}\), which explains the sharp spikes appearing in the EM cascade spectra. This semi-theoretical estimation justifies the numerical results shown in Figs. 3 and 4.
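As a cross-check of Eq. (7), the peak synchrotron energy of the \(p\gamma\) secondaries can be evaluated numerically. The following Python snippet is a minimal sketch in cgs units (not part of our numerical cascade code) that reproduces the \(\sim 17\) MeV value quoted above for \(E_{p,{\rm cal}}=10^{6}\) GeV and \(B=0.1\) G.

```python
import numpy as np

# Physical constants in cgs units
h = 6.626e-27        # Planck constant [erg s]
e = 4.803e-10        # elementary charge [esu]
m_e = 9.109e-28      # electron mass [g]
c = 2.998e10         # speed of light [cm/s]
erg_per_MeV = 1.602e-6
eV_per_GeV = 1e9

def sync_peak_energy_MeV(E_p_cal_GeV, B_gauss):
    """Peak synchrotron energy of p-gamma secondary electrons/positrons, Eq. (7)."""
    # ~5% of the proton energy is inherited by the secondary lepton, i.e. E_e = E_p / 20
    E_e_eV = E_p_cal_GeV * eV_per_GeV / 20.0
    gamma_e = E_e_eV / 5.11e5                      # electron Lorentz factor
    E_sy_erg = 3.0 / (4.0 * np.pi) * h * gamma_e**2 * e * B_gauss / (m_e * c)
    return E_sy_erg / erg_per_MeV

# Reproduces the ~17 MeV estimate for E_p,cal = 1e6 GeV and B = 0.1 G
print(f"E_sy ~ {sync_peak_energy_MeV(1e6, 0.1):.1f} MeV")
```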
## 4 \(\gamma\)-ray constraints on \(E_{p,{\rm max}}\) and \(R\)
In the previous sections, we examined EM cascade and neutrino emissions in two special scenarios, M-IR
and M-OUV, for AT2019dsg and AT2019fdr. It is useful to investigate the model-predicted neutrino numbers as a function of \(R\) and \(E_{p,\rm max}\) and explore the potential constraints on source parameters using _Fermi_\(\gamma\)-ray upper limits.
We choose \(R\in[5\times 10^{14}~{}{\rm cm},10^{18}~{}{\rm cm}]\) (\([10^{15}~{}{\rm cm},10^{18}~{}{\rm cm}]\) for AT2019fdr) and \(E_{p,\rm max}\in[10^{6}~{}{\rm GeV},10^{10}~{}{\rm GeV}]\) as the fiducial ranges to allow for a continuous interpolation among the different models. From the left panels of Figs. 3 and 4, we notice that the _Fermi_\(\gamma\)-ray constraints are more stringent compared to the X-ray and HAWC \(\gamma\)-ray observations. Hence, we use the _Fermi_ constraints in the energy range \(0.1-800~{}{\rm GeV}\) to study the permissible parameter sets that are compatible with observations.
Our main results for parameter constraints are illustrated in Fig. 5. The upper and lower rows correspond to the TDEs AT2019dsg and AT2019fdr, whereas the magnetic fields of \(B=0.1~{}{\rm G}\) and \(1.0~{}{\rm G}\) are used in the left and right columns respectively to demonstrate the impact of different magnetic field strengths. The M-OUV and M-IR scenarios are indicated by blue and red stars, respectively. We also include the parameters (e.g., \(R=5\times 10^{15}~{}{\rm cm},~{}E_{p,\rm max}=5\times 10^{6}~{}{\rm GeV}\) and \(B=1.0~{}{\rm G}\)) for the M-X scenarios discussed in Winter and Lunardini (2023) as the magenta stars. The red contours represent the _Fermi_ upper limits (denoted as '_Fermi_ UL'). For a TDE radiation zone described by the parameter sets above the red contours, the EM cascade emission will overshoot the _Fermi_ upper limits. This conclusion
Figure 5: Number of expected single-flavor neutrino events as a function of \(R\) and \(E_{p,\rm max}\) as given by the color scales; we use the IceCube Gamma-Ray Follow-up (GFU) effective areas to compute the neutrino numbers. The panels correspond to AT2019dsg (upper row) and AT2019fdr (lower row), and magnetic field strengths \(B=0.1~{}{\rm G}\) (left column) and \(1~{}{\rm G}\) (right column) are used. The _Fermi_\(\gamma\)-ray upper limits on the \(R-E_{p,\rm max}\) plane are shown as red curves (the parameters in the upper left corners are excluded), and the regions where the optical thickness \(\tau_{p\gamma}^{\rm fs}(E_{p\gamma,\rm max})=1\) are shown as blue dashed curves (the parameters in the upper left corners are optically thick); here the \(p\gamma\) optical depth is evaluated at \(t_{\nu}\) and \(E_{p\gamma,\rm max}\) and the definition of \(E_{p\gamma,\rm max}\) can be found in the main text. Moreover, the M-X, M-IR and M-OUV models are marked by stars.
is consistent with the results in Sec. 3, which show that the compact radiation zones violate the upper limits.
To understand the compactness of the radiation zone, we present the critical conditions for the source to be \(p\gamma\) optically thick in the dashed blue contours as well. These conditions are obtained through \(\tau_{p\gamma}^{\rm fs}(E_{p\gamma,{\rm max}})=1\), where \(E_{p\gamma,{\rm max}}<E_{p,{\rm max}}\) is the proton energy at which \(t_{p\gamma}^{-1}\) reaches its maximum. We find that there exists a critical radius beyond which the system can no longer be \(p\gamma\) optically thick even for UHE protons. The reason is that \(t_{p\gamma}^{-1}\) is limited by the maximum target photon density and the optical depth decreases with respect to \(R\), e.g., \(\tau_{p\gamma}^{\rm fs}=t_{p\gamma}^{-1}/t_{\rm fs}^{-1}\propto R^{-1}\), as \(t_{p\gamma}^{-1}\propto R^{-2}\) and \(t_{\rm fs}^{-1}\propto R^{-1}\). Above the blue dashed lines, the EM cascade emissions do not exhibit significant time delays due to the faster secondary radiation processes compared to photon free streaming. However, below the blue dashed line, a time delay on the scale of \(t_{p\gamma}\) can be expected.
To discuss the implications of _Fermi_\(\gamma\)-ray constraints on neutrino detection, we show the model-predicted neutrino numbers as a function of \(R\) and \(E_{p,{\rm max}}\) in the form of meshed color maps with the red contours in Fig. 5. The IceCube gamma-ray follow-up (GFU)5 neutrino numbers can be estimated via
Footnote 5: The GFU neutrino numbers, estimated using IceCube GFU effective areas (IceCube Collaboration et al., 2016), are more suitable when comparing the model predictions to actual follow-up observations, whereas point source (PS) neutrino numbers are typically used for independent point-source analyses.
\[N_{\nu,\mu}({\rm GFU})=\int dE_{\nu}A_{\rm eff}(E_{\nu})F_{\nu}(E_{\nu},t_{ \nu}) \tag{8}\]
where \(F_{\nu}(E_{\nu},t_{\nu})=\int^{t_{\nu}}dt{\cal F}_{\nu}(E_{\nu},t)\) is the cumulative neutrino fluence until \(t_{\nu}\) and \(A_{\rm eff}\) is the IceCube GFU effective area. The black contours in Fig. 5 indicate the single-flavor neutrino numbers \(N_{\nu,\mu}({\rm GFU})=1,\ 0.1\), and \(0.01\). From this figure, we find that to avoid overshooting the _Fermi_\(\gamma\)-ray upper limits, the model-predicted neutrino numbers detected by IceCube are constrained to be \(N_{\nu,\mu}({\rm GFU})\lesssim 0.1\). This conclusion applies to both cases with \(B=0.1\) G (left column) and \(1.0\) G (right column). Specifically, our findings indicate that the M-X scenarios share similarities with the M-IR cases. Both scenarios are situated in the \(p\gamma\) optically thin regime, and they both predict a similar neutrino event rate and satisfy the upper limits set by the _Fermi_ observations.
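For illustration, Eq. (8) reduces to a simple quadrature once the effective area and the cumulative fluence are tabulated on a common energy grid. In the sketch below, both the power-law effective area and the \(E^{-2}\) fluence are toy placeholders chosen only to show the bookkeeping; they are not the actual IceCube GFU response nor a fit to our models.

```python
import numpy as np

def expected_neutrino_number(E_nu, A_eff, F_nu):
    """Eq. (8): N = ∫ dE A_eff(E) F(E), evaluated on a discrete energy grid.

    E_nu  : energy grid [GeV]
    A_eff : effective area on the grid [cm^2]
    F_nu  : cumulative muon-neutrino fluence on the grid [GeV^-1 cm^-2]
    """
    return np.trapz(A_eff * F_nu, E_nu)

# --- illustrative placeholders (NOT the real IceCube GFU response or a model fit) ---
E_nu = np.logspace(3, 8, 200)                           # 1 TeV - 100 PeV
A_eff = 10.0 * (E_nu / 1e5) ** 0.5                      # toy power-law effective area [cm^2]
norm = 1e-4                                             # toy fluence normalization at 100 TeV
F_nu = norm * (E_nu / 1e5) ** -2 * np.exp(-E_nu / 1e7)  # toy E^-2 fluence with a cutoff

print(f"N_nu(GFU) ~ {expected_neutrino_number(E_nu, A_eff, F_nu):.3f}")
```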
When comparing the left column with the right column, it is worth noting that a stronger magnetic field would result in a higher predicted neutrino number, assuming fixed source parameters. Additionally, a stronger magnetic field would also increase the likelihood of exceeding the \(\gamma\)-ray limits due to enhanced confinement of charged particles within the radiation zones. This enhanced confinement would lead to more efficient \(p\gamma\) interactions and subsequent EM cascades.
From a multi-messenger perspective, we observe a correlation among three regions: high optical thickness for \(p\gamma\) interactions, high rates of neutrino events, and high fluxes of \(\gamma\)-rays. Specifically, high neutrino event rates indicate the presence of intense EM cascades, which are facilitated by efficient and rapid \(p\gamma\) interactions. The fact that time delays in neutrino signals with respect to OUV peaks are observed while no \(\gamma\)-ray induced by EM cascades has been seen implies that the neutrino production cannot be too efficient, e.g., the predicted upper limit for neutrino event rates from _Fermi_ constraints ranges between \(0.01\) and \(0.1\) events per TDE, as one can read off from Fig. 5, and the system is probably optically thin for \(p\gamma\) interactions. Future multi-messenger observations, in particular incorporating \(\gamma\)-ray measurements, could shed more light on the radiation processes and physical conditions in the radiation zones.
## 5 Discussion
### EM cascades from TDEs
As noted before in Sec. 3, the EM cascade emissions exhibit some commonalities across different scenarios and TDEs. One prominent feature of the SEDs is the deep dip in the energy range approximately from 10-100 GeV to 1-10 PeV, which is a result of the in-source \(\gamma\gamma\) attenuation with thermal IR, OUV, and X-ray photons. Therefore, only a part of the whole EM cascade spectra, ranging from approximately 1-10 keV to 1-10 GeV, can be measured by eROSITA, Swift XRT, XMM-Newton, NICER, and _Fermi_ LAT in the observer's frame. Additionally, the multi-wavelength light curves of the EM cascades shown in the right columns of Figs. 3 and 4 also display unique signatures with respect to the IR, OUV and X-ray BB emissions. For instance, in M-IR cases where the \(p\gamma\) interactions are slow, the peaks of EM cascade light curves usually emerge after the OUV peak as well, leading to the time delays. More generally, similar time delays are also expected if the radiation zone has a large radius or the maximum energy of injected protons is low, as indicated by the regions below the blue dashed lines in Fig. 5. In this case, X-ray and \(\gamma\)-ray follow-ups after the identification of OUV peaks would reveal these delayed emissions. The fully time-dependent treatment in this paper provides a more accurate modeling for the neutrino and EM cascade emissions compared to steady-state approximations, particularly in cases where the interactions are efficient (calorimetric) but not sufficiently
fast (\(p\gamma\) optically thin). On the other hand, if the radiation zone is compact and \(E_{p,\rm max}\) is high (see the regions above the blue dashed lines in Fig. 5), the system can be \(p\gamma\) optically thick and the light curves of EM cascade emissions reach their peak roughly at the same time with the neutrino and thermal OUV light curves. In both cases, the X-ray and \(\gamma\)-ray fluxes can reach the detectable levels, making the follow-up observations possible.
In addition to the neutrino-coincident TDEs AT2019dsg and AT2019fdr, whose neutrino counterparts are identified via follow-up searches, AT2019aalc stands out with the highest dust echo flux among all ZTF transients and is also likely to be associated with the neutrino event IC 191119A. The SMBH mass of AT2019aalc is comparable to that of AT2019fdr, while the _Fermi_ limits and the time difference of neutrino arrival measured relative to the OUV peak are similar to those of AT2019dsg. However, no significant X-ray limits or observations for AT2019aalc have been reported. Winter & Lunardini (2023) systematically studied the neutrino emissions from all three neutrino-coincident TDEs and confirmed the similarities between them. Although not shown explicitly in this paper, we investigated the EM cascade emissions from AT2019aalc as well and reached conclusions similar to those for AT2019dsg and AT2019fdr. For example, high neutrino event rates imply strong EM cascades and violation of _Fermi_ upper limits. Specifically, the M-OUV scenario of AT2019aalc would also overshoot the _Fermi_ upper limits obtained from 207 days of observations after the discovery of the optical emission. This, on the other hand, demonstrates that our model can be widely applied to TDEs.
### Leptonic loading in TDEs
The EM cascade emissions discussed in this work are initiated by hadronic processes without taking into account the injection of primary electrons or positrons. In practice, we also expect the acceleration of leptons at the acceleration sites where protons are energized. However, the accretion power converted to leptons is typically less than that of protons. The leptonic loading factor \((K_{e/p})\), defined as the ratio of the luminosities of injected electrons/positrons to protons, is typically less than unity, e.g., \(K_{e/p}\equiv L_{e}/L_{p}<1\). Simulations of non-relativistic diffusive shock acceleration in supernova remnants have demonstrated that the efficiency of electron energy deposition, compared to proton acceleration, is at least two orders of magnitude lower, e.g., \(K_{e/p}\lesssim 10^{-2}\) (see, e.g., Jones, 2011; Morlino & Caprioli, 2012). Furthermore, particle-in-cell simulations give a similar value, e.g., \(K_{e/p}\simeq 10^{-3}\) (Katz & Waxman, 2008; Caprioli & Spitkovsky, 2014; Park et al., 2015). For relativistic shocks, a leptonic loading factor on the order of \(K_{e/p}\sim\mathcal{O}(10^{-4}-1)\) is widely used in active galactic nucleus (AGN) models (roughly corresponding to the baryon loading factor \(\xi_{\rm cr}\sim 1-10^{4}\), see e.g., Murase et al., 2014; Kang et al., 2014; Gao et al., 2019; Petropoulou et al., 2020). Therefore, in order to obtain a more comprehensive understanding of the radiation zone, we compare the EM cascade spectra with the synchrotron and inverse Compton emissions (in the synchrotron self-Compton picture) from leptonic loading by varying \(K_{e/p}\) from \(10^{-4}\) to the extreme value \(K_{e/p}=1.0\). We consider a power law electron/positron injection \(Q_{e}\propto\gamma_{e}^{-2}\exp(-\gamma_{e}/\gamma_{e,\rm max})\) with the minimum and maximum Lorentz factors, \(\gamma_{e,\rm min}=300\) and \(\gamma_{e,\rm max}=10^{5}\), comparable to AGNs (e.g., Bottcher et al., 2013). Similarly, the injected electron distributions can be normalized via \(\int(\gamma_{e}m_{e}c^{2})Q_{e}d\gamma_{e}=L_{e}/V\). Given the magnetic field strength \(B=0.1\) G, we estimate the electron cooling Lorentz factor to be \(\gamma_{e,c}\simeq 6\pi m_{e}c/(t_{\rm fs}\sigma_{T}B^{2})\sim 4\times 10^{4}\), where \(\sigma_{T}\) is the Thomson cross section. The cooling Lorentz factor is significantly larger than \(\gamma_{e,\rm min}\), which indicates that the electrons are in the slow-cooling regime. The presence of slow-cooling electrons together with a relatively weak magnetic field would result in an
Figure 6: Comparison of EM cascade emission (black solid curve) with the leptonic contribution (dashed curves). The shaded areas have the same meaning as those in the upper left panel of Fig. 3. To obtain the EM cascade spectrum, the parameters used in M-IR scenario of AT2019dsg are applied (e.g., \(B=0.1\) G). From the upper to lower dashed curves, the ratio of injected electron luminosity to the proton luminosity, \(K_{e/p}\equiv L_{e}/L_{p}\), varies from 1 to \(10^{-4}\). The blue and orange dotted curves show the synchrotron and inverse Compton components of the \(L_{e}/L_{p}=1\) case.
enhanced inverse Compton component compared to the synchrotron emission.
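The slow-cooling estimate above is straightforward to reproduce. The sketch below evaluates \(\gamma_{e,c}\simeq 6\pi m_{e}c/(t_{\rm fs}\sigma_{T}B^{2})\) with the free-streaming time approximated as \(t_{\rm fs}=R/c\) (an assumption for this back-of-the-envelope check) for the M-IR parameters of AT2019dsg.

```python
import numpy as np

# cgs constants
m_e = 9.109e-28       # electron mass [g]
c = 2.998e10          # speed of light [cm/s]
sigma_T = 6.652e-25   # Thomson cross section [cm^2]

def cooling_lorentz_factor(B_gauss, R_cm):
    """Synchrotron cooling Lorentz factor gamma_e,c ~ 6 pi m_e c / (t_fs sigma_T B^2),
    with the free-streaming time approximated as t_fs = R / c."""
    t_fs = R_cm / c
    return 6.0 * np.pi * m_e * c / (t_fs * sigma_T * B_gauss**2)

# M-IR parameters of AT2019dsg: B = 0.1 G, R = 5e16 cm  ->  gamma_e,c ~ 4e4 >> gamma_e,min = 300
gamma_c = cooling_lorentz_factor(0.1, 5e16)
print(f"gamma_e,c ~ {gamma_c:.1e}  (slow cooling since gamma_e,min = 300 < gamma_e,c)")
```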
In the M-IR case of AT2019dsg, the EM emissions from leptonic injections and EM cascades are illustrated as the dashed and black solid curves in Fig. 6 (see captions for the meaning of each curve). We find that the leptonic injection predicts a broader spectrum from radio to \(\gamma\)-ray bands. Moreover, the leptonic contribution (especially the inverse Compton component) is comparable to the EM cascade for a moderate leptonic loading factor \(K_{e/p}\simeq 0.01\). Less efficient leptonic loading (\(K_{e/p}\lesssim 0.01\)) implies that the EM cascade initiated by hadronic processes could dominate the EM emissions6. Meanwhile, the contributions from primary electron/positron injections could become increasingly important for efficient leptonic loading with \(K_{e/p}\gtrsim 0.01\). If we take the _Fermi_ upper limits into account, the leptonic loading factor is constrained to be \(K_{e/p}\lesssim 0.01\), which falls within the expected ranges from particle acceleration simulations or the AGN models. Near this upper limit, relying solely on leptonic models or EM cascade models with purely hadronic origins is insufficient to comprehend the multi-messenger emissions from TDEs, emphasizing the necessity of a lepto-hadronic model. One caveat in this discussion is that the limits on \(K_{e/p}\) depend on the assumptions of the electron/positron distributions. If a higher \(\gamma_{e,\rm min}\) or a stronger magnetic field is used, the electrons may be in the fast-cooling regime, and the synchrotron component would become increasingly important, leading to more stringent X-ray constraints. A more quantitative discussion of leptonic injections is beyond the scope of this work and will be provided in future lepto-hadronic models of TDEs.
Footnote 6: This conclusion is consistent with the secondary emissions in merging galaxies (Yuan et al., 2019).
## 6 Summary and Conclusions
In this work, we have presented a comprehensive examination of the quasi-isotropic EM cascade emissions in TDE radiation zones in a fully time-dependent manner and investigated the interconnections between the neutrino and EM cascade counterparts. We have demonstrated that in the observer's frame, EM cascades can be measured in the X-ray to \(\gamma\)-ray ranges at the OUV peak time or exhibit a time delay depending on the \(p\gamma\) optical thickness. For instance, if the \(p\gamma\) system is optically thin and the luminosity of target photons reaches its maximum at a later time, the peak of EM cascade emission appears after the OUV peak. The resulting time delay is jointly determined by the timescale of \(p\gamma\) interactions and the peak time of target photons, spanning from tens to hundreds of days. In cases where the target photon evolution with time is negligible, such as the M-IR scenario of AT2019dsg, the time delay can be solely attributed to the \(p\gamma\) interaction time.
For AT2019dsg, an X-ray data point measured around 100 days after the OUV peak, which is potentially incompatible with the early exponential decay of the X-ray flux, can be described by the X-ray cascade emission in the M-IR scenario, suggesting that the EM cascade emissions could be detectable for Swift-XRT, XMM-Newton, and NICER. Additionally, we find that if no \(\gamma\)-ray from the TDEs is seen, the source parameters, such as the radii of the radiation zones and the maximum energies of injected protons, can be stringently constrained. Specifically, for AT2019dsg and AT2019fdr, the _Fermi_\(\gamma\)-ray upper limits imply that the source may not be a very efficient neutrino emitter, and the radiation zone is likely \(p\gamma\) optically thin, where the predicted neutrino event rate (i.e., 0.01-0.1 neutrino events per TDE) is consistent with the expected neutrino numbers from GFU searches (Stein et al., 2021; Reusch et al., 2022).
The joint analysis of the EM cascades and neutrino counterparts presented in this paper provides an intriguing template for the multi-messenger studies of TDEs. Future observations of the SEDs and light curves of X-ray, \(\gamma\)-ray, and neutrino emissions will allow us to infer the physical conditions of the radiation zones and test our EM cascade models. In addition to the TDEs considered in this paper, the quasi-isotropic EM cascade models can be widely applied to more TDEs that do not exhibit a direct connection to relativistic jets. Moreover, EM cascades could play a significant role in jetted TDEs as well, thereby highlighting the utility of coherent lepto-hadronic modeling. Detailed discussions on this topic will be presented in our forthcoming works.
We would like to thank Maria Petropoulou for useful comments during the Lepto-Hadronic Workshop in Bochum, Germany, and Claire Guepin for questions motivating this work. We also extend our appreciation to Steffen Hallmann and Xin-Yue Shi for their thorough internal review.
## Appendix A Interaction Rates for Secondary Particles
In Fig. 7, we present the interaction rates of secondary particles contributing to the cascade emission, and compare them to the rate of the \(p\gamma\) processes (depicted by the black solid curve) that initiate the electromagnetic (EM) cascade. These curves are derived within the M-IR scenario of AT2019dsg, at the neutrino detection time \(t_{\nu}\). The decay rates of muons, for instance, \(t_{\mu,{\rm dec}}^{-1}=(\gamma_{\mu}t_{\mu})^{-1}\), and charged pions, such as \(t_{\pi^{\pm},{\rm dec}}^{-1}=(\gamma_{\pi}t_{\pi^{\pm}})^{-1}\), are represented as magenta and green lines, respectively. Here, \(\gamma_{\mu}\equiv E_{\mu}/(m_{\mu}c^{2})\) stands for the Lorentz factor of muons with energy \(E_{\mu}\), \(t_{\mu}\) is the muon rest-frame lifetime, and corresponding quantities are defined for charged pions. The decay of neutral pions is exceedingly rapid and can be treated as an instantaneous process. We illustrate the synchrotron and inverse Compton rates for secondary electrons/positrons originating from \(\gamma\gamma\) annihilation and muon decays using blue and orange dashed curves. Additionally, the red dash-dotted curve represents the \(\gamma\gamma\) attenuation rate.
In general, the radiation rate for cascade emissions can be estimated as the summation of all interaction rates that build up the EM cascade. From Fig. 7, we notice that the decay rates and interaction rates of all secondary particles are much larger than the \(p\gamma\) rate. In particular, the \(\gamma\gamma\) and \(p\gamma\) rates at their respective threshold energies can be connected via (e.g., Murase et al., 2016; Yuan et al., 2022)
\[\frac{t_{\gamma\gamma}^{-1}}{t_{p\gamma}^{-1}}\approx\frac{\sigma_{T}}{\sigma_ {p\gamma}}\frac{t_{\rm fs}^{-1}}{t_{p,{\rm esc}}^{-1}}\sim 10^{3}\frac{t_{ \rm fs}^{-1}}{t_{p,{\rm esc}}^{-1}}\gtrsim 10^{3},\] (A1)
where \(\sigma_{T}\simeq 6.65\times 10^{-25}\) cm\({}^{2}\) is the Thomson cross section, \(\sigma_{p\gamma}\sim 5\times 10^{-28}\) cm\({}^{2}\) is the \(p\gamma\) cross section at the \(\Delta\) resonance, and we applied the relation \(t_{\rm fs}^{-1}/t_{p,{\rm esc}}^{-1}\geq 1\). This analytical estimation is consistent with the curves in Fig. 7. The much faster secondary processes imply that we can use the \(p\gamma\) time scale to approximate the time scale to develop the EM cascade. This conclusion is valid for both AT2019dsg and AT2019fdr in M-IR and M-OUV scenarios.
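For completeness, the order-of-magnitude statement of Eq. (A1) follows directly from the ratio of the two cross sections; a one-line numerical check (assuming \(t_{\rm fs}^{-1}/t_{p,{\rm esc}}^{-1}=1\)) is given below.

```python
sigma_T = 6.65e-25     # Thomson cross section [cm^2]
sigma_pgamma = 5e-28   # p-gamma cross section at the Delta resonance [cm^2]

def gg_to_pg_rate_ratio(tfs_over_tesc=1.0):
    """Eq. (A1): t_gg^-1 / t_pg^-1 ~ (sigma_T / sigma_pg) * (t_fs^-1 / t_p,esc^-1)."""
    return sigma_T / sigma_pgamma * tfs_over_tesc

print(f"t_gg^-1 / t_pg^-1 >~ {gg_to_pg_rate_ratio():.0f}")  # ~1e3, so gamma-gamma attenuation is much faster
```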
|
2303.06945
|
CoGANPPIS: A Coevolution-enhanced Global Attention Neural Network for
Protein-Protein Interaction Site Prediction
|
Protein-protein interactions are of great importance in biochemical
processes. Accurate prediction of protein-protein interaction sites (PPIs) is
crucial for our understanding of biological mechanism. Although numerous
approaches have been developed recently and achieved gratifying results, there
are still two limitations: (1) Most existing models have excavated a number of
useful input features, but failed to take coevolutionary features into account,
which could provide clues for inter-residue relationships; (2) The
attention-based models only allocate attention weights for neighboring
residues, instead of doing it globally, which may limit the model's prediction
performance since some residues being far away from the target residues might
also matter.
We propose a coevolution-enhanced global attention neural network, a
sequence-based deep learning model for PPIs prediction, called CoGANPPIS.
Specifically, CoGANPPIS utilizes three layers in parallel for feature
extraction: (1) Local-level representation aggregation layer, which aggregates
the neighboring residues' features as the local feature representation; (2)
Global-level representation learning layer, which employs a novel
coevolution-enhanced global attention mechanism to allocate attention weights
to all residues on the same protein sequences; (3) Coevolutionary information
learning layer, which applies CNN & pooling to coevolutionary information to
obtain the coevolutionary profile representation. Then, the three outputs are
concatenated and passed into several fully connected layers for the final
prediction. Extensive experiments on two benchmark datasets have been
conducted, demonstrating that our proposed model achieves the state-of-the-art
performance.
|
Jiaxing Guo, Xuening Zhu, Zixin Hu, Xiaoxi Hu
|
2023-03-13T09:27:34Z
|
http://arxiv.org/abs/2303.06945v4
|
CoGANPPIS: Coevolution-enhanced Global Attention Neural Network for Protein-Protein Interaction Site Prediction
###### Abstract
Protein-protein interactions are of great importance in biochemical processes. Accurate prediction of the protein-protein interaction sites (PPIs) from protein sequences deepens our understanding of biological mechanisms and is crucial for new drug design. However, conventional experimental methods for PPIs prediction are costly and time-consuming, so many computational approaches, especially ML-based approaches, have been developed recently. Although these approaches have achieved gratifying results, there are still two limitations: (1) Most existing models have excavated a number of useful input features, but failed to take coevolutionary features into account, which could provide clues for inter-residue relationships and could be helpful for PPIs prediction; (2) The attention-based models only allocate attention weights for neighboring residues, instead of doing it globally, which may limit the model's prediction performance since some residues that are far away from the target residues in the protein sequence might also matter.
We propose a coevolution-enhanced global attention neural network, a sequence-based deep learning model for PPIs prediction, called CoGANPPIS. Specifically, CoGANPPIS utilizes three layers in parallel for feature extraction: (1) Local-level representation aggregation layer, which aggregates the neighboring residues' features as the local feature representation similar to previous studies; (2) Global-level representation learning layer, which employs a novel coevolution-enhanced global attention mechanism to allocate attention weights to all the residues on the same protein sequences; (3) Coevolutionary information learning layer, which applies CNN & pooling to coevolutionary information to obtain the coevolutionary profile representation. Then, the three outputs are concatenated and passed into several fully connected layers for the final prediction. Extensive experiments on two benchmark datasets have been conducted, demonstrating that our proposed model achieves the state-of-the-art performance. The source code is publicly available at [https://github.com/Slam1423/CoGANPPIS_source_code](https://github.com/Slam1423/CoGANPPIS_source_code).
## 1 Introduction
Proteins participate in a variety of biological processes in organisms. They rarely act alone, instead, they usually carry out various functions by interacting with different kinds of molecules, such as DNA, lipids, carbohydrates, and other proteins (Branden and Tooze, 2012; Murray et al., 2009; Ardejani et al., 2017). The process of establishing physical contacts of high specificity between two or more protein molecules is known as protein-protein interaction, which plays an important role in many biochemical processes including immune response, muscle contraction, and signal transduction. Considering the high practical and research values of PPIs prediction, many approaches have been proposed so far. There are some conventional experimental
methods, such as two-hybrid screening, relationship purification, and intragenic complementation, which are commonly applied to identify PPIs (Westermarck et al., 2013; Terentive et al., 2009; Brettner and Masel, 2012; Smallwood et al., 2002). However, these experimental methods suffer from being costly and time-consuming, so more accurate and efficient computational predictors for PPIs are of great value for biologists.
With the rapid development of computer science, a lot of computational approaches, especially ML-based approaches, have been developed, which take protein sequences or structures as input and are known as sequence-based and structure-based respectively (Hou et al., 2016). Although structure-based methods have achieved some promising progress in recent years (Gainza et al., 2020; Yuan et al., 2022; Huang et al., 2023), they may cause problems for biological researchers since the number of proteins with available structures is limited. For example, AlphaFold2 has shown promising performance in protein structure prediction, but its effectiveness on some newly-discovered proteins and some exotic proteins still remains to be tested. At the same time, its requirement for computational resources could be too high for most researchers (Jumper et al., 2021). In contrast, sequence-based methods are more practical since protein sequences are easier to obtain with the noticeable development of high-throughput techniques.
Sequence-based methods could be classified as partner-specific and non partner-specific sequence-based PPIs prediction (Casadio et al., 2022), and in this paper we focus on the latter. The partner-specific sequence-based PPIs prediction aims to identify the interaction residue pairs of two given proteins, which is not covered in the present work.
Sequence-based methods can be further classified into 2 categories: traditional machine learning approaches and deep learning approaches. The commonly-used traditional machine learning approaches include SVM (Yan et al., 2004; Wang et al., 2006; Chen and Li, 2010; Chen et al., 2012; Porollo and Meller, 2007), Naive Bayes (Yan et al., 2004; Murakami and Mizuguchi, 2010), shallow neural network (Ofran and Rost, 2003, 2007), random forest (Chen and Jeong, 2009; Northey et al., 2018; Wang et al., 2019), and logistic regression (Dhole et al., 2014; Zhang and Kurgan, 2019). However, these methods cannot capture the relationships among different residues located on the same protein sequences since they treat every residue as an independent sample.
In recent years, due to the great success of deep learning in many fields such as computer vision, speech recognition and natural language processing, models based on deep learning have also been used in PPIs prediction. Among these, DELPHI, a fine-tuned ensemble model combining recurrent neural networks and convolutional neural networks, showed significant improvement in PPIs prediction compared with traditional machine learning models (Li et al., 2021). DeepPPISP, a convolutional neural network-based model, achieved good outcomes by combining both local and global sequence contexts as input and processing them respectively (Zeng et al., 2020). On the basis of DeepPPISP, ACNN, an attention-based convolutional neural network, achieved a better understanding of the local environment of the target residues by giving different attention weights to the neighboring residues (Lu et al., 2021). HANPPIS improved the performance and interpretability by using a double-layer attention mechanism (Tang et al., 2021). Besides, ensnet_p, an ensemble model combining several neural net architectures, achieved stable and peak prediction accuracy (Stringer et al., 2022).
Conventional sequence-based input features can be roughly classified into 4 categories: raw sequence features, evolutionary information, residue physiochemical properties, and predicted structural features (Casadio et al., 2022). Raw sequence features refer to the features that can be straightly obtained by the protein sequences, the most commonly used of which are amino acid types. Evolutionary information usually refers to the position-specific scoring matrices (PSSM) as well as other conservative scores of the proteins, which are mainly calculated from multiple sequence alignments (MSA) and very informative for protein-related prediction tasks. Residue physiochemical properties (such as residue's charge and polarity) have been applied in many models in recent years, which can be obtained from databases or some specific predictors. Besides, in the absence of protein structure, the predicted structural features (such as hydrophobicity, secondary structure, disorder) can also be helpful.
Recently, another sequence-based feature, coevolutionary information based feature, has been applied to another important protein-related problem, protein contact-map prediction, and brings about significant performance improvement (Wang et al., 2017; Hanson et al., 2018; Li et al., 2021). These features are mainly obtained by direct coupling analysis (DCA) and could quantify the inter-residue coevolutionary relationships. Intuitively, for a residue, its properties and behaviours should be more similar to the residues closely related
to it, which inspires us to introduce it into our model.
To evaluate the performance of our model, we compare it with other seven sequence-based models (PSIVER, ISIS, SPRINGS, DELPHI, DeepPPISP, ACNN and ensnet_p) on two benchmark datasets Dset422 and Dset448. The experimental results show that our model achieves state-of-the-art performance for PPIs prediction. The main contributions of this paper are as follows:
(1) To the best of our knowledge, this is the first time that coevolutionary information has been introduced as input features into a deep learning based PPIs prediction model, and we verify its usefulness in PPIs prediction by ablation analysis.
(2) We propose a novel coevolution-enhanced global attention mechanism for global-level representation learning, which allocates attention weights based on a better understanding of the whole protein sequences and the coevolutionary relationships among the residues.
(3) We conduct extensive experiments on two benchmark datasets, the results of which demonstrate that CoGANPPIS achieves state-of-the-art performance.
The rest of this paper is organized as follows: Section 2 introduces the datasets and the input features. Section 3 presents the architecture of our model, the experimental results and experiment analysis. In the end, section 4 summarizes the discussion of this paper.
## 2 Materials and Methods
### Datasets
In this study, two benchmark datasets, Dset422 and Dset448, are used in experiments. Dset422 consists of Dset72 (Murakami and Mizuguchi, 2010), Dset186, and Dset164 (Singh et al., 2014), whose protein sequences were collected from the Protein Data Bank (Sussman et al., 1999). The protein sequence homology is less than 25%, and if an amino acid has an absolute solvent accessibility of less than 1 Å\({}^{2}\) before and after binding with other proteins, it will be defined as an interaction site (Zeng et al., 2020). Dset448 is sourced from the BioLip database, where residues are defined as interaction sites if the distance between an atom of this residue and an atom of a given protein partner is less than 0.5 Å plus the sum of the van der Waals radii of the two atoms (Zhang and Kurgan, 2019). First, the protein sequences were mapped into Uniprot databases to collect binding residues across different complexes. Then, they were clustered by Blastclust at 25% similarity, after which one protein was selected from each cluster to ensure that proteins in Dset448 share similarities less than 25%. Besides, a dataset of 4392 protein sequences was constructed in the paper of PIPENN (Stringer et al., 2022). Here, we name it Dset4392. The data of Dset4392 is from the BioLip Database, where binding sites are defined similarly to Dset448. We utilize it for pretraining.
For Dset422 and Dset448, we randomly divided them into training set (about 83% of randomly selected proteins), validation set (about 5% of randomly selected proteins), and test set (the remaining proteins) respectively. Consequently, for Dset422, there are 352 proteins in the training set, 21 proteins in the validation set, and 49 proteins in the test set. And for Dset448, there are 373 proteins in the training set, 22 proteins in the validation set, and 53 proteins in the test set.
We count the distribution of sequence lengths of the three datasets. As shown in Table 1, only a small proportion of protein sequences in the two datasets are longer than 500. Hence, for the convenience of model training, we preprocess the protein sequences by unifying their lengths to 500, that is, if a protein sequence is longer than 500, we truncate it to 500; if a protein sequence is shorter than 500, we pad it to 500 with zeros.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline Length range & 1-100 & 101-200 & 201-300 & 301-400 & 400-500 & 500+ & total \\ \hline Dset422 & 85(20.1\%) & 176(41.7\%) & 68(16.1\%) & 56(13.3\%) & 23(5.5\%) & 14(3.3\%) & 422(100.0\%) \\ Dset448 & 34(7.6\%) & 127(28.3\%) & 134(29.9\%) & 95(21.2\%) & 38(8.5\%) & 20(4.5\%) & 448(100.0\%) \\ Dset4392 & 345(7.9\%) & 1266(28.8\%) & 969(22.1\%) & 716(16.3\%) & 511(11.6\%) & 585(13.3\%) & 4392(100.0\%) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Statistics of protein sequence lengths
### Input Features
The features commonly used in previous research, including raw sequence features, the position-specific scoring matrix, and predicted secondary structures, are applied in our model. Besides, we also introduce coevolutionary information into our model, which, to the best of our knowledge, is the first time to be used as input features in deep learning based PPIs prediction. These features are described in detail as follows.
#### 2.2.1 Raw sequence features
In this study, we include two raw features, amino acid type and sequence length into our model. Most proteins consist of 20 different amino acids. Hence, we encode each amino acid residue as a 20D one-hot vector representing the amino acid type at this position. Besides, we utilize an integer to represent the length of the sequence for each residue.
#### 2.2.2 Position-specific scoring matrix
Position-specific scoring matrix (PSSM) contains evolutionary information, which has been shown effective for PPIs prediction (cheol Jeong et al., 2010). We perform the PSI-BLAST algorithm on each input sequence against NCBI's non-redundant sequence database with three iterations and an E-value threshold of 0.001 to obtain its PSSM, where every amino acid residue on the underlying sequence is encoded as a vector with 20 elements.
#### 2.2.3 Predicted secondary structures
NetSurfP-3.0 is a tool for predicting secondary structures from protein sequences (Hoe et al., 2022). Here we utilize it to predict relative solvent accessibility (RSA), accessible surface area (ASA), 3-state secondary structure as well as 8-state secondary structure for each residue. Each amino acid residue is encoded as a vector with 13 elements, which represents the predicted RSA, predicted ASA, and the probabilities of being the corresponding secondary structure states at the position.
#### 2.2.4 Coevolutionary information
Coevolutionary relationships between amino acid residues refers to the interdependent changes that occur in pairs of residues on the same protein sequences, which help maintain proteins' stability, function, and folding (De Juan et al., 2013). As mentioned earlier, coevolutionary information based input features brought about great performance improvement in protein-contact map prediction, inspiring us to apply it in this study.
Direct-coupling analysis (DCA) is one of the main computational approaches to capture proteins' coevolutionary information. The key idea of DCA is to disentangle direct pairwise couplings of each two amino acid residues on the same protein sequences. For each protein, DCA takes its multiple sequence alignments (MSA) as the input, which is obtained by BLASTP, and returns a \(N\times N\) matrix, where \(N\) refers to the length of this protein sequence. The \((i,j)\) element of this matrix refers to the direct coupling degree of the \(i\)th residue and the \(j\)th residue on this protein sequence. The larger this value is, the higher the coevolutionary relationship exists between these two residues. In our study of predicting whether an amino acid residue is an interaction site or not, the corresponding column of the target amino acid residue in the DCA matrix has been extracted as its coevolutionary information feature.
There are three commonly used DCA algorithms, mpDCA (Weigt et al., 2009), mfDCA (Morcos et al., 2011), and plmDCA (Ekeberg et al., 2013). The mpDCA uses a semi-heuristic message-passing approach, and its slow computation speed makes it difficult to apply to large-scale datasets. The mfDCA uses a mean-field approach based on the maximum-entropy model, which greatly improves the computational speed. On the basis of mfDCA, the plmDCA applies pseudo-likelihood to the Potts model and achieves higher accuracy than mfDCA. Based on the above comparison, we utilize plmDCA to generate DCA matrices in this study.
### Model Architecture
Figure 1 is an overview of the proposed framework. First, we extract the local features, global features, and coevolutionary features of the primary protein sequence with different components respectively, which, in
this paper, are referred to as local-level representation aggregation layer, global-level representation learning layer, and coevolutionary information learning layer. Each layer outputs a feature representation vector. Then we concatenate three feature representation vectors as the output of the feature extraction and pass it into the prediction layer consisting of four fully connected layers for the final prediction about whether the target amino acid residue is an interaction site or not. Now we introduce the three layers in feature extraction in detail.
#### 2.3.1 Local-level representation aggregation layer
For each target residue, a sliding window of length (\(2n+1\)) is used to aggregate the features of itself and its neighboring \(2n\) residues. For the \(i\)th residue on the protein sequence, we denote its local feature representation as \(h_{i}^{local}\).
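As an illustration, a zero-padded sliding window of length \(2n+1\) can be implemented in a few lines. The sketch below assumes the windowed features are simply concatenated, which is one natural choice; the released code may aggregate the neighborhood differently.

```python
import torch

def local_window_features(h, n=3):
    """Aggregate each residue's features with its 2n neighbors (window length 2n+1).

    h : (L, d) tensor of per-residue features for one protein.
    Returns an (L, (2n+1)*d) tensor; out-of-range positions are zero-padded.
    """
    L, d = h.shape
    padded = torch.cat([torch.zeros(n, d), h, torch.zeros(n, d)], dim=0)
    windows = [padded[i:i + 2 * n + 1].reshape(-1) for i in range(L)]
    return torch.stack(windows)

# Example: 10 residues with 54-dimensional features, window length 7 (n = 3)
h = torch.randn(10, 54)
print(local_window_features(h, n=3).shape)   # torch.Size([10, 378])
```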
#### 2.3.2 Global-level representation learning layer
It has been shown that the global features of protein sequences are critical for PPIs prediction (Zeng et al., 2020). Also, the coevolutionary information could quantify the inter-residue coevolutionary relationships (Wang et al., 2017; Hanson et al., 2018; Li et al., 2021). Hence, we consider to utilize a coevolution-enhanced global attention mechanism to distinguish the importance of residues. Assuming that the \(i\)th residue is the predicting target, all residues on the same sequence are linearly added according to the attention scores,
\[h_{i}^{p}=\sum_{j=1}^{N}\alpha_{ij}h_{j}, \tag{1}\]
where \(h_{j}\) refers to the PSSM, predicted secondary structure, and raw protein sequence features of residue \(j\). \(\alpha_{ij}\) refers to the attention score, which estimates the importance weight of residue \(j\) on target residue \(i\). Intuitively, different residues on the same protein sequence should have different importance for the target residue, and those which match the characteristics of the whole protein and share a close relationship with the target residue should be paid more attention to. Therefore, \(\alpha_{ij}\) is calculated as follows:
\[\alpha_{ij}=\text{softmax}(\pi(i,j))=\frac{\exp{(\pi(i,j))}}{\sum_{k=1}^{N} \exp(\pi(i,k))}, \tag{2}\]
\[\pi(i,j)=q_{1}^{\text{T}}\text{LeakyRelu}([W_{1}(p\odot h_{j})\|W_{2}(h_{i} \odot h_{j})]\|w_{ij}). \tag{3}\]
Figure 1: The model architecture of CoGANPPIS. (A) An overall illustration of the proposed model. The feature extraction consists of three layers: local-level representation aggregation layer, global-level representation learning layer and coevolutionary information learning layer, whose outputs are passed into the prediction layer for the final prediction. (B) The structure of CNN & pooling layer in coevolutionary information learning layer. We use batchnormalization, multiple convolution kernels, activation as well as maxpooling to capture the coevolutionary information from DCA. (C) The structure of the prediction layer. We concatenate the outputs from feature extraction and utilize four fully-connected layers to predict whether the residue is an interaction site or not.
Here, we use LeakyRelu as the activation function. \(\odot\) indicates element-wise product, \(\|\) indicates concatenation operation, and \(w_{ij}\in\mathbb{R}^{1}\) refers to the \((i,j)\) element of DCA matrix of the current protein, which provides us some clues for the coevolutionary relationship between the residue \(i\) and residue \(j\). \(W_{1}\in\mathbb{R}^{d\times d}\), \(W_{2}\in\mathbb{R}^{d\times d}\) and \(q_{1}\in\mathbb{R}^{2d+1}\) are trainable parameters. \(p\) can be seen as the feature representation of the whole protein sequence, which is obtained by mean-pooling on all the residues' features on this protein,
\[p=\frac{1}{N}\sum_{j=1}^{N}h_{j}. \tag{4}\]
Our approach makes the attention weights between the target residue and other residues dependent on not only the whole protein sequence feature representation but also the coevolutionary relationships between them, suggesting that those residues which match the characteristics of the whole protein and are closely related to the target residue will be attached more importance.
Then we concatenate the global representation of the target residue \(h_{i}^{p}\) and its original feature \(h_{i}\),
\[h_{i}^{global}=h_{i}^{p}\|h_{i}, \tag{5}\]
where \(h_{i}^{global}\) is the result of the global-level representation learning layer.
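A minimal PyTorch sketch of Eqs. (1)-(5) is given below. It follows the equations literally for a single protein sequence; the feature dimension, batching, and initialization details are illustrative assumptions rather than the exact released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoevolutionGlobalAttention(nn.Module):
    """Sketch of Eqs. (1)-(5): attention over all residues of one protein, scored from the
    protein-level mean representation, pairwise feature products, and the DCA coupling w_ij."""

    def __init__(self, d):
        super().__init__()
        self.W1 = nn.Linear(d, d, bias=False)
        self.W2 = nn.Linear(d, d, bias=False)
        self.q1 = nn.Linear(2 * d + 1, 1, bias=False)

    def forward(self, h, dca):
        # h:   (N, d) residue features of one protein sequence
        # dca: (N, N) direct-coupling matrix of the same protein
        N, d = h.shape
        p = h.mean(dim=0, keepdim=True)                   # Eq. (4): protein-level representation
        h_i = h.unsqueeze(1).expand(N, N, d)              # target residue i
        h_j = h.unsqueeze(0).expand(N, N, d)              # candidate residue j
        feat = torch.cat([self.W1(p * h_j),               # p ⊙ h_j term
                          self.W2(h_i * h_j),             # h_i ⊙ h_j term
                          dca.unsqueeze(-1)], dim=-1)     # w_ij term
        scores = self.q1(F.leaky_relu(feat)).squeeze(-1)  # Eq. (3): pi(i, j)
        alpha = F.softmax(scores, dim=-1)                 # Eq. (2): attention weights
        h_p = alpha @ h                                   # Eq. (1): weighted sum over residues
        return torch.cat([h_p, h], dim=-1)                # Eq. (5): h_i^global

# Example: a protein of 120 residues with 54-dimensional features
attn = CoevolutionGlobalAttention(d=54)
out = attn(torch.randn(120, 54), torch.randn(120, 120))
print(out.shape)   # torch.Size([120, 108])
```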
#### 2.3.3 Coevolutionary information learning layer
We have introduced coevolutionary information into the attention mechanism to exploit the relationship among residues as above. Now we further utilize the coevolutionary information on a larger scale, i.e, on the whole protein sequence level. Suppose we are predicting the \(i\)th residue on the protein sequence. First, we take its corresponding column in the DCA matrix as its coevolutionary information. Then we pass it into the CNN & pooling layer as shown in Figure 1:
\[h_{i,k}^{dca}=\text{Relu}(\text{conv1d}^{(k)}(\text{BN}(\text{ DCA}[:,i]))),k\in[1,K], \tag{6}\]
\[h_{i}^{dca}=\|_{k=1}^{K}h_{i,k}^{dca}, \tag{7}\]
where \(\text{DCA}[:,i]\) is the \(i\)th column of the DCA matrix of the underlying protein. BN refers to the Batch-Normalization operation and the 1D convolution operation conv1d extracts the normalized coevolutionary features. We use Relu as the activation. Here we use \(K\) different convolution kernels for a better extraction of coevolutionary features. Finally, we obtain the coevolutionary representation \(h_{i}^{dca}\) by concatenating all \(K\) results linearly.
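The CNN & pooling layer of Eqs. (6)-(7) can be sketched in PyTorch as follows. The number of output channels per kernel is a placeholder, while the kernel sizes 3, 5, and 7 follow the settings described later in the Implementation section.

```python
import torch
import torch.nn as nn

class DCAConvPool(nn.Module):
    """Sketch of Eqs. (6)-(7): BatchNorm + K parallel 1D convolutions + ReLU + max-pooling
    applied to the DCA column of the target residue."""

    def __init__(self, kernel_sizes=(3, 5, 7), out_channels=16):
        super().__init__()
        self.bn = nn.BatchNorm1d(1)
        self.convs = nn.ModuleList(
            [nn.Conv1d(1, out_channels, k, padding=k // 2) for k in kernel_sizes])

    def forward(self, dca_col):
        # dca_col: (batch, L) -- the i-th column of each target residue's DCA matrix
        x = self.bn(dca_col.unsqueeze(1))                     # (batch, 1, L)
        outs = [torch.relu(conv(x)).max(dim=-1).values        # conv + ReLU + max-pool over positions
                for conv in self.convs]
        return torch.cat(outs, dim=-1)                        # Eq. (7): concatenate the K results

# Example: a batch of 8 target residues, sequences padded to length 500
layer = DCAConvPool()
print(layer(torch.randn(8, 500)).shape)   # torch.Size([8, 48])
```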
#### 2.3.4 Prediction layer
To predict whether an amino acid residue is an interaction site or not, first we concatenate three former feature extraction results to obtain the final representation:
\[h_{i}^{pred}=h_{i}^{local}\|h_{i}^{global}\|h_{i}^{dca}, \tag{8}\]
where \(h_{i}^{local}\), \(h_{i}^{global}\) and \(h_{i}^{dca}\) are the results of local-level representation aggregation layer, global-level representation learning layer, and coevolutionary information learning layer. \(h_{i}^{pred}\) is the final representation of the residue, which will be passed into fully connected layers:
\[x^{(t)}=\text{Relu}(W^{(t)}x^{(t-1)}+b^{(t)}),t\in[1,T], \tag{9}\]
where \(x^{(t)}\) and \(x^{(t-1)}\) refer to the input vector and output vector of the \(t\)th fully connected layer, respectively. Here, \(x^{(0)}=h_{i}^{pred}\). \(W^{(t)}\) denotes the weight matrix and \(b^{(t)}\) denotes the bias. Besides, ReLU and dropout are utilized in each layer except the last one. After the last layer, a Sigmoid function is used to generate the final prediction:
\[\hat{y}=\frac{1}{1+e^{-x^{(T)}}}, \tag{10}\]
where \(\hat{y}\) denotes the predicted probability of the residue being the interaction site. And \(1-\hat{y}\) is the predicted probability of the residue being the non-interaction site.
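A corresponding sketch of the prediction layer (Eqs. (8)-(10)), using the layer widths reported later in the Implementation section, is shown below; the input feature widths in the example are placeholders.

```python
import torch
import torch.nn as nn

class PredictionLayer(nn.Module):
    """Sketch of Eqs. (8)-(10): four fully connected layers (1024, 256, 8, 1 nodes)
    with ReLU + dropout after the first three and a final sigmoid."""

    def __init__(self, in_dim, dropout=0.1):
        super().__init__()
        dims = [in_dim, 1024, 256, 8]
        hidden = []
        for d_in, d_out in zip(dims[:-1], dims[1:]):
            hidden += [nn.Linear(d_in, d_out), nn.ReLU(), nn.Dropout(dropout)]
        self.hidden = nn.Sequential(*hidden)
        self.out = nn.Linear(8, 1)

    def forward(self, h_local, h_global, h_dca):
        x = torch.cat([h_local, h_global, h_dca], dim=-1)     # Eq. (8): final representation
        return torch.sigmoid(self.out(self.hidden(x)))        # Eqs. (9)-(10)

# Example with placeholder widths for the three feature-extraction branches
layer = PredictionLayer(in_dim=378 + 108 + 48)
y_hat = layer(torch.randn(8, 378), torch.randn(8, 108), torch.randn(8, 48))
print(y_hat.shape)   # torch.Size([8, 1])
```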
### Evaluation Metrics
PPIs prediction can be seen as a binary classification problem for identifying whether an amino acid residue is an interaction site or not. Consequently, there could be four types of results based on the residue's true category and predicted category, i.e., true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN). Here, TP and TN refer to the correctly predicted interaction sites and non-interaction sites respectively; FP and FN refer to the incorrectly predicted interaction sites and non-interaction sites respectively.
We select six evaluation metrics to comprehensively evaluate the predictive performance, including area under the precision-recall curve (AUPRC), accuracy (ACC), recall, precision, F-measure (F\({}_{1}\)), and Matthews correlation coefficient (MCC). Considering that our dataset is imbalanced with more non-interaction sites than interaction sites, F\({}_{1}\) and MCC indices deserve more attention. The formulas for calculating these metrics are as follows:
\[\text{ACC}=\frac{\text{TP}+\text{TN}}{\text{TP}+\text{TN}+\text{FP}+\text{ FN}}, \tag{11}\]
\[\text{Recall}=\frac{\text{TP}}{\text{TP}+\text{FN}}, \tag{12}\]
\[\text{Precision}=\frac{\text{TP}}{\text{TP}+\text{FP}}, \tag{13}\]
\[\text{F}_{1}=\frac{2\times\text{Precision}\times\text{Recall}}{\text{ Precision}+\text{Recall}}, \tag{14}\]
\[\text{MCC}=\frac{\text{TP}\times\text{TN}-\text{FP}\times\text{FN}}{\sqrt{( \text{TP}+\text{FP})(\text{TP}+\text{FN})(\text{TN}+\text{FP})(\text{TN}+ \text{FN})}}. \tag{15}\]
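These metrics follow directly from the confusion matrix; a small reference implementation is given below, assuming the predicted probabilities have already been thresholded into binary labels.

```python
import numpy as np

def ppis_metrics(y_true, y_pred):
    """Confusion-matrix based metrics of Eqs. (11)-(15) for binary interaction-site labels."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    acc = (tp + tn) / (tp + tn + fp + fn)
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    mcc = (tp * tn - fp * fn) / np.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return dict(ACC=acc, Recall=recall, Precision=precision, F1=f1, MCC=mcc)

print(ppis_metrics([1, 0, 0, 1, 1, 0], [1, 0, 1, 1, 0, 0]))
```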
## 3 Results and Discussion
### Implementation
In the feature extraction part, the sliding window length in the local-level representation aggregation layer is set as 7. We use three convolution kernels in the CNN & pooling layer and the sizes are set as 3, 5 and 7, respectively. In the classification part, we use four fully connected layers with 1024, 256, 8, and 1 nodes, with dropout (ratio 0.1) applied after the first three. We use weighted cross-entropy loss as the loss function:
\[L=-\frac{1}{m}\sum_{i=1}^{m}(wy_{i}log(\hat{y_{i}})+(1-y_{i})log(1-\hat{y_{i}} )), \tag{16}\]
where \(m\) is the number of training samples. \(w\) refers to the weight and is set to 4. Interaction site is labeled as 1 (\(y_{i}=1\)) and non-interaction site is labeled as 0 (\(y_{i}=0\)). \(\hat{y}_{i}\) is the predicted probability of being interaction site of the sample \(i\). Besides, we utilize Adaptive Momentum (Adam) as the optimizer with a learning rate of 0.0001. The batch size is set to 256. The model is implemented by PyTorch and trained on NVIDIA GTX 1080 Ti.
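A minimal sketch of the weighted cross-entropy loss of Eq. (16) with \(w=4\) is given below, together with an equivalent formulation using PyTorch's built-in binary cross entropy with per-sample weights.

```python
import torch
import torch.nn.functional as F

def weighted_bce_loss(y_hat, y, w=4.0, eps=1e-7):
    """Eq. (16): weighted cross-entropy, up-weighting the scarce interaction sites by w."""
    y_hat = y_hat.clamp(eps, 1 - eps)
    return -(w * y * torch.log(y_hat) + (1 - y) * torch.log(1 - y_hat)).mean()

y_hat = torch.tensor([0.9, 0.2, 0.7])
y = torch.tensor([1.0, 0.0, 1.0])
loss = weighted_bce_loss(y_hat, y)

# Equivalent built-in formulation via per-sample weights
weight = torch.where(y == 1, torch.tensor(4.0), torch.tensor(1.0))
builtin = F.binary_cross_entropy(y_hat, y, weight=weight)
print(loss.item(), builtin.item())   # both ~0.690
```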
We also adopt a fine-tuning strategy. Before training on Dset422 and Dset448, we first trained our model on Dset4392. At the epoch achieving the best performance on Dset4392, the parameters of the model were saved to files. When training on Dset422 and Dset448, we loaded the saved weights into the feature extraction part and froze them, so that the parameters of the feature extraction part stayed unchanged during training. The training and validation data are thus used only to train the fully connected layers in the prediction layer.
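The freezing step can be realized in PyTorch roughly as follows; the attribute names `feature_extractor` and `prediction_head` are illustrative, not the identifiers of any released code.

```python
import torch

def load_and_freeze(model, ckpt_path):
    """Load pretrained feature-extraction weights and freeze them so that only
    the fully connected prediction layers are updated during fine-tuning."""
    state = torch.load(ckpt_path, map_location="cpu")
    model.feature_extractor.load_state_dict(state)   # illustrative attribute name
    for p in model.feature_extractor.parameters():
        p.requires_grad = False
    # Only the prediction head's parameters are handed to the optimizer.
    return torch.optim.Adam(model.prediction_head.parameters(), lr=1e-4)
```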
### Comparison with competing methods
To evaluate the predictive performance of our model (CoGANPPIS), we compare it with seven popular sequence-based competing methods (PSIVER, ISIS, SPRINGS, DELPHI, DeepPPISP, ACNN and ensnet_p). Specifically, PSIVER (Murakami and Mizuguchi, 2010) utilizes a Naive Bayes classifier and a kernel density estimation method to predict PPIs based on sequence features. ISIS (Ofran and Rost, 2007) combines predicted structural features with evolutionary information to predict PPIs based on shallow neural networks. SPRINGS utilizes an artificial neural network to generate PPIs predictions (Singh et al., 2014). DELPHI (Li et al., 2021) employs a fine-tuned ensemble model combining several recurrent neural networks and convolutional neural networks. DeepPPISP (Zeng et al., 2020) considers both local contextual and global information and applies a convolutional neural network to predict PPIs. ACNN (Lu et al., 2021) employs a local attention mechanism to make PPIs predictions. And ensnet_p is an ensemble model combining different neural net models (Stringer et al., 2022). In summary, none of these approaches exploits coevolutionary information or a global attention mechanism.
Table 2 presents the experimental results of the seven sequence-based competing PPIs prediction models and our proposed model. It can be observed that CoGANPPIS achieves the best performance across both datasets in terms of all six metrics consistently, which ascertains its effectiveness. The ROC and PR curves of CoGANPPIS and the other competing methods on Dset422 and Dset448 are shown in Figure 2. They demonstrate that the AUPRC and AUC of CoGANPPIS are higher than those of the other competing methods.
### Ablation analysis
In this part, we test the usefulness of introducing coevolutionary information into PPIs prediction. First, we evaluate the performance of the model with coevolutionary information included. Then we remove coevolutionary information from the model and train it again to measure the resulting performance. The model without coevolutionary information is denoted CoGANPPIS\({}^{\ominus}\). Table 3 reports the performance of CoGANPPIS and CoGANPPIS\({}^{\ominus}\). We can find that CoGANPPIS outperforms CoGANPPIS\({}^{\ominus}\) on
\begin{table}
\begin{tabular}{c c c c c c c c c c c c} \hline \hline Dataset & \multicolumn{6}{c}{Dset422} & \multicolumn{6}{c}{Dset448} \\ \cline{2-13} Method & AUPRC & ACC & Recall & Precision & F\({}_{1}\) & MCC & AUPRC & ACC & Recall & Precision & F\({}_{1}\) & MCC \\ \hline PSIVER & 0.230 & 0.553 & 0.426 & 0.204 & 0.276 & 0.086 & 0.238 & 0.631 & 0.337 & 0.225 & 0.270 & 0.131 \\ ISIS & 0.284 & 0.671 & 0.532 & 0.249 & 0.339 & 0.198 & 0.316 & 0.624 & 0.514 & 0.293 & 0.373 & 0.255 \\ SPRINGS & 0.279 & 0.547 & 0.772 & 0.189 & 0.303 & 0.120 & 0.305 & 0.601 & 0.546 & 0.190 & 0.282 & 0.134 \\ DELPHI & 0.311 & 0.628 & 0.533 & 0.288 & 0.371 & 0.232 & 0.320 & 0.633 & 0.588 & 0.280 & 0.379 & 0.234 \\ DeepPPISP & 0.320 & 0.655 & 0.577 & 0.303 & 0.397 & 0.206 & 0.351 & 0.661 & 0.588 & 0.307 & 0.404 & 0.276 \\ ACNN & 0.306 & 0.689 & 0.598 & 0.254 & 0.356 & 0.224 & 0.301 & 0.693 & 0.603 & 0.259 & 0.362 & 0.232 \\ ensnet_p & 0.377 & 0.696 & 0.600 & 0.291 & 0.392 & 0.246 & 0.385 & 0.745 & 0.580 & 0.302 & 0.397 & 0.259 \\ CoGANPPIS (not pretrained) & 0.364 & **0.753** & 0.431 & **0.382** & 0.405 & 0.251 & 0.355 & 0.674 & 0.551 & **0.324** & 0.408 & 0.252 \\ CoGANPPIS & **0.392** & 0.702 & **0.625** & 0.323 & **0.425** & **0.304** & **0.393** & **0.746** & **0.630** & **0.324** & **0.428** & **0.307** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Performance comparison
Figure 2: ROC and PR plots of CoGANPPIS and other seven competing models. (a) ROC plot on Dset422, (b) PR plot on Dset422, (c) ROC plot on Dset448, (d) PR plot on Dset448. CoGANPPIS clearly outperforms other models
both datasets, which indicates that introducing coevolutionary information helps improve predictive accuracy and thus validates its effectiveness in PPIs prediction.
### Model performance on proteins of different lengths
Considering that protein sequence lengths vary greatly, it is necessary to study the predictive performance on proteins of different lengths. To this end, we plot the experimental results of CoGANPPIS, CoGANPPIS\({}^{\ominus}\) as well as ensnet_p in terms of \(F_{1}\) and MCC on proteins of different lengths on the two datasets in Figure 3.
Through the results, we have the following observations. First, as the protein sequence length increases, the performance of the three models on both datasets shows an overall downward trend. This can be explained by the protein structure and function becoming more complex with increasing length, making PPIs more difficult to predict. Second, it is interesting that the performance improvement of CoGANPPIS and CoGANPPIS\({}^{\ominus}\) over ensnet_p grows as the protein sequence length increases. Take the \(F_{1}\) on Dset422 in Figure 3 as an example: when the length is less than 100, the \(F_{1}\) of
\begin{table}
\begin{tabular}{c c c c c c c c c c c c} \hline \hline Dataset & \multicolumn{8}{c}{Dset422} & \multicolumn{8}{c}{Dset448} \\ \cline{2-13} Method & AUPRC & ACC & Recall & Precision & F\({}_{1}\) & MCC & AUPRC & ACC & Recall & Precision & F\({}_{1}\) & MCC \\ \hline CoGANPPIS\({}^{\ominus}\) & 0.383 & 0.647 & 0.613 & 0.316 & 0.417 & 0.293 & 0.388 & 0.670 & **0.637** & 0.316 & 0.422 & 0.300 \\ CoGANPPIS & **0.392** & **0.702** & **0.625** & **0.323** & **0.425** & **0.304** & **0.393** & **0.746** & 0.630 & **0.324** & **0.428** & **0.307** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Ablation analysis
Figure 3: Models’ performance under various protein lengths. (a) \(F_{1}\) on Dset422, (b) \(F_{1}\) on Dset448, (c) MCC on Dset422, (d) MCC on Dset448
CoGANPPIS and CoGANPPIS\({}^{\ominus}\) are 0.450 and 0.443 respectively, which are only 0.023 and 0.016 higher than that of ensnet_p (0.427). When the protein length is between 200 and 300, the improvements increase to 0.029 and 0.019 (0.414 vs 0.385 and 0.404 vs 0.385). When the protein length is greater than 500, the gaps further increase to 0.031 and 0.02 (0.403 vs 0.372 and 0.392 vs 0.372). This clearly shows that the longer the protein sequence is, the more PPIs prediction relies on global information extraction, which can be better captured by our global attention mechanism (even without coevolutionary information). Third, we pay attention to the comparison between CoGANPPIS and CoGANPPIS\({}^{\ominus}\). The two metrics \(F_{1}\) and MCC of CoGANPPIS on both datasets are better than those of CoGANPPIS\({}^{\ominus}\), which also verifies the effectiveness of coevolutionary information in PPIs prediction and confirms the conclusion that we obtained in the ablation analysis.
### Impact of coevolution-enhanced global attention mechanism
We have verified the effectiveness of coevolutionary information and the global attention mechanism. Now let us further study how the coevolution-enhanced global attention mechanism works. First, for each pair of residues, we examine the relationship between its direct coupling degree and the labels. As shown in Figure 4(a), the larger the direct coupling degree, the higher the probability that the pair of residues have the same label. Further, let us take protein Q2T3W4 in Dset448 as an example. We first extract the attention weights of the first residue of Q2T3W4 during training. Then we plot a scatter diagram and fit a trend line over the points with attention weights larger than 0.001, as shown in Figure 4(b). We find that the slope of the trend line is positive, which implies that, in general, the larger the direct coupling degree, the higher the attention weight. Hence, the target residue pays more attention to residues with high correlation during training, which is a noticeable advantage of this attention mechanism.
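The trend-line analysis of Figure 4(b) can be reproduced roughly as sketched below; the function name is ours, the threshold value comes from the text, and the input arrays are whatever attention weights and coupling degrees one extracts from a trained model.

```python
import numpy as np

def attention_trend(attn_weights, coupling_degrees, threshold=1e-3):
    """Fit a linear trend of attention weight versus direct coupling degree,
    keeping only residue pairs whose attention weight exceeds `threshold`."""
    attn = np.asarray(attn_weights, dtype=float)
    dca = np.asarray(coupling_degrees, dtype=float)
    mask = attn > threshold
    slope, intercept = np.polyfit(dca[mask], attn[mask], deg=1)
    return slope, intercept  # a positive slope matches the observation above
```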
### Visualization
As mentioned above, the attention weights between two residues show a positive correlation with their direct coupling degrees. This prompts us to explore to what extent our attention mechanism improves over allocating weights purely according to DCA. For a protein sequence with \(N\) amino acid residues, we sort residue pairs according to their attention weights and their direct coupling degrees respectively, and keep the \(5N\), \(10N\) as well as \(20N\) highest-ranking pairs. Figure 5 shows an example of the protein 7cei_A from Dset422. The three rows refer to the \(5N\), \(10N\) and \(20N\) situations in turn (only the corresponding number of selected residue pairs are colored). The pairs with the same labels are indicated in red and the pairs with different labels are
Figure 4: Analysis of coevolution-enhanced global attention mechanism. (a) We plot the probabilities of residue pairs having the same label (orange) and the probabilities of having different labels (green) under different direct coupling degrees in our dataset, (b) We take protein Q2T3W4 as an example and plot the scatter diagram of its attention weights under different direct coupling degrees. We also paint the trend line of the scattered points with attention weights larger than 0.001, which shows the positive correlation between the two variables
shown in green. It is evident that our attention mechanism works consistently better than pure DCA because of the clearly higher proportion of red points in all three situations. To be more quantitative, we bin the predicted pairs according to their separation along the protein sequence, as shown in the third column of Figure 5. We observe that our attention mechanism captures the residues with the same labels more accurately than pure DCA. Also, our attention mechanism attaches more attention weight to distant residues, whereas pure DCA tends to pay more attention to neighboring aggregated residues, which can be attributed to our attention mechanism's consideration of the whole protein feature representation.
Figure 5: Visualization of attention weights of protein 7cei_A using our attention mechanism (a, d and g) and pure DCA (b, e and h). Residue pairs are sorted according to the attention weights and the direct coupling degrees respectively, and the \(5N\) (a-c), \(10N\) (d-f) as well as \(20N\) (g-i) highest-ranking pairs are kept. The pairs with the same labels are indicated in red and the pairs with different labels are shown in green. The right-most panels (c, f and i) bin weights of residue pairs according to their separation along the protein sequence. The overall bars count all predictions, the shaded parts refer to the TPs. Note that our attention mechanism captures the residues with the same labels more accurately than pure DCA. Besides, the residues attended to by our attention mechanism are more evenly distributed than with pure DCA
## 4 Conclusion
The aim of this paper is to improve PPIs prediction performance solely based on protein sequences, which is important for understanding the biological mechanisms of proteins both experimentally and theoretically. A dozen sequence-based PPIs predictors have been developed in recent years. However, most of these works just utilize some commonly used features without considering coevolutionary information, which provides rich clues about inter-residue relationships. Also, they are not good at predicting PPIs of long proteins.
Here, we propose a coevolution-enhanced global attention neural network (CoGANPPIS). Specifically, we employ a coevolution-enhanced global attention mechanism both to better capture inter-residue relationships and to better predict long proteins. We further aggregate the local residue features and apply a CNN & pooling layer to the coevolutionary information features as a supplement. Then we utilize several fully connected layers to generate the final prediction. Extensive experiments with CoGANPPIS and seven other popular methods on two standardized datasets show that our proposed model CoGANPPIS achieves state-of-the-art performance.
Further experimental analysis shows that: (1) Coevolutionary information can improve the performance of PPIs prediction. (2) CoGANPPIS brings more performance improvement over previous methods as the protein sequence becomes longer, implying that CoGANPPIS has a better understanding of whole protein sequences. (3) Compared with allocating attention weights according to pure DCA, the proposed coevolution-enhanced global attention mechanism pays more attention to residues with the same labels and yields more evenly distributed attention weights instead of locally aggregated ones.
Although CoGANPPIS shows advantages over previous methods, it has some limitations. First, CoGANPPIS takes a lot of computation time due to its use of multiple sequence alignments and direct coupling analysis to generate coevolutionary information. In addition, DCA's accuracy depends on the number of homologs, making it less reliable for proteins with few homologs. In the future, we will be committed to finding useful, practical and time-efficient features to make prediction faster and more accurate.
## Funding
This study was supported by funding from the National Natural Science Foundation of China (72222009, 71991472 to X.Z), the National Natural Science Foundation of China (3210040426 to Z.H.), the Shanghai Rising-Star Program (21QB1400900 to Z.H.), and was also partly supported by a grant from the major project of Study on Pathogenesis and Epidemic Prevention Technology System (2021YFC2302500) by the Ministry of Science and Technology of China.
|
2305.04204
|
Congruences on tropical rational function semifields and tropical curves
|
We define tropical rational function semifields
$\overline{\boldsymbol{T}(X_1, \ldots, X_n)}$ and prove that a tropical curve
$\varGamma$ is realized (except for points at infinity) as the congruence
variety $V \subset \boldsymbol{R}^n$ associated with a congruence on
$\overline{\boldsymbol{T}(X_1, \ldots, X_n)}$ by giving a specific map
$\varGamma \to V$. Also, we shed light on the relation between congruences $E$
on $\overline{\boldsymbol{T}(X_1, \ldots, X_n)}$ and congruence varieties
associated with them and reveal the quotient semifield
$\overline{\boldsymbol{T}(X_1, \ldots, X_n)} / E$ to play the role of
coordinate rings that determine isomorphism classes of affine varieties in the
classical algebraic geometry.
|
JuAe Song
|
2023-05-07T06:55:42Z
|
http://arxiv.org/abs/2305.04204v2
|
# Congruences on tropical rational function semifields and tropical curves
###### Abstract.
We define tropical rational function semifields \(\overline{\boldsymbol{T}(X_{1},\ldots,X_{n})}\) and prove that a tropical curve \(\varGamma\) is realized (except for points at infinity) as the congruence variety \(V\subset\boldsymbol{R}^{n}\) associated with a congruence on \(\overline{\boldsymbol{T}(X_{1},\ldots,X_{n})}\) by giving a specific map \(\varGamma\to V\). Also, we shed light on the relation between congruences \(E\) on \(\overline{\boldsymbol{T}(X_{1},\ldots,X_{n})}\) and congruence varieties associated with them and reveal the quotient semifield \(\overline{\boldsymbol{T}(X_{1},\ldots,X_{n})}/E\) to play the role of coordinate rings that determine isomorphism classes of affine varieties in the classical algebraic geometry.
Key words and phrases:tropical rational function semifields, congruences, congruence varieties, tropical curves 2020 Mathematics Subject Classification: Primary 14T10, 14T20; Secondary 15A80
## 1. Introduction
Congruences on the tropical polynomial semiring in \(n\) variables \(\boldsymbol{T}[X_{1},\ldots,X_{n}]\) are central objects of interest in [1] and [12]. Bertram and Easton ([1]) aimed to construct a general framework for algebraic geometry in the tropical setting. They first studied basic properties of semirings and congruences on them, and then proved a strong form of the tropical Nullstellensatz for congruences on \(\boldsymbol{T}[X_{1},\ldots,X_{n}]\). [12] is devoted to studying prime congruences on polynomial semirings over additively idempotent semifields (containing \(\boldsymbol{T}[X_{1},\ldots,X_{n}]\)) and to proving an improvement of the result of Bertram and Easton on the tropical Nullstellensatz (for the tropical Nullstellensatz, see also [6]).
The current paper also aims to develop an algebraic foundation for tropical geometry, but, in addition, is interested in the relationship between algebra and geometry; congruences and congruence varieties associated with them. In this paper, we focus on congruences on the tropical rational function semifield in \(n\)-variables \(\overline{\boldsymbol{T}(X_{1},\ldots,X_{n})}\), which is the fraction semifield of the tropical polynomial function semiring in \(n\)-variables.
One motivation to study congruences on \(\overline{\boldsymbol{T}(X_{1},\ldots,X_{n})}\) is the rational function semifield \(\operatorname{Rat}(\varGamma)\) of a tropical curve \(\varGamma\). By [9, Theorem 1.1], \(\operatorname{Rat}(\varGamma)\) is finitely generated as a semifield over \(\boldsymbol{T}\). Using this fact, we can make a surjective \(\boldsymbol{T}\)-algebra homomorphism \(\psi:\overline{\boldsymbol{T}(X_{1},\ldots,X_{n})}\twoheadrightarrow\operatorname{Rat}(\varGamma)\). By [11, Theorem 1.1], the geometric structure of \(\varGamma\) as a tropical curve (i.e., the topological structure and the metric structure) is determined by the \(\boldsymbol{T}\)-algebra structure of \(\operatorname{Rat}(\varGamma)\), and hence by that of the quotient semifield \(\overline{\boldsymbol{T}(X_{1},\ldots,X_{n})}/\operatorname{Ker}(\psi)\) of \(\overline{\boldsymbol{T}(X_{1},\ldots,X_{n})}\) by the kernel congruence \(\operatorname{Ker}(\psi)\) of \(\psi\), which is isomorphic to \(\operatorname{Rat}(\varGamma)\) by the fundamental homomorphism theorem. Since the \(\boldsymbol{T}\)-algebra structure of \(\overline{\boldsymbol{T}(X_{1},\ldots,X_{n})}/\operatorname{Ker}(\psi)\) is given by \(\operatorname{Ker}(\psi)\), we expect that we can extract the geometric structure of \(\varGamma\) as a tropical curve from \(\operatorname{Ker}(\psi)\). In fact, we can do it: Proposition 3.12. This proposition states that \(\varGamma\) is realized, except for points at infinity, as the congruence variety \(\boldsymbol{V}(\operatorname{Ker}(\psi))\) associated with \(\operatorname{Ker}(\psi)\). Note that we consider \(\boldsymbol{V}(\operatorname{Ker}(\psi))\) not just as a topological space but as a metric space with the lattice length.
Also by this proposition, we know that \(\overline{\boldsymbol{T}(X_{1},\ldots,X_{n})}/\operatorname{Ker}(\psi)\) plays the role of coordinate rings that determine isomorphism classes of affine varieties in the classical algebraic geometry. From this fact, we expect that \(\overline{\boldsymbol{T}(X_{1},\ldots,X_{n})}/E\) also plays the role of coordinate rings for any congruence \(E\) on \(\overline{\boldsymbol{T}(X_{1},\ldots,X_{n})}\). In fact, it does: Theorem 3.15 and Proposition 3.20.
The rest of this paper is organized as follows. In Section 2, we give the definitions of semirings, algebras and semifields, congruences, tropical curves, rational functions and chip firing moves, and morphisms between tropical curves. Section 3 is our main section; it contains the statements of Propositions 3.12, 3.20 and Theorem 3.15 and their proofs. In that section, we first give the definition of \(\overline{\boldsymbol{T}(X_{1},\ldots,X_{n})}\) and study congruences on it and the congruence varieties associated with them. Then we prove the above three assertions. To examine whether the conditions in our assertions are sufficient or necessary, we give three examples: Examples 3.14, 3.21 and 3.22.
## Acknowledgements
The author thanks Masanori Kobayashi, the author's supervisor, and Yasuhito Nakajima for their helpful comments.
## 2. Preliminaries
In this section, we recall several definitions which we need later. We refer to [5] (resp. [13]) for an introduction to the theory of semirings (resp. tropical geometry) and employ definitions in [1] and [12] (resp. [9]) related to semirings (resp. tropical curves). Also, we employ Chan's definition of morphisms between tropical curves in [2].
### Semirings, algebras and semifields
In this paper, a _semiring_\(S\) is a commutative semiring with the absorbing neutral element \(0_{S}\) for addition \(+\) and the identity \(1_{S}\) for multiplication \(\cdot\). If every nonzero element of a semiring \(S\) is multiplicatively invertible and \(0_{S}\neq 1_{S}\), then \(S\) is called a _semifield_.
A map \(\varphi:S_{1}\to S_{2}\) between semirings is a _semiring homomorphism_ if for any \(x,y\in S_{1}\),
\[\varphi(x+y)=\varphi(x)+\varphi(y),\ \varphi(x\cdot y)=\varphi(x)\cdot\varphi(y), \ \varphi(0)=0,\ \text{and}\ \varphi(1)=1.\]
Given a semiring homomorphism \(\psi:S_{1}\to S_{2}\), if both \(S_{1}\) and \(S_{2}\) are semifields and \(\psi\) is injective, then \(S_{2}\) is a _semifield over \(S_{1}\)_.
Given a semiring homomorphism \(\varphi:S_{1}\to S_{2}\), we call the pair \((S_{2},\varphi)\) (for short, \(S_{2}\)) a \(S_{1}\)_-algebra_. For a semiring \(S_{1}\), a map \(\psi:(S_{2},\varphi)\to(S_{2}^{\prime},\varphi^{\prime})\) between \(S_{1}\)-algebras is a \(S_{1}\)_-algebra homomorphism_ if \(\psi\) is a semiring homomorphism and \(\varphi^{\prime}=\psi\circ\varphi\). When there is no confusion, we write \(\psi:S_{2}\to S_{2}^{\prime}\) simply. A bijective \(S_{1}\)-algebra homomorphism \(S_{2}\to S_{2}^{\prime}\) is a \(S_{1}\)_-algebra isomorphism_. Then \(S_{2}\) and \(S_{2}^{\prime}\) are said to be _isomorphic_.
The set \(\boldsymbol{T}:=\boldsymbol{R}\cup\{-\infty\}\) with two tropical operations:
\[a\oplus b:=\max\{a,b\}\quad\text{and}\quad a\odot b:=a+b,\]
where \(a,b\in\boldsymbol{T}\) and \(a+b\) stands for the usual sum of \(a\) and \(b\), becomes a semifield. Here, for any \(a\in\boldsymbol{T}\), we handle \(-\infty\) as follows:
\[a\oplus(-\infty)=(-\infty)\oplus a=a\quad\text{and}\quad a\odot(-\infty)=(- \infty)\odot a=-\infty.\]
\(\boldsymbol{T}\) is called the _tropical semifield_. The subset \(\boldsymbol{B}:=\{0,-\infty\}\) of \(\boldsymbol{T}\) becomes a semifield with the tropical operations of \(\boldsymbol{T}\) and is called the _boolean semifield_. A \(\boldsymbol{B}\)-algebra \(S\) is said to be _cancellative_ if whenever \(x\cdot y=x\cdot z\) for some \(x,y,z\in S\), either \(x=0_{S}\) or \(y=z\). If \(S\) is cancellative, then we can define the semifield \(Q(S)\) of fractions as in the case of integral domains. In this case, the map \(S\to Q(S);x\mapsto x/1_{S}\) becomes an injective \(\boldsymbol{B}\)-algebra homomorphism.
The _tropical polynomials_ are defined as in the usual way and the set of all tropical polynomials in \(n\) variables is denoted by \(\boldsymbol{T}[X_{1},\ldots,X_{n}]\). It becomes a semiring with the two tropical operations and is called the _tropical polynomial semiring_. The following example shows that tropical polynomial semirings are not cancellative:
**Example 2.1**.: Let \(X:=X_{1}\). Then we have
\[(X\oplus 0)\odot(X^{\odot 2}\oplus(-2)\odot X\oplus 0)\] \[= X^{\odot 3}\oplus(-2)\odot X^{\odot 2}\oplus X\oplus X^{\odot 2 }\oplus(-2)\odot X\oplus 0\] \[= X^{\odot 3}\oplus X^{\odot 2}\oplus X\oplus 0\]
and
\[(X\oplus 0)\odot(X^{\odot 2}\oplus(-1)\odot X\oplus 0)\] \[= X^{\odot 3}\oplus(-1)\odot X^{\odot 2}\oplus X\oplus X^{\odot 2 }\oplus(-1)\odot X\oplus 0\] \[= X^{\odot 3}\oplus X^{\odot 2}\oplus X\oplus 0.\]
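To make the example concrete, the following small Python sketch (our code, not part of the formal development) evaluates the two products of Example 2.1 pointwise, with \(-\infty\) playing the role of the tropical zero.

```python
# Tropical addition (oplus) is max; tropical multiplication (odot) is +.
NEG_INF = float("-inf")

def t_add(a, b):
    return max(a, b)

def t_mul(a, b):
    return NEG_INF if NEG_INF in (a, b) else a + b

def p1(x):  # (X oplus 0) odot (X^{odot 2} oplus (-2) odot X oplus 0)
    return t_mul(t_add(x, 0), max(2 * x, t_mul(-2, x), 0))

def p2(x):  # (X oplus 0) odot (X^{odot 2} oplus (-1) odot X oplus 0)
    return t_mul(t_add(x, 0), max(2 * x, t_mul(-1, x), 0))

# Both products define the same function X^{odot 3} oplus X^{odot 2} oplus X oplus 0
# on R although the second factors differ, illustrating non-cancellativity.
assert all(abs(p1(x) - p2(x)) < 1e-9 for x in (-3.0, -0.5, 0.0, 0.7, 2.0))
```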
A vector \(\boldsymbol{v}\in\boldsymbol{R}^{n}\) is _primitive_ if all its components are integers and their greatest common divisor is one. When \(\boldsymbol{v}\) is primitive, for \(\lambda\geq 0\), the _lattice length_ of \(\lambda\boldsymbol{v}\) is defined as \(\lambda\) (cf. [10, Subsection 2.1]).
### Congruences
A _congruence_\(E\) on a semiring \(S\) is a subset of \(S\times S\) satisfying
(1) for any \(x\in S\), \((x,x)\in E\),
(2) if \((x,y)\in E\), then \((y,x)\in E\),
(3) if \((x,y)\in E\) and \((y,z)\in E\), then \((x,z)\in E\),
(4) if \((x,y)\in E\) and \((z,w)\in E\), then \((x+z,y+w)\in E\), and
(5) if \((x,y)\in E\) and \((z,w)\in E\), then \((x\cdot z,y\cdot w)\in E\).
The diagonal set of \(S\times S\) is denoted by \(\Delta\) and called the _trivial_ congruence on \(S\). It is the unique smallest congruence on \(S\). The set \(S\times S\) becomes a semiring with the operations of \(S\) and is a congruence on \(S\). This is called the _improper_ congruence on \(S\). Congruences other than the trivial congruence and the improper congruence are said to be _proper_. Quotients by congruences can be considered in the usual sense and the quotient semiring of \(S\) by the congruence \(E\) is denoted by \(S/E\). Then the natural surjection \(\pi_{E}:S\twoheadrightarrow S/E\) is a semiring homomorphism.
For a semiring homomorphism \(\psi:S_{1}\to S_{2}\), the _kernel congruence_\(\operatorname{Ker}(\psi)\) of \(\psi\) is the congruence \(\{(x,y)\in S_{1}\times S_{1}\,|\,\psi(x)=\psi(y)\}\). For semirings and congruences on them, the fundamental homomorphism theorem holds ([3, Proposition 2.4.4]). Then, for the above \(\pi_{E}\), we have \(\operatorname{Ker}(\pi_{E})=E\).
### Tropical curves
In this paper, a _graph_ is an unweighted, undirected, finite, connected nonempty multigraph that may have loops. For a graph \(G\), the set of vertices is denoted by \(V(G)\) and the set of edges by \(E(G)\). A vertex \(v\) of \(G\) is a _leaf end_ if \(v\) is incident to only one edge and this edge is not loop. A _leaf edge_ is an edge of \(G\) incident to a leaf end.
A _tropical curve_ is the underlying topological space of the pair \((G,l)\) of a graph \(G\) and a function \(l:E(G)\to\boldsymbol{R}_{>0}\cup\{\infty\}\), where \(l\) can take the value \(\infty\) only on leaf edges, together with an identification of each edge \(e\) of \(G\) with the closed interval \([0,l(e)]\). The interval \([0,\infty]\) is the one point compactification of the interval \([0,\infty)\). We regard \([0,\infty]\) not just as a topological space but as almost a metric space. The distance between \(\infty\) and any other point is infinite. When \(l(e)=\infty\), the leaf end of \(e\) must be identified with \(\infty\). If \(E(G)=\{e\}\) and \(l(e)=\infty\), then we can identify either leaf ends of \(e\) with \(\infty\). When a tropical curve \(\Gamma\) is obtained from \((G,l)\), the pair \((G,l)\) is called a _model_ for \(\Gamma\). There are many possible models for \(\Gamma\). We frequently identify a vertex (resp. an edge) of \(G\) with the corresponding point (resp. the corresponding closed subset) of \(\Gamma\). A model \((G,l)\) is _loopless_ if \(G\) is loopless. For a point \(x\) of a tropical curve \(\Gamma\), if \(x\) is identified with \(\infty\), then \(x\) is called a _point at infinity_, else, \(x\) is called a _finite point_. \(\Gamma_{\infty}\) denotes the set of all points at infinity of \(\Gamma\). If \(x\) is a finite point, then the _valence_\(\operatorname{val}(x)\) is the number of connected components of \(U\setminus\{x\}\) with any sufficiently
small connected neighborhood \(U\) of \(x\); if \(x\) is a point at infinity, then \(\operatorname{val}(x):=1\). We construct a model \((G_{\circ},l_{\circ})\) called the _canonical model_ for \(\varGamma\) as follows. Generally, we define \(V(G_{\circ}):=\{x\in\varGamma\mid\operatorname{val}(x)\neq 2\}\) except for the following two cases. When \(\varGamma\) is homeomorphic to a circle \(S^{1}\), we define \(V(G_{\circ})\) as the set consisting of one arbitrary point of \(\varGamma\). When \(\varGamma\) has the pair \((T,l)\) as its model, where \(T\) is a tree consisting of three vertices and two edges and \(l(E(T))=\{\infty\}\), we define \(V(G_{\circ})\) as the set of two points at infinity and any finite point of \(\varGamma\). The word "an edge of \(\varGamma\)" means an edge of \(G_{\circ}\).
### Rational functions and chip firing moves
Let \(\varGamma\) be a tropical curve. A continuous map \(f:\varGamma\to\boldsymbol{R}\cup\{\pm\infty\}\) is a _rational function_ on \(\varGamma\) if \(f\) is a constant function of \(-\infty\) or a piecewise affine function with integer slopes, with a finite number of pieces and that can take the value \(\pm\infty\) at only points at infinity. For a point \(x\) of \(\varGamma\) and a rational function \(f\in\operatorname{Rat}(\varGamma)\setminus\{-\infty\}\), \(x\) is a _zero_ (resp. _pole_) of \(f\) if the sign of the sum of outgoing slopes of \(f\) at \(x\) is positive (resp. negative). If \(x\) is a point at infinity, then we regard the outgoing slope of \(f\) at \(x\) as the slope of \(f\) from \(y\) to \(x\) times minus one, where \(y\) is a finite point on the leaf edge incident to \(x\) such that \(f\) has a constant slope on the interval \((y,x)\). \(\operatorname{Rat}(\varGamma)\) denotes the set of all rational functions on \(\varGamma\). For rational functions \(f,g\in\operatorname{Rat}(\varGamma)\) and a point \(x\in\varGamma\setminus\varGamma_{\infty}\), we define
\[(f\oplus g)(x):=\max\{f(x),g(x)\}\quad\text{and}\quad(f\odot g)(x):=f(x)+g(x).\]
We extend \(f\oplus g\) and \(f\odot g\) to points at infinity to be continuous on the whole of \(\varGamma\). Then both are rational functions on \(\varGamma\). Note that for any \(f\in\operatorname{Rat}(\varGamma)\), we have
\[f\oplus(-\infty)=(-\infty)\oplus f=f\]
and
\[f\odot(-\infty)=(-\infty)\odot f=-\infty.\]
Then \(\operatorname{Rat}(\varGamma)\) becomes a semifield with these two operations. Also, \(\operatorname{Rat}(\varGamma)\) becomes a \(\boldsymbol{T}\)-algebra and a semifield over \(\boldsymbol{T}\) with the natural inclusion \(\boldsymbol{T}\hookrightarrow\operatorname{Rat}(\varGamma)\). Note that for \(f,g\in\operatorname{Rat}(\varGamma)\), \(f=g\) means that \(f(x)=g(x)\) for any \(x\in\varGamma\).
A _subgraph_ of a tropical curve is a closed subset of the tropical curve with a finite number of connected components. Let \(\varGamma_{1}\) be a subgraph of a tropical curve \(\varGamma\) which has no connected components consisting of only a point at infinity, and \(l\) a positive number or infinity. The _chip firing move_ by \(\varGamma_{1}\) and \(l\) is defined as the rational function \(\operatorname{CF}(\varGamma_{1},l)(x):=-\min\{\operatorname{dist}(\varGamma_{ 1},x),l\}\) with \(x\in\varGamma\), where \(\operatorname{dist}(\varGamma_{1},x)\) denotes the distance between \(\varGamma_{1}\) and \(x\).
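As a simple illustration of the chip firing move, the following sketch (ours) realizes \(\operatorname{CF}(\varGamma_{1},l)\) in the special case where \(\varGamma\) is an interval of the real line and \(\varGamma_{1}=\{a\}\) is a single finite point.

```python
def chip_firing_on_interval(a, l):
    """CF({a}, l)(x) = -min(dist(a, x), l) on a tropical curve realized as an
    interval of the real line (finite points only)."""
    def cf(x):
        return -min(abs(x - a), l)
    return cf

# Example: a chip firing move centred at a = 2 with radius l = 1.
cf = chip_firing_on_interval(2.0, 1.0)
assert cf(2.0) == 0.0 and cf(2.5) == -0.5 and cf(4.0) == -1.0
```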
### Morphisms between tropical curves
Let \(\varphi:\varGamma\to\varGamma^{\prime}\) be a continuous map between tropical curves. \(\varphi\) is a _morphism_ if there exist loopless models \((G,l)\) and \((G^{\prime},l^{\prime})\) for \(\varGamma\) and \(\varGamma^{\prime}\), respectively, such that \(\varphi\) can be regarded as a map \(V(G)\cup E(G)\to V(G^{\prime})\cup E(G^{\prime})\) satisfying \(\varphi(V(G))\subset V(G^{\prime})\) and for \(e\in\varphi(E(G))\), there exists a nonnegative integer \(\deg_{e}(\varphi)\) such that for any points \(x,y\) of \(e\), \(\operatorname{dist}_{\varphi(e)}(\varphi(x),\varphi(y))=\deg_{e}(\varphi)\cdot\operatorname{dist}_{e}(x,y)\), where \(\operatorname{dist}_{\varphi(e)}(\varphi(x),\varphi(y))\) denotes the distance between \(\varphi(x)\) and \(\varphi(y)\) in \(\varphi(e)\). A map between tropical curves \(\varphi:\varGamma\to\varGamma^{\prime}\) is an _isomorphism_ if \(\varphi\) is a bijective morphism and the inverse map \(\varphi^{-1}\) of \(\varphi\) is also a morphism. By this definition, \(\varphi\) is an isomorphism if and only if \(\varphi\) is continuous on the whole of \(\varGamma\) and is a local isometry on \(\varGamma\setminus\varGamma_{\infty}\).
## 3. Main results
In this section, tropical rational function semifields and congruences on them play a central role. We start our consideration by defining tropical rational function semifields.
For \(V\subset\textbf{{T}}^{n}\), we define
\[\textbf{{E}}(V)_{0}:=\{(f,g)\in\textbf{{T}}[X_{1},\dots,X_{n}]\times\textbf{{T }}[X_{1},\dots,X_{n}]\,|\,\forall x\in V,f(x)=g(x)\}.\]
Then \(\textbf{{E}}(V)_{0}\) is a congruence on \(\textbf{{T}}[X_{1},\dots,X_{n}]\).
**Lemma 3.1**.: \(\textbf{{T}}[X_{1},\dots,X_{n}]/\textbf{{E}}(\textbf{{T}}^{n})_{0}\) _is cancellative._
Proof.: It follows from [1, Theorem 1] and [12, Proposition 5.5 and Theorem 4.14(v)].
**Definition 3.2**.: Let \(\overline{\textbf{{T}}[X_{1},\dots,X_{n}]}\) denote \(\textbf{{T}}[X_{1},\dots,X_{n}]/\textbf{{E}}(\textbf{{T}}^{n})_{0}\). We call it the _tropical polynomial function semiring_. By Lemma 3.1, the semifield of fractions of \(\overline{\textbf{{T}}[X_{1},\dots,X_{n}]}\) is given. We call it the _tropical rational function semifield_ and write it \(\overline{\textbf{{T}}(X_{1},\dots,X_{n})}\).
**Remark 3.3**.: We employed the notation in [3, 3.4. Example] as our notation for the tropical polynomial function semiring \(\overline{\textbf{{T}}[X_{1},\dots,X_{n}]}\) in Definition 3.2. Other authors use other notations; e.g., \(\operatorname{Poly}[\textbf{{T}}^{n}]\) in [1] and \(\textbf{{T}}[\textbf{{X}}^{\pm}]_{\operatorname{fcn}}\) (exactly, it is not the tropical polynomial function semiring but the tropical Laurent polynomial semiring) in [7]. The notation for the tropical rational function semifield follows that of the tropical polynomial function semiring and is first defined in this paper.
In what follows, by abuse of notation, we write the image of \(X_{i}\) in \(\overline{\textbf{{T}}(X_{1},\dots,X_{n})}\) as \(X_{i}\).
**Lemma 3.4**.: _Let \(E\) be a congruence on \(\overline{\textbf{{T}}(X_{1},\dots,X_{n})}\). Then the following are equivalent:_
1. _if_ \((f,-\infty)\in E\)_, then_ \(f=-\infty\)_;_
2. \(\overline{\textbf{{T}}(X_{1},\dots,X_{n})}/E\) _is a semifield; and_
3. \(E\) _is a semifield._
Proof.: \(\overline{\mathbf{T}(X_{1},\ldots,X_{n})}/E\) is a semiring. Let \(\pi_{E}:\overline{\mathbf{T}(X_{1},\ldots,X_{n})}\twoheadrightarrow\overline{\mathbf{T} (X_{1},\ldots,X_{n})}/E\) be the natural surjective semiring homomorphism. Remark that for \(f,g\in\overline{\mathbf{T}(X_{1},\ldots,X_{n})}\), \((f,g)\in E\) if and only if \(\pi_{E}(f)=\pi_{E}(g)\).
\((1)\Rightarrow(2):\) by \((1)\), we have \(\pi_{E}(0)\neq\pi_{E}(-\infty)\). Let \(f\in\overline{\mathbf{T}(X_{1},\ldots,X_{n})}\) be such that \(\pi_{E}(f)\neq\pi_{E}(-\infty)\). By \((1)\), \(f\neq-\infty\), and thus \(f^{\odot(-1)}\neq-\infty\). Hence we have \(\pi_{E}(f)\odot\pi_{E}(f)^{\odot(-1)}=\pi_{E}(f\odot f^{\odot(-1)})=\pi_{E}(0 )\neq\pi_{E}(-\infty)\). This means that \(\overline{\mathbf{T}(X_{1},\ldots,X_{n})}/E\) is a semifield.
\((2)\Rightarrow(1):\) we assume that there exists \(f\in\overline{\mathbf{T}(X_{1},\ldots,X_{n})}\setminus\{-\infty\}\) such that \((f,-\infty)\in E\). Since \((0,-\infty)=(f^{\odot(-1)}\odot f,f^{\odot(-1)}\odot(-\infty))\in E\), for any \(g\in\overline{\mathbf{T}(X_{1},\ldots,X_{n})}\), we have \((g,-\infty)=(g\odot 0,g\odot(-\infty))\in E\). This means that \(\overline{\mathbf{T}(X_{1},\ldots,X_{n})}/E=\{\pi_{E}(-\infty)\}\) and hence it is not a semifield.
\((1)\Rightarrow(3):\) clearly \((0,0)\neq(-\infty,-\infty)\). For any \((f,g)\in E\setminus\{(-\infty,-\infty)\}\), since \(f\neq-\infty\) and \(g\neq-\infty\) by \((1)\), we have \(f^{\odot(-1)}\neq-\infty\) and \(g^{\odot(-1)}\neq-\infty\). Thus we have \((g^{\odot(-1)},f^{\odot(-1)})=(f\odot f^{\odot(-1)}\odot g^{\odot(-1)},g\odot f ^{\odot(-1)}\odot g^{\odot(-1)})\in E\), and hence \((f^{\odot(-1)},g^{\odot(-1)})\in E\). Since \((0,0)=(f\odot f^{\odot(-1)},g\odot g^{\odot(-1)})\), we have \((f,g)^{\odot(-1)}=(f^{\odot(-1)},g^{\odot(-1)})\in E\). This means that \(E\) is a semifield.
\((3)\Rightarrow(1):\) we assume that there exists \(f\in\overline{\mathbf{T}(X_{1},\ldots,X_{n})}\setminus\{-\infty\}\) such that \((f,-\infty)\in E\). Then, for any \(g,h\in\mathbf{T}(X_{1},\ldots,X_{n})\), since \((f\odot g,-\infty\odot h)=(f\odot g,-\infty)\neq(0,0)\), this \((f,-\infty)\) has no inverse element for \(\odot\) in \(E\). Hence \(E\) is not a semifield.
**Definition 3.5**.: (cf. [1, Subsection 3.1]) For a congruence \(E\) on \(\overline{\mathbf{T}(X_{1},\ldots,X_{n})}\), we define
\[\mathbf{V}(E):=\{x\in\mathbf{R}^{n}\,|\,\forall(f,g)\in E,f(x)=g(x)\}.\]
We call \(\mathbf{V}(E)\) the _congruence variety_ associated with \(E\).
**Lemma 3.6**.: _Let \(E\) be a congruence on \(\overline{\mathbf{T}(X_{1},\ldots,X_{n})}\). If there exist \(f\in\overline{\mathbf{T}(X_{1},\ldots,X_{n})}\setminus\{-\infty\}\) and \(t\in\mathbf{T}\setminus\{0\}\) such that \((f,f\odot t)\in E\), then \(\mathbf{V}(E)=\varnothing\)._
Proof.: It is clear by the definition of congruence varieties.
**Lemma 3.7**.: _Let \(E\) be a congruence on \(\overline{\mathbf{T}(X_{1},\ldots,X_{n})}\). If \(\overline{\mathbf{T}(X_{1},\ldots,X_{n})}/E\) is not a semifield over \(\mathbf{T}\), then \(\mathbf{V}(E)=\varnothing\)._
Proof.: If \(\overline{\mathbf{T}(X_{1},\ldots,X_{n})}/E\) is not a semifield, then, by Lemma 3.4, there exists \(f\in\overline{\mathbf{T}(X_{1},\ldots,X_{n})}\setminus\{-\infty\}\) such that \((f,-\infty)\in E\). Since \((f,-\infty)=(f,f\odot(-\infty))\in E\), by Lemma 3.6, \(\mathbf{V}(E)=\varnothing\).
If \(\overline{\mathbf{T}(X_{1},\ldots,X_{n})}/E\) is not a semifield over \(\mathbf{T}\) but a semifield, then there exist \(t_{1}\neq t_{2}\in\mathbf{T}\) such that \((t_{1},t_{2})\in E\). Without loss of generality, we can assume that \(t_{1}\neq-\infty\). Then \((0,0\odot t_{2}\odot t_{1}^{\odot(-1)})=(0\odot t_{1}\odot t_{1}^{\odot(-1)},0 \odot t_{2}\odot t_{1}^{\odot(-1)})\in E\). Since \(t_{2}\odot t_{1}^{\odot(-1)}\neq 0\), \(\mathbf{V}(E)=\varnothing\) by Lemma 3.6.
For \((f_{1},f_{2}),(g_{1},g_{2})\in\overline{\boldsymbol{T}(X_{1},\ldots,X_{n})}\times \overline{\boldsymbol{T}(X_{1},\ldots,X_{n})}\), we define the _twisted product_\((f_{1},f_{2})\rtimes(g_{1},g_{2}):=(f_{1}\odot g_{1}\oplus f_{2}\odot g_{2},f_{1} \odot g_{2}\oplus f_{2}\odot g_{1})\) (cf. [1], [12]). For congruences \(E\) and \(F\) on \(\overline{\boldsymbol{T}(X_{1},\ldots,X_{n})}\), we define \(E\rtimes F\) as the congruence on \(\overline{\boldsymbol{T}(X_{1},\ldots,X_{n})}\) generated by \(\{(f_{1},f_{2})\rtimes(g_{1},g_{2})\,|\,\forall(f_{1},f_{2})\in E,\forall(g_{ 1},g_{2})\in F\}\).
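Pointwise, the twisted product only involves \(\max\) and \(+\); a short sketch (our naming) realizing it on pairs of tropical functions is:

```python
def twisted_product(pair1, pair2):
    """(f1, f2) twisted with (g1, g2) gives
    (f1 odot g1 oplus f2 odot g2, f1 odot g2 oplus f2 odot g1),
    realized pointwise with oplus = max and odot = + on finite values."""
    f1, f2 = pair1
    g1, g2 = pair2
    left = lambda x: max(f1(x) + g1(x), f2(x) + g2(x))
    right = lambda x: max(f1(x) + g2(x), f2(x) + g1(x))
    return left, right
```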
**Proposition 3.8**.: (cf. [1, Lemma 3.1]) For \(\lambda\in\Lambda\), let \(E_{\lambda}\) be a semiring congruence on \(\overline{\boldsymbol{T}(X_{1},\ldots,X_{n})}\). Let \(E\) be the congruence on \(\overline{\boldsymbol{T}(X_{1},\ldots,X_{n})}\) generated by \(\bigcup_{\lambda\in\Lambda}E_{\lambda}\). Let \(F_{1}\) and \(F_{2}\) be congruences on \(\overline{\boldsymbol{T}(X_{1},\ldots,X_{n})}\). Then the following hold:
(1) \(\boldsymbol{V}(\Delta)=\boldsymbol{R}^{n}\) and \(\boldsymbol{V}(\overline{\boldsymbol{T}(X_{1},\ldots,X_{n})}\times\overline{ \boldsymbol{T}(X_{1},\ldots,X_{n})})=\varnothing\),
(2) \(\boldsymbol{V}(E)=\bigcap_{\lambda\in\Lambda}\boldsymbol{V}(E_{\lambda})\), and
(3) \(\boldsymbol{V}(F_{1})\cup\boldsymbol{V}(F_{2})=\boldsymbol{V}(F_{1}\rtimes F _{2})\).
Proof.: (1) It is clear.
(2) Since \(E\supset E_{\lambda}\) for any \(\lambda\in\Lambda\), we have \(\boldsymbol{V}(E)\subset\bigcap_{\lambda\in\Lambda}\boldsymbol{V}(E_{\lambda})\).
Conversely, for any \(x\in\bigcap_{\lambda\in\Lambda}\boldsymbol{V}(E_{\lambda})\), \(\lambda\in\Lambda\) and \((f,g)\in E_{\lambda}\), we have \(f(x)=g(x)\). Hence, for any \((h_{1},h_{2})\in E\), we have \(h_{1}(x)=h_{2}(x)\), and then \(x\in\boldsymbol{V}(E)\).
(3) We shall show that \(\boldsymbol{V}(F_{1})\cup\boldsymbol{V}(F_{2})\subset\boldsymbol{V}(F_{1} \rtimes F_{2})\). Let \(x\in\boldsymbol{V}(F_{1})\). For any \((f_{1},f_{2})\in F_{1}\) and \((g_{1},g_{2})\in F_{2}\), since \(f_{1}(x)=f_{2}(x)\), we have \(f_{1}(x)\odot g_{1}(x)\oplus f_{2}(x)\odot g_{2}(x)=f_{1}(x)\odot g_{2}(x) \oplus f_{2}(x)\odot g_{1}(x)\). Thus \(x\in\boldsymbol{V}(F_{1}\rtimes F_{2})\). Similarly, if \(x\in\boldsymbol{V}(F_{2})\), then \(x\in\boldsymbol{V}(F_{1}\rtimes F_{2})\).
We shall show that \(\boldsymbol{V}(F_{1})\cup\boldsymbol{V}(F_{2})\supset\boldsymbol{V}(F_{1}\rtimes F_{2})\). Assume that there exists \(f\in\overline{\boldsymbol{T}(X_{1},\ldots,X_{n})}\setminus\{-\infty\}\) such that \((f,-\infty)\in F_{1}\). By Lemmas 3.4, 3.7, we have \(\boldsymbol{V}(F_{1})=\varnothing\). Let \(x\in\boldsymbol{V}(F_{1}\rtimes F_{2})\). For any \((g_{1},g_{2})\in F_{2}\), since \(f(x)\odot g_{1}(x)\oplus(-\infty)\odot g_{2}(x)=f(x)\odot g_{2}(x)\oplus(-\infty)\odot g_{1}(x)\), \(f(x)\in\boldsymbol{R}\) and \(g_{1}(x),g_{2}(x)\in\boldsymbol{T}\), we have \(g_{1}(x)=g_{2}(x)\). Therefore \(x\in\boldsymbol{V}(F_{2})\). In the same way, we can show that if there exists \(g\in\overline{\boldsymbol{T}(X_{1},\ldots,X_{n})}\setminus\{-\infty\}\) such that \((g,-\infty)\in F_{2}\), then \(\boldsymbol{V}(F_{1}\rtimes F_{2})\subset\boldsymbol{V}(F_{1})\). Assume that there are no \(f,g\in\overline{\boldsymbol{T}(X_{1},\ldots,X_{n})}\setminus\{-\infty\}\) such that \((f,-\infty)\in F_{1}\) and \((g,-\infty)\in F_{2}\). Let \(x\in\boldsymbol{V}(F_{1}\rtimes F_{2})\). Assume that \(x\not\in\boldsymbol{V}(F_{1})\). In this case, there exists \((f_{1},f_{2})\in F_{1}\) such that \(f_{1}(x)\neq f_{2}(x)\). By assumption, \(f_{1}(x)\neq-\infty\) and \(f_{2}(x)\neq-\infty\). Without loss of generality, we can assume that \(f_{1}(x)>f_{2}(x)\). For any \((g_{1},g_{2})\in F_{2}\),
\[f_{1}(x)\odot g_{1}(x)\oplus f_{2}(x)\odot g_{2}(x)=f_{1}(x)\odot g_{2}(x) \oplus f_{2}(x)\odot g_{1}(x).\]
If the left-hand side is \(f_{1}(x)\odot g_{1}(x)\) and the right-hand side is \(f_{1}(x)\odot g_{2}(x)\), then \(g_{1}(x)=g_{2}(x)\) since \(f_{1}(x)\neq-\infty\). If the left-hand side is \(f_{1}(x)\odot g_{1}(x)\) and the right-hand side is \(f_{2}(x)\odot g_{1}(x)\), then \(g_{1}(x)=-\infty\) since \(f_{1}(x)>f_{2}(x)>-\infty\). Thus \(g_{2}(x)=-\infty\) in this case. By the same argument, if the left-hand side is \(f_{2}(x)\odot g_{2}(x)\), then \(g_{1}(x)=g_{2}(x)\). Hence \(x\in\boldsymbol{V}(F_{2})\).
By Proposition 3.8, we can define the _tropical Zariski topology_ on \(\boldsymbol{R}^{n}\) as in the classical way.
**Proposition 3.9**.: _The tropical Zariski topology coincides with the Euclidean topology on \(\boldsymbol{R}^{n}\)._
Proof.: By [4, Lemma 3.7.4], the tropical Zariski topology defined by congruences on tropical polynomial semirings coincides with the Euclidean topology. With this fact, the natural surjective \(\boldsymbol{T}\)-algebra homomorphism \(\boldsymbol{T}[X_{1},\ldots,X_{n}]\twoheadrightarrow\overline{\boldsymbol{T}[X _{1},\ldots,X_{n}]}\) and the natural injective \(\boldsymbol{T}\)-algebra homomorphism \(\overline{\boldsymbol{T}[X_{1},\ldots,X_{n}]}\hookrightarrow\overline{ \boldsymbol{T}(X_{1},\ldots,X_{n})}\), we can easily have the conclusion.
Let \(\varGamma\) be a tropical curve and let \(f_{1},\ldots,f_{n}\in\operatorname{Rat}(\varGamma)\setminus\{-\infty\}\). Note that, by [9, Theorem 1.1], we can choose these \(f_{1},\ldots,f_{n}\) to generate \(\operatorname{Rat}(\varGamma)\) as a semifield over \(\boldsymbol{T}\), i.e., \(\operatorname{Rat}(\varGamma)=\boldsymbol{T}(f_{1},\ldots,f_{n})\).
**Remark 3.10**.: When \(\operatorname{Rat}(\varGamma)=\boldsymbol{T}(f_{1},\ldots,f_{n})\), for any \(x\in\varGamma_{\infty}\), there exists a number \(i\) such that \(f_{i}(x)=\infty\) or \(f_{i}(x)=-\infty\). In fact, if it does not hold, then \(f_{j}(x)\in\boldsymbol{R}\) for any \(j\), and thus \(f_{1},\ldots,f_{n}\) cannot generate \(\operatorname{Rat}(\varGamma)\).
**Lemma 3.11**.: _The correspondence \(X_{i}\mapsto f_{i}\) induces a \(\boldsymbol{T}\)-algebra homomorphism \(\psi:\overline{\boldsymbol{T}(X_{1},\ldots,X_{n})}\to\operatorname{Rat}(\varGamma)\). In particular, if \(\operatorname{Rat}(\varGamma)=\boldsymbol{T}(f_{1},\ldots,f_{n})\), then \(\psi\) is surjective._
Proof.: Let \(\psi_{0}:\boldsymbol{T}[X_{1},\ldots,X_{n}]\to\operatorname{Rat}(\varGamma)\) be the \(\boldsymbol{T}\)-algebra homomorphism induced by the correspondence \(X_{i}\mapsto f_{i}\). We shall show \(\operatorname{Ker}(\psi_{0})\supset\boldsymbol{E}(\boldsymbol{T}^{n})_{0}\). Let \((f,g)\in\boldsymbol{E}(\boldsymbol{T}^{n})_{0}\). For any \(x\in\varGamma\setminus\varGamma_{\infty}\), since \(f_{i}(x)\in\boldsymbol{R}\), we have
\[\psi_{0}(f)(x) = f(f_{1}(x),\ldots,f_{n}(x))\] \[= g(f_{1}(x),\ldots,f_{n}(x))\] \[= \psi_{0}(g)(x).\]
Hence for any \(x\in\varGamma\), we have \(\psi_{0}(f)(x)=\psi_{0}(g)(x)\). Thus \(\psi_{0}(f)=\psi_{0}(g)\), i.e., \((f,g)\in\operatorname{Ker}(\psi_{0})\). This \(\psi_{0}\) induces the \(\boldsymbol{T}\)-algebra homomorphism \(\psi_{1}:\overline{\boldsymbol{T}[X_{1},\ldots,X_{n}]}\to\operatorname{Rat}(\varGamma)\) satisfying \(\psi_{1}\circ\pi=\psi_{0}\), where \(\pi\) stands for the natural surjective \(\boldsymbol{T}\)-algebra homomorphism \(\boldsymbol{T}[X_{1},\ldots,X_{n}]\twoheadrightarrow\overline{\boldsymbol{T}[X_{1},\ldots,X_{n}]}\). As \(\overline{\boldsymbol{T}[X_{1},\ldots,X_{n}]}\hookrightarrow\overline{\boldsymbol{T}(X_{1},\ldots,X_{n})}\) naturally, this \(\psi_{1}\) is extended to a \(\boldsymbol{T}\)-algebra homomorphism \(\overline{\boldsymbol{T}(X_{1},\ldots,X_{n})}\to\operatorname{Rat}(\varGamma)\). It is the \(\psi\) we wanted. When \(\{f_{1},\ldots,f_{n}\}\) is a generating set of \(\operatorname{Rat}(\varGamma)\), this \(\psi\) is surjective.
For \(V\subset\boldsymbol{R}^{n}\), we define
\[\boldsymbol{E}(V):=\{(f,g)\in\overline{\boldsymbol{T}(X_{1},\ldots,X_{n})} \times\overline{\boldsymbol{T}(X_{1},\ldots,X_{n})}\,|\,\forall x\in V,f(x)=g (x)\}.\]
Then \(\boldsymbol{E}(V)\) is a congruence on \(\overline{\boldsymbol{T}(X_{1},\ldots,X_{n})}\).
Let \(\theta:\varGamma\backslash\varGamma_{\infty}\to\boldsymbol{R}^{n};x\mapsto(f_ {1}(x),\ldots,f_{n}(x))\) and let \(V:=\boldsymbol{V}(\operatorname{Ker}(\psi))\). Note that the image \(\operatorname{Im}(\theta)\) of \(\theta\) has the metric defined by the lattice length since rational functions on tropical curves have only integers as their slopes.
**Proposition 3.12**.: _In the above setting, the following hold:_
\((1)\) _for any \(f\in\overline{\boldsymbol{T}(X_{1},\ldots,X_{n})}\) and \(x\in\varGamma\setminus\varGamma_{\infty}\), \(\psi(f)(x)=f(\theta(x))\) holds,_
\((2)\)__\(\operatorname{Im}(\theta)\subset V\)_,_
\((3)\)__\(\theta\) _is continuous,_
\((4)\)__\(\operatorname{Ker}(\psi)=\boldsymbol{E}(V)\)_, and_
\((5)\) _if_ \(\psi\) _is surjective, i.e.,_ \(\operatorname{Rat}(\varGamma)=\boldsymbol{T}(f_{1},\ldots,f_{n})\)_, then_ \(\operatorname{Im}(\theta)\supset V\) _and_ \(\theta\) _is an injective local isometry for the lattice length._
Proof.: (1) Since \(\psi\) is a \(\boldsymbol{T}\)-algebra homomorphism, by the definition of \(\theta\), it is clear.
(2) For any \((f,g)\in\operatorname{Ker}(\psi)\) and \(x\in\varGamma\setminus\varGamma_{\infty}\), as \(\psi(f)=\psi(g)\), by (1), we have
\[f(\theta(x)) =f(f_{1}(x),\ldots,f_{n}(x))\] \[=\psi(f)(x)\] \[=\psi(g)(x)\] \[=g(f_{1}(x),\ldots,f_{n}(x))\] \[=g(\theta(x)).\]
Thus \(\theta(x)\in V\).
(3) By Proposition 3.9, \(f_{i}\) is continuous, and so is \(\theta\).
(4) Since \(V=\boldsymbol{V}(\operatorname{Ker}(\psi))\), it is clear that \(\operatorname{Ker}(\psi)\subset\boldsymbol{E}(V)\). For any \((f,g)\in\boldsymbol{E}(V)\) and \(x\in V\), \(f(x)=g(x)\) holds. Hence, by (2), for any \(y\in\varGamma\setminus\varGamma_{\infty}\), we have \(\psi(f)(y)=f(\theta(y))=g(\theta(y))=\psi(g)(y)\). Thus \(\psi(f)\) is equal to \(\psi(g)\) as a rational function on \(\varGamma\). This means that \((f,g)\in\operatorname{Ker}(\psi)\).
(5) We shall show that \(\theta\) is injective. For \(x,y\in\varGamma\setminus\varGamma_{\infty}\), we assume \(\theta(x)=\theta(y)\). Since \(\psi\) is surjective, there exists \(f\in\overline{\boldsymbol{T}(X_{1},\ldots,X_{n})}\) such that \(\psi(f)=\operatorname{CF}(\{x\},\infty)\). Since
\[0 =\,\operatorname{CF}(\{x\},\infty)(x)\] \[=\psi(f)(x)\] \[=f(\theta(x))\] \[=f(\theta(y))\] \[=\psi(f)(y)\] \[=\,\operatorname{CF}(\{x\},\infty)(y)\]
by (1), \(x\) must coincide with \(y\). Hence \(\theta\) is injective.
Assume that the difference set \(V\setminus\operatorname{Im}(\theta)\) is not empty. For any \(x\in V\setminus\operatorname{Im}(\theta)\), there exists \(\varepsilon>0\) such that the \(\varepsilon\)-neighborhood of \(x\) does not intersect \(\operatorname{Im}(\theta)\). In fact, if we cannot find such an \(\varepsilon\), this \(x\) must be a boundary point of \(\operatorname{Im}(\theta)\) in \(\boldsymbol{R}^{n}\). Since a connected subgraph \(\varGamma_{1}\) of \(\varGamma\) that contains no points at infinity is compact, so is \(\theta(\varGamma_{1})\). Hence, for such an \(x\), there would exist \(y\in\varGamma_{\infty}\) and a sequence \(\{y_{i}\}\) such that \(y_{i}\to y\) and \(\theta(y_{i})\to x\) as \(i\to\infty\). By Remark 3.10 and since \(x\in\boldsymbol{R}^{n}\), this does not occur, and hence such a boundary point \(x\) does not exist. Let \(x=:(a_{1},\ldots,a_{n})\). For
\[f:=\left(\bigoplus_{i=1}^{n}\left(a_{i}^{\odot(-1)}\odot X_{i}\oplus\left(a_{i} ^{\odot(-1)}\odot X_{i}\right)^{\odot(-1)}\right)\right)^{\odot(-1)}\oplus(- \varepsilon),\]
\(\psi(f)\) is the constant function of \(-\varepsilon\) on \(\varGamma\setminus\varGamma_{\infty}\). Thus \(\psi(f)=-\varepsilon\in\operatorname{Rat}(\varGamma)\). Since \(\psi(-\varepsilon)=-\varepsilon\), by (4), we have \((f,-\varepsilon)\in\boldsymbol{E}(V)\). Therefore, for any \(y\in V\), \(f(y)=-\varepsilon\) holds, which is a contradiction since \(f(x)=0\neq-\varepsilon\). Hence we have \(\operatorname{Im}(\theta)\supset V\).
We shall show that \(\theta\) is a local isometry. Let \((G,l)\) be the model for \(\varGamma\) such that \(V(G)\) consists of the vertices of the underlying graph of the canonical model for \(\varGamma\) and the zeros and poles of \(f_{1},\ldots,f_{n}\). For any edge \(e\) of \(G\), the greatest common divisor of the slopes of \(f_{1},\ldots,f_{n}\) on \(e\) must be one. In fact, if it is zero or at least two, then there exist no rational expressions of \(f_{1},\ldots,f_{n}\) which have one as their slopes on any segment in \(e\). On the other hand, for such a segment, the chip firing move by a finite point on the segment and a sufficiently small positive number has slope one on the segment. This contradicts that \(f_{1},\ldots,f_{n}\) generate \(\operatorname{Rat}(\varGamma)\) as a semifield over \(\boldsymbol{T}\). Therefore \(\theta\) is a local isometry.
**Corollary 3.13**.: _For any \(n\geq 2\) and any tropical curve \(\varGamma\), \(\overline{\boldsymbol{T}(X_{1},\ldots,X_{n})}\) is not isomorphic to \(\operatorname{Rat}(\varGamma)\) as a \(\boldsymbol{T}\)-algebra._
Proof.: If there exists a tropical curve \(\varGamma\) such that \(\operatorname{Rat}(\varGamma)\) is isomorphic to \(\overline{\boldsymbol{T}(X_{1},\ldots,X_{n})}\) via \(\psi\) as a \(\boldsymbol{T}\)-algebra, then, since \(\operatorname{Ker}(\psi)=\Delta\) and \(\boldsymbol{V}(\Delta)=\boldsymbol{R}^{n}\), by Proposition 3.12(5), \(\varGamma\setminus\varGamma_{\infty}\) is homeomorphic to \(\boldsymbol{R}^{n}\). It is a contradiction.
By the following example, we know that the converse of Proposition 3.12(5) does not hold:
**Example 3.14**.: Let \(\varGamma:=[-\infty,\infty]\). We define \(f_{1},f_{2}\in\operatorname{Rat}(\varGamma)\) as \(f_{1}(t):=0\) and \(f_{2}(t):=t\) if \(t\leq 0\); \(f_{1}(t):=t\) and \(f_{2}(t):=0\) if \(0<t\leq 1\); \(f_{1}(t):=1\) and \(f_{2}(t):=-t\) if \(t\geq 1\). Clearly \(f_{1}\) and \(f_{2}\) do not generate \(\operatorname{Rat}(\varGamma)\) as a semifield over \(\boldsymbol{T}\). By Lemma 3.11, the correspondence \(X_{i}\mapsto f_{i}\) induces a \(\boldsymbol{T}\)-algebra homomorphism \(\psi:\overline{\boldsymbol{T}(X_{1},X_{2})}\to\operatorname{Rat}(\varGamma)\). Let \(\theta:\varGamma\setminus\varGamma_{\infty}\to\boldsymbol{R}^{2};x\mapsto(f_{ 1}(x),f_{2}(x))\). Then, since \(\operatorname{Im}(\theta)\) is closed, we can prove that \(\operatorname{Im}(\theta)\supset\boldsymbol{V}(\operatorname{Ker}(\psi))\) (and hence \(\operatorname{Im}(\theta)=\boldsymbol{V}(\operatorname{Ker}(\psi))\)) as in the proof of Proposition 3.12(5). By definition, \(\theta\) is an injective local isometry for the lattice length.
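A quick numerical sketch of Example 3.14 (our code, sampling finite points only) makes the image of \(\theta\) visible: it consists of the vertical ray \(\{0\}\times(-\infty,0]\), the horizontal segment \([0,1]\times\{0\}\) and the vertical ray \(\{1\}\times(-\infty,-1]\), a closed subset of \(\boldsymbol{R}^{2}\).

```python
def f1(t):
    if t <= 0:
        return 0.0
    if t <= 1:
        return t
    return 1.0

def f2(t):
    if t <= 0:
        return t
    if t <= 1:
        return 0.0
    return -t

# theta(t) = (f1(t), f2(t)) sampled at a few finite points of Gamma:
print([(f1(t), f2(t)) for t in (-2.0, -1.0, 0.0, 0.5, 1.0, 2.0, 3.0)])
# [(0.0, -2.0), (0.0, -1.0), (0.0, 0.0), (0.5, 0.0), (1.0, 0.0), (1.0, -2.0), (1.0, -3.0)]
```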
We can consider a congruence variety instead of \(\varGamma\) in Proposition 3.12. The following theorem is our main theorem:
**Theorem 3.15**.: _Let \(E\) be a congruence on \(\overline{\boldsymbol{T}(X_{1},\ldots,X_{n})}\), \(F\) a congruence on \(\overline{\boldsymbol{T}(Y_{1},\ldots,Y_{m})}\), \(V:=\boldsymbol{V}(E)\) and \(W:=\boldsymbol{V}(F)\). Let \(\pi_{E}:\)
\(\overline{\boldsymbol{T}(X_{1},\ldots,X_{n})}\twoheadrightarrow\overline{ \boldsymbol{T}(X_{1},\ldots,X_{n})}/E\) be the natural surjective \(\boldsymbol{T}\)-algebra homomorphism. Let \(\psi:\overline{\boldsymbol{T}(X_{1},\ldots,X_{n})}/E\to\overline{ \boldsymbol{T}(Y_{1},\ldots,Y_{m})}/F\) be a \(\boldsymbol{T}\)-algebra homomorphism and \(\theta:W\to\boldsymbol{T}^{n};y\mapsto(\psi(\pi_{E}(X_{1}))(y),\ldots,\psi( \pi_{E}(X_{n}))(y))\). Then the following hold:_
\((1)\) _for any \(f\in\overline{\boldsymbol{T}(X_{1},\ldots,X_{n})}\) and \(y\in W\), \(\psi(\pi_{E}(f))(y)=\pi_{E}(f)(\theta(y))=f(\theta(y))\) holds,_
\((2)\)__\(\operatorname{Im}(\theta)\subset V\)_,_
\((3)\) _\(\theta\) is continuous,_
\((4)\) _if \(\psi\) is surjective, then \(\theta\) is a closed embedding,_
\((5)\) _if \(\psi\) is injective, \(F=\boldsymbol{E}(W)\) and \(\operatorname{Im}(\theta)\) is closed, then \(\operatorname{Im}(\theta)\supset V\), and_
\((6)\) _if \(\psi\) is an isomorphism and \(F=\boldsymbol{E}(W)\), then \(V\) and \(W\) are homeomorphic via \(\theta\)._
Proof.: Let \(\pi_{F}:\overline{\boldsymbol{T}(Y_{1},\ldots,Y_{m})}\twoheadrightarrow \overline{\boldsymbol{T}(Y_{1},\ldots,Y_{m})}/F\) be the natural surjective \(\boldsymbol{T}\)-algebra homomorphism.
\((1)\) Since \(\psi\) is a \(\boldsymbol{T}\)-algebra homomorphism, by the definition of \(\theta\), it is clear.
\((2)\) Let \((f,g)\in E\). By \((1)\), for any \(y\in W\), we have
\[f(\theta(y)) = \psi(\pi_{E}(f))(y)\] \[= \psi(\pi_{E}(g))(y)\] \[= g(\theta(y)).\]
Thus \(\theta(y)\in V\).
\((3)\) By \((2)\), \(\operatorname{Im}(\theta)\subset V\subset\boldsymbol{R}^{n}\). Hence, by Proposition 3.9 and the definition of \(\theta\), it is clear.
\((4)\) If \(W\) is empty, then so is \(\operatorname{Im}(\theta)\), and hence \(\theta\) is a closed embedding.
Assume that \(W\) is nonempty and \(\psi\) is surjective.
We shall show that \(\theta\) is injective. For \(x,y\in W\), we assume \(\theta(x)=\theta(y)\) and let \(x=:(a_{1},\ldots,a_{m})\). Note that \(a_{i}=\pi_{F}(Y_{i})(x)\). Since \(\pi_{E}\) and \(\psi\) are surjective, there exists \(f\in\overline{\boldsymbol{T}(X_{1},\ldots,X_{n})}\) such that
\[\psi(\pi_{E}(f))=\left(\bigoplus_{i=1}^{m}\left(a_{i}^{\odot(-1)}\odot\pi_{F} (Y_{i})\oplus\left(a_{i}^{\odot(-1)}\odot\pi_{F}(Y_{i})\right)^{\odot(-1)} \right)\right)^{\odot(-1)}.\]
Since
\[0 = \psi(\pi_{E}(f))(x)\] \[= f(\theta(x))\] \[= f(\theta(y))\] \[= \psi(\pi_{E}(f))(y)\]
by \((1)\), \(x\) must coincide with \(y\). Hence \(\theta\) is injective.
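In classical notation (reading \(\oplus\) as \(\max\) and \(\odot\) as \(+\), so that \(f^{\odot(-1)}=-f\) as in the proof of Corollary 3.16 below), the element chosen above is the negated Chebyshev distance to \(x\):
\[\psi(\pi_{E}(f))(y)=-\max_{1\leq i\leq m}\bigl|\pi_{F}(Y_{i})(y)-a_{i}\bigr|\qquad(y\in W),\]
which makes the last step explicit: \(\psi(\pi_{E}(f))(y)=0\) forces \(\pi_{F}(Y_{i})(y)=a_{i}\) for every \(i\), i.e., \(y=x\).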
We shall show that \(\theta\) is a closed map. Since \(W\) is nonempty, \(V\) is also nonempty because \(\varnothing\neq\operatorname{Im}(\theta)\subset V\) by \((2)\). Let \(W_{1}\) be a closed subset of \(W\). If \(W_{1}\) is
compact, then so is \(\theta(W_{1})\) in \(V\subset\boldsymbol{R}^{n}\). Hence \(\theta(W_{1})\) is a closed subset of \(V\) in this case. If \(W_{1}\) is not compact, then \(W_{1}\) is unbounded. Thus there exists a sequence \(\{y_{k}\}\) in \(W_{1}\) such that \(\pi_{F}(Y_{j})(y_{k})\to\infty\) or \(-\infty\) as \(k\to\infty\) for some \(j\). Since \(\pi_{E}\) and \(\psi\) are surjective, there exists \(f\in\overline{\boldsymbol{T}(X_{1},\ldots,X_{n})}\) such that \(\psi(\pi_{E}(f))=\pi_{F}(Y_{j})\). For any such sequence \(\{y_{k}\}\), since \(\pi_{F}(Y_{j})(y_{k})=\psi(\pi_{E}(f))(y_{k})\to\infty\) or \(-\infty\) as \(k\to\infty\), there exists a number \(l\) such that \(\psi(\pi_{E}(X_{l}))(y_{k})\to\infty\) or \(-\infty\) as \(k\to\infty\). In fact, if for any \(l^{\prime}\), \(\psi(\pi_{E}(X_{l^{\prime}}))(y_{k})\to b_{l^{\prime}}\in\boldsymbol{R}\) as \(k\to\infty\), then
\[\psi(\pi_{E}(f))(y_{k}) =f(\theta(y_{k}))\] \[=f(\psi(\pi_{E}(X_{1}))(y_{k}),\ldots,\psi(\pi_{E}(X_{n}))(y_{k}))\] \[\to b\in\boldsymbol{R}\]
as \(k\to\infty\) by (1). This is a contradiction. Hence \(\theta(W_{1})\) is a closed set.
(5) If \(W\) is empty, then \(F=\boldsymbol{E}(W)=\overline{\boldsymbol{T}(Y_{1},\ldots,Y_{m})}\times \overline{\boldsymbol{T}(Y_{1},\ldots,Y_{m})}\).
Hence \(\overline{\boldsymbol{T}(Y_{1},\ldots,Y_{m})}/F=\{\pi_{F}(-\infty)\}\). Since \(\psi\) is injective, \(\overline{\boldsymbol{T}(X_{1},\ldots,X_{n})}/E=\{\pi_{E}(-\infty)\}\). By Lemma 3.7, \(V=\boldsymbol{V}(E)\) is also empty. Hence we have the conclusion.
Assume that \(W\) is not empty. By (2), we have \(\varnothing\neq\operatorname{Im}(\theta)\subset V\). By Lemma 3.7, both \(\overline{\boldsymbol{T}(X_{1},\ldots,X_{n})}/E\) and \(\overline{\boldsymbol{T}(Y_{1},\ldots,Y_{m})}/F\) are semifields over \(\boldsymbol{T}\). Assume that the difference set \(V\setminus\operatorname{Im}(\theta)\) is not empty. Since \(\operatorname{Im}(\theta)\) is closed, for any \(x\in V\setminus\operatorname{Im}(\theta)\), there exists \(\varepsilon>0\) such that the \(\varepsilon\)-neighborhood of \(x\) does not intersect \(\operatorname{Im}(\theta)\). Let \(x=:(a_{1},\ldots,a_{n})\). For
\[f:=\left(\bigoplus_{i=1}^{n}\left(a_{i}^{\odot(-1)}\odot X_{i}\oplus\left(a_{ i}^{\odot(-1)}\odot X_{i}\right)^{\odot(-1)}\right)\right)^{\odot(-1)} \oplus(-\varepsilon),\]
\(\psi(\pi_{E}(f))\) is the constant function of \(-\varepsilon\) on \(W\) by (1). On the other hand, since both \(\overline{\boldsymbol{T}(X_{1},\ldots,X_{n})}/E\) and \(\overline{\boldsymbol{T}(Y_{1},\ldots,Y_{m})}/F\) are semifields over \(\boldsymbol{T}\), we have \(\psi(\pi_{E}(-\varepsilon))=-\varepsilon\), and thus it is also the constant function of \(-\varepsilon\) on \(W\). As \(F=\boldsymbol{E}(W)\), we have \(\psi(\pi_{E}(f))=\psi(\pi_{E}(-\varepsilon))\). Since \(\psi\) is injective, we have \(\pi_{E}(f)=\pi_{E}(-\varepsilon)\). However, \(\pi_{E}(f)(x)=0\neq-\varepsilon\), and so \(\pi_{E}(f)\neq\pi_{E}(-\varepsilon)\). This is a contradiction. Hence we have \(V\setminus\operatorname{Im}(\theta)=\varnothing\), i.e., \(\operatorname{Im}(\theta)\supset V\).
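In the same classical notation, the element \(f\) defined above satisfies
\[f(x^{\prime})=\max\Bigl(-\max_{1\leq i\leq n}\bigl|x^{\prime}_{i}-a_{i}\bigr|,\,-\varepsilon\Bigr)\qquad\bigl(x^{\prime}=(x^{\prime}_{1},\ldots,x^{\prime}_{n})\in\boldsymbol{R}^{n}\bigr),\]
so \(f\) takes the value \(0\) at \(x\) and the value \(-\varepsilon\) at every point whose distance from \(x\) is at least \(\varepsilon\) (taking the \(\varepsilon\)-neighborhood with respect to the maximum norm, which is harmless since all norms on \(\boldsymbol{R}^{n}\) are equivalent); this is exactly why \(\psi(\pi_{E}(f))=f\circ\theta\) is the constant \(-\varepsilon\) on \(W\) while \(\pi_{E}(f)(x)=0\).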
(6) It follows from \((2),\ldots,(5)\).
Theorem 3.15 has the following corollaries:
**Corollary 3.16**.: (cf. [8, Lemma 3.4], [11, Lemma 3.7]) In the same setting in Theorem 3.15, for any \(f\in\overline{\boldsymbol{T}(X_{1},\ldots,X_{n})}/E\), the following hold:
(1) \(\sup\{f(x)\,|\,x\in V\}\geq\sup\{\psi(f)(y)\,|\,y\in W\}\), and
(2) \(\inf\{f(x)\,|\,x\in V\}\leq\inf\{\psi(f)(y)\,|\,y\in W\}\).
Moreover, if \(W\) is nonempty and \(\psi\) is injective, then the equalities hold in both (1) and (2).
Proof.: By Theorem 3.15(1), (2), we have
\[\{f(x)\,|\,x\in V\}\supset\{f(\theta(y))\,|\,y\in W\}=\{\psi(f)(y)\,|\,y\in W\}.\]
Hence we have the conclusions.
Assume that \(W\) is nonempty and \(\psi\) is injective. By Theorem 3.15(2), \(\varnothing\neq\operatorname{Im}(\theta)\subset V\). Since \(V\neq\varnothing\) and \(W\neq\varnothing\), by Lemma 3.7, both \(\overline{\boldsymbol{T}(X_{1},\ldots,X_{n})}/E\) and \(\overline{\boldsymbol{T}(Y_{1},\ldots,Y_{m})}/F\) are semifields over \(\boldsymbol{T}\). Hence, for any \(t\in\boldsymbol{T}\), we have \(\psi(t)=t\). Thus, if \(f\in\boldsymbol{T}\), the assertions are clear.
Assume that \(f\) is not a constant. Let \(a:=\sup\{f(x)\,|\,x\in V\}\). In this case, since \(f\neq-\infty\), by Lemma 3.4, \(a\neq-\infty\). As \(V\neq\varnothing\), \(a\) is in \(\boldsymbol{R}\cup\{\infty\}\). For \(b\in\boldsymbol{R}\), as \(a\) is the supremum of \(f\) on \(V\), we have \(f\oplus b\neq b\) when \(b<a\). Therefore, if \(b<a\), then we have
\[\psi(f)\oplus b=\psi(f)\oplus\psi(b)=\psi(f\oplus b)\neq\psi(b)=b\]
Thus, for any such \(b\), there exists \(y\in W\) such that \(\psi(f)(y)\oplus b\neq b\). This means that \(\psi(f)(y)>b\), and hence \(a\leq\sup\{\psi(f)(y)\,|\,y\in W\}\).
For the infimums of \(f\) and \(\psi(f)\), we can obtain the conclusion by applying the supremum case for \(f^{\odot(-1)}=-f\) and \(\psi(f^{\odot(-1)})=\psi(f)^{\odot(-1)}=-\psi(f)\) since
\[\inf\{f(x)\,|\,x\in V\}=-\sup\{-f(x)\,|\,x\in V\}\]
and
\[\inf\{\psi(f)(y)\,|\,y\in W\}=-\sup\{-\psi(f)(y)\,|\,y\in W\}.\qed\]
Now we can give another proof of [8, Theorem 1.1 and Corollary 3.22]:
**Corollary 3.17**.: _Let \(\varGamma_{1}\) and \(\varGamma_{2}\) be tropical curves and \(\psi:\operatorname{Rat}(\varGamma_{1})\to\operatorname{Rat}(\varGamma_{2})\) a \(\boldsymbol{T}\)-algebra homomorphism. If \(\psi\) is injective, then there exists a unique surjective morphism \(\varphi:\varGamma_{2}\twoheadrightarrow\varGamma_{1}\) such that for any \(f\in\operatorname{Rat}(\varGamma_{1})\), \(\psi(f)=f\circ\varphi\)._
Proof.: By [9, Theorem 1.1], we can choose \(f_{1},\ldots,f_{n}\in\operatorname{Rat}(\varGamma_{1})\setminus\{-\infty\}\) and \(g_{1},\ldots,g_{m}\in\operatorname{Rat}(\varGamma_{2})\setminus\{-\infty\}\) such that \(\operatorname{Rat}(\varGamma_{1})=\boldsymbol{T}(f_{1},\ldots,f_{n})\) and \(\operatorname{Rat}(\varGamma_{2})=\boldsymbol{T}(g_{1},\ldots,g_{m})\). By Lemma 3.11, the correspondences \(X_{i}\mapsto f_{i}\) and \(Y_{i}\mapsto g_{i}\) induce the surjective \(\boldsymbol{T}\)-algebra homomorphisms
\[\psi_{1}:\overline{\boldsymbol{T}(X_{1},\ldots,X_{n})}\twoheadrightarrow \operatorname{Rat}(\varGamma_{1})\]
and
\[\psi_{2}:\overline{\boldsymbol{T}(Y_{1},\ldots,Y_{m})}\twoheadrightarrow \operatorname{Rat}(\varGamma_{2})\]
respectively. Let
\[\pi_{\operatorname{Ker}(\psi_{1})}:\overline{\boldsymbol{T}(X_{1},\ldots,X_{n })}\twoheadrightarrow\overline{\boldsymbol{T}(X_{1},\ldots,X_{n})}/ \operatorname{Ker}(\psi_{1})\]
and
\[\pi_{\operatorname{Ker}(\psi_{2})}:\overline{\boldsymbol{T}(Y_{1},\ldots,Y_{ m})}\twoheadrightarrow\overline{\boldsymbol{T}(Y_{1},\ldots,Y_{m})}/ \operatorname{Ker}(\psi_{2})\]
be the natural surjective \(\boldsymbol{T}\)-algebra homomorphisms. Let
\[\overline{\psi_{1}}:\overline{\boldsymbol{T}(X_{1},\ldots,X_{n})}/\operatorname{ Ker}(\psi_{1})\to\operatorname{Rat}(\varGamma_{1})\]
and
\[\overline{\psi_{2}}:\overline{\boldsymbol{T}(Y_{1},\ldots,Y_{m})}/\operatorname {Ker}(\psi_{2})\to\operatorname{Rat}(\varGamma_{2})\]
be the \(\boldsymbol{T}\)-algebra homomorphisms induced by \(\psi_{1}\) and \(\psi_{2}\) respectively, i.e., \(\overline{\psi_{i}}\circ\pi_{\operatorname{Ker}(\psi_{i})}=\psi_{i}\). By Proposition 3.12, \(\operatorname{Ker}(\psi_{i})=\boldsymbol{E}(\boldsymbol{V}(\operatorname{Ker} (\psi_{i})))\) and
\[\theta_{1}:\varGamma_{1}\setminus\varGamma_{1\infty}\to\boldsymbol{V}( \operatorname{Ker}(\psi_{1}));x\mapsto(f_{1}(x),\ldots,f_{n}(x))\]
and
\[\theta_{2}:\varGamma_{2}\setminus\varGamma_{2\infty}\to\boldsymbol{V}( \operatorname{Ker}(\psi_{2}));x\mapsto(g_{1}(x),\ldots,g_{m}(x))\]
are isometries. By Theorem 3.15(2), we have
\[\theta:\boldsymbol{V}(\operatorname{Ker}(\psi_{2})) \to \boldsymbol{V}(\operatorname{Ker}(\psi_{1}));\] \[y \mapsto ((\overline{\psi_{2}}^{-1}\circ\psi\circ\overline{\psi_{1}})( \pi_{\operatorname{Ker}(\psi_{1})}(X_{1}))(y),\ldots,\] \[(\overline{\psi_{2}}^{-1}\circ\psi\circ\overline{\psi_{1}})(\pi_{ \operatorname{Ker}(\psi_{1})}(X_{n}))(y)).\]
Since \(\varGamma_{i}\neq\varnothing\), we have \(\boldsymbol{V}(\operatorname{Ker}(\psi_{i}))\neq\varnothing\). As \(\overline{\psi_{2}}^{-1}\circ\psi\circ\overline{\psi_{1}}\) is injective, by Corollary 3.16, we have
\[\sup\{\pi_{\operatorname{Ker}(\psi_{1})}(X_{i})(x)\,|\,x\in \boldsymbol{V}(\operatorname{Ker}(\psi_{1}))\}\] \[= \sup\{(\overline{\psi_{2}}^{-1}\circ\psi\circ\overline{\psi_{1}}) (\pi_{\operatorname{Ker}(\psi_{1})}(X_{i}))(y)\,|\,y\in\boldsymbol{V}( \operatorname{Ker}(\psi_{2}))\}\]
and
\[\inf\{\pi_{\operatorname{Ker}(\psi_{1})}(X_{i})(x)\,|\,x\in \boldsymbol{V}(\operatorname{Ker}(\psi_{1}))\}\] \[= \inf\{(\overline{\psi_{2}}^{-1}\circ\psi\circ\overline{\psi_{1}} )(\pi_{\operatorname{Ker}(\psi_{1})}(X_{i}))(y)\,|\,y\in\boldsymbol{V}( \operatorname{Ker}(\psi_{2}))\}.\]
By Remark 3.10, for any \(x\in\varGamma_{1\infty}\), there exists \(i\) such that \((\overline{\psi_{1}}\circ\pi_{\operatorname{Ker}(\psi_{1})})(X_{i})(x)=\infty\) or \(-\infty\). Hence
\[\sup\{(\overline{\psi_{2}}^{-1}\circ\psi\circ\overline{\psi_{1}})(\pi_{ \operatorname{Ker}(\psi_{1})}(X_{i}))(y)\,|\,y\in\boldsymbol{V}(\operatorname {Ker}(\psi_{2}))\}=\infty\]
or
\[\inf\{(\overline{\psi_{2}}^{-1}\circ\psi\circ\overline{\psi_{1}})(\pi_{ \operatorname{Ker}(\psi_{1})}(X_{i}))(y)\,|\,y\in\boldsymbol{V}(\operatorname{ Ker}(\psi_{2}))\}=-\infty.\]
Since \((\psi\circ\overline{\psi_{1}})(\pi_{\operatorname{Ker}(\psi_{1})}(X_{i}))\) and \((\overline{\psi_{2}}^{-1}\circ\psi\circ\overline{\psi_{1}})(\pi_{\operatorname{Ker}(\psi_{1})}(X_{i}))\) take the same value at the points corresponding via \(\theta_{2}\) by Proposition 3.12(1), this means that there exists \(y\in\varGamma_{2\infty}\) such that \((\psi\circ\overline{\psi_{1}})(\pi_{\operatorname{Ker}(\psi_{1})}(X_{i}))(y)=\infty\) or \(-\infty\). As \(\theta_{1}^{-1},\theta_{2}\) and \(\theta\) are continuous, for any sequence \(\{y_{k}\}\) converging to \(y\), the sequence \(\{(\theta_{1}^{-1}\circ\theta\circ\theta_{2})(y_{k})\}\) converges to \(x\). Hence \(\operatorname{Im}(\theta)\) is closed. By Theorem 3.15(5), \(\theta\) is surjective. Since \(\theta_{1}^{-1}\) and \(\theta_{2}\) are isometries and \(\theta\) is continuous, \(\theta_{1}^{-1}\circ\theta\circ\theta_{2}\) is surjective and continuous. This \(\theta_{1}^{-1}\circ\theta\circ\theta_{2}\) naturally extends to a surjective continuous
map from \(\varGamma_{2}\) to \(\varGamma_{1}\) and it is the morphism \(\varphi\) we want. In fact, by the definition of \(\theta_{1}^{-1}\circ\theta\circ\theta_{2}\), for any \(y\in\varGamma_{2}\setminus\varGamma_{2\infty}\) and \(f\in\operatorname{Rat}(\varGamma_{1})\), we have \(\psi(f)(y)=f((\theta_{1}^{-1}\circ\theta\circ\theta_{2})(y))=f(\varphi(y))\), and thus for any \(y\in\varGamma_{2}\), we have \(\psi(f)(y)=f(\varphi(y))\). This means that \(\psi(f)=f\circ\varphi\) holds. Since \((\overline{\psi_{2}}^{-1}\circ\psi\circ\overline{\psi_{1}})(\pi_{\operatorname{Ker}(\psi_{1})}(X_{i}))\) is piecewise affine with integer slopes and \(\theta_{1}^{-1}\) and \(\theta_{2}\) are isometries, \(\varphi\) is a morphism. If there exists a morphism \(\varphi^{\prime}\) satisfying \(\psi(f)=f\circ\varphi^{\prime}\) for any \(f\in\operatorname{Rat}(\varGamma_{1})\), then, for any \(y\in\varGamma_{2}\setminus\varGamma_{2\infty}\), we have
\[0 = \operatorname{CF}(\{\varphi(y)\},\infty)(\varphi(y))\] \[= (\operatorname{CF}(\{\varphi(y)\},\infty)\circ\varphi)(y)\] \[= \psi(\operatorname{CF}(\{\varphi(y)\},\infty))(y)\] \[= (\operatorname{CF}(\{\varphi(y)\},\infty)\circ\varphi^{\prime})(y)\] \[= \operatorname{CF}(\{\varphi(y)\},\infty)(\varphi^{\prime}(y)).\]
As \(\operatorname{CF}(\{\varphi(y)\},\infty)\) takes zero at and only at \(\varphi(y)\), we have \(\varphi(y)=\varphi^{\prime}(y)\). Since both \(\varphi\) and \(\varphi^{\prime}\) are continuous, we have \(\varphi=\varphi^{\prime}\).
For a nonempty set \(A\), we write the identity map of \(A\) as \(\operatorname{id}_{A}\).
By the following lemma and corollary, we know that we can drop the condition "\(F=\boldsymbol{E}(W)\)" in Theorem 3.15(6):
**Lemma 3.18**.: _Let \(E,F\) and \(G\) be congruences on \(\overline{\boldsymbol{T}(X_{1},\ldots,X_{n})}\), \(\overline{\boldsymbol{T}(Y_{1},\ldots,Y_{m})}\) and \(\overline{\boldsymbol{T}(Z_{1},\ldots,Z_{l})}\) respectively. Let \(\pi_{E}:\overline{\boldsymbol{T}(X_{1},\ldots,X_{n})}\twoheadrightarrow \overline{\boldsymbol{T}(X_{1},\ldots,X_{n})}/E\) and \(\pi_{F}:\overline{\boldsymbol{T}(Y_{1},\ldots,Y_{m})}\twoheadrightarrow \overline{\boldsymbol{T}(Y_{1},\ldots,Y_{m})}/F\) be the natural surjective \(\boldsymbol{T}\)-algebra homomorphisms. Let \(\psi:\overline{\boldsymbol{T}(X_{1},\ldots,X_{n})}/E\to \overline{\boldsymbol{T}(Y_{1},\ldots,Y_{m})}/F\) and \(\phi:\overline{\boldsymbol{T}(Y_{1},\ldots,Y_{m})}/F\to \overline{\boldsymbol{T}(Z_{1},\ldots,Z_{l})}/G\) be \(\boldsymbol{T}\)-algebra homomorphisms. For_
\[\theta_{\psi}:\boldsymbol{V}(F)\to\boldsymbol{V}(E);y\mapsto( \psi(\pi_{E}(X_{1}))(y),\ldots,\psi(\pi_{E}(X_{n}))(y)),\] \[\theta_{\phi}:\boldsymbol{V}(G)\to\boldsymbol{V}(F);z\mapsto( \phi(\pi_{F}(Y_{1}))(z),\ldots,\phi(\pi_{F}(Y_{m}))(z))\]
_and_
\[\theta_{\phi\circ\psi}:\boldsymbol{V}(G)\to\boldsymbol{V}(E);z\mapsto(( \phi\circ\psi)(\pi_{E}(X_{1}))(z),\ldots,(\phi\circ\psi)(\pi_{E}(X_{n}))(z)),\]
_the following hold:_
\((1)\)_\(\theta_{\phi\circ\psi}=\theta_{\psi}\circ\theta_{\phi}\), and_
\((2)\) _if \(\overline{\boldsymbol{T}(X_{1},\ldots,X_{n})}/E=\overline{\boldsymbol{T}(Z_{1},\ldots,Z_{l})}/G\) and \(\phi\circ\psi=\operatorname{id}_{\overline{\boldsymbol{T}(X_{1},\ldots,X_{n}) }/E}\), then \(\theta_{\phi\circ\psi}=\operatorname{id}_{\boldsymbol{V}(E)}\)._
Proof.: (1) For any \(z\in\boldsymbol{V}(G)\), we have
\[(\theta_{\psi}\circ\theta_{\phi})(z) =\theta_{\psi}(\theta_{\phi}(z))\] \[=\theta_{\psi}(\phi(\pi_{F}(Y_{1}))(z),\ldots,\phi(\pi_{F}(Y_{m}))( z))\] \[=(\psi(\pi_{E}(X_{1}))(\phi(\pi_{F}(Y_{1}))(z),\ldots,\phi(\pi_{F }(Y_{m}))(z)),\ldots,\] \[\quad\psi(\pi_{E}(X_{n}))(\phi(\pi_{F}(Y_{1}))(z),\ldots,\phi(\pi_ {F}(Y_{m}))(z)))\] \[=(\phi(\psi(\pi_{E}(X_{1})))(z),\ldots,\phi(\psi(\pi_{E}(X_{n}))) (z))\] \[=((\phi\circ\psi)(\pi_{E}(X_{1}))(z),\ldots,(\phi\circ\psi)(\pi_{ E}(X_{n}))(z))\] \[=\theta_{\phi\circ\psi}(z).\]
(2) It is clear.
**Corollary 3.19**.: _In the same setting in Theorem 3.15, if \(\psi\) is an isomorphism, then \(V\) and \(W\) are homeomorphic via \(\theta\)._
Proof.: By Theorem 3.15(4) and Lemma 3.18, we have the conclusion.
In Theorem 3.15, we began by assuming that congruences are given. In the following proposition, we instead start from subsets of Euclidean spaces. For \(V\subset\boldsymbol{R}^{n}\), let \(F(V,\boldsymbol{T})\) denote the set of all functions \(V\to\boldsymbol{T}\). This \(F(V,\boldsymbol{T})\) becomes a \(\boldsymbol{T}\)-algebra with natural operations.
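Concretely, the natural operations on \(F(V,\boldsymbol{T})\) are the pointwise ones (with the \(\max\)-plus reading of \(\oplus\) and \(\odot\) used above):
\[(f\oplus g)(x):=\max\{f(x),g(x)\},\qquad(f\odot g)(x):=f(x)+g(x)\qquad(x\in V),\]
with each \(t\in\boldsymbol{T}\) regarded as the constant function \(t\) on \(V\).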
**Proposition 3.20**.: _Let \(V\) be a subset of \(\boldsymbol{R}^{n}\) and \(W\) a nonempty subset of \(\boldsymbol{R}^{m}\). For \(f_{1},\ldots,f_{n}\in\overline{\boldsymbol{T}(Y_{1},\ldots,Y_{m})}/\boldsymbol{E}(W)\), assume that the image of \(\theta:W\to\boldsymbol{T}^{n};y\mapsto(f_{1}(y),\ldots,f_{n}(y))\) is contained in \(V\). Let \(\theta^{*}:\overline{\boldsymbol{T}(X_{1},\ldots,X_{n})}/\boldsymbol{E}(V)\to F(W,\boldsymbol{T});f\mapsto f\circ\theta\). Then the following hold:_
(1)_\(V\neq\varnothing\),_
(2)_\(\operatorname{Im}(\theta^{*})\subset\overline{\boldsymbol{T}(Y_{1},\ldots,Y_{ m})}/\boldsymbol{E}(W)\subset F(W,\boldsymbol{T})\),_
(3)_\(\theta^{*}\) is a \(\boldsymbol{T}\)-algebra homomorphism,_
(4) _if \(\operatorname{Im}(\theta)\supset V\), then \(\theta^{*}\) is injective, and_
(5) _if \(V\) and \(W\) are homeomorphic via \(\theta\) and there exist \(g_{1},\ldots,g_{m}\in\overline{\boldsymbol{T}(X_{1},\ldots,X_{n})}/\boldsymbol{E}(V)\) such that for any \(x\in V\), \(\theta^{-1}(x)=(g_{1}(x),\ldots,g_{m}(x))\), then \(\overline{\boldsymbol{T}(X_{1},\ldots,X_{n})}/\boldsymbol{E}(V)\) and \(\overline{\boldsymbol{T}(Y_{1},\ldots,Y_{m})}/\boldsymbol{E}(W)\) are isomorphic via \(\theta^{*}\)._
Proof.: (1) Since \(W\neq\varnothing\), we have \(\operatorname{Im}(\theta)\neq\varnothing\). By assumption, \(\varnothing\neq\operatorname{Im}(\theta)\subset V\).
(2) Any element \(f\) of \(\overline{\boldsymbol{T}(Y_{1},\ldots,Y_{m})}/\boldsymbol{E}(W)\) defines a function \(W\to\boldsymbol{T}\), and for \(f,g\in\overline{\boldsymbol{T}(Y_{1},\ldots,Y_{m})}/\boldsymbol{E}(W)\), if \(f(y)=g(y)\) for any \(y\in W\), then \(f=g\) as elements of \(\overline{\boldsymbol{T}(Y_{1},\ldots,Y_{m})}/\boldsymbol{E}(W)\); hence the natural map \(\overline{\boldsymbol{T}(Y_{1},\ldots,Y_{m})}/\boldsymbol{E}(W)\to F(W,\boldsymbol{T})\) is injective.
For any \(f\in\overline{\mathbf{T}(X_{1},\ldots,X_{n})}/\mathbf{E}(V)\) and \(y\in W\), since
\[\theta^{*}(f)(y) = (f\circ\theta)(y)\] \[= f(\theta(y))\] \[= f(f_{1}(y),\ldots,f_{n}(y))\] \[= f(f_{1},\ldots,f_{n})(y),\]
we have \(\theta^{*}(f)=f(f_{1},\ldots,f_{n})\in\overline{\mathbf{T}(Y_{1},\ldots,Y_{m})}/\bm {E}(W)\).
(3) By (1), \(V\) is nonempty. By assumption, \(W\) is also nonempty. Since \(\varnothing\neq V\subset\mathbf{V}(\mathbf{E}(V))\) and \(\varnothing\neq W\subset\mathbf{V}(\mathbf{E}(W))\), by Lemma 3.7, both \(\overline{\mathbf{T}(X_{1},\ldots,X_{n})}/\mathbf{E}(V)\) and \(\overline{\mathbf{T}(Y_{1},\ldots,Y_{m})}/\mathbf{E}(W)\) are semi-fields over \(\mathbf{T}\). For any \(t\in\mathbf{T}\), \(\theta^{*}(t)=t\in\overline{\mathbf{T}(Y_{1},\ldots,Y_{m})}/\mathbf{E}(W)\) is clear. For any \(f,g\in\overline{\mathbf{T}(X_{1},\ldots,X_{n})}/\mathbf{E}(V)\), we have
\[\theta^{*}(f\oplus g)=(f\oplus g)\circ\theta=(f\circ\theta)\oplus(g\circ \theta)=\theta^{*}(f)\oplus\theta^{*}(g)\]
and
\[\theta^{*}(f\odot g)=(f\odot g)\circ\theta=(f\circ\theta)\odot(g\circ\theta)= \theta^{*}(f)\odot\theta^{*}(g).\]
Thus \(\theta^{*}\) is a \(\mathbf{T}\)-algebra homomorphism.
(4) For \(f,g\in\overline{\mathbf{T}(X_{1},\ldots,X_{n})}/\mathbf{E}(V)\), we assume that \(\theta^{*}(f)=\theta^{*}(g)\) holds. Since \(\operatorname{Im}(\theta)\supset V\), for any \(x\in V\), there exists \(y\in W\) such that \(x=\theta(y)\). Hence we have
\[f(x) = f(\theta(y))=(f\circ\theta)(y)=\theta^{*}(f)(y)\] \[= \theta^{*}(g)(y)=(g\circ\theta)(y)=g(\theta(y))=g(x).\]
Therefore \(f=g\in\overline{\mathbf{T}(X_{1},\ldots,X_{n})}/\mathbf{E}(V)\).
(5) By assumption and (2), the image of \((\theta^{-1})^{*}:\overline{\mathbf{T}(Y_{1},\ldots,Y_{m})}/\mathbf{E}(W)\to F(V,\mathbf{T });g\mapsto g\circ\theta^{-1}\) is contained in \(\overline{\mathbf{T}(X_{1},\ldots,X_{n})}/\mathbf{E}(V)\). For any \(f\in\overline{\mathbf{T}(X_{1},\ldots,X_{n})}/\mathbf{E}(V)\), we have
\[((\theta^{-1})^{*}\circ\theta^{*})(f) = (\theta^{-1})^{*}(\theta^{*}(f))=(\theta^{-1})^{*}(f\circ\theta)\] \[= (f\circ\theta)\circ\theta^{-1}=f\circ(\theta\circ\theta^{-1})=f \circ\operatorname{id}_{V}=f.\]
For any \(g\in\overline{\mathbf{T}(Y_{1},\ldots,Y_{m})}/\mathbf{E}(W)\), we have
\[(\theta^{*}\circ(\theta^{-1})^{*})(g) = \theta^{*}((\theta^{-1})^{*}(g))=\theta^{*}(g\circ\theta^{-1})\] \[= (g\circ\theta^{-1})\circ\theta=g\circ(\theta^{-1}\circ\theta)=g \circ\operatorname{id}_{W}=g.\]
Thus we have the conclusion.
By the following example, we know that we cannot drop the condition "\(\operatorname{Im}(\theta)\) is closed" in Theorem 3.15(5) and that the converse of Proposition 3.20(4) does not hold:
**Example 3.21**.: Let \(V:=[0,1]\subset\mathbf{R}\) and \(W:=\{(t,1/t)\in\mathbf{R}^{2}\,|\,0<t\leq 1\}\). Since \(V\) and \(W\) are closed subsets of \(\mathbf{R}\) and \(\mathbf{R}^{2}\) respectively, by Proposition 3.9, these are congruence varieties. Let \(E:=\mathbf{E}(V)=\)
\(\{(f,g)\in\overline{\mathbf{T}(X)}\times\overline{\mathbf{T}(X)}\,|\,\forall x\in V,f(x)=g (x)\}\) and \(F:=\mathbf{E}(W)=\{(f,g)\in\overline{\mathbf{T}(Y_{1},Y_{2})}\times\overline{\mathbf{T}(Y_{ 1},Y_{2})}\,|\,\forall y\in W,f(y)=g(y)\}\). Then \(V=\mathbf{V}(E)\) and \(W=\mathbf{V}(F)\). Let \(\pi_{E}:\overline{\mathbf{T}(X)}\to\overline{\mathbf{T}(X)}/E\) and \(\pi_{F}:\overline{\mathbf{T}(Y_{1},Y_{2})}\twoheadrightarrow\overline{\mathbf{T}(Y_{1},Y_{2})}/F\) be the natural surjective \(\mathbf{T}\)-algebra homomorphisms and \(\theta:W\to V;y\mapsto\pi_{F}(Y_{1})(y)\). Then \(\mathrm{Im}(\theta)=(0,1]\not\supset V\). Let \(\theta^{*}:\overline{\mathbf{T}(X)}/E\to\overline{\mathbf{T}(Y_{1},Y_{2})}/F;f\mapsto f \circ\theta\) be the pull-back. This \(\theta^{*}\) is injective by the following argument. Let \(f,g\in\overline{\mathbf{T}(X)}/E\) such that \(\theta^{*}(f)=\theta^{*}(g)\). Since \(f\circ\theta=\theta^{*}(f)=\theta^{*}(g)=g\circ\theta\) and \(\mathrm{Im}(\theta)=(0,1]\), the restrictions \(f|_{(0,1]}\) and \(g|_{(0,1]}\) coincide. Thus, if \(f=-\infty\), then so is \(g\), and hence \(f=g\). If \(f\neq-\infty\), then \(g\neq-\infty\) and \(f(0),g(0)\in\mathbf{R}\). Since both \(f\) and \(g\) are continuous, \(f(0)=g(0)\). This means that \(f|_{V}=g|_{V}\). As \(V=\mathbf{V}(E)\), we have \(f=g\). In conclusion, \(\theta^{*}\) is injective. Note that \(\theta^{*}(\pi_{E}(X))=\pi_{F}(Y_{1})\). Also \(\pi_{F}(Y_{2})\not\in\mathrm{Im}(\theta^{*})\). In fact, if there exists \(f\in\overline{\mathbf{T}(X)}/E\) such that \(\theta^{*}(f)=\pi_{F}(Y_{2})\), then \(\theta^{*}(f)(t,1/t)\) diverges to \(\infty\) as \(t\to 0\).
By the following example, we know that we cannot drop the condition "there exist \(g_{1},\dots,g_{m}\in\overline{\mathbf{T}(X_{1},\dots,X_{n})}/\mathbf{E}(V)\) such that for any \(x\in V\), \(\theta^{-1}(x)=(g_{1}(x),\dots,g_{m}(x))\)" in Proposition 3.20(5) and that the condition "\(V\) and \(W\) are homeomorphic via \(\theta\)" is not a sufficient condition for "\(\overline{\mathbf{T}(X_{1},\dots,X_{n})}/\mathbf{E}(V)\) and \(\overline{\mathbf{T}(Y_{1},\dots,Y_{m})}/\mathbf{E}(W)\) are isomorphic via \(\theta^{*}\)":
**Example 3.22**.: Let \(V:=[0,2]\subset\mathbf{R}\) and \(W:=[0,1]\subset\mathbf{R}\). Let \(\pi_{\mathbf{E}(W)}:\overline{\mathbf{T}(Y)}\twoheadrightarrow\overline{\mathbf{T}(Y)}/ \mathbf{E}(W)\) be the natural surjective \(\mathbf{T}\)-algebra homomorphism. Let \(\theta:W\to V;x\mapsto\pi_{\mathbf{E}(W)}(Y^{\odot 2})(x)\) and \(\theta^{*}:\overline{\mathbf{T}(X)}/\mathbf{E}(V)\to\overline{\mathbf{T}(Y)}/\mathbf{E}(W)\) the pull-back. This \(\theta\) is a homeomorphism. However there exists no element of \(\overline{\mathbf{T}(X)}/\mathbf{E}(V)\) that induces \(\theta^{-1}\). Also \(\pi_{\mathbf{E}(W)}(Y)\not\in\mathrm{Im}(\theta^{*})\) in this case. Hence \(\theta^{*}\) is not surjective.
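In classical notation the map in Example 3.22 is simply doubling: \(\theta(x)=\pi_{\mathbf{E}(W)}(Y^{\odot 2})(x)=x+x=2x\), so
\[\theta^{-1}:V\to W;\;x\mapsto\frac{x}{2}.\]
Since every element of \(\overline{\mathbf{T}(X)}/\mathbf{E}(V)\) other than \(-\infty\) is (the restriction of) a tropical rational function and hence piecewise affine with integer slopes (as used in the proof of Corollary 3.17), no such element can take the value \(x/2\) at every \(x\in V\); for the same reason, an \(f\) with \(\theta^{*}(f)=\pi_{\mathbf{E}(W)}(Y)\) would have to satisfy \(f(2x)=x\) for all \(x\in W\), i.e., \(f(u)=u/2\) on \(V\), which is likewise impossible.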
|
2304.13451
|
A Systematic Mapping Study of Code Quality in Education -- with Complete
Bibliography
|
While functionality and correctness of code has traditionally been the main
focus of computing educators, quality aspects of code are getting increasingly
more attention. High-quality code contributes to the maintainability of
software systems, and should therefore be a central aspect of computing
education. We have conducted a systematic mapping study to give a broad
overview of the research conducted in the field of code quality in an
educational context. The study investigates paper characteristics, topics,
research methods, and the targeted programming languages. We found 195
publications (1976-2022) on the topic in multiple databases, which we
systematically coded to answer the research questions. This paper reports on
the results and identifies developments, trends, and new opportunities for
research in the field of code quality in computing education.
|
Hieke Keuning, Johan Jeuring, Bastiaan Heeren
|
2023-04-26T11:18:39Z
|
http://arxiv.org/abs/2304.13451v1
|
# A Systematic Mapping Study of Code Quality in Education -- with Complete Bibliography
###### Abstract.
While functionality and correctness of code has traditionally been the main focus of computing educators, quality aspects of code are getting increasingly more attention. High-quality code contributes to the maintainability of software systems, and should therefore be a central aspect of computing education. We have conducted a systematic mapping study to give a broad overview of the research conducted in the field of code quality in an educational context. The study investigates paper characteristics, topics, research methods, and the targeted programming languages. We found 195 publications (1976-2022) on the topic in multiple databases, which we systematically coded to answer the research questions. This paper reports on the results and identifies developments, trends, and new opportunities for research in the field of code quality in computing education.
programming education, software engineering education, code quality, refactoring, code smells, systematic mapping study
static analysis has a broader application and can also be used to identify bugs and errors. There are also many tools to support the refactoring of code, often integrated in modern IDEs.
### Related work
_Systematic literature reviews_, which dive deep into the literature on some topic, are increasingly being conducted for Computing Education (CEd) topics (see section 4.5 for examples related to our topic). A _systematic mapping study_ aims to give a broad overview of a particular research area, usually by categorising its publications. While mapping studies are common in medicine, they are less common in other fields, such as software engineering (Krishnan et al., 2017) and CEd. A systematic mapping study on software testing in introductory programming courses by Scatalon et al. (2018) is the most relevant to our study, because they also investigate a software quality aspect in an educational context. The authors selected 293 papers and categorised them on their topic and evaluation method.
Numerous mapping studies and literature reviews have been conducted on topics related to code quality, such as a mapping study on source code metrics (Krishnan et al., 2017), and a tertiary review on smells and refactoring (Krishnan et al., 2017). However, these studies are not aimed at code quality in the context of _education_, the topic of our study.
## 3. Method
We generally follow the process from Petersen et al. (2017) for doing systematic mapping studies in software engineering. We employ a different approach to classifying studies, as described in section 3.3.
### Scope and research questions
The scope of this mapping study is:
_Research on educational activities and support concerning code quality (as defined in 2.1), such as: instruction, analysis, assessment, tool support, tasks, and feedback._
Within this scope we address the following research questions:
**RQ1**: Where are the papers published?
**RQ2**: Which topics have been addressed?
**RQ3**: Which types of studies have been conducted?
**RQ4**: For which programming language is the intervention?
**RQ5**: What are the trends over time?
**RQ6**: Which other topics are closely related to code quality?
### Search process
The inclusion and exclusion criteria are defined in table 1. We have first assembled a base list of 40 papers that have been collected by ourselves over the years and meet the criteria. Two authors have verified that all publications on the list should be included.
#### 3.2.1. Database search
We collected the keywords from the base papers, removing very general terms such as 'university' and 'examples', very specific terms such as names of tools and programming languages, and terms indicating the type of study. Based on these keywords we experimented with various search strings, checking whether the papers would end up in the search results. Because code quality is defined and named in various ways, we have used several specific terms in the search string to be as inclusive as possible. During the process we have made the scope more clear by explicitly defining the edge topics for RQ6, as discussed in section 4.5.
We chose three databases, Scopus, ACM and IEEE, which cover a wide range of publications and allow searching with a complex search string. The search includes papers up to and including 2022. Because the final searches were conducted in December 2022, a few papers from 2022 could be missing. The final search string is shown below. We applied the search string to title, abstract, and keywords, and made some adjustments to match the database requirements. From our base list of 40 papers, 36 were found by this search.
(program* OR code OR coding OR software) AND ("code quality" OR "software quality" OR "design quality" OR refactoring OR "static analysis" OR "software metrics" OR smell OR readability OR "code style" OR "coding style" OR "programming style") AND (student OR teach* OR educat* OR curriculum OR novice)
Next, we elaborate on the process steps (summarised in figure 1).
_Cleaning._ One author combined the results from the three databases, and removed entries that are not papers, or are too short, and deleted duplicates based on title automatically using a script.
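As an indication of what such a script involves (the field names and thresholds below are assumptions for illustration; the actual script is not included in the paper), the cleaning step can be sketched as follows:

```python
import csv

def clean(records, min_pages=2):
    """Merge database exports, drop non-paper or too-short entries,
    and de-duplicate on normalised title."""
    seen_titles = set()
    kept = []
    for rec in records:  # rec: dict with (assumed) keys 'title', 'type', 'pages'
        title = " ".join(rec.get("title", "").lower().split())
        if not title or rec.get("type", "").lower() not in {"article", "inproceedings"}:
            continue  # not a paper
        try:
            pages = int(rec.get("pages") or 0)
        except ValueError:
            pages = min_pages  # keep entries whose page count cannot be parsed
        if pages < min_pages:
            continue  # too short
        if title in seen_titles:
            continue  # duplicate across the Scopus/ACM/IEEE exports
        seen_titles.add(title)
        kept.append(rec)
    return kept

# Example usage with CSV exports of the three database searches:
# records = [row for path in ("scopus.csv", "acm.csv", "ieee.csv")
#            for row in csv.DictReader(open(path, encoding="utf-8"))]
# papers = clean(records)
```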
_Pre-selection._ One author filtered the list for exclusion based on title, and/or publication source, which were obviously out of scope because they are not about code quality and/or educational setting.
_Selection._ Two authors assessed a subset of the remaining list by reading title/abstract/keywords and selecting yes/no/maybe. Both 'yes' and 'maybe' indicated that we will consult the fulltext. If only one of the authors selected 'no', we discussed whether or not the fulltext should be consulted. We had three rounds of around 100 papers each, with an agreement of 77%, 78%, and 89%, respectively. One author assessed the remaining papers. For the fulltext selection we also checked and discussed several papers with two authors, after which one author selected the remaining papers. After this step, we had a selection of 168 papers for inclusion in our study.
#### 3.2.2. Snowballing
The ambiguity surrounding the definition of code quality prevents constructing a search string that finds all relevant research. To find additional publications, we have performed _snowballing_: identifying relevant references from (backwards) and to (forwards) a set of papers (Krishnan et al., 2017). For all 168 papers found in the previous steps, one author inspected all references from and to the paper (the latter using Google Scholar), and selected those within the scope. This inspection of thousands of references led to 27 additional papers. We stopped after one round of snowballing; we believe a second round would unlikely yield more relevant papers, because these papers would not have cited any of the papers from the database search. To answer RQ6, we kept a record of topics and papers referred to during snowballing that were outside our scope.
### Coding
To answer RQs 2-4, we coded each paper in four categories: topic, aspect, method, and language. Codes are shown in a box. We use the topics from the mapping study on testing in programming courses by Scatalon et al. (2018) as our base for RQ2 (topic), with some small adjustments to fit our scope. We assigned one topic to each paper, representing its main focus or goal.
[MISSING_PAGE_POST]
common venues are, as expected, related to Computing Education; however, the diversity of the remaining venues is broad. Papers on the topic have been published in venues on human-centric computing, software engineering, games, educational technology, program comprehension, and others.
### Topics (RQ2) and methods (RQ3)
Figure 3 shows the correlation matrix of the topics and the methods. We did not find any literature reviews or theory papers; therefore, these categories were omitted. We notice two major topics: program quality (41 papers) and tools (70 papers). We have made a distinction between tools created by the authors (59), and the use of an external tool (11). Figure 4 shows the correlation matrix of the topics and code quality aspects. All aspects clearly appear in multiple papers.
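Matrices of this kind are co-occurrence counts over the coded papers; a minimal sketch of how such a matrix can be computed (the column names and example rows are illustrative assumptions, not the authors' actual coding data):

```python
import pandas as pd

# Hypothetical coding table: one row per paper with its assigned codes.
coded = pd.DataFrame({
    "topic":  ["tool (own)", "program quality", "tool (own)", "perceptions"],
    "method": ["evaluation", "descriptive", "evaluation", "interview"],
})

# Cross-tabulate topics against methods to obtain the counts behind a figure like Figure 3.
matrix = pd.crosstab(coded["topic"], coded["method"])
print(matrix)
```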
#### 4.2.1. Curriculum
We have found only eight papers that revolve around integrating code quality into the (curriculum). As an example, Kirk et al. (2012) study the prevalence of code quality in introductory programming courses. Technapalokul and Tilevich (2018) advocate for the importance of integrating code quality in the teaching of programming in block-based environments, even though this code is usually not intended for practical use. Haendler and Neumann (2018) present a framework for the assessment and training of refactoring.
#### 4.2.2. Instruction
Overall, we observe that code quality in education revolves for a large part around digital (tools). Looking at the code quality aspects, we notice that those tools focus on several of them, such as identifying code smells and refactoring code, often using static analysis techniques. AutoStyle (Srivastava et al., 2017) gives data-driven feedback on how to improve the style of correct programs step by step. Other recent tools are CompareCFG (Srivastava et al., 2017), and a Java critiquer for antipatterns (Krishnan et al., 2017). RefacTutor (Krishnan et al., 2017) is a tutoring system to learn refactoring in Java. Keuning et al. (2017) present a tutoring system in which students practice with improving functionally correct code, with the help of automated feedback and hints.
In some tools a 'gamification' approach was taken. Zsigmond et al. (2018) present a system in which badges are awarded to students who adhere to good coding standards, using SonarQube for static analysis. Examples of badges are 'doc ace', 'complexity guru', and 'stylish code'. PiratePlunder (Tiland, 2018) is a game in which children learn to investigate and fix code smells in a Scratch environment.
We have also found papers on tools that focus on very specific aspects of code quality. For example, the Earthworm tool gives automated suggestions of decomposition of Python code (Srivastava et al., 2017). Foobaz gives feedback on variable names (Foobaz et al., 2017). Charitisis et al. (2017) present a system based on machine learning techniques that can detect poor function names and suggest improvements.
Examples of external (professional) tools that are used in education are PMD (Krishnan et al., 2017) and CppCheck (CipCheck, 2018). These tools can also be integrated in Continuous Integration practices, such as SonarQube (Srivastava et al., 2017). We also noticed that self-made tools often make use of external tools for specific tasks. For example, PyTA is a wrapper for pylint (Krishnan et al., 2017), adding custom code checks and improved error messages targeted at students. Hyperstyle (Krishnan et al., 2017) uses static analysers for different languages (PMD for Java, Detekt for Kotlin, and linters for JavaScript and Python), from which checks suitable for students are selected, categorized, and presented together with a grade.
Tools can be used to analyse large collections of student code (papers focussing on analysing program quality), and to support students in learning (papers focussing on a tool for instruction), and in some papers tools have a dual role: the authors conclude that student programs contain many flaws (identified by some tool), and therefore that tool could be used as an instructional aid (CipCheck, 2018). However, it remains unclear whether these tools are suitable for educating novices, which is addressed by several papers. NURURW and Higgins (Krishnan et al., 2017) analyse differences between tool assessment and human assessment, and investigate the usefulness of such tools.
We have found only six papers that discuss (course materials). Refactory (Krishnan et al., 2017) is a non-digital card game to learn the principles of refactoring by resolving code smells. Other work analyses the readability of example programs in programming books (Krishnan et al., 2017).
Several papers discuss (programming assignments), for example by presenting coding guidelines (Krishnan et al., 2017) and code readability best practices for students (Tiland, 2018). Stegeman et al. have developed a rubric for assessing the code quality of student code (Krishnan et al., 2017), which we described in more detail in section 2.1. Nandigam et al. (Nandigam et al., 2017) describe assignments in which students are instructed to explore and improve open source projects by measuring quality and applying refactorings where needed. Tempero and Tu (Tempero and Tu, 2018) use code review assignments to assess how students understand the concept of 'maintainability'.
Nine papers discuss teaching about the (programming process).
Stoecklin et al. (Stoecklin et al., 2018) describe an approach for teaching refactoring through a series of incremental lessons. Abid et al. (Abid et al., 2018) present an experiment about the timing of refactoring in a student project. Lu et al. (Lu et al., 2018) introduce 'Continuous Inspection' of code quality in an
\begin{table}
\begin{tabular}{l c l r} \hline \hline
**Name** & **C/J** & **Type** & **\#** \\ \hline
Special Interest Group on CS Ed. (SIGCSE) & C & Computing Education & 21 \\
Innovation and Technology in CS Ed. (ITiCSE) & C & Computing Education & 11 \\
Frontiers in Education (FIE) & C & Engineering Education & 11 \\
Koli Calling & C & Computing Education & 7 \\
Australasian Computing Education (ACE) & C & Computing Education & 7 \\
Computer-supported education (CSEDU) & C & Educational Technology & 5 \\
Computing Sciences in Colleges & J & Computing Education & 4 \\
Visual Languages and Human-Centric Computing & C & Human-Centric Comp. & 4 \\
IEEE Blocks \& Beyond & C & Block programming & 3 \\
Systems and Software & J & Software Engineering & 3 \\
IEEE Access & J & General computing & 3 \\
Learning @ Scale & C & Educational Technology & 3 \\ \hline \hline \end{tabular}
\end{table}
Table 2. Publication venues (conference/journal) with at least 3 publications.
Figure 2. Number of publications per year.
educational setting. Passier et al. (Passier et al., 2015) describe how students can build an elegant JavaScript application step by step.
Papers labelled with (teaching method) address multiple instructional elements. Izu et al. (Izu et al., 2016) present a teaching resource, consisting of a textual explanation, a set of refactoring rules, and exercises, to help students with identifying code smells in conditional statements, and refactoring this code. Crespo et al. (Crespo et al., 2017) focus on the concept of 'technical debt', and compare two different teaching methods: a penalisation (based on SonarQube metrics) and a rewarding strategy (with the metrics shown in a leaderboard).
#### 4.2.3. Learning outcome
Program quality is a major category in papers, studying the programs that students write with respect to quality characteristics. These programs are mostly analysed automatically by a tool to identify code smells and calculate quality metrics. Examples of such large-scale studies, often analysing thousands of programs, are a study of PMD rule violations in Java programs (Pasier et al., 2016), smells in Scratch programs (Borda et al., 2017), and indicators of semantic style differences (Passier et al., 2016). Cristea et al. (Crespo et al., 2017) combine Formal Concept Analysis with Pylint, to detect issues with object-oriented design and too complex code. Groeneveld et al. (Groeneveld et al., 2017) analyse the correlation between code quality and creativity, finding preliminary evidence for a larger number of issues in projects with high creativity. Grotov et al. (Grotov et al., 2017) compare the coding style and complexity of Python programs written as regular scripts to code written in Jupyter computational notebooks. Ma and Tilevich (Ma and Tilevich, 2017) describe a set of anti-patterns that may arise when students move from Java to Python, but still write code in the much more verbose Java-style.
The majority of program quality studies employ quantitative, descriptive methods, but others take a qualitative approach. Some studies administer a survey to let teachers or students assess example code, for example to collect suggestions from expert programmers (Crespo et al., 2017). Andrade and Brunet (Andrade and Brunet, 2017) studied whether students were able to give useful feedback on the quality of other students' code.
A few papers study a specific phenomenon related to code quality, such as the 'unencapsulated collection' design smell (Passier et al., 2016). Studies that assess student code quality by hand are more rare. Some papers compare code assessment by experts with code analysed by tools.
Thirteen papers focus on student (behaviours) with regards to code quality. Gilson et al. (Gilson et al., 2017) observed how student Scrum teams deal with quality issues during a one-year project. Sripada and Reddy (Sripada and Reddy, 2017) also study student activities related to quality in multiple iterations of a development process. Senger et al. (Senger et al., 2017) replicate an earlier study with more and larger student programs, in which they run the static analyser FindBugs and study the correlation between the warnings found and correctness or struggling.
Eleven studies are on (perceptions) of teachers and students, of which five mention the term'readability'. Kirk et al. (Kirk et al., 2017) study high-school teachers' ideas and needs regarding code style and quality through interviews. Wiese et al. (Wiese et al., 2017) investigate how beginner programmers assess the style of example programs, which they later replicate with a different student population (Wiese et al., 2017). Note that this is one of two replication studies we identified in our set of papers. An ITICSE working group studied the differences in perceptions of code quality between developers, teachers, and students (Wiese et al., 2017). Fleury (Crespo et al., 2017) conducted interviews with students asking them to evaluate and compare the style of several Java programs.
All three studies about (concept understanding) use a (quasi-) experimental approach. Hermans and Aivaloglou (Hermans and Aivaloglou, 2017) study the effect of smells in Scratch code when students do comprehension tasks.
A few methods were not in our main list, such as 'educational design research' for iteratively designing a code quality rubric (Sripada and Tilevich, 2017). Recently, we observed the use of machine learning techniques (Wiese et al., 2017).
### Languages (RQ4)
Figure 5 shows a treemap with the languages that are targeted in the publications. Java and Python, popular general-purpose languages often used in teaching, are the most prevalent text-based languages in this study. Less expected might be the substantial number of papers dealing with programming in a block-based editor, such as Scratch (Borda et al., 2017; Kirk et al., 2017) and Snap! (Sripada and Tilevich, 2017). These papers investigate code smells or present learning tools. Other paradigms, such as functional programming, hardly appear.
Figure 4. Correlation matrix of topics and code quality aspects (a paper can have more than one aspect).
Figure 3. Correlation matrix of topics and methods.
### Trends (RQ5)
Figure 6 shows the trends with respect to paper topics. We focus on the last twenty years, because we only have a small number of papers before that. We notice that much of the program quality research appeared in the last decade. The number of papers on tools has grown significantly, and the use of external tools is mostly a development from the last ten years. Studying perceptions is very much a recent development.
### Related fields (RQ6)
As discussed before, the term _code quality_ has no crystal clear definition. During our search for relevant publications, we regularly came across papers with a topic on the edges of our scope, as defined in section 3.1. In this section we list these topics, which are also relevant for learning and teaching about code quality, and refer to literature reviews on these topics, if available.
_Software design education._ While much of the research found in this study could be related to designing software, and in our definition we mention aspects such as decomposition and encapsulation, our mapping does not cover the broad field of teaching software design upfront. Instead, we focus on assessing the characteristics of the code after it has been written.
_Design patterns education._ We included some papers dealing with design patterns, because they were used as a means to refactor existing code. There are several other papers that focus on teaching and learning of design patterns in general.
_Object-orientated programming._ Abstraction, decomposition, and encapsulation are prominent topics in learning object-orientation, and contribute greatly to the quality of design and code.
_Interventions leading to improved code quality._ Some interventions may lead to improved code quality, but are not specifically about code quality. Examples are pair programming, test-driven development (Ross and Sethna, 2017), and peer review (Ross and Sethna, 2017).
_Computational thinking._ Abstraction, decomposition, and modularization are important aspects of computational thinking (Ross and Sethna, 2017).
_Code similarity and plagiarism._ Rather the reverse of assessing the various ways a program can be written, several studies focus on code similarity, code clustering, and detecting plagiarism (Ross and Sethna, 2017).
_Program comprehension._ This topic deals with the cognitive processes that programmers apply when trying to understand programs (Ross and Sethna, 2017).
_Automated assessment._ Many systems for automated assessment of student programs incorporate some kind of style feedback (Ross and Sethna, 2017).
### Threats to validity
It is non-trivial to categorise a paper by its main method and topic. By only assigning one label, we might miss some additional topics and methods. Aspects were identified by looking for specific terms in the title and abstract; this simplified method might not correctly represent an article's main focus.
Although we performed an extensive database search followed by snowballing, we might have discarded relevant work based on an unclear title or abstract. Also, we have only included papers with code quality topics as their main focus. Because code quality can be integrated in software engineering courses, and is an element of overall software quality, we might miss some relevant research.
## 5. Conclusion
One of the earliest papers identified in this study on teaching programming style concludes with "Perhaps the more recent structured languages such as PASCAL and C will make some of this emphasis less critical" (Ross and Sethna, 2017). Although tools and new languages simplify implementing good coding style, the author has not foreseen the ongoing issue with writing high-quality code.
We have conducted a systematic mapping study of code quality in education, which is the first overarching study on this topic. We identified and categorised 195 papers, studying paper characteristics, topics, domain-specific aspects, methods, and trends. Code quality is an upcoming topic with an increasing number of studies. Papers are published in a wide variety of venues on various topics. Its main focus has been on developing and evaluating tools for feedback on code smells, and suggestions for improvements and refactorings. Professional quality tools are increasingly being used in (and adapted for) education. Another major area is quality analysis of student code. We also observe that a growing number of studies target block-based programming environments, emphasising the need to start early with this topic. We have given several examples of the diversity in research, and shown related fields.
Because the goal of a mapping study is to give a broad overview, a possible direction for future work is to conduct a more in-depth literature study of a specific topic or aspect identified in this study. We would also encourage researchers to perform studies on the topics that have received little attention so far, such as integrating code quality into the computing curricula, developing and evaluating course materials, and studying student perceptions and behaviours.
Figure 5. Treemap of targeted programming languages. Languages with less than 5 papers are omitted.
Figure 6. Paper count per topic for the last 20 years, omitting topics with less than 8 papers.
|
2304.05233
|
Mask-conditioned latent diffusion for generating gastrointestinal polyp
images
|
In order to take advantage of AI solutions in endoscopy diagnostics, we must
overcome the issue of limited annotations. These limitations are caused by the
high privacy concerns in the medical field and the requirement of getting aid
from experts for the time-consuming and costly medical data annotation process.
In computer vision, image synthesis has made a significant contribution in
recent years as a result of the progress of generative adversarial networks
(GANs) and diffusion probabilistic models (DPM). Novel DPMs have outperformed
GANs in text, image, and video generation tasks. Therefore, this study proposes
a conditional DPM framework to generate synthetic GI polyp images conditioned
on given generated segmentation masks. Our experimental results show that our
system can generate an unlimited number of high-fidelity synthetic polyp images
with the corresponding ground truth masks of polyps. To test the usefulness of
the generated data, we trained binary image segmentation models to study the
effect of using synthetic data. Results show that the best micro-imagewise IOU
of 0.7751 was achieved from DeepLabv3+ when the training data consists of both
real data and synthetic data. However, the results reflect that achieving good
segmentation performance with synthetic data heavily depends on model
architectures.
|
Roman Macháček, Leila Mozaffari, Zahra Sepasdar, Sravanthi Parasa, Pål Halvorsen, Michael A. Riegler, Vajira Thambawita
|
2023-04-11T14:11:17Z
|
http://arxiv.org/abs/2304.05233v1
|
# Mask-conditioned latent diffusion for generating gastrointestinal polyp images
###### Abstract.
In order to take advantage of artificial intelligence (AI) solutions in endoscopy diagnostics, we must overcome the issue of limited annotations. These limitations are caused by the high privacy concerns in the medical field and the requirement of getting aid from experts for the time-consuming and costly medical data annotation process. In computer vision, image synthesis has made a significant contribution in recent years, as a result of the progress of generative adversarial networks (GANs) and diffusion probabilistic models (DPMs). Novel DPMs have outperformed GANs in text, image, and video generation tasks. Therefore, this study proposes a conditional DPM framework to generate synthetic gastrointestinal (GI) polyp images conditioned on given generated segmentation masks. Our experimental results show that our system can generate an unlimited number of high-fidelity synthetic polyp images with the corresponding ground truth masks of polyps. To test the usefulness of the generated data, we trained binary image segmentation models to study the effect of using synthetic data. Results show that the best micro-imagewise intersection over union (IOU) of \(0.7751\) was achieved from DeepLabv3+ when the training data consists of both real data and synthetic data. However, the results reflect that achieving good segmentation performance with synthetic data heavily depends on model architectures.
diffusion model, polyp generative model, polyp segmentation, generating synthetic data
also typically small and mostly limited to polyps (Krizhevsky et al., 2017). To overcome this issue, one solution is to expand training datasets by generating synthetic data (Krizhevsky et al., 2014; Krizhevsky et al., 2014).
Endoscopy diagnostics are being enhanced by AI solutions. Especially, image synthesis has made a significant contribution to overcoming the issue of limited datasets (Krizhevsky et al., 2014). It is now common to use generative adversarial networks (GANs) to generate synthetic images because GANs produce realistic images and achieve impressive results in a wide range of applications (Beng et al., 2015; Chen et al., 2016). Thus, GANs are powerful generative models; however, they suffer from convergence instability.
To overcome the convergence issue in GANs, in recent years, diffusion models (Krizhevsky et al., 2016) have gained attention as a potential method for their ability to synthesize natural images. In this study, we introduce a framework consisting of two different diffusion models to create synthetic GI-tract images and corresponding masks. Our contributions are listed as follows:
* We introduce a fully synthetic polyp generation system.
* Our system is able to generate realistic-looking synthetic polyp masks using an improved diffusion model.
* Based on the generated masks, we are able to generate high-fidelity synthetic polyp images conditioned on pre-generated synthetic polyp masks using a conditional latent diffusion model.
* We provide a comprehensive evaluation of using synthetic polyp and mask data to train polyp segmentation models and overall results.
The source code of all the experiments is available at [https://github.com/simulant-host/conditional-polyp-diffusion](https://github.com/simulant-host/conditional-polyp-diffusion) and the pre-generated synthetic masks and the corresponding conditional synthetic polyp images are available at [https://huggingface.co/datasets/deepsynthbody/conditional-polyp-diffusion](https://huggingface.co/datasets/deepsynthbody/conditional-polyp-diffusion).
## 2. Related Work
There are many GI image analysis datasets available for machine learning tasks. Some of the commonly used datasets for the human GI tract are: ETIS-Larib (Srivastava et al., 2015), CVC-ClinicDB (Chen et al., 2016), ASU-Mayo Clinic Polyp database (Wang et al., 2017), Kvasir (Kvasir et al., 2017), Kvasir-SEG (Kvasir et al., 2018) and HyperKvasir (Kvasir et al., 2018). Only a few datasets contain manually annotated segmentation masks for polyps. However, these real-world datasets (not limited to GI-tract data) have some limitations. The limitations include:
* Size: medical image datasets, including those for polyp detection and segmentation, are often smaller in size compared to other image datasets, such as ImageNet (Krizhevsky et al., 2014), Microsoft COCO (Krizhevsky et al., 2017) which can limit their ability to train complex machine learning models.
* Annotation quality: the accuracy and consistency of the annotations of the dataset can impact the performance of machine learning algorithms. Annotations depend on the annotator, and inter-rater variability is normally high.
* Diversity: the diversity of the images in the dataset is important for the generalization of machine learning algorithms. If the dataset is limited to a narrow range of images, the algorithm may not perform well on new, unseen images.
* Accessibility: legal and privacy constraints can limit the accessibility of medical image datasets, making it difficult to obtain large and diverse datasets for machine learning tasks.
These limitations highlight the need for ongoing development and improvement of medical image datasets to support the advancement of machine learning in medical imaging. To overcome these limitations of real-world datasets, synthetic datasets (Fagereno et al., 2016; Krizhevsky et al., 2014; Krizhevsky et al., 2014; Krizhevsky et al., 2017) have been increasingly used in medical image analysis. For instance, to generate synthetic polyps, a GAN framework has been proposed to generate a polyp mask and then synthesize the generated polyp with the real polyp image without the use of additional datasets and processes (Kvasir et al., 2017). There has also been research on augmenting colonoscopy images with polyps by using synthetic samples (Beng et al., 2015). Fagereno et al. (2016) present a solution to overcome the challenge of a lack of annotated data when building CAD systems for detecting polyps in the GI-tract. The authors propose a pipeline called PolypConnect, which can convert non-polyp images into polyp images to increase the size of training datasets for machine learning models. The results show a \(5.1\%\) improvement in mean intersection over union (IOU) when using synthetic data in addition to real data for training. Dhariwal et al. (Dhariwal et al., 2017) compare the performance of diffusion models and GANs for image synthesis tasks. As a result, the authors found that diffusion models outperformed GANs in terms of image quality and stability of the generated images. The results of the paper indicate that diffusion models are a promising alternative to GANs in the field of image synthesis. This work provides valuable insights into the strengths and limitations of both diffusion models and GANs.
## 3. Methodology
In this section, we first describe the improved diffusion model which is used to generate realistic synthetic polyp mask images (the blue box of Figure 1). Then, we present the latent diffusion model which is used for generating synthetic polyp images conditioned on the input masks produced by our aforementioned mask generator (the green box of Figure 1). We evaluate the quality of the generated synthetic data and quantify the similarity between generated and real data, as represented in the bottom box of Figure 1. Finally, we present the methods used to check the quality of the synthetic data using image segmentation models.
### Improved diffusion model
In our pipeline, we use an _improved diffusion model_ (Kvasir et al., 2017) to generate synthetic polyp masks which look realistic and capture the distribution of the masks of the Kvasir-SEG dataset. The _improved diffusion model_ is a type of generative model that uses a gradual, multi-step process to match a data distribution and generate synthetic images. In the context of generating synthetic mask images for the GI-tract, improved diffusion models can be used to generate synthetic mask images that closely resemble real images. Therefore, these models overcome the issue of limited annotated data (Kvasir et al., 2017; Kvasir et al., 2018).
To achieve our goal, the first step is to obtain a training set for real mask images of the GI tract indicating the location of polyps. Then, the mask dataset is used to train the improved diffusion model to generate synthetic polyp images that closely resemble the real images. The improved diffusion model generates synthetic mask
images by first adding noise to a randomly selected mask image from the training set. This noise would then be gradually reversed through multiple steps until a synthetic mask image is generated.
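For illustration, the add-noise / reverse-noise process described above can be sketched with a generic DDPM formulation. The snippet below is our own simplified illustration and not the authors' implementation (which builds on the improved diffusion codebase); the linear noise schedule, the tensor shapes, and the `eps_model` noise predictor are assumptions made only for this example.

```python
import torch

# Linear noise schedule and closed-form forward (noising) process.
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

def forward_noise(x0, t):
    """Sample x_t ~ q(x_t | x_0): add Gaussian noise at step t to a clean mask x0."""
    eps = torch.randn_like(x0)
    ab = alpha_bars[t].view(-1, 1, 1, 1)
    return ab.sqrt() * x0 + (1.0 - ab).sqrt() * eps, eps

@torch.no_grad()
def reverse_step(eps_model, xt, t):
    """One DDPM reverse step x_t -> x_{t-1}, given a trained noise predictor."""
    beta, alpha, ab = betas[t], alphas[t], alpha_bars[t]
    eps_hat = eps_model(xt, torch.full((xt.shape[0],), t))
    mean = (xt - beta / (1.0 - ab).sqrt() * eps_hat) / alpha.sqrt()
    return mean if t == 0 else mean + beta.sqrt() * torch.randn_like(xt)

# Toy run with an untrained stand-in predictor (illustration only): sampling a
# synthetic mask amounts to starting from pure noise and denoising step by step.
eps_model = lambda x, step: torch.zeros_like(x)
x = torch.randn(1, 1, 64, 64)
for step in reversed(range(T)):
    x = reverse_step(eps_model, x, step)
```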
The advantage of using _improved diffusion models_ to generate synthetic images is that we can overcome the issue of limited annotated data and train machine learning models more effectively. This can lead to improvements in the accuracy and efficiency of CAD systems for detecting polyps in the GI-tract (Krause et al., 2017).
### Latent diffusion model
The Latent Diffusion model (Krizhevsky et al., 2014), developed by CompVis and trained on the LAION-400M dataset (Krizhevsky et al., 2014), operates through a series of denoising autoencoders and diffusion models. This model has been utilized to generate images based on text prompts, and has shown exceptional results in tasks related to image inpainting (Dosovosov et al., 2015) and various other applications, surpassing the current state of the art (Krizhevsky et al., 2014).
Latent diffusion models are a suitable choice for generating synthetic images of the GI tract for several reasons. Firstly, they possess the ability to model intricate and non-linear patterns in the data, crucial for producing convincing images of the GI tract. Secondly, they are capable of generating a large diversity of high-quality synthetic images, which enhances the generalizability of machine learning models. Lastly, they can be trained with a limited amount of real data points, which is important in medical imaging where annotated data is often scarce.
### Mask similarity
To assess the quality of the images produced by generative models, the Frechet inception distance (FID) (Friedberg et al., 2015) metric is typically used, which compares the distribution of the generated images to that of the real images. Because the improved diffusion model generates binary masks of polyps, we can also introduce a similarity measure (SIM) metric that is analogous to accuracy. Consider a real image \(r\) together with a generated image \(g\) of the same size; then the similarity \(sim\) is defined as the number of pixels that are the same for both images divided by the total area of the image:
\[sim(r,g)=\frac{\#(r==g)}{width*height}\]
To measure the similarity \(SIM\) of the set of generated images \(G\) to our set of real training images \(R\), we simply take the average over the closest pairs (highest similarities). If \(g\) is a generated image and \(g^{*}\in R\) is its closest real image, we calculate
\[SIM(R,G)=\frac{1}{|G|}\sum_{g\in G}sim(g^{*},g)\]
The idea here is that even if the generated images are highly similar to some of the training images (same size, position), they should differ in other aspects, such as rotation.
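As a concrete illustration, the \(sim\) and \(SIM\) measures above can be computed for binary masks as follows. This is a minimal sketch of our own, assuming the masks are given as equally sized binary NumPy arrays and that the average runs over the generated set, each generated mask being paired with its most similar real mask.

```python
import numpy as np

def sim(r, g):
    """Fraction of pixels on which the real mask r and the generated mask g agree."""
    height, width = r.shape
    return float(np.sum(r == g)) / (width * height)

def SIM(real_masks, gen_masks):
    """Average, over the generated set, of the similarity to the closest real mask."""
    best = [max(sim(r, g) for r in real_masks) for g in gen_masks]
    return float(np.mean(best))

# Toy usage with random binary masks (illustration only).
rng = np.random.default_rng(0)
real = [(rng.random((64, 64)) > 0.5).astype(np.uint8) for _ in range(5)]
gen = [(rng.random((64, 64)) > 0.5).astype(np.uint8) for _ in range(3)]
print(sim(real[0], gen[0]), SIM(real, gen))
```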
### Segmentation models
We have used three different well-known image segmentation models, namely UNet++ (Wang et al., 2017), feature pyramid network (FPN) (Krizhevsky et al., 2014), and DeepLabv3+ (Chen et al., 2017) for evaluating the effect of using synthetic data for training polyp segmentation tasks. Initially, we trained these three models using three different approaches, i.e., we trained the system using i) \(700\) real polyp images; ii) \(1000\) synthetic polyp images; and iii) a combination of \(700\) real and \(1000\) synthetic polyp images. To further analyze the effect of synthetic data, we trained these three models with another set of real and synthetic data combinations. In these combinations, we fixed the number of real images to \(100\) samples, and we increased the number of synthetic samples from \(0\) to \(1000\) sequentially in steps of \(100\). The main objective of this experiment is to identify the effect of the number of synthetic samples included in the training data. However, we limited this experiment to using only \(100\) real images because of time limitations, but in the future we will test with different numbers of real images from \(200\) to \(700\) to find the optimal combination to get better performance.
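A sketch of how such real/synthetic mixtures could be assembled with standard PyTorch utilities is shown below. The dataset objects `real_ds` and `synthetic_ds` are hypothetical placeholders for the Kvasir-SEG pairs and the diffusion-generated pairs, and the exact sampling strategy used in our experiments may differ.

```python
from torch.utils.data import ConcatDataset, DataLoader, Subset

def make_mix(real_ds, synthetic_ds, n_real, n_synth, batch_size=8):
    """Build a training loader containing a fixed number of real and synthetic pairs."""
    mix = ConcatDataset([
        Subset(real_ds, range(min(n_real, len(real_ds)))),
        Subset(synthetic_ds, range(min(n_synth, len(synthetic_ds)))),
    ])
    return DataLoader(mix, batch_size=batch_size, shuffle=True)

# Example (with placeholder datasets): the step-wise experiment with 100 real
# images and 0, 100, ..., 1000 synthetic images would be
# loaders = [make_mix(real_ds, synthetic_ds, 100, k) for k in range(0, 1001, 100)]
```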
We tested these models with \(300\) real images and masks (from the segmentation data of the HyperKvasir dataset (Chen et al., 2017)) which were not used to train either the diffusion model or the segmentation models. Then, we measured micro and micro-imagewise IOU, F1, Accuracy, and Precision from the test dataset for all the segmentation models. Micro values were calculated by summing true positive (TP), false positive (FP), false negative (FN), and true negative (TN) pixels over all images and all classes and then computing scores. In contrast, the micro-imagewise metrics were calculated by summing TP, FP, FN, and TN pixels for each image and then computing scores for
Figure 1. The whole pipeline for generating synthetic polyps and masks. The blue box represents the diffusion model trained to generate realistic synthetic polyp masks. The green box represents the conditional latent diffusion model which is used to generate synthetic polyps conditioned on input masks. The bottom box represents the evaluation metrics.
each image. Finally, average scores over the dataset were calculated. In the micro-imagewise calculations, all images contributed equally to the final score. However, the second method takes into account class imbalance for each image.
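The difference between the two aggregation schemes can be made concrete with a small NumPy sketch (our own illustration; the experiments themselves rely on the Pytorch segmentation library, and the empty-mask convention below is an assumption for the example).

```python
import numpy as np

def confusion(pred, target):
    """True positives, false positives, and false negatives for one binary mask pair."""
    tp = int(np.sum((pred == 1) & (target == 1)))
    fp = int(np.sum((pred == 1) & (target == 0)))
    fn = int(np.sum((pred == 0) & (target == 1)))
    return tp, fp, fn

def micro_iou(preds, targets):
    """Sum TP/FP/FN over all images first, then compute a single IOU."""
    tp = fp = fn = 0
    for p, t in zip(preds, targets):
        a, b, c = confusion(p, t)
        tp, fp, fn = tp + a, fp + b, fn + c
    return tp / (tp + fp + fn)

def micro_imagewise_iou(preds, targets):
    """Compute IOU per image and average, so every image contributes equally."""
    scores = []
    for p, t in zip(preds, targets):
        tp, fp, fn = confusion(p, t)
        denom = tp + fp + fn
        scores.append(1.0 if denom == 0 else tp / denom)
    return float(np.mean(scores))

# Small example: a half-correct prediction on a large mask and a perfect prediction
# on a small mask give different scores under the two aggregation schemes.
t1 = np.ones((4, 4), dtype=int); p1 = np.zeros_like(t1); p1[:2, :] = 1   # per-image IOU 0.5
t2 = np.ones((2, 2), dtype=int); p2 = np.ones_like(t2)                   # per-image IOU 1.0
print(micro_iou([p1, p2], [t1, t2]))            # 12 / 20 = 0.6
print(micro_imagewise_iou([p1, p2], [t1, t2]))  # (0.5 + 1.0) / 2 = 0.75
```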
## 4. Results and Discussion
In this section, we discuss the experiment setup and the results collected from the generative models and the segmentation models. A server with Nvidia A100 80\(GB\) graphics processing units (GPUs) and an AMD EPYC 7763 64-core processor with \(2TB\) of RAM was used for all the experiments of this study. Additionally, we used Pytorch (Pytorch, 2017), the Pytorch-lightning libraries, and the Pytorch segmentation library (Krizhevsky et al., 2014) as development frameworks.
### Diffusion experiments and results
For the mask generator, that is, the improved diffusion model, we used FID and SIM values to quantify performance and select an appropriate model. We generated \(1000\) masks for each of our saved models and compared them with \(1000\) real training masks in Table 1.
We selected the model from iteration \(200,000\) based on the results from Table 1. The reason is that this model achieves the lowest FID value together with high SIM values, indicating diverse and high-quality masks. We also inspected the generated masks visually to confirm this conclusion.
Examples of generated masks in comparison with real masks can be seen in Figure 2. We can see masks with different shapes and numbers of polyps, indicating the capability to generate diverse synthetic masks. Further discussion with a medical professional would be required in order to determine whether the masks are correct.
Interestingly, a high \(SIM\) score does not necessarily imply that the model is producing identical masks, as can be seen in Figure 3. For instance, masks may be located in similar positions but have different, smaller shapes, therefore achieving higher similarities.
We used the masks generated by the selected model as conditions for our latent polyp diffusion model and produced \(1000\) images, which we used for further evaluation in Table 2.
We can see from Table 2 that the model which achieved the lowest FID score is at \(Epoch=135\). We inspected the generated images, similarly to Figure 4. It can be seen that the quality of the generated images deteriorates at later stages of training; the reason may be overfitting. This may lead to problems when generating different samples with the same condition, as they would end up being more similar to each other.
Therefore, we selected the model from the earlier stages that achieved the lowest FID. We conditioned the model on one mask and generated multiple samples to see if the model generalizes well; the results of this experiment can be seen in Figure 5.
### Segmentation experiments and results
We used a learning rate of \(0.0001\) with the Adam optimizer (Kingma and Ba, 2014) to train the three segmentation models, UNet++, FPN and DeepLabv3+. DiceLoss (Zhu et al., 2015) was used in the training process as the loss function to update the weights. A _resnet34_ model was used as the encoder network for all three models (for more details of these encoder networks, please refer to the documentation (Krizhevsky et al., 2014)). Micro metrics and micro-imagewise metrics (as discussed in the Pytorch segmentation library) were calculated from the best checkpoints and the test dataset after training \(50\) epochs for all the models. The calculated micro metrics are tabulated in Table 3, and micro-imagewise values are tabulated in Table 4.
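A sketch of this training setup using the Pytorch segmentation library is shown below; the exact constructor arguments, encoder weights, and training-loop details are our assumptions and may differ from the configuration actually used in the experiments.

```python
import torch
import segmentation_models_pytorch as smp

# Three binary segmentation models with a resnet34 encoder, as described above.
models = {
    "UNet++": smp.UnetPlusPlus(encoder_name="resnet34", in_channels=3, classes=1),
    "FPN": smp.FPN(encoder_name="resnet34", in_channels=3, classes=1),
    "DeepLabV3+": smp.DeepLabV3Plus(encoder_name="resnet34", in_channels=3, classes=1),
}
loss_fn = smp.losses.DiceLoss(mode="binary")

def train(model, loader, epochs=50, lr=1e-4, device="cuda"):
    """Plain training loop: Adam optimizer, Dice loss on the model logits."""
    model = model.to(device).train()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for images, masks in loader:          # images: (N,3,H,W), masks: (N,1,H,W)
            images, masks = images.to(device), masks.to(device)
            opt.zero_grad()
            loss = loss_fn(model(images), masks)
            loss.backward()
            opt.step()
    return model
```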
According to the results in Tables 3 and 4, it is clear that adding synthetic data can improve the results of segmentation models. However, this is not always true because some models like FPN and UNet++ show the best IOU, F1, and accuracy when the training data consists of only the real data. In contrast, DeepLabv3+ shows the best performance when some synthetic data is included in the training data. Overall, the best micro-imagewise IOU of \(0.7751\) is achieved from DeepLabv3+ when the training data contains the maximum number of images from both the real data and the synthetic data. Therefore, this is clear evidence that synthetic data has a direct influence on the final performance of segmentation models. Moreover, we noticed that precision is always better when synthetic data is included in the training data than when using only the real data. This implies that the synthetic samples can improve the true-positive predictions, which is particularly important in the medical domain. More visual comparisons are presented in Figure 6. This figure compares the model predictions from the three segmentation models between baseline performance and improved performance (marked using \({}^{*}\) in Figure 6) using synthetic data. The improved versions of the models were selected using the metrics of Tables 3 and 4.
Another interesting finding of these segmentation experiments is that we get the best values of precision, accuracy, F1 and IOU when we use a smaller number of real images and synthetic images. For example, UNet++ and FPN show the best precision values (micro and micro-imagewise) when the training data consists of \(100\) real samples and \(200\) synthetic samples. This implies that there is a direct correlation between the synthetic data, the models, and the number of parameters. Therefore, researchers should not conclude a performance gain or degradation from using synthetic data to train segmentation models just by evaluating a single model.
of segmentation models, while these improvements are correlated with model architectures. In this regard, we can clearly conclude that synthetic data help to improve the performance of segmentation models. However, deeper evaluations should be performed with
Figure 4. Generated synthetic polyps conditioned on the same mask illustrating changes in quality during training stages.
Figure 3. Examples of comparison of generated masks \(g\) to real masks \(r\) based on similarity measure \(sim(r,g)\).
Figure 2. Examples of real masks in the first row, generated masks on the second. Note the variability of shapes and amount of polyps in the generated masks.
multiple model architectures to see the real gain of using synthetic data.
In future studies, we will perform more segmentation experiments to get a complete result set for Tables 3 and 4, for example, increasing the synthetic training data gradually with the full real dataset. Moreover, we will generate more synthetic data using our model to train the segmentation models with large synthetic datasets to evaluate the effect of using synthetic data in more depth. Generating multiple images conditioned on the same input to train the segmentation models is another limitation of the presented segmentation experiments. Furthermore, the quality of generated images can be improved using the style-transfer technique [11] as used in the SinGAN-Seg study [41]. Cross-dataset evaluations should be performed to measure the effect of using synthetic data to train segmentation models to improve robustness and generalizability.
###### Acknowledgements.
The research presented in this paper has benefited from the Experimental Infrastructure for Exploration of Exascale Computing (eX3), which is financially supported by the Research Council of Norway under contract 270053.
|
2306.13228
|
Semicycles and correlated asymptotics of oscillatory solutions to
second-order delay differential equations
|
We obtain several new comparison results on the distance between zeros and
local extrema of solutions for the second order delay differential equation
\begin{equation*} x^{\prime \prime }(t)+p(t)x(t-\tau (t))=0,~~t\geq s\text{ }\
\end{equation*} where $\tau :\mathbb{R}\rightarrow \lbrack 0,+\infty )$,
$p:\mathbb{R}\rightarrow \mathbb{R}$ are Lebesgue measurable and uniformly
essentially bounded, including the case of a sign-changing coefficient. We are
thus able to calculate upper bounds on the semicycle length, which guarantee
that an oscillatory solution is bounded or even tends to zero. Using the
estimates of the distance between zeros and extrema, we investigate the
classification of solutions in the case $p(t)\leq 0,t\in \mathbb{R}.$
|
Elena Braverman, Alexander Domoshnitsky, John Ioannis Stavroulakis
|
2023-06-22T22:21:17Z
|
http://arxiv.org/abs/2306.13228v1
|
Semicycles and correlated asymptotics of oscillatory solutions to second-order delay differential equations
###### Abstract
We obtain several new comparison results on the distance between zeros and local extrema of solutions for the second order delay differential equation
\[x^{\prime\prime}(t)+p(t)x(t-\tau(t))=0,\ t\geq s\]
where \(\tau:\mathbb{R}\to[0,+\infty)\), \(p:\mathbb{R}\to\mathbb{R}\) are Lebesgue measurable and uniformly essentially bounded, including the case of a sign-changing coefficient. We are thus able to calculate upper bounds on the semicycle length, which guarantee that an oscillatory solution is bounded or even tends to zero. Using the estimates of the distance between zeros and extrema, we investigate the classification of solutions in the case \(p(t)\leq 0,t\in\mathbb{R}\).
keywords: Semicycle, oscillation, delay, comparison theorems, second-order delay equations
**AMS subject classification:** 34K25, 34K11, 34K12 +
Footnote †: journal: Journal of Mathematical Analysis and Applications
## 1 Introduction
Consider the second order nonautonomous delay differential equation
\[x^{\prime\prime}(t)+p(t)x(t-\tau(t))=0, \tag{1}\]
in the case where \(\tau:\mathbb{R}\to[0,+\infty)\), \(p:\mathbb{R}\to\mathbb{R}\) are Lebesgue measurable and uniformly essentially bounded. By a solution of (1), we understand a function \(x:[s-\tau_{m},+\infty)\to\mathbb{R}\), where \(\tau_{m}:=\operatorname{esssup}\{\tau(t):t\geq s\}\), which is absolutely continuous, together with its first derivative, such that (1) holds a.e. on \([s,+\infty)\). The restriction of \(x\) to \([s-\tau_{m},s]\) is called the initial data. In the case \(\tau_{m}=0\), we consider the value of the first derivative at \(s\) to be included in the initial data. We call a real function \(x:A\to\mathbb{R},A\subset\mathbb{R}\), oscillatory if it has arbitrarily large zeros. An interval between adjacent zeros of an oscillatory solution is referred to as a _semicycle_.
Both the delayed and the infinite dimensional character of (1) are essential. The space of initial functions engenders an infinite dimensional solution space, and even infinite dimensional ODEs fail to capture its complexity. In fact, to rewrite (1) in a more general context, one would need to consider difference equations in the phase space ([22, Chapters 3, 4]), retaining both the delay and the infinite solution space. In this paper we focus our attention on the family of oscillatory solutions, which is often the entire solution space, containing both bounded and unbounded solutions. The classification of oscillatory solutions based on the length of their semicycles is at the core of several classic asymptotic results on (1), obtained by two major veins of research, of Myshkis and Azbelev.
We innovate by considering the semicycles as a continuum parametrized by the delay, refining and generalizing the standard two-fold distinction between "slow" and "rapid" oscillation. Let us describe in broad strokes these known results.
Myshkis [35] studied (1) with continuous parameters and nonnegative coefficient \(p:\mathbb{R}\to[0,+\infty)\), using bounds of the semicycle length. In [35], a "large" semicycle with adjacent zeros \(a,b\) where \(a<b\), is defined by
\[b>\sup\{t:t-\tau(t)<a\},\]
otherwise, it is a "small" semicycle. Oscillatory solutions only possessing large semicycles are referred to as slowly oscillating, whereas solutions with small semicycles are rapidly oscillating. Myshkis proved several sufficient conditions for solutions to be slowly oscillating. As for rapidly oscillating solutions, he showed that when
\[\tau_{m}\sqrt{\sup_{t\in\mathbb{R}}p(t)}\leq 2\sqrt{2}, \tag{2}\]
rapidly oscillating solutions are bounded, and if, further, the strict inequality holds in (2) they tend to zero at infinity. He also showed that solutions of the equation
\[x^{\prime\prime}(t)+x(t-c)=0 \tag{3}\]
possessing a large semicycle must be unbounded when \(c\in(0,\pi]\) ([35], [36, Chapter 16, p. 303]), while Hale [22, p. 135] showed that (3) has unbounded solutions when \(c>\pi\).
In contrast, Azbelev [2] restricted attention to solutions of (1), with nonnegative coefficient \(p\), corresponding to the zero initial function
\[x(t)\equiv 0,t\in[s-\tau_{m},s) \tag{4}\]
(allowing for discontinuity of \(x\) or \(x^{\prime}\) at \(s\)), which form a two-dimensional fundamental system, generated by the solutions \(z,y\), corresponding to \(z(s)=1,z^{\prime}(s)=0\), and \(y^{\prime}(s)=1,y(s)=0\). A similar method was employed in [37]. The finite dimension of this restricted solution space significantly facilitates the study of asymptotics and enables one to gauge the behavior of the solutions by estimating the Wronskian of the fundamental system
\[W(t):=\left|\begin{array}{cc}z(t)&y(t)\\ z^{\prime}(t)&y^{\prime}(t)\end{array}\right|,t\geq s,\]
introduced by Domoshnitsky [10]. Crucial in this vein of research [4, 10, 11, 38] is slow oscillation of solutions of (1), with the zero initial function (4), in the sense of the \(h-\)condition (on which see [2]). If the function \(t\mapsto(t-\tau(t))\) is monotone and the coefficient \(p\) nonnegative, all oscillatory solutions of (3), satisfying initial condition (4), slowly oscillate. Notably, when furthermore the coefficient \(p\) is nondecreasing, positive, and bounded, the existence of unbounded solutions to (3), with initial condition (4), is equivalent to the divergence of \(\int_{0}^{\infty}\tau(t)dt\) (see [11, 24]).
Unbounded solutions proven to exist within this framework are generally dominant slowly oscillating solutions, as oscillation of (1) with bounded parameters and positive coefficient is equivalent to oscillation of the ODE \(x^{\prime\prime}(t)+p(t)x(t)=0\), which is known to oscillate under quite weak conditions ([5, 16, 23, 15]). For an in-depth study of several such parallels in asymptotic behavior and
classification of solutions between (1) and the corresponding ODE, the reader is referred to Dosla and Marini [14]. Furthermore, inequality (2), which implies that rapidly oscillating solutions are bounded, also implies _a fortiori_ slow oscillation of the unbounded dominant solutions. However, while the constant \(2\sqrt{2}\) is sharp in (2) by construction regarding the boundedness of rapidly oscillating solutions, it can be improved to \(\frac{\pi}{2}+\sqrt{2}\) regarding slow oscillation (see [35, p. 18-23] and independent results by Eliason [17]).
In summary, both the research of Myshkis [35], [36], and that of Azbelev [2] and Domoshnitsky [11], beg the question: is slow oscillation equivalent to unboundedness? This was answered in the negative by Myshkis [35, Paragraph 6], constructing examples of bounded, slowly oscillating, solutions, under \(\tau_{m}\in(0,\frac{\pi}{2})\). Surprisingly, despite many well-known results linking slow and rapid oscillation to unboundedness and boundedness respectively, the distinction between rapidly and slowly oscillating solutions is insufficient to determine their asymptotics. While Myshkis did consider the entire solution space, the classification of slow and rapid oscillation was not a fine enough sieve to describe (un)boundedness in terms of semicycle length.
Indeed, instead of a two-fold solution space (slowly and rapidly oscillating solutions), one expects an infinite dimensional solution space, containing oscillatory solutions that tend to zero with arbitrarily small semicycles. In the case where there exists a spectral decomposition (see [3, Chapter 12, p. 417], [22, Chapter 8]), either there exists a sequence of eigenvalues that tend to infinity in the complex norm, and the corresponding semicycles (half-periods) of the solutions tend to zero, or there exist _small_, superexponential, solutions which are expected to be oscillatory. On the (non)existence of small solutions and their oscillation, see e.g. [8, 19]. More generally, the pathologies exhibited by infinite dimensional subspaces of the phase space should prevent the families of positive and unbounded solutions from being infinite. For example, closed infinite subspaces of \(\mathcal{C}[0,1]\), must themselves contain infinite dimensional subspaces comprised of functions with infinitely many zeros, as a consequence of well-known spaceability results [18, Corollary 3.7, Corollary 3.8], [32]. The solution space being infinite is the expected behavior, one requiring _ad hoc_ conditions such as the vanishing of the coefficient or the delay to ensure finite solution spaces (see sufficient conditions for an infinite solution space in [21, 33], [22, Chapter 2]). Thus, the solution space will typically consist of finite dimensional unstable and center manifolds, along with an infinite dimensional stable manifold of oscillatory solutions that have arbitrarily small semicycles.
Moreover, the distance between zeros is intimately linked to the rate of growth/decay of oscillatory solutions, and this relation has been the subject of many investigations, e.g. [6, 7, 19, 26, 27], starting with the Morse decompositions [34]. Namely, a bound on the oscillation frequency implies a bound on the rate of decay. Our point of interest in this paper is existence and estimates of a threshold semicycle length, which characterizes oscillatory solutions that tend to zero. This follows from both known and new comparison Theorems on the distance between zeros and local extrema. We obtain an upper bound on the semicycles guaranteeing that an oscillatory solution is in the stable manifold, and conversely, a necessary lower bound on the nonoscillation intervals of unbounded solutions. As we are investigating oscillatory solutions, the sign of the coefficient may be modified, allowing for according changes in the delay. Consequently, the case of negative coefficient is closely related to the case of sign-changing coefficient, and the bounds on the ascent from zero to maximum give us a bound on the delay which guarantees that oscillatory solutions tend to zero, under \(p(t)\leq 0,t\in\mathbb{R}\). A more detailed discussion of the negative coefficient case is postponed to Section 5 of the present.
We note that a major difficulty in such estimates lies in the regularity assumptions on the solutions.
The optimal results are expected to correspond to periodic solutions of (1) (similarly to the first order delay equation). However, rigorously proving that the bounds for regular solutions apply in the general case, to irregular solutions as well, is no trivial task. In addition, investigations without regularity assumptions such as slow oscillation are rare in the literature. In the present, we reduce the two dimensional problem to a one dimensional problem by not taking into account the derivative at the zeros, and are thus able to obtain results without any regularity assumptions.
The main results of this paper are as follows:
1. Calculation of the threshold semicycle length which distinguishes between oscillatory solutions that tend to zero, and oscillatory solutions that may be unbounded.
2. Extension of well-known comparison results and development of new methods, clarifying the relations between various comparison Theorems.
3. Classification of solutions of (1) with negative coefficient, including estimates of the critical value of the delay which guarantees that oscillatory solutions tend to zero.
We first obtain several forward and backward comparison results on (1), as well as estimates on the distances between zeros and extrema, by comparing with autonomous equations and treating the delay as a parameter (Sections 2 and 3). We subsequently utilize these growth estimates to obtain a _continuum_ (parametrized by the delay) of lower bounds on the semicycles of solutions that do not tend to zero (Section 4). Once the speed of oscillation is higher than this critical value, a solution necessarily tends to zero. We furthermore obtain new results on the classification of solutions with negative coefficient (Section 5). Section 6 contains examples illustrating the import and the sharpness of the results, as well as a discussion of the current and future directions of research.
## 2 Descent from a maximum to zero
Throughout the following, we will assume that
\[\operatorname*{esssup}_{t\in\mathbb{R}}|p(t)|\leq 1. \tag{5}\]
This assumption cannot harm the generality, in virtue of the following change of variables, introduced by Myshkis [35].
**Lemma 1** ([35, p. 16]).: _Assume that \(x\) solves (1). For a fixed \(k\in(0,+\infty)\), define the function_
\[\widetilde{x}(t):=x(kt).\]
_Then \(\widetilde{x}\) solves the following equation_
\[\widetilde{x}^{\prime\prime}(t)+k^{2}p(kt)\widetilde{x}\left(t-\frac{1}{k} \tau(kt)\right)=0.\]
Let us recall a well-known comparison Theorem. It was proven by Du and Kwong [15] for continuous parameters, and the same proof applies to the measurable case, as one retains continuous dependence [22, Theorem 2.2]. The following form of the statement is a combination of [15, Theorem 1] and [15, Theorem 2], cf. the discussion in [15, p. 310].
**Lemma 2** ([15]).: _Assume that \(z\) solves (1) on \([0,+\infty)\), and \(y\) solves_
\[y^{\prime\prime}(t)+P(t)y(t-T(t))=0,t\geq 0,\]
_where \(P(t)\geq|p(t)|\) and \(T(t)\geq\tau(t),t\geq 0\). Further assume_
\[y(t)>0,t\in[-T_{m},a),a>0,\]
_where \(T_{m}=\mathrm{esssup}\{T(t):t\geq 0\}\), and that_
\[y^{\prime}(t)\leq 0,t\in[-T_{m},0].\]
_If \(\tau_{m}>0\) assume_
\[|\frac{z(t)}{z(0)}|\leq\frac{y(t)}{y(0)},t\in[-\tau_{m},0] \tag{6}\]
_and if \(\tau_{m}=0\) assume_
\[\frac{z^{\prime}(0)}{z(0)}\geq\frac{y^{\prime}(0)}{y(0)}. \tag{7}\]
_Then_
\[\frac{z(t)}{z(0)}\geq\frac{y(t)}{y(0)}>0,t\in[0,a).\]
As a consequence of this comparison, descending from a maximum to a zero, a solution of (1) is bounded from below by an appropriately scaled multiple of the following autonomous equation (see [15, 35]).
**Lemma 3** ([35, p. 25-28]).: _Consider a fixed \(\Delta\in[0,+\infty)\) and the solution \(r_{\Delta}\) of the equation_
\[r_{\Delta}^{\prime\prime}(t)+r_{\Delta}(t-\Delta) = 0,\ t\geq 0 \tag{8}\] \[r_{\Delta}(0) = 1,t\leq 0\] \[r_{\Delta}^{\prime}(0) = 0\]
_and denote its first zero by \(\vartheta_{\Delta}\). Then \(r_{\Delta}\) is strictly decreasing on \([0,\vartheta_{\Delta}]\), and satisfies the following expression for \(\Delta>0\)_
\[r_{\Delta}(t) = 1-\frac{t^{2}}{2},t\in[0,\Delta]\] \[r_{\Delta}(t) = 1-\frac{t^{2}}{2}+\frac{(t-\Delta)^{4}}{24},t\in[\Delta,2\Delta]\] \[...\]
\[r_{\Delta}(t)=\sum_{k=0}^{n}\frac{(-1)^{k}}{(2k)!}\left[t-(k-1)\Delta\right]^{ 2k},t\in[(n-1)\Delta,n\Delta],n=1,2,... \tag{9}\]
_As \(\Delta\to 0^{+}\), \(r_{\Delta}(t)\) tends uniformly to \(\cos(t),t\in[0,\frac{\pi}{2}].\) Its first zero \(\vartheta_{\Delta}\geq\sqrt{2}\) is a continuous function of \(\Delta\in[0,+\infty)\)._
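For illustration, expression (9) is easy to evaluate numerically, and the first zero \(\vartheta_{\Delta}\) can be located by a simple scan followed by bisection. The short sketch below is our own and purely illustrative; it reproduces the limiting behaviour stated here and made precise in Corollary 8 below, namely \(\vartheta_{\Delta}\to\pi/2\) as \(\Delta\to 0^{+}\) and \(\vartheta_{\Delta}=\sqrt{2}\) once \(\Delta\geq\sqrt{2}\).

```python
import math

def r_delta(t, delta):
    """Evaluate r_Delta(t) via the piecewise series (9); r_Delta(t) = 1 for t <= 0,
    and the expression reduces to cos(t) when delta == 0."""
    if t <= 0.0:
        return 1.0
    if delta == 0.0:
        return math.cos(t)
    n = max(1, math.ceil(t / delta))          # t lies in [(n-1)*delta, n*delta]
    return sum((-1) ** k / math.factorial(2 * k) * (t - (k - 1) * delta) ** (2 * k)
               for k in range(n + 1))

def first_zero(delta, step=1e-3, tol=1e-10):
    """First zero theta_Delta of r_Delta, found by scanning and then bisecting."""
    t = 0.0
    while r_delta(t + step, delta) > 0.0:
        t += step
    lo, hi = t, t + step
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if r_delta(mid, delta) > 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

# theta_0 = pi/2 ~ 1.570796, decreasing to sqrt(2) ~ 1.414214 for Delta >= sqrt(2).
for d in (0.0, 0.5, 1.0, math.sqrt(2), 2.0):
    print(f"Delta = {d:.4f}:  theta_Delta ~ {first_zero(d):.6f}")
```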
**Remark 4**.: _Lemmas 2, 3, imply that under (5), if \(x\) is a solution of (1) on \([0,\vartheta_{\Delta}]\) such that \(|x(t)|\leq x(0)=1,t\in[-\tau_{m},0],\) and \(x^{\prime}(0)\geq 0,\) we have_
\[x(t)\geq r_{\tau_{m}}(t)>0,t\in[0,\vartheta_{\tau_{m}}).\]
_Thus the distance in time descending from a maximum to the next zero is at least \(\vartheta_{\tau_{m}}\geq\sqrt{2}.\)_
The following Lemma will help us prove a backward analog of Lemma 2, utilizing the Vallee Poussin and Sturm Separation Theorems [2], [4, Chapter 11], [28, Lemma 1]. We remark that this framework can provide an alternative proof to Lemma 2.
**Lemma 5** ([2], [28, Lemma 1]).: _Assume there exists a nonnegative solution of_
\[v^{\prime\prime}(t)+\chi[a,b](t-\tau(t))p(t)v(t-\tau(t)) \leq 0,t\in[a,b],\] \[v(a) > 0,\]
_where \(p(t)\geq 0,t\in[a,b]\) and \(\chi S(\cdot)\) is the characteristic function of any set \(S\subset\mathbb{R}.\) Then there exists no nontrivial solution of_
\[u^{\prime\prime}(t)+\chi[a,b](t-\tau(t))p(t)u(t-\tau(t)) \geq 0,t\in[a,b]\] \[u(a) = u(b)=0\] \[u^{\prime}(a) \geq 0.\]
**Theorem 6**.: _Assume that \(y\) solves_
\[y^{\prime\prime}(t)+P(t)y(t-T(t))=0,t\geq 0,\]
_where \(P(t)\geq|p(t)|\) and \(T(t)\geq\tau(t),t\geq 0\). Further assume_
\[y(t)>0,t\in[-T_{m},b),b>0,\]
_where \(T_{m}=\mathrm{esssup}\{T(t):t\geq 0\}\), and that_
\[y^{\prime}(t)\leq 0,t\in[-T_{m},0].\]
_Assume that \(z\) solves (1) on \([0,+\infty)\) and let \(z,y\), satisfy_
\[|z(t)| \leq y(t),t\in[-\tau_{m},0], \tag{10}\] \[|z(b)| = y(b). \tag{11}\]
_Then_
\[|z(t)|\leq y(t),t\in[0,b].\]
**Proof.** Assuming the contrary, there exists \(\xi\in(0,b)\) such that \(y(\xi)<|z(\xi)|.\) Hence the set
\[A:=\{t\in[-\tau_{m},b):y(t)<|z(t)|\}\]
is nonempty. Furthermore, by (10), we have \(A\subset(0,b)\). We denote the infimum of this set by \(a\), and note that \(a\geq 0.\) By continuity, we have
\[y(a)\leq|z(a)|.\]
If \(a>0\) we must have
\[y(a)=|z(a)|, \tag{12}\]
by continuity and the definition of \(a\), and if \(a=0\), we again have (12), because of (10).
As (12) implies \(a\notin A\), there exists a sequence of points \(t_{n}\in A\), such that \(t_{n}\stackrel{{ n\to\infty}}{{\to}}a^{+}.\) Hence,
\[\frac{y(t_{n})}{y(a)}<|\frac{z(t_{n})}{z(a)}|. \tag{13}\]
Relations (12), (13), imply
\[\frac{y^{\prime}(a)}{y(a)}\leq\frac{z^{\prime}(a)}{z(a)}. \tag{14}\]
Applying Lemma 2, we have that
\[\frac{z(t)}{z(a)}\geq\frac{y(t)}{y(a)}>0,t\in[a,b).\]
We may assume \(z(t)>0,t\in[a,b),\) without loss of generality (otherwise, we consider \(-z\)). For \(t\in[a,b]\), we have by monotonicity, positivity, of \(y\),
\[y^{\prime\prime}(t)+|p(t)|y(t-\tau(t))\leq y^{\prime\prime}(t)+P(t)y(t-\tau(t ))\leq y^{\prime\prime}(t)+P(t)y(t-T(t))=0, \tag{15}\]
and by the nonnegativity of \(z\),
\[z^{\prime\prime}(t)+\chi[a,b](t-\tau(t))|p(t)|z(t-\tau(t))\geq-\chi[a-\tau_{m},a](t-\tau(t))|p(t)||z(t-\tau(t))|. \tag{16}\]
Setting \(u(t):=z(t)-y(t),t\in[a,b]\) and subtracting (15) from (16)
\[u^{\prime\prime}(t)+\chi[a,b](t-\tau(t))|p(t)|u(t-\tau(t)) \geq \chi[a-\tau_{m},a](t-\tau(t))|p(t)|\left(y(t-\tau(t))-|z(t-\tau(t) )|\right)\] \[\geq 0,t\in[a,b]\]
where \(\chi A(\cdot)\) is the characteristic function of any set \(A\subset\mathbb{R}\). Furthermore, in virtue of (11) and (14),
\[u(a) = u(b)=0\] \[u^{\prime}(a) \geq 0.\]
It suffices to show that the assumptions of Lemma 5 are satisfied. By (15), we have that \(y=v\) is the required nonnegative solution of Lemma 5.
The following Corollary immediately follows, and a direct proof solely using Lemma 2 is also possible.
**Corollary 7**.: _For a fixed \(\Delta\in[0,+\infty),\) consider any solution \(x\) of (1) on \([0,\vartheta_{\Delta}]\) with (5), \(\tau_{m}\leq\Delta\) and \(x(\vartheta_{\Delta})=0.\) Then_
\[\left(\max_{t\in[-\tau_{m},0]}|x(t)|\right)r_{\Delta}(t)\geq|x(t)|,t\in[0, \vartheta_{\Delta}].\]
**Corollary 8**: \(\vartheta_{\Delta}\) _is a nonincreasing function of \(\Delta\). More precisely, \(\vartheta_{\Delta}=\sqrt{2},\forall\Delta\geq\sqrt{2}\), and \(\vartheta_{\Delta}\) is strictly decreasing from \(\frac{\pi}{2}\) to \(\sqrt{2}\) for \(\Delta\in[0,\sqrt{2}]\)._
**Proof.** The nonincreasing nature of \(\vartheta_{\Delta}\) follows from Lemma 2, setting \(y=r_{\Delta},z=r_{\widetilde{\Delta}}\) with \(\Delta\geq\widetilde{\Delta}\). That \(\vartheta_{\Delta}=\sqrt{2},\forall\Delta\geq\sqrt{2}\) is an immediate consequence of (9). Now assume that \(\vartheta_{\widetilde{\Delta}}=\vartheta_{\Delta}\) where \(0<\widetilde{\Delta}<\Delta\leq\sqrt{2}\). By Corollary 7, Lemma 2 (setting \(y=r_{\Delta},z=r_{\widetilde{\Delta}}\)), we have \(r_{\Delta}(t)=r_{\widetilde{\Delta}}(t),t\in[0,\vartheta_{\Delta}]\). Expression (9) and \(0<\widetilde{\Delta}<\Delta\leq\sqrt{2}\) give a contradiction.
## 3 Ascent from a zero to a maximum
We now turn our attention to the ascent from a zero to the next extremum. We reduce the two dimensional problem to a one dimensional problem, without taking into account the first derivative at the zero. This reduction allows us to obtain results that apply with arbitrary scaling and semicycle length. It furthermore allows us to directly combine the estimates on the ascent and on the descent, without imposing any regularity restrictions -such as slow oscillation- on the oscillatory solutions. For this reason, it is simplest to reverse time and consider the independent variable \(w:=-t\), with respect to which our reasoning is similar to the previous section. In order to estimate the least distance in time necessary to ascend from a zero to a maximum, we need to define a sequence of upper bounds, constructed so that it converges to a solution which ascends to a maximum in the least time. Throughout the following a fixed \(\Delta\in[0,+\infty)\) will denote a bound on the maximum delay, and \(\rho\in(0,+\infty)\) will be a scaling constant.
**Lemma 9**: _For fixed \(\Delta\in[0,+\infty),\rho\in(0,+\infty)\) consider the following sequence of functions. Setting \(\beta_{0}(t)\equiv 1\), we define for \(n=0,1,2,...\)_
\[\beta_{n+1}(t)=\left\{\begin{array}{cc}1,&t\leq-\varpi_{n}\\ 1-\int_{-\varpi_{n}}^{t}\left[\int_{-\varpi_{n}}^{s}\max\{\beta_{n}(v),\rho r _{\Delta}(\vartheta_{\Delta}-v-\Delta)\chi[-\Delta,0](v)\}dv\right]ds,&t\in[ -\varpi_{n},0]\end{array}\right. \tag{17}\]
_where \(\varpi_{n}\in(0,+\infty)\), \(n=0,1,2,...\)is the unique solution of_
\[h_{n}(-\varpi_{n})=\int_{-\varpi_{n}}^{0}\left[\int_{-\varpi_{n}}^{s}\max\{ \beta_{n}(w),\rho r_{\Delta}(\vartheta_{\Delta}-w-\Delta)\chi[-\Delta,0](w)\} dw\right]ds=1,\]
_where \(h_{n}:(-\infty,0]\rightarrow[0,+\infty)\) is the function_
\[h_{n}(v):=\int_{v}^{0}\left[\int_{v}^{s}\max\{\beta_{n}(w),\rho r_{\Delta}( \vartheta_{\Delta}-w-\Delta)\chi[-\Delta,0](w)\}dw\right]ds \tag{18}\]
_and \(\chi A(\cdot)\) is the characteristic function of any set \(A\subset\mathbb{R}\). The sequence \(\beta_{n}\) is well-defined and pointwise nonincreasing, and the sequence \(\varpi_{n}\) is nondecreasing._
**Proof.** Assuming that \(\beta_{n}\) is well-defined for \(n\leq k-1\), where \(k\) is a positive integer, we shall show that \(\beta_{k}\) is well-defined. By (17), and the definition of \(\beta_{0}(t)\equiv 1\), we have
\[\beta_{k-1}(t)>0,t<0, \tag{19}\]
and for a sufficiently large \(K>0,\)
\[\beta_{k-1}(t)=1,t<-K. \tag{20}\]
By (19), the function \(h_{k-1}\) is continuous strictly decreasing. By (20), we furthermore have
\[h_{k-1}(-K-\sqrt{2})>1.\]
Hence, there must exist a unique \(\varpi_{k-1}\in(0,+\infty)\) such that \(h_{k-1}(-\varpi_{k-1})=1\). This shows that \(\beta_{k}\) is well-defined, as in (17). We also note that by the definition of \(\varpi_{n}\), each \(\beta_{n}\) is strictly decreasing on \([-\varpi_{n-1},0]\), \(n=1,2,...\).
We now show that
\[\beta_{n+1}(t)\leq\beta_{n}(t),t\leq 0. \tag{21}\]
Notice that, by definition (17), inequality (21) implies \(\varpi_{n}\leq\varpi_{n+1}\). For \(n=0\), the inequality (21) is obvious, considering (17). Assuming (21) for \(n\leq k-1\), where \(k\) is a positive integer, we will prove (21) for \(n=k\). Notice that by (21) for \(n=k-1\), and definition (17),
\[\begin{array}{rl}\beta_{k+1}^{\prime\prime}(t)&\geq-\max\{\beta_{k}(t),\rho r_{\Delta}(\vartheta_{\Delta}-t-\Delta)\chi[-\Delta,0](t)\}\\ &\geq-\max\{\beta_{k-1}(t),\rho r_{\Delta}(\vartheta_{\Delta}-t-\Delta)\chi[-\Delta,0](t)\}=\beta_{k}^{\prime\prime}(t),t\in[-\varpi_{k-1},0].\end{array} \tag{22}\]
Inequality (22) together with
\[\begin{array}{rl}\beta_{k+1}(0)&=&\beta_{k}(0)=0\\ 1&=&\beta_{k}(-\varpi_{k-1})\geq\beta_{k+1}(-\varpi_{k-1})\end{array}\]
give (21) for \(n=k\). In fact, \(f(t):=\beta_{k}(t)-\beta_{k+1}(t),t\in[-\varpi_{k-1},0]\) is concave (\(f^{\prime\prime}(t)\leq 0\)) and satisfies \(f(-\varpi_{k-1})\geq 0=f(0)\). Therefore the graph of \(f\) lies above its nonnegative chord, and \(\beta_{k}(t)\geq\beta_{k+1}(t),t\in[-\varpi_{k-1},0]\).
For fixed \(\Delta\in[0,+\infty),\rho\in(0,+\infty)\), we have shown the sequence \(\varpi_{n}\) of Lemma 9 is nondecreasing. We shall denote the limit of \(\varpi_{n}\) as \(n\rightarrow\infty\) by
\[\Psi(\rho,\Delta):=\varpi_{\infty}=\lim_{n\rightarrow\infty}\varpi_{n}. \tag{23}\]
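The iteration (17)-(18) also lends itself to a direct numerical computation of \(\Psi(\rho,\Delta)\). The following rough sketch is our own discretization and not part of the proofs: everything is placed on a uniform grid with simple trapezoidal quadrature, and the grid size, number of iterations, and treatment of the boundary point are ad hoc choices made only to illustrate the construction.

```python
import math
import numpy as np

def r_delta(t, delta):
    """Piecewise series (9); r_Delta(t) = 1 for t <= 0 and cos(t) when delta == 0."""
    if t <= 0.0:
        return 1.0
    if delta == 0.0:
        return math.cos(t)
    n = max(1, math.ceil(t / delta))
    return sum((-1) ** k / math.factorial(2 * k) * (t - (k - 1) * delta) ** (2 * k)
               for k in range(n + 1))

def first_zero(delta, step=1e-3, tol=1e-10):
    """First zero theta_Delta of r_Delta (it lies in [sqrt(2), pi/2])."""
    t = 0.0
    while r_delta(t + step, delta) > 0.0:
        t += step
    lo, hi = t, t + step
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if r_delta(mid, delta) > 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

def cumtrapz(y, dx):
    """Cumulative trapezoidal integral on a uniform grid, starting from 0."""
    return np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * dx)))

def psi(rho, delta, grid=4001, iters=60):
    """Approximate Psi(rho, delta) = lim varpi_n via the iteration (17)-(18)."""
    theta = first_zero(delta)
    t = np.linspace(-math.pi / 2, 0.0, grid)    # varpi_n <= pi/2 (Theorem 10 below)
    dx = t[1] - t[0]
    forcing = np.array([rho * r_delta(theta - v - delta, delta)
                        if -delta <= v <= 0.0 else 0.0 for v in t])
    beta = np.ones_like(t)                      # beta_0 == 1
    varpi = 0.0
    for _ in range(iters):
        g = np.maximum(beta, forcing)
        G = cumtrapz(g, dx)                     # G(s)  = int_{-pi/2}^{s} g
        IG = cumtrapz(G, dx)                    # IG(s) = int_{-pi/2}^{s} G
        # h_n(t_i) = int_{t_i}^{0} (G(s) - G(t_i)) ds for every grid point t_i
        H = (IG[-1] - IG) - G * (0.0 - t)
        i0 = int(np.argmax(H < 1.0))            # first grid point with h_n < 1
        varpi = -t[i0]                          # approximation of varpi_n
        beta = np.ones_like(t)
        beta[i0:] = 1.0 - ((IG[i0:] - IG[i0]) - G[i0] * (t[i0:] - t[i0]))
    return varpi

# For delta = 0 the forcing acts only at a single point, the limit equation is
# y'' + y = 0, and the iterates climb toward Psi = pi/2 ~ 1.5708.
print(psi(1.0, 0.0), psi(1.0, 1.0))
```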
The following Theorem shows that this limit is bounded by \(\frac{\pi}{2}.\)
**Theorem 10**.: _For fixed \(\Delta\in[0,+\infty),\rho\in(0,+\infty)\), the limit \(\Psi(\rho,\Delta)\), as in (23), satisfies_
\[\Psi(\rho,\Delta)\leq\frac{\pi}{2}.\]
_The sequence of functions \(\beta_{n}(t),\) as in Lemma 9, converges to a solution \(y_{\rho,\Delta}\) of the following boundary value problem_
\[y_{\rho,\Delta}^{\prime\prime}(w)+\max\{y_{\rho,\Delta}(w),\rho r _{\Delta}(\vartheta_{\Delta}-w-\Delta)\chi[-\Delta,0](w)\} = 0,w\in[-\Psi(\rho,\Delta),0] \tag{24}\] \[y_{\rho,\Delta}(0) = 0<y_{\rho,\Delta}(w),w\in[-\Psi(\rho,\Delta),0)\] \[y_{\rho,\Delta}(w) = 1,w\leq-\Psi(\rho,\Delta)\] \[y_{\rho,\Delta}^{\prime}(-\Psi(\rho,\Delta)) = 0\]
_For any nonnegative function \(x\) which satisfies_
\[x(0) = 0\] \[\max_{t\in[-\Psi(\rho,\Delta),0]}x(t) \leq 1\] \[x^{\prime\prime}(w)+\max\{\max_{t\in[w,0]}x(t),\rho r_{\Delta}( \vartheta_{\Delta}-w-\Delta)\chi[-\Delta,0](w)\} \geq 0,w\in[-\Psi(\rho,\Delta),0] \tag{25}\]
_we have the following bound_
\[y_{\rho,\Delta}(w)\geq x(w),w\in[-\Psi(\rho,\Delta),0]. \tag{26}\]
**Proof.** Let us show that
\[z(t)\leq\beta_{n}(t),t\leq 0 \tag{27}\]
where \(z(t):=\left\{\begin{array}{ll}1&,t<-\frac{\pi}{2},\\ \cos(t+\frac{\pi}{2}),&t\in[-\frac{\pi}{2},0].\end{array}\right.\) For \(n=0\), the inequality (27) is obvious. Assuming (27) for \(n\leq k-1\), where \(k\) is a positive integer, we will prove (27) for \(n=k\). Notice that by (27) for \(n=k-1\), and definition (17),
\[\begin{array}{c}\beta_{k}^{\prime\prime}(t)=-\max\{\beta_{k-1}(t),\rho r_{ \Delta}(\vartheta_{\Delta}-w-\Delta)\chi[-\Delta,0](t)\}\\ \leq-\max\{z(t),\rho r_{\Delta}(\vartheta_{\Delta}-w-\Delta)\chi[-\Delta,0]( t)\}\leq z^{\prime\prime}(t),t\in[-\varpi_{k-1},0].\end{array} \tag{28}\]
Inequality (28) together with
\[z(0) = \beta_{k}(0)=0\] \[1 = \beta_{k}(-\varpi_{k-1})\geq z(-\varpi_{k-1}),\]
give (27) for \(n=k\). In fact, \(f(t):=\beta_{k}(t)-z(t),t\in[-\varpi_{k-1},0]\) is concave (\(f^{\prime\prime}(t)\leq 0\)) and satisfies \(f(-\varpi_{k-1})\geq 0=f(0)\). Therefore the graph of \(f\) lies above its nonnegative chord, and \(\beta_{k}(t)\geq z(t),t\in[-\varpi_{k-1},0]\).
Integrating (27), we obtain
\[\int_{-\frac{\pi}{2}}^{0}\left[\int_{-\frac{\pi}{2}}^{s}\max\{\beta_{n}(w), \rho r_{\Delta}(\vartheta_{\Delta}-w-\Delta)\chi[-\Delta,0](w)\}dw\right]ds \geq\int_{-\frac{\pi}{2}}^{0}\left[\int_{-\frac{\pi}{2}}^{s}\cos(w+\frac{\pi}{2 })dw\right]ds=1,\]
hence (27) implies \(\frac{\pi}{2}\geq\varpi_{n}\).
By (27), Lemma 9, we have \(\varpi_{n}\leq\varpi_{n+1}\leq...\leq\varpi_{\infty}\leq\frac{\pi}{2}\). Finally, notice that the sequence \(\beta_{n}(t)\) is a nonincreasing sequence of functions that, by dominated convergence, converges to \(y_{\rho,\Delta}\) pointwise.
The proof of (26) is similar to the proof of (27), for we need only show
\[x(t)\leq\beta_{n}(t),t\in[-\Psi(\rho,\Delta),0]. \tag{29}\]
For \(n=0\), the inequality (29) is obvious. Assuming (29) for \(n\leq k-1\), where \(k\) is a positive integer, we will prove (29) for \(n=k\). Notice that by (29) for \(n=k-1\), definition (17), and the monotonicity of \(\beta_{k-1}\),
\[\begin{array}{c}\beta_{k}^{\prime\prime}(t)=-\max\{\beta_{k-1}(t),\rho r_{ \Delta}(\vartheta_{\Delta}-w-\Delta)\chi[-\Delta,0](t)\}\\ \leq-\max\{\max_{a\in[t,0]}x(a),\rho r_{\Delta}(\vartheta_{\Delta}-w-\Delta) \chi[-\Delta,0](t)\}\leq x^{\prime\prime}(t),t\in[-\varpi_{k-1},0].\end{array} \tag{30}\]
Inequality (30) together with
\[x(0) = \beta_{k}(0)=0\] \[1 = \beta_{k}(-\varpi_{k-1})\geq x(-\varpi_{k-1}),\]
give (29) for \(n=k\). In fact, \(f(t):=\beta_{k}(t)-x(t),t\in[-\varpi_{k-1},0]\) is concave (\(f^{\prime\prime}(t)\leq 0\)) and satisfies \(f(-\varpi_{k-1})\geq 0=f(0)\). Therefore the graph of \(f\) lies above its nonnegative chord, and \(\beta_{k}(t)\geq x(t),t\in[-\varpi_{k-1},0].\)
Inequality (26) shows that \(\Psi(\rho,\Delta)\) is the least distance in time necessary for a solution of (25) to descend to zero. By reversing time, we obtain an estimate of the least distance necessary for a solution of (1) to ascend from a zero to a maximum.
**Theorem 11**.: _Consider any solution \(x\) of (1) on \([-\vartheta_{\Delta},+\infty)\) with (5), \(\tau_{m}\leq\Delta,\) such that_
\[\max_{t\in[-\Delta-\vartheta_{\Delta},0]}|x(t)| \leq \rho\in(0,+\infty)\] \[x(0) = 0\] \[1 \geq x(t)\geq 0,t\in[0,A]\] \[x(A) = 1,x^{\prime}(A)=0\]
_Then, with \(\Psi(\rho,\Delta)\) as in (23), we have \(\Psi(\rho,\Delta)\leq A\) and_
\[x(t)\leq y_{\rho,\Delta}(-t),t\in[0,\Psi(\rho,\Delta)]. \tag{31}\]
**Proof.** Applying Corollary 7, we have \(|x(t)|\leq\rho r_{\Delta}(\vartheta_{\Delta}+t),t\in[-\Delta-\vartheta_{ \Delta},0].\) The rest follows from Theorem 10, condition (5), by reversing time. In fact, setting \(w:=-t,\) the function \(f(w):=\left\{\begin{array}{ll}1,&w<-A,\\ x(-w),&w\in[-A,0]\end{array}\right.\) satisfies for \(w\in[-\Psi(\rho,\Delta),0],\) where \(\Psi(\rho,\Delta)\) is defined in(23),
\[f^{\prime\prime}(w) =\left\{\begin{array}{ll}x^{\prime\prime}(-w),&w\in[-A,0]\cap[ -\Psi(\rho,\Delta),0]\\ 0,&w\in(-\infty,-A)\cap[-\Psi(\rho,\Delta),0]\end{array}\right.\] \[\geq-\max\{\max_{u\in[w,0]}f(u),\rho r_{\Delta}(\vartheta_{ \Delta}-w-\Delta)\chi[-\Delta,0](w)\}.\]
Assuming \(\Psi(\rho,\Delta)>A\) we obtain a contradiction by Theorem 10 and the strict monotonicity of \(y_{\rho,\Delta}\). We may conclude that (31) follows from Theorem 10.
**Corollary 12**.: _Consider any solution \(x\) of (1) with (5), \(\tau_{m}\leq\Delta,\) that has a semicycle \((a,b)\). Fix any \(\rho\geq\max_{t\in[-\Delta-\vartheta_{\Delta}+a,a]}|x(t)|\). Theorem 11 implies that the extremum of this semicycle is attained at points \(w\) such that_
\[w-a\geq\Psi\left(\frac{\rho}{\max_{t\in[a,b]}|x(t)|},\Delta\right),\]
_where \(\Psi(\rho,\Delta)\) is defined in (23)._
**Lemma 13**.: \(\Psi(\rho,\Delta)\)_, defined in (23), is a nonincreasing continuous function of \(\Delta\in[0,+\infty)\) for fixed \(\rho\in(0,+\infty)\), strictly decreasing and continuous with respect to \(\rho\) for fixed \(\Delta>0.\) Furthermore, \(\Psi(\rho,0)=\frac{\pi}{2}\), and \(\Psi(1,\Delta)=\sqrt{2}\) for \(\Delta\geq 2\sqrt{2}\)._
**Proof.** By application of Theorem 10, Corollary 7, \(\Psi(\rho,\Delta)\) is nonincreasing with respect to \(\Delta\), nonincreasing with respect to \(\rho\).
Assuming that \(\Psi(\rho,\Delta)=\Psi(\widetilde{\rho},\Delta)\) with \(\widetilde{\rho}>\rho>0\), by Theorem 10, \(y_{\widetilde{\rho},\Delta}(w)\geq y_{\rho,\Delta}(w),w\in[-\Psi(\rho,\Delta),0]\). Hence by (24), \(y_{\widetilde{\rho},\Delta}^{\prime\prime}(w)\leq y_{\rho,\Delta}^{\prime\prime}(w),w\in[-\Psi(\rho,\Delta),0]\) and we conclude
\[y_{\widetilde{\rho},\Delta}(w)=y_{\rho,\Delta}(w),w\in[-\Psi(\rho,\Delta),0]. \tag{32}\]
If \(\Delta>0\), for \(w\) approaching \(0\) from the left,
\[y_{\widetilde{\rho},\Delta}^{\prime\prime}(w)=-\widetilde{\rho}r_{\Delta}( \vartheta_{\Delta}-w-\Delta)\chi[-\Delta,0](w)<-\rho r_{\Delta}(\vartheta_{ \Delta}-w-\Delta)\chi[-\Delta,0](w)=y_{\rho,\Delta}^{\prime\prime}(w),\]
contradicting (32). Therefore \(\Psi(\rho,\Delta)\) is strictly decreasing with respect to \(\rho\) for fixed \(\Delta>0\).
Let us prove continuity. We recall that we have uniform continuous dependence [22, Theorem 2.2]. By applying Theorem 10 with \(x(w)=\cos(\frac{\pi}{2}+w)\),\(w\in[-\frac{\pi}{2},0]\) we also have \(y_{\rho,\Delta}^{\prime}(0)\leq\cos^{\prime}(\frac{\pi}{2})=-1\).
Assume there exists a left discontinuity with respect to \(\rho\). In other words, assume there exists an increasing sequence \(\rho_{n}<\rho_{n+1}<...<\rho_{\infty}\) such that the limit function
\[\varphi(w):=\lim_{n\to\infty}y_{\rho_{n},\Delta}(w),w\in[-\frac{\pi}{2},0]\]
does not equal \(y_{\rho_{\infty},\Delta}\). Notice that this function is well defined, and \(y_{\rho_{\infty},\Delta}(w)\geq\varphi(w),w\in[-\frac{\pi}{2},0]\), in virtue of the nonincreasing nature of \(\Psi\), and Theorem 10. We have that \(z_{n}:=\frac{\rho_{n}}{\rho_{\infty}}y_{\rho_{\infty},\Delta}\) satisfies
\[z_{n}^{\prime\prime}(w)+\max\{z_{n}(w),\rho_{n}r_{\Delta}( \vartheta_{\Delta}-w-\Delta)\chi[-\Delta,0](w)\} \geq 0,w\in[-\Psi(\rho_{n},\Delta),0]\] \[\frac{\rho_{n}}{\rho_{\infty}} \geq z_{n}(w)\geq 0,w\in[-\Psi(\rho_{n},\Delta),0]\]
By Theorem 10, \(\varphi(w)\geq z_{n}(w),w\in[-\Psi(\rho_{n},\Delta),0]\). Now notice that \(y_{\rho_{\infty},\Delta}\) is the limit of \(z_{n}\), as \(n\to\infty\).
Now assume a right discontinuity with respect to \(\rho\). In other words, assume there exists a decreasing sequence \(\rho_{n}>\rho_{n+1}>...>\rho_{\infty}\) such that the limit function
\[\varphi(w):=\lim_{n\to\infty}y_{\rho_{n},\Delta}(w),w\in[-\frac{\pi}{2},0]\]
does not equal \(y_{\rho_{\infty},\Delta}\). Notice that this function is well defined, and \(y_{\rho_{\infty},\Delta}(w)\leq\varphi(w),w\in[-\frac{\pi}{2},0]\), in virtue of the nonincreasing nature of \(\Psi\), and Theorem 10. We have that \(z_{n}:=\frac{\rho_{\infty}}{\rho_{n}}y_{\rho_{n},\Delta}\) satisfies
\[z_{n}^{\prime\prime}(w)+\max\{z_{n}(w),\rho_{\infty}r_{\Delta}( \vartheta_{\Delta}-w-\Delta)\chi[-\Delta,0](w)\} \geq 0,w\in[-\Psi(\rho_{\infty},\Delta),0]\] \[\frac{\rho_{\infty}}{\rho_{n}} \geq z_{n}(w)\geq 0,w\in[-\Psi(\rho_{\infty},\Delta),0]\]
By Theorem 10, \(y_{\rho_{\infty},\Delta}(w)\geq z_{n}(w),w\in[-\Psi(\rho_{\infty},\Delta),0]\). Now notice that \(\varphi\) is the limit of \(z_{n}\), as \(n\to\infty\).
Assume there exists a left discontinuity with respect to \(\Delta\). In other words, assume there exists an increasing sequence \(\Delta_{n}<\Delta_{n+1}<...<\Delta_{\infty}\) such that the limit function
\[\varphi(w):=\lim_{n\to\infty}y_{\rho,\Delta_{n}}(w),w\in[-\frac{\pi}{2},0]\]
does not equal \(y_{\rho,\Delta_{\infty}}\). Notice that this function is well defined, and \(y_{\rho,\Delta_{\infty}}(w)\geq\varphi(w),w\in[-\frac{\pi}{2},0]\), in virtue of the nonincreasing nature of \(\Psi\), and Theorem 10. For any fixed \(\varepsilon>0\) there exists a \(K(\varepsilon)\in\mathbb{N}\) such that \(n>K(\varepsilon)\) implies that the solution \(z_{n}\) of
\[z_{n}^{\prime\prime}(w)+\max\{z_{n}(w),\rho r_{\Delta_{n}}( \vartheta_{\Delta_{n}}-w-\Delta_{n})\chi[-\Delta_{n},0](w)\} = 0,w\in[-\Psi(\rho,\Delta_{\infty}),0]\] \[z_{n}^{\prime\prime}(w) = 0,w\leq-\Psi(\rho,\Delta_{\infty})\] \[z_{n}(0) = 0,z_{n}^{\prime}(0)=y_{\rho,\Delta_{\infty}}^{\prime}(0)\]
satisfies \(|z_{n}(w)-y_{\rho,\Delta_{\infty}}(w)|+|z_{n}^{\prime}(w)-y_{\rho,\Delta_{\infty}}^{\prime}(w)|<\varepsilon,w\in[-\frac{\pi}{2},0]\). Because \(y_{\rho,\Delta_{\infty}}^{\prime}(0)\leq-1\), for sufficiently small \(\varepsilon\), we also have \(z_{n}(w)\geq 0,w\in[-\frac{\pi}{2},0]\). Applying Theorem 10, \(\varphi(w)\geq\frac{1}{1+\varepsilon}z_{n}(w),w\in[-\Psi(\rho,\Delta_{n}),0]\). Now notice that \(y_{\rho,\Delta_{\infty}}\) is the limit of \(z_{n}\), as \(\varepsilon\to 0\) and \(K(\varepsilon)<n\to\infty\).
Assume there exists a right discontinuity with respect to \(\Delta\). In other words, assume there exists a decreasing sequence \(\Delta_{n}>\Delta_{n+1}>...>\Delta_{\infty}\) such that the limit function
\[\varphi(w):=\lim_{n\to\infty}y_{\rho,\Delta_{n}}(w),w\in[-\frac{\pi}{2},0]\]
does not equal \(y_{\rho,\Delta_{\infty}}\). Notice that this function is well defined, and \(y_{\rho,\Delta_{\infty}}(w)\leq\varphi(w),w\in[-\frac{\pi}{2},0]\), in virtue of the nonincreasing nature of \(\Psi\), and Theorem 10. For any fixed \(\varepsilon>0\) there exists a \(K(\varepsilon)\in\mathbb{N}\) such that \(n>K(\varepsilon)\) implies that the solution \(z_{n}\) of
\[z_{n}^{\prime\prime}(w)+\max\{z_{n}(w),\rho r_{\Delta_{\infty}}( \vartheta_{\Delta_{\infty}}-w-\Delta_{\infty})\chi[-\Delta_{\infty},0](w)\} = 0,w\in[-\Psi(\rho,\Delta_{n}),0]\] \[z_{n}^{\prime\prime}(w) = 0,w\leq-\Psi(\rho,\Delta_{n})\] \[z_{n}(0) = 0,z_{n}^{\prime}(0)=y_{\rho,\Delta_{n}}^{\prime}(0)\]
satisfies \(|z_{n}(w)-y_{\rho,\Delta_{n}}(w)|+|z_{n}^{\prime}(w)-y_{\rho,\Delta_{n}}^{ \prime}(w)|<\varepsilon,w\in[-\frac{\pi}{2},0]\). Because \(y_{\rho,\Delta_{n}}^{\prime}(0)\leq-1\), for sufficiently small \(\varepsilon\), we also have \(z_{n}(w)\geq 0,w\in[-\frac{\pi}{2},0]\). Applying Theorem 10, \(y_{\rho,\Delta_{\infty}}(w)\geq\frac{1}{1+\varepsilon}z_{n}(w),w\in[-\Psi( \rho,\Delta_{\infty}),0]\). Now notice that \(\varphi\) is the limit of \(z_{n}\), as \(\varepsilon\to 0\) and \(K(\varepsilon)<n\to\infty\).
Finally, when \(\Delta=0\), we have \(y_{\rho,0}(w)=\cos(\frac{\pi}{2}+w),w\in[-\frac{\pi}{2},0]\) and when \(\rho=1,\Delta\geq 2\sqrt{2}\), using (9) and \(\vartheta_{\Delta}=\sqrt{2}\), we have by direct calculation \(\beta_{n}(w)=1-\frac{1}{2}(w+\sqrt{2})^{2},w\in[-\sqrt{2},0]\) for \(n=1,2,...\)
## 4 Bounds on semicycle length
Combining the estimates of the previous sections, we can now prove the main results, calculating lower bounds on the semicycle length of oscillatory solutions which do not tend to zero.
**Theorem 14**.: _Consider an oscillatory solution \(x\) of (1), with (5). When \(\tau_{m}>0,\) assume that its semicycles satisfy_
\[x(t)\neq 0,t\in(a,b)\Longrightarrow b-a\leq\Psi(1,\tau_{m})+\vartheta_{\tau_{m}}\]
_where \(\Psi(\rho,\Delta)\) is as in (23), \(\vartheta_{(\cdot)}\) as in Lemma 3, and when \(\tau_{m}=0\), assume that_
\[x(t)\neq 0,t\in(a,b)\Longrightarrow b-a<\pi.\]
_Then, \(x\) is bounded. If, further, for a real constant \(c\in(0,\Psi(1,\tau_{m})+\vartheta_{\tau_{m}}),\)_
\[x(t)\neq 0,t\in(a,b)\Longrightarrow\Psi(1,\tau_{m})+\vartheta_{\tau_{m}}>c>b-a \tag{33}\]
_then \(x\) tends to zero at infinity._
**Proof.** Let us consider a point \(\xi\) such that \(x(\xi)=0\). We shall show that
\[|x(u)|\leq\max_{t\in[\xi-\tau_{m}-\vartheta_{\tau_{m}},\xi]}|x(t)|,u\geq\xi, \tag{34}\]
where \(\vartheta_{(\cdot)}\) is as in Lemma 3. We may assume \(\max_{t\in[\xi-\tau_{m}-\vartheta_{\tau_{m}},\xi]}|x(t)|>0\) without loss of generality. Assuming the contrary, that (34) does not hold, by considering the point
\[\psi:=\inf\{u>\xi:|x(u)|>\max_{t\in[\xi-\tau_{m}-\vartheta_{\tau_{m}},\xi]}|x( t)|\},\]
it is easy to see that this point is in a semicycle \((a,b)\) such that
\[\max_{t\in(a,b)}|x(t)|>\max_{t\in[\xi-\tau_{m}-\vartheta_{\tau_{m}},a]}|x(t)|=\max_{t\in[\xi-\tau_{m}-\vartheta_{\tau_{m}},\xi]}|x(t)|, \tag{35}\]
where \(a\geq\xi\). Then in \((a,b)\), the extremum is first assumed at a point \(w\) such that \(x^{\prime}(w)=0\). By Corollaries 7, 12, and (35), we have
\[w \leq b-\vartheta_{\tau_{m}}\] \[w \geq a+\Psi\left(\frac{\max_{t\in[\xi-\tau_{m}-\vartheta_{\tau_{m}}, \xi]}|x(t)|}{\max_{t\in(a,b)}|x(t)|},\tau_{m}\right),\]
where \(\Psi(\rho,\Delta)\) is defined in (23), a contradiction.
Under (33), considering Lemma 13, there exist \(\varepsilon>0\) and \(\rho\in(r_{\Delta}(\varepsilon),1)\) such that
\[\Psi(\frac{1}{\rho},\tau_{m})+\vartheta_{\tau_{m}}-\varepsilon>(b-a).\]
We will similarly show that \(|x(u)|\leq\rho\max_{t\in[\xi-\tau_{m}-\vartheta_{\tau_{m}},\xi]}|x(t)|,u\geq\xi\). Assuming the contrary, by considering the point
\[\psi:=\inf\{u>\xi:|x(u)|>\rho\max_{t\in[\xi-\tau_{m}-\vartheta_{\tau_{m}},\xi] }|x(t)|\},\]
it is easy to see that this point is in a semicycle \((a,b)\) such that
\[\max_{t\in(a,b)}|x(t)|>\rho\max_{t\in[\xi-\tau_{m}-\vartheta_{\tau_{m}},a]}|x(t)|=\rho\max_{t\in[\xi-\tau_{m}-\vartheta_{\tau_{m}},\xi]}|x(t)|, \tag{36}\]
where \(a\geq\xi\). Then in \((a,b)\), the extremum is first assumed at a point \(w\) such that \(x^{\prime}(w)=0\). By Corollaries 7, 12, and (36), we have
\[w \leq b-(\vartheta_{\tau_{m}}-\varepsilon)\] \[w \geq a+\Psi\left(\frac{\max_{t\in[\xi-\tau_{m}-\vartheta_{\tau_{m}}, \xi]}|x(t)|}{\max_{t\in(a,b)}|x(t)|},\tau_{m}\right),\]
a contradiction. The required result follows by considering any sequence of zeros \(\zeta_{n}\) with \(\zeta_{1}:=\xi\) and \(\zeta_{n+1}>\zeta_{n}+\tau_{m}+\vartheta_{\tau_{m}}\). By induction, one obtains
\[|x(u)|\leq\rho^{n}\max_{t\in[\xi-\tau_{m}-\vartheta_{\tau_{m}},\xi]}|x(t)|,u \geq\zeta_{n}.\]
## 5 Classification of solutions for negative coefficient
We now investigate the classification of solutions of (1) with nonpositive coefficient \(p(t)\leq 0,t\in\mathbb{R}\). The following was proven by Kamenskii [25] for continuous parameters and the same proof holds in the measurable case.
**Lemma 15** ([25]).: _Let \(x\) be a positive solution of (1) with nonpositive coefficient \(p(t)\leq 0,t\in\mathbb{R}\), and assume_
\[\int_{0}^{\infty}|p(t)|dt=\infty.\]
_Then \(x(t)\) has one of the following two asymptotic behaviors:_
\[i)\]
\[\lim_{t\to\infty}x(t)=\lim_{t\to\infty}x^{\prime}(t)=+\infty\]
\[ii)\]
\[\lim_{t\to\infty}x(t)=0,\text{ and }x^{\prime}(t)\leq 0,\lim_{t\to\infty}x^{ \prime}(t)=0.\]
The existence and uniqueness of such behaviors, with given initial conditions, was investigated in [31; 39; 40]. Concerning the possibility of decreasing positive solutions, we have the following results, which we present within the framework of Azbelev [2], allowing for zero initial function and discontinuity of the solution or its derivative at the initial point. We recall the definition of the Wronskian of the fundamental system
\[W(t):=\left|\begin{array}{cc}z(t)&y(t)\\ z^{\prime}(t)&y^{\prime}(t)\end{array}\right|,t\geq 0,\]
where \(z,y\) are the solutions of (1) on \([0,+\infty)\), satisfying \(z(t)=y(t)=0,t<0\) and \(z(0)=1,z^{\prime}(0)=0\), and \(y^{\prime}(0)=1,y(0)=0\).
**Lemma 16** ([20], [31, Theorem 4.3.1]).: _Assume that \(p(t)\leq 0,t\in\mathbb{R}\), the function \(t\mapsto(t-\tau(t))\) is nondecreasing, and that_
\[\lim\sup_{t\to\infty}\int_{t-\tau(t)}^{t}(s-t+\tau(t))|p(s)|ds>1. \tag{37}\]
_Then all nonoscillatory solutions of (1) are unbounded._
The following result was first proven in [29, p. 74]. For extensions to higher-order equations, we refer the reader to [12; 28; 29; 30].
**Lemma 17** ([1, Theorem 17.14], [4, Theorem 10.2], [29, p. 74]).: _Assume that \(p(t)\leq 0,t\in\mathbb{R}\). The existence of positive nonincreasing solutions of the inequality_
\[x^{\prime\prime}(t)+p(t)x(t-\tau(t))\geq 0,t\in[0,+\infty), \tag{38}\]
_is equivalent to existence of positive nonincreasing solutions of the equation_
\[x^{\prime\prime}(t)+p(t)x(t-\tau(t))=0,t\in[0,+\infty), \tag{39}\]
_and to the nonvanishing of the Wronskian of the fundamental system on \([0,+\infty)\)._
**Remark 18**.: _Lemma 17 holds in the general case of not necessarily bounded delay and integrable coefficient._
**Lemma 19** ([1, Theorem 17.14], [4, Theorem 10.2], [28], [29, p. 74]).: _Assume that_
\[\tau_{m}\sqrt{\operatorname*{esssup}_{t\in\mathbb{R}}|p(t)|}\leq\frac{2}{e}.\]
_Then the Wronskian of the fundamental system of (1) has no zeros on \([0,+\infty)\)._
**Remark 20**.: _In virtue of Lemma 13 and the monotonicity of \(\Psi(1,\Delta)\), defined in (23), setting \(\gamma\) as the unique solution of_
\[\Psi(1,\gamma)=\gamma\in[\sqrt{2},\frac{\pi}{2}], \tag{40}\]
_we have_
\[\Psi(1,\Delta)>\Delta,\text{ if }\Delta<\gamma. \tag{41}\]
For oscillatory solutions of (1) with nonpositive coefficient \(p(t)\leq 0,t\in\mathbb{R}\), we have the following result.
**Theorem 21**.: _Consider an oscillatory solution \(x\) of (1), with \(p(t)\leq 0,t\in\mathbb{R}\) and assume that_
\[\tau_{m}\sqrt{\operatorname*{esssup}_{t\in\mathbb{R}}|p(t)|}\leq\gamma, \tag{42}\]
_where \(\gamma\) is defined in (40). Then \(x(t)\) is bounded. If, further, the strict inequality holds in (42), then \(x(t)\) tends to zero at infinity._
**Proof.** By Lemma 1, it suffices to consider the case \(\operatorname*{esssup}_{t\in\mathbb{R}}|p(t)|=1\) (the case of the ODE, \(\operatorname*{esssup}_{t\in\mathbb{R}}|p(t)|=0\), is trivial). Let us consider a point \(\xi\) such that \(x(\xi)=0\). We shall show that
\[|x(u)|\leq\max_{t\in[\xi-\tau_{m}-\vartheta_{\tau_{m}},\xi]}|x(t)|,u\geq\xi, \tag{43}\]
where \(\vartheta_{(\cdot)}\) is as in Lemma 3. We may assume \(\max_{t\in[\xi-\tau_{m}-\vartheta_{\tau_{m}},\xi]}|x(t)|>0\) without loss of generality. Assuming the contrary, that (43) does not hold, by considering the point
\[\psi:=\inf\{u>\xi:|x(u)|>\max_{t\in[\xi-\tau_{m}-\vartheta_{\tau_{m}},\xi]}|x( t)|\},\]
it is easy to see that this point is in a semicycle \((a,b)\) such that
\[\max_{t\in(a,b)}|x(t)|>\max_{t\in[\xi-\tau_{m}-\vartheta_{\tau_{m}},a]}|x(t)| =\max_{t\in[\xi-\tau_{m}-\vartheta_{\tau_{m}},\xi]}|x(t)|, \tag{44}\]
where \(a\geq\xi\). We may assume that the extremum of this semicycle is positive, without loss of generality. Then in \((a,b)\), the maximum is first assumed at a point \(w\) such that \(x^{\prime}(w)=0.\) Using
(1), and \(p(t)\leq 0,t\in\mathbb{R},\) there exists a set of positive measure consisting of points \(v>w\) such that \(x^{\prime\prime}(v)<0,\) and also \(\tau(v)<a\). By Corollary 12, and (44), we have for such points \(v\)
\[v > w\geq a+\Psi\left(\frac{\max_{t\in[\xi-\tau_{m}-\vartheta_{\tau_{m} },\xi]}|x(t)|}{\max_{t\in(a,b)}|x(t)|},\tau_{m}\right)\] \[a > \tau(v),\]
where \(\Psi(\rho,\Delta)\) is defined in (23).
Under the strict inequality in (42) we have by (41), and Lemma 13, that \(\Psi(\rho,\tau_{m})\geq\tau_{m}\) for a \(\rho>1.\)
We will similarly show that
\[|x(u)|\leq\frac{1}{\rho}\max_{t\in[\xi-\tau_{m}-\vartheta_{\tau_{m}},\xi]}|x( t)|,u\geq\xi.\]
Assuming the contrary, by considering the point
\[\psi:=\inf\{u>\xi:|x(u)|>\frac{1}{\rho}\max_{t\in[\xi-\tau_{m}-\vartheta_{ \tau_{m}},\xi]}|x(t)|\},\]
it is easy to see that this point is in a semicycle \((a,b)\) such that
\[\max_{t\in(a,b)}|x(t)|>\frac{1}{\rho}\max_{t\in[\xi-\tau_{m}-\vartheta_{\tau_ {m}},a]}|x(t)|=\frac{1}{\rho}\max_{t\in[\xi-\tau_{m}-\vartheta_{\tau_{m}},\xi ]}|x(t)|, \tag{45}\]
where \(a\geq\xi\). We may assume that the extremum of this semicycle is positive, without loss of generality. Then in \((a,b)\), the maximum is first assumed at a point \(w\) such that \(x^{\prime}(w)=0.\) Using (1), and \(p(t)\leq 0,t\in\mathbb{R},\) there exists a set of positive measure consisting of points \(v>w\) such that \(x^{\prime\prime}(v)<0,\) and also \(\tau(v)<a\). By Corollary 12, and (45), we have for such points \(v\)
\[v > w\geq a+\Psi\left(\frac{\max_{t\in[\xi-\tau_{m}-\vartheta_{\tau_ {m}},\xi]}|x(t)|}{\max_{t\in(a,b)}|x(t)|},\tau_{m}\right)\] \[a > \tau(v).\]
The required result follows by considering any sequence of zeros \(\zeta_{n}\) with \(\zeta_{1}:=\xi\) and \(\zeta_{n+1}>\zeta_{n}+\tau_{m}+\vartheta_{\tau_{m}}\). By induction, one obtains
\[|x(u)|\leq\left(\frac{1}{\rho}\right)^{n}\max_{t\in[\xi-\tau_{m}-\vartheta_{ \tau_{m}},\xi]}|x(t)|,u\geq\zeta_{n}.\]
**Corollary 22**.: _Assume \(p(t)\leq 0,t\in\mathbb{R},\)\(\int_{0}^{\infty}|p(t)|dt=\infty\) and \(\tau_{m}\sqrt{\operatorname{esssup}_{t\in\mathbb{R}}|p(t)|}<\gamma,\) where \(\gamma\) is defined in (40). Then any given solution \(x\) of (1) has exactly one of the following asymptotic behaviors:_
_i)_
\[x\text{ is oscillatory and }\lim_{t\to\infty}x(t)=0;\]
_ii)_
\[x\text{ is nonoscillatory, tends to zero monotonically, and }\lim_{t\to\infty}x^{\prime}(t)=0;\]
_iii)_
\[\lim_{t\to\infty}x(t)=\lim_{t\to\infty}x^{\prime}(t)=+\infty;\]
_iv)_
\[\lim_{t\to\infty}x(t)=\lim_{t\to\infty}x^{\prime}(t)=-\infty.\]
If furthermore, \(\tau_{m}\sqrt{\operatorname{esssup}_{t\in\mathbb{R}}|p(t)|}\leq\frac{2}{e}\), there must exist solutions exhibiting \(ii)\).
## 6 Discussion and open problems
The following example illustrates the import and information of Theorem 14.
**Example 23**: _Consider the equation_
\[x^{\prime\prime}(t)+x(t-4)=0. \tag{46}\]
_Here \(\tau_{m}=4>2\sqrt{2}\approx 2.83\) and Theorem 14 implies that unbounded solutions of (46) have semicycles of length greater than \(2\sqrt{2}\), whereas oscillatory solutions with semicycles of smaller length tend to zero. It is known [22, p. 135] that Equation (46) possesses a spectral decomposition, and its real eigensolutions are of the form \(x(t)=\operatorname{Re}\left(\exp(\lambda t)\right)\), or \(x(t)=\operatorname{Im}\left(\exp(\lambda t)\right)\), where \(\lambda\) is a solution of_
\[\lambda^{2}+\exp(-4\lambda)=0.\]
_Using the Lambert \(W-\)function (on which see [9]), we may express the eigenvalues \(\lambda\) in the form_
\[\lambda=\frac{1}{2}W_{n}(\pm 2i),n\in\mathbb{Z}.\]
_Let us consider the eigenvalues_
\[\frac{1}{2}W_{0}(\pm 2i) \approx 0.34\pm 0.37i,\] \[\frac{1}{2}W_{1}(+2i) \approx -0.57+3.05i\] \[\frac{1}{2}W_{1}(-2i) \approx -0.21+1.50i\] \[\frac{1}{2}W_{2}(+2i) \approx -0.92+6.21i\] \[\frac{1}{2}W_{2}(-2i) \approx -0.77+4.63i\]
_We notice that, as a necessary consequence of Theorem 14, the unbounded solutions corresponding to eigenvalues with positive real part have semicycles of length_
\[\frac{\pi}{|\operatorname{Im}\frac{1}{2}W_{0}(\pm 2i)|}\approx 8.45>2\sqrt{2} \approx 2.83,\]
_while the solutions with semicycles of length less than \(2\sqrt{2}\)_
\[\frac{\pi}{|\operatorname{Im}\frac{1}{2}W_{1}(+2i)|} \approx 1.03<2\sqrt{2}\approx 2.83\] \[\frac{\pi}{|\operatorname{Im}\frac{1}{2}W_{1}(-2i)|} \approx 2.09<2\sqrt{2}\approx 2.83\] \[\frac{\pi}{|\operatorname{Im}\frac{1}{2}W_{2}(+2i)|} \approx 0.51<2\sqrt{2}\approx 2.83\] \[\frac{\pi}{|\operatorname{Im}\frac{1}{2}W_{2}(-2i)|} \approx 0.68<2\sqrt{2}\approx 2.83.\]
_correspond to eigenvalues with negative real part, thus tend to zero._
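These numbers can be reproduced directly. The following is a minimal numerical sketch (assuming NumPy and SciPy are available, and that the branch labelling of `scipy.special.lambertw` matches the \(W_{n}\) used above) that verifies \(\lambda^{2}+e^{-4\lambda}=0\) for \(\lambda=\frac{1}{2}W_{n}(\pm 2i)\) and prints the corresponding semicycle length \(\pi/|\operatorname{Im}\lambda|\).

```python
import numpy as np
from scipy.special import lambertw

for n in (0, 1, 2):
    for z in (2j, -2j):
        lam = complex(0.5 * lambertw(z, k=n))       # eigenvalue lambda = W_n(z)/2
        residual = abs(lam**2 + np.exp(-4 * lam))   # should vanish
        semicycle = np.pi / abs(lam.imag)           # semicycle length of the eigensolution
        print(f"n={n}, z={z}: lambda={lam:.3f}, residual={residual:.1e}, "
              f"semicycle={semicycle:.2f}, unbounded={lam.real > 0}")
```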
We now present examples illustrating the sharpness of the bounds obtained in Theorem 14 in the cases \(\tau_{m}=0,\tau_{m}\geq 2\sqrt{2}.\)
**Example 24**.: _Fix an arbitrarily small \(\varepsilon\geq 0\). Then by the inequality \(\tan(x)>\tanh(x),x\in(0,\frac{\pi}{2}),\) we have \(\varepsilon-\arctan\left(\frac{\sinh(\varepsilon)}{\cosh(\varepsilon)}\right) \geq 0\), and we set \(A:=\pi+\varepsilon-\arctan\frac{\sinh(\varepsilon)}{\cosh(\varepsilon)}\geq\pi.\) Consider the solution of_
\[x^{\prime\prime}(t)+p(t)x(t)=0\] \[p(t):=\left\{\begin{array}{ll}-1,&t\in[nA,nA+\varepsilon]\\ +1,&t\in[nA+\varepsilon,(n+1)A]\end{array}\right.\] \[x(0)=0,x^{\prime}(0)=1\]
_where \(n=0,1,2,...\) Obviously,_
\[x(t)=\left\{\begin{array}{ll}\left(-\sqrt{\left(\sinh(\varepsilon)\right)^{2}+\left(\cosh(\varepsilon)\right)^{2}}\right)^{n}\sinh\left(t-nA\right),&t\in[nA,nA+\varepsilon],\\ \left(-1\right)^{n}\left(\sqrt{\left(\sinh(\varepsilon)\right)^{2}+\left(\cosh(\varepsilon)\right)^{2}}\right)^{n+1}\sin\left(t-nA-\varepsilon+\arctan\frac{\sinh(\varepsilon)}{\cosh(\varepsilon)}\right),&t\in[nA+\varepsilon,(n+1)A].\end{array}\right.\]
_Hence, \(x(t)\) has semicycles of length \(A\geq\pi\), does not tend to zero when \(\varepsilon=0\) (where \(A=\pi\)), and is unbounded when \(\varepsilon>0\)._
_where \(n=0,1,2,...\) Obviously we have_
\[y(t)=\left\{\begin{array}{l}\left(-1\right)^{n}\left[1+\varepsilon^{2}\right]^{n }\left(1-\frac{1}{2}\left(\sqrt{2}-\left(t-2nB\right)\right)^{2}\right),\\ t\in\left[2nB,2nB+\sqrt{2}\right],\\ \left(-1\right)^{n}\left[1+\varepsilon^{2}\right]^{n}\left(1+\frac{1}{2} \left(t-2nB-\sqrt{2}\right)^{2}\right),\\ t\in\left[2nB+\sqrt{2},2nB+\sqrt{2}+\varepsilon\right],\\ \left(-1\right)^{n}\left[1+\varepsilon^{2}\right]^{n}\left[1+\frac{1}{2} \varepsilon^{2}-\frac{1}{2}\left(t-\left[2nB+\sqrt{2}+\varepsilon\right] \right)^{2}+\varepsilon\left(t-\left[2nB+\sqrt{2}+\varepsilon\right]\right) \right],\\ t\in\left[2nB+\sqrt{2}+\varepsilon,2nB+\sqrt{2}+2\varepsilon\right],\\ \left(-1\right)^{n}\left[1+\varepsilon^{2}\right]^{n+1}\left(1-\frac{1}{2} \left(t-\left[2nB+\sqrt{2}+2\varepsilon\right]\right)^{2}\right),\\ t\in\left[2nB+\sqrt{2}+2\varepsilon,(2n+1)B\right].\end{array}\right.\]
_Hence, \(y(t)\) has semiccycles of length \(B,\) does not tend to zero whenever \(B=2\sqrt{2}\) and is unbounded when \(B>2\sqrt{2}\)._
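The construction in Example 24 can also be checked numerically. The following is a minimal sketch (assuming NumPy and SciPy; the value of \(\varepsilon\) is arbitrary) that integrates \(x^{\prime\prime}+p(t)x=0\) with the piecewise coefficient of Example 24 and compares \(|x^{\prime}(nA)|\) with the predicted growth factor \(\left(\sqrt{\sinh^{2}\varepsilon+\cosh^{2}\varepsilon}\right)^{n}\) per semicycle.

```python
import numpy as np
from scipy.integrate import solve_ivp

eps = 0.3
A = np.pi + eps - np.arctan(np.tanh(eps))
growth = np.sqrt(np.sinh(eps)**2 + np.cosh(eps)**2)   # predicted factor per semicycle

def p(t):
    # coefficient of Example 24: -1 on [nA, nA + eps], +1 on [nA + eps, (n+1)A]
    return -1.0 if (t % A) <= eps else 1.0

def rhs(t, y):
    x, xp = y
    return [xp, -p(t) * x]

sol = solve_ivp(rhs, [0.0, 5 * A], [0.0, 1.0],
                t_eval=[n * A for n in range(6)],
                rtol=1e-10, atol=1e-12, max_step=0.01)
for n, (x, xp) in enumerate(zip(sol.y[0], sol.y[1])):
    print(f"t = {n}A: x = {x:+.2e}, |x'| = {abs(xp):.4f}, predicted {growth**n:.4f}")
```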
However, straightforward calculations show that similar periodic examples are not generally possible in the case \(\tau_{m}\in(0,2\sqrt{2}),\) as these would violate the continuity of the first derivative at the zeros. We do note that the bounds on the semicycles vary within a relatively small interval \(\left[2\sqrt{2}\approx 2.83,\pi\approx 3.14\right].\) Hence the present results are rather close to the optimal case, yet deeper investigation of the first derivative at the zeros is required in order to obtain better bounds. Presently, it is unclear how the methods of this paper can be extended so as to include this second dimension without regularity restrictions such as slow oscillation or the separation of the zeros. Perhaps the techniques of [17] may prove useful in this regard.
Ultimately, one also desires upper bounds on the semicycles, as well as estimates on semicycle length depending on the initial function (see partial results in [4, Chapter 12, Chapter 14], [13], [35]). Thus, based on the initial data, one could determine the semicycles, whence one would deduce the asymptotic behavior of the solution.
Examples 24 and 25 have sign-changing coefficient, and we do not presume our results are sharp in the case where the coefficient \(p\) has stable sign. It would be interesting to explore further results in the vein of Theorem 21, [35, p. 39], which directly take into account the sign of the coefficient and its influence on the semicycles.
**Problem 26**.: _Find the optimal constant \(\varkappa>0\) such that under \(\tau_{m}<\varkappa\) and \(-1\leq p(t)\leq 0,t\in\mathbb{R}\), all oscillatory solutions of (1) tend to zero at infinity._
Remark 20, Theorem 21, imply that \(\varkappa\geq\sqrt{2}\). The following Example shows that \(\varkappa\leq\pi\).
**Example 27**.: _The periodic function \(\sin(t)\) solves_
\[y^{\prime\prime}(t)=y(t-\pi),t\in\mathbb{R}\]
_with constant delay \(\tau_{m}=\pi.\)_
All the known examples of solutions on the boundary between solutions that tend to zero and unbounded solutions are periodic, for both the second-order and the first-order equation. Presuming the optimal constant corresponds to periodic oscillatory solutions of (1), one should have \(\varkappa\approx\pi\), as in the above Example. As one cannot assume that an arbitrary oscillatory solution is periodic in the investigation of this problem, it remains open.
## 7 Acknowledgments
The first author was supported by the NSERC Grant RGPIN-2020-03934. The third author acknowledges the support of Ariel University.
|
2305.12466
|
Types of the geodesic motions in Kerr-Sen-AdS$_{4}$ spacetime
|
We consider the geodesic motions in the Kerr-Sen-AdS$_4$ spacetime. We obtain
the equations of motion for light rays and test particles. Using the parametric
diagrams, we show some regions where the radial and latitudinal geodesic
motions are allowed. We analyse the impact of the parameter related to the dilatonic
scalar on the orbit and find that it results in richer and more complex
orbital types.
|
Ziqiang Cai, Tong-Yu He, Wen-Qian Wang, Zhan-Wen Han, Rong-Jia Yang
|
2023-05-21T14:05:49Z
|
http://arxiv.org/abs/2305.12466v2
|
# Analytic solutions of the geodesic equation for Kerr-Sen-AdS\({}_{4}\) black holes
###### Abstract
We consider the geodesic motions in the Kerr-Sen-AdS\({}_{4}\) spacetime. We obtain the equations of motion for light rays and test particles and solve them analytically in terms of the Weierstrass functions as well as the Kleinian function. Using the parametric diagrams, we show some regions where the radial and latitudinal geodesic motions are allowed.
## I Introduction
The Event Horizon Telescope has released the observed black hole shadow [1], which offers the chance to gain a deeper understanding of the gravitational fields of massive objects. Studying test particles and light rays in spacetimes has been a matter of interest for a long time: it is an important channel for understanding black holes and predicts a number of observational effects. The study of geodesic motion can be traced back to the early work of Hagihara [2], who solved analytically the equations of motion of test particles and light rays in the Schwarzschild spacetime. It has been shown that the geodesic equations in the Kerr, Reissner-Nordstrom, and Kerr-Newman spacetimes have the same mathematical structure [3]. Since then, many works in the literature have extensively investigated the equations of motion of particles and light rays in various spacetimes, see for example [4; 5; 6; 7; 8; 9]. The geodesic equations in some spacetimes can be solved analytically in terms of the Weierstrass functions and the derivatives of Kleinian functions [5; 7; 10; 11]. These methods have been applied to higher-dimensional black holes [12; 13; 14; 15], to Taub-NUT and wormhole spacetimes [16; 17], and to the Kerr-Sen dilaton-axion black hole [10]. Recently this analytical approach has been further developed and applied to the hyperelliptic case, where the analytical solutions of the equations of motion in the four-dimensional Schwarzschild-(A)dS, Reissner-Nordstrom-(A)dS, and Kerr-(A)dS spacetimes were presented [7; 12; 18; 19; 20; 21]. The motions of test particles were also studied in various black string spacetimes [22; 23; 24; 25; 26; 27]. Recently, Kerr geodesics in terms of Weierstrass elliptic functions were discussed in [28].
In [29], a solution incorporating a nonzero negative cosmological constant into the Kerr-Sen solution was obtained. An analysis of all possible orbits for particles and light in the spacetime of this Kerr-Sen-AdS\({}_{4}\) black hole has not yet been presented. In this paper, we will fill this gap. We will consider the geodesic motion in the background of the Kerr-Sen-AdS\({}_{4}\) black hole. We will analyze the possible orbit types and find the full set of analytic solutions to the equations of motion for test particles and light by using effective potential techniques and parametric diagrams.
This paper is organized as follows. In Section II, we give a brief review of the Kerr-Sen-AdS\({}_{4}\) metric. In Section III, we present the equations for geodesic motions in the Kerr-Sen-AdS\({}_{4}\) spacetime. In Section IV, we give a full analysis of the geodesic equations. In Section V, we discuss the solutions of the geodesic equations. Finally, we briefly summarise and discuss our results in Section VI.
## II The Kerr-Sen-AdS\({}_{4}\) black hole solution
The Lagrangian including a nonzero negative cosmological constant into the four-dimensional gauged Einstein-Maxwell-dilaton-axion theory has the following form
\[\mathcal{L}=\sqrt{-g}\left\{R-\frac{1}{2}\left(\partial\phi\right)^{2}-\frac{1 }{2}e^{2\phi}\left(\partial\chi\right)^{2}-e^{-\phi}F^{2}+\frac{1}{l^{2}} \left[4+e^{-\phi}+e^{\phi}\left(1+\chi^{2}\right)\right]\right\}+\frac{\chi}{2 }\varepsilon^{\mu\nu\rho\lambda}F_{\mu\nu}F_{\rho\lambda}, \tag{1}\]
where \(g\) is the determinant of the metric, \(R\) is the Ricci scalar, \(\phi\) is the dilaton scalar field, \(F_{\mu\nu}\) is the electromagnetic tensor and \(F^{2}=F_{\mu\nu}F^{\mu\nu}\), \(\chi\) is the axion pseudoscalar field dual to the three-form antisymmetric tensor: \(H=-e^{2\phi}\star d\chi\) and \(H^{2}=H_{\mu\nu\sigma}H^{\mu\nu\sigma}\), \(l\) is the cosmological scale, and \(\varepsilon_{\mu\nu\rho\lambda}\) is the four-dimensional Levi-Civita antisymmetric tensor density. A solution for this Lagrangian, called Kerr-Sen-AdS\({}_{4}\) black hole, was obtained in [29]. Written in terms of Boyer-Lindquist coordinates, it takes the following form:
\[\mathrm{d}s^{2}=-\frac{\Delta_{r}}{\rho^{2}}\left(\mathrm{d}t-\frac{a\sin^{2}\theta}{\Xi}\mathrm{d}\varphi\right)^{2}+\frac{\rho^{2}}{\Delta_{r}}\mathrm{d}r^{2}+\frac{\rho^{2}}{\Delta_{\theta}}\mathrm{d}\theta^{2}+\frac{\Delta_{\theta}\sin^{2}\theta}{\rho^{2}}\left(a\mathrm{d}t-\frac{r^{2}+2br+a^{2}}{\Xi}\mathrm{d}\varphi\right)^{2}, \tag{2}\]
where
\[\Delta_{r}=\left(1+\frac{r^{2}+2br}{l^{2}}\right)\left(r^{2}+2br+a^{2}\right) -2Mr, \tag{3}\]
\[\Delta_{\theta}=1-\frac{a^{2}}{l^{2}}\cos^{2}\theta,\ \ \ \ \Xi=1-\frac{a^{2}}{l^{2}},\ \ \ \ \rho^{2}=r^{2}+2br+a^{2}\cos^{2}\theta, \tag{4}\]
in which \(a=J/M\) is the angular momentum per unit mass of the black hole, \(b=Q^{2}/2M\) is the dilatonic scalar charge, \(M\) is the mass of the black hole, and \(Q\) is the charge of the black hole. The horizons in metric (2) are given by \(\Delta_{r}=0\). When \(l\) tends to infinity, the Kerr-Sen-AdS\({}_{4}\) black hole (2) reduces to the Kerr-Sen solution [30]. The contravariant metric components are given by
\[\begin{gathered} g^{tt}=-\frac{(r^{2}+2br+a^{2})^{2}\Delta_{\theta}\sin^{2}\theta-a^{2}\Delta_{r}\sin^{4}\theta}{\rho^{2}\Delta_{\theta}\Delta_{r}\sin^{2}\theta},\\ g^{rr}=\frac{\Delta_{r}}{\rho^{2}},\ \ \ \ g^{\theta\theta}=\frac{\Delta_{\theta}}{\rho^{2}},\ \ \ \ g^{\varphi\varphi}=-\frac{(a^{2}\Delta_{\theta}\sin^{2}\theta-\Delta_{r})\Xi^{2}}{\rho^{2}\Delta_{\theta}\Delta_{r}\sin^{2}\theta},\\ g^{t\varphi}=g^{\varphi t}=\frac{(a\Delta_{r}\sin^{2}\theta-a\Delta_{\theta}(r^{2}+2br+a^{2})\sin^{2}\theta)\Xi}{\rho^{2}\Delta_{\theta}\Delta_{r}\sin^{2}\theta}.\end{gathered} \tag{5}\]
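As a small numerical illustration (a sketch assuming NumPy; the parameter values are hypothetical), the horizon condition \(\Delta_{r}=0\) is a quartic equation in \(r\), whose real positive roots give the horizon radii:

```python
import numpy as np

M, a, b, l = 1.0, 0.6, 0.2, 10.0          # hypothetical parameter values
# Delta_r = (1 + (r^2 + 2 b r)/l^2)(r^2 + 2 b r + a^2) - 2 M r, expanded in powers of r
coeffs = [1.0 / l**2,                               # r^4
          4.0 * b / l**2,                           # r^3
          1.0 + (a**2 + 4.0 * b**2) / l**2,         # r^2
          2.0 * b * (1.0 + a**2 / l**2) - 2.0 * M,  # r^1
          a**2]                                     # r^0
roots = np.roots(coeffs)
horizons = sorted(r.real for r in roots if abs(r.imag) < 1e-10 and r.real > 0)
print("horizon radii:", horizons)
```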
## III The geodesic equations
In this section, we will derive the equations of motion for Kerr-Sen-AdS\({}_{4}\) black hole (2) by using the Hamilton-Jacobi formalism, and later we will introduce effective potentials for the \(r\) and \(\theta\) motion. The Hamilton-Jacobi equation is
\[\frac{\partial S}{\partial\tau}+\frac{1}{2}g^{ij}\frac{\partial S}{\partial x ^{i}}\frac{\partial S}{\partial x^{j}}=0, \tag{6}\]
which can be solved with an ansatz for the action
\[S=\frac{1}{2}\varepsilon\tau-Et+L_{z}\phi+S_{\theta}(\theta)+S_{r}(r). \tag{7}\]
where the parameter \(\varepsilon\) is equal to \(1\) for particles and to \(0\) for light, \(\tau\) is an affine parameter along the geodesic. The energy \(E\) and the angular momentum \(L\), two constants of motion, are related to the the generalized momenta \(P_{t}\) and \(P_{\phi}\) as
\[P_{t}=g_{tt}\dot{t}+g_{t\varphi}\dot{\varphi}=-E,\ \ \ \ P_{\phi}=g_{\varphi \varphi}\dot{\varphi}+g_{t\varphi}\dot{t}=L. \tag{8}\]
Using Eqs. (6), (7), and (8), we have
\[\begin{gathered}\Delta_{\theta}\left(\frac{\partial S}{\partial \theta}\right)^{2}+\varepsilon a^{2}\cos^{2}\theta-\frac{2aEL\Xi-E^{2}a^{2} \sin^{2}\theta}{\Delta_{\theta}}+\frac{L^{2}\Xi^{2}}{\Delta_{\theta}\sin^{2} \theta}\\ =-\Delta_{r}\left(\frac{\partial S}{\partial r}\right)^{2}- \varepsilon\left(r^{2}+2br\right)+\frac{\left(r^{2}+2br+a^{2}\right)^{2}E^{2}+ a^{2}L^{2}\Xi^{2}-2a\left(r^{2}+2br+a^{2}\right)EL\Xi}{\Delta_{r}}.\end{gathered} \tag{9}\]
The left-hand side of equation (9) depends only on \(\theta\) and the right-hand side depends only on \(r\). With the ansatz Eq. (7) and the Carter constant [31], we obtain the equations of motion:
\[\rho^{4}\left(\frac{\mathrm{d}r}{\mathrm{d}\tau}\right)^{2}=-\Delta_{r}\left[K +\varepsilon\left(r^{2}+2br\right)\right]+\left[\left(r^{2}+2br+a^{2}\right)E- aL\Xi\right]^{2}, \tag{10}\]
\[\rho^{4}\left(\frac{\mathrm{d}\theta}{\mathrm{d}\tau}\right)^{2}=\Delta_{\theta} \left(K-\varepsilon a^{2}\cos^{2}\theta\right)-\frac{1}{\sin^{2}\theta}\left(aE \sin^{2}\theta-L\Xi\right)^{2}, \tag{11}\]
\[\rho^{2}\left(\frac{\mathrm{d}\varphi}{\mathrm{d}\tau}\right)=\frac{a\left(r^{ 2}+2br+a^{2}\right)E\Xi-a^{2}L\Xi^{2}}{\Delta_{r}}-\frac{1}{\Delta_{\theta} \sin^{2}\theta}\left(aE\Xi\sin^{2}\theta-L\Xi^{2}\right), \tag{12}\]
\[\rho^{2}\left(\frac{\mathrm{d}t}{\mathrm{d}\tau}\right)=\frac{E\left(r^{2}+2 br+a^{2}\right)^{2}-a\left(r^{2}+2br+a^{2}\right)L\Xi}{\Delta_{r}}-\frac{\sin^{2} \theta}{\Delta_{\theta}}\left(a^{2}E-\frac{aL\Xi}{\sin^{2}\theta}\right). \tag{13}\]
From Eqs. (10) and (11), we introduce two effective potentials \(V_{\mathrm{reff}}\) and \(V_{\theta\mathrm{eff}}\) such that \(V_{\mathrm{reff}}=E\) and \(V_{\theta\mathrm{eff}}=E\), corresponding to \(\left(\frac{\mathrm{d}r}{\mathrm{d}\tau}\right)^{2}=0\) and \(\left(\frac{\mathrm{d}\theta}{\mathrm{d}\tau}\right)^{2}=0\), respectively,
\[V_{\mathrm{reff}}=\frac{aL\Xi\pm\sqrt{\Delta_{r}\left[K+\varepsilon(r^{2}+2 br)\right]}}{r^{2}+2br+a^{2}}, \tag{14}\]
\[V_{\theta\mathrm{eff}}=\frac{L\Xi\pm\sqrt{\Delta_{\theta}(K-\varepsilon a^{2} \cos^{2}\theta)\sin^{2}\theta}}{a\sin^{2}\theta}. \tag{15}\]
To simplify the equations of motion, we adopt the Mino time \(\lambda\) [32], connected to the proper time \(\tau\) via \(\frac{\mathrm{d}\tau}{\mathrm{d}\lambda}=\rho^{2}\); then the equations of motion can be rewritten as
\[\left(\frac{\mathrm{d}r}{\mathrm{d}\lambda}\right)^{2}=-\Delta_{r}\left[K+ \varepsilon\left(r^{2}+2br\right)\right]+\left[\left(r^{2}+2br+a^{2}\right)E- aL\Xi\right]^{2}, \tag{16}\]
\[\left(\frac{\mathrm{d}\theta}{\mathrm{d}\lambda}\right)^{2}=\Delta_{\theta} \left(K-\varepsilon a^{2}\cos^{2}\theta\right)-\frac{1}{\sin^{2}\theta}\left( aE\sin^{2}\theta-L\Xi\right)^{2}, \tag{17}\]
\[\frac{\mathrm{d}\varphi}{\mathrm{d}\lambda}=\frac{a\left(r^{2}+2br+a^{2} \right)E\Xi-a^{2}L\Xi^{2}}{\Delta_{r}}-\frac{1}{\Delta_{\theta}\sin^{2}\theta }\left(aE\Xi\sin^{2}\theta-L\Xi^{2}\right), \tag{18}\]
\[\frac{\mathrm{d}t}{\mathrm{d}\lambda}=\frac{E\left(r^{2}+2br+a^{2}\right)^{2}- a\left(r^{2}+2br+a^{2}\right)L\Xi}{\Delta_{r}}-\frac{\sin^{2}\theta}{\Delta_{ \theta}}\left(a^{2}E-\frac{aL\Xi}{\sin^{2}\theta}\right). \tag{19}\]
Introducing some dimensionless quantities to rescale the parameters
\[\tilde{r}=\frac{r}{M},\ \ \tilde{a}=\frac{a}{M},\ \ \tilde{t}=\frac{t}{M},\ \ \tilde{L}=\frac{L}{M},\ \ \tilde{l}=\frac{l}{M},\ \ \tilde{b}=\frac{b}{M},\ \ \tilde{K}=\frac{K}{M^{2}},\ \ \gamma=M\lambda, \tag{20}\]
then the equations of motion (16)-(19) can be formulated as
\[\left(\frac{\mathrm{d}\tilde{r}}{\mathrm{d}\gamma}\right)^{2}=-\Delta_{\tilde{ r}}\left[\tilde{K}+\varepsilon\left(\tilde{r}^{2}+2\tilde{b}\tilde{r}\right) \right]+\left[\left(\tilde{r}^{2}+2\tilde{b}\tilde{r}+\tilde{a}^{2}\right)E- \tilde{a}\tilde{L}\Xi\right]^{2}=\tilde{R}(\tilde{r}), \tag{21}\]
\[\left(\frac{\mathrm{d}\theta}{\mathrm{d}\gamma}\right)^{2}=\Delta_{\theta} \left(\tilde{K}-\varepsilon\tilde{a}^{2}\cos^{2}\theta\right)-\frac{1}{\sin^{2 }\theta}\left(\tilde{a}E\sin^{2}\theta-\tilde{L}\Xi\right)^{2}=\tilde{\Theta}( \theta), \tag{22}\]
\[\frac{\mathrm{d}\varphi}{\mathrm{d}\gamma}=\frac{\tilde{a}\left(\tilde{r}^{2}+2 \tilde{b}\tilde{r}+\tilde{a}^{2}\right)E\Xi-\tilde{a}^{2}\tilde{L}\Xi^{2}}{ \Delta_{\tilde{r}}}-\frac{1}{\Delta_{\theta}\sin^{2}\theta}\left(\tilde{a}E \Xi\sin^{2}\theta-\tilde{L}\Xi^{2}\right), \tag{23}\]
\[\frac{\mathrm{d}\tilde{t}}{\mathrm{d}\gamma}=\frac{E\left(\tilde{r}^{2}+2\tilde{b} \tilde{r}+\tilde{a}^{2}\right)^{2}-\tilde{a}\left(\tilde{r}^{2}+2\tilde{b} \tilde{r}+\tilde{a}^{2}\right)\tilde{L}\Xi}{\Delta_{\tilde{r}}}-\frac{\sin^{2 }\theta}{\Delta_{\theta}}\left(\tilde{a}^{2}E-\frac{\tilde{a}\tilde{L}\Xi}{ \sin^{2}\theta}\right), \tag{24}\]
where
\[\Delta_{\tilde{r}}=\left(1+\frac{\tilde{r}^{2}+2\tilde{b}\tilde{r}}{\tilde{l} ^{2}}\right)\left(\tilde{r}^{2}+2\tilde{b}\tilde{r}+\tilde{a}^{2}\right)-2 \tilde{r}. \tag{25}\]
The effective potentials can be expressed in terms of dimensionless quantities as
\[\tilde{V}_{\mathrm{reff}}=\frac{\tilde{a}\tilde{L}\Xi\pm\sqrt{\Delta_{\tilde{ r}}\left[\tilde{K}+\varepsilon\left(\tilde{r}^{2}+2\tilde{b}\tilde{r}\right) \right]}}{\tilde{r}^{2}+2\tilde{b}\tilde{r}+\tilde{a}^{2}}, \tag{26}\]
\[\tilde{V}_{\theta\mathrm{eff}}=\frac{\tilde{L}\Xi\pm\sqrt{\Delta_{\theta}\left(\tilde{K}-\varepsilon\tilde{a}^{2}\cos^{2}\theta\right)\sin^{2}\theta}}{\tilde{a}\sin^{2}\theta}. \tag{27}\]
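For concreteness, the radial effective potential of Eq. (26) can be evaluated numerically. The following is a minimal sketch (assuming NumPy; the parameter values are illustrative only) of its two branches outside the outer horizon.

```python
import numpy as np

a, b, l, K, L, eps = 0.6, 0.2, 10.0, 4.0, 1.5, 1   # tilde-quantities, illustrative
Xi = 1.0 - a**2 / l**2

def Delta_r(r):
    return (1.0 + (r**2 + 2 * b * r) / l**2) * (r**2 + 2 * b * r + a**2) - 2.0 * r

def V_reff(r, sign=+1):
    # Eq. (26): (a L Xi +- sqrt(Delta_r [K + eps (r^2 + 2 b r)])) / (r^2 + 2 b r + a^2)
    disc = Delta_r(r) * (K + eps * (r**2 + 2 * b * r))
    return (a * L * Xi + sign * np.sqrt(disc)) / (r**2 + 2 * b * r + a**2)

r = np.linspace(1.5, 30.0, 5)       # sample radii outside the outer horizon
print("V_+ :", np.round(V_reff(r, +1), 4))
print("V_- :", np.round(V_reff(r, -1), 4))
```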
## IV Analysis of the geodesic equations
In this section, we will give a full analysis of the geodesic equations of motion in the Kerr-Sen-AdS\({}_{4}\) spacetime and investigate the possible orbit types.
### Types of latitudinal motion
First we consider the function \(\tilde{\Theta}(\theta)\) in equation (22). Letting \(v=\cos^{2}\theta\) with \(v\in[0,1]\), the function \(\tilde{\Theta}\) can be written as
\[\tilde{\Theta}(v)=\left(1-\frac{\tilde{a}^{2}}{\tilde{l}^{2}}v\right)\left( \tilde{K}-\varepsilon\tilde{a}^{2}v\right)-\frac{1}{1-v}\left[\tilde{a}E\left( 1-v\right)-\tilde{L}\Xi\right]^{2}. \tag{28}\]
Geodesic motion is possible only for \(\tilde{\Theta}(v)\geq 0\). The zeros of \(\tilde{\Theta}(\theta)\) are the turning points of the latitudinal motion. Assuming that \(\tilde{\Theta}(v)\) has some zeros in \([0,1]\), the number of zeros changes only if: (i) a zero crosses \(0\) or \(1\), or (ii) a double zero occurs. If \(v=0\) is a zero, then
\[\tilde{\Theta}(v=0)=\tilde{K}-\left(\tilde{a}E-\tilde{L}\Xi\right)^{2}, \tag{29}\]
or
\[\tilde{L}=\frac{\tilde{a}E\pm\sqrt{\tilde{K}}}{\Xi}. \tag{30}\]
From Eq. (28), we see that \(v=1\) is a pole of \(\tilde{\Theta}(v)\) for \(\tilde{L}\neq 0\). So \(v=1\) is a zero of \(\tilde{\Theta}(v)\) only if \(\tilde{L}=0\), therefore we have
\[\tilde{\Theta}(v=1,\tilde{L}=0)=\left(1-\frac{\tilde{a}^{2}}{\tilde{l}^{2}} \right)\left(\tilde{K}-\varepsilon\tilde{a}^{2}\right). \tag{31}\]
In order to remove the pole of \(\tilde{\Theta}(v)\) at \(v=1\), we consider another function
\[\tilde{\Theta}^{\prime}(v)=(1-v)\left(1-\frac{\tilde{a}^{2}}{\tilde{l}^{2}}v \right)\left(\tilde{K}-\varepsilon\tilde{a}^{2}v\right)-\left[\tilde{a}E \left(1-v\right)-\tilde{L}\Xi\right]^{2}, \tag{32}\]
where \(\tilde{\Theta}(v)=\frac{1}{1-v}\tilde{\Theta}^{\prime}(v)\). The double zeros satisfy the following conditions,
\[\tilde{\Theta}^{\prime}(v)=0\qquad\text{and}\qquad\frac{\mathrm{d}\tilde{ \Theta}^{\prime}(v)}{\mathrm{d}v}=0, \tag{33}\]
which implies
\[\tilde{L}=\frac{\left(6E\pm\sqrt{36E^{2}-\frac{36}{\tilde{l}^{2}}\tilde{K}}\right)\left(-\frac{12}{\tilde{l}^{2}}\tilde{a}^{2}+12\right)}{-\frac{144}{\tilde{l}^{2}}\tilde{a}\Xi}. \tag{34}\]
We plot parametric \(\tilde{L}-E^{2}\) diagrams in Fig. 1 from the condition of \(v=0\) being a zero (Eq. (30)) and the condition of double zeros (Eq. (34)). In regions a and b, geodesic motions are possible. The function \(\tilde{\Theta}\) has a single zero in region a, where the geodesics will cross the equatorial plane (\(\tilde{K}>(\tilde{a}E-\tilde{L}\Xi)^{2}\)). In region b, the function \(\tilde{\Theta}\) has two zeros, corresponding to motion above or below the equatorial plane (\(\tilde{K}<(\tilde{a}E-\tilde{L}\Xi)^{2}\)). If \(\tilde{K}=(\tilde{a}E-\tilde{L}\Xi)^{2}\), the geodesics will remain in the equatorial plane.
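The classification just described can be reproduced numerically. The sketch below (assuming NumPy; parameters are illustrative) locates the zeros of \(\tilde{\Theta}^{\prime}(v)\) of Eq. (32) in \([0,1]\) by a sign-change search and tests the region-a condition \(\tilde{K}>(\tilde{a}E-\tilde{L}\Xi)^{2}\).

```python
import numpy as np

a, l, K, E, L, eps = 0.6, 10.0, 4.0, 0.95, 1.5, 1   # tilde-quantities, illustrative
Xi = 1.0 - a**2 / l**2

def Theta_prime(v):
    # Eq. (32): (1 - v)(1 - a^2 v / l^2)(K - eps a^2 v) - (a E (1 - v) - L Xi)^2
    return (1 - v) * (1 - a**2 / l**2 * v) * (K - eps * a**2 * v) \
           - (a * E * (1 - v) - L * Xi)**2

v = np.linspace(0.0, 1.0, 100001)
f = Theta_prime(v)
zeros = v[np.where(np.diff(np.sign(f)) != 0)[0]]
print("approximate zeros of Theta'(v) in [0,1]:", np.round(zeros, 4))
print("region a (crosses the equatorial plane):", K > (a * E - L * Xi)**2)
```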
### Types of radial motion
The zeros of the function \(\tilde{R}\) in Eq. (21) are the turning points of orbits of particles and lights, so \(\tilde{R}\) determines the possible types of orbits. Unlike the function \(\tilde{\Theta}\), the number of real zeros of \(\tilde{R}\) changes if a double zero occurs:
\[\tilde{R}(\tilde{r})=0\qquad\text{and}\qquad\frac{\text{d}\tilde{R}(\tilde{r} )}{\text{d}\tilde{r}}=0. \tag{35}\]
From the double zero condition we can plot parametric \(\tilde{L}-E^{2}\) diagrams, see, for example, Fig. 2. The polynomial \(\tilde{R}\) has no zeros in region I, 2 negative zeros in region II, 1 negative and 1 positive zero in region III, 3 positive and 1 negative zeros in region IV, 5 positive and 1 negative zeros in region V. In regions marked with the letter "a", the orbits cross \(\theta=\pi/2\) but not \(\tilde{r}=0\), whereas in regions marked with the letter "b", \(\tilde{r}=0\) can be crossed but \(\theta=\pi/2\) is never crossed. The \(\theta\) equation does not allow geodesic motion in the grey areas. In Fig. 7, we show the effective potential together with examples of energies for different orbit types. The green and blue curves represent the two branches of the effective potential. The red dots, which are the turning points of the orbits, denote the zeros of the polynomial \(R\). The red dashed lines in the grey area correspond to energies. Since \(R<0\), no motion is possible in the grey area. The \(\theta\) equation does not allow geodesic motion (\(\Theta<0\)) in the oblique lines area.
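A minimal numerical sketch of this zero counting (assuming NumPy and its polynomial module; the parameter values are illustrative) builds \(\tilde{R}(\tilde{r})\) of Eq. (21) as a degree-six polynomial for \(\varepsilon=1\) and counts its real and positive zeros:

```python
import numpy as np
from numpy.polynomial import Polynomial as P

a, b, l, K, E, L, eps = 0.6, 0.2, 10.0, 4.0, 0.95, 1.5, 1   # illustrative values
Xi = 1.0 - a**2 / l**2

r2 = P([0.0, 2 * b, 1.0])                          # r^2 + 2 b r
Delta_r = (1 + r2 / l**2) * (r2 + a**2) - P([0.0, 2.0])
R = -Delta_r * (K + eps * r2) + ((r2 + a**2) * E - a * L * Xi)**2

real_zeros = sorted(z.real for z in R.roots() if abs(z.imag) < 1e-8)
print("degree of R:", R.degree())
print("real zeros:", np.round(real_zeros, 4))
print("number of positive zeros:", sum(z > 0 for z in real_zeros))
```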
## V Solution of the geodesic equation
In this section, we will investigate the analytical solutions of the geodesic equations (21)-(24) in the Kerr-Sen-AdS\({}_{4}\) spacetime. We will consider each equation separately and present the solutions of them in terms of the Weierstrass functions as well as the Kleinian function.
### The \(\theta\) equation
We first discuss the equation (22) which describes the \(\theta\) motion
\[\left(\frac{\mathrm{d}\theta}{\mathrm{d}\gamma}\right)^{2}=\tilde{\Theta}( \theta)=\Delta_{\theta}\left(\tilde{K}-\varepsilon\tilde{a}^{2}\cos^{2}\theta \right)-\frac{1}{\sin^{2}\theta}\left(\tilde{a}E\sin^{2}\theta-\tilde{L} \Xi\right)^{2}. \tag{36}\]
Substituting \(v=\cos^{2}\theta\), this equation can be simplified as
\[\left(\frac{\mathrm{d}v}{\mathrm{d}\gamma}\right)^{2}=4v\tilde{\Theta^{ \prime}}(v)=4v\left(1-v\right)\left(1-\frac{\tilde{a}^{2}}{\tilde{l}^{2}}v \right)\left(\tilde{K}-\varepsilon\tilde{a}^{2}v\right)-4v\left[\tilde{a}E(1- v)-\tilde{L}\Xi\right]^{2}. \tag{37}\]
This equation is a polynomial of order four when \(\varepsilon=1\) and a polynomial of order three when \(\varepsilon=0\).
#### v.1.1 Timelike geodesics
Taking \(\varepsilon=1\), as mentioned above, equation (37) is a polynomial of order four. Applying a substitution, \(v=\xi^{-1}\), we can reduce the polynomial to one of order three, i.e.
\[\begin{split}\left(\frac{\mathrm{d}\xi}{\mathrm{d}\gamma}\right)^{2}=4\xi^{3}\left(\tilde{K}-\tilde{a}^{2}E^{2}+2\tilde{a}\tilde{L}E\Xi-\tilde{L}^{2}\Xi^{2}\right)+4\xi^{2}\left(-\tilde{a}^{2}-\frac{\tilde{K}\tilde{a}^{2}}{\tilde{l}^{2}}-\tilde{K}+2\tilde{a}^{2}E^{2}-2\tilde{a}\tilde{L}E\Xi\right)\\ +4\xi\left(\frac{\tilde{a}^{4}}{\tilde{l}^{2}}+\tilde{a}^{2}+\frac{\tilde{K}\tilde{a}^{2}}{\tilde{l}^{2}}-\tilde{a}^{2}E^{2}\right)-\frac{4\varepsilon\tilde{a}^{4}}{\tilde{l}^{2}}=\sum_{i=0}^{3}a_{i}\xi^{i}.\end{split} \tag{38}\]
Applying another substitution, \(\xi=\frac{1}{a_{3}}(4y-\frac{a_{2}}{3})\), Eq. (38) can be rewritten as
\[\left(\frac{\mathrm{d}y}{\mathrm{d}\gamma}\right)^{2}=4y^{3}-g_{2}y-g_{3}, \tag{39}\]
where \(g_{2}\), \(g_{3}\) are Weierstrass invariants which are given by
\[g_{2}=\frac{1}{16}\left(\frac{4}{3}a_{2}^{2}-4a_{1}a_{3}\right), \tag{40}\]
\[g_{3}=\frac{1}{16}\left(\frac{1}{3}a_{1}a_{2}a_{3}-\frac{2}{27}a_{2}^{3}-a_{0}a _{3}^{2}\right). \tag{41}\]
The equation (39) can be solved by the Weierstrass \(\wp\) function [18]
\[y(\gamma)=\wp(\gamma-\gamma_{\theta,\text{in}};g_{2},g_{3}), \tag{42}\]
where \(y_{0}=\frac{a_{3}}{4\cos^{2}\theta_{0}}+\frac{a_{2}}{12}\) and \(\gamma_{\theta;\text{in}}=\gamma_{0}+\int_{y_{0}}^{\infty}\frac{\text{d}y^{ \prime}}{\sqrt{4y^{\prime 3}-g_{2}y^{\prime}-g_{3}}}\). Finally, the solution to the equation (22) is
\[\theta(\gamma)=\arccos\left(\pm\sqrt{\frac{a_{3}}{4\wp(\gamma-\gamma_{\theta, \text{in}};g_{2},g_{3})-\frac{a_{2}}{3}}}\right). \tag{43}\]
Figure 7: Plots of the effective potential together with examples of energies for different orbit types. The blue and green curves show the two branches of the effective potential. The red dashed lines correspond to energies. The red dots mark the zeros of the polynomial \(R\). No motion is possible in the grey area. No \(\theta\) geodesic motions are allowed in the area dashed with oblique lines. The vertical black dashed lines represent the positions of the horizons.
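The reduction above is easy to check numerically. The following short sketch (plain Python; parameters are illustrative) evaluates the cubic coefficients \(a_{i}\) of Eq. (38) and the Weierstrass invariants \(g_{2},g_{3}\) of Eqs. (40)-(41).

```python
a, l, K, E, L = 0.6, 10.0, 4.0, 0.95, 1.5          # tilde-quantities, illustrative
Xi = 1.0 - a**2 / l**2

# cubic coefficients of Eq. (38), sum_i a_i xi^i
a3 = 4.0 * (K - a**2 * E**2 + 2 * a * L * E * Xi - L**2 * Xi**2)
a2 = 4.0 * (-a**2 - K * a**2 / l**2 - K + 2 * a**2 * E**2 - 2 * a * L * E * Xi)
a1 = 4.0 * (a**4 / l**2 + a**2 + K * a**2 / l**2 - a**2 * E**2)
a0 = -4.0 * a**4 / l**2

# Weierstrass invariants of Eqs. (40)-(41)
g2 = (4.0 / 3.0 * a2**2 - 4.0 * a1 * a3) / 16.0
g3 = (a1 * a2 * a3 / 3.0 - 2.0 / 27.0 * a2**3 - a0 * a3**2) / 16.0
print("a_i =", [round(x, 4) for x in (a0, a1, a2, a3)])
print("g2 =", round(g2, 6), " g3 =", round(g3, 6))
```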
#### Null geodesics
When \(\varepsilon=0\), equation (37) can be written as
\[\begin{split}\left(\frac{\mathrm{d}v}{\mathrm{d}\gamma}\right)^{2}= 4\left(\frac{\tilde{a}^{2}}{\tilde{l}^{2}}\tilde{K}-\tilde{a}^{2}E^{2}\right)v ^{3}+\left(8\tilde{a}^{2}E^{2}-8\tilde{a}E\tilde{L}\Xi-4\tilde{K}\frac{\tilde{ a}^{2}}{\tilde{l}^{2}}-4\tilde{K}\right)v^{2}+\\ \left(4\tilde{K}-4\tilde{L}^{2}\Xi^{2}+8\tilde{a}E\tilde{L}\Xi-4 \tilde{a}^{2}E^{2}\right)v=\sum_{i=1}^{3}b_{i}v^{i}.\end{split} \tag{44}\]
Equation (44) involves a polynomial of degree three. Using the standard substitution \(v=\frac{1}{b_{3}}(4y-\frac{b_{2}}{3})\) with \(4v\tilde{\Theta}^{\prime}(v)=\sum_{i=1}^{3}b_{i}v^{i}\), Eq. (44) is transformed into the form of Eq. (39), and the solution is given by
\[\theta(\gamma)=\arccos\left(\pm\sqrt{\frac{4}{b_{3}}\wp(\gamma-\gamma_{\theta,\mathrm{in}};g_{2},g_{3})-\frac{b_{2}}{3b_{3}}}\right), \tag{45}\]
where \(\gamma_{\theta;\mathrm{in}}\), \(g_{2}\), and \(g_{3}\) are given by as above with \(a_{i}\) replaced by \(b_{i}\).
### The \(\tilde{r}\) equation
The dynamics of \(\tilde{r}\) are described by equation (21) which is a polynomial of order six when \(\varepsilon=1\) or a polynomial of order four when \(\varepsilon=0\). In the following, we will discuss these two cases separately.
#### iii.2.1 Null geodesics
For \(\varepsilon=0\), the right-hand side of equation (21) is a polynomial of order four, which can be handled by the method used above. Substituting \(\tilde{r}=\xi^{-1}+\tilde{r}_{\tilde{R}}\) with \(\tilde{r}_{\tilde{R}}\) a zero of \(\tilde{R}\), equation (21) can be rewritten as
\[\left(\frac{\mathrm{d}\xi}{\mathrm{d}\gamma}\right)^{2}=\tilde{R}(\xi)=\sum_{i=0}^{3}b_{i}\xi^{i}, \tag{46}\]
where \(b_{i}=\frac{1}{(4-i)!}\frac{\mathrm{d}^{(4-i)}\tilde{R}}{\mathrm{d}\tilde{r}^{ (4-i)}}(\tilde{r}_{\tilde{R}})\). Equation (46) can be solved with the help of the Weierstrass elliptic \(\wp\) function
\[\tilde{r}(\gamma)=\frac{b_{3}}{4\wp(\gamma-\gamma_{\tilde{r},\mathrm{in}};g_ {2},g_{3})-\frac{b_{2}}{3}}+\tilde{r}_{\tilde{R}}, \tag{47}\]
where \(y_{0}=\frac{b_{3}}{4(\tilde{r}_{0}-\tilde{r}_{\tilde{R}})}+\frac{b_{2}}{12}\), \(\gamma_{\tilde{r},\mathrm{in}}=\gamma_{0}+\int_{y_{0}}^{\infty}\frac{\mathrm{d}y^{\prime}}{\sqrt{4y^{\prime 3}-g_{2}y^{\prime}-g_{3}}}\), and \(g_{2}\), \(g_{3}\) are defined as in Section V.A with \(a_{i}=b_{i}\).
#### iii.2.2 Timelike geodesics
For massive particles, \(\varepsilon=1\), equation (21) is of the hyperelliptic type. Using the substitution \(\tilde{r}=\pm\frac{1}{u}+\tilde{r}_{\tilde{R}}\), it can be rewritten as
\[\left(u\frac{\mathrm{d}u}{\mathrm{d}\gamma}\right)^{2}=\tilde{R}_{u}\equiv \sum_{i=0}^{5}c_{i}u^{i}, \tag{48}\]
where \(c_{i}=\frac{(\pm 1)^{i}}{(6-i)!}\frac{\mathrm{d}^{(6-i)}\tilde{R}}{\mathrm{d}\tilde{r}^{(6-i)}}(\tilde{r}_{\tilde{R}})\). The solution to this equation is [7]
\[u(\gamma)=-\frac{\sigma_{1}}{\sigma_{2}}(\gamma_{\sigma}), \tag{49}\]
where \(\sigma_{i}\) is the \(i\)-th derivative of the Kleinian sigma function
\[\sigma(z)=C\exp\left(-\frac{1}{2}z^{t}\eta\omega^{-1}z\right)\theta[g,h]\left( (2\omega)^{-1}z;\tau\right), \tag{50}\]
where \(\tau=(\omega^{-1}\omega^{\prime})\) is the symmetric Riemann matrix, \((2\omega,2\omega^{\prime})\) is the period-matrix, \((2\eta,2\eta^{\prime})\) is the period-matrix of the second type, and \(C\) is a constant. The Riemann theta-function \(\theta[g,h]\) is written as
\[\theta[g;h](z;\tau)=\sum_{m\in\mathbb{Z}^{g}}\exp[i\pi(m+g)^{t}(\tau(m+g)+2z+2h )], \tag{51}\]
The solution of the \(\tilde{r}\) equation is
\[\tilde{r}(\gamma)=\mp\frac{\sigma_{2}}{\sigma_{1}}(\gamma_{\sigma})+\tilde{r} _{\tilde{R}}, \tag{52}\]
where the sign depends on the sign chosen in the substitution \(\tilde{r}=\pm\frac{1}{u}+\tilde{r}_{\tilde{R}}\). This is an analytic solution of the equation of a test particle in the Kerr-Sen-AdS\({}_{4}\) spacetime and is valid in all regions of the spacetime.
### The \(\varphi\) equation
Integrating the differential equation (23), we obtain
\[\varphi-\varphi_{0}=\int_{\tilde{r}_{0}}^{\tilde{r}}\frac{\tilde{a}(\tilde{r} ^{2}+2\tilde{b}\tilde{r}+\tilde{a}^{2})E\Xi-\tilde{a}^{2}\tilde{L}\Xi^{2}}{ \Delta_{\tilde{r}}\sqrt{\tilde{R}(\tilde{r})}}\mathrm{d}\tilde{r}-\int_{ \theta_{0}}^{\theta}\frac{\tilde{a}E\Xi\sin^{2}\theta-\tilde{L}\Xi^{2}}{ \Delta_{\theta}\sin^{2}\theta\sqrt{\tilde{\Theta}(\theta)}}\mathrm{d}\theta=I _{r}-I_{\theta}, \tag{53}\]
where \(\frac{\mathrm{d}\tilde{r}}{\mathrm{d}\gamma}=\sqrt{\tilde{R}(\tilde{r})}\), \(\frac{\mathrm{d}\theta}{\mathrm{d}\gamma}=\sqrt{\tilde{\Theta}(\theta)}\), \(I_{r}\) is an \(\tilde{r}\)-dependent integral, and \(I_{\theta}\) is a \(\theta\)-dependent integral.
#### iii.3.1 The \(\theta\)-dependent integral
Let us consider the integral
\[I_{\theta}=\int_{\theta_{0}}^{\theta}\frac{\tilde{a}E\Xi\sin^{2}\theta-\tilde{ L}\Xi^{2}}{\Delta_{\theta}\sin^{2}\theta\sqrt{\tilde{\Theta}(\theta)}}\mathrm{d}\theta. \tag{54}\]
Substituting \(v=\cos^{2}\theta\), equation (54) can be simplified as
\[I_{\theta}=\mp\int_{v_{0}}^{v}\frac{\tilde{a}E\Xi(1-v)-\tilde{L}\Xi^{2}}{ \Delta_{v}(1-v)\sqrt{4v\tilde{\Theta}^{\prime}}}\mathrm{d}v^{\prime}. \tag{55}\]
Following [7], the solution to \(I_{\theta}\) is given by
\[\begin{split} I_{\theta}=\frac{|a_{3}|}{a_{3}}\Big{\{}(\tilde{a} \Xi E-\tilde{L}\Xi^{2})(v-v_{0})-\sum_{i=1}^{4}\frac{a_{3}}{4\phi^{\prime}(v_{ i})}\left[\zeta(v_{i})(v-v_{0})+\log\frac{\sigma(v-v_{i})}{\sigma(v_{0}-v_{i})}+2 \pi ik_{i}\right]\\ \left[-\frac{\tilde{a}^{3}}{\tilde{l}^{2}}\left(E\Xi+\frac{ \tilde{a}\tilde{L}\Xi^{2}}{\tilde{l}^{2}}\right)(\delta_{i1}+\delta_{i2})+ \tilde{L}\Xi^{2}(\delta_{i3}+\delta_{i4})\right]\Big{\}},\end{split} \tag{56}\]
where \(\wp(v_{1})=\frac{a_{2}}{12}+\frac{4\tilde{a}^{2}a_{3}}{\tilde{l}^{2}}\), \(\wp(v_{3})=\frac{a_{2}}{12}+\frac{a_{3}}{4}\), and \(v=v(\gamma)=\gamma-\gamma_{\theta,\mathrm{in}}\) with \(\gamma_{\theta,\mathrm{in}}\) defined as above.
#### iii.3.2 The \(r\)-dependent integral
Now we consider the \(\tilde{r}\)-dependent integral \(I_{r}\)
\[I_{r}=\int_{\tilde{r}_{0}}^{\tilde{r}}\frac{\tilde{a}(\tilde{r}^{2}+2\tilde{b} \tilde{r}+\tilde{a}^{2})E\Xi-\tilde{a}^{2}\tilde{L}\Xi^{2}}{\Delta_{\tilde{r} }\sqrt{\tilde{R}(\tilde{r})}}\mathrm{d}\tilde{r}. \tag{57}\]
For \(\varepsilon=0\), using the substitutions \(\tilde{r}=\frac{1}{\xi}+\tilde{r}_{\tilde{R}}\) and \(\xi=\frac{1}{b_{3}}(4y-\frac{b_{2}}{3})\), and performing a partial fraction decomposition, and finally substituting \(y=\wp\), we have
\[\begin{split} I_{r}=&\frac{|b_{3}|}{b_{3}}\left\{ \sum_{i=1}^{4}\sum_{j=1}^{2}\frac{C_{i}}{\wp^{\prime}(v_{ij})}\left[\xi\left( v_{ij}\right)\left(v-v_{0}\right)+\log\sigma\left(v-v_{ij}\right)\right. \right.\\ &\left.\left.-\log\sigma\left(v_{0}-v_{ij}\right)\right]-\frac{ \tilde{a}E\Xi\left(\tilde{r}_{\tilde{R}}^{2}+2\tilde{b}\tilde{r}_{\tilde{R}}+ \tilde{a}^{2}\right)-\tilde{a}^{2}\Xi^{2}\tilde{L}}{\Delta_{\tilde{r}=\tilde{ r}_{\tilde{R}}}}\left(v-v_{0}\right)\right\},\end{split} \tag{58}\]
where \(v=v(\gamma)=\gamma-\gamma_{\tilde{r},\text{in}}\) and \(v_{0}=v(\gamma_{0})\).
For a massive particle, \(\varepsilon=1\), applying the substitution \(\tilde{r}=\pm\frac{1}{u}+\tilde{r}_{\tilde{R}}\) and using the same integration method as in subsection VB (for details see [7]), we get the solution
\[\begin{split} I_{r}=&\mp\frac{\tilde{a}u_{0}}{|u_{0 }|}\left\{C_{1}\left(\omega-\omega_{0}\right)+C_{0}\left(f(\omega)-f\left( \omega_{0}\right)\right)\right.\\ &+\sum_{i=1}^{4}\frac{C_{2,i}}{\sqrt{\tilde{R}_{u_{i}}}}\left[ \frac{1}{2}\log\frac{\sigma\left(W^{+}(\omega)\right)}{\sigma\left(W^{-}( \omega)\right)}-\frac{1}{2}\log\frac{\sigma\left(W^{+}\left(\omega_{0}\right) \right)}{\sigma\left(W^{-}\left(\omega_{0}\right)\right)}\right.\\ &\left.-\left(f(\omega)-f\left(\omega_{0}\right),\omega-\omega_{ 0}\right)\left(\int_{u_{i}^{-}}^{u_{i}^{+}}d\tilde{r}\right)\right]\right\} \end{split} \tag{59}\]
where \(\omega_{0}=\omega(\gamma_{0})\), the constants \(C_{i}\) are the coefficients of the partial fractions, \(u_{i}\) are the four zeros of \(\Delta_{\tilde{r}}=\pm\frac{1}{u}+\tilde{r}_{\tilde{R}},u_{0}=\pm(\tilde{r}- \tilde{r}_{\tilde{R}})^{-1}\), and \(W^{\pm}(\omega)=(f(\omega),\omega)^{t}-2\int_{\infty}^{u_{i}^{\pm}}\mathrm{d} \vec{z}\) with \(u_{i}^{\pm}=u_{i}\pm\sqrt{\tilde{R}_{u_{i}}}\).
### The \(t\) equation
Integrating the differential equation (24), we can obtain
\[\tilde{t}-\tilde{t}_{0}=\int_{r_{0}}^{r}\frac{E\left(\tilde{r}^{2}+2\tilde{b} \tilde{r}+\tilde{a}^{2}\right)^{2}-\tilde{a}\left(\tilde{r}^{2}+2\tilde{b} \tilde{r}+\tilde{a}^{2}\right)\tilde{L}\Xi}{\Delta_{\tilde{r}}\sqrt{\tilde{R} }}\mathrm{d}r-\int_{\theta_{0}}^{\theta}\frac{\sin^{2}\theta}{\Delta_{\theta} \sqrt{\tilde{\Theta}}}\left(\tilde{a}^{2}E-\frac{\tilde{a}\tilde{L}\Xi}{\sin^ {2}\theta}\right)\mathrm{d}\theta=\tilde{I}_{r}-\tilde{I}_{\theta}. \tag{60}\]
The solution to \(\tilde{I}_{\theta}\) is given by
\[\tilde{I}_{\theta}=a_{3}(v-v_{0})+\sum_{i=1}^{2}\frac{3a_{3}\tilde{a}^{2}}{ \tilde{l}^{2}\wp^{\prime}(v_{i})}\left[\zeta(v_{i})(v-v_{0})+\log\sigma(v-v_{i })-\log\sigma(v_{0}-v_{i})\right], \tag{61}\]
where \(\wp(v_{1})=\frac{a_{2}}{12}-\frac{3\tilde{a}^{2}a_{3}}{\tilde{l}^{2}}=\wp(v_{ 2}),v=v(\gamma)=2\gamma-\gamma_{\theta,in}\) and \(v_{0}=v(\gamma_{0})\).
For \(\varepsilon=0\), the solution to \(\tilde{I}_{r}\) is
\[\begin{split}\tilde{I}_{r}=&\frac{|b_{3}|}{b_{3}} \left\{\sum_{i=1}^{4}\sum_{j=1}^{2}\frac{\tilde{C}_{i}}{\wp^{\prime}(v_{ij})} \left[\xi\left(v_{ij}\right)\left(v-v_{0}\right)+\log\sigma\left(v-v_{ij} \right)\right.\right.\\ &\left.-\log\sigma\left(v_{0}-v_{ij}\right)\right]\\ &\left.-\frac{\tilde{a}\tilde{L}\Xi\left(\tilde{r}_{\tilde{R}}^{2 }+2\tilde{b}\tilde{r}_{\tilde{R}}+\tilde{a}^{2}\right)-E\left(\tilde{r}_{ \tilde{R}}^{2}+2\tilde{b}\tilde{r}_{\tilde{R}}+\tilde{a}^{2}\right)^{2}}{ \Delta_{\tilde{r}=\tilde{r}_{\tilde{R}}}}\left(v-v_{0}\right)\right\}\end{split} \tag{62}\]
where \(\tilde{C}_{i}\) are the coefficients of the partial fractions, \(\wp(v_{i1})=y_{i}=\wp(v_{i2})\), \(y_{i}\) are the four zeros of \(\Delta_{y(\tilde{r})}\), and \(v=v(\gamma)=\gamma-\gamma_{\tilde{r},\text{in}},v_{0}=v(\gamma_{0})\).
When \(\varepsilon=1\), as done in [7], the solution of the \(\tilde{r}\)-dependent integral is
\[\begin{split}\tilde{I}_{r}=&\frac{u_{0}}{\sqrt{c_{5} }\left|u_{0}\right|}\left\{\tilde{C}_{1}\left(\omega-\omega_{0}\right)+\tilde{ C}_{0}\left(f(\omega)-f\left(\omega_{0}\right)\right)\right.\\ &+\sum_{i=1}^{4}\frac{\tilde{C}_{2,i}}{\sqrt{\tilde{\tilde{R}}_{ u_{i}}}}\left[\frac{1}{2}\log\frac{\sigma\left(W^{+}(\omega)\right)}{\sigma \left(W^{-}(\omega)\right)}-\frac{1}{2}\log\frac{\sigma\left(W^{+}\left(\omega _{0}\right)\right)}{\sigma\left(W^{-}\left(\omega_{0}\right)\right)}\right.\\ &\left.-\left(f(\omega)-f\left(\omega_{0}\right),\omega-\omega_{0 }\right)\left(\int_{u_{i}^{-}}^{u_{+}^{+}}d\vec{r}\right)\right]\right\} \end{split} \tag{63}\]
The constants \(\tilde{C}_{0},\tilde{C}_{1},\tilde{C}_{2}\) are the coefficients of the partial fractions.
## VI Conclusion
In this paper, we have discussed the motion of particles and light rays in the Kerr-Sen-AdS\({}_{4}\) spacetime. We have obtained the geodesic equations and their analytical solutions. The geodesic equations can be solved in terms of the Weierstrass elliptic functions and the derivatives of Kleinian functions. Using the parametric diagrams, we have shown some regions where the \(\tilde{r}\) and the \(\theta\) geodesic motions are allowed. The analytical solutions of the equations of motion presented here may be used to study the shadow of the Kerr-Sen-AdS\({}_{4}\) black hole.
###### Acknowledgements.
This work is supported in part by Hebei Provincial Natural Science Foundation of China (Grant No. A2021201034).
|
2306.15881
|
Blockwise Feature Interaction in Recommendation Systems
|
Feature interactions can play a crucial role in recommendation systems as
they capture complex relationships between user preferences and item
characteristics. Existing methods such as Deep & Cross Network (DCNv2) may
suffer from high computational requirements due to their cross-layer
operations. In this paper, we propose a novel approach called blockwise feature
interaction (BFI) to help alleviate this issue. By partitioning the feature
interaction process into smaller blocks, we can significantly reduce both the
memory footprint and the computational burden. Four variants (denoted by P, Q,
T, S, respectively) of BFI have been developed and empirically compared. Our
experimental results demonstrate that the proposed algorithms achieve close
accuracy compared to the standard DCNv2, while greatly reducing the
computational overhead and the number of parameters. This paper contributes to
the development of efficient recommendation systems by providing a practical
solution for improving feature interaction efficiency.
|
Weijie Zhao, Ping Li
|
2023-06-28T02:52:51Z
|
http://arxiv.org/abs/2306.15881v1
|
# Blockwise Feature Interaction in Recommendation Systems
###### Abstract.
Feature interactions can play a crucial role in recommendation systems as they capture complex relationships between user preferences and item characteristics. Existing methods such as Deep & Cross Network (DCNv2) may suffer from high computational requirements due to their cross-layer operations. In this paper, we propose a novel approach called **blockwise feature interaction (BFI)** to help alleviate this issue. By partitioning the feature interaction process into smaller blocks, we can significantly reduce both the memory footprint and the computational burden. Four variants (denoted by **P**, **Q**, **T**, **S**, respectively) of BFI have been developed and empirically compared. Our experimental results demonstrate that the proposed algorithms achieve close accuracy compared to the standard DCNv2, while greatly reducing the computational overhead and the number of parameters. This paper contributes to the development of efficient recommendation systems by providing a practical solution for improving feature interaction efficiency.
+
Footnote †: journal: Information Systems
## 1. Introduction
In the recent decade, deep learning has emerged as a powerful technique for building recommendation systems that can effectively capture complex patterns and user preferences (Beng et al., 2015; Chen et al., 2015; Chen et al., 2015; Chen et al., 2015; Chen et al., 2015; Chen et al., 2015; Li et al., 2015; Li et al., 2015; Li et al., 2015; Li et al., 2015; Li et al., 2015; Li et al., 2015; Li et al., 2015). One significant challenge in designing these systems lies in capturing feature interactions, which are essential for achieving high performance and accuracy (Li et al., 2015; Li et al., 2015; Li et al., 2015; Li et al., 2015).
**Feature interactions** refer to the relationships and dependencies between different input features, such as user demographics, historical preferences, and item characteristics. These interactions can be highly non-linear and intricate, making it challenging to model them accurately using traditional linear models or shallow neural networks. The number of potential feature interactions grows exponentially with the number of features, making it infeasible to explicitly enumerate and represent all possible interactions. Even if we only consider second-order feature interactions, we will still have to explicitly introduce a quadratic number of combined features. As the number of features increases, the computational and memory requirements become impractical.
**Feature interactions in deep learning.** Deep learning, with its ability to automatically learn hierarchical representations and complex relationships, has shown great promise in addressing this challenge. For example, Deep & Cross Network (DCN) and its improved version DCNv2 (Li et al., 2015) demonstrate remarkable performance in capturing feature interactions. DCNv2 combines the strengths of deep neural networks and cross-networks, allowing for the learning of both low- and high-order feature interactions. We show an example of cross networks in the left part of Figure 1. By leveraging cross-network architectures, DCNv2 enables the model to explicitly model the interactions between features, enhancing the overall predictive power of the system.
**Cost of cross networks in DCNv2.** Despite its effectiveness, the cost associated with DCNv2 cannot be neglected. As shown in the left part of Figure 1, the complexity of the cross networks in DCNv2 grows quadratically (\(D^{2}\)) with the number of input features (\(D\)), resulting in a substantial computational burden and increased training time. This cost factor is particularly pronounced in large-scale recommendation systems that handle vast amounts of features and require real-time or near-real-time predictions. As a result, mitigating the cost while maintaining the desired level of performance becomes a critical consideration for deploying DCNv2-based recommendation systems.
**Blockwise Feature Interaction (BFI).** In this paper, we aim to explore the trade-off between performance and cost of the cross network in DCNv2 to alleviate the quadratic factor. We propose Blockwise Feature Interaction (BFI), a network architecture that partitions the cross layers into smaller blocks. By splitting the vectors for the fully-connected embedding layers, the \(D^{2}\) cost is reduced to \((\frac{D}{K})^{2}\times K=\frac{D^{2}}{K}\), where D represents the total number of input features, and K is the number of blocks in BFI. The right part of Figure 1 depicts the architecture of BFI when \(K=3\).
**Contributions.** The main contributions of this paper are:
* We introduce Blockwise Feature Interaction (BFI), that reduces both computation and memory consumption of cross networks for modeling feature interactions in recommendation systems.
* We propose four variants of BFI which are designed for saving the memory and evaluation cost, by reducing the number of parameter shuffling and introducing parameter sharing.
* We experimentally evaluate the proposed methods on public datasets. The empirical results confirm the effectiveness of our proposed methods.
## 2. Blockwise Feature Interaction
### Cross Network
As shown in the left part of Figure 1, the cross network operates by taking the outputs of the input layer and feeding them into a series of cross layers. Each cross layer captures a specific level
of interaction between the features. The cross layers consist of a combination of linear and nonlinear operations that enable the model to learn complex feature interactions. The key mechanism in the cross network is the cross operation, which calculates the element-wise product of different features and applies a weight (\(W\)) to each interaction. This weight reflects the importance or contribution of the specific interaction to the overall prediction. By incorporating these weighted feature interactions, the cross network enhances the expressive power of the model, enabling it to capture complex patterns and dependencies that might not be evident in individual features.
During the training process, the cross network learns the weights associated with the feature interactions through backpropagation and gradient descent optimization. The model adjusts the weights to minimize the discrepancy between its predictions and the ground truth labels. This iterative learning process allows the cross network to fine-tune its parameters and improve its ability to capture relevant feature interactions.
The cross network can capture both low-order and high-order feature interactions. Low-order interactions refer to simple pairwise interactions between two features, while high-order interactions involve combinations of three or more features. By incorporating \(C\) cross layers, the cross network can capture increasingly complex and higher-order interactions. The larger \(C\) is, the higher-order feature interaction is captured.
**Cost of cross network.** The cost of feeding forward an instance with \(D\) features through a \(C\)-layer cross network is \(O(D^{2}\cdot C)\)--the \(1\times D\) vector and \(D\times D\) matrix multiplication for each layer. The space complexity is \(O(D^{2}\cdot C)\) for \(C\) weight matrices. Both are quadratic in the number of features, making it inefficient or even infeasible to process in latency-limited real-world systems and to store the feature interaction weights.
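To make the quadratic cost concrete, the following is a minimal PyTorch sketch of a single cross layer, assuming the standard DCNv2 formulation \(\mathbf{x}_{c+1}=\mathbf{x}_{0}\odot(\mathbf{W}\mathbf{x}_{c}+\mathbf{b})+\mathbf{x}_{c}\); the class and variable names are illustrative, not the authors' implementation.

```python
import torch.nn as nn

class CrossLayer(nn.Module):
    """One cross layer: x_{c+1} = x_0 * (W x_c + b) + x_c.
    W is a dense D x D matrix, which is the source of the O(D^2) cost."""
    def __init__(self, dim):
        super().__init__()
        self.linear = nn.Linear(dim, dim)  # D x D weight plus bias

    def forward(self, x0, xc):
        # element-wise interaction with the original features, plus a residual term
        return x0 * self.linear(xc) + xc
```

Stacking \(C\) such layers, each with its own weight matrix, gives the \(O(D^{2}\cdot C)\) time and space cost discussed above.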
### Cross Weight Partitioning
To address the quadratic computational cost associated with capturing feature interactions in recommendation systems, we propose the Blockwise Feature Interaction (BFI) method. BFI aims to reduce the computational overhead while maintaining or even improving the performance of recommendation systems.
The quadratic term in the computational and memory costs arises from the weight matrices \(W\), where the size of \(W\) is \(D\times D\). However, by partitioning the features into \(K\) parts, we can effectively reduce the size of \(W\) to \((D/K)\times(D/K)\), resulting in smaller matrix sizes of \((D/K)^{2}\). Despite having \(K\) matrices due to the partitioning (one matrix for each partition), the total size of these \(K\) matrices is \(D^{2}/K\), which is \(K\) times smaller than the original cross network.
This partitioning strategy offers a practical solution to mitigate the quadratic time and memory costs associated with the cross network, enabling the efficient capture of feature interactions in recommendation systems.
Figure 1. A visual illustration of the difference between the cross network in DCNv2 and our proposed Blockwise Feature Interaction (BFI) architecture, where mult is the element-wise multiplication, [] is the concatenation operator, D is the dimension of the input vector, and K is the block factor in BFI (K=3 in this example).
Algorithm 1 outlines the workflow of our proposed BFI method. The algorithm takes a feature vector, \(x_{0}\), as input along with the number of cross network layers, \(C\), and the BFI factor, \(K\). It aims to produce an embedded feature vector, \(x_{C}\), which represents the input features with captured interactions. The algorithm begins by initializing a loop over the cross network layers, from \(c=1\) to \(C\). Within each layer, the feature vector \(x_{c-1}\) from the previous layer is shuffled and split into \(K\) parts. The purpose of this step is to partition the features into smaller blocks to reduce computational costs. After that, each partitioned block goes through its own embedding layer, and the outputs are merged into the \(D\)-dimensional embedding. The merged embedding is then processed as in a normal cross network layer: element-wise multiplication and addition operations.
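As a concrete illustration of this workflow, below is a minimal PyTorch sketch of one blockwise cross layer. The per-block linear layers, the fixed random permutation per layer, and the un-shuffling step before combining with \(x_{0}\) are illustrative assumptions about details the text leaves implicit.

```python
import torch
import torch.nn as nn

class BFILayer(nn.Module):
    """One blockwise cross layer: shuffle the features, split them into K blocks,
    apply a (D/K) x (D/K) interaction weight per block, merge the blocks, and
    combine with x_0 as in a normal cross layer."""
    def __init__(self, dim, k):
        super().__init__()
        assert dim % k == 0, "D must be divisible by the block factor K"
        self.block = dim // k
        perm = torch.randperm(dim)                        # random shuffle of the feature indices
        self.register_buffer("perm", perm)
        self.register_buffer("inv", torch.argsort(perm))  # used to undo the shuffle
        self.embeds = nn.ModuleList([nn.Linear(self.block, self.block) for _ in range(k)])

    def forward(self, x0, xc):
        parts = xc[:, self.perm].split(self.block, dim=1)                         # shuffle & split
        mixed = torch.cat([emb(p) for emb, p in zip(self.embeds, parts)], dim=1)  # per-block embeddings
        return x0 * mixed[:, self.inv] + xc                                       # un-shuffle, then cross combine
```

Stacking separate instances of this layer draws a different permutation per layer (the "shuffle every layer" variants); reusing one permutation across layers, or shuffling only the raw input once, corresponds to the "shuffle only once" variants.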
### Shuffle & Split
Note that the "shuffle & split" operation mentioned in the algorithm serves two important purposes in capturing feature interactions within the Blockwise Feature Interaction Network.
1. _Addressing Dependencies in Feature Indices_: The shuffling step is necessary because the original feature indices may not be independent. For example, if the feature indices are continuous and represent similar characteristics, it implies that adjacent features may have higher chances of interacting with each other. By shuffling the feature vector, the algorithm introduces randomness and breaks any potential ordering or dependencies among the features. This ensures that the subsequent splitting does not bias the interactions solely based on their original order.
2. _Enabling Interactions Across Blocks_: Without shuffling, each partition would be independent across multiple cross layers. This means that the features within different blocks would have no direct interaction with each other, limiting the network's capacity to capture comprehensive feature dependencies. By shuffling the feature vector, the algorithm introduces randomness in the arrangement of the features, allowing for cross-block interactions to occur. As a result, the subsequent processing of each partition in the cross network layers can capture and model feature interactions both within and across the blocks, enhancing the network's ability to learn complex dependencies.
### Interaction Weight Sharing
After applying BFI, the memory required for storing the feature interaction weights is reduced from \(O(D^{2})\) to \(O(D^{2}/K)\). We find that there is a potential to further reduce the memory footprint by making all partitions share a single embedding.
The observation is that during the embedding process, the feature interaction weights take an embedding as input and generate new embeddings for higher-order interactions. The only difference across the partitions is that they use embeddings from different partitioned blocks. Therefore, we can eliminate the need for separate embeddings per partition and instead share a single embedding across all partitions.
By sharing one embedding across all partitions, the memory requirement can be substantially reduced to \(O(D^{2}/K^{2})\). This reduction is significant because when training and inferencing deep learning models, the computations are often performed on GPUs, which typically have limited memory capacity. The difference in memory consumption by a factor of \(K\) can be critical in determining whether the model fits within the available GPU memory or not.
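Building on the sketch above, the weight-sharing idea can be expressed by letting every block reuse a single per-block weight; again, this is only a hypothetical sketch, not the authors' code.

```python
import torch.nn as nn

class SharedBFILayer(BFILayer):
    """Weight-sharing variant: all K blocks reuse one (D/K) x (D/K) weight,
    reducing the interaction parameters per layer to D^2/K^2."""
    def __init__(self, dim, k):
        super().__init__(dim, k)
        shared = nn.Linear(self.block, self.block)
        self.embeds = nn.ModuleList([shared] * k)  # the same module referenced K times
```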
In Section 3, we enumerate the following two options: (1) whether to shuffle at each cross layer to enable cross-block interaction, or to shuffle only once on the original features to address the feature index dependencies; (2) whether to enable weight sharing. The 4 variants are denoted by P, Q, T, and S, respectively (Table 1).
## 3. Experimental Evaluation
The objective of the experimental evaluation is to investigate the overall performance, as well as the impact of parameters, of the proposed method. Specifically, we aim to answer the following questions in the experiments:
* What is the impact of BFI on the performance of cross networks?
* How does the shuffling operation affect the performance?
* What is the effect of feature interaction weight sharing?
**Datasets.** We report experiments on two public datasets: Covertype1 with 54 features, 290,506 training instances, and 290,506 testing instances; and the YouTube Multiview Video Games Dataset2 (YSpectro) with 1,024 features, 97,934 training instances, and 11,930 testing instances.
Footnote 1: [http://archive.ics.uci.edu/dataset/31/covertype](http://archive.ics.uci.edu/dataset/31/covertype)
Footnote 2: [https://archive.ics.uci.edu/dataset/269/youtube+multiview-video+games+dataset](https://archive.ics.uci.edu/dataset/269/youtube+multiview-video+games+dataset)
**Models.** Our model performs classification. The raw features first go through the cross network/BFI. Then a given number (\(L\)) of fully-connected layers (256 neurons each) follow. The last layer is a fully-connected layer that maps to the number of classes. The activation function we used is ReLU (He et al., 2016). We implemented the BFI network in PyTorch (Kipf and Welling, 2017) 1.13.1 with Python 3.8.15.
\begin{table}
\begin{tabular}{c c c} \hline \hline Legend & Shuffle & Share \\ \hline P & every layer & ✗ \\ Q & only once & ✗ \\ T & every layer & ✓ \\ S & only once & ✓ \\ \hline \hline \end{tabular}
\end{table}
Table 1. Legend explanation.
Figure 2 and Figure 3 show the test accuracy of BFI and the baseline (DCNv2). We tested 4 variants of BFI denoted with P, Q, T, and S, respectively (Table 1). For block factor \(K\), we use 3, 6, 9 on Covertype and 4, 8 on YSpectro. We only report the case with two fully-connected layers (\(L=2\)) for YSpectro because \(L=1\), 2, and 3 show very similar trends; we keep only \(L=2\) for a cleaner view.
**Performance of BFI.**\(\mathrm{P}\) is the most expensive BFI configuration: shuffling is enabled on every cross layer and weight sharing is disabled. For Covertype, the difference between BFI and the baseline is smallest when the block factor \(K\) is 3 (BFI requires only \(1/3\) of the baseline's computation and memory on the cross network). As expected, the gap becomes larger as \(K\) grows.
On the other hand, note that using more cross layers captures higher-order feature interactions and potentially generates better results. When we compare the performance of BFI and the baseline at the same cost, e.g., the baseline uses 1 cross layer while BFI uses 3 cross layers on Covertype, BFI shows a higher test accuracy. This trend also holds on YSpectro: given a fixed computation and memory budget, using BFI enables us to have more cross layers and yields better performance.
Figure 2. Test accuracy for BFI and the cross network in DCNv2 (baseline) on Covertype.
Figure 3. Test accuracy for BFI and the cross network in DCNv2 (baseline) on YSpectro.
**Effect of shuffling.** To analyze the effect of shuffling (i.e., whether cross-block interaction matters), we compare P with Q, and T with S. The gap is obvious when the number of following fully-connected layers is 1. However, the gap shrinks when we have more layers. The reason is that the subsequent fully-connected layers can also exploit the feature interactions: although higher-order interactions are not captured across blocks in the cross network, those later fully-connected layers capture them.
**Weight sharing.** We compare P with T and Q with S to explore the effect of weight sharing. The sharing almost always degrades the performance for both datasets. However, the performance gap becomes negligible when we have a sufficient number of following fully-connected layers. Note that weight sharing reduces the memory footprint of the cross network by a factor of \(K\). We conclude that weight sharing is effective especially when this memory saving allows the network to fit into GPU memory or when the following layers are sufficiently deep.
## 4. Conclusions
Feature interactions play a crucial role in uncovering complex dependencies and improving the accuracy of recommendations. This paper presented the Blockwise Feature Interaction Network (BFI) as an approach for capturing feature interactions. BFI reduces both the computation and memory consumption of cross networks for modeling feature interactions. We proposed four variants of BFI and analyzed the effect of shuffling and weight sharing. We experimentally evaluated the proposed methods on public datasets. The results demonstrated that BFI effectively captures feature interactions and improves the cross network performance under the same computation and memory budget. Furthermore, by sharing embeddings, we achieved substantial reductions in memory usage without sacrificing much model effectiveness. Weight sharing proved to be a viable strategy for reducing the memory footprint, which is especially crucial for resource-constrained environments such as GPU-based training and inference. We hope this work provides valuable insights into the use of feature interactions.
|
2302.07972
|
Filtered Iterative Denoising for Linear Inverse Problems
|
Iterative denoising algorithms (IDAs) have been tremendously successful in a
range of linear inverse problems arising in signal and image processing. The
classic instance of this is the famous Iterative Soft-Thresholding Algorithm
(ISTA), based on soft-thresholding of wavelet coefficients. More modern
approaches to IDAs replace soft-thresholding with a black-box denoiser, such as
BM3D or a learned deep neural network denoiser. These are often referred to as
``plug-and-play" (PnP) methods because, in principle, an off-the-shelf denoiser
can be used for a variety of different inverse problems. The problem with PnP
methods is that they may not provide the best solutions to a specific linear
inverse problem; better solutions can often be obtained by a denoiser that is
customized to the problem domain. A problem-specific denoiser, however,
requires expensive re-engineering or re-learning which eliminates the
simplicity and ease that makes PnP methods attractive in the first place. This
paper proposes a new IDA that allows one to use a general, black-box denoiser
more effectively via a simple linear filtering modification to the usual
gradient update steps that accounts for the specific linear inverse problem.
The proposed Filtered IDA (FIDA) is mathematically derived from the classical
ISTA and wavelet denoising viewpoint. We show experimentally that FIDA can
produce superior results compared to existing IDA methods with BM3D.
|
Danica Fliss, Willem Marais, Robert D. Nowak
|
2023-02-15T22:29:55Z
|
http://arxiv.org/abs/2302.07972v1
|
# Filtered Iterative Denoising for Linear Inverse Problems
###### Abstract
Iterative denoising algorithms (IDAs) have been tremendously successful in a range of linear inverse problems arising in signal and image processing. The classic instance of this is the famous Iterative Soft-Thresholding Algorithm (ISTA), based on soft-thresholding of wavelet coefficients. More modern approaches to IDAs replace soft-thresholding with a black-box denoiser, such as BM3D or a learned deep neural network denoiser. These are often referred to as "plug-and-play" (PnP) methods because, in principle, an off-the-shelf denoiser can be used for a variety of different inverse problems. The problem with PnP methods is that they may not provide the best solutions to a specific linear inverse problem; better solutions can often be obtained by a denoiser that is customized to the problem domain. A problem-specific denoiser, however, requires expensive re-engineering or re-learning which eliminates the simplicity and ease that makes PnP methods attractive in the first place. This paper proposes a new IDA that allows one to use a general, black-box denoiser more effectively via a simple linear filtering modification to the usual gradient update steps that accounts for the specific linear inverse problem. The proposed Filtered IDA (FIDA) is mathematically derived from the classical ISTA and wavelet denoising viewpoint. We show experimentally that FIDA can produce superior results compared to existing IDA methods with BM3D.
Filtered Iterative Denoising, Inverse Problems
## 1 Introduction
Black-box denoisers like BM3D and deep neural network denoisers form the backbone of state-of-the-art methods for solving linear inverse problems in signal and image processing. Using denoisers for regularization is attractive because one can use an off-the-shelf denoiser in a variety of different inverse problems, sometimes referred to as "plug-and-play" methods. We argue, however, that the regularization should be adapted to the linear operator of the forward problem. This would require re-learning a denoiser for each specific linear inverse problem, defeating the simplicity and flexibility of such approaches. To circumvent this, we propose a novel approach that instead appropriately modifies the data-fitting objective and leads to a filtered gradient update, eliminating the need for learning or adaptation of the denoiser. We call our new approach a Filtered Iterative Denoising Algorithm (FIDA).
This paper considers the following form of _linear inverse problem_. Let \(\mathbf{y}\) denote observations of a signal or image \(\mathbf{x}\) given by
\[\mathbf{y} = \mathbf{A}\mathbf{x}+\mathbf{\epsilon} \tag{1}\]
where \(\mathbf{A}\) is a known linear operator and \(\mathbf{\epsilon}=\mathbf{y}-\mathbb{E}[\mathbf{y}]\) may be viewed as a mean zero noise. In other words, we assume that the expected value of \(\mathbf{y}\) is a linear transformation of \(\mathbf{x}\). The linear operator \(\mathbf{A}\) can denote the effect of blurring, sub-sampling, compressed sensing, tomographic projection, or other distortions. Throughout the paper, we assume that \(\mathbf{x}\) and \(\mathbf{y}\) are real-valued vectors and \(\mathbf{A}\) is a real-valued matrix with compatible dimensions (extensions to complex-valued objects are possible). The goal is to recover \(\mathbf{x}\) from the data \(\mathbf{y}\). The recovery problem is often ill-posed and regularization methods are used to find a solution that balances the fit to the data and the regularity of the solution (measured in an appropriate sense).
## 2 Iterative Denoising Algorithms
Consider what we will call an _Iterative Denoising Algorithm_ (IDA), outlined below. Let \(L\) denote a loss function measuring the quality of a solution \(\mathbf{x}\). This paper will focus on the squared error loss \(L(\mathbf{x})=\frac{1}{2}\|\mathbf{y}-\mathbf{A}\mathbf{x}\|_{2}^{2}\), but extensions to other losses may be possible. IDA alternates between a gradient descent step on the loss and a denoising step, denoted by **denoise**, that effectively regularizes each iterate. The denoiser takes the gradient descent iterate and a denoising parameter \(\lambda_{\gamma}\) as inputs and outputs a denoised version of the iterate. The parameter \(\lambda_{\gamma}\) may depend on the stepsize \(\gamma\) and other problem parameters such as the noise level.
The genesis of such algorithms is traced back to iterative soft-thresholding algorithms (Nowak & Figueiredo, 2001; Figueiredo & Nowak, 2003; Daubechies et al., 2004; Figueiredo et al., 2007; Wright et al., 2009; Beck & Teboulle, 2009). The soft-thresholding operation is the proximal operator for the \(\ell^{1}\) norm regularizer and the threshold level applied at each iteration is proportional to the stepsize \(\gamma\). General IDA methods replace the soft-thresholding denoiser with off-the-shelf or black-box denoisers such as BM3D (Venkatakrishnan et al., 2013) or deep neural network denoisers (Jin et al., 2017). IDA and its many variants are often referred to as _plug-and-play_ regularization methods because one can simply "plug-in" any denoiser; see (Kamilov et al., 2017) for a recent review of such methods. In this paper, we focus on the basic IDA outlined above, but the main ideas may be extended to related formulations including regularization-by-denoising (RED) (Romano et al., 2017), deep unfolding (Chen et al., 2018), and multiagent consensus equilibrium (MACE) (Buzzard et al., 2018).
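As a concrete illustration of this template, the following is a minimal NumPy sketch of the IDA loop described above; the function name, the zero initialization, and the argument names are illustrative assumptions, and `denoise` stands for any plug-in denoiser (soft-thresholding, BM3D, or a learned network).

```python
import numpy as np

def ida(y, A, denoise, step, lam, n_iters):
    """Generic plug-and-play IDA: a gradient step on 0.5 * ||y - A x||^2
    followed by a black-box denoiser applied to the iterate."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        x = x + step * A.T @ (y - A @ x)   # gradient descent on the data-fit term
        x = denoise(x, lam * step)         # plug-in denoiser regularizes the iterate
    return x
```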
### The Trouble with Black-Box Denoisers
The trouble with a plug-in denoiser is that it is ignorant of the specific linear operator \(\mathbf{A}\) involved in the inverse problem, which can lead to inappropriate denoising. To see this, consider a very simple setting where \(\mathbf{A}\) is a square diagonal matrix with diagonal entries \(\delta_{i}>0\) and assume that \(\mathbf{x}\) is sparse in the canonical basis. Let \(\mathbf{y}=\mathbf{A}\mathbf{x}+\mathbf{\epsilon}\), where \(\mathbf{\epsilon}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\), a white Gaussian noise vector with variance \(1\). Because \(\mathbf{x}\) is sparse in the canonical basis and \(\mathbf{A}\) is diagonal, it is reasonable to estimate each element of \(\mathbf{x}\) separately. Note that \(y_{i}\sim\mathcal{N}(\delta_{i}x_{i},1)\), so the signal-to-noise ratio (SNR) in \(y_{i}\) is \(\delta_{i}^{2}x_{i}^{2}\). The _oracle_ denoiser for the \(i\)-th element of \(\mathbf{x}\) is
\[\widehat{x_{i}}^{O}=\begin{cases}y_{i},&\text{if }x_{i}^{2}>\delta_{i}^{-2}. \\ 0,&\text{otherwise}.\end{cases}\]
In other words, the oracle denoiser "keeps or kills" \(y_{i}\) depending on whether the SNR is greater than \(1\). The key point is that the optimal threshold depends on \(\delta_{i}\), the \(i\)-th diagonal element of \(\mathbf{A}\). The analog of the oracle thresholding step would be to apply a hard or soft threshold to \(y_{i}\) itself, and the threshold level should also depend on \(\delta_{i}\). This observation led (Donoho, 1995) to develop the so-called wavelet-vaguelette decomposition (WVD), a soft-thresholding algorithm for linear inverse problems that uses varying threshold levels that account for the SNRs induced by the linear operator \(\mathbf{A}\). **For optimal performance, the denoising step must account for the specific operator involved in the linear inverse problem.** This tells us that, in general, IDA methods should also adjust the denoiser to the specific \(\mathbf{A}\). The notion that the regularizer or prior may depend on the observation model, while perhaps not widely appreciated, has also been discussed in the Bayesian setting (Gelman et al., 2017).
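As a tiny numerical illustration of the keep-or-kill rule above (a sketch under the diagonal-\(\mathbf{A}\) setting, not code from the paper):

```python
import numpy as np

def oracle_denoise(y, x_true, delta):
    """Oracle rule for diagonal A: keep y_i exactly when the SNR delta_i^2 x_i^2 exceeds 1."""
    return np.where((delta * x_true) ** 2 > 1.0, y, 0.0)
```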
To illustrate the problem, consider the following satellite imaging problem that motivated our investigation. The Visible Infrared Imaging Radiometer Suite (VIIRS) day night band (DNB) is used for cloud type identification via spatial textures. The utility of DNB observations, however, is highly dependent on the signal to noise ratio (SNR) of the observations, which is directly proportional to the lunar luminosity and zenith angle. Low levels of lunar light result in noisy images. Complicating matters further, light sensors have varying (known) gain factors, which results in the striping artifact shown in Figure 1. The figure also shows results for the standard Iterative Denoising Algorithm (IDA) and the proposed Filtered IDA, using BM3D as the denoiser in both cases. The BM3D denoising parameter was adjusted separately in both cases to obtain the best results (for fair comparison). The filtered IDA method proposed in this paper produces significantly better results, especially noticeable in the lower left of the images in Figure 1.
Figure 1: Satellite image restoration example. Notice that the new Filtered IDA method does a much better job of denoising and preserving structure, particularly visible in the lower left of the images, than the standard IDA method.
## 3 Filtered IDA
The goal is to develop a new IDA strategy that avoids the trouble with black-box denoisers (i.e., avoids the need to adjust the denoiser to the specific linear operator \(\mathbf{A}\)). Many black-box denoisers like total variation, BM3D, and deep neural denoisers operate similarly to a wavelet soft-thresholding operation. Total variation is the \(\ell^{1}\) norm of the gradient. In fact, the connection is even deeper since the \(k\)th largest Haar wavelet coefficient of an image is upper bounded by the total variation of the image divided by \(k\); see Proposition 8 in (Needell & Ward, 2013). Block-matching approaches like BM3D (Dabov et al., 2007) employ transforms and hard-thresholding (rather than soft-thresholding) operations. Similarities between iterative soft-thresholding algorithms and multilayer neural network denoisers were explored by (Gregor & LeCun, 2010; Xin et al., 2016) and others. These connections support the idea of deriving a new IDA strategy based on \(\ell^{1}\) regularization for sparse models. The key idea in our new approach will be to modify the gradient descent step rather than the denoiser to account for the specific \(\mathbf{A}\) involved in the problem.
### Insights from \(\ell^{1}\) Regularization
Suppose that \(\mathbf{\Psi}\) is an orthogonal matrix constituting a sparsifying orthonormal basis for the signal \(\mathbf{x}\). That is, the coefficient vector \(\mathbf{\theta}=\mathbf{\Psi}^{\top}\mathbf{x}\) is a sparse or approximately sparse vector. Iterative soft-thresholding algorithms solve the optimization
\[\min_{\mathbf{x}}\|\mathbf{y}-\mathbf{A}\mathbf{x}\|_{2}^{2}+\lambda\,\|\mathbf{\Psi}^{\top}\mathbf{x} \|_{1}\;,\]
where \(\lambda\) is a regularization parameter that determines the thresholding level.
Our analysis is based on the following key assumption.
**Assumption 3.1**.: The linear operator \(\mathbf{A}\) and the orthogonal matrix \(\mathbf{\Psi}\) satisfy the following property.
\[\mathbf{A}^{\top}\mathbf{\Phi}=\mathbf{\Psi}\mathbf{\Delta}\]
where \(\mathbf{\Delta}\succ 0\) is a diagonal matrix and \(\mathbf{\Phi}\) is a nearly orthogonal matrix satisfying \(\mathbf{\Phi}^{\top}\mathbf{\Phi}\approx\mathbf{I}\), where \(\mathbf{I}\) is the identity matrix.
_Remark 3.2_.: Let \(\mathbf{\psi}_{i}\) denote the \(i\)-th column (basis vector) in \(\mathbf{\Psi}\). The assumption states that there exists a complementary vector \(\mathbf{\phi}_{i}\) such that \(\mathbf{A}^{\top}\mathbf{\phi}_{i}=\delta_{i}\mathbf{\psi}_{i}\).
_Remark 3.3_.: We could assume a more precise quantification of the near-orthogonality of \(\mathbf{\Phi}\), but we will only be using the assumption to derive a new iterative denoising algorithm. We leave quantitative analysis to future work.
Let \(\mathbf{\psi}_{i}\) denote the columns (basis vectors) in \(\mathbf{\Psi}\). Since \(\mathbf{\Psi}\) is an orthogonal matrix, we have the series representation
\[\mathbf{x}\;=\;\sum_{i}(\mathbf{x}^{\top}\mathbf{\psi}_{i})\,\mathbf{\psi}_{i}\;.\]
Assumption 3.1 allows us to compute each coefficient \(\theta_{i}=\mathbf{x}^{\top}\mathbf{\psi}_{i}\) from the distorted signal \(\mathbf{A}\mathbf{x}\) as follows. Let \(\delta_{i}\) denote the \(i\)-th diagonal element in \(\mathbf{\Delta}\) and consider the inner product between \(\mathbf{A}\mathbf{x}\) and \(\mathbf{\phi}_{i}\), the \(i\)-th column of matrix \(\mathbf{\Phi}\):
\[\mathbf{\phi}_{i}^{\top}\mathbf{A}\mathbf{x}\;=\;(\mathbf{A}^{\top}\mathbf{\phi}_{i})^{\top}\mathbf{x }\;=\;\delta_{i}\mathbf{\psi}_{i}^{\top}\mathbf{x}\;,\]
so \(\theta_{i}=\mathbf{\psi}_{i}^{\top}\mathbf{x}=\delta_{i}^{-1}\mathbf{\phi}_{i}^{\top}\mathbf{A }\mathbf{x}\). Thus, the same operation may be applied to the noisy observation \(\mathbf{y}=\mathbf{A}\mathbf{x}+\mathbf{\epsilon}\) (instead of \(\mathbf{A}\mathbf{x}\)) to obtain an unbiased estimator of the coefficient.
_Remark 3.4_.: By Assumption 3.1, \(\mathbf{\Phi}\mathbf{\Phi}^{\top}\mathbf{A}=\mathbf{\Phi}\mathbf{\Delta}\mathbf{\Psi}^{\top}\) and the near-orthogonality of \(\mathbf{\Phi}\) then implies \(\mathbf{A}\approx\mathbf{\Phi}\mathbf{\Delta}\mathbf{\Psi}^{\top}\), a matrix factorization reminiscent of the singular value decomposition. Since \(\mathbf{\Psi}\) is orthogonal, we then have \(\mathbf{A}\mathbf{\Psi}\approx\mathbf{\Phi}\mathbf{\Delta}\) and thus \(\delta_{i}\mathbf{\phi}_{i}\approx\mathbf{A}\mathbf{\psi}_{i}\) with \(\delta_{i}=\|\mathbf{A}\mathbf{\psi}_{i}\|_{2}\).
_Remark 3.5_.: If \(\mathbf{A}\) is any diagonal matrix and we use the canonical basis \(\mathbf{\Psi}=\mathbf{I}\), then \(\mathbf{\Phi}=\mathbf{I}\).
Assumption 3.1 is satisfied in a wide variety of situations. The most well-known is the case where \(\mathbf{\Psi}\) is an orthogonal wavelet transform and \(\mathbf{A}\) is a _weakly invertible_ linear operator such as an integration operator or Radon transform operator, in which case the factorization is called the _wavelet-vaguelette decomposition_ (WVD) (Donoho, 1995; Abramovich & Silverman, 1998). The WVD method applies a soft-threshold to the unbiased coefficient estimates \(\widehat{\theta}_{i}=\delta_{i}^{-1}\mathbf{\phi}_{i}^{\top}\mathbf{y}\), with threshold levels depending on the diagonal entries of \(\mathbf{\Delta}\), as derived below.
### Derivation of Soft-Thresholding IDA
First we recall the soft-thresholding function. For \(y\in\mathbb{R}\) the solution to the optimization
\[\min_{\theta\in\mathbb{R}}\tfrac{1}{2}(y-\theta)^{2}+\lambda|\theta|\]
is given by the soft-thresholding operation
\[\widehat{\theta} = s_{\lambda}(y)\;:=\;\text{sign}(y)\max(0,|y|-\lambda)\;. \tag{2}\]
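In code, this denoiser is essentially one line (a NumPy sketch):

```python
import numpy as np

def soft_threshold(y, lam):
    """s_lambda(y) = sign(y) * max(0, |y| - lambda), applied element-wise."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)
```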
The soft-thresholding denoiser arises from \(\ell^{1}\) regularization, as shown next. Let \(\mathbf{\theta}=\mathbf{\Psi}^{\top}\mathbf{x}\) and \(\mathbf{z}=\mathbf{\Phi}^{\top}\mathbf{y}\). Furthermore, let \(\mathbf{\Lambda}\) be a diagonal matrix with nonzero entries denoted by \(\lambda_{i}\); these will be regularization parameters and determine the thresholding levels. As shown below, it turns out that the thresholding levels should depend on diagonal elements
of \(\mathbf{\Delta}\). Consider the weighted \(\ell^{1}\) regularization problem
\[\min_{\mathbf{x}}\tfrac{1}{2}\|\mathbf{y}-\mathbf{A}\mathbf{x}\|_{2}^{2}+\|\mathbf{ \Lambda}\mathbf{\Psi}^{\top}\mathbf{x}\|_{1} \tag{3}\] \[\approx \min_{\mathbf{\theta}}\tfrac{1}{2}\|\mathbf{y}-\mathbf{\Phi}\mathbf{\Delta}\mathbf{ \theta}\|_{2}^{2}+\|\mathbf{\Lambda}\mathbf{\theta}\|_{1}\] \[\approx \min_{\mathbf{\theta}}\tfrac{1}{2}\|\mathbf{\Phi}^{\top}\mathbf{y}-\mathbf{\Delta }\mathbf{\theta}\|_{2}^{2}+\|\mathbf{\Lambda}\mathbf{\theta}\|_{1}\] \[\equiv \min_{\mathbf{\theta}}\tfrac{1}{2}\|\mathbf{z}-\mathbf{\Delta}\mathbf{\theta}\|_ {2}^{2}+\|\mathbf{\Lambda}\mathbf{\theta}\|_{1}\] \[\equiv \min_{\mathbf{\theta}}\tfrac{1}{2}\sum_{i}(z_{i}-\delta_{i}\theta_{i })^{2}+\lambda_{i}|\theta_{i}|\] \[\equiv \min_{\mathbf{\theta}}\tfrac{1}{2}\sum_{i}(\delta_{i}^{-1}z_{i}- \theta_{i})^{2}+\lambda_{i}\delta_{i}^{-2}|\theta_{i}|\]
where Assumption 3.1 is used in the first and second step. In the final line above, the summation only includes terms when \(\delta_{i}>0\). If \(\delta_{i}=0\), then \(\theta_{i}\) is unrecoverable and our estimate of that coefficient is \(0\). The final optimization is separable in each \(\theta_{i}\) and the solution is a soft-thresholding step:
\[\widehat{\theta}_{i}\ =\ \text{sign}(\delta_{i}^{-1}z_{i})\max(0,|\delta_{i}^{ -1}z_{i}|-\lambda_{i}\delta_{i}^{-2})\.\]
The quantity \(\lambda_{i}\delta_{i}^{-2}\) is the threshold level. Recall that \(\mathbf{y}=\mathbf{A}\mathbf{x}+\mathbf{\epsilon}\), where \(\mathbb{E}[\mathbf{\epsilon}]=\mathbf{0}\). Let us further make the _white noise_ assumption that \(\mathbb{E}[\mathbf{\epsilon}\mathbf{\epsilon}^{\top}]=\sigma^{2}\mathbf{I}\), where \(\mathbf{I}\) is the identity matrix. Since \(\mathbf{\Phi}\) is nearly orthogonal its columns have approximately unit norm. In fact, we may assume the columns each have exactly unit norm by absorbing the normalization factors into \(\mathbf{\Delta}\). Therefore, the standard deviation of \(\delta_{i}^{-1}z_{i}\) is \(\delta_{i}^{-1}\sigma\). Classical statistical arguments (Donoho, 1995) dictate a threshold level proportional to the standard deviation of the noise, so take
\[\lambda_{i}=\lambda\,\delta_{i}\]
for some global \(\lambda>0\). The intuitive explanation for this is obvious: the threshold should be set about the level of the noise standard deviation so that coefficients that are purely noise (which will be many under the sparsity assumption) are set to zero. This requires threshold levels that are proportional to \(\delta_{i}\), and thus dependent on \(\mathbf{A}\). In other words, the soft-threshold denoiser must be adjusted based on \(\mathbf{A}\) resulting in the following optimization
\[\min_{\mathbf{x}}\tfrac{1}{2}\|\mathbf{y}-\mathbf{A}\mathbf{x}\|_{2}^{2}+\lambda\,\|\mathbf{ \Delta}\mathbf{\Psi}^{\top}\mathbf{x}\|_{1} \tag{4}\]
where \(\mathbf{\Delta}\) is the diagonal matrix in Assumption 3.1 and \(\lambda>0\) is a regularization parameter.
## 4 Filtered Iterative Denoising
There is a way around the difficulty of needing to adapt the regularizer/denoiser to the operator \(\mathbf{A}\). Working backwards from (3) with \(\lambda_{i}=\lambda\delta_{i}\), we have
\[\min_{\mathbf{\theta}}\tfrac{1}{2}\sum_{i}(z_{i}-\delta_{i}\theta_{i })^{2}+\lambda\delta_{i}|\theta_{i}|\] \[\quad\equiv\ \min_{\mathbf{\theta}}\tfrac{1}{2}\sum_{i}(\delta_{i}^{-1 /2}z_{i}-\delta_{i}^{1/2}\theta_{i})^{2}+\lambda|\theta_{i}|\] \[\quad\equiv\ \min_{\mathbf{x}}\tfrac{1}{2}\|\mathbf{\Delta}^{-1/2}\mathbf{\Phi}^{ \top}\mathbf{y}-\mathbf{\Delta}^{1/2}\mathbf{\Psi}^{\top}\mathbf{x}\|_{2}^{2}+\lambda\|\mathbf{ \Psi}^{\top}\mathbf{x}\|_{1}\]
which is approximately equivalent to (4) above (and exactly equivalent if \(\mathbf{\Phi}\) is orthogonal, rather than nearly orthogonal). If a particular \(\delta_{i}=0\), then the corresponding diagonal element of \(\mathbf{\Delta}^{1/2}\) is also \(0\) and we define the corresponding element of \(\mathbf{\Delta}^{-1/2}\) to be \(0\) as well. Notice that in the optimization above the denoising regularization term _does not_ depend on the operator \(\mathbf{A}\). To minimize this objective using IDA we will compute the gradient of the data-fitting term, which in this case is
\[\nabla L(\mathbf{x}) := \nabla\tfrac{1}{2}\|\mathbf{\Delta}^{-1/2}\mathbf{\Phi}^{\top}\mathbf{y}-\bm {\Delta}^{1/2}\mathbf{\Psi}^{\top}\mathbf{x}\|_{2}^{2} \tag{5}\] \[= -\mathbf{\Psi}\mathbf{\Delta}^{1/2}\Big{(}\,\mathbf{\Delta}^{-1/2}\mathbf{\Phi}^ {\top}\mathbf{y}-\mathbf{\Delta}^{1/2}\mathbf{\Psi}^{\top}\mathbf{x}\Big{)}\] \[= \mathbf{\Psi}\mathbf{\Delta}\mathbf{\Psi}^{\top}\mathbf{x}-\mathbf{\Psi}\mathbf{\Phi}^{ \top}\mathbf{y}\]
We gain insight into this gradient as follows. Since \(\mathbf{A}\approx\mathbf{\Phi}\mathbf{\Delta}\mathbf{\Psi}^{\top}\) and \(\mathbf{\Phi}^{\top}\mathbf{\Phi}\approx\mathbf{I}\), we have
\[\mathbf{\Psi}\mathbf{\Delta}\mathbf{\Psi}^{\top}=\mathbf{\Psi}\mathbf{\Phi}^{\top}\mathbf{\Phi}\mathbf{ \Delta}\mathbf{\Psi}^{\top}\approx\mathbf{\Psi}\mathbf{\Phi}^{\top}\mathbf{A}\.\]
Thus we may approximate the gradient as
\[\nabla L(\mathbf{x})\ \approx\ \mathbf{\Psi}\mathbf{\Phi}^{\top}\Big{(}\mathbf{A}\mathbf{x}-\mathbf{y} \Big{)}\]
Also observe that \(\mathbf{A}\mathbf{\Psi}\mathbf{\Delta}^{\dagger}\approx\mathbf{\Phi}\), where \(\mathbf{\Delta}^{\dagger}\) is the pseudoinverse of \(\mathbf{\Delta}\) (i.e., \(\mathbf{\Delta}^{\dagger}\) has diagonal elements \(\delta_{i}^{-1}\) if \(\delta_{i}>0\) and \(0\) if \(\delta_{i}=0\)). Thus, we have \(\mathbf{\Psi}\mathbf{\Phi}^{\top}\approx\mathbf{\Psi}\mathbf{\Delta}^{\dagger}\mathbf{\Psi}^{ \top}\mathbf{A}^{\top}\). This shows that the gradient in (5) is approximately the gradient of \(\tfrac{1}{2}\|\mathbf{y}-\mathbf{A}\mathbf{x}\|_{2}^{2}\) filtered by \(\mathbf{\Psi}\mathbf{\Delta}^{\dagger}\mathbf{\Psi}^{\top}\)
\[\nabla L(\mathbf{x}) \approx \mathbf{\Psi}\mathbf{\Delta}^{\dagger}\mathbf{\Psi}^{\top}\mathbf{A}^{\top}\Big{(} \mathbf{A}\mathbf{x}-\mathbf{y}\Big{)}. \tag{6}\]
The filtering operation \(\mathbf{\Psi}\mathbf{\Delta}^{\dagger}\mathbf{\Psi}^{\top}\) is a \(\mathbf{\Psi}\)-domain filter that scales each component of the gradient according to the inverse attenuation factor of \(\mathbf{A}\) acting on the component. Approximation (6) will be our preferred expression for the gradient. This filtering operation can be computed efficiently whenever \(\mathbf{\Psi}\) admits a fast transform (e.g., Fourier or wavelet transform). This leads to a novel iterative denoising algorithm (explained in Algorithm 2 below) called _Filtered IDA_, where the **denoise** step applies a soft-threshold to the coefficients \(\mathbf{\Psi}^{\top}\widetilde{\mathbf{x}}^{k+1}\) at threshold \(\gamma\lambda\) and then computes the inverse transform by applying \(\mathbf{\Psi}\) to the result. Specifically,
\[\textbf{denoise}(\widetilde{\mathbf{x}}^{k+1},\gamma)\ =\ \sum_{j}s_{\gamma\lambda}\big{(}\mathbf{\psi}_{j}^{\top} \widetilde{\mathbf{x}}^{k+1}\big{)}\,\mathbf{\psi}_{j}\,\]
where \(s_{\gamma\lambda}\) is the soft-thresholding function (2) with threshold \(\gamma\lambda\). The key point is that this is the standard soft-threshold with a global threshold level, and there is no need to adjust the threshold for each coefficient separately to match the noise level induced by the linear operator. In other words, this is an off-the-shelf soft-thresholding operation. More generally, we may use any denoiser that behaves similarly to the soft-thresholding operation.
```
input: \(\mathbf{x}^{0}\), \(\gamma>0\), \(k=0\) and sparsifying basis \(\mathbf{\Psi}\)
while \(k<t\) do
    \(\widetilde{\mathbf{x}}^{k+1}\leftarrow\mathbf{x}^{k}+\gamma\,\mathbf{\Psi}\mathbf{\Delta}^{\dagger}\mathbf{\Psi}^{\top}\mathbf{A}^{\top}\Big{(}\mathbf{y}-\mathbf{A}\mathbf{x}^{k}\Big{)}\)
    \(\mathbf{x}^{k+1}\leftarrow\mathbf{denoise}(\widetilde{\mathbf{x}}^{k+1},\lambda_{\gamma})\)
    \(k\gets k+1\)
end while
```
**Algorithm 2** Filtered Iterative Denoising Algorithm (FIDA)
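To make Algorithm 2 concrete, below is a hypothetical NumPy sketch of FIDA for the circular-deconvolution case, where the Fourier basis diagonalizes \(\mathbf{A}\) and the filter \(\mathbf{\Psi}\mathbf{\Delta}^{\dagger}\mathbf{\Psi}^{\top}\mathbf{A}^{\top}\) reduces to a pointwise multiplier in the frequency domain. The function name, zero initialization, tolerance, and the `denoise` interface are assumptions, not the authors' code.

```python
import numpy as np

def fida_deconv(y, h_fft, denoise, step=1.0, lam=0.02, n_iters=50, tol=1e-3):
    """FIDA sketch for y = h * x + noise (circular convolution).
    h_fft is the 2-D FFT of the centered blur kernel, so A x = ifft2(h_fft * fft2(x)).
    In the Fourier basis, Psi Delta^+ Psi^T A^T acts as multiplication by
    conj(h_fft) / |h_fft|, zeroed where the frequency response is negligible."""
    mag = np.abs(h_fft)
    filt = np.conj(h_fft) / np.maximum(mag, tol)
    filt[mag <= tol] = 0.0
    x = np.zeros_like(y)
    for _ in range(n_iters):
        residual = y - np.real(np.fft.ifft2(h_fft * np.fft.fft2(x)))        # y - A x
        x = x + step * np.real(np.fft.ifft2(filt * np.fft.fft2(residual)))  # filtered gradient step
        x = denoise(x, lam * step)  # any plug-in denoiser, e.g. BM3D or wavelet thresholding
    return x
```

For W-FIDA the same loop applies, with the Fourier filter replaced by a wavelet transform, a diagonal scaling by the precomputed \(\delta_{i}^{-1}=\|\mathbf{A}\mathbf{\psi}_{i}\|_{2}^{-1}\), and an inverse wavelet transform.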
### Choosing the Basis \(\mathbf{\Psi}\)
The approach may be used in conjunction with an off-the-shelf denoising algorithm, represented by \(\mathbf{denoise}\) in FIDA. The rationale of the approach rests on two considerations:
1. The black-box denoiser operation is similar to soft-thresholding with an appropriate sparsifying basis \(\mathbf{\Psi}\).
2. The operator \(\mathbf{A}\) satisfies Assumption 3.1 with \(\mathbf{\Psi}\).
The choice of \(\mathbf{\Psi}\) may be based on either or both of these considerations. One can view \(\mathbf{\Psi}\) as a hyperparameter of FIDA, and different options may be tried since the behavior of the black-box denoiser may be difficult to characterize. We discuss several natural choices next.
**Wavelet Basis:** Smooth wavelet bases are a good choice. The theory of the _wavelet-vaguelette decomposition_ (WVD) (Donoho, 1995; Abramovich & Silverman, 1998; Chesneau et al., 2010) shows that if \(\mathbf{\Psi}\) is an orthogonal wavelet transform and \(\mathbf{A}\) is a _weakly invertible_ linear operator like an integration operator or Radon transform operator, then Assumption 3.1 is met. Black-box denoisers trace their lineage back to wavelet and total variation denoising methods, so wavelet thresholding denoisers are a good candidate for approximating black-box denoisers as well. The particular choice of wavelet may be viewed as a hyperparameter of FIDA. Smoother wavelets (i.e., not Haar wavelets) are suggested by the WVD theory and also recommended for FIDA; a minimal sketch of a wavelet-thresholding **denoise** step is given after this list of basis options.
**Learned Bases:** Bases or dictionaries learned from examples (Mairal et al., 2008) are also a potentially attractive choice. Learned representations may be non-orthogonal and redundant, but recall our main derivation hinges only on the \(\mathbf{\Phi}\) (not \(\mathbf{\Psi}\)) being nearly orthogonal. The diagonal elements of \(\mathbf{\Delta}\) are determined by the norm of columns of \(\mathbf{A}\mathbf{\Psi}\), so in principle the filter \(\mathbf{\Psi}\mathbf{\Delta}^{\dagger}\mathbf{\Psi}^{\top}\) may be effective even if \(\mathbf{\Psi}\) is not orthogonal.
**Diagonalizing Basis:** If \(\mathbf{A}=\mathbf{\Psi}\mathbf{\Delta}\mathbf{\Psi}^{\top}\) for some orthogonal matrix \(\mathbf{\Psi}\), then the gradient in (6) is \(\mathbf{\Psi}\mathbf{D}\mathbf{\Psi}^{\top}(\mathbf{A}\mathbf{x}-\mathbf{y})\), where \(\mathbf{D}\) is a diagonal matrix with \(1\) on each diagonal entry corresponding to \(\delta_{i}\neq 0\) and \(0\) otherwise. This form is most easily obtained directly from (5). Special instances of this setting occur when \(\mathbf{A}\) is diagonal and \(\mathbf{\Psi}\) is the canonical basis or when \(\mathbf{A}\) is a (circular) convolution operator and \(\mathbf{\Psi}\) is the Discrete Fourier Transform (DFT) basis. Of course, black-box denoisers may or may not be approximated by thresholding in these bases, but nonetheless may be attractive and perform well.
**SVD Basis:** Let \(\mathbf{A}=\mathbf{\Phi}\mathbf{\Delta}\mathbf{\Psi}^{\top}\) denote the Singular Value Decomposition (SVD) of \(\mathbf{A}\). Then \(\mathbf{A}^{\top}\mathbf{\Phi}=\mathbf{\Psi}\mathbf{\Delta}\) and Assumption 3.1 is satisfied (in fact \(\mathbf{\Phi}\) is truly orthogonal in this case). In this case, the gradient in (5) is given by \(\mathbf{\Psi}\mathbf{\Phi}^{\top}(\mathbf{A}\mathbf{x}-\mathbf{y})\). The SVD basis may or may not be a good match for approximating the behavior of a given black-box denoiser.
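As an illustration of the wavelet-thresholding view of the **denoise** step referred to above, here is a minimal PyWavelets sketch; the wavelet name and decomposition level are illustrative choices, not prescriptions from the paper.

```python
import pywt

def wavelet_soft_denoise(x, t, wavelet="db6", level=3):
    """denoise(x, t): soft-threshold the wavelet detail coefficients of x
    and reconstruct, approximating soft-thresholding in the Psi domain."""
    coeffs = pywt.wavedec2(x, wavelet, level=level)
    thresholded = [coeffs[0]] + [
        tuple(pywt.threshold(c, t, mode="soft") for c in band) for band in coeffs[1:]
    ]
    return pywt.waverec2(thresholded, wavelet)
```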
## 5 Experiments
In our experiments, we use 10 grayscale images that are \(256\times 256\) in size with pixel values in \([0,255]\). We apply two different forward models (\(\mathbf{A}\) matrices) to our data. One is a Gaussian blur, and the other is a variable sensor gain model, similar to what occurs in the satellite imaging problem described in Section 2.1. We then add Gaussian noise with mean zero and standard deviation \(\sigma\) to get our degraded images.
Because deblurring a noisy image is an ill-posed inverse problem, we assume a low-noise regime. We apply Gaussian noise with \(\sigma=0.2,1,5\) to our dataset. On the other hand, gain correction of a noisy image is well-posed, so we looked at a higher noise setting where \(\sigma=5,10,20\).
Using BM3D as our denoiser, we compare our algorithm against the standard IDA. We implement our method with the Daubechies D6 wavelet basis (W-FIDA) as well as with the diagonalizing basis (D-FIDA). For the deblurring problem, the Fourier basis is the diagonalizing basis, and for gain correction, the diagonalizing basis is the pixel basis.
For fair comparison, we average the peak-signal-to-noise-ratio (PSNR) measured in dB over 10 independent runs, and we sweep over a range of denoising parameters, \(\lambda_{\gamma}\). The results in Tables 1-2 show the best average PSNR over \(\lambda_{\gamma}\) for each method, as depicted in Figure 3.
In Figures 3-5, we look more closely at a particular example, the blurred "hill" image; the original is shown in Figure 2.
## 6 Discussion
From the results in Table 1, we see that our proposed filtered IDA method used for deblurring significantly outperforms the standard IDA when noise is low. When the standard deviation of the noise is high (\(\sigma=5\)), the standard IDA slightly outperforms our method in some cases, in terms of PSNR. However, visual inspection of the example in Figure 5 suggests that the proposed filtered IDA removes noise better than standard IDA while still maintaining qualitatively good structure. Standard IDA appears to opt towards keeping noise, avoiding error due to inaccurate reconstruction. This may explain why the standard IDA has a slightly higher PSNR for the largest level of \(\sigma\).
Table 1 also shows W-FIDA consistently outperforming D-FIDA in the deblurring problem. By looking at Figure 4, which shows the PSNR as a function of IDA iterations, we can speculate why. When using the diagonalizing basis, the PSNR peaks after a small number of iterations and then worsens with further iterations. The peak PSNR is also lower than that of the other IDA methods. Behavior like this suggests that the performance of IDA/FIDA may be enhanced by additional regularization in the form of early stopping.
When considering the gain correction problem, we see a pattern in Table 2 opposite to that in the deblurring setting. That is, as \(\sigma\) increases, our proposed method begins to outperform standard IDA. Furthermore, our diagonalizing basis consistently outperforms our wavelet basis, though the two bases do tend towards the same PSNR in most cases.
The visual distinctions in Figure 5 are small due to the high PSNR. More striking visual differences occur in more severe
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline & boats & bridge & camera & couple & flag & hill & house & man & peppers & saturn \\ \hline \multicolumn{10}{|c|}{\(\sigma=0.2\)} \\ \hline IDA & 36.2593 & 31.3291 & 32.7814 & 36.1147 & 32.8976 & 36.6289 & 38.9325 & 35.8846 & 31.7055 & 48.9936 \\ \hline D-FIDA & 38.283 & 32.8457 & **36.1399** & 38.2633 & **37.0899** & 38.0141 & 40.7933 & 36.9305 & **37.7321** & 50.6108 \\ \hline W-FIDA & **38.4229** & **32.9955** & 35.4155 & **38.4624** & 35.9628 & **38.5186** & **41.6699** & **37.4383** & 35.982 & **51.5752** \\ \hline \multicolumn{10}{|c|}{\(\sigma=1\)} \\ \hline IDA & 33.7121 & 29.5842 & 31.5512 & 33.7485 & 32.2291 & 34.2627 & 36.5864 & 33.363 & 31.4105 & 47.5211 \\ \hline D-FIDA & 32.8805 & 28.8595 & 31.6366 & 32.9861 & 33.3814 & 33.2032 & 35.9584 & 32.2233 & 34.3249 & 46.9006 \\ \hline W-FIDA & **34.0971** & **29.7239** & **32.3855** & **34.0051** & **33.6895** & **34.7233** & **37.0234** & **33.5791** & **34.5936** & **47.8242** \\ \hline \multicolumn{10}{|c|}{\(\sigma=5\)} \\ \hline IDA & **29.9958** & **26.869** & **29.0645** & **30.0964** & 30.0994 & **30.8298** & **34.0729** & **29.776** & 30.6765 & 42.647 \\ \hline D-FIDA & 28.0383 & 25.2469 & 27.7563 & 27.5693 & 29.0831 & 28.3158 & 32.5683 & 28.0429 & 30.231 & 42.0463 \\ \hline W-FIDA & 29.8612 & 26.7151 & 29.0512 & 29.8505 & **30.1599** & 30.6215 & 34.0443 & 29.6727 & **31.4304** & **42.9068** \\ \hline \end{tabular}
\end{table}
Table 1: PSNR (dB) for Deblurring Problem
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline Image & boats & bridge & camera & couple & flag & hill & house & man & peppers & saturn \\ \hline \multicolumn{10}{|c|}{\(\sigma=5\)} \\ \hline IDA & 35.8209 & **33.8451** & 37.0098 & 36.218 & 37.401 & **34.7144** & **38.4773** & **34.5367** & **36.9739** & **45.1132** \\ \hline D-FIDA & **36.0732** & 33.736 & **37.2867** & 36.2702 & **37.6331** & 34.4821 & 38.3447 & 34.2866 & 36.6743 & 44.9589 \\ \hline W-FIDA & 36.0264 & 33.6706 & 37.2478 & **36.281** & 37.6173 & 34.5156 & 38.3617 & 34.3021 & 36.6795 & 44.8634 \\ \hline \multicolumn{10}{|c|}{\(\sigma=10\)} \\ \hline IDA & 31.8819 & 29.2484 & 33.1915 & 31.9839 & 34.7869 & 30.7138 & 35.1479 & 30.1071 & 33.1833 & **41.91** \\ \hline D-FIDA & **32.1274** & **29.55** & **33.4604** & **32.4706** & **34.8465** & **31.2209** & **35.1966** & **30.4129** & 33.4513 & 41.8671 \\ \hline W-FIDA & 32.108 & 29.5456 & 33.4544 & 32.4692 & 34.7861 & 31.1827 & 35.1856 & 30.4106 & **33.4597** & 41.715 \\ \hline \multicolumn{10}{|c|}{\(\sigma=20\)} \\ \hline IDA & 28.4247 & 25.9532 & **29.7566** & 28.5431 & 31.4189 & 27.7433 & **32.2804** & 26.5139 & 29.7204 & 38.1508 \\ \hline D-FIDA & **28.5858** & **26.0796** & 29.6381 & **28.9446** & **31.7539** & **28.0874** & 32.2439 & **27.1913** & **30.1773** & **38.3859** \\ \hline W-FIDA & 28.5386 & 26.0367 & 29.6253 & 28.904 & 31.6136 & 28.0541 & 32.2276 & 27.186 & 30.1156 & 38.2836 \\ \hline \end{tabular}
\end{table}
Table 2: PSNR (dB) for Gain Correction Problem
Figure 2: Ground-truth “hill” image.
In this experiment, we use a strong Gaussian blur with a small amount of noise and compare standard IDA to FIDA, both using BM3D. In this example, FIDA uses the diagonal (Fourier-basis) filtering. The regularization parameter was adjusted to produce the best result for each method (for fair comparison of the best performances). FIDA provides a visually better result with significantly higher PSNR. Interestingly, both IDA and FIDA introduce a slight striping artifact in the woman's dress, which we believe is related to the BM3D denoiser.
## 7 Conclusions
This paper proposes a simple linear filtering operation for gradient updates in iterative denoising algorithms for linear inverse problems. The filtering accounts for the specific linear operator involved in the problem, and eliminates the need to adapt the denoiser to the problem. The derivation of the Filtered Iterative Denoising Algorithm (FIDA) is based on a denoising by thresholding in an appropriate transform domain, such as the wavelet domain. This may be a reasonable approximation to the function of many black-box denoisers, which motivates the use of FIDA with a variety of denoising methods.
There are a number of interesting directions for future research. First and foremost is developing a deeper understanding of how the choice of the basis \(\mathbf{\Psi}\) used for filtering interacts with specific black-box denoisers and how this affects overall performance. Recall that the filtering operation multiplies the usual gradient \(\mathbf{A}^{\top}(\mathbf{Ax}-\mathbf{y})\) by the matrix \(\mathbf{\Psi}\mathbf{\Delta}^{\dagger}\mathbf{\Psi}^{\top}\). More generally, arbitrary matrices could be used to filter the gradient. One generalization of the current filter \(\mathbf{\Psi}\mathbf{\Delta}^{\dagger}\mathbf{\Psi}^{\top}\) is to replace \(\mathbf{\Delta}^{\dagger}\) with another diagonal matrix. Recall that the diagonal elements are \(\delta_{i}^{-1}\), if \(\delta_{i}\neq 0\), and \(0\) otherwise. Instead, we could set the diagonal elements to be \(\delta_{i}/(|\delta_{i}|^{2}+\tau)\) with a small \(\tau>0\), in the spirit of Wiener filtering. The parameter \(\tau\) plays the role of an additional hyperparameter that can be tuned to improve performance. The filtering matrix could even be learned from data for a particular inverse problem domain, which is arguably simpler than adapting or learning a black-box denoiser from scratch. Finally, our experiments focused on problems in satellite image denoising in the presence of heterogeneous sensor gains and image deblurring/deconvolution. Future work should explore the potential of FIDA in a broader range of applications, possibly including tomographic reconstruction and superresolution. This paper also focuses on the squared error loss and a particular form of iterative denoising. The filtering approach proposed here might be applied to other formulations such as regularization-by-denoising (RED) (Romano et al., 2017), deep unfolding (Chen et al., 2018), and multiagent consensus equilibrium (MACE) (Buzzard et al., 2018; Ryu et al., 2019; Hurault et al., 2022).
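To make the filtering step concrete, the following is a minimal NumPy sketch (ours, not the reference implementation) of a single filtered gradient-plus-denoise iteration for a circular-convolution blur, where the FFT of the blur kernel plays the role of the eigenvalues \(\delta_{i}\) and `denoise` stands in for an arbitrary black-box denoiser; the names `psf_fft`, `step`, `lam`, and `tau` are our own, with `tau` implementing the Wiener-style variant discussed above.

```python
import numpy as np

def fida_step(x, y, psf_fft, denoise, step=1.0, lam=0.1, tau=1e-3):
    """One filtered gradient + denoise iteration for a circular-convolution blur.

    x, y    : current estimate and observed image (2-D arrays)
    psf_fft : FFT of the blur kernel, i.e. the eigenvalues delta_i of A
    denoise : black-box denoiser called as denoise(image, lam)
    tau     : Wiener-style damping; small tau approximates the pseudo-inverse Delta^+
    """
    X = np.fft.fft2(x)
    # Usual gradient of 0.5*||Ax - y||^2, computed in the Fourier domain: A^T (Ax - y)
    grad_fft = np.conj(psf_fft) * (psf_fft * X - np.fft.fft2(y))
    # Filter the gradient: multiply elementwise by delta_i / (|delta_i|^2 + tau)
    filt = psf_fft / (np.abs(psf_fft) ** 2 + tau)
    filtered_grad = np.real(np.fft.ifft2(filt * grad_fft))
    # Gradient step followed by the black-box denoiser
    return denoise(x - step * filtered_grad, lam)
```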
Figure 4: PSNR (dB) vs iteration for blurred “hill” image. From left to right, the plots show the PSNR per iteration when \(\sigma=0.2\), \(\sigma=1\), and \(\sigma=5\). The denoising parameter, \(\lambda_{\gamma}\), for each method is assumed to be the \(\lambda_{\gamma}\) that produced the best results.
Figure 3: PSNR (dB) vs denoising parameter for blurred “hill” image. From left to right, the plots show results when \(\sigma=0.2\), \(\sigma=1\), and \(\sigma=5\). The results in Table 1 report the PSNR using the best value of \(\lambda_{\gamma}\) for each method.
Figure 5: Noisy deblurred images. From left to right we have the blurry and noisy image, denoised by IDA, denoised by D-FIDA, and denoised by W-FIDA. From top to bottom we have \(\sigma=0.2\), \(\sigma=1\), and \(\sigma=5\).
Figure 6: Severe deblurring example. Original and blurred image (top), PSNR 26.5dB. Standard IDA (bottom right) and proposed Filtered IDA (bottom left), PSNR = 28dB. The regularization parameter was adjusted to obtain the highest PSNR in each case.
|
2303.08968
|
A parsimonious neural network approach to solve portfolio optimization
problems without using dynamic programming
|
We present a parsimonious neural network approach, which does not rely on
dynamic programming techniques, to solve dynamic portfolio optimization
problems subject to multiple investment constraints. The number of parameters
of the (potentially deep) neural network remains independent of the number of
portfolio rebalancing events, and in contrast to, for example, reinforcement
learning, the approach avoids the computation of high-dimensional conditional
expectations. As a result, the approach remains practical even when considering
large numbers of underlying assets, long investment time horizons or very
frequent rebalancing events. We prove convergence of the numerical solution to
the theoretical optimal solution of a large class of problems under fairly
general conditions, and present ground truth analyses for a number of popular
formulations, including mean-variance and mean-conditional value-at-risk
problems. We also show that it is feasible to solve Sortino ratio-inspired
objectives (penalizing only the variance of wealth outcomes below the mean) in
dynamic trading settings with the proposed approach. Using numerical
experiments, we demonstrate that if the investment objective functional is
separable in the sense of dynamic programming, the correct time-consistent
optimal investment strategy is recovered, otherwise we obtain the correct
pre-commitment (time-inconsistent) investment strategy. The proposed approach
remains agnostic as to the underlying data generating assumptions, and results
are illustrated using (i) parametric models for underlying asset returns, (ii)
stationary block bootstrap resampling of empirical returns, and (iii)
generative adversarial network (GAN)-generated synthetic asset returns.
|
Pieter M. van Staden, Peter A. Forsyth, Yuying Li
|
2023-03-15T22:37:33Z
|
http://arxiv.org/abs/2303.08968v1
|
A parsimonious neural network approach to solve portfolio optimization problems without using dynamic programming
###### Abstract
We present a parsimonious neural network approach, which does not rely on dynamic programming techniques, to solve dynamic portfolio optimization problems subject to multiple investment constraints. The number of parameters of the (potentially deep) neural network remains independent of the number of portfolio rebalancing events, and in contrast to, for example, reinforcement learning, the approach avoids the computation of high-dimensional conditional expectations. As a result, the approach remains practical even when considering large numbers of underlying assets, long investment time horizons or very frequent rebalancing events. We prove convergence of the numerical solution to the theoretical optimal solution of a large class of problems under fairly general conditions, and present ground truth analyses for a number of popular formulations, including mean-variance and mean-conditional value-at-risk problems. We also show that it is feasible to solve Sortino ratio-inspired objectives (penalizing only the variance of wealth outcomes below the mean) in dynamic trading settings with the proposed approach. Using numerical experiments, we demonstrate that if the investment objective functional is separable in the sense of dynamic programming, the correct time-consistent optimal investment strategy is recovered, otherwise we obtain the correct pre-commitment (time-inconsistent) investment strategy. The proposed approach remains agnostic as to the underlying data generating assumptions, and results are illustrated using (i) parametric models for underlying asset returns, (ii) stationary block bootstrap resampling of empirical returns, and (iii) generative adversarial network (GAN)-generated synthetic asset returns.
**Keywords**: Asset allocation, portfolio optimization, neural network, dynamic programming
**JEL classification**: G11, C61
## 1 Introduction
We present, and analyze the convergence of, a parsimonious and flexible neural network approach to obtain the numerical solution of a large class of dynamic (i.e. multi-period) portfolio optimization problems that can be expressed in the following form,
\[\inf_{\xi\in\mathbb{R}}\inf_{\mathcal{P}\in\mathcal{A}}\left\{\,E_{\mathcal{P} }^{t_{0},w_{0}}\left[F\left(W\left(T\right),\xi\right)+G\left(W\left(T\right), E_{\mathcal{P}}^{t_{0},w_{0}}\left[W\left(T\right)\right],w_{0},\xi\right)\, \right]\,\right\}. \tag{1.1}\]
While rigorous definitions and assumptions are discussed in subsequent sections, here we simply note that in general, \(F:\mathbb{R}^{2}\rightarrow\mathbb{R}\) and \(G:\mathbb{R}^{4}\rightarrow\mathbb{R}\) denote some continuous functions and \(\xi\in\mathbb{R}\) some auxiliary variable, with \(T>0\) denoting the investment time horizon, \(W\left(t\right),t\in[t_{0},T]\), the controlled wealth process, and \(\mathcal{P}\) representing the investment strategy (or control) implemented over \([t_{0},T]\). Typically, \(\mathcal{P}\) specifies the amount or fraction of wealth to invest in each of (a potentially large number of) the underlying assets at each portfolio rebalancing event, which in practice occurs at some discrete subset of rebalancing times in \([t_{0},T]\). \(\mathcal{A}\) denotes the set of admissible investment strategies encoding the (possibly multiple) investment constraints faced by the investor. Finally, \(E_{\mathcal{P}}^{t_{0},w_{0}}\left[\cdot\right]\) denotes the expectation given control \(\mathcal{P}\) and initial wealth \(W\left(t_{0}\right)=w_{0}\).
Although (1.1) is written for objective functions involving the terminal portfolio wealth \(W\left(T\right)\), the approach and convergence analysis could be generalized without difficulty to objective functions that are wealth path-dependent, i.e. functions of \(\left\{W\left(t\right):t\in\mathcal{T}\right\}\) for some subset \(\mathcal{T}\subseteq\left[t_{0},T\right]\) - see Forsyth et al. (2022); Van Staden et al. (2022a) for examples. However, since a sufficiently rich class of problems is of the form (1.1), this will remain the main focus of this paper.
The proposed approach does not rely on the separability of the objective functional in (1.1) in the sense of dynamic programming, remains agnostic as to the underlying data generation assumptions, and is sufficiently flexible such that practical considerations such as multiple investment constraints and discrete portfolio rebalancing can be incorporated without difficulty.
Leaving the more formal treatment for subsequent sections, for introductory purposes we highlight some specific examples of problems of the form (1.1):
* Utility maximization (see for example Vigna (2014)), in which case there is no outer optimization problem and \(G\equiv 0\), while \(w\to U\left(w\right)\) denotes the investor's utility function, so that (1.1) therefore reduces to \[\sup_{\mathcal{P}\in\mathcal{A}}\left\{E_{\mathcal{P}}^{t_{0},w_{0}}\left[U \left(W\left(T\right)\right)\right]\right\}.\] (1.2)
* Mean-variance (MV) optimization (see e.g. Li and Ng (2000); Zhou and Li (2000)), with \(\rho>0\) denoting the scalarization (or risk aversion) parameter, where the problem \[\sup_{\mathcal{P}\in\mathcal{A}}\left\{E_{\mathcal{P}}^{t_{0},w_{0}}\left[W \left(T\right)\right]-\rho\cdot Var_{\mathcal{P}}^{t_{0},w_{0}}\left[W\left(T \right)\right]\right\},\] (1.3) can also be written in the general form (1.1).
* Mean-Conditional Value-at-Risk (CVaR) optimization, in which case we have both an inner and an outer optimization problem (see e.g. Forsyth (2020); Miller and Yang (2017)), resulting in a problem of the form \[\inf_{\xi\in\mathbb{R}}\inf_{\mathcal{P}\in\mathcal{A}}\left\{E_{\mathcal{P}}^{t_{0},w_{0}}\left[F\left(W\left(T\right),\xi\right)\right]\right\},\] (1.4) for a particular choice of the function \(F\) (see (2.15) below).
* To illustrate the flexibility and generality of the proposed approach, we also consider a "mean semi-variance" portfolio optimization problem that is inspired by the popular Sortino ratio (Bodie et al. (2014)) in the case of one-period portfolio analysis, where only the variance of downside outcomes relative to the mean is penalized. In the case of dynamic trading strategies, this suggests an objective function of the form \[\sup_{\mathcal{P}\in\mathcal{A}}\left\{E_{\mathcal{P}}^{t_{0},w_{0}}\left[W \left(T\right)-\rho\cdot\left(\min\left\{W\left(T\right)-E_{\mathcal{P}}^{t_ {0},w_{0}}\left[W\left(T\right)\right],0\right\}\right)^{2}\right]\right\},\] (1.5) where, as in the case of (1.3), the parameter \(\rho>0\) encodes the trade-off between risk and return. Note that (1.5) is not separable in the sense of dynamic programming, and in the absence of embedding results (analogous to those of Li and Ng (2000); Zhou and Li (2000) in the case of MV optimization (1.3)), problem (1.5) cannot be solved using traditional dynamic programming-based methods.
However, we emphasize that (1.2)-(1.5) are only a selection of examples, and the proposed approach and theoretical analysis remains applicable to problems that can be expressed in the general form (1.1).
Portfolio optimization problems of the form (1.1) can give rise to investment strategies that are not time-consistent due to the presence of the (possibly non-linear) function \(G\) (Bjork et al. (2021)), since the objective in (1.1) is then potentially not separable in the sense of dynamic programming (see for example (1.3) or (1.5)). This gives rise to two related problems: (i) since (1.1) cannot be solved using a dynamic programming-based approach, some other solution methodology has to be implemented, or some re-interpretation of the problem or of the concept of "optimality" might be required (see for example Bjork and Murgoci (2014); Vigna (2022)); (ii) if the investment strategies are time-inconsistent, this can raise questions as to whether they are feasible to implement as practical investment strategies.
We make the following general observations:
* It may be desirable to avoid using dynamic programming (DP) even if (1.1) _can_ be solved using DP techniques, such as in the special case where \(G\equiv 0\) in (1.1) and the investment strategies are time-consistent. For example, it is well known that DP has an associated "curse of dimensionality", in that as the
number of state variables increases linearly, the computational burden increases exponentially (Fernandez-Villaverde et al. (2020); Han and Weinan (2016)). In addition, since DP techniques necessarily incur estimation errors at each time step, significant error amplification can occur, which is further exacerbated in high-dimensional settings (see for example Li et al. (2020); Tsang and Wong (2020); Wang and Foster (2020)).
However, instead of relying on DP-based techniques and attempting to address the challenges of dimensionality using machine learning techniques (see for example Bachouch et al. (2022); Dixon et al. (2020); Fernandez-Villaverde et al. (2020); Gao et al. (2020); Henry-Labordere (2017); Hure et al. (2021); Lucarelli and Borrotti (2020); Park et al. (2020)), the proposed method fundamentally avoids DP techniques altogether. This is especially relevant in our setting, since we have shown that in the case of portfolio optimization problems specifically, DP can be _unnecessarily_ high-dimensional even in simple settings (see Van Staden et al. (2022b)). This occurs since the objective functional (or performance criteria (Oksendal and Sulem (2019)) ) is typically high-dimensional while the optimal investment strategy (the fundamental quantity of concern) remains relatively low-dimensional. The proposed method therefore forms part of the significant recent interest in developing machine learning techniques to solve multi-period portfolio optimization problems that avoids using DP techniques altogether (see for example Li and Forsyth (2019); Ni et al. (2022); Tsang and Wong (2020); Van Staden et al. (2022b)).
* Time-inconsistent problems naturally arise in financial applications (see Bjork et al. (2021) for numerous examples), and as a result their solution is often an area of active research due to the unique challenges involved in solving these problems without resorting to DP techniques. Examples include the mean-variance problem, which remained an open problem for decades until the solution using the embedding technique of Li and Ng (2000); Zhou and Li (2000). As a result, being able to obtain a numerical solution to problems of the form (1.1) directly is potentially very valuable for research. The solution of time-inconsistent problems is also of practical interest, since in many cases, there exists an induced time consistent objective function (Forsyth (2020); Strub et al. (2019a,b)). The optimal policy for this induced time consistent objective function is identical to the pre-commitment policy at time zero. The induced time consistent strategy is, of course, implementable (Forsyth (2020)), in the sense that the investor has no incentive to deviate from the strategy determined at time zero, at later times. An alternative approach to handling time-inconsistent problems is to search for the equilibrium control (Bjork et al. (2021)). A fascinating result obtained in Bjork and Murgoci (2010) is that for every equilibrium control, there exists a standard, time consistent problem which has the same control, under a different objective function. This essentially means that the question of time-consistency is often a matter of perspective, since there may be alternative objective functions which give rise to the same pre-commitment control, yet are time-consistent. In fact, other subtle issues arise in comparing pre-commitment and time consistent controls; see Vigna (2020, 2022) for further discussion. Furthermore, over very short time horizons such as those encountered in optimal trade execution, time consistency or its absence may not be of much concern to the investor or market participant (see for example Forsyth et al. (2011); Tse et al. (2013)). In addition, as noted by Bernard and Vanduffel (2014), if the strategy is realized in an investment product sold to a retail investor, then the optimal policy from the investor's point of view is in fact of pre-commitment type, since the retail client does not herself trade in the underlying assets during the lifetime of the contract.
As a result of these observations, we will consider problem (1.1) in its general form. Our method builds on and formalizes the initial results described in Li and Forsyth (2019) where a shallow NN was applied to a portfolio optimization problem with an objective that is separable in the sense of DP. The contributions of this paper are as follows:
1. We present a flexible neural network (NN) approach to solve problems of the form (1.1) that does not rely on DP techniques. Our approach only requires the solution of a _single_ optimization problem, and therefore avoids the error amplification problems associated with the time-recursion in DP-based techniques, including for example Reinforcement Learning algorithms such as Q-learning (see for example Dixon et al. (2020); Gao et al. (2020); Park et al. (2020)) or other algorithms relying at some level on the DP principle
for a time-stepping backward recursion (see for example Bachouch et al. (2022); Van Heeswijk and Poutre (2019)). Perhaps the best descriptor of our approach is Policy Function Approximation, in the taxonomy in Powell (2023).
We make very limited assumptions regarding the underlying asset dynamics. In particular, if underlying asset (and by extension wealth) dynamics are specified, this can be incorporated as easily as the case where the underlying dynamics can only be observed without any parametric assumptions.
The proposed solution methodology is _parsimonious_, in that the number of parameters does not scale with the number of rebalancing events. This contrasts the proposed methodology with for example that of Han and Weinan (2016); Hure et al. (2021); Tsang and Wong (2020), and ensures that our approach remains feasible even for problems with very long time horizons (for example the accumulation phase of a pension fund - see Forsyth et al. (2019)) or with shorter time horizon but with frequent trading/rebalancing (for example the trade execution problems encountered in Forsyth et al. (2011)). The solution approach only places very weak requirements on the form of the investment objective in (1.1). In addition, we find that using relatively shallow neural networks (at most two hidden layers) in our approach achieve very accurate results in ground truth testing, thereby ensuring that the resulting NN in the proposed approach is relatively easy and efficient to train since it is less likely to be susceptible to problems of vanishing or exploding gradients associated with very deep neural networks (Goodfellow et al. (2016)).
2. We analyze the convergence of the proposed approach, and show that the theoretical optimal investment strategy of (1.1), provided it exists, can be attained by the numerical solution.
3. Finally, we present ground truth analyses confirming that the proposed approach is very effective in solving portfolio optimization problems of the form (1.1). The results illustrate numerically that if (1.1) is not separable in the sense of DP, our approach recovers the correct pre-commitment (time-inconsistent) optimal control, otherwise it recovers the correct time-consistent optimal control. To emphasize that the approach remains agnostic to the underlying data generation assumptions, results are illustrated using (i) parametric models for asset dynamics, (ii) stationary block bootstrap resampling of empirical asset returns, and (iii) generative adversarial network (GAN)-generated synthetic asset returns.
The remainder of the paper is organized as follows: Section 2 presents the problem formulation, while Section 3 provides a summary of the proposed approach, with additional technical and practical details provided in Appendix A and Appendix B. Section 4 presents the convergence analysis of the proposed approach. Finally, Section 5 provides ground truth analyses, with Section 6 concluding the paper and discussing possible avenues for future research.
## 2 Problem formulation
We start by formulating portfolio optimization problems of the form (1.1) more rigorously in a setting of discrete portfolio rebalancing and multiple investment constraints. Throughout, we work on a filtered probability space \(\left(\Omega,\mathcal{F},\left\{\mathcal{F}\left(t\right)\right\}_{t\in[t_{0},T]},\mathbb{P}\right)\) satisfying the usual conditions, with \(\mathbb{P}\) denoting the actual (and not the risk-neutral) probability measure.
Let \(\mathcal{T}\) denote the set of \(N_{rb}\) discrete portfolio rebalancing times in \(\left[t_{0}=0,T\right]\), which we assume to be equally-spaced to lighten notation,
\[\mathcal{T} = \left\{\left.t_{m}=m\Delta t\right|m=0,...,N_{rb}-1\right\}, \qquad\Delta t=T/N_{rb}, \tag{2.1}\]
where we observe that the last rebalancing event occurs at time \(t_{N_{rb}-1}=T-\Delta t\).
At each rebalancing time \(t_{m}\in\mathcal{T}\), the investor observes the \(\mathcal{F}\left(t_{m}\right)\)-measurable vector \(\boldsymbol{X}\left(t_{m}\right)=\left(X_{i}\left(t_{m}\right):i=1,...,\eta_{X}\right)\in\mathbb{R}^{\eta_{X}}\), which can be interpreted informally as the information taken into account by the investor in reaching their asset allocation decision. As a concrete example, we assume below that \(\boldsymbol{X}\left(t_{m}\right)\) includes at least the wealth available for investment, an assumption which can be rigorously justified using analytical results (see for example Van Staden et al. (2022)).
Given \(\boldsymbol{X}\left(t_{m}\right)\), the investor then rebalances a portfolio of \(N_{a}\) assets to new positions given by the vector
\[\boldsymbol{p}_{m}\left(t_{m},\boldsymbol{X}\left(t_{m}\right)\right) = \left(p_{m,i}\left(t_{m},\boldsymbol{X}\left(t_{m}\right)\right) :i=1,..,N_{a}\right)\in\mathbb{R}^{N_{a}}, \tag{2.2}\]
where \(p_{m,i}\left(t_{m},\mathbf{X}\left(t_{m}\right)\right)\) denotes the fraction of wealth \(W\left(t_{m}\right)\) invested in the \(i\)th asset at rebalancing time \(t_{m}\). The subscript "\(m\)" in the notation \(\mathbf{p}_{m}\) emphasizes that in general, each rebalancing time \(t_{m}\in\mathcal{T}\) could be associated with potentially a different function \(\mathbf{p}_{m}:\mathbb{R}^{\eta_{X}+1}\rightarrow\mathbb{R}^{N_{a}}\), while the subscript is removed below when we consider a single function that is simply evaluated at different times, in which case we will write \(\mathbf{p}:\mathbb{R}^{\eta_{X}+1}\rightarrow\mathbb{R}^{N_{a}}\).
For purposes of concreteness, we assume that the investor is subject to the constraints of (i) no short-selling and (ii) no leverage, although the proposed methodology can be adjusted without difficulty to treat different constraint formulations1. For illustrative purposes, we therefore assume that each allocation (2.2) is only allowed to take values in the \(\left(N_{a}-1\right)\)-dimensional probability simplex \(\mathcal{Z}\),
Footnote 1: As discussed in Section 3 and Appendix A, adjustments to the output layer of the neural network may be required.
\[\mathcal{Z} = \left\{\left(y_{1},...,y_{N_{a}}\right)\in\mathbb{R}^{N_{a}}: \sum_{i=1}^{N_{a}}y_{i}=1\text{ and }y_{i}\geq 0\text{ for all }i=1,...,N_{a}\right\}. \tag{2.3}\]
In this setting, an investment strategy or control \(\mathcal{P}\) applicable to \(\left[t_{0},T\right]\) is therefore of the form,
\[\mathcal{P} = \left\{\mathbf{p}_{m}\left(t_{m},\mathbf{X}\left(t_{m}\right)\right)= \left(p_{m,i}\left(t_{m},\mathbf{X}\left(t_{m}\right)\right):i=1,..,N_{a}\right):t _{m}\in\mathcal{T}\right\}, \tag{2.4}\]
while the set of admissible controls \(\mathcal{A}\) is defined by
\[\mathcal{A} = \left\{\left.\mathcal{P}=\left\{\mathbf{p}_{m}\left(t_{m},\mathbf{X} \left(t_{m}\right)\right):t_{m}\in\mathcal{T}\right\}\right|\mathbf{p}_{m}\left(t _{m},\mathbf{X}\left(t_{m}\right)\right)\in\mathcal{Z},\forall t_{m}\in\mathcal{T }\right\}. \tag{2.5}\]
The randomness in the system is introduced through the returns of the underlying assets. Specifically, let \(R_{i}\left(t_{m}\right)\) denote the \(\mathcal{F}\left(t_{m+1}\right)\)-measurable return observed on asset \(i\) over the interval \(\left[t_{m},t_{m+1}\right]\). We make no assumptions regarding the underlying asset dynamics, but at a minimum, we do require \(\left(\mathbb{P}\right)\) integrability, i.e. \(\mathbb{E}\left|R_{i}\left(t_{m}\right)\right|<\infty\) for all \(i\in\left\{1,...,N_{a}\right\}\) and \(m\in\left\{0,...,N_{rb}-1\right\}\). Informally, we will refer to the set
\[\mathbf{Y} = \left\{\left(Y_{i}\left(t_{m}\right):=1+R_{i}\left(t_{m}\right):i= 1,...,N_{a}\right)^{\top}:m\in\left\{0,...,N_{rb}-1\right\}\right\} \tag{2.6}\]
as the _path_ of (joint) asset returns over the investment time horizon \(\left[t_{0},T\right]\).
To clarify the subsequent notation, for any functional \(\psi\left(t\right),t\in\left[t_{0},T\right]\) we will use the notation \(\psi\left(t^{-}\right)\) and \(\psi\left(t^{+}\right)\) as shorthand for the one-sided limits \(\psi\left(t^{-}\right)=\lim_{\epsilon\downarrow 0}\psi\left(t-\epsilon\right)\) and \(\psi\left(t^{+}\right)=\lim_{\epsilon\downarrow 0}\psi\left(t+\epsilon\right)\), respectively.
Given control \(\mathcal{P}\in\mathcal{A}\), asset returns \(\mathbf{Y}\), initial wealth \(W\left(t_{0}^{-}\right)\coloneqq w_{0}>0\) and a (non-random) cash contribution schedule \(\left\{q\left(t_{m}\right):t_{m}\in\mathcal{T}\right\}\), the portfolio wealth dynamics for \(m=0,...,N_{rb}-1\) are given by the general recursion
\[W\left(t_{m+1}^{-};\mathcal{P},\mathbf{Y}\right) = \left[W\left(t_{m}^{-};\mathcal{P},\mathbf{Y}\right)+q\left(t_{m} \right)\right]\cdot\sum_{i=1}^{N_{a}}p_{m,i}\left(t_{m},\mathbf{X}\left(t_{m} \right)\right)\cdot Y_{i}\left(t_{m}\right). \tag{2.7}\]
Note that we write \(W\left(u\right)=W\left(u;\mathcal{P},\mathbf{Y}\right)\) to emphasize the dependence of wealth on the control \(\mathcal{P}\) and the (random) path of asset returns in \(\mathbf{Y}\) that relates to the time period \(t\in\left[t_{0},u\right]\). In other words, despite using \(\mathbf{Y}\) in the notation for simplicity, \(W\left(u;\mathcal{P},\mathbf{Y}\right)\) is \(\mathcal{F}\left(u\right)\)-measurable. Finally, since there are no contributions or rebalancing at maturity, we simply have \(W\left(t_{N_{rb}}^{-}\right)=W\left(T^{-}\right)=W\left(T\right)=W\left(T; \mathcal{P},\mathbf{Y}\right)\).
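For illustration, the recursion (2.7) is straightforward to simulate along a batch of return paths; the following minimal NumPy sketch (ours, not part of the paper, with the hypothetical helper `allocation(m, W)` standing in for the control \(\mathbf{p}_{m}\)) computes terminal wealth for each path.

```python
import numpy as np

def simulate_wealth(Y, q, w0, allocation):
    """Simulate the wealth recursion (2.7) along a batch of return paths.

    Y          : array of shape (n_paths, N_rb, N_a) with gross returns Y_i(t_m)
    q          : array of length N_rb with cash contributions q(t_m)
    w0         : initial wealth W(t_0^-)
    allocation : function (m, W) -> array of shape (n_paths, N_a); each row lies in Z
    """
    n_paths, N_rb, N_a = Y.shape
    W = np.full(n_paths, float(w0))
    for m in range(N_rb):
        p = allocation(m, W)                          # fractions of wealth per asset
        W = (W + q[m]) * np.sum(p * Y[:, m, :], axis=1)
    return W                                          # terminal wealth W(T) per path
```

For instance, a constant equal-weight strategy corresponds to `allocation = lambda m, W: np.full((len(W), N_a), 1.0 / N_a)`.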
### Investment objectives
Given this general investment setting and wealth dynamics (2.7), our goal is to solve dynamic portfolio optimization problems of the general form
\[\inf_{\xi\in\mathbb{R}}\inf_{\mathcal{P}\in\mathcal{A}}J\left(\mathcal{P},\xi;t _{0},w_{0}\right), \tag{2.8}\]
where, for some given continuous functions \(F:\mathbb{R}^{2}\rightarrow\mathbb{R}\) and \(G:\mathbb{R}^{4}\rightarrow\mathbb{R}\), the objective functional \(J\) is given by
\[J\left(\mathcal{P},\xi;t_{0},w_{0}\right) = E_{\mathcal{P}}^{t_{0},w_{0}}\left[F\left(W\left(T;\mathcal{P}, \mathbf{Y}\right),\xi\right)+G\left(W\left(T;\mathcal{P},\mathbf{Y}\right),E_{ \mathcal{P}}^{t_{0},w_{0}}\left[W\left(T;\mathcal{P},\mathbf{Y}\right)\right],w_{0 },\xi\right)\right]. \tag{2.9}\]
Note that the expectations \(E^{t_{0},w_{0}}\left[\cdot\right]\) in (2.9) are taken over \(\mathbf{Y}\), given initial wealth \(W\left(t_{0}^{-}\right)=w_{0}\), control \(\mathcal{P}\in\mathcal{A}\) and auxiliary variable \(\xi\in\mathbb{R}\). In addition to the assumption of continuity of \(F\) and \(G\), we will make only the minimal assumptions regarding the exact properties of \(J\), including that \(\xi\to F\left(\cdot,\xi\right)\) and \(\xi\to G\left(\cdot,\cdot,w_{0},\xi\right)\) are
convex for all admissible controls \(\mathcal{P}\in\mathcal{A}\), and the standard assumption (see for example Bjork et al. (2021)) that an optimal control \(\mathcal{P}^{*}\in\mathcal{A}\) exists.
For illustrative and ground truth analysis purposes, we consider a number of examples of problems of the form (2.8)-(2.9).
As noted in the Introduction, the simplest examples of problems of the form (2.8) arise in the special case where \(G\equiv 0\) and there is no outer optimization problem over \(\xi\), such as in the case of standard utility maximization problems. As concrete examples of this class of objective functions, we will consider the quadratic target minimization (or quadratic utility) described in for example Vigna (2014); Zhou and Li (2000),
\[\left(DSQ\left(\gamma\right)\right):\qquad\inf_{\mathcal{P}\in\mathcal{A}} \left\{E^{t_{0},w_{0}}\left[\left(W\left(T;\mathcal{P},\boldsymbol{Y}\right)- \gamma\right)^{2}\right]\right\},\qquad\gamma>0, \tag{2.10}\]
as well as the (closely-related) one-sided quadratic loss minimization used in for example Dang and Forsyth (2016); Li and Forsyth (2019),
\[\left(OSQ\left(\gamma\right)\right):\qquad\inf_{\mathcal{P}\in\mathcal{A}} \left\{E^{t_{0},w_{0}}\left[\left(\min\left\{W\left(T;\mathcal{P}, \boldsymbol{Y}\right)-\gamma,0\right\}\right)^{2}-\epsilon\cdot W\left(T; \mathcal{P},\boldsymbol{Y}\right)\right]\right\},\qquad\gamma>0. \tag{2.11}\]
The term \(\epsilon W(\cdot)\) in equation (2.11) ensures that the problem remains well-posed2 in the event that \(W(t)\gg\gamma\). Observe that problems of the form (2.10) or (2.11) are separable in the sense of dynamic programming, so that the resulting optimal control is therefore time-consistent.
Footnote 2: Although this is a mathematical necessity (see e.g. (Li and Forsyth, 2019)), in practice, if we use a very small value of \(\epsilon\), then this has no perceptible effect on the summary statistics. In the numerical results of Section 5, we use \(\epsilon=10^{-6}\); see Appendix B for a discussion.
As a classical example of the case where \(G\) is nonlinear and the objective functional (2.9) is not separable in the sense of dynamic programming, we consider the mean-variance (MV) objective with scalarization or risk-aversion parameter \(\rho>0\) (see for example Bjork et al. (2017)),
\[\left(MV\left(\rho\right)\right): \sup_{\mathcal{P}\in\mathcal{A}}\left\{E^{t_{0},w_{0}}\left[W \left(T;\mathcal{P},\boldsymbol{Y}\right)\right]-\rho\cdot Var^{t_{0},w_{0}} \left[W\left(T;\mathcal{P},\boldsymbol{Y}\right)\right]\right\},\qquad\rho>0. \tag{2.12}\] \[= \sup_{\mathcal{P}\in\mathcal{A}}E_{\mathcal{P}}^{t_{0},w_{0}} \left[W\left(T;\mathcal{P},\boldsymbol{Y}\right)-\rho\cdot\left(W\left(T; \mathcal{P},\boldsymbol{Y}\right)-E_{\mathcal{P}}^{t_{0},w_{0}}\left[W\left(T ;\mathcal{P},\boldsymbol{Y}\right)\right]\right)^{2}\right].\]
Note that issues relating to the time-inconsistency of the optimal control of (2.12) are discussed in Remark 2.1 below, along with the relationship between (2.10) and (2.12).
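For intuition, sample-based estimators of these objectives given simulated terminal wealth outcomes are immediate; the minimal sketch below (ours, not from the paper) also makes explicit that the sample mean enters inside the squared deviation in (2.12), which is the source of its non-separability.

```python
import numpy as np

def dsq_objective(WT, gamma):
    # (2.10): quadratic target minimization
    return np.mean((WT - gamma) ** 2)

def osq_objective(WT, gamma, eps=1e-6):
    # (2.11): one-sided quadratic loss with the small well-posedness term -eps*W(T)
    return np.mean(np.minimum(WT - gamma, 0.0) ** 2 - eps * WT)

def mv_objective(WT, rho):
    # (2.12): mean-variance (to be maximized); the sample mean appears inside
    # the squared deviation, reflecting the non-separability of the objective
    return np.mean(WT) - rho * np.var(WT)
```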
As an example of a problem involving both the inner and outer optimization in (2.8), we consider the Mean-Conditional Value-at-Risk (or Mean-CVaR) problem, subsequently abbreviated as the MCV problem. First, as a measure of tail risk, the CVaR at level \(\alpha\), or \(\alpha\)-CVaR, is the expected value of the worst \(\alpha\) percent of wealth outcomes, with typical values being \(\alpha\in\{1\%,5\%\}\). As in Forsyth (2020), a _larger_ value of the CVaR is preferable to a smaller value, since our definition of \(\alpha\)-CVaR is formulated in terms of the terminal _wealth_, not in terms of the _loss_. Informally, if the distribution of terminal wealth \(W\left(T\right)\) is continuous with PDF \(\hat{\psi}\), then the \(\alpha\)-CVaR in this case is given by
\[\text{CVAR}_{\alpha} = \frac{1}{\alpha}\int_{-\infty}^{w_{\alpha}^{*}}W\left(T\right) \cdot\hat{\psi}\left(W\left(T\right)\right)\cdot dW\left(T\right), \tag{2.13}\]
where \(w_{\alpha}^{*}\) is the corresponding Value-at-Risk (VaR) at level \(\alpha\) defined such that \(\int_{-\infty}^{w_{\alpha}^{*}}\hat{\psi}\left(W\left(T\right)\right)dW\left(T \right)=\alpha\). We follow for example Forsyth (2020) in defining the MCV problem with scalarization parameter \(\rho>0\) formally as
\[\sup_{\mathcal{P}\in\mathcal{A}}\left\{\rho\cdot E^{t_{0},w_{0}}\left[W\left(T \right)\right]+\text{CVAR}_{\alpha}\right\},\qquad\rho>0. \tag{2.14}\]
However, instead of (2.13), we use the definition of CVaR from Rockafellar and Uryasev (2002) that is applicable to more general terminal wealth distributions, so that the MCV problem definition used subsequently aligns with the definition given in Forsyth (2020); Miller and Yang (2017),
\[\left(MCV\left(\rho\right)\right):\qquad\inf_{\xi\in\mathbb{R}}\inf_{\mathcal{P }\in\mathcal{A}}E^{t_{0},w_{0}}\left[-\rho\cdot W\left(T;\mathcal{P}, \boldsymbol{Y}\right)-\xi+\frac{1}{\alpha}\max\left(\xi-W\left(T;\mathcal{P}, \boldsymbol{Y}\right),0\right)\right],\quad\rho>0. \tag{2.15}\]
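In sample form, (2.15) amounts to averaging a simple function of terminal wealth for each candidate \(\xi\); a minimal sketch (ours, not from the paper) is given below.

```python
import numpy as np

def mcv_objective(WT, xi, rho, alpha=0.05):
    # (2.15): Mean-CVaR objective, minimized jointly over the control and xi
    return np.mean(-rho * WT - xi + np.maximum(xi - WT, 0.0) / alpha)
```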
Finally, as noted in the Introduction, we apply the ideas underlying the Sortino ratio, where only the variance of returns below the mean is penalized, to formulate the following objective function for dynamic trading,
\[\left(MSemiV\left(\rho\right)\right):\sup_{\mathcal{P}\in\mathcal{A}}\left\{E_{ \mathcal{P}}^{t_{0},w_{0}}\left[W\left(T;\mathcal{P},\mathbf{Y}\right)-\rho\cdot \left(\min\left\{W\left(T;\mathcal{P},\mathbf{Y}\right)-E_{\mathcal{P}}^{t_{0},w_{0 }}\left[W\left(T;\mathcal{P},\mathbf{Y}\right)\right],0\right\}\right)^{2}\right] \right\}, \tag{2.16}\]
which we refer to as the "Mean- Semi-variance" problem, with scalarization (or risk-aversion) parameter \(\rho>0\).3
Footnote 3: In continuous time, the unconstrained Mean-Semi-variance problem is ill-posed (Jin et al. (2005)). However, we will impose bounded leverage constraints, which is, of course, a realistic condition. This makes problem \(\left(MSemiV\left(\rho\right)\right)\) well posed.
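Analogously, a sample estimator of (2.16) couples each outcome to the sample mean of terminal wealth, as in the following minimal sketch (ours, not from the paper).

```python
import numpy as np

def msemiv_objective(WT, rho):
    # (2.16): mean semi-variance (to be maximized); only downside deviations
    # from the sample mean of terminal wealth are penalized
    downside = np.minimum(WT - np.mean(WT), 0.0)
    return np.mean(WT - rho * downside ** 2)
```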
The following remark discusses issues relating to the possible time-inconsistency of the optimal controls of (2.12), (2.15) and (2.16).
**Remark 2.1**.: (Time-_in_consistency and induced time-consistency) Formally, the optimal controls for problems \(MV\left(\rho\right)\), \(MCV\left(\rho\right)\) and \(MSemiV\left(\rho\right)\) are not time-consistent, but instead are of the pre-commitment type (see Basak and Chabakauri (2010); Bjork and Murgoci (2014); Forsyth (2020)). However, in many cases, there exists an induced time consistent problem formulation which has the same controls at time zero as the pre-commitment problem (see Forsyth (2020); Strub et al. (2019, 2019)).
As a concrete example of induced time-consistency, the embedding result of Li and Ng (2000); Zhou and Li (2000) establishes that the \(DSQ\left(\gamma\right)\) objective is the induced time-consistent objective function associated with the \(MV\left(\rho\right)\) problem, which is a result that we exploit for ground truth analysis purposes in Section 5.
Similarly, there is an induced time consistent objective function for the Mean-CVAR problem \(MCV\left(\rho\right)\) in (2.15) - see Forsyth (2020).
Consequently, when we refer to a strategy as optimal, for either the Mean-CVAR \(\left(MCV\left(\rho\right)\right)\) or Mean-Variance \(\left(MV\left(\rho\right)\right)\) problems, this will be understood to mean that at any \(t>t_{0}\), the investor follows the associated induced time-consistent strategy rather than a pre-commitment strategy.
In the Mean-Semi-variance \(\left(MSemiV\left(\rho\right)\right)\) case as per (2.16), there is no obvious induced time consistent objective function. In this case, we seek the pre-commitment policy.
For a detailed discussion of the many subtle issues involved in the case of time-inconsistency, induced time-consistency, and equilibrium controls, see for example Bjork et al. (2021); Bjork and Murgoci (2014); Forsyth (2020); Strub et al. (2019, 2019); Vigna (2020, 2022).
## 3 Neural network approach
In this section, we provide an overview of the neural network (NN) approach. Additional technical details and practical considerations are discussed in Appendices A and B, while the theoretical justification via convergence analysis will be discussed in Section 4 (and Appendix B).
Recall from (2.2) that \(\mathbf{X}\left(t_{m}\right)\in\mathbb{R}^{\eta_{X}}\) denotes the information taken into account in determining the investment strategy (2.2) at rebalancing time \(t_{m}\). Using the initial experimental results of Li and Forsyth (2019) and the analytical results of Van Staden et al. (2022b) applied to this setting, we assume that \(\mathbf{X}\left(t_{m}\right)\) includes at least the wealth available for investment at time \(t_{m}\), so that
\[W\left(t_{m}^{+};\mathcal{P},\mathbf{Y}\right)\coloneqq W\left(t_{m}^{-};\mathcal{ P},\mathbf{Y}\right)+q\left(t_{m}\right)\ \ \in\ \ \mathbf{X}\left(t_{m}\right),\qquad\forall t_{m}\in\mathcal{T}. \tag{3.1}\]
However, we emphasize that \(\mathbf{X}\left(t_{m}\right)\) may include additional variables in different settings. For example, in non-Markovian settings or in the case of certain solution approaches involving auxiliary variables, it is natural to "lift the state space" by including additional quantities in \(\mathbf{X}\) such as relevant historical quantities related to market variables, or other auxiliary variables - see for example Forsyth (2020); Miller and Yang (2017); Tsang and Wong (2020).
Let \(\mathcal{D}_{\mathbf{\phi}}\subseteq\mathbb{R}^{\eta_{X}+1}\) be the set such that \(\left(t_{m},\mathbf{X}\left(t_{m}\right)\right)\in\mathcal{D}_{\mathbf{\phi}}\) for all \(t_{m}\in\mathcal{T}\). Let \(C\left(\mathcal{D}_{\mathbf{\phi}},\mathcal{Z}\right)\) denote the set of all continuous functions from \(\mathcal{D}_{\mathbf{\phi}}\) to \(\mathcal{Z}\subset\mathbb{R}^{N_{a}}\) (see (2.3)). We will use the notation \(\mathbf{X}^{\ast}\) to denote the information taken into account by the optimal control, since in the simplest case implied by (3.1), we simply have \(\mathbf{X}^{\ast}=W^{\ast}\), where \(W^{\ast}\) denotes the wealth under the optimal strategy. We make the following assumption.
**Assumption 3.1**.: _(Properties of the optimal control) Considering the general form of the problem (2.8), we assume that there exists an optimal feedback control \(\mathcal{P}^{\ast}\in\mathcal{A}\). Specifically, we assume that at each rebalancing time \(t_{m}\in\mathcal{T}\), the time \(t_{m}\) itself together with the information vector under optimal behavior \(\mathbf{X}^{\ast}\left(t_{m}\right)\), which includes at least the wealth \(W^{\ast}\left(t_{m}^{+}\right)\) available for investment (see (3.1)), are sufficient to fully determine the optimal asset allocation \(\mathbf{p}_{m}^{\ast}\left(t_{m},\mathbf{X}^{\ast}\left(t_{m}\right)\right)\)._
_Furthermore, we assume that there exists a continuous function \(\mathbf{p}^{*}\in C\left(\mathcal{D}_{\mathbf{\phi}},\mathcal{Z}\right)\) such that \(\mathbf{p}^{*}_{m}\left(t_{m},\mathbf{X}^{*}\left(t_{m}\right)\right)=\mathbf{p}^{*}\left(t _{m},\mathbf{X}^{*}\left(t_{m}\right)\right)\) for all \(t_{m}\in\mathcal{T}\), so that the optimal control \(\mathcal{P}^{*}\) can be expressed as_
\[\mathcal{P}^{*} = \left\{\mathbf{p}^{*}\left(t_{m},\mathbf{X}^{*}\left(t_{m}\right)\right) :\forall t_{m}\in\mathcal{T}\right\},\quad\text{where}\quad\mathbf{p}^{*}\in C \left(\mathcal{D}_{\mathbf{\phi}},\mathcal{Z}\right). \tag{3.2}\]
We make the following observations regarding Assumption 3.1:
* Continuity of \(\mathbf{p}^{*}\) in space _and time_: While assuming the optimal control is a continuous map in the state space \(\mathbf{X}\) is fairly standard in the literature, especially in the context of using neural network approximations (see for example Han and Weinan (2016); Hure et al. (2021); Tsang and Wong (2020)), the assumption of continuity in time in (3.2) is therefore worth emphasizing. This assumption enforces the requirement that in the limit of continuous rebalancing (i.e. when \(\Delta t\to 0\)), the control remains a continuous function of time, which is a practical requirement for any reasonable investment policy. In particular, this ensures that the asset allocation retains its smooth behavior as the number of rebalancing events in \(\left[0,T\right]\) is increased, which we consider a fundamental requirement ensuring that the resulting investment strategy is reasonable. In addition, in Section 5 we demonstrate how the known theoretical solution to a problem assuming continuous rebalancing (\(\Delta t\to 0\)) can be approximated very well using \(\Delta t\gg 0\) in the NN approach, even though the resulting NN approximation is only truly optimal in the case of \(\Delta t\gg 0\).
* The control is a _single_ function for _all_ rebalancing times; note that the function \(\mathbf{p}^{*}\) is not subscripted by time. If the portfolio is rebalanced only at discrete time intervals, the investment strategy can be found (as suggested in (3.2)) by evaluating this continuous function at discrete time intervals, i.e. \(\left(t_{m},\mathbf{X}\left(t_{m}\right)\right)\rightarrow\mathbf{p}^{*}\left(t_{m},\mathbf{X}\left(t_{m}\right)\right)=\left(p_{i}^{*}\left(t_{m},\mathbf{X}\left(t_{m}\right)\right):i=1,...,N_{a}\right)\), for all \(t_{m}\in\mathcal{T}\). We discuss below how we solve for this (single) function directly, without resorting to dynamic programming, which avoids not only the challenge with error propagation due to value iteration over multiple timesteps, but also avoids solving for the high-dimensional conditional expectation (also termed the performance criteria by Oksendal and Sulem (2019)) if we are only interested in the relatively low-dimensional optimal control (see for example Van Staden et al. (2022b)).
These observations ultimately suggest the NN approach discussed below, while the soundness of Assumption 3.1 is experimentally confirmed in the ground truth results presented in Section 5.
Given Assumption 3.1 and in particular (3.2), we therefore limit our consideration to controls of the form
\[\mathcal{P} = \left\{\mathbf{p}\left(t_{m},\mathbf{X}\left(t_{m}\right)\right):\forall t _{m}\in\mathcal{T}\right\},\quad\text{for some}\quad\mathbf{p}\in C\left( \mathcal{D}_{\mathbf{\phi}},\mathcal{Z}\right). \tag{3.3}\]
To simplify notation, we identify an arbitrary control \(\mathcal{P}\) of the form (3.3) with its associated function \(\mathbf{p}=\left(p_{i}:i=1,...,N_{a}\right)\in C\left(\mathcal{D}_{\mathbf{\phi}}, \mathcal{Z}\right)\), so that the objective functional (2.9) is written as
\[J\left(\mathbf{p},\xi;t_{0},w_{0}\right) = E^{t_{0},w_{0}}\left[F\left(W\left(T;\mathbf{p},\mathbf{Y}\right),\xi \right)+G\left(W\left(T\right),E^{t_{0},w_{0}}\left[W\left(T;\mathbf{p},\mathbf{Y} \right)\right],w_{0},\xi\right)\right]. \tag{3.4}\]
In (3.4), \(W\left(\cdot;\mathbf{p},\mathbf{Y}\right)\) denotes the controlled wealth process using a control of the form (3.3), so that the wealth dynamics (2.7) for \(t_{m}\in\mathcal{T}\) (recall \(t_{N_{rb}}^{-}=T\)) now becomes
\[W\left(t_{m+1}^{-};\mathbf{p},\mathbf{Y}\right) = \left[W\left(t_{m}^{-};\mathbf{p},\mathbf{Y}\right)+q\left(t_{m}\right) \right]\cdot\sum_{i=1}^{N_{a}}p_{i}\left(t_{m},\mathbf{X}\left(t_{m}\right)\right) \cdot Y_{i}\left(t_{m}\right). \tag{3.5}\]
Therefore, using Assumption 3.1 and (3.4)-(3.5), problem (2.8) is therefore expressed as
\[V\left(t_{0},w_{0}\right) = \inf_{\xi\in\mathbb{R}}\inf_{\mathbf{p}\in C\left(\mathcal{D}_{\mathbf{ \phi}},\mathcal{Z}\right)}J\left(\mathbf{p},\xi;t_{0},w_{0}\right). \tag{3.6}\]
We now provide a brief overview of the proposed methodology to solve problems of the form (3.6). This consists of two steps discussed in the following subsections, namely (i) the NN approximation to the control, and (ii) computational estimate of the optimal control.
### Step 1: NN approximation to control
Let \(n\in\mathbb{N}\). Consider a fully-connected, feedforward NN \(\boldsymbol{f}_{n}\) with parameter vector \(\boldsymbol{\theta}_{n}\in\mathbb{R}^{\nu_{n}}\) and a fixed number \(\mathcal{L}^{h}\geq 1\) of hidden layers, where each hidden layer contains \(\hbar\left(n\right)\in\mathbb{N}\) nodes. The NN has \(\left(\eta_{X}+1\right)\) input nodes, mapping feature (input) vectors of the form \(\boldsymbol{\phi}\left(t\right)=\left(t,\boldsymbol{X}\left(t\right)\right) \in\mathcal{D}_{\boldsymbol{\phi}}\) to \(N_{a}\) output nodes. For a more detailed introduction to neural networks, see for example Goodfellow et al. (2016).
Additional technical and practical details can be found in Appendices A and B. For this discussion, we simply note that the index \(n\in\mathbb{N}\) is used for the purposes of the analytical results and convergence analysis, where we fix a choice of \(\mathcal{L}^{h}\geq 1\) while \(\hbar\left(n\right),n\in\mathbb{N}\) is assumed to be a monotonically increasing sequence such that \(\lim_{n\rightarrow\infty}\hbar\left(n\right)=\infty\) (see Section 4 and Appendix A). However, for practical implementation, a fixed value of \(\hbar\left(n\right)\in\mathbb{N}\) is chosen (along with \(\mathcal{L}^{h}\geq 1\)) to ensure the NN has sufficient depth and complexity to solve the problem under consideration (see Appendix B).
Any NN considered is constructed such that \(\boldsymbol{f}_{n}:\mathcal{D}_{\boldsymbol{\phi}}\rightarrow\mathcal{Z}\subset \mathbb{R}^{N_{a}}\). In other words, the values of the \(N_{a}\) outputs are automatically in the set \(\mathcal{Z}\) defined in (2.3) for any \(\boldsymbol{\phi}\in\mathcal{D}_{\boldsymbol{\phi}}\),
\[\boldsymbol{f}_{n}\left(\boldsymbol{\phi}\left(t\right); \boldsymbol{\theta}_{n}\right) = \left(f_{n,i}\left(\boldsymbol{\phi}\left(t\right);\boldsymbol{ \theta}_{n}\right):i=1,...,N_{a}\right)\in\mathcal{Z}. \tag{3.7}\]
As a result, the outputs of the NN \(\boldsymbol{f}_{n}\) in (3.7) can be interpreted as portfolio weights satisfying the required investment constraints. While a more detailed discussion of the structure can be found in Assumption A.1 in Appendix A, we summarize some key aspects of the NN structure illustrated in Figure 3.1:
1. We emphasize that the rebalancing time is an _input_ into the NN as per the feature vector \(\boldsymbol{\phi}\left(t\right)=\left(t,\boldsymbol{X}\left(t\right)\right) \in\mathcal{D}_{\boldsymbol{\phi}}\), so that the NN parameter vector \(\boldsymbol{\theta}_{n}\) itself does not depend on time.
2. While we assume sigmoid activations for the hidden nodes for concreteness and convenience (see Assumption A.1), any of the commonly-used activation functions can be implemented with only minor modifications to the technical results presented in Section 4.
3. Since we are illustrating the approach using the particular form of \(\mathcal{Z}\) in (2.3) because of its wide applicability (no short-selling and no leverage), a softmax output layer is used to ensure the NN output remains in \(\mathcal{Z}\subset\mathbb{R}^{N_{a}}\) for any \(\boldsymbol{\phi}\left(t\right)\) (see (3.7)). However, different admissible control set formulations can be handled without difficulty4. Footnote 4: For example, position limits and limited leverage can be introduced using minor modifications to the output layer. Perhaps the only substantial challenge is offered by unrealistic investment scenarios, such as insisting that trading should continue in the event of bankruptcy, in which case consideration should be given to the possibility of wealth being identically zero or negative.
For some fixed value of the index \(n\in\mathbb{N}\), let \(\mathcal{N}_{n}\) denote the set of NNs constructed in the same way as \(\boldsymbol{f}_{n}\) for the fixed and given values of \(\mathcal{L}^{h}\) and \(\hbar\left(n\right)\). While a formal definition of the set \(\mathcal{N}_{n}\) is provided in Appendix A, here we simply note that each NN \(\boldsymbol{f}_{n}\left(\cdot;\boldsymbol{\theta}_{n}\right)\in\mathcal{N}_{n}\) only differs in terms of the parameter values constituting its parameter vector \(\boldsymbol{\theta}_{n}\) (i.e. for a fixed \(n\), each \(\boldsymbol{f}_{n}\in\mathcal{N}_{n}\) has the same number of hidden layers \(\mathcal{L}^{h}\), hidden nodes \(\hbar\left(n\right)\), activation functions etc.).
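As an illustration of this construction (our sketch only; the formal definition in Appendix A is not tied to any particular library), a PyTorch module with sigmoid hidden activations and a softmax output layer mapping \(\boldsymbol{\phi}\left(t\right)=\left(t,\boldsymbol{X}\left(t\right)\right)\) to weights in \(\mathcal{Z}\) could look as follows; the name `AllocationNN` and the default widths are our own choices.

```python
import torch
import torch.nn as nn

class AllocationNN(nn.Module):
    """f_n(phi; theta_n): maps the feature vector (t, X(t)) to weights in the simplex Z."""

    def __init__(self, n_features, n_assets, n_hidden=8, n_hidden_layers=2):
        super().__init__()
        layers, width_in = [], n_features
        for _ in range(n_hidden_layers):
            layers += [nn.Linear(width_in, n_hidden), nn.Sigmoid()]  # sigmoid hidden activations
            width_in = n_hidden
        layers += [nn.Linear(width_in, n_assets), nn.Softmax(dim=-1)]  # outputs nonnegative, summing to one
        self.net = nn.Sequential(*layers)

    def forward(self, phi):
        # phi has shape (batch, n_features) = (batch, eta_X + 1)
        return self.net(phi)
```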
Figure 3.1: Illustration of the structure of the NN as per (3.7). Additional construction and implementation details can be found in Appendix A.
Observing that \(\mathcal{N}_{n}\subset C\left(\mathcal{D}_{\mathbf{\phi}},\mathcal{Z}\right)\), our first step is to approximate (3.6) by performing the optimization over \(\mathbf{f}_{n}\left(\cdot;\mathbf{\theta}_{n}\right)\in\mathcal{N}_{n}\) instead. In other words, we approximate the control \(\mathbf{p}\) by a neural network \(\mathbf{f}_{n}\in\mathcal{N}_{n}\),
\[\mathbf{p}\left(\mathbf{\phi}\left(t\right)\right) \simeq \mathbf{f}_{n}\left(\mathbf{\phi}\left(t\right);\mathbf{\theta}_{n}\right), \qquad\text{where}\quad\mathbf{\phi}\left(t\right)=\left(t,\mathbf{X}\left(t\right) \right),\mathbf{p}\in C\left(\mathcal{D}_{\mathbf{\phi}},\mathcal{Z}\right),\mathbf{f}_{n }\in\mathcal{N}_{n}. \tag{3.8}\]
We identify the NN \(\mathbf{f}_{n}\left(\cdot;\mathbf{\theta}_{n}\right)\) with its parameter vector \(\mathbf{\theta}_{n}\), so that the (approximate) objective functional using approximation (3.8) is written as
\[J_{n}\left(\mathbf{\theta}_{n},\xi;t_{0},w_{0}\right) = E^{t_{0},w_{0}}\left[F\left(W\left(T;\mathbf{\theta}_{n},\mathbf{Y} \right),\xi\right)+G\left(W\left(T;\mathbf{\theta}_{n},\mathbf{Y}\right),E^{t_{0},w_{ 0}}\left[W\left(T;\mathbf{\theta}_{n},\mathbf{Y}\right)\right],w_{0},\xi\right)\, \right]. \tag{3.9}\]
Combining (3.7) and (3.8), the wealth dynamics (3.5) is expressed as
\[W\left(t_{m+1}^{-};\mathbf{\theta}_{n},\mathbf{Y}\right) = \left[W\left(t_{m}^{-};\mathbf{\theta}_{n},\mathbf{Y}\right)+q\left(t_{m }\right)\right]\cdot\sum_{i=1}^{N_{a}}f_{n,i}\left(\mathbf{\phi}\left(t_{m} \right);\mathbf{\theta}_{n}\right)\cdot Y_{i}\left(t_{m}\right),\quad m=0,...,N_{ rb}-1. \tag{3.10}\]
Using (3.8) and (3.9), for fixed and given values of \(\mathcal{L}^{h}\) and \(\hbar\left(n\right)\), we therefore approximate problem (3.6) by
\[V_{n}\left(t_{0},w_{0}\right) = \inf_{\xi\in\mathbb{R}}\inf_{\mathbf{f}_{n}\left(\cdot;\mathbf{\theta}_{n}\right)\in\mathcal{N}_{n}}J_{n}\left(\mathbf{\theta}_{n},\xi;t_{0},w_{0}\right) \tag{3.11}\] \[= \inf_{\xi\in\mathbb{R}}\inf_{\mathbf{\theta}_{n}\in\mathbb{R}^{\nu_{n}}}J_{n}\left(\mathbf{\theta}_{n},\xi;t_{0},w_{0}\right)\] (3.12) \[= \inf_{\left(\mathbf{\theta}_{n},\xi\right)\in\mathbb{R}^{\nu_{n}+1}}J_{n}\left(\mathbf{\theta}_{n},\xi;t_{0},w_{0}\right).\]
We highlight that the optimization in (3.12) is unconstrained since, by construction, each NN \(\mathbf{f}_{n}\left(\cdot;\mathbf{\theta}_{n}\right)\in\mathcal{N}_{n}\) always generates outputs in \(\mathcal{Z}\).
The notation \(\left(\mathbf{\theta}_{n}^{*},\xi^{*}\right)\) and the associated NN \(\mathbf{f}_{n}^{*}\left(\cdot;\mathbf{\theta}_{n}^{*}\right)\in\mathcal{N}_{n}\) are subsequently used to denote the values achieving the optimum in (3.12) for given values of \(\mathcal{L}^{h}\) and \(\hbar\left(n\right)\). Note however that we do _not_ assume that the optimal control \(\mathbf{p}^{*}\in C\left(\mathcal{D}_{\mathbf{\phi}},\mathcal{Z}\right)\) satisfying Assumption 3.1 is also a NN in \(\mathcal{N}_{n}\), since by the universal approximation results (see for example Hornik et al. (1989)), we would expect that the error in approximating (3.6) by (3.12) can be made arbitrarily small for sufficiently large \(\hbar\left(n\right)\). These claims are rigorously confirmed in Section 4 below, where we consider a sequence of NNs \(\mathbf{f}_{n}\left(\cdot;\mathbf{\theta}_{n}\right)\in\mathcal{N}_{n}\) obtained by letting \(\hbar\left(n\right)\rightarrow\infty\) as \(n\rightarrow\infty\) (for any fixed value of \(\mathcal{L}^{h}\geq 1\)).
### Step 2 : Computational estimate of the optimal control
In order to solve the approximation (3.12) to problem (3.6), we require estimates of the expectations in (3.9). For computational purposes, suppose we take as given a set \(\mathcal{Y}_{n}\in\mathbb{R}^{n\times N_{a}\times N_{rb}}\), consisting of \(n\in\mathbb{N}\) independent realizations of the paths of joint asset returns \(\mathbf{Y}\),
\[\mathcal{Y}_{n} = \left\{\mathbf{Y}^{\left(j\right)}:j=1,...,n\right\}. \tag{3.13}\]
We highlight that each entry \(\mathbf{Y}^{\left(j\right)}\in\mathcal{Y}_{n}\) consists of a _path_ of joint asset returns (see (2.6)), and while we assume that the paths are independent, we do _not_ assume that the asset returns constituting each path are independent. In particular, both cross-correlations and autocorrelation structures within each path of returns are permitted.
Constructing the set \(\mathcal{Y}_{n}\) in practical applications is further discussed in Appendix B. In the numerical examples in Section 5, we use examples where \(\mathcal{Y}_{n}\) is generated using (i) Monte Carlo simulation of parametric asset dynamics, (ii) stationary block bootstrap resampling of empirical asset returns (Anarkulova et al. (2022)), and (iii) generative adversarial network (GAN)-generated synthetic asset returns (Yoon et al. (2019)). While we let \(n\rightarrow\infty\) in (3.13) for convergence analysis purposes, in practical applications (e.g. the results of Section 5) we simply choose \(n\) sufficiently large such that we are reasonably confident that reliable numerical estimates of the expectations in (3.9) are obtained.
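As one concrete example of constructing \(\mathcal{Y}_{n}\), a stationary block bootstrap of joint historical returns (in the spirit of the resampling used by Anarkulova et al. (2022)) can be sketched as follows; this is our own illustrative sketch, where `expected_block` is the mean geometric block length (a tuning choice), and the resampled paths preserve cross-correlations as well as short-range autocorrelation within blocks.

```python
import numpy as np

def stationary_block_bootstrap(returns, n_paths, path_len, expected_block, rng=None):
    """Resample joint return paths from historical data of shape (T_hist, N_a)."""
    rng = rng or np.random.default_rng()
    T_hist, N_a = returns.shape
    paths = np.empty((n_paths, path_len, N_a))
    for j in range(n_paths):
        t = 0
        while t < path_len:
            start = rng.integers(T_hist)                     # uniform random block start
            block = rng.geometric(1.0 / expected_block)      # geometric block length, mean expected_block
            for k in range(min(block, path_len - t)):
                paths[j, t + k] = returns[(start + k) % T_hist]  # wrap around the sample
            t += block
    return paths
```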
Given a NN \(\mathbf{f}_{n}\left(\cdot;\mathbf{\theta}_{n}\right)\in\mathcal{N}_{n}\) and set \(\mathcal{Y}_{n}\), the wealth dynamics (3.10) along path \(\mathbf{Y}^{\left(j\right)}\in\mathcal{Y}_{n}\) is given by
\[W^{\left(j\right)}\left(t_{m+1}^{-};\mathbf{\theta}_{n},\mathcal{Y}_{n}\right) = \left[W^{\left(j\right)}\left(t_{m}^{-};\mathbf{\theta}_{n},\mathcal{Y} _{n}\right)+q\left(t_{m}\right)\right]\cdot\sum_{i=1}^{N_{a}}f_{n,i}\left(\mathbf{ \phi}^{\left(j\right)}\left(t_{m}\right);\mathbf{\theta}_{n}\right)\cdot Y_{i}^{ \left(j\right)}\left(t_{m}\right), \tag{3.14}\]
for \(m=0,...,N_{rb}-1\). We introduce the superscript \(\left(j\right)\) to emphasize that the quantities are obtained along the \(j\)th entry of (3.13).
The computational estimate of \(J_{n}\left(\mathbf{\theta}_{n},\xi;t_{0},w_{0}\right)\) in (3.9) is then given by
\[\hat{J}_{n}\left(\mathbf{\theta}_{n},\xi;t_{0},w_{0},\mathcal{Y}_{n}\right) = \frac{1}{n}\sum_{j=1}^{n}F\left(W^{\left(j\right)}\left(T;\mathbf{ \theta}_{n},\mathcal{Y}_{n}\right),\xi\right) \tag{3.15}\] \[+\frac{1}{n}\sum_{j=1}^{n}G\left(W^{\left(j\right)}\left(T;\mathbf{ \theta}_{n},\mathcal{Y}_{n}\right),\frac{1}{n}\sum_{k=1}^{n}W^{\left(k\right)} \left(T;\mathbf{\theta}_{n},\mathcal{Y}_{n}\right),w_{0},\xi\right),\]
so that we approximate problem (3.12) by
\[\hat{V}_{n}\left(t_{0},w_{0};\mathcal{Y}_{n}\right) = \inf_{\left(\mathbf{\theta}_{n},\xi\right)\in\mathbb{R}^{n+1}}\hat{ J}_{n}\left(\mathbf{\theta}_{n},\xi;t_{0},w_{0},\mathcal{Y}_{n}\right). \tag{3.16}\]
The numerical solution of (3.16) can then proceed using standard (stochastic) gradient descent techniques (see Appendix B). For subsequent reference, let \(\left(\hat{\mathbf{\theta}}_{n}^{*},\hat{\xi}_{n}^{*}\right)\) denote the optimal point in (3.16) relative to the training data set \(\mathcal{Y}_{n}\) in (3.13).
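As a rough illustration of this step, the sketch below (our own, under stated assumptions) strings together the wealth recursion (3.14), the sample objective (3.15) with the problem-specific functions \(F\) and \(G\) passed in as callables, and a plain Adam loop for (3.16). It reuses the ControlNet sketch above; names such as `returns_train`, `q`, `batch_size` and `n_steps` are illustrative, and the normalized time feature is an assumption.

```python
# Sketch of the computational Step 2 (our own illustration): wealth recursion (3.14),
# sample objective (3.15), and stochastic gradient descent over (theta_n, xi) for (3.16).
# `returns` holds gross returns with shape (n_paths, N_rb, N_a); `q` lists contributions;
# F and G are the problem-specific functions of the objective, left abstract here.
import torch

def estimate_objective(control_net, returns, q, w0, xi, F, G):
    n_paths, n_rb, _ = returns.shape
    w = torch.full((n_paths,), float(w0))
    for m in range(n_rb):
        t = torch.full((n_paths,), m / n_rb)                  # time feature (normalized here)
        p = control_net(t, w)                                 # allocations in Z
        w = (w + q[m]) * (p * returns[:, m, :]).sum(dim=-1)   # wealth recursion (3.14)
    return F(w, xi).mean() + G(w, w.mean(), w0, xi).mean()    # sample objective (3.15)

def train_control(control_net, returns_train, q, w0, F, G,
                  n_steps=2000, batch_size=1000, lr=1e-3):
    xi = torch.zeros(1, requires_grad=True)                   # auxiliary variable xi
    opt = torch.optim.Adam(list(control_net.parameters()) + [xi], lr=lr)
    for _ in range(n_steps):                                  # mini-batch SGD for (3.16)
        idx = torch.randint(0, returns_train.shape[0], (batch_size,))
        loss = estimate_objective(control_net, returns_train[idx], q, w0, xi, F, G)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return control_net, xi
```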
In the case of sufficiently large datasets (3.13), in other words as \(n\rightarrow\infty\), we would expect that the error in approximating (3.12) by (3.16) can be made arbitrarily small. However, as noted above, as \(n\rightarrow\infty\) and the number of hidden nodes \(\hbar\left(n\right)\rightarrow\infty\) (for any fixed \(\mathcal{L}^{h}\geq 1\)), (3.12) is also expected to approximate (3.6) more accurately. As a result, we obtain the necessary intuition for establishing the convergence of (3.16) to (3.6) under suitable conditions, which is indeed confirmed in the results of Section 4.
Note that since \(\mathcal{Y}_{n}\) is used in (3.16) to obtain the optimal NN parameter vector \(\hat{\mathbf{\theta}}_{n}^{*}\), it is usually referred to as the NN "training" dataset (see for example Goodfellow et al. (2016)). Naturally, we can also construct a "testing" dataset \(\mathcal{Y}_{\hat{n}}^{test}\), that is of a similar structure as (3.13), but typically based on a different implied distribution of \(\mathbf{Y}\) as a result of different data generation assumptions. For example, \(\mathcal{Y}_{\hat{n}}^{test}\) can be obtained using a different time period of historical data for its construction, or different process parameters if there are parametric asset dynamics specified. The resulting approximation \(\mathbf{f}_{n}^{*}\left(\cdot;\hat{\mathbf{\theta}}_{n}^{*}\right)\in\mathcal{N}_{n}\) to the optimal control \(\mathbf{p}^{*}\in C\left(\mathcal{D}_{\hat{\mathbf{\phi}}},\mathcal{Z}\right)\) obtained using the training dataset in (3.16) can then be implemented on the testing dataset for out-of-sample testing or scenario analysis. This is discussed in more detail in Appendix B.
**Remark 3.1**.: (Extension to wealth path-dependent objectives) As noted in the Introduction, the NN approach as well as the convergence analysis of Section 4 can be extended to objective functions that depend on the entire wealth path \(\left\{W\left(t\right):t\in\mathcal{T}\right\}\) instead of just the terminal wealth \(W\left(T\right)\). This is achieved by simply modifying (3.15) appropriately and ensuring the wealth is assessed at the desired intervals using (3.14).
### Advantages of the NN approach
The following observations highlight some advantages of the proposed NN approach:
* The objective function in (1.1) is not required to be separable in the sense of dynamic programming, so that objectives such as the mean - semi-variance problem (2.16) in Section 5 can be handled directly.
* The proposed methodology is parsimonious, in the sense that the NN parameter vector remains independent of number of rebalancing events. Specifically, we observe that the NN parameter vector \(\mathbf{\theta}_{n}\in\mathbb{R}^{n_{n}}\) of the NN does _not_ depend on the rebalancing time \(t_{m}\in\mathcal{T}\) or on the sample path \(j\). This contrasts our
approach with the approaches of for example Han and Weinan (2016); Tsang and Wong (2020),5 where the number of parameters scale with the number of rebalancing events. As a result, the NN approach presented here can lead to potentially significant computational advantages in the cases of (i) long investment time horizons or (ii) short trading time horizons with a frequent number of portfolio rebalancing events.
Footnote 5: Tsang and Wong (2020) use a stacked NN approach, with a different NN at each rebalancing time.
A natural question might be whether the NNs in the proposed approach are required to be very deep, thus potentially exposing the training of the NN in (3.16) to the problem of vanishing or exploding gradients (see for example Goodfellow et al. (2016)). However, the ground truth results presented in Section 5 demonstrate that we obtain very accurate results with relatively shallow NNs (at most two hidden layers). We suspect this might be due to the optimal control being relatively low-dimensional compared to the high-dimensional objective functionals in portfolio optimization problems with discrete rebalancing (see Van Staden et al. (2022b) for a rigorous analysis), while in this NN approach the optimal control is obtained directly without requiring the solution of the (high-dimensional) objective functional at rebalancing times.
Note that these advantages also contrast the NN approach with Reinforcement Learning-based algorithms to solve portfolio optimization problems, as the following remark discusses.
**Remark 3.2**.: (Contrast of NN approach to Reinforcement Learning). Reinforcement learning (RL) algorithms (for example, Q-learning) rely fundamentally on the DP principle for the numerical solution of the portfolio optimization problem (see for example Gao et al. (2020); Lucarelli and Borrotti (2020); Park et al. (2020)). This requires, at each value iteration step, the approximation of a (high-dimensional) conditional expectation. As a result, RL is associated with standard DP-related concerns related to error amplification and the curse of dimensionality discussed above, and also cannot solve general problems of the form (1.1) without relying on, for example, an embedding approach to obtain an associated problem that can be solved using DP methods.
## 4 Convergence analysis
In this section, we present the theoretical justification of the proposed NN approach as outlined in Section 3. We confirm that the numerical solution of (3.16) can be used to approximate the theoretical solution of (3.6) arbitrarily well (in probability) under suitable conditions. This section only summarizes the key convergence results which are among the main contributions of this paper, while additional technical details and proofs are provided in Appendix A.
We start with Theorem 4.1, which confirms the validity of Step 1 (Subsection 3.1), namely using a NN \(\boldsymbol{f}_{n}\left(\cdot;\boldsymbol{\theta}_{n}\right)\in\mathcal{N}_{n}\) to approximate the control. Note that Theorem 4.1 relies on two assumptions, presented in Appendix A.2: We emphasize that Assumption A.3 is purely made for purposes of convenience, since its requirements can easily be relaxed with only minor modifications to the proofs (as discussed in Remark A.1), but at the cost of significant notational complexity and no additional insights. In contrast, Assumption A.2 is critical to establish the result of Theorem 4.1, and requires that the optimal investment strategy (or control) satisfies Assumption 3.1, places some basic requirements on \(F\) and \(G\), and assumes that the sequence of NNs \(\left\{\boldsymbol{f}_{n}\left(\cdot;\boldsymbol{\theta}_{n}\right),n\in \mathbb{N}\right\}\) is constructed such that the number of nodes in each hidden layer \(\hbar\left(n\right)\rightarrow\infty\) as \(n\rightarrow\infty\) (no assumptions are yet required regarding the exact form of \(n\rightarrow\hbar\left(n\right)\)).
**Theorem 4.1**.: _(Validity of NN approximation) We assume that Assumption A.2 holds, and for ease of exposition, we also assume that Assumption A.3 holds. Then the NN approximation to the control in (3.8) is valid, in the sense that \(V\left(t_{0},w_{0}\right)\) in (3.6) can be approximated arbitrarily well by \(V_{n}\left(t_{0},w_{0}\right)\) in (3.12) for sufficiently large \(n\), since_
\[\lim_{n\rightarrow\infty}\left|V_{n}\left(t_{0},w_{0}\right)-V \left(t_{0},w_{0}\right)\right| = \lim_{n\rightarrow\infty}\left|\inf_{\left(\boldsymbol{\theta}_{n },\xi\right)\in\mathbb{R}^{m+1}}J_{n}\left(\boldsymbol{\theta}_{n},\xi;t_{0},w _{0}\right)-\inf_{\xi\in\mathbb{R}}\inf_{\boldsymbol{p}\in C\left(\mathcal{D}_ {\boldsymbol{\phi}},\mathcal{Z}\right)}J\left(\boldsymbol{p},\xi;t_{0},w_{0} \right)\right| \tag{4.1}\] \[= 0.\]
Proof.: See Appendix A.3.
Having justified Step 1 of the approach, Theorem 4.2 now confirms the validity of Step 2 of the NN approach (see Subsection 3.2), namely using the computational estimate \(\boldsymbol{f}_{n}^{\star}\left(\cdot;\hat{\boldsymbol{\theta}}_{n}^{\star} \right)\in\mathcal{N}_{n}\) from (3.16) as an approximation
of the true optimal control \(\mathbf{p}^{*}\in C\left(\mathcal{D}_{\mathbf{\phi}},\mathcal{Z}\right)\). Note that in addition to the assumptions of Theorem 4.1, Theorem 4.2 also requires Assumption A.4, which by necessity includes computational considerations such as the structure of the training dataset \(\mathcal{Y}_{n}\), the rate of divergence of the number of hidden nodes \(\hbar\left(n\right)\rightarrow\infty\) as \(n\rightarrow\infty\), and assumptions regarding the optimization algorithm used in solving problem (3.16).
**Theorem 4.2**.: _(Validity of computational estimate) We assume that Assumption A.2, Assumption A.3 and Assumption A.4 hold. Then the computational estimate to the optimal control (3.2) obtained using (3.8) and (3.16) is valid, in the sense that the value function \(V\left(t_{0},w_{0}\right)\) in (3.6) can be approximated arbitrarily well in probability by \(\hat{V}_{n}\left(t_{0},w_{0};\mathcal{Y}_{n}\right)\) in (3.16) for sufficiently large \(n\), since_
\[\left|\hat{V}_{n}\left(t_{0},w_{0};\mathcal{Y}_{n}\right)-V\left(t _{0},w_{0}\right)\right| = \left|\inf_{\left(\mathbf{\theta}_{n},\xi\right)\in\mathbb{R}^{n+1} }\hat{J}_{n}\left(\mathbf{\theta}_{n},\xi;t_{0},w_{0},\mathcal{Y}_{n}\right)-\inf _{\xi\in\mathbb{R}}\inf_{\mathbf{p}\in C\left(\mathcal{D}_{\mathbf{\phi}},\mathcal{Z} \right)}J\left(\mathbf{p},\xi;t_{0},w_{0}\right)\right| \tag{4.2}\] \[\overset{P}{\longrightarrow} 0,\qquad\text{as }n\rightarrow\infty.\]
Proof.: See Appendix A.3.
Taken together, Theorem 4.1 and Theorem 4.2 establish the theoretical validity of the NN approach to solve problems of the form (1.1).
## 5 Numerical results
In this section, we present numerical results obtained by implementing the NN approach described in Section 3. For illustrative purposes, the examples focus on investment objectives as outlined in Subsection 2.1, and we use three different data generation techniques for obtaining the training data set \(\mathcal{Y}_{n}\) of the NN: (i) parametric models for underlying asset returns, (ii) stationary block bootstrap resampling of empirical returns, and (iii) generative adversarial network (GAN)-generated synthetic asset returns.
### Closed-form solution: \(DSQ\left(\gamma\right)\) with continuous rebalancing
Under certain conditions, some of the optimization problems in Subsection 2.1 can be solved analytically. In this subsection, we demonstrate how a closed-form solution of problem \(DSQ\left(\gamma\right)\) in (2.10), assuming _continuous_ rebalancing (i.e. if we let \(\Delta t\to 0\) in (2.1)), can be approximated very accurately using a very simple NN (1 hidden layer, only 3 hidden nodes) using discrete rebalancing with \(\Delta t\gg 0\) in (2.1). This simultaneously illustrates how parsimonious the NN approach is, as well as how useful the imposition of time-continuity is in ensuring the smooth behavior of the (approximate) optimal control.
In this subsection as well as in Subsection 5.2, we assume parametric dynamics for the underlying assets. For concreteness, we consider the scenario of two assets, \(N_{a}=2\), with unit values \(S_{i},i=1,2\), evolving according to the following dynamics,
\[\frac{dS_{i}\left(t\right)}{S_{i}\left(t^{-}\right)} = \left(\mu_{i}-\lambda_{i}\kappa_{i}^{\left(1\right)}\right)\cdot dt +\sigma_{i}\cdot dZ_{i}\left(t\right)+d\left(\sum_{k=1}^{\pi_{i}\left(t \right)}\left(\vartheta_{i}^{\left(k\right)}-1\right)\right),\qquad i=1,2. \tag{5.1}\]
Note that (5.1) takes the form of the standard jump diffusion models in finance - see e.g. Kou (2002); Merton (1976) for more information. For each asset \(i\) in (5.1), \(\mu_{i}\) and \(\sigma_{i}\) denote the (actual, not risk-neutral) drift and volatility, respectively, \(Z_{i}\) denotes a standard Brownian motion, \(\pi_{i}\left(t\right)\) denotes a Poisson process with intensity \(\lambda_{i}\geq 0\), and \(\vartheta_{i}^{\left(k\right)}\) are i.i.d. random variables with the same distribution as \(\vartheta_{i}\), which represents the jump multiplier of the \(i\)th risky asset with \(\kappa_{i}^{\left(1\right)}=\mathbb{E}\left[\vartheta_{i}-1\right]\) and \(\kappa_{i}^{\left(2\right)}=\mathbb{E}\left[\left(\vartheta_{i}-1\right)^{2}\right]\). While the Brownian motions can be correlated with \(dZ_{1}\left(t\right)dZ_{2}\left(t\right)=\rho_{1,2}\cdot dt\), we make the standard assumption that the jump components are independent (see for example Forsyth and Vetzal (2022)).
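For constructing a training set \(\mathcal{Y}_{n}\) under dynamics of the form (5.1), one rebalancing interval of gross returns for a single asset can be simulated exactly as sketched below. This is our own illustration: the jump-multiplier sampler is passed in as a callable (e.g. drawing from a double-exponential Kou distribution), and cross-asset correlation handling is omitted for brevity.

```python
# Sketch (ours) of exact simulation of gross returns over one rebalancing interval dt for
# a single asset following (5.1). `sample_log_jumps(k, rng)` should return k draws of
# log(jump multiplier) as an array; correlation across assets is not handled here.
import numpy as np

def simulate_gross_returns(n_paths, dt, mu, sigma, lam, kappa1, sample_log_jumps, rng):
    z = rng.standard_normal(n_paths)
    n_jumps = rng.poisson(lam * dt, size=n_paths)
    jump_part = np.array([np.sum(sample_log_jumps(k, rng)) for k in n_jumps])
    drift = (mu - lam * kappa1 - 0.5 * sigma ** 2) * dt
    return np.exp(drift + sigma * np.sqrt(dt) * z + jump_part)  # Y(t_m) = S(t_{m+1}^-)/S(t_m^-)
```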
For this subsection only, we treat the first asset (\(i=1\) in (5.1)) as a "risk-free" asset, and set \(\mu_{1}=r>0\) where \(r\) is the risk-free rate, so that we have \(\lambda_{1}=0\), \(\sigma_{1j}=0\)\(\forall j\), and \(Z_{1}\equiv 0\), while the second asset (\(i=2\) in (5.1)) is assumed to be a broad equity market index (the "risky asset"). In this scenario, if problem \(DSQ\left(\gamma\right)\) in (2.10) is solved subject to dynamics (5.1) together with the assumptions of costless continuous trading,
infinite leverage, and uninterrupted trading in the event of insolvency, then the \(DSQ\left(\gamma\right)\)-optimal control can be obtained analytically as
\[\boldsymbol{p}^{*}\left(t,W^{*}\left(t\right)\right)=[1-p_{2}^{*}\left(t,W^{*} \left(t\right)\right),p_{2}^{*}\left(t,W^{*}\left(t\right)\right)]\in\mathbb{R} ^{2}, \tag{5.2}\]
where the fraction of wealth in the broad stock market index (asset \(i=2\)) is given by (Zweng and Li (2011))
\[p_{2}^{*}\left(t,W^{*}\left(t\right)\right)=\frac{\mu_{2}-r}{\sigma_{2}^{2}+ \lambda_{2}\kappa_{2}^{(2)}}\cdot\left[\frac{\gamma e^{-r\left(T-t\right)}-W^{ *}\left(t\right)}{W^{*}\left(t\right)}\right],\qquad w_{0}<\gamma e^{-r\left(T -t\right)}. \tag{5.3}\]
By design, the NN approach is not constructed to solve problems with unrealistic assumptions such as continuous trading, infinite leverage and short-selling, or trading in the event of bankruptcy, all of which are required to derive (5.3). However, if the implicit quadratic wealth target for the DSQ problem (i.e. the value of \(\gamma\), see Vigna (2014)) is not too aggressive, the analytical solution (5.3) does not require significant leverage or lead to a large probability of insolvency. In such a scenario, we can use the NN approach to approximate (5.3).
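For reference, the closed-form allocation (5.3) can be evaluated directly; the sketch below simply transcribes that formula, with parameter names following (5.1) and (5.3).

```python
# Sketch transcribing the closed-form allocation (5.3) to the risky asset for DSQ(gamma)
# under continuous rebalancing; parameter names follow (5.1) and (5.3).
import numpy as np

def dsq_risky_fraction(t, wealth, T, r, gamma, mu2, sigma2, lam2, kappa2_2):
    target = gamma * np.exp(-r * (T - t))          # discounted quadratic wealth target
    return (mu2 - r) / (sigma2 ** 2 + lam2 * kappa2_2) * (target - wealth) / wealth
```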
We select \(w_{0}=100\), \(T=1\) year and \(\gamma=138.33\), and simulate \(n=2.56\times 10^{6}\) paths of the underlying assets using (5.1) and parameters as in Table C.1 (Appendix C). On this set of paths, the true analytical solution (5.3) is implemented using 7,200 time steps. In contrast, for the NN approach, we use only 4 rebalancing events in \([0,T=1]\), and therefore aggregate the simulated returns in quarterly time intervals to construct the training data set \(\mathcal{Y}_{n}\). We consider only a very shallow NN, consisting of a single hidden layer and only 3 hidden nodes.
Figure 5.1 compares the resulting optimal investment strategies by illustrating the optimal proportion of wealth invested in the broad equity market index (asset \(i=2\)) as a function of time and wealth. We emphasize that the NN strategy in Figure 5.1(b) is not expected to be exactly identical to the analytical solution in Figure 5.1(a), since it is based on fundamentally different assumptions such as discrete rebalancing and investment constraints (2.5).
However, by requiring in the proposed NN approach that the NN feature vector includes time, together with a NN parameter vector that does not depend on time, we guarantee the smooth behavior in time of the NN approximation observed in Figure 5.1(b). As a result, Table 5.1 shows that the shallow NN strategy trained with \(\Delta t\gg 0\) results in a remarkably accurate and parsimonious approximation to the true analytical solution where \(\Delta t\to 0\), since we obtain nearly identical optimal terminal wealth distributions.
### Ground truth: Problem \(MCV\left(\rho\right)\)
In the case of the Mean-CVaR problem \(MCV\left(\rho\right)\) in (2.15), Forsyth and Vetzal (2022) obtain an MCV-optimal investment strategy subject to the same investment constraints as in Section 2 (namely discrete rebalancing, no short-selling or leverage allowed, and no trading in insolvency) using the partial (integro-)differential equation (PDE) approach of Forsyth (2020).
Figure 5.1: Closed-form solution - \(DSQ\left(\gamma\right)\) with continuous rebalancing: Optimal proportion of wealth invested in the broad equity market index as a function of time and wealth. The NN approximation is obtained for a specific initial wealth of \(w_{0}=100\), and only four rebalancing events in \([0,T]\).
For ground truth analysis purposes, we therefore consider the same investment scenario as in Forsyth and Vetzal (2022), where two underlying assets are considered, namely 30-day US T-bills and a broad equity market index (the CRSP VWD index) - see Appendix C for definitions. However, in contrast to the preceding section where one asset was taken as the risk-free asset, both assets are now assumed to evolve according to dynamics of the form (5.1), using the double-exponential Kou (2002) formulation for the jump distributions. The NN training data set is therefore constructed by simulating the same underlying dynamics. While further details regarding the context and motivation for the investment scenario can be found in Forsyth and Vetzal (2022), here we simply note that the scenario involves \(T=5\) years, quarterly rebalancing, a set of admissible strategies satisfying (2.5), and parameters for (5.1) as in Table C.2.
As discussed in Appendix B, the inherently higher complexity of the Mean-CVaR optimal control requires the NN to be deeper than in the case of the problem considered in Subsection 5.1. As a result, we consider approximating NNs with two hidden layers, each with 8 hidden nodes, while relatively large mini-batches of 2,000 paths were used in the stochastic gradient descent algorithm (see Appendix B) to ensure sufficiently accurate sampling of the tail of the returns distribution in selecting the descent direction at each step. Note that despite using a deeper NN, this NN structure is still very parsimonious and relatively shallow compared to the rebalancing time-dependent structures considered in for example Han and Weinan (2016), where a new set of parameters is introduced at each rebalancing event.
Table 5.2 compares the PDE results reported in Forsyth and Vetzal (2022) with the corresponding NN results. Note that the PDE optimal control was determined by solving a Hamilton-Jacobi-Bellman PDE numerically. The statistics for the PDE generated control were computed using \(n=2.56\times 10^{6}\) Monte Carlo simulations of the joint underlying asset dynamics in order to calculate the results of Table 5.2, while the NN was trained on \(n=2.56\times 10^{6}\) paths of the same underlying asset dynamics but which were independently simulated. While some variability of the results are therefore to be expected due to the underlying samples, the results in Table 5.2 demonstrate the robustness of the proposed NN approach.
### Ground truth: Problems \(MV\left(\rho\right)\) and \(DSQ\left(\gamma\right)\)
In this subsection, we demonstrate that if the investment objective (1.1) is separable in the sense of dynamic programming, the correct time-consistent optimal investment strategy is recovered, otherwise we obtain the correct pre-commitment (time-inconsistent) investment strategy.
To demonstrate this, the theoretical embedding result of Li and Ng (2000); Zhou and Li (2000), which establishes the equivalence of problems \(MV\left(\rho\right)\) and \(DSQ\left(\gamma\right)\) under fairly general conditions, can be exploited for ground truth analysis purposes as follows. Suppose we solved problems \(MV\left(\rho\right)\) and \(DSQ\left(\gamma\right)\) on the same
| Solution approach | Rebalancing | 5th | 20th | 50th | 80th | 95th |
| --- | --- | --- | --- | --- | --- | --- |
| Closed-form solution | Continuous, \(\Delta t\rightarrow 0\) | 86.81 | 98.02 | 106.35 | 112.82 | 118.15 |
| Shallow NN approximation | Discrete, \(\Delta t=0.25\), total of \(N_{rb}=4\) only | 86.62 | 97.30 | 105.67 | 112.54 | 118.85 |

Table 5.1: Closed-form solution - \(DSQ\left(\gamma\right)\) with continuous rebalancing: Percentiles of the simulated (\(n=2.56\times 10^{6}\)) terminal wealth \(W\left(T\right)\) distributions obtained by implementing the optimal strategies in Figure 5.1. In both cases, a mean terminal wealth of 105 is obtained. Note that the NN approximation was obtained under the assumption of quarterly rebalancing only, no leverage or short-selling, and therefore no trading in insolvency.
| \(\rho\) | 5% CVaR (PDE) | 5% CVaR (NN) | \(E^{t_{0},w_{0}}\left[W\left(T\right)\right]\) (PDE) | \(E^{t_{0},w_{0}}\left[W\left(T\right)\right]\) (NN) | Value function (PDE) | Value function (NN) | % difference |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0.10 | 940.60 | 940.55 | 1069.19 | 1062.97 | 1047.52 | 1046.85 | -0.06% |
| 0.25 | 936.23 | 937.39 | 1090.89 | 1081.99 | 1208.95 | 1207.88 | -0.09% |
| 1.00 | 697.56 | 690.11 | 1437.73 | 1444.16 | 2135.29 | 2134.27 | -0.05% |
| 1.50 | 614.92 | 611.65 | 1508.10 | 1510.07 | 2877.07 | 2876.76 | -0.01% |

Table 5.2: Ground truth - problem \(MCV\left(\rho\right)\): The PDE results are obtained from Forsyth and Vetzal (2022) for selected points on the Mean-CVaR “efficient frontier”. The “Value function” column reports the value of the objective function (2.14) under the corresponding optimal control, while “% difference” reports the percentage difference in the reported value functions for the NN solution compared to the PDE solution.
underlying training data set. We remind the reader that in the proposed NN approach, problem \(MV\left(\rho\right)\) can indeed be solved directly without difficulty, which is not possible in dynamic programming-based approaches. Then, considering the numerical results, there should be values of parameters \(\rho\equiv\tilde{\rho}\) and \(\gamma\equiv\tilde{\gamma}\) such that the optimal strategy of \(MV\left(\rho\equiv\tilde{\rho}\right)\) corresponds exactly to the optimal strategy of \(DSQ\left(\gamma\equiv\tilde{\gamma}\right)\), with a specific relationship holding between \(\tilde{\rho}\) and \(\tilde{\gamma}\). The NN approach can therefore enable us to numerically demonstrate the embedding result of Li and Ng (2000); Zhou and Li (2000) in a setting where the underlying asset dynamics are not explicitly specified and where multiple investment constraints are present. We start by recalling the embedding result.
**Proposition 5.1**.: _(Embedding result of Li and Ng (2000); Zhou and Li (2000)) Fix a value \(\tilde{\rho}>0\). If \(\mathcal{P}^{*}\in\mathcal{A}\) is the optimal control of problem \(MV\left(\rho\equiv\tilde{\rho}\right)\) in (2.12), then \(\mathcal{P}^{*}\) is also the optimal control for problem \(DSQ\left(\gamma=\tilde{\gamma}\right)\) in (2.10), provided that_
\[\tilde{\gamma} = \frac{1}{2\tilde{\rho}}+E^{t_{0},w_{0}}\left[W^{*}\left(T; \mathcal{P}^{*},\mathbf{Y}\right)\right]. \tag{5.4}\]
Proof.: See Li and Ng (2000); Zhou and Li (2000). We also highlight the alternative proof provided in Dang and Forsyth (2016), which shows that this result is valid for any admissible control set \(\mathcal{A}\).
Since (5.4) is valid for any admissible control set \(\mathcal{A}\), we consider a factor investing scenario where portfolios are constructed using popular long-only investable equity factor indices (Momentum, Value, Low Volatility, Size), a broad equity market index (the CRSP VWD index), 30-day T-bills and 10-year Treasury bonds (see Appendix C for definitions). For illustrative purposes in the case of an investor primarily concerned with long-run factor portfolio performance, we use a horizon of \(T=10\) years, \(w_{0}=120\), annual contributions of \(q\left(t_{m}\right)=12\), and annual rebalancing.
Given historical returns data for the underlying assets, we construct training and testing (out-of-sample) data sets for the NN, \(\mathcal{Y}_{n}\) and \(\mathcal{Y}_{n}^{test}\), respectively, using stationary block bootstrap resampling of empirical historical asset returns (see Appendix C), which is popular with practitioners (Anarkulova et al. (2022); Cavaglia et al. (2022); Cogneau and Zakalmouline (2013); Dichtl et al. (2016); Scott and Cavaglia (2017); Simonian and Martirosyan (2022)) and is designed to handle weakly stationary time series with serial dependence. See Ni et al. (2022) for a discussion concerning the probability of obtaining a repeated path in block bootstrap resampling (which is negligible for any realistic number of samples). Due to availability of historical data we use inflation-adjusted monthly empirical returns from 1963:07 to 2020:12. The training data set (\(n=10^{6}\)) is obtained using an expected block size of 6 months of joint returns from 1963:07 to 2009:12, while the testing data set (\(n=10^{6}\)) uses an expected block size of 3 months and returns from 2010:01 to 2020:12. We consider NNs with two hidden layers, each with only eight hidden nodes.
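The stationary block bootstrap itself is straightforward to sketch (our own illustration, following Politis and Romano (1994)): blocks with geometrically distributed lengths, whose mean equals the expected block size, are drawn with uniformly random starting points and concatenated with circular wrap-around.

```python
# Sketch (ours) of stationary block bootstrap resampling of historical joint returns.
# `hist` is an array of monthly joint returns with shape (T_hist, N_a); block lengths are
# geometric with mean `expected_block` (months); indices wrap around circularly.
import numpy as np

def stationary_bootstrap_path(hist, path_len, expected_block, rng):
    t_hist = hist.shape[0]
    idx = []
    while len(idx) < path_len:
        start = int(rng.integers(t_hist))
        block_len = int(rng.geometric(1.0 / expected_block))      # mean = expected_block
        idx.extend((start + j) % t_hist for j in range(block_len))
    return hist[np.array(idx[:path_len])]

# e.g. one 10-year path of monthly returns with an expected block size of 6 months:
# path = stationary_bootstrap_path(hist_returns, 120, 6, np.random.default_rng(0))
```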
Choosing two values of \(\tilde{\rho}>0\) to illustrate different levels of risk aversion (see Table 5.3), we solve problem \(MV\left(\rho=\tilde{\rho}\right)\) in (2.12) directly using the proposed approach to obtain the optimal investment strategy \(\mathbf{f}\left(\cdot;\hat{\mathbf{\theta}}_{mv}^{*}\right)\). Note that since we consider a fixed NN structure in this setting rather than a sequence of NNs, we drop the subscript "\(n\)" in the notation \(\mathbf{f}\left(\cdot;\hat{\mathbf{\theta}}_{mv}^{*}\right)\). Using this result together with (5.4), we can approximate the associated value of \(\tilde{\gamma}\) by
\[\tilde{\gamma} \simeq \frac{1}{2\tilde{\rho}}+\frac{1}{n}\sum_{j=1}^{n}W^{*\left(j\right)}\left(T;\hat{\mathbf{\theta}}_{mv}^{*},\mathcal{Y}_{n}\right), \tag{5.5}\]
and solve problem \(DSQ\left(\gamma=\tilde{\gamma}\right)\) independently using the proposed approach on the same training data set \(\mathcal{Y}_{n}\).
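In code, the estimate (5.5) is essentially a one-liner over the simulated terminal wealth values of the trained \(MV\left(\rho=\tilde{\rho}\right)\) strategy; the input names below are illustrative.

```python
# Sketch of (5.5): estimating gamma_tilde from simulated terminal wealth of the trained
# MV(rho_tilde) strategy on the training paths (input names are illustrative).
import numpy as np

def estimate_gamma_tilde(rho_tilde, terminal_wealth_mv):
    return 1.0 / (2.0 * rho_tilde) + float(np.mean(terminal_wealth_mv))

# e.g. rho_tilde = 0.017 together with the simulated mean of W*(T) yields the value of
# gamma_tilde used for the independently trained DSQ problem.
```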
According to Proposition 5.1, the resulting investment strategy \(\mathbf{f}\left(\cdot;\hat{\mathbf{\theta}}_{dsq}^{*}\right)\) should be (approximately) identical to the strategy \(\mathbf{f}\left(\cdot;\hat{\mathbf{\theta}}_{mv}^{*}\right)\) if the proposed approach works as required. Note that the parameter vectors are expected to be different (i.e. \(\hat{\mathbf{\theta}}_{dsq}^{*}\neq\hat{\mathbf{\theta}}_{mv}^{*}\)) due to a variety of reasons (multiple local minima, optimization using SGD, etc.), but the resulting wealth distributions and asset allocations should agree, i.e. \(\mathbf{f}\left(\cdot;\hat{\mathbf{\theta}}_{dsq}^{*}\right)\simeq\mathbf{f}\left(\cdot;\hat{\mathbf{\theta}}_{mv}^{*}\right)\).
Figure 5.2 demonstrates the investment strategies \(\mathbf{f}\left(\cdot;\hat{\mathbf{\theta}}_{mv}^{*}\right)\) and \(\mathbf{f}\left(\cdot;\hat{\mathbf{\theta}}_{dsq}^{*}\right)\) obtained by training the NNs on the same training data set using values of \(\tilde{\rho}=0.017\) and \(\tilde{\gamma}=429.647\), respectively. Note that the values \(\tilde{\rho}\) and \(\tilde{\gamma}\) are rounded to three decimal places, and Figure 5.2 corresponds to Results set 1 in Table 5.3. In this example, only four of the underlying candidate assets have non-zero investments, which is to be expected due to the high correlation between long-only equity factor indices.
Table 5.3 confirms that the associated optimal terminal wealth distributions of \(MV\left(\rho=\tilde{\rho}\right)\) and \(DSQ\left(\gamma=\tilde{\gamma}\right)\) indeed correspond, both in-sample (training data set) and out-of-sample (testing data set).
The proposed NN approach therefore clearly works as expected, in that we demonstrated the result of Proposition 5.1 in a completely model-independent way in a portfolio optimization setting where no known analytical solutions exist. In particular, we emphasize that no assumptions were made regarding parametric underlying asset dynamics; the results are entirely data-driven. As a result, we can interpret the preceding results as showing that the approach correctly recovers the time-inconsistent (or pre-commitment) strategy without difficulty if the objective is not separable in the sense of dynamic programming, such as in the case of the \(MV\left(\rho\right)\) problem, whereas if the objective is separable in the sense of dynamic programming, such as in the case of the \(DSQ\left(\gamma\right)\) problem, the approach correctly recovers the associated time-consistent strategy.
Figure 5.2: Ground truth - problems \(MV\left(\rho=\tilde{\rho}\right)\) and \(DSQ\left(\gamma=\tilde{\gamma}\right)\): investment strategies \(\boldsymbol{f}\left(\cdot;\hat{\boldsymbol{\theta}}_{mv}^{*}\right)\) and \(\boldsymbol{f}\left(\cdot;\hat{\boldsymbol{\theta}}_{dsq}^{*}\right)\) obtained by training the NNs using values of \(\tilde{\rho}=0.017\) and \(\tilde{\gamma}=429.647\) (rounded to three decimal places), respectively. Each figure shows the proportion of wealth invested in the asset as a function of the minimal NN features, namely time and available wealth. Under the optimal strategies there is zero investment in the broad market index and the Size factor.
Results set 1: \(\tilde{\rho}=0.017\), \(\tilde{\gamma}=429.647\); Results set 2: \(\tilde{\rho}=0.0097\), \(\tilde{\gamma}=493.196\).

| \(W\left(T\right)\) distribution | Set 1, Training (MV) | Set 1, Training (DSQ) | Set 1, Testing (MV) | Set 1, Testing (DSQ) | Set 2, Training (MV) | Set 2, Training (DSQ) | Set 2, Testing (MV) | Set 2, Testing (DSQ) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Mean | 400.2 | 400.3 | 391.2 | 391.6 | 441.5 | 441.8 | 441.8 | 441.5 |
| Stdev | 55.4 | 55.4 | 26.2 | 25.7 | 79.6 | 79.7 | 39.4 | 39.5 |
| 5th percentile | 276.5 | 276.4 | 346.6 | 347.5 | 255.2 | 254.6 | 367.8 | 367.1 |
| 25th percentile | 391.8 | 392.3 | 382.4 | 382.8 | 422.4 | 423.6 | 430.9 | 430.7 |
| 50th percentile | 416.1 | 416.3 | 396.5 | 396.8 | 469.8 | 470.1 | 451.3 | 451.2 |
| 75th percentile | 429.9 | 429.8 | 406.4 | 406.7 | 487.7 | 489.6 | 465.0 | 464.8 |
| 95th percentile | 452.1 | 452.1 | 418.9 | 419.0 | 516.1 | 516.5 | 480.9 | 480.2 |

Table 5.3: Ground truth - problems \(MV\left(\rho=\tilde{\rho}\right)\) and \(DSQ\left(\gamma=\tilde{\gamma}\right)\): Terminal wealth results obtained using \(n=10^{6}\) joint paths for the underlying assets. Note that the values of \(\tilde{\rho}\) and \(\tilde{\gamma}\) are rounded to three decimal places.
### Mean - Semi-variance strategies
Having demonstrated the reliability of the results obtained using the proposed NN approach with the preceding ground truth analyses, we now consider the solution of the Mean - Semi-variance problem (2.16). To provide the necessary context to interpret the \(MSemiV\left(\rho\right)\)-optimal results, we compare the results of the optimal solutions of the \(MCV\left(\rho=\rho_{mcv}\right)\), \(MSemiV\left(\rho=\rho_{msv}\right)\), and \(OSQ\left(\gamma=\gamma_{osq}\right)\) problems, where the values of \(\rho_{mcv}\), \(\rho_{msv}\) and \(\gamma_{osq}\) are selected to obtain the same expected value of terminal wealth on the NN training data set. This is done since the MCV- and OSQ-optimal strategies have been analyzed in great detail (Dang and Forsyth (2016); Forsyth (2020)), and are therefore well understood. Note that since all three strategies are related to the maximization of the mean terminal wealth while simultaneously minimizing some risk measure (which is implicitly done in the case of the OSQ problem, see Dang and Forsyth (2016)), it is natural to compare the strategies on the basis of equal expectation of terminal wealth.
To highlight the main qualitative features of the \(MSemiV\left(\rho\right)\)-optimal results, we consider a simple investment scenario of two assets, namely 30-day T-bills and a broad equity market index (the VWD index) - see Appendix C for definitions. We choose \(T=\)5 years, \(w_{0}=1000\), and zero contributions to demonstrate a lump sum investment scenario with quarterly rebalancing.
To illustrate the flexibility of the NN approach to underlying data generating assumptions, the NN training data sets are constructed using generative adversarial network (GAN)-generated synthetic asset returns obtained by implementing the TimeGAN algorithm proposed by Yoon et al. (2019). In more detail, using empirical monthly asset returns from 1926:01 to 2019:12 for the underlying assets (data sources are specified in Appendix C), the TimeGAN is trained with default parameters as in Yoon et al. (2019) using block sizes of 6 months to capture both correlation and serial correlation aspects of the (joint) time series.6 Once trained, the TimeGAN is then used to generate a set of \(n=10^{6}\) paths of synthetic asset returns, which is used as the training data set to train the NNs corresponding to the MCV, MSemiV and OSQ-optimal investment strategies.
Footnote 6: It appears that the actual code in Yoon et al. (2019) implements the following steps: (i) takes as input actual price data, (ii) forms rolling blocks of price data and (iii) forms a single synthetic price path (which is the same length as the original path) by randomly sampling (without replacement) from the set of rolling blocks. Step (iii) corresponds to the non-overlapping block bootstrap using a fixed block size. This should be contrasted with stationary block bootstrap resampling of Politis and Romano (1994). Step (i) does not make sense as input to a bootstrap technique, since the data set is about 10 years long, with an initial price of $50 and a final price of $1200. We therefore changed Step (i), so that all data was converted to returns prior to being used as input.
Figure 5.3 illustrates the resulting optimal investment strategies, and we observe that the MSemiV-optimal strategy is fundamentally different from the MCV and OSQ-optimal strategies, while featuring elements of both. Specifically, Figure 5.4, which illustrates the resulting optimal terminal wealth distributions (with the same expectation), demonstrates that the MSemiV strategy, like the MCV strategy, can offer better downside protection than the OSQ strategy, while the MSemiV strategy retains some of the qualitative elements of the OSQ distribution such as the left skew.
Having illustrated that the MSemiV problem can be solved in a dynamic trading setting using the proposed NN approach to obtain investment strategies that offer potentially valuable characteristics, we leave a more in-depth investigation of the properties and applications of MSemiV-optimal strategies for future work.
Figure 5.3: Optimal investment strategies for the \(MCV\left(\rho=\rho_{mcv}\right)\), \(MSemiV\left(\rho=\rho_{msv}\right)\), and \(OSQ\left(\gamma=\gamma_{osq}\right)\) strategies, obtaining identical expectation of terminal wealth on the training data set. Each figure shows the proportion of wealth invested in the broad equity market index as a function of the minimal NN features, namely time and available wealth.
## 6 Conclusion
In this paper, we presented a flexible NN approach, which does not rely on dynamic programming techniques, to solve a large class of dynamic portfolio optimization problems. In the proposed approach, a single optimization problem is solved, issues of instability and error propagation involved in estimating high-dimensional conditional expectations are avoided, and the resulting NN is parsimonious in the sense that the number of parameters does not scale with the number of rebalancing events.
We also presented theoretical convergence analysis results which show that the numerical solution obtained using the proposed approach can recover the optimal investment strategy, provided it exists, regardless of whether the resulting optimal investment strategy is time-consistent or (formally) time-inconsistent.
Numerical results confirmed the advantages of the NN approach, and showed that accurate results can be obtained in ground truth analyses in a variety of settings. The numerical results also highlighted that the approach remains agnostic as to the underlying data generating assumptions, so that for example empirical asset returns or synthetic asset returns can be used without difficulty.
We conclude by noting that the NN approach is not necessarily limited to portfolio optimization problems such as those encountered during the accumulation phase of pension funds, and could be extended to address the significantly more challenging problems encountered during the decumulation phase of defined contribution pension funds (see for example Forsyth (2022)). We leave this extension for future work.
## 7 Declarations
The authors have no competing interests to declare that are relevant to the content of this article. P.A. Forsyth's work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) grant RGPIN-2017-03760.
|
2307.13101
|
Contrastive Example-Based Control
|
While many real-world problems might benefit from reinforcement
learning, these problems rarely fit into the MDP mold: interacting with the
environment is often expensive and specifying reward functions is challenging.
Motivated by these challenges, prior work has developed data-driven approaches
that learn entirely from samples from the transition dynamics and examples of
high-return states. These methods typically learn a reward function from
high-return states, use that reward function to label the transitions, and then
apply an offline RL algorithm to these transitions. While these methods can
achieve good results on many tasks, they can be complex, often requiring
regularization and temporal difference updates. In this paper, we propose a
method for offline, example-based control that learns an implicit model of
multi-step transitions, rather than a reward function. We show that this
implicit model can represent the Q-values for the example-based control
problem. Across a range of state-based and image-based offline control tasks,
our method outperforms baselines that use learned reward functions; additional
experiments demonstrate improved robustness and scaling with dataset size.
|
Kyle Hatch, Benjamin Eysenbach, Rafael Rafailov, Tianhe Yu, Ruslan Salakhutdinov, Sergey Levine, Chelsea Finn
|
2023-07-24T19:43:22Z
|
http://arxiv.org/abs/2307.13101v1
|
# Contrastive Example-Based Control
###### Abstract
While many real-world problems might benefit from reinforcement learning, these problems rarely fit into the MDP mold: interacting with the environment is often expensive and specifying reward functions is challenging. Motivated by these challenges, prior work has developed data-driven approaches that learn entirely from samples from the transition dynamics and examples of high-return states. These methods typically learn a reward function from high-return states, use that reward function to label the transitions, and then apply an offline RL algorithm to these transitions. While these methods can achieve good results on many tasks, they can be complex, often requiring regularization and temporal difference updates. In this paper, we propose a method for offline, example-based control that learns an implicit model of multi-step transitions, rather than a reward function. We show that this implicit model can represent the Q-values for the example-based control problem. Across a range of state-based and image-based offline control tasks, our method outperforms baselines that use learned reward functions; additional experiments demonstrate improved robustness and scaling with dataset size.1
Footnote 1: Videos of our method are available on the project website: [https://sites.google.com/view/laeo-rl](https://sites.google.com/view/laeo-rl).
Code is released at: [https://github.com/khatch31/laeo](https://github.com/khatch31/laeo).
**Keywords:** reinforcement learning, offline RL, robot learning, reward learning, contrastive learning, model-based reinforcement learning, example-based control, reward-free learning
## 1 Introduction
Reinforcement learning is typically framed as the problem of maximizing a given reward function. However, in many real-world situations, it is more natural for users to define what they want an agent to do with examples of successful outcomes (Fu et al., 2018; Zolna et al., 2020; Xu and Denil, 2019; Eysenbach et al., 2021). For example, a user that wants their robot to pack laundry into a washing machine might provide multiple examples of states where the laundry has been packed correctly. This problem setting is often seen as a variant of inverse reinforcement learning (Fu et al., 2018), where the aim is to learn only from examples of successful outcomes, rather than from
expert demonstrations. To solve this problem, the agent must both figure out what constitutes task success (i.e., what the examples have in common) and how to achieve such successful outcomes.
In this paper, our aim is to address this problem setting in the case where the agent must learn from offline data without trial and error. Instead, the agent must infer the outcomes of potential actions from the provided data, while also relating these inferred outcomes to the success examples. We will refer to this problem of offline RL with success examples as _offline example-based control_.
Most prior approaches involve two steps: _first_ learning a reward function, and _second_ combining it with an RL method to recover a policy (Fu et al., 2018; Zolna et al., 2020; Xu and Denil, 2019). While such approaches can achieve excellent results when provided sufficient data (Kalashnikov et al., 2021; Zolna et al., 2020), learning the reward function is challenging when the number of success examples is small (Li et al., 2021; Zolna et al., 2020). Moreover, these prior approaches are relatively complex (e.g., they use temporal difference learning) and have many hyperparameters.
Our aim is to provide a simple and scalable approach that avoids the challenges of reward learning. The main idea will be learning a certain type of dynamics model. Then, using that model to predict the probabilities of reaching each of the success examples, we will be able to estimate the Q-values for every state and action. Note that this approach does not use an offline RL algorithm as a subroutine. The key design decision is the model type; we will use an implicit model of the time-averaged future (precisely, the discounted state occupancy measure). This decision means that our model reasons across multiple time steps but will not output high-dimensional observations (only a scalar number). A limitation of this approach is that it will correspond to a single step of policy improvement: the dynamics model corresponds to the dynamics of the behavioral policy, not of the reward-maximizing policy. While this means that our method is not guaranteed to yield the optimal policy, our experiments nevertheless show that our approach outperforms multi-step RL methods.
The main contribution of this paper is an offline RL method (LAEO) that learns a policy from examples of high-reward states. The key idea behind LAEO is an implicit dynamics model, which represents the probability of reaching states at some point in the future. We use this model to estimate the probability of reaching examples of high-return states. LAEO is simpler yet more effective than prior approaches based on reward classifiers. Our experiments demonstrate that LAEO can successfully solve offline RL problems from examples of high-return states on four state-based and two image-based manipulation tasks. Our experiments show that LAEO is more robust to occlusions and also exhibits better scaling with dataset size than prior methods. We show that LAEO can work in example-based control settings in which goal-conditioned RL methods fail. Additionally, we show that the dynamics model learned by LAEO can generalize to multiple different tasks, being used to solve tasks that are not explicitly represented in the training data.
## 2 Related Work
Reward learning.To overcome the challenge of hand-engineering reward functions for RL, prior methods either use supervised learning or adversarial training to learn a policy that matches the expert behavior given by the demonstration (imitation learning) (Pomerleau, 1988; Ross et al., 2011; Ho and Ermon, 2016; Spencer et al., 2021) or learn a reward function from demonstrations and optimize the policy with the learned reward through trial and error (inverse RL) (Ng and Russell, 2000; Abbeel and Ng, 2004; Ratliff et al., 2006; Ziebart et al., 2008; Finn et al., 2016; Fu et al., 2018). However, providing full demonstrations complete with agent actions is often difficult, therefore, recent works have focused on the setting where only a set of user-specified goal states or human videos
are available (Fu et al., 2018; Singh et al., 2019; Kalashnikov et al., 2021; Xie et al., 2018; Eysenbach et al., 2021; Chen et al., 2021). These reward learning approaches have shown successes in real-world robotic manipulation tasks from high-dimensional image inputs (Finn et al., 2016; Singh et al., 2019; Zhu et al., 2020; Chen et al., 2021). Nevertheless, to combat covariate shift that could lead the policy to drift away from the expert distribution, these methods usually require significant online interaction. Unlike these works that study online settings, we consider learning visuomotor skills from offline datasets.
Offline RL.Offline RL (Ernst et al., 2005; Riedmiller, 2005; Lange et al., 2012; Levine et al., 2020) studies the problem of learning a policy from a static dataset without online data collection in the environment, which has shown promising results in robotic manipulation (Kalashnikov et al., 2018; Mandlekar et al., 2020; Rafailov et al., 2021; Singh et al., 2020; Julian et al., 2020; Kalashnikov et al., 2021). Prior offline RL methods focus on the challenge of distribution shift between the offline training data and deployment using a variety of techniques, such as policy constraints (Fujimoto et al., 2018; Liu et al., 2020; Jaques et al., 2019; Wu et al., 2019; Zhou et al., 2020; Kumar et al., 2019; Siegel et al., 2020; Peng et al., 2019; Fujimoto and Gu, 2021; Ghasemipour et al., 2021), conservative Q-functions (Kumar et al., 2020; Kostrikov et al., 2021; Yu et al., 2021; Sinha and Garg, 2021), and penalizing out-of-distribution states generated by learned dynamics models (Kidambi et al., 2020; Yu et al., 2020; Matsushima et al., 2020; Argenson and Dulac-Arnold, 2020; Swazinna et al., 2020; Rafailov et al., 2021; Lee et al., 2021; Yu et al., 2021).
While these prior works successfully address the issue of distribution shift, they still require reward annotations for the offline data. Practical approaches have used manual reward sketching to train a reward model (Cabi et al., 2019; Konyushkova et al., 2020; Rafailov et al., 2021) or heuristic reward functions (Yu et al., 2022). Others have considered offline learning from demonstrations, without access to a predefined reward function (Mandlekar et al., 2020; Zolna et al., 2020; Xu et al., 2022; Jarboui and Perchet, 2021), however they rely on high-quality demonstration data. In contrast, our method: _(1)_ addresses distributional shift induced by both the learned policy and the reward function in a principled way, _(2)_ only requires user-provided goal states and _(3)_ does not require expert-quality data, resulting in an effective and practical offline reward learning scheme.
## 3 Learning to Achieve Examples Offline
Offline RL methods typically require regularization, and our method will employ regularization in two ways. First, we regularize the policy with an additional behavioral cloning term, which penalizes the policy for sampling out-of-distribution actions. Second, our method uses the Q-function for the behavioral policy, so it performs one (not many) step of policy improvement. These regularizers mean that our approach is not guaranteed to yield the optimal policy.
### Preliminaries
We assume that an agent interacts with an MDP with states \(s\in\mathcal{S}\), actions \(a\), a state-only reward function \(r(s)\geq 0\), initial state distribution \(p_{0}(s_{0})\) and dynamics \(p(s_{t+1}\mid s_{t},a_{t})\). We use \(\tau=(s_{0},a_{0},s_{1},a_{1},\cdots)\) to denote an infinite-length trajectory. The likelihood of a trajectory under a policy \(\pi(a\mid s)\) is \(\pi(\tau)=p_{0}(s_{0})\prod_{t=0}^{\infty}p(s_{t+1}\mid s_{t},a_{t})\pi(a_{t} \mid s_{t})\). The objective is to learn a policy \(\pi(a\mid s)\) that maximizes the expected, \(\gamma\)-discounted sum of rewards: \(\max_{\pi}\mathbb{E}_{\pi(\tau)}\left[\sum_{t=0}^{\infty}\gamma^{t}r(s_{t}) \right].\) We define the Q-function for policy \(\pi\) as the expected discounted sum of returns, conditioned on an
initial state and action:
\[Q^{\pi}(s,a)\triangleq\mathbb{E}_{\pi(\tau)}\left[\sum_{t=0}^{\infty}\gamma^{t}r(s _{t})\Big{|}_{a_{0}=a}^{s_{0}=s}\right]. \tag{1}\]
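As a small illustration, a Monte Carlo sample of this Q-value from \((s,a)\) is simply the discounted return of one trajectory that starts there; `rewards` below is a hypothetical array holding \(r(s_{t})\) along such a trajectory.

```python
# Sketch: a Monte Carlo sample of Q^pi(s, a) is the discounted return of one trajectory
# starting from (s, a); `rewards` holds r(s_t) along that trajectory (hypothetical input).
def discounted_return(rewards, gamma):
    total, discount = 0.0, 1.0
    for r in rewards:
        total += discount * r
        discount *= gamma
    return total
```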
We will focus on the offline (i.e., batch RL) setting. Instead of learning by interacting with the environment (i.e., via trial and error), the RL agent will receive as input a dataset of trajectories \(\mathcal{D}_{\tau}=\{\tau\sim\beta(\tau)\}\) collected by a behavioral policy \(\beta(a\mid s)\). We will use \(Q^{\beta}(s,a)\) to denote the Q-function of the behavioral policy.
Specifying the reward function.In many real-world applications, specifying and measuring a scalar reward function is challenging, but providing examples of good states (i.e., those which would receive high rewards) is straightforward. Thus, we follow prior work (Fu et al., 2018; Zolna et al., 2020; Eysenbach et al., 2021; Xu and Denil, 2019; Zolna et al., 2020) in assuming that the agent does not observe scalar rewards (i.e., \(\mathcal{D}_{\tau}\) does not contain reward information). Instead, the agent receives as input a dataset \(\mathcal{D}_{*}=\{s^{*}\}\) of high-reward states \(s^{*}\in\mathcal{S}\). These high-reward states are examples of good outcomes, which the agent would like to achieve. The high-reward states are not labeled with their specific reward value.
To make the control problem well defined, we must relate these success examples to the reward function. We do this by assuming that the frequency of each success example is proportional to its reward: good states are more likely to appear (and be duplicated) as success examples.
**Assumption 1**: _Let \(p_{\tau}(s)\) be the empirical probability density of state \(s\) in the trajectory dataset, and let \(p_{*}(s)\) be the empirical probability density of state \(s\) under the high-reward state dataset. We assume that there exists a positive constant \(c\) such that \(r(s)=c\frac{p_{*}(s)}{p_{\tau}(s)}\) for all states \(s\)._
This is the same assumption as Eysenbach et al. (2021). This assumption is important because it shows how example-based control is universal: for any reward function, we can specify the corresponding example-based problem by constructing a dataset of success examples that are sampled according to their rewards. We assumed that rewards are non-negative so that these sampling probabilities are positive.
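To make Assumption 1 concrete, the sketch below (our own illustration, with hypothetical input names) constructs a success-example set for a given reward function by sampling states from the trajectory data in proportion to their non-negative rewards, so that the implied densities satisfy \(r(s)\propto p_{*}(s)/p_{\tau}(s)\).

```python
# Sketch: building D_* from reward-labeled states so that Assumption 1 holds.
# `states` (array of states from D_tau) and `rewards` (their non-negative rewards)
# are hypothetical inputs used only for illustration.
import numpy as np

def sample_success_examples(states, rewards, n_examples, rng=np.random.default_rng(0)):
    p = rewards / rewards.sum()                # sampling probability proportional to r(s)
    idx = rng.choice(len(states), size=n_examples, replace=True, p=p)
    return states[idx]                         # empirical p_*(s) proportional to r(s) * p_tau(s)
```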
This assumption can also be read in reverse. When a user constructs a dataset of success examples in an arbitrary fashion, they are implicitly defining a reward function. In the tabular setting, the (implicit) reward function for state \(s\) is the count of the times \(s\) occurs in the dataset of success examples. Compared with goal-conditioned RL (Kaelbling, 1993), defining tasks via success examples is more general. By identifying what all the success examples have in common (e.g., laundry is folded), the RL agent can learn what is necessary to solve the task and what is irrelevant (e.g., the color of the clothes in the laundry). We now can define our problem statement as follows:
**Definition 1**: _In the **offline example-based control** problem, a learning algorithm receives as input a dataset of trajectories \(\mathcal{D}_{\tau}=\{\tau\}\) and a dataset of successful outcomes \(\mathcal{D}_{*}=\{s\}\) satisfying Assumption 1. The aim is to output a policy that maximizes the RL objective (Eq. 3.1)._
This problem setting is appealing because it mirrors many practical RL applications: a user has access to historical data from past experience, but collecting new experience is prohibitively expensive. Moreover, this problem setting can mitigate the challenges of reward function design. Rather than having to implement a reward function and add instruments to measure the corresponding components, the users need only provide a handful of observations that solved the task. This problem
setting is similar to imitation learning, in the sense that the only inputs are data. However, unlike imitation learning, in this problem setting the high-reward states are not labeled with actions, and these high-reward states may not necessarily contain entire trajectories.
Our method will estimate the discounted state occupancy measure,
\[p^{\beta}(s_{t+}=s\mid s_{0},a_{0})\triangleq(1-\gamma)\sum_{t=0}^{\infty}\gamma^{t}p_{t}^{\beta}(s_{t}=s\mid s_{0},a_{0}), \tag{2}\]
where \(p_{t}^{\beta}(s_{t}\mid s,a)\) is the probability of policy \(\beta(a\mid s)\) visiting state \(s_{t}\) after exactly \(t\) time steps. Unlike the transition function \(p(s_{t+1}\mid s_{t},a_{t})\), the discounted state occupancy measure indicates the probability of visiting a state at any point in the future, not just at the immediate next time step. In tabular settings, this distribution corresponds to the successor representations (Dayan, 1993). To handle continuous settings, we will use the contrastive approach from recent work (Mazoure et al., 2020; Eysenbach et al., 2022). We will learn a function \(f(s,a,s_{f})\in\mathds{R}\) that takes as input an initial state-action pair as well as a candidate future state, and outputs a score estimating the likelihood that \(s_{f}\) is a real future state. The loss function is a standard contrastive learning loss (e.g., Ma and Collins (2018)), where positive examples are triplets of a state, action, and future state:
\[\max_{f}\mathcal{L}(f;\mathcal{D}_{\tau})\triangleq\mathbb{E}_{p(s,a),s_{f} \sim p^{\beta}(s_{t+}|s,a)}\left[\log\sigma(f(s,a,s_{f}))\right]+\mathbb{E}_{p (s,a),s_{f}\sim p(s)}\left[\log(1-\sigma(f(s,a,s_{f})))\right],\]
where \(\sigma(\cdot)\) is the sigmoid function. At optimality, the implicit dynamics model encodes the discounted state occupancy measure:
\[f^{*}(s,a,s_{f})=\log p^{\beta}(s_{t+}=s_{f}\mid s,a)-\log p_{\tau}(s_{f}). \tag{3}\]
We visualize this implicit dynamics model in Fig. 1. Note that this dynamics model is policy dependent. Because it is trained with data collected from one policy (\(\beta(a\mid s)\)), it will correspond to the probability that _that_ policy visits states in the future. Because of this, our method will result in estimating the value function for the behavioral policy (akin to 1-step RL (Brandfonbrener et al., 2021)), and will not perform multiple steps of policy improvement. Intuitively, the training of this implicit model resembles hindsight relabeling (Kaelbling, 1993; Andrychowicz et al., 2017). However, it is generally unclear how to use hindsight relabeling for single-task problems. Despite being a single-task method, our method will be able to make use of hindsight relabeling to train the dynamics model.
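A minimal training sketch for this implicit model is given below (our own naming and architecture, not the authors' released code): \(f\) is parameterized through two encoders combined by an inner product, positives are future states sampled from the same trajectory with a geometric (discount \(\gamma\)) time offset, negatives are random dataset states, and the objective is the binary cross-entropy form of the contrastive loss above.

```python
# Sketch (our own naming/architecture) of training the implicit model f(s, a, s_f):
# two encoders combined by an inner product; positives drawn from the same trajectory
# Geometric(1 - gamma) steps ahead; negatives drawn from the marginal state distribution.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImplicitModel(nn.Module):
    def __init__(self, obs_dim, act_dim, repr_dim=64, hidden=256):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, repr_dim))   # phi(s, a)
        self.psi = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, repr_dim))   # psi(s_f)

    def forward(self, s, a, s_f):
        return (self.phi(torch.cat([s, a], dim=-1)) * self.psi(s_f)).sum(dim=-1)  # f(s, a, s_f)

def critic_loss(model, s, a, s_future, s_random):
    # s_future: state from the same trajectory, Geometric(1 - gamma) steps ahead
    # s_random: states sampled from the marginal p(s) of the dataset
    pos, neg = model(s, a, s_future), model(s, a, s_random)
    return (F.binary_cross_entropy_with_logits(pos, torch.ones_like(pos)) +
            F.binary_cross_entropy_with_logits(neg, torch.zeros_like(neg)))
```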
### Deriving Our Method
The key idea behind our method is that this implicit dynamics model can be used to represent the Q-values for the example-based problem, up to a constant. The proof is in Appendix A.
**Lemma 2**: _Assume that the implicit dynamics model is learned without errors. Then the Q-function for the data collection policy \(\beta(a\mid s)\) can be expressed in terms of this implicit dynamics model:_
\[Q^{\beta}(s,a)=\frac{c}{1-\gamma}\mathbb{E}_{p_{*}(s^{*})}\left[e^{f(s,a,s^{* })}\right]. \tag{4}\]
Figure 1: Our method will use contrastive learning to predict which states might occur at some point in the future.
So, after learning the implicit dynamics model, we can estimate the Q-values by averaging this model's predictions across the success examples. We will update the policy using Q-values estimated in this manner, plus a regularization term:
\[\min_{\pi}\mathcal{L}(\pi;f,\mathcal{D}_{*})\triangleq-(1-\lambda)\mathbb{E}_{ \pi(a|s)p(s),s^{*}\sim\mathcal{D}_{*}}\left[e^{f(s,a,s^{*})}\right]-\lambda \mathbb{E}_{s,a\sim\mathcal{D}_{\tau}}\left[\log\pi(a\mid s)\right]. \tag{5}\]
In our experiments, we use a weak regularization coefficient of \(\lambda=0.5\).
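As a rough numerical illustration of Lemma 2, the sketch below scores a candidate action by averaging \(e^{f}\) over the success examples, as in Eq. 4; the `toy_critic`, the states, and the constants are made-up stand-ins rather than quantities from the paper.

```python
import numpy as np

def estimate_q(critic, s, a, success_states, c=1.0, gamma=0.99):
    """Q^beta(s, a) up to the constant c / (1 - gamma), following Eq. 4."""
    scores = np.array([critic(s, a, s_star) for s_star in success_states])
    return c / (1.0 - gamma) * np.mean(np.exp(scores))

# toy critic: higher score when taking action a from state s lands near the success state
def toy_critic(s, a, s_star):
    return -np.linalg.norm((s + a) - s_star)

s = np.zeros(2)
success_states = [np.array([1.0, 0.0]), np.array([1.0, 0.2])]
for a in (np.array([1.0, 0.0]), np.array([-1.0, 0.0])):
    print(a, estimate_q(toy_critic, s, a, success_states))
```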
It is worth comparing this approach to prior methods based on learned reward functions (Xu and Denil, 2019; Fu et al., 2018; Zolna et al., 2020). Those methods learn a reward function from the success examples, and use that learned reward function to synthetically label the dataset of trajectories. Both approaches can be interpreted as learning a function on one of the datasets and then applying that function to the other dataset. Because it is easier to fit a function when given large quantities of data, we predict that our approach will outperform the learned reward function approach when the number of success examples is small, relative to the number of unlabeled trajectories. Other prior methods (Eysenbach et al., 2021; Reddy et al., 2020) avoid learning reward functions by proposing TD update rules that are applied to both the unlabeled transitions and the high-return states. However, because these methods have yet to be adapted to the offline RL setting, we will focus our comparisons on the reward-learning methods.
### A Geometric Perspective
Before presenting the complete RL algorithm, we provide a geometric perspective on the representations learned by our method. Our implicit model learns a representation of state-action pairs \(\phi(s,a)\) as well as a representation of future states \(\psi(s)\). One way that our method can optimize these representations is by treating \(\phi(s,a)\) as a prediction for the future representations.2 Each of the high-return states can be mapped to the same representation space. To determine whether a state-action pair has a large or small Q-value, we can simply see whether the predicted representation \(\phi(s,a)\) is close to the representations of any of the success examples. Our method learns these representations so that the Q-values are directly related to the Euclidean distances3 from each success example. Thus, our method can be interpreted as learning a representation space such that estimating Q-values corresponds to simple geometric operations (kernel smoothing with an RBF kernel (Hastie et al., 2009, Chpt. 6)) on the learned representations. While the example-based control problem is more general than goal-conditioned RL (see Sec. 3.1), we can recover goal-conditioned RL as a special case by using a single success example.
Figure 2: If the state-action representation \(\phi(s,a)\) is close to the representation of a high-return state \(\psi(s)\), then the policy is likely to visit that state. Our method estimates Q-values by combining the distances to all the high-return states (Eq. 1).
### A Complete Algorithm
We now build a complete offline RL algorithm based on these Q-functions. We will call our method Learning to Achieve Examples Offline (LAEO). Our algorithm will resemble one-step RL methods, but differ in how the Q-function is trained. After learning the implicit dynamics model (and, hence, Q-function) we will optimize the policy. The objective for the policy is maximizing (log) Q-values plus a regularization term, which penalizes sampling unseen actions:4
Footnote 4: For all experiments except Fig. 8, we apply Jensen’s inequality to the first term, using \(\mathbb{E}_{\pi(a|s),s^{*}\sim p_{*}(s)}[f(s,a,s^{*})]\).
\[\max_{\pi} (1-\lambda)\log\mathbb{E}_{\pi(a|s)p_{\tau}(s)}\left[Q(s,a)\right] +\lambda\mathbb{E}_{(s,a)\sim p_{\tau}(s,a)}\left[\log\pi(a\mid s)\right]\] \[=(1-\lambda)\log\mathbb{E}_{\pi(a|s),s^{*}\sim p_{*}(s)}\left[e^{ f(s,a,s^{*})}\right]+\lambda\mathbb{E}_{(s,a)\sim p_{\tau}(s,a)}\left[\log\pi(a \mid s)\right]. \tag{6}\]
As noted above, this is a one-step RL method: it updates the policy to maximize the Q-values of the behavioral policy. Performing just a single step of policy improvement can be viewed as a form of regularization in RL, in the same spirit as early stopping is a form of regularization in supervised learning. Prior work has found that one-step RL methods can perform well in the offline RL setting. Because our method performs only a single step of policy improvement, we are not guaranteed that it will converge to the reward-maximizing policy. We summarize the complete algorithm in Alg. 1.
```
1:Inputs: dataset of trajectories \(\mathcal{D}_{\tau}=\{\tau\}\), dataset of high-return states \(\mathcal{D}_{*}=\{s\}\).
2:Learn the model via contrastive learning: \(f\leftarrow\arg\min_{f}\mathcal{L}(f;\mathcal{D}_{\tau})\)\(\triangleright\) Eq. 5
3:Learn the policy: \(\pi\leftarrow\arg\min_{\pi}\mathcal{L}(\pi;f,\mathcal{D}_{*})\)\(\triangleright\) Eq. 6
4:return policy \(\pi(a\mid s)\)
```
**Algorithm 1** Learning to Achieve Examples Offline
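For concreteness, a per-minibatch sketch of the policy step (the regularized log-Q objective of Eq. 6) is given below, with NumPy arrays standing in for critic scores on policy-sampled actions and for the policy log-probabilities on dataset actions; this is an illustrative sketch under those assumptions, not the released training code.

```python
import numpy as np

def policy_loss(f_scores, log_probs, lam=0.5):
    """Negative of the Eq. 6 objective for one minibatch.

    f_scores:  f(s, a ~ pi, s*) with actions sampled from the current policy
               and s* drawn from the success examples.
    log_probs: log pi(a | s) evaluated on the dataset's (s, a) pairs.
    """
    q_term = np.log(np.mean(np.exp(f_scores)))    # log E[exp f]
    bc_term = np.mean(log_probs)                  # behavioral-cloning regularizer
    return -((1.0 - lam) * q_term + lam * bc_term)

print(policy_loss(np.array([0.5, 1.2, -0.3]), np.array([-1.1, -0.8, -2.0])))
```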
## 4 Experiments
Our experiments test whether LAEO can effectively solve offline RL tasks that are specified by examples of high-return states, rather than via scalar reward functions. We study when our approach outperforms prior approaches based on learned reward functions. We look not only at the performance relative to baselines on state-based and image-based tasks, but also how that performance depends on the size and composition of the input datasets. Additional experiments study how LAEO performs when provided with varying numbers of success observations and whether our method can solve partially observed tasks. We include full hyperparameters and implementation details in Appendix B. Code is available at [https://github.com/khatch31/laeo](https://github.com/khatch31/laeo). Videos of our method are available at [https://sites.google.com/view/laeo-rl](https://sites.google.com/view/laeo-rl).
Figure 3: **Benchmark tasks**: We evaluate the performance of LAEO on six simulated manipulation tasks, two of which use pixel observations (FetchReach-image and FetchPush-image) and four of which use low-dimensional states (FetchReach, FetchPush, SawyerWindowOpen, and SawyerDrawerClose).
**Baselines.** Our main point of comparison will be prior methods that use learned reward functions: ORIL (Zolna et al., 2020) and PURL (Xu and Denil, 2019). The main difference between these methods is the loss function used to train the reward function: ORIL uses a binary cross entropy loss while PURL uses a positive-unlabeled loss (Xu and Denil, 2019). Note that the ORIL paper also reports results using a positive-unlabeled loss, but for the sake of clarity we simply refer to it as PURL. After learning the reward function, each of these methods applies an off-the-shelf RL algorithm. We will implement all baselines using the TD3+BC (Fujimoto and Gu, 2021) offline RL algorithm. These offline RL methods achieve good performance on tasks specified via reward functions (Kostrikov et al., 2021; Brandfonbrener et al., 2021; Fujimoto and Gu, 2021). We also include Behavioral Cloning (BC) results.
**Benchmark comparison.** We start by comparing the performance of LAEO to these baselines on six manipulation tasks. FetchReach and FetchPush are two manipulation tasks from Plappert et al. (2018) that use state-based observations. FetchReach-image and FetchPush-image are the same tasks but with image-based observations. SawyerWindowOpen and SawyerDrawerClose are two manipulation tasks from Yu et al. (2020). For each of these tasks, we collect a dataset of medium quality by training an online agent from Eysenbach et al. (2022) and rolling out multiple checkpoints during the course of training. The resulting datasets have success rates between \(45\%-50\%\). We report results after \(500,000\) training gradient steps (or \(250,000\) steps, if the task success rates have converged by that point).
We report results in Fig. 4. We observe that LAEO, PURL, and ORIL perform similarly on FetchReach and FetchReach-image. This is likely because these are relatively easy tasks, and each of these methods is able to achieve a high success rate. Note that all of these methods significantly outperform BC, indicating that they are able to learn better policies than the mode behavior policies represented in the datasets. On SawyerDrawerClose, all methods, including BC, achieve near perfect success rates, likely due to the simplicity of this task. On FetchPush, FetchPush-image, and SawyerWindowOpen, LAEO outperforms all of the baselines by a significant margin. Recall that the main difference between LAEO and PURL/ORIL is that LAEO learns a dynamics model rather than a reward function. These experiments suggest that for tasks with more complex dynamics, learning a dynamics model can achieve better performance than is achieved by model-free reward classifier methods.
Figure 4: **Benchmark comparison**: LAEO matches or outperforms prior example-based offline RL methods on state and image-based tasks, including those that learn a separate reward function (ORIL, PURL). The gap in performance is most significant on the FetchPush and FetchPush-image tasks, which involve more complicated dynamics than the other tasks, suggesting that LAEO may outperform model-free reward-learning approaches on tasks with complicated dynamics. LAEO also outperforms BC on all of the tasks, highlighting LAEO's ability to learn a policy that outperforms the behavior policy on non-demonstration datasets.
**Varying the input data.** Our next experiment studies how the dataset composition affects LAEO and the baselines. On each of three tasks, we generate a low-quality dataset by rolling out multiple checkpoints from a partially trained agent from Eysenbach et al. (2022). In comparison to the medium-quality datasets collected earlier, which have success rates between \(45\%-50\%\), these low quality datasets have success rates between \(8\%-12\%\). We will denote these low quality datasets with the "Hard" suffix. Fig. 5 shows that LAEO continues to outperform baselines on these lower-quality datasets.
Our next experiments study how varying the number of high-return example states and the number of reward-free trajectories affects performance. As noted in Sec. 1, we conjecture that our method will be especially beneficial relative to reward-learning approaches in settings with very few high-return example states. In Fig. 6_(left)_, we vary the number of high-return example states on FetchPush-image, holding the number of unlabeled trajectories constant. We observe that LAEO achieves the same performance with 1 success example as with 200 success examples. In contrast, ORIL's performance decreases as the number of high-return example states decreases. In Fig. 6_(right)_, we vary the number of unlabeled trajectories, holding the number of high-return example states constant at \(200\). We test the performance of LAEO vs. ORIL on three different dataset sizes on FetchPush-image, roughly corresponding to three different orders of magnitude: the \(0.1\times\) dataset contains \(3,966\) trajectories, the \(1\times\) dataset contains \(31,271\) trajectories, and the \(10\times\) dataset contains \(300,578\) trajectories. We observe that LAEO continues to see performance gains as the number of unlabeled trajectories increases, whereas ORIL's performance plateaus. Taken together, these results suggest that, in comparison to reward classifier based methods, LAEO needs less human supervision and is more effective at leveraging large quantities of unlabeled data.
Figure 5: **Data quality. LAEO continues to match or outperform reward classifier based methods on datasets that contain a low percentage of successful trajectories.**
Figure 6: **Effect of dataset size: _(Left)_ The most competitive baseline (ORIL) achieves better performance when given more examples of high-return states, likely because it makes it easier to learn ORIL’s reward classifier. LAEO, which does not require learning a reward classifier, consistently achieves high success rates. _(Right)_ LAEO continues to improve when trained with more reward-free trajectories, while ORIL’s performance plateaus.**
**Partial Observability.** We also test the performance of LAEO on a partially-observed task. We modify the camera position in the FetchPush-image so that the block is occluded whenever the end effector is moved to touch the block. While such partial observability can stymie temporal difference methods (Whitehead and Ballard, 1991), we predict that LAEO might continue to solve this task because it does not rely on temporal difference learning. The results, shown in Fig. 7, confirm this prediction. On this partially observable task, we compare the performance of LAEO with that of ORIL, the best performing baseline on the fully observable tasks. On the partially observable task, LAEO achieves a success rate of \(51.9\%\), versus \(33.9\%\) for ORIL.
**Comparison to Goal-Conditioned RL.** One of the key advantages of example-based control, relative to goal-conditioned RL, is that the policy can identify common patterns in the success examples to solve tasks in scenarios where it has never before seen a success example. In settings such as robotics, this can be an issue since acquiring a goal state to provide to the agent requires already solving the desired task in the first place. We test this capability in a variant of the SawyerDrawerClose environment. For training, the drawer's X position is chosen as one of five fixed locations. Then, we evaluate the policy learned by LAEO on three types of environments: _In Distribution_: the drawer's X position is one of the five locations from training; _Interpolation_: The drawer's X position is between some of the locations seen during training; _Extrapolation_: The drawer's X position is outside the range of X positions seen during training. We compare to a goal-conditioned policy learned via contrastive RL, where actions are extracted by averaging over the (training) success examples: \(\pi(a\mid s)=\mathbb{E}_{s^{*}\sim p_{*}(s)}[\pi(a\mid s,g=s^{*})]\).
The results, shown in Fig. 8, indicate that LAEO consistently outperforms this goal-conditioned baseline. As expected, the performance is highest for the In Distribution environments and lowest for the Extrapolation environments. Taken together, these experiments show that LAEO can learn to reach multiple different goal locations without access to goal states during test time.
**Multitask Critic.** We explore whether a LAEO dynamics network trained on data from one task can be used to solve other downstream tasks. We create a simple multitask environment by defining several different tasks that can be solved in the SawyerDrawerClose environment: Close, Half-closed, Open, Reach-near, Reach-medium, and Reach-far. We then use a trained critic network from the previous set of experiments (Comparison to Goal-Conditioned RL), condition it on a success example from a downstream task, and select actions by using cross entropy method (CEM) optimization. By using CEM optimization, we do not need to train a separate policy network for each of the tasks. See Appendix C for implementation details and for details of the multitask drawer environment.
Figure 8: **Comparison with goal-conditioned RL**. LAEO solves manipulation tasks at multiple different locations without being provided with a goal-state at test time.
Figure 7: **Partial observability**. LAEO continues to solve the FetchPush-image manipulation task in a setting where the new camera placement causes partial observability. This camera angle causes the block to be hidden from view by the gripper when the gripper reaches down to push the block.
CEM over the LAEO critic achieves non-zero success rates on all six tasks, despite only being trained on data from the Close task (see Figure 9). In contrast, randomly sampling actions from the action space achieves a \(0\%\) success rate on all of the tasks. Results are averaged across eight random seeds. This suggests that a single LAEO critic can be leveraged to solve multiple downstream tasks, as long as the dynamics required to solve those tasks are represented in the training data. Note that since we condition the critic network on a single goal example, these experiments can be interpreted from a goal-conditioned perspective as well as an example-based control perspective. In future work, we aim to explore the multitask capabilities of the LAEO dynamics model in an example-based control setting at a larger scale.
This will involve training on larger, more diverse datasets as well as conditioning the critic network on multiple success examples for a single task (as done in the Comparison to Goal-Conditioned RL experiments).
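For readers unfamiliar with CEM, the following is a minimal cross-entropy-method loop over a frozen critic, sketching how an action could be selected for a new success example without training a per-task policy; the `critic` callable, dimensions, and hyperparameters are illustrative placeholders rather than the settings used in these experiments.

```python
import numpy as np

def cem_action(critic, s, s_star, action_dim=2, iters=5, pop=64, n_elite=8, seed=0):
    """Pick an action by repeatedly refitting a Gaussian to the top-scoring samples."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(action_dim), np.ones(action_dim)
    for _ in range(iters):
        actions = rng.normal(mu, sigma, size=(pop, action_dim))
        scores = np.array([critic(s, a, s_star) for a in actions])
        elites = actions[np.argsort(scores)[-n_elite:]]
        mu, sigma = elites.mean(axis=0), elites.std(axis=0) + 1e-6
    return mu

toy_critic = lambda s, a, s_star: -np.linalg.norm((s + a) - s_star)
print(cem_action(toy_critic, np.zeros(2), np.array([0.5, -0.3])))
```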
## 5 Conclusion
In this paper, we present an RL algorithm aimed at settings where data collection and reward specification are difficult. Our method learns from a combination of high-return states and reward-free trajectories, integrating these two types of information to learn reward-maximizing policies. Whereas prior methods perform this integration by learning a reward function and then applying an off-the-shelf RL algorithm, ours learns an implicit dynamics model. Not only is our method simpler (no additional RL algorithm required!), but also it achieves higher success rates than prior methods.
While our experiments only start to study the ability of contrastive-based methods to scale to high-dimensional observations, we conjecture that methods like LAEO may be particularly amenable to such problems because the method for learning the representations (contrastive learning) resembles prior representation learning methods (Mazoure et al., 2020; Nair et al., 2022). Scaling this method to very large offline datasets is an important direction for future work.
## 6 Acknowledgments
BE is supported by the Fannie and John Hertz Foundation and the NSF GRFP (DGE2140739).
|
2301.10786
|
Flavor Physics Constraints on Left-Right Symmetric Models with Universal
Seesaw
|
We study the phenomenological constraints on the parameter space in the
parity symmetric scenario of a class of left-right symmetric models in which
the fermion masses are generated through a universal seesaw mechanism. The
model, motivated by the axionless solution to the strong $\mathcal{CP}$
problem, has a simple Higgs sector consisting of left- and right-handed
doublets. The fermion masses are then generated through their mixing with heavy
vector-like fermions, which leads to flavor changing neutral currents arising
at tree-level and also introduces non-unitarity in the charged current
interactions. These new contributions lead to flavor and flavor universality
violating processes and forbidden decays, which are used to derive constraints
on the parameter space of the model. We also argue that although the model has
the potential to resolve flavor anomalies, it fails to do so in the case of
B-anomalies $R_{K^{(*)}}$ and anomalous magnetic moment of muon $(g-2)_\mu$.
|
Ritu Dcruz
|
2023-01-25T19:00:09Z
|
http://arxiv.org/abs/2301.10786v2
|
# Flavor Physics Constraints on Left-Right Symmetric Models with Universal Seesaw
###### Abstract
We study the phenomenological constraints on the parameter space in the parity symmetric scenario of a class of left-right symmetric models in which the fermion masses are generated through a universal seesaw mechanism. The model, motivated by the axionless solution to the strong \({\cal CP}\) problem, has a simple Higgs sector consisting of left- and right-handed doublets. The fermion masses are then generated through their mixing with heavy vector-like fermions, which leads to flavor changing neutral currents arising at tree-level and also introduces non-unitarity in the charged current interactions. These new contributions lead to flavor and flavor universality violating processes and forbidden decays, which are used to derive constraints on the parameter space of the model. We also argue that although the model has the potential to resolve flavor anomalies, it fails to do so in the case of B-anomalies \(R_{K^{(*)}}\) and anomalous magnetic moment of muon \((g-2)_{\mu}\).
###### Contents
* 1 Introduction
* 2 Model Description
* 3 Interactions among Physical Fields
* 3.1 Charged Fermions
* 3.1.1 Neutral Current
* 3.1.2 Charged Current
* 3.1.3 Higgs Current
* 3.2 Neutrinos
* 4 Constraints on Neutral Current Couplings
* 4.1 \(Z\) Decays
* 4.1.1 \(Z\to\ell^{+}_{i}\ell^{-}_{i}\)
* 4.1.2 \(Z\to\ell^{+}_{i}\ell^{-}_{j}\)
* 4.1.3 Lepton Flavour Universality Violation
* 4.2 3 Body Decay of Charged Leptons
* 4.3 Radiative Decays of Charged Leptons
* 4.4 Mass difference of Neutral Mesons
* 4.5 Charged Leptonic Decays of Mesons
* 4.6 Semi-Leptonic Meson Decay
* 5 Constraints on Charged Current Couplings
* 5.1 \(W_{L}\) Decay to Leptons
* 5.2 Radiative Decays of Charged Leptons
* 5.3 Lepton Flavour Universality Tests
* 5.4 Mass Difference of Neutral Mesons
* 6 Constraints on Higgs Couplings
* 7 Neutral Current B-anomalies
* 8 Anomalous Magnetic Moment of Muon
* 9 Conclusion
* A Neutral Current Interaction
* B Input Parameters for Meson Mixing
* B.1 \(K-\overline{K}\) Mixing
* B.2 \(D-\overline{D}\) Mixing
* B.3 \(B_{q}-\overline{B}_{q}\) Mixing
* C Form Factors for Meson Decay
* D Kaon Mixing Box Diagram Expressions
## 1 Introduction
The left-right symmetric models (LRSMs) are simple extensions of the standard model (SM), initially proposed as a solution to the parity asymmetry observed at low energy scales, involving the symmetry group \(SU(3)_{c}\times SU(2)_{L}\times SU(2)_{R}\times U(1)_{B-L}\). The LRSMs, based on the concept that the \(V-A\) structure of the weak interaction may be a low-energy phenomenon that vanishes at very high energy scales of the order of a few TeV, overcome this asymmetry by putting both left- and right-handed particles on equal footing, thereby restoring the symmetry [1; 2; 3]. Such models have several attractive features. Right-handed neutrinos arise naturally, generating light neutrino masses through the seesaw mechanism [4; 5; 6; 7; 8; 9; 10]. These models also give a physical interpretation of the hypercharge as a quantity arising from the \(B-L\) quantum number.
The class of LRSMs discussed in this paper is motivated by the axionless solution to the strong \(\mathcal{CP}\) problem [11], since parity is a good symmetry broken spontaneously [12]. The fermion masses in these models are induced through a universal seesaw mechanism [13; 14; 15] by introducing heavy vector-like fermionic (VLF) singlet partners. The scalar sector of the model is minimal, containing only two Higgs doublets. The light fermion masses depend quadratically on the Yukawa couplings (\(\mathcal{Y}_{i}\)), allowing the values of \(\mathcal{Y}_{i}\) required to explain the fermion mass hierarchy to lie in the range \(\mathcal{Y}_{i}=(10^{-3}-1)\), as opposed to \(\mathcal{Y}_{i}=(10^{-6}-1)\) in the SM or standard LRSMs. Another exciting feature of these LRSMs is the presence of flavor-changing neutral current (FCNC) interactions, which lead to flavor-violating processes at tree level due to direct couplings between VLFs and SM fermions. There can also be corrections to the SM charged current interactions. These lead to stringent constraints on the parameter space of the model, which are explored in great detail in this work. The model can also potentially resolve the recent flavor anomalies such as \((g-2)_{\mu}\), \(R_{K^{(*)}}\), \(R_{D^{(*)}}\), the CDF \(W\)-mass shift, and the Cabibbo anomaly. This model has also been studied recently in the context of low energy experimental signals [16], gravitational waves [17], neutrino oscillations [18], and in explaining the CKM unitarity problem and the CDF \(W\)-boson mass [19].
Several measurements of semi-leptonic decays of B mesons have shown significant deviations from their SM predicted values [20; 21]. The most notable ones are the lepton flavor universality (LFU) violation observed in the neutral current transitions related to the ratio \(R_{K^{(*)}}=\text{BR}(B\to K^{(*)}\mu\mu)/\text{BR}(B\to K^{(*)}ee)\) and in charged current mediated \(R_{D^{(*)}}=\text{BR}(B\to D^{(*)}\tau\nu_{\tau})/\text{BR}(B\to D^{(*)}\ell \nu_{\ell})\), \(\ell=e,\ \mu\). These are theoretically clean observables since the hadronic uncertainties cancel out in the ratios, making them extremely sensitive to new physics probes. Deviations in the measurements of \(R_{D^{(*)}}\) and \(R_{K^{(*)}}\)[22; 23; 24; 25] from the SM are collectively referred to as B-anomalies. A solution to \(R_{D^{(*)}}\) anomaly using
this model has been discussed in [26]. The most precise measurement of \(R_{K}\) has resulted in a combined 3.1 \(\sigma\) discrepancy in \(R_{K^{(*)}}\) and related processes. As a natural consequence of parity invariance, a right-handed neutral gauge boson exists that mixes with the SM \(Z\) boson giving rise to two neutral gauge boson eigenstates, \(Z_{1}\) and \(Z_{2}\). Both these gauge bosons can mediate tree-level FCNC processes that contribute to \(R_{K^{(*)}}\). Hence, the LRSM model can potentially resolve the neutral current B-anomalies as well.
Another major hint towards physics beyond the SM comes from the long-standing discrepancy between the experiment and theory in the anomalous magnetic moment (AMM) of muon, \(a_{\mu}\). The most recent measurement of \(a_{\mu}\) at Fermilab National Accelerator Laboratory (FNAL) reports a \(4.2\,\sigma\) deviation from the SM predicted value of AMM. The model explored in this work can provide chirally enhanced one-loop corrections to AMM of muon mediated by neutral scalar \((h,\,H)\) or gauge \((Z_{1},\,Z_{2})\) fields. The possibility of resolving the anomaly in muon AMM is explored briefly in this paper.
In this work, we look at the flavor physics constraints on the model: we explore the allowed parameter space from neutral current gauge interactions, investigate the most stringent constraints for charged current and Higgs interactions, and address the limitations of the model in resolving \(R_{K^{(*)}}\) discrepancies as well as the AMM of muon. The rest of the paper is organized as follows. We describe the details of the LRSM model [26], giving the particle content, Higgs potential and gauge boson mass matrix diagonalization in Sec. 2 and the fermion mass matrix diagonalizations and their interaction Lagrangian in the physical basis in Sec. 3. In Secs. 4, 5, and 6, we list the different constraints on the model parameters obtained from neutral and charged current gauge interactions, and scalar interactions, respectively, in the parity symmetric scenario of the model. In Sec. 7, we explore the solution to neutral current B-anomalies arising at tree-level mediated by the neutral gauge bosons, and at one-loop level mediated by the left-handed scalar doublet. Sec. 8 examines the VL-lepton mass-enhanced corrections to the AMM of muon at one-loop level mediated by the neutral gauge as well as scalar fields, and we conclude with Sec. 9.
## 2 Model Description
The particle spectrum of the LRSM with universal seesaw is composed of the usual SM fermions, right-handed neutrinos, and four sets of vector-like singlet fermions, denoted as \((U_{a},D_{a},E_{a},N_{a})\), where, the index \(a\) runs from 1 to 3 for the three generations of fermions. The SM fermions, along with the right-handed neutrinos, form left- or right-handed doublets, assigned to the gauge group \(SU(3)_{c}\times SU(2)_{L}\times SU(2)_{R}\times U(1)_{B-L}\) as follows (\(i=1-3\) is the family index):
\[\begin{split}\mathcal{Q}_{L,i}\left(3,2,1,+\frac{1}{3}\right)& =\begin{pmatrix}u_{L}\\ d_{L}\end{pmatrix}_{i},&\mathcal{Q}_{R,i}\left(3,1,2,+\frac{1}{3}\right)& =\begin{pmatrix}u_{R}\\ d_{R}\end{pmatrix}_{i},\\ \psi_{L,i}\left(1,2,1,-1\right)&=\begin{pmatrix}\nu_{L}\\ e_{L}\end{pmatrix}_{i},&\psi_{R,i}\left(1,1,2,-1\right)&=\begin{pmatrix}\nu_{ R}\\ e_{R}\end{pmatrix}_{i}.\end{split} \tag{1}\]
The Higgs sector is comprised simply of left- and right-handed doublets:
\[\chi_{L}\left(1,2,1,+1\right)=\begin{pmatrix}\chi_{L}^{+}\\ \chi_{L}^{0}\end{pmatrix},\ \ \chi_{R}\left(1,1,2,+1\right)=\begin{pmatrix}\chi_{R}^{+}\\ \chi_{R}^{0}\end{pmatrix}. \tag{2}\]
\(\sigma_{L}=Re(\chi_{L}^{0})/\sqrt{2}\) and \(\sigma_{R}=Re(\chi_{R}^{0})/\sqrt{2}\) are the neutral scalar fields which mix to give the SM-like Higgs boson \(h\) (125 GeV) and a heavy \(H\). In the limit of small mixing, the SM Higgs is identified as \(\sigma_{L}\). The general Higgs potential is given by
\[V=-(\mu_{L}^{2}\chi_{L}^{\dagger}\chi_{L}+\mu_{R}^{2}\chi_{R}^{\dagger}\chi_{R} )+\frac{\lambda_{1L}}{2}(\chi_{L}^{\dagger}\chi_{L})^{2}+\frac{\lambda_{1R}}{ 2}(\chi_{R}^{\dagger}\chi_{R})^{2}+\lambda_{2}(\chi_{L}^{\dagger}\chi_{L})( \chi_{R}^{\dagger}\chi_{R}). \tag{3}\]
In the parity symmetric limit, \(\lambda_{1L}=\lambda_{1R}\), but \(\mu_{L}\) can be allowed to be different from \(\mu_{R}\) since parity can be softly broken. The vacuum expectation values of the neutral fields for which the potential Eq. (3) has a minimum are denoted by
\[\left\langle\chi_{L}^{0}\right\rangle=\kappa_{L},\ \ \left\langle\chi_{R}^{0} \right\rangle=\kappa_{R}, \tag{4}\]
with \(\kappa_{L}\simeq 174\) GeV. The remaining fields are "eaten up" by \(W_{L,R}^{\pm}\) and \(Z_{L,R}\) bosons upon symmetry breaking. The physical Higgs spectrum obtained from the diagonalization of \(\sigma_{L}-\sigma_{R}\) mixing matrix,
\[\mathcal{M}_{\sigma_{L,R}}^{2}=\begin{bmatrix}2\lambda_{1L}\kappa_{L}^{2}&2\lambda_{2}\kappa_{L}\kappa_{R}\\ 2\lambda_{2}\kappa_{L}\kappa_{R}&2\lambda_{1R}\kappa_{R}^{2}\end{bmatrix}, \tag{5}\]
are as follows:
\[\begin{array}{cc}h=\cos\zeta\sigma_{L}-\sin\zeta\sigma_{R},&H=\sin\zeta \sigma_{L}+\cos\zeta\sigma_{R},\\ M_{h}^{2}\simeq 2\lambda_{1L}\left(1-\frac{\lambda_{2}^{2}}{\lambda_{1L} \lambda_{1R}}\right)\kappa_{L}^{2},&M_{H}^{2}=2\lambda_{1R}\kappa_{R}^{2}, \end{array} \tag{6}\]
with the mixing angle \(\zeta\) given by
\[\tan 2\zeta=\frac{2\lambda_{2}\kappa_{L}\kappa_{R}}{(\lambda_{1R}\kappa_{R}^{2 }-\lambda_{1L}\kappa_{L}^{2})}. \tag{7}\]
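As a quick numerical cross-check of Eqs. (5)-(7), the short NumPy sketch below diagonalizes the \(\sigma_{L}-\sigma_{R}\) mass matrix; the quartic couplings and \(\kappa_{R}\) chosen here are arbitrary illustrative values, not fitted parameters of the model.

```python
import numpy as np

# illustrative inputs (not fitted values)
lam1L, lam1R, lam2 = 0.13, 0.10, 0.05
kL, kR = 174.0, 10000.0   # GeV

M2 = np.array([[2 * lam1L * kL**2, 2 * lam2 * kL * kR],
               [2 * lam2 * kL * kR, 2 * lam1R * kR**2]])
print("M_h, M_H [GeV]:", np.sqrt(np.linalg.eigvalsh(M2)))

# mixing angle from Eq. (7)
zeta = 0.5 * np.arctan2(2 * lam2 * kL * kR, lam1R * kR**2 - lam1L * kL**2)
print("zeta [rad]:", zeta)
```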
To generate the fermion masses in the absence of the Higgs bidoublet \((1,2,2,0)\) seen in the standard LRSMs, vector-like fermions (VLFs) are introduced. The gauge quantum numbers of the VLFs are
\[U_{a}\left(3,1,1,+\frac{4}{3}\right),\ \ D_{a}\left(3,1,1,-\frac{2}{3}\right), \ \ E_{a}\left(1,1,1,-2\right),\ \ N_{a}\left(1,1,1,0\right), \tag{8}\]
The electric charge is given by
\[\begin{split} Q=& T_{3L}+T_{3R}+\frac{B-L}{2}\ \text{with},\\ \frac{Y}{2}=& T_{3R}+\frac{B-L}{2},\end{split} \tag{9}\]
thereby giving the hypercharge a definition in terms of the \(SU(2)_{R}\) and \(U(1)_{B-L}\) quantum numbers. Defining the covariant derivative as
\[D_{\mu}=\partial_{\mu}+i\frac{g_{L,R}}{2}(\vec{\tau}.\overrightarrow{W}_{L,R \mu})+ig_{B}\frac{B-L}{2}B_{\mu}, \tag{10}\]
the interaction of gauge bosons with Higgs field can be derived from the kinetic term of the Lagrangian
\[\mathcal{L}^{\rm KE}_{\rm Higgs}=(D_{\mu}\chi_{L})^{\dagger}(D_{\mu}\chi_{L})+(D_ {\mu}\chi_{R})^{\dagger}(D_{\mu}\chi_{R}). \tag{11}\]
It can be easily verified that the charged gauge bosons do not mix at tree level, and their masses are
\[M^{2}_{W^{\pm}_{L(R)}}=\frac{1}{2}g^{2}_{L(R)}\kappa^{2}_{L(R)}. \tag{12}\]
Of the neutral gauge bosons, the photon field \(A_{\mu}\) remains massless while the two orthogonal fields \(Z_{L}\) and \(Z_{R}\) mix. In the limit of small \(\kappa_{L}\), these fields are related to the gauge eigenstates by:
\[\begin{split} A^{\mu}&=\frac{g_{L}g_{R}B^{\mu}+g_{B}g_{R}W^{\mu}_{3L}+g_{B}g_{L}W^{\mu}_{3R}}{\sqrt{g^{2}_{B}(g^{2}_{L}+g^{2}_{R})+g^{2}_{L}g^{2}_{R}}},\\ Z^{\mu}_{R}&=\frac{g_{B}B^{\mu}-g_{R}W^{\mu}_{3R}}{\sqrt{g^{2}_{R}+g^{2}_{B}}},\\ Z^{\mu}_{L}&=\frac{g_{B}g_{R}B^{\mu}-g_{L}g_{R}\left(1+\frac{g^{2}_{B}}{g^{2}_{R}}\right)W^{\mu}_{3L}+g^{2}_{B}W^{\mu}_{3R}}{\sqrt{g^{2}_{B}+g^{2}_{R}}\sqrt{g^{2}_{B}+g^{2}_{L}+\frac{g^{2}_{B}g^{2}_{L}}{g^{2}_{R}}}}.\end{split} \tag{13}\]
The hypercharge relation implies \(g^{-2}_{Y}=g^{-2}_{R}+g^{-2}_{B}\), which can be used to eliminate \(g_{B}\) in terms of \(g_{Y}\) resulting in the \(Z_{L}-Z_{R}\) mixing matrix
\[\mathcal{M}^{2}_{Z_{L}-Z_{R}}=\frac{1}{2}\begin{pmatrix}(g^{2}_{L}+g^{2}_{Y})\kappa^{2}_{L}&g^{2}_{Y}\sqrt{\frac{g^{2}_{L}+g^{2}_{Y}}{g^{2}_{R}-g^{2}_{Y}}}\kappa^{2}_{L}\\ g^{2}_{Y}\sqrt{\frac{g^{2}_{L}+g^{2}_{Y}}{g^{2}_{R}-g^{2}_{Y}}}\kappa^{2}_{L}&\frac{g^{4}_{R}\kappa^{2}_{R}+g^{4}_{Y}\kappa^{2}_{L}}{g^{2}_{R}-g^{2}_{Y}}\end{pmatrix}, \tag{14}\]
from which the physical states and masses can be obtained as:
\[\begin{split}& Z_{1}=\cos\xi Z_{L}-\sin\xi Z_{R},\qquad Z_{2}= \sin\xi Z_{L}+\cos\xi Z_{R},\\ & M^{2}_{Z_{1}}\simeq\frac{1}{2}(g^{2}_{Y}+g^{2}_{L})\kappa^{2}_{L},\qquad M^{2}_{Z_{2}}\simeq\frac{g^{4}_{R}}{2(g^{2}_{R}-g^{2}_{Y})}\kappa^{2} _{R}+\frac{g^{4}_{Y}}{2(g^{2}_{R}-g^{2}_{Y})}\kappa^{2}_{L},\end{split} \tag{15}\]
and the mixing angle is given by \(\xi\simeq\frac{g^{2}_{Y}}{g^{4}_{R}}\frac{\kappa^{2}_{L}}{\kappa^{2}_{R}}\sqrt {(g^{2}_{L}+g^{2}_{Y})(g^{2}_{R}-g^{2}_{Y})}\) or more accurately,
\[\tan(-2\xi)=\frac{2g^{2}_{Y}\sqrt{(g^{2}_{L}+g^{2}_{Y})(g^{2}_{R}-g^{2}_{Y})} \kappa^{2}_{L}}{g^{4}_{R}\kappa^{2}_{R}+g^{4}_{Y}\kappa^{2}_{L}-(g^{2}_{L}+g^{ 2}_{R})(g^{2}_{R}-g^{2}_{Y})\kappa^{2}_{L}}. \tag{16}\]
The SM \(Z\) boson of mass 91.18 GeV is identified to be \(Z_{1}\) or, in the limit of small mixing angle, \(Z_{L}\). \(Z_{1}\) and \(Z\) will be used interchangeably, henceforth.
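A similar numerical sketch of the neutral gauge boson sector, Eqs. (14)-(16), in the parity-symmetric case \(g_{R}=g_{L}\) is given below; the coupling values and \(\kappa_{R}\) are illustrative choices rather than the benchmark points used for the constraints later.

```python
import numpy as np

gL = 0.65                  # SU(2)_L coupling (illustrative)
gY = 0.36                  # hypercharge coupling (illustrative)
gR = gL                    # parity-symmetric choice
kL, kR = 174.0, 12000.0    # GeV

m11 = (gL**2 + gY**2) * kL**2
m12 = gY**2 * np.sqrt((gL**2 + gY**2) / (gR**2 - gY**2)) * kL**2
m22 = (gR**4 * kR**2 + gY**4 * kL**2) / (gR**2 - gY**2)
M2 = 0.5 * np.array([[m11, m12], [m12, m22]])
print("M_Z1, M_Z2 [GeV]:", np.sqrt(np.linalg.eigvalsh(M2)))

# leading small-mixing expression for xi quoted above Eq. (16)
xi = (gY**2 / gR**4) * (kL**2 / kR**2) * np.sqrt((gL**2 + gY**2) * (gR**2 - gY**2))
print("Z_L-Z_R mixing angle xi:", xi)
```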
## 3 Interactions among Physical Fields
In this section, the diagonalization of fermion mass matrices is studied and the fermionic interactions with the bosonic fields are explored.
### 3.1 Charged Fermions
The Yukawa interactions of the charged fermions in the flavor basis are given by
\[\begin{split}\mathcal{L}_{\text{Yuk}}&=\mathcal{Y}_{U} \overline{\mathcal{Q}}_{L}\widetilde{\chi}_{L}U_{R}+\mathcal{Y}_{U}^{\prime} \overline{\mathcal{Q}}_{R}\widetilde{\chi}_{R}U_{L}+M_{U}\overline{U}_{L}U_{R} \\ &+\mathcal{Y}_{D}\overline{\mathcal{Q}}_{L}\chi_{L}D_{R}+\mathcal{ Y}_{D}^{\prime}\overline{\mathcal{Q}}_{R}\chi_{R}D_{L}+M_{D}\overline{D}_{L}D_{R} \\ &+\mathcal{Y}_{E}\overline{\psi}_{L}\chi_{L}E_{R}+\mathcal{Y}_{E}^ {\prime}\overline{\psi}_{R}\chi_{R}E_{L}+M_{E}\overline{E}_{L}E_{R}+\text{h.c. }\end{split} \tag{1}\]
with \(\widetilde{\chi}_{L,R}=i\tau_{2}\chi_{L,R}^{*}\). Under parity symmetry, \(\mathcal{Y}=\mathcal{Y}^{\prime}\) and \(M=M^{\dagger}\). The above Lagrangian gives \(6\times 6\) mass matrices for up-type quarks \((u,U)\), down-type quarks \((d,D)\) and charged leptons \((e,E)\) which are written in the form:
\[\mathcal{M}_{U,D,E}=\begin{pmatrix}0&\mathcal{Y}_{U,D,E}\kappa_{L}\\ \mathcal{Y}_{U,D,E}^{\prime\dagger}\kappa_{R}&M_{U,D,E}\end{pmatrix}. \tag{2}\]
Matrices of the form \(\mathcal{M}\) can be block-diagonalized by a bi-unitary transform with two unitary matrices parametrized by \(\rho_{L}\) and \(\rho_{R}\).
\[\mathcal{U}_{X}=\begin{pmatrix}\mathds{1}-\frac{1}{2}\rho_{X}^{\dagger}\rho_{ X}&\rho_{X}^{\dagger}\\ -\rho_{X}&\mathds{1}-\frac{1}{2}\rho_{X}\rho_{X}^{\dagger}\end{pmatrix},\ X=\{L,R\}. \tag{3}\]
\(\rho_{L}\) is necessarily small while we assume \(\rho_{R}\ll 1\) as well, to simplify the analysis.
\[\mathcal{M}_{\text{diag}}=\begin{pmatrix}\mathds{1}-\frac{1}{2}\rho_{L}^{ \dagger}\rho_{L}&-\rho_{L}^{\dagger}\\ \rho_{L}&\mathds{1}-\frac{1}{2}\rho_{L}\rho_{L}^{\dagger}\end{pmatrix} \begin{pmatrix}0&\mathcal{Y}\kappa_{L}\\ \mathcal{Y}^{\prime\dagger}\kappa_{R}&M\end{pmatrix}\begin{pmatrix}\mathds{1} -\frac{1}{2}\rho_{R}^{\dagger}\rho_{R}&\rho_{R}^{\dagger}\\ -\rho_{R}&\mathds{1}-\frac{1}{2}\rho_{R}\rho_{R}^{\dagger}\end{pmatrix}, \tag{4}\]
where the matrices \(\mathcal{U}_{L,R}\) are unitary up to \(\mathcal{O}(\rho^{2})\), and the parameters are related to the masses and Yukawa interactions by
\[\begin{split}\rho_{L}&=\kappa_{L}M^{-1\dagger}\mathcal{Y}^{ \dagger},\\ \rho_{R}&=\kappa_{R}M^{-1}\mathcal{Y}^{\prime\dagger},\end{split} \tag{5}\]
while the mass eigenvalues are
\[\hat{m}=-\kappa_{L}\kappa_{R}\mathcal{Y}M^{-1}\mathcal{Y}^{\prime\dagger}, \ \ \text{and}\ \ \ \hat{M}=M+\frac{1}{2}(\kappa_{R}^{2}\mathcal{Y}^{\prime\dagger}\mathcal{Y}^{ \prime}M^{-1\dagger}+\kappa_{L}^{2}M^{-1\dagger}\mathcal{Y}^{\dagger}\mathcal{ Y}). \tag{6}\]
\(M\) is assumed to be diagonal while \(\hat{m}\) needs to be diagonalized by a subsequent bi-unitary transform such that \(m_{f}=V_{L_{f}}\hat{m}V_{R_{f}}^{\dagger}\). Now, it is possible to write the interactions of charged fermions with gauge and scalar bosons in the mass basis. In the following sections, \(f(\ell)\) stands for mass eigenstate of charged fermions (leptons), and \(F\), \(U\), \(D\), and \(E\) represent charged VLF.
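For intuition, a one-generation numerical sketch of the seesaw relations in Eqs. (5)-(6) follows, together with an exact cross-check via the singular values of the \(2\times 2\) matrix of Eq. (2); all inputs are toy numbers chosen only for illustration.

```python
import numpy as np

# one-generation toy inputs (parity-symmetric, Y = Y')
Y = Yp = 0.8
kL, kR, M = 174.0, 1.0e4, 1.0e5   # GeV

rho_L = kL * Y / M                 # Eq. (5)
rho_R = kR * Yp / M
m_light = -kL * kR * Y * Yp / M    # Eq. (6)
M_heavy = M + 0.5 * (kR**2 * Yp**2 + kL**2 * Y**2) / M
print("rho_L, rho_R:", rho_L, rho_R)
print("seesaw masses [GeV]:", m_light, M_heavy)

# exact singular values of the 2x2 mass matrix of Eq. (2)
Mmat = np.array([[0.0, Y * kL], [Yp * kR, M]])
print("exact singular values [GeV]:", np.linalg.svd(Mmat, compute_uv=False))
```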
#### 3.1.1 Neutral Current
The tree-level interactions of charged fermions in their mass basis with the neutral gauge bosons are:
\[-g_{Z_{L}}^{-1}\mathcal{L}_{Z_{L}} =\overline{f}_{L}\gamma^{\mu}Z_{L_{\mu}}\left(A_{L}-(A_{L}-B_{L})V_ {L_{f}}\rho_{L_{F}}^{\dagger}\rho_{L_{F}}V_{L_{f}}^{\dagger}\right)f_{L} \tag{11}\] \[+\overline{f}_{R}\gamma^{\mu}Z_{L_{\mu}}\left(A_{L}^{\prime} \right)f_{R}\] \[+\overline{f}_{L}\gamma^{\mu}Z_{L_{\mu}}\left((A_{L}-B_{L})V_{L_{ f}}\rho_{L_{F}}^{\dagger}\right)F_{L}\] \[+\overline{F}_{L}\gamma^{\mu}Z_{L_{\mu}}\left((A_{L}-B_{L})\rho_ {L_{F}}V_{L_{f}}^{\dagger}\right)f_{L}\] \[+\overline{F}_{L,R}\gamma^{\mu}Z_{L_{\mu}}\left(B_{L}\right)F_{L, R},\] \[-g_{Z_{R}}^{-1}\mathcal{L}_{Z_{R}} =\overline{f}_{L}\gamma^{\mu}Z_{R_{\mu}}\left(A_{R}-(A_{R}-B_{R}) V_{L_{f}}\rho_{L_{F}}^{\dagger}\rho_{L_{F}}V_{L_{f}}^{\dagger}\right)f_{L}\] \[+\overline{f}_{L}\gamma^{\mu}Z_{R_{\mu}}\left((A_{R}-B_{R})V_{L_{ f}}\rho_{L_{F}}^{\dagger}\right)F_{L}\] \[+\overline{F}_{L}\gamma^{\mu}Z_{R_{\mu}}\left((A_{R}-B_{R})\rho_ {L_{F}}V_{L_{f}}^{\dagger}\right)f_{L}\] (12) \[+\left\{L\to R\text{ with }A_{R}\to A_{R}^{\prime}\right\}\] \[+\overline{F}_{L,R}\gamma^{\mu}Z_{R_{\mu}}\left(B_{R}\right)F_{L, R},\]
where,
\[g_{Z_{L,R}}=\frac{g_{L,R}^{2}}{\sqrt{g_{L,R}^{2}\pm g_{Y}^{2}}}\quad,\qquad \qquad\quad B_{L,R}=-\frac{g_{Y}^{2}}{g_{L,R}^{2}}\frac{Y_{F}}{2}, \tag{13}\]
\[A_{L,R}=T_{3L,3R}-\frac{g_{Y}^{2}}{g_{L,R}^{2}}\frac{Y_{f_{L,R}}}{2},\qquad \qquad A_{L,R}^{\prime}=-\frac{g_{Y}^{2}}{g_{L,R}^{2}}\frac{Y_{f_{R,L}}}{2}.\]
\(Y_{f_{L,R}}\) are the hypercharges of SM fermions and \(Y_{F}\) is the hypercharge of the VLF. Under parity symmetry, \(g_{R}=g_{L}\), \(V_{R_{f}}=V_{L_{f}}\) and \(\rho_{R}=\frac{\kappa_{R}}{\kappa_{L}}\rho_{L}\). The SM interaction of charged fermions to \(Z\) boson can be obtained from \(\mathcal{L}_{Z_{L}}\) as the VLF decouples from SM fermions. The above expressions can be converted into the mass basis of the neutral gauge bosons using the relations in Eq. (15), as shown in Eqs. (11) and (12) with the couplings given in Appendix. A.
#### 3.1.2 Charged Current
The charged current interactions of quarks are given by
\[-\frac{\sqrt{2}}{g_{L}}\mathcal{L}_{W_{X}} =\overline{u}_{X}\gamma^{\mu}W_{X\mu}^{+}\Big{(}V_{X_{u}}V_{X_{d}}^{\dagger}-\frac{1}{2}(V_{X_{u}}\rho_{X_{D}}^{\dagger}\rho_{X_{D}}V_{X_{d}}^{\dagger} \tag{14}\] \[+V_{X_{u}}\rho_{X_{U}}^{\dagger}\rho_{X_{U}}V_{X_{d}}^{\dagger}\Big{)}\Big{)}d_{X}+\overline{u}_{X}\gamma^{\mu}W_{X\mu}^{+}V_{X_{u}}\rho_{X_{D}}^{\dagger}D_{X}\] \[+\overline{U}_{X}\gamma^{\mu}W_{X\mu}^{+}\rho_{X_{U}}V_{X_{d}}^{\dagger}d_{X}+\overline{U}_{X}\gamma^{\mu}W_{X\mu}^{+}\rho_{X_{U}}\rho_{X_{D}}^{\dagger}D_{X}+\text{h.c.},\]
with \(X=\{L,R\}\) for \(\{W_{L},\,W_{R}\}\). Since the charged current interactions of leptons involve neutrinos, these are discussed following the diagonalization of the neutrino mass matrix in Sec. 3.2.
#### 3.1.3 Higgs Current
The interactions of the charged fermions with the scalar fields in the flavor basis are given by
\[\begin{split}\mathcal{L}_{\sigma_{L}}&=\overline{f}_{L} \frac{\sigma_{L}}{\sqrt{2}}\left(-V_{L_{f}}\mathcal{Y}_{F}\rho_{R_{F}}V_{R_{f}}^ {\dagger}+\frac{1}{2}V_{L_{f}}\rho_{L_{F}}^{\dagger}\rho_{L_{F}}\mathcal{Y}_{F }\rho_{R_{F}}V_{Rf}^{\dagger}\right)f_{R}\\ &+\overline{f}_{L}\frac{\sigma_{L}}{\sqrt{2}}\Big{(}V_{L_{f}} \mathcal{Y}_{F}-\frac{1}{2}(V_{L_{f}}\mathcal{Y}_{F}\rho_{R_{F}}\rho_{R_{F}}^{ \dagger}+V_{L_{f}}\rho_{L_{F}}^{\dagger}\rho_{L_{F}}\mathcal{Y}_{F})\Big{)}F _{R}\\ &+\overline{F}_{L}\frac{\sigma_{L}}{\sqrt{2}}\left(-\rho_{L_{F}} \mathcal{Y}_{F}\rho_{R_{F}}V_{R_{f}}^{\dagger}\right)f_{R}+\overline{F}_{L} \frac{\sigma_{L}}{\sqrt{2}}\left(\rho_{L_{F}}\mathcal{Y}_{F}-\frac{1}{2}\rho_ {L_{F}}\mathcal{Y}_{F}\rho_{R_{F}}\rho_{R_{F}}^{\dagger}\right)F_{R}+\text{h. c.}.\end{split} \tag{21}\]
The \(\sigma_{R}-f\) interaction can be obtained with the transformation \(\mathcal{L}_{\sigma_{R}}=\mathcal{L}_{\sigma_{L}}(L\leftrightarrow R, \mathcal{Y}\rightarrow\mathcal{Y}^{\prime})\). Since \(\sigma_{L}\) and \(\sigma_{R}\) mix, the interaction in the mass basis can be obtained by
\[\begin{split}\mathcal{L}_{h}&=\cos\zeta\mathcal{L }_{\sigma_{L}}-\sin\zeta\mathcal{L}_{\sigma_{R}},\\ \mathcal{L}_{H}&=\sin\zeta\mathcal{L}_{\sigma_{L}}+ \cos\zeta\mathcal{L}_{\sigma_{R}}.\end{split} \tag{22}\]
Eq. (21) reduces to the SM interaction in the limit where \(\zeta\) and the NP contributions tend to zero. This may be easily deduced from the interaction Lagrangian
\[\begin{split}\mathcal{L}_{h}&\supset\overline{f}_{ L}\cos\zeta\frac{h}{\sqrt{2}}\left(\frac{m_{f}}{\kappa_{L}}-\frac{1}{2}V_{Lf} \rho_{LF}^{\dagger}\rho_{LF}V_{Lf}^{\dagger}\frac{m_{f}}{\kappa_{L}}\right)f _{R}\\ &-\overline{f}_{R}\sin\zeta\frac{h}{\sqrt{2}}\left(\frac{m_{f}^{ \dagger}}{\kappa_{R}}-\frac{1}{2}V_{Rf}\rho_{RF}^{\dagger}\rho_{RF}V_{Rf}^{ \dagger}\frac{m_{f}^{\dagger}}{\kappa_{R}}\right)f_{L}+\text{h.c.}.\end{split} \tag{23}\]
For completeness, the corresponding interaction of heavy Higgs is
\[\begin{split}\mathcal{L}_{H}&\supset\overline{f}_{ L}\sin\zeta\frac{H}{\sqrt{2}}\left(\frac{m_{f}}{\kappa_{L}}-\frac{1}{2}V_{Lf} \rho_{LF}^{\dagger}\rho_{LF}V_{Lf}^{\dagger}\frac{m_{f}}{\kappa_{L}}\right)f _{R}\\ &+\overline{f}_{R}\cos\zeta\frac{H}{\sqrt{2}}\left(\frac{m_{f}^{ \dagger}}{\kappa_{R}}-\frac{1}{2}V_{Rf}\rho_{RF}^{\dagger}\rho_{RF}V_{Rf}^{ \dagger}\frac{m_{f}^{\dagger}}{\kappa_{R}}\right)f_{L}+\text{h.c.}.\end{split} \tag{24}\]
### 3.2 Neutrinos
The Yukawa Lagrangian for neutrinos is
\[\begin{split}\mathcal{L}_{\text{Yuk}}^{\nu}=& Y_{\nu} \overline{\psi}_{L}\widetilde{\chi}_{L}N_{R}+\widetilde{Y}_{\nu}\overline{ \psi}_{L}\widetilde{\chi}_{L}(N_{L})^{c}_{R}+Y_{\nu}^{\prime}\overline{\psi}_{ R}\widetilde{\chi}_{R}N_{L}+\widetilde{Y}_{\nu}^{\prime}\overline{\psi}_{R} \widetilde{\chi}_{R}(N_{R})^{c}_{L}\\ &+M_{N}\overline{N}_{L}N_{R}+\mu_{L}^{\prime}N_{L}^{T}CN_{L}+\mu_ {R}^{\prime}N_{R}^{T}CN_{R}+\text{h.c.}.\end{split} \tag{25}\]
We assume both Dirac (\(M_{N}\)) and Majorana (\(\mu_{L}^{\prime}\) and \(\mu_{R}^{\prime}\)) mass terms for vector-like neutrinos. Under parity symmetry, \(N_{L}\leftrightarrow N_{R}\), \(\psi_{L}\leftrightarrow\psi_{R}\), and \(\chi_{L}\leftrightarrow\chi_{R}\), which makes \(Y_{\nu}=Y_{\nu}^{\prime}\), \(\widetilde{Y}_{\nu}=\widetilde{Y}_{\nu}^{\prime}\), \(\mu_{L}^{\prime}=\mu_{R}^{\prime}\) and \(M_{N}=M_{N}^{\dagger}\). We also assume that \(\mu_{L}^{\prime}\simeq\mu_{R}^{\prime}>M_{N}>\kappa_{R}>\kappa_{L}\), giving rise to light sterile neutrinos in the sub-MeV or eV range. The heavy singlet neutrino mass matrix can be block diagonalized to a physical mass basis \((N_{1},N_{2})\). Then, the neutrino mass matrix sandwiched between \((\nu^{T},\nu^{cT},N_{1}^{T},N_{2}^{T})C\) and \((\nu,\nu^{c},N_{1},N_{2})^{T}\) is
\[\mathcal{M}_{N}=\begin{pmatrix}0&0&\widetilde{Y}^{*}\kappa_{L}&Y^{*}\kappa_{L} \\ 0&0&Y^{\prime}\kappa_{R}&\widetilde{Y}^{\prime}\kappa_{R}\\ \widetilde{Y}^{\dagger}\kappa_{L}&Y^{\prime T}\kappa_{R}&\mu_{L}&0\\ Y^{\dagger}\kappa_{L}&\widetilde{Y}^{\prime T}\kappa_{R}&0&\mu_{R}^{*}\\ \end{pmatrix}, \tag{26}\]
where all the fields are taken to be left-handed, and \(\mu_{R}^{\dagger}=\mu_{R}^{*}\). The mass matrix can be reduced to a \(2\times 2\) form assuming the mass hierarchy mentioned above, and it being a complex symmetric matrix, can be block diagonalized with a unitary transformation:
\[\begin{pmatrix}\mathds{1}-\frac{1}{2}\rho^{\dagger}\rho&\rho^{\dagger}\\ -\rho&\mathds{1}-\frac{1}{2}\rho\rho^{\dagger}\end{pmatrix}\begin{pmatrix}0& \Upsilon\\ \Upsilon^{T}&M\end{pmatrix}\begin{pmatrix}\mathds{1}-\frac{1}{2}\rho^{T}\rho^ {*}&-\rho^{T}\\ \rho^{*}&\mathds{1}-\frac{1}{2}\rho^{*}\rho^{T}\end{pmatrix}. \tag{23}\]
Here \(M\) is block diagonal and \(\Upsilon\) is a \(6\times 6\) matrix containing both \(\kappa_{L}\) and \(\kappa_{R}\) couplings. Note that the parameter \(\rho\) is different from the ones in Sec. 3.1.
\[\rho=-(\Upsilon M^{-1})^{\dagger}, \tag{24}\]
and
\[\hat{\mathbf{m}}=-2\Upsilon M^{-1}\Upsilon^{T},\ \ \hat{\mathbf{M}}=M+\frac{1}{2} \left(\Upsilon^{T}\Upsilon^{*}M^{-1\dagger}+M^{-1\dagger}\Upsilon^{\dagger} \Upsilon\right), \tag{25}\]
where, \(\hat{\mathbf{m}}\) and \(\hat{\mathbf{M}}\) are \(6\times 6\) matrices. \(\hat{\mathbf{m}}\), which represents the mixing between the doublet neutrinos, may be written as
\[\begin{pmatrix}M_{L}&M_{D}\\ M_{D}^{T}&M_{R}\end{pmatrix}, \tag{26}\]
with
\[\begin{split} M_{L}&=-2\kappa_{L}^{2}\widetilde{Y}^{*}\mu_{L} ^{-1}\widetilde{Y}^{\dagger}-2\kappa_{L}^{2}Y^{*}\mu_{R}^{*-1}Y^{\dagger},\\ M_{D}&=-2\kappa_{L}\kappa_{R}\widetilde{Y}^{*}\mu_{L}^{-1}Y^{ \prime T}-2\kappa_{L}\kappa_{R}Y^{*}\mu_{R}^{*-1}\widetilde{Y}^{\prime T},\\ M_{D}^{T}&=-2\kappa_{L}\kappa_{R}Y^{\prime}\mu_{L}^{-1} \widetilde{Y}^{\dagger}-2\kappa_{L}\kappa_{R}\widetilde{Y}^{\prime}\mu_{R}^{ *-1}Y^{\dagger},\\ M_{R}&=-2\kappa_{R}^{2}Y^{\prime}\mu_{L}^{-1}Y^{\prime T}-2\kappa_{R}^{2} \widetilde{Y}^{\prime}\mu_{R}^{*-1}\widetilde{Y}^{\prime T},\end{split} \tag{27}\]
and can be diagonalized again by the transformation:
\[\begin{pmatrix}\mathds{1}-\frac{1}{2}\delta^{\dagger}\delta&\delta^{ \dagger}\\ -\delta&\mathds{1}-\frac{1}{2}\delta\delta^{\dagger}\end{pmatrix}\begin{pmatrix} M_{L}&M_{D}\\ M_{D}^{T}&M_{R}\end{pmatrix}\begin{pmatrix}\mathds{1}-\frac{1}{2}\delta^{T} \delta^{*}&-\delta^{T}\\ \delta^{*}&\mathds{1}-\frac{1}{2}\delta^{*}\delta^{T}\end{pmatrix}. \tag{28}\]
Here, the mixing parameter is given by
\[\delta=-(M_{D}M_{R}^{-1})^{\dagger}, \tag{29}\]
and the masses of the light neutrinos are
\[\begin{split} m_{\nu_{L}}&=M_{L}-M_{D}M_{R}^{-1}M_{D}^{T},\\ m_{(\nu^{c})_{L}}&=M_{R}+\frac{1}{2}\left(M_{R}^{-1 \dagger}M_{D}^{\dagger}M_{D}+M_{D}^{T}M_{D}^{*}M_{R}^{-1\dagger}\right).\end{split} \tag{30}\]
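A one-generation toy evaluation of the blocks in Eq. (27) and of the light masses in Eq. (30), in the parity-symmetric limit, is sketched below; the Yukawa couplings and the scale \(\mu\) are illustrative only and are not fits to oscillation data.

```python
import numpy as np

# one-generation toy inputs (parity-symmetric: Y = Y', Ytil = Ytil', mu_L = mu_R = mu)
Y, Ytil = 1.0e-3, 5.0e-4
kL, kR = 174.0, 1.0e4    # GeV
mu = 1.0e8               # GeV

# blocks of Eq. (27)
ML = -2 * kL**2 * (Ytil**2 + Y**2) / mu
MD = -2 * kL * kR * (Ytil * Y + Y * Ytil) / mu
MR = -2 * kR**2 * (Y**2 + Ytil**2) / mu

# light active and sterile masses, Eq. (30)
m_active = ML - MD**2 / MR
m_sterile = MR            # leading term; corrections are O(MD^2 / MR)
print("active neutrino mass  [eV]:", abs(m_active) * 1e9)
print("sterile neutrino mass [keV]:", abs(m_sterile) * 1e6)
```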
To obtain the mass basis \((\nu_{1},\nu_{2},\nu_{3})\) of neutrinos, we need a subsequent unitary transformation equivalent to that of the PMNS matrix. It may be worth noting that if the lepton number violating interactions of neutrinos were ignored, the neutrino mass matrix would reduce to the same form as the charged fermions and can be diagonalized using the
same bi-unitary transformation. Up to the second order, the neutral current interactions of the light doublet neutrinos, under parity symmetry, are
\[\begin{split} g_{Z_{L}}^{-1}\mathcal{L}_{Z_{L}}& \supset\overline{\nu}_{L}\gamma^{\mu}Z_{L}\Big{(}A_{L}(1-\rho^{T} \rho^{*}-\delta^{T}\delta^{\dagger})\Big{)}\nu_{L}+(\overline{\nu}^{c})_{L} \gamma^{\mu}Z_{L}\Big{(}A_{L}\delta^{*}\delta^{T}\Big{)}(\nu^{c})_{L},\\ g_{Z_{R}}^{-1}\mathcal{L}_{Z_{R}}&\supset\overline{ \nu}_{L}\gamma^{\mu}Z_{R}\Big{(}A_{R}^{\prime}\left(1-\delta^{T}\delta^{*}- \rho^{T}\rho^{*}\right)+A_{R}\delta^{T}\delta^{*}\Big{)}\nu_{L}\\ &\quad+(\overline{\nu}^{c})_{L}\gamma^{\mu}Z_{R}\Big{(}A_{R}^{ \prime}\delta^{*}\delta^{T}+A_{R}\left(1-\delta^{*}\delta^{T}-\rho^{*}\rho^{T} \right)\Big{)}(\nu^{c})_{L},\end{split} \tag{3.25}\]
following the same notations used in Eq. (3.9). The interaction of leptons with \(W_{L}\) is
\[\begin{split}-\frac{\sqrt{2}}{g_{L}}\mathcal{L}_{W_{L}}& \supset\overline{\nu_{L}}\gamma^{\mu}W_{L\mu}^{+}\left(1-V_{L_{l} }\Big{(}\frac{1}{2}\rho_{L_{L}}^{\dagger}\rho_{L_{L}}-\frac{1}{2}\delta^{*} \delta^{T}-\frac{1}{2}\rho^{T}\rho^{*}\Big{)}V_{L_{l}}^{\dagger}\right)\ell_{ L}\\ &\quad+(\overline{\nu}^{c})_{L}\gamma^{\mu}W_{L\mu}^{+}\left(- \delta^{*}+\frac{1}{2}\delta^{*}\rho_{L_{L}}^{\dagger}\rho_{L_{L}}-\frac{1}{2} \rho^{T}\rho^{*}\right)V_{L}^{\dagger}\ell_{L}+\text{h.c.}.\end{split} \tag{3.26}\]
Here, we assume that \(V_{L_{l}}\) rotation associates \(\nu_{L_{l}}\) with the corresponding charged lepton. Similarly, the \(W_{R}\)-lepton interaction Lagrangian is
\[\begin{split}\frac{\sqrt{2}}{g_{R}}\mathcal{L}_{W_{R}}&\supset\overline{\nu_{L_{l}}}\gamma^{\mu}W_{R\mu}^{+}V_{L_{l}}\left(\delta^{T}-\frac{1}{2}\delta^{T}\rho_{R_{L}}^{T}\rho_{R_{L}}-\frac{1}{2}\rho^{T}\rho^{*}\right)V_{R_{l}}^{*}(l_{R})^{c}\\ &\quad+(\overline{\nu}^{c})_{L}\gamma^{\mu}W_{R\mu}^{+}\left(1-\frac{1}{2}\rho_{R_{L}}^{T}\rho_{R_{L}}^{*}-\frac{1}{2}\delta^{*}\delta^{T}-\frac{1}{2}\rho^{T}\rho^{*}\right)V_{R}^{*}(l_{R})^{c}+\text{h.c.}.\end{split} \tag{3.27}\]
## 4 Constraints on Neutral Current Couplings
With the features of the interaction Lagrangian detailed above and the plethora of experimental signals available, we can study the bounds on various NP parameters of the model. This section explores different constraints on couplings to \(Z_{1}\) and \(Z_{2}\). In the charged lepton sector, we consider the flavor-conserving and -violating two-body decays of \(Z_{1}\) and three-body decays of charged leptons. The most stringent limit comes from the three-body decay of the muon. In the quark sector, we study the different meson decay processes and mass differences from neutral meson mixing. Since both \(Z_{1}\) and \(Z_{2}\) contribute to FCNCs, which are absent in the SM, we focus on the tree-level neutral current interactions of SM fermions. The \(Z_{1}\) interaction with SM fermions can be rewritten, under parity symmetry, as
\[-\mathcal{L}_{Z_{1}}\supset\frac{g_{L}}{\cos\theta_{W}}\overline{f}_{L} \gamma^{\mu}Z_{1\mu}(C_{L_{1}}+\widetilde{C}_{L_{1}})f_{L}+\frac{g_{L}}{\cos \theta_{W}}\overline{f}_{R}\gamma^{\mu}Z_{1\mu}(C_{R_{1}}+\widetilde{C}_{R_{1 }})f_{R}. \tag{4.1}\]
Similarly, the corresponding \(Z_{2}\) Lagrangian is
\[-\mathcal{L}_{Z_{2}}\supset\frac{g_{L}}{\cos\theta_{W}}\overline{f}_{L} \gamma^{\mu}Z_{2\mu}(C_{L_{2}}+\widetilde{C}_{L_{2}})f_{L}+\frac{g_{L}}{\cos \theta_{W}}\overline{f}_{R}\gamma^{\mu}Z_{2\mu}(C_{R_{2}}+\widetilde{C}_{R_{2 }})f_{R}, \tag{4.2}\]
where \(\widetilde{C}\) contains the mixing with VLFs. These expressions can be found in Appendix A. For the analysis, we assume \(g_{L}=g_{R}\), \(M_{Z_{1}}=M_{Z}=91.18\) GeV and \(M_{Z_{2}}=5\) TeV, consistent with the current experimental limits [27]. A shorthand notation is used for the new couplings:
\[\begin{split}(V_{L_{f}}\rho_{L_{F}}^{\dagger}\rho_{L_{F}}V_{L_{f} }^{\dagger})_{ij}&=R_{ij},\\ (V_{R_{f}}\rho_{R_{F}}^{\dagger}\rho_{R_{F}}V_{R_{f}}^{\dagger})_{ ij}&=R_{ij}^{\prime}.\end{split} \tag{4.3}\]
Then, under parity symmetry,
\[R^{\prime}_{ij}=\frac{\kappa_{R}^{2}}{\kappa_{L}^{2}}R_{ij}. \tag{4.4}\]
In the limit of small mixing between the neutral gauge bosons, \(\sin\xi=\xi\), and \(\cos\xi\) is set to 1. We also assume \(R_{ij}\) is real for simplicity. For a clear understanding of how the constraints affect Yukawa couplings at different vector-like fermion masses, the results are given in terms of \((V_{L_{f}}\mathcal{Y}_{F}\mathcal{Y}_{F}^{\dagger}V_{L_{f}}^{\dagger})_{ij}\). Unless otherwise stated, the experimental bounds used in the analyses are obtained from PDG (Ref. [28]). The results are tabulated in the following sections.
### 4.1 \(Z\) Decays
The constraints on the couplings to charged leptons from \(Z\) (\(\equiv Z_{1}\)) decay are reported here. As seen from Sec. 3.1.1 the couplings are modified by the presence of vector-like leptons as well as the mixing between the two neutral gauge bosons. In obtaining the branching ratios these contributions are included in the total decay width of \(Z\). For \(Z\to\ell_{i}^{+}\ell_{i}^{-}\) the new contribution to the appropriate couplings (\(R_{ii}\)) is turned on in both the \(\Gamma(Z\to\ell_{i}^{+}\ell_{i}^{-})\) and \(\Gamma_{total}\). We made use of the PDG results of partial decay widths of \(Z\)-boson to achieve the precision needed to compare theoretical calculations with the experimental results. In computing the branching ratios in the case of \(Z\to\ell_{i}^{+}\ell_{j}^{-}\) decays, \(\Gamma_{total}=2.4952\) GeV was used.
#### 4.1.1 \(Z\to\ell_{i}^{+}\ell_{i}^{-}\)
Here, we study the flavor-conserving decays of SM-like \(Z_{1}\) boson to charged leptons. There are new contributions to the diagonal couplings of \(Z_{1}\) which will be constrained from the \(Z\) decay modes. These decay modes are more constraining because of the interference of SM and NP terms as opposed to the case in the next subsection where \(Z\) decays to \(\ell_{i}\ell_{j}\). The decay rate can be obtained from the equation
\[\Gamma=\frac{g_{L}^{2}M_{Z_{1}}}{24\pi\cos^{2}\theta_{W}}\left(|C_{L_{1}}^{ii} +\widetilde{C}_{L_{1}}^{ii}|^{2}+|C_{R_{1}}^{ii}+\widetilde{C}_{R_{1}}^{ii}|^ {2}\right) \tag{4.5}\]
The experimental bounds of the decay modes and the corresponding constraints on the new diagonal couplings are given in Table 1. The constraints are obtained by allowing \(2\,\sigma\) deviation from the central value of the experimental results quoted.
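To illustrate how such \(2\,\sigma\) windows translate into numbers, the sketch below evaluates the width of Eq. (4.5) with SM-like couplings and scans a schematic shift \(\delta\) of the left-handed coupling; the electroweak inputs are standard illustrative values, and \(\delta\) is not the same object as the Yukawa combinations quoted in the tables, which carry additional mixing factors.

```python
import numpy as np

gL, sw2 = 0.65, 0.2312            # SU(2)_L coupling and sin^2(theta_W), illustrative
MZ, Gamma_total = 91.18, 2.4952   # GeV

def width_ll(cL, cR):
    """Partial width of Eq. (4.5) for effective left/right couplings cL, cR."""
    return gL**2 * MZ / (24 * np.pi * (1.0 - sw2)) * (cL**2 + cR**2)

cL_sm, cR_sm = -0.5 + sw2, sw2    # SM charged-lepton couplings
print("BR(Z -> e+ e-), SM-like:", width_ll(cL_sm, cR_sm) / Gamma_total)

# schematic scan of a small new-physics shift delta in the left-handed coupling
for delta in (0.0, 0.001, 0.002, 0.005):
    print(f"delta = {delta:.3f} -> BR = {width_ll(cL_sm + delta, cR_sm) / Gamma_total:.5f}")
```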
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Process** & **Exp. Bound** & **Constraint** \\ \hline
\(Z\to e^{+}e^{-}\) & \((3.3632\pm 0.0042)\%\) & \(|(V_{L_{f}}\mathcal{Y}_{E}\mathcal{Y}_{E}^{\dagger}V_{L_{f}}^{\dagger})_{ee}|\leq 2.68\times 10^{-2}(\frac{M_{L}}{0.7\,\text{TeV}})^{2}\) \\ \hline
\(Z\to\mu^{+}\mu^{-}\) & \((3.3662\pm 0.0066)\%\) & \(|(V_{L_{f}}\mathcal{Y}_{E}\mathcal{Y}_{E}^{\dagger}V_{L_{f}}^{\dagger})_{\mu\mu}|\leq 3.23\times 10^{-2}(\frac{M_{L}}{0.7\,\text{TeV}})^{2}\) \\ \hline
\(Z\to\tau^{+}\tau^{-}\) & \((3.3696\pm 0.0083)\%\) & \(|(V_{L_{f}}\mathcal{Y}_{E}\mathcal{Y}_{E}^{\dagger}V_{L_{f}}^{\dagger})_{\tau\tau}|\leq 2.93\times 10^{-2}(\frac{M_{L}}{0.7\,\text{TeV}})^{2}\) \\ \hline
\end{tabular}
\end{table}
Table 1: Constraints from flavour conserving \(Z\) decays to charged leptons. These are the allowed regions in the \(2\,\sigma\) range as a function of the vector-like lepton mass \(M_{L}\) (TeV). These constraints were obtained by treating the total decay width of \(Z\) as a function of the NP contributions to the appropriate diagonal couplings and fixing the rest of the decays to their SM values.
#### 4.1.2 \(Z\to\ell_{i}^{+}\ell_{j}^{-}\)
The presence of FCNCs lead to tree-level flavor violating decay modes of \(Z\) boson to charged leptons, which are studied in this section. The constraints on the off-diagonal couplings contributing to the decay modes are given in Table 2, with the following expression for decay rate:
\[\Gamma=\frac{g_{L}^{2}M_{Z_{1}}}{24\pi\cos^{2}\theta_{W}}(|\widetilde{C}_{L_{1} }^{ij}|^{2}+|\widetilde{C}_{R_{1}}^{ij}|^{2}). \tag{4.6}\]
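Conversely, a branching-ratio limit can be inverted into a bound on the effective flavor-violating coupling in Eq. (4.6); the short sketch below does this for the quoted limits under the simplifying assumption \(\widetilde{C}_{L}=\widetilde{C}_{R}\), and it bounds \(\widetilde{C}\) itself rather than the Yukawa combinations listed in Table 2, which include further mixing factors.

```python
import numpy as np

gL, sw2 = 0.65, 0.2312
MZ, Gamma_total = 91.18, 2.4952   # GeV
prefactor = gL**2 * MZ / (24 * np.pi * (1.0 - sw2))

def coupling_bound(br_limit):
    """Largest |C~| allowed by BR(Z -> l_i l_j) < br_limit, taking C~_L = C~_R = C~."""
    return np.sqrt(br_limit * Gamma_total / (2 * prefactor))

for channel, limit in [("e mu", 7.5e-7), ("e tau", 9.8e-6), ("mu tau", 1.2e-5)]:
    print(channel, coupling_bound(limit))
```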
#### 4.1.3 Lepton Flavour Universality Violation
The lepton flavor universality violations in \(Z\) decays are studied here. The ratio of the decay widths to \(\ell_{i}^{+}\ell_{i}^{-}\) pairs is calculated using Eq. (4.5), Taylor expanding to first order in each of the two \(R_{ii}\) parameters involved. In the Taylor series, the product of the two parameters is ignored, so that the quantity constrained is of the form \((1+\text{constant}\times(R_{ii}-R_{jj}))\) for \(\Gamma(Z\to\ell_{i}^{+}\ell_{i}^{-})/\Gamma(Z\to\ell_{j}^{+}\ell_{j}^{-})\). The constraints are listed in Table 3.
### 4.2 3 Body Decay of Charged Leptons
In this section, we examine the decay of leptons to three charged leptons involving one flavor-conserving vertex. These decay amplitudes are proportional to \(R_{ij}\) since the flavor-conserving vertex is an SM-like coupling. The processes where both vertices are flavor violating are not considered here since their amplitudes are proportional to \(R_{ij}^{2}\), which are
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Process** & **Exp. Bound** & **Constraint** \\ \hline
\(Z\to e^{\pm}\mu^{\mp}\) & \(<7.5\times 10^{-7}\) & \(|(V_{L_{f}}\mathcal{Y}_{E}\mathcal{Y}_{E}^{\dagger}V_{L_{f}}^{\dagger})_{e\mu}|<5.16\times 10^{-2}(\frac{M_{L}}{0.7\,\text{TeV}})^{2}\) \\ \hline
\(Z\to e^{\pm}\tau^{\mp}\) & \(<9.8\times 10^{-6}\) & \(|(V_{L_{f}}\mathcal{Y}_{E}\mathcal{Y}_{E}^{\dagger}V_{L_{f}}^{\dagger})_{e\tau}|<0.187(\frac{M_{L}}{0.7\,\text{TeV}})^{2}\) \\ \hline
\(Z\to\mu^{\pm}\tau^{\mp}\) & \(<1.2\times 10^{-5}\) & \(|(V_{L_{f}}\mathcal{Y}_{E}\mathcal{Y}_{E}^{\dagger}V_{L_{f}}^{\dagger})_{\mu\tau}|<0.207(\frac{M_{L}}{0.7\,\text{TeV}})^{2}\) \\ \hline
\end{tabular}
\end{table}
Table 2: Constraints from flavor violating \(Z\) decays to charged leptons as a function of the vector-like lepton mass \(M_{L}\) (TeV). \(\Gamma_{total}\) was taken to be 2.4952 GeV.
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Process** & **Exp. Bound** & **Constraint** \(\big{(}\frac{M_{L}}{0.7\,\text{TeV}}\big{)}^{2}\) \\ \hline \(\Gamma(Z\to\mu^{+}\mu^{-})\) & \((1.0001\pm 0.0024)\) & \(-2.94\times 10^{-2}\leq(V_{L_{1}}\mathcal{Y}_{E}\mathcal{Y}_{E}^{\dagger}V_{L_{1} }^{\dagger})_{\mu\mu-ee}\leq 2.81\times 10^{-2}\) \\ \hline \(\Gamma(Z\to\tau^{+}\tau^{-})\) & \((1.002\pm 0.0032)\) & \(-5.04\times 10^{-2}\leq(V_{L_{1}}\mathcal{Y}_{E}\mathcal{Y}_{E}^{\dagger}V_{L_{1} }^{\dagger})_{\tau\tau-ee}\leq 2.64\times 10^{-2}\) \\ \hline \(\Gamma(Z\to\tau^{+}\tau^{-})\) & \((1.001\pm 0.0026)\) & \(-3.70\times 10^{-2}\leq(V_{L_{1}}\mathcal{Y}_{E}\mathcal{Y}_{E}^{\dagger}V_{L_{1} }^{\dagger})_{\tau\tau-\mu\mu}\leq 2.52\times 10^{-2}\) \\ \hline \end{tabular}
\end{table}
Table 3: Constraints from lepton flavour universality violations in \(Z\) decays. A \(2\,\sigma\) range is allowed in constraining the parameters. The results are quoted as a function of vector-like lepton mass \(M_{L}\)(TeV).
much less constrained. In computing the decay rates for the processes with one of the vertices being SM-like, the NP contribution to that vertex is ignored (although, the mixing between \(Z_{1}\) and \(Z_{2}\) is retained). The decay rate for \(\ell_{i}^{-}\to\ell_{j}^{-}\ell_{k}^{-}\ell_{k}^{+}\) is given by
\[\Gamma=\frac{1}{(1+\delta_{jk})}\frac{g_{L}^{4}m_{i}^{5}}{1536\cos^{4}\theta_{ W}\pi^{3}}\frac{\mathcal{C}}{M_{Z_{1}}^{4}M_{Z_{2}}^{4}}, \tag{4.7}\]
where,
\[\begin{split}\mathcal{C}=& M_{Z_{2}}^{4}\Big{(}|C_{L_{1}}^{kk}|^{2}+|C_{R_{1}}^{kk}|^{2}\Big{)}\Big{(}|\widetilde{C}_{L_{1}}^{ji}|^{2}+|\widetilde{C}_{R_{1}}^{ji}|^{2}\Big{)}\\ &+M_{Z_{1}}^{4}\left(|\widetilde{C}_{L_{2}}^{kk}|^{2}\Big{(}|C_{L_{2}}^{ji}|^{2}+\frac{7}{10}|C_{R_{2}}^{ji}|^{2}\Big{)}+|\widetilde{C}_{R_{2}}^{kk}|^{2}\Big{(}|C_{R_{2}}^{ji}|^{2}+\frac{7}{10}|C_{L_{2}}^{ji}|^{2}\Big{)}\right)\\ &+M_{Z_{1}}^{2}M_{Z_{2}}^{2}\Big{(}\Big{(}\widetilde{C}_{L_{1}}^{ji}\widetilde{C}_{L_{2}}^{*ji}+\widetilde{C}_{R_{1}}^{ji}\widetilde{C}_{R_{2}}^{*ji}\Big{)}\Big{(}C_{L_{1}}^{kk}C_{L_{2}}^{*kk}+C_{R_{1}}^{kk}C_{R_{2}}^{*kk}\Big{)}\\ &+C_{L_{1}}^{kk}C_{L_{2}}^{kk}\Big{(}\widetilde{C}_{L_{1}}^{*ji}\widetilde{C}_{L_{2}}^{ji}+\frac{7}{10}\widetilde{C}_{R_{1}}^{*ji}\widetilde{C}_{R_{2}}^{ji}\Big{)}+C_{R_{1}}^{*kk}C_{R_{2}}^{kk}\Big{(}\widetilde{C}_{R_{1}}^{*ji}\widetilde{C}_{R_{2}}^{ji}+\frac{7}{10}\widetilde{C}_{L_{1}}^{*ji}\widetilde{C}_{L_{2}}^{ji}\Big{)}\Big{)}.\end{split} \tag{4.8}\]
The \((1+\delta_{jk})\) factor takes care of identical particles in the final state. The constraints on the off-diagonal couplings of neutral bosons to charged fermions are given in Table 4.
### Radiative Decays of Charged Leptons
The radiative decays of the form \(\ell_{1}\to\ell_{2}\gamma\) arise from one-loop diagrams as shown in Fig. 2 for \(\mu\to e\gamma\). The gauge interaction leading to such decays with VL-lepton \(F\) in the internal
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Process** & **Exp. Bound** & **Constraint** \\ \hline \(\mu^{-}\to e^{-}e^{+}e^{-}\) & \(<1.0\times 10^{-12}\) & \(|(V_{L_{1}}\mathcal{Y}_{E}\mathcal{Y}_{E}^{\dagger}V_{L_{i}}^{\dagger})_{e \mu}|<4.66\times 10^{-5}(\frac{M_{L}}{0.7\text{ TeV}})^{2}\) \\ \hline \(\tau^{-}\to e^{-}e^{+}e^{-}\) & \(<2.7\times 10^{-8}\) & \(|(V_{L_{1}}\mathcal{Y}_{E}\mathcal{Y}_{E}^{\dagger}V_{L_{i}}^{\dagger})_{e \tau}|<1.81\times 10^{-2}(\frac{M_{L}}{0.7\text{ TeV}})^{2}\) \\ \hline \(\tau^{-}\to e^{-}\mu^{+}\mu^{-}\) & \(<2.7\times 10^{-8}\) & \(|(V_{L_{1}}\mathcal{Y}_{E}\mathcal{Y}_{E}^{\dagger}V_{L_{i}}^{\dagger})_{e \tau}|<1.28\times 10^{-2}(\frac{M_{L}}{0.7\text{ TeV}})^{2}\) \\ \hline \(\tau^{-}\to\mu^{-}e^{+}e^{-}\) & \(<1.8\times 10^{-8}\) & \(|(V_{L_{1}}\mathcal{Y}_{E}\mathcal{Y}_{E}^{\dagger}V_{L_{i}}^{\dagger})_{\mu \tau}|<1.05\times 10^{-2}(\frac{M_{L}}{0.7\text{ TeV}})^{2}\) \\ \hline \(\tau^{-}\to\mu^{-}\mu^{+}\mu^{-}\) & \(<2.1\times 10^{-8}\) & \(|(V_{L_{1}}\mathcal{Y}_{E}\mathcal{Y}_{E}^{\dagger}V_{L_{i}}^{\dagger})_{\mu \tau}|<1.60\times 10^{-2}(\frac{M_{L}}{0.7\text{ TeV}})^{2}\) \\ \hline \end{tabular}
\end{table}
Table 4: Constraints from three body decays of charged leptons. New contributions to the diagonal couplings are ignored in this analysis, but the mixing between \(Z_{1}\) and \(Z_{2}\) is retained. The results are quoted as a function of vector-like lepton mass \(M_{L}\) (TeV).
Figure 1: Tree-level diagram for \(\mu\to 3e\).
line is given by
\[\mathcal{L}_{gauge}=\sum_{j=1}^{2}\sum_{i=1}^{2}\overline{F}\gamma^{\mu}\left( \mathcal{C}_{L_{i}}^{Fj}P_{L}+\mathcal{C}_{R_{i}}^{Fj}P_{R}\right)\ell_{j}Z_{i _{\mu}}+\text{h.c.}. \tag{4.9}\]
The appropriate coefficients can be obtained from Eqs. (3.7), and (3.8) converted to the mass basis of neutral gauge bosons, imposing parity symmetry:
\[\mathcal{C}_{L_{1}}^{Fj} =g_{L}\cos\theta_{W}\left(\sin\xi\frac{1}{2\sqrt{\cos 2\theta_{W}}} \frac{g_{Y}^{2}}{g_{L}^{2}}-\cos\xi\left(\frac{1}{2}+\frac{g_{Y}^{2}}{2g_{L}^ {2}}\right)\rho_{L_{F}}V_{L_{j}}^{\dagger}\right), \tag{4.10}\] \[\mathcal{C}_{R_{1}}^{Fj} =g_{L}\cos\theta_{W}\left(\sin\xi\frac{1}{2\sqrt{\cos 2\theta_{W}}} \frac{\kappa_{R}}{\kappa_{L}}\rho_{L_{F}}V_{L_{j}}^{\dagger}\right),\] \[\mathcal{C}_{L(R)_{2}}^{Fj} =-\mathcal{C}_{L(R)_{1}}^{Fj}(\xi\rightarrow\xi+\pi/2).\]
The partial width for \(\ell_{1}\rightarrow\ell_{2}\gamma\) is [29]
\[\Gamma=\frac{(m_{1}^{2}-m_{2}^{2})^{3}}{16\pi m_{1}^{3}}\left(|\sigma_{L}|^{2} +|\sigma_{R}|^{2}\right). \tag{4.11}\]
Using the Eqs. (4.9) and (4.10), the coefficients \(\sigma_{L,R}\) for \(\mu\to e\gamma\) mediated by the VL-lepton \(F\) are the following, where the subscript \(i=1,2\) corresponds to \(Z_{1,2}\):
\[\sigma_{L,R}=\sum_{i=1}^{2}\sigma_{L_{i},R_{i}}, \tag{4.12}\]
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Process** & **Exp. Bound** & **Constraint** \\ \hline \(\mu^{-}\to e^{-}\gamma\) & \(<4.2\times 10^{-13}\) & \(|(V_{L_{1}}\mathcal{Y}_{E}\mathcal{Y}_{E}^{\dagger}V_{L_{1}}^{\dagger})_{e \mu}|<1.18\times 10^{-6}(\frac{M_{f}}{0.7\text{ TeV}})^{2}\) \\ \hline \(\tau^{-}\to e^{-}\gamma\) & \(<3.3\times 10^{-8}\) & \(|(V_{L_{1}}\mathcal{Y}_{E}\mathcal{Y}_{E}^{\dagger}V_{L_{1}}^{\dagger})_{e \tau\tau}|<1.32\times 10^{-2}(\frac{M_{f}}{0.7\text{ TeV}})^{2}\) \\ \hline \(\tau^{-}\rightarrow\mu^{-}\gamma\) & \(<4.4\times 10^{-8}\) & \(|(V_{L_{1}}\mathcal{Y}_{E}\mathcal{Y}_{E}^{\dagger}V_{L_{1}}^{\dagger})_{\mu \tau\tau}|<1.78\times 10^{-2}(\frac{M_{f}}{0.7\text{ TeV}})^{2}\) \\ \hline \end{tabular}
\end{table}
Table 5: Constraints from \(\ell_{1}\rightarrow\ell_{2}\gamma\) processes. The mass of the vector-like lepton whose chiral enhancement enters these diagrams is taken to be \(1.0\) TeV. The results are quoted as a function of the vector-like lepton mass \(M_{L}\) (TeV).
Figure 2: One-loop \(\mu\to e\gamma\) mediated by neutral gauge bosons. \(F\) stands for SM as well as vector-like charged leptons.
where,
\[\begin{split}\sigma_{L_{i}}=&-\left(\mathcal{C}^{eF}_{R_{i }}\mathcal{C}^{F\mu}_{R_{i}}y_{1}+\mathcal{C}^{eF}_{L_{i}}\mathcal{C}^{F\mu}_{ L_{i}}y_{2}+\mathcal{C}^{eF}_{R_{i}}\mathcal{C}^{F\mu}_{L_{i}}y_{3}+\mathcal{C}^{eF}_{L_{ i}}\mathcal{C}^{F\mu}_{R_{i}}y_{4}\right),\\ \sigma_{R_{i}}=&-\left(\mathcal{C}^{eF}_{R_{i}} \mathcal{C}^{F\mu}_{R_{i}}y_{2}+\mathcal{C}^{eF}_{L_{i}}\mathcal{C}^{F\mu}_{L_ {i}}y_{1}+\mathcal{C}^{eF}_{R_{i}}\mathcal{C}^{F\mu}_{L_{i}}y_{4}+\mathcal{C}^{ eF}_{L_{i}}\mathcal{C}^{F\mu}_{R_{i}}y_{3}\right).\end{split} \tag{4.13}\]
The factors \(y_{i}\), with \(c=\dfrac{i}{96m_{Z_{i}}^{2}(m_{Z_{i}}^{2}-m_{F}^{2})^{4}\pi^{2}}\), are
\[\begin{split} y_{1}=& c\dfrac{m_{\mu}}{2}\Big{(}(m_{Z_{i}}^{2}-m_{F}^{2})\left\{-8m_{Z_{i}}^{6}+30m_{Z_{i}}^{4}m_{F}^{2}-9m_{Z_{i}}^{2}m_{F}^{4}+5m_{F}^{6}\right.\\ &\left.+m_{e}^{2}(2m_{Z_{i}}^{4}+5m_{Z_{i}}^{2}m_{F}^{2}-m_{F}^{4})\right\}+6m_{Z_{i}}^{4}m_{F}^{2}(m_{e}^{2}+3m_{F}^{2})\ln\dfrac{m_{F}^{2}}{m_{Z_{i}}^{2}}\Big{)},\end{split}\]
\[\begin{split} y_{2}=& y_{1}(m_{e}\leftrightarrow m_ {\mu}),\\ y_{3}=& c\dfrac{m_{F}}{2}\Big{(}(m_{Z_{i}}^{2}-m_{F} ^{2})\left\{(m_{\mu}^{2}+m_{e}^{2})(m_{F}^{4}-2m_{Z_{i}}^{4}-5m_{Z_{i}}^{2}m_{ F}^{2})+6(4m_{Z_{i}}^{6}-3m_{Z_{i}}^{4}m_{F}^{2}-m_{F}^{6})\right\}\\ &-6m_{Z_{i}}^{4}m_{F}^{2}(m_{\mu}^{2}+m_{e}^{2}-6m_{Z_{i}}^{2}+6 m_{F}^{2})\ln\dfrac{m_{F}^{2}}{m_{Z_{i}}^{2}}\Big{)},\end{split} \tag{4.14}\]
The dominant contribution to radiative decays arises from the chirally enhanced VL-lepton mediated diagrams. In obtaining the constraints, the mass of the mediator VLF is set to 1 TeV. The results are given in Table 5.
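For orientation, the following sketch (standard muon lifetime and masses assumed; it is not the full computation, since the loop functions \(y_{i}\) and the model couplings are not evaluated) inverts Eq. (4.11) to turn the limit \(BR(\mu\to e\gamma)<4.2\times 10^{-13}\) used in Table 5 into an upper bound on \(\sqrt{|\sigma_{L}|^{2}+|\sigma_{R}|^{2}}\).

```python
# Rough numerical sketch: invert the width formula of Eq. (4.11) against the
# experimental limit on BR(mu -> e gamma).  Inputs are standard PDG-type values.
import math

m_mu, m_e = 0.105658, 0.000511      # GeV
hbar      = 6.582e-25               # GeV * s
tau_mu    = 2.197e-6                # s
Gamma_mu  = hbar / tau_mu           # total muon width in GeV

br_limit  = 4.2e-13                 # experimental bound used in Table 5

prefactor   = (m_mu**2 - m_e**2)**3 / (16.0 * math.pi * m_mu**3)   # Eq. (4.11)
sigma_bound = math.sqrt(br_limit * Gamma_mu / prefactor)

print(f"sqrt(|sigma_L|^2 + |sigma_R|^2) < {sigma_bound:.2e} GeV^-1")
```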
### Mass Difference of Neutral Mesons
The effective Lagrangian for \(\Delta F=2\) processes mediated by neutral gauge bosons, as shown in Fig. 3, is of the form
\[\mathcal{L}=\sum_{k=1}^{2}\left(C_{L_{k}}(\Lambda)\,\overline{q}_{L_{i}}^{ \alpha}\gamma^{\mu}q_{L_{j}}^{\alpha}Z_{k_{\mu}}+C_{R_{k}}(\Lambda)\,\overline {q}_{R_{i}}^{\alpha}\gamma^{\mu}q_{R_{j}}^{\alpha}Z_{k_{\mu}}+\dfrac{1}{2}M_{Z_ {k}}^{2}Z_{k_{\mu}}Z_{k}^{\mu}\right) \tag{4.15}\]
where \(i\) and \(j\) are flavor indices, Greek indices stand for color, and \(C(\Lambda)\) are the coefficients at the high scale. Upon integrating out \(Z_{1}\) and \(Z_{2}\), the effective Hamiltonian becomes
\[\begin{split}\mathcal{H}=&\sum_{k=1}^{2}\dfrac{1}{2 M_{Z_{k}}^{2}}\Big{(}C_{L_{k}}^{2}(\Lambda)\,\overline{q}_{L_{i}}^{\alpha} \gamma_{\mu}q_{L_{j}}^{\alpha}\,\overline{q}_{L_{i}}^{\beta}\gamma^{\mu}q_{L_{j }}^{\beta}+C_{R_{k}}^{2}(\Lambda)\,\overline{q}_{R_{i}}^{\alpha}\gamma_{\mu}q_ {R_{j}}^{\alpha}\,\overline{q}_{R_{i}}^{\beta}\gamma^{\mu}q_{R_{j}}^{\beta}\\ &\hskip 113.811024pt+2C_{L_{k}}(\Lambda)\,C_{R_{k}}(\Lambda)\, \overline{q}_{L_{i}}^{\alpha}\gamma_{\mu}q_{L_{j}}^{\alpha}\,\overline{q}_{R _{i}}^{\beta}\gamma^{\mu}q_{R_{j}}^{\beta}\Big{)}.\end{split} \tag{4.16}\]
The mass difference can be evaluated as \(\Delta M=2\text{Re}\left\langle\phi\right|\mathcal{H}\left|\overline{\phi}\right\rangle\), \(\phi\) being the meson state. In
general, the NP operators contributing to \(\Delta F=2\) processes are [31]
\[\mathcal{O}_{1}^{q_{i}\,q_{j}} =\overline{q}_{Lj}^{\alpha}\gamma_{\mu}q_{Li}^{\alpha}\,\overline{q }_{Lj}^{\beta}\gamma^{\mu}q_{Li}^{\beta}\,,\] \[\mathcal{O}_{2}^{q_{i}\,q_{j}} =\overline{q}_{Rj}^{\alpha}q_{Li}^{\alpha}\,\overline{q}_{Rj}^{ \beta}q_{Li}^{\beta}\,,\] \[\mathcal{O}_{3}^{q_{i}\,q_{j}} =\overline{q}_{Rj}^{\alpha}q_{Li}^{\beta}\,\overline{q}_{Rj}^{ \beta}q_{Li}^{\alpha}\,, \tag{4.17}\] \[\mathcal{O}_{4}^{q_{i}\,q_{j}} =\overline{q}_{Rj}^{\alpha}q_{Li}^{\alpha}\,\overline{q}_{Lj}^{ \alpha}q_{Ri}^{\beta}\,,\] \[\mathcal{O}_{5}^{q_{i}\,q_{j}} =\overline{q}_{Rj}^{\alpha}q_{Li}^{\beta}\,\overline{q}_{Lj}^{ \beta}q_{Ri}^{\alpha}\,,\]
and \(\widetilde{\mathcal{O}}_{1,2,3}^{q_{i}\,q_{j}}\), obtained by the exchange \(L\leftrightarrow R\). Using Eqs. (4.16) and (4.17), the effective Hamiltonian for meson mixing mass difference can be written as
\[\mathcal{H}_{\text{eff}}=C_{1}(M_{Z})\mathcal{O}_{1}+\widetilde{C}_{1}(M_{Z}) \widetilde{\mathcal{O}}_{1}-4C_{5}(M_{Z})\mathcal{O}_{5}. \tag{4.18}\]
Here, the operator \(\mathcal{O}_{5}\) is obtained by Fierz transformation [32] of \(\overline{q}_{Li}^{\alpha}\gamma_{\mu}q_{Lj}^{\alpha}\overline{q}_{Ri}^{\beta} \gamma^{\mu}q_{Rj}^{\beta}\) in Eq. (4.16). The Wilson coefficients (WCs) at \(M_{Z_{1(2)}}\) scale (or any NP scale, \(\Lambda\)) need to be evolved down to hadronic scale \(m_{b}=4.6\) GeV for bottom mesons, \(\mu_{D}=2.8\) GeV for charmed mesons and \(\mu_{K}=2\) GeV for Kaons, which are the renormalization scales used in lattice computation of matrix elements [31]. Renormalization group evolution of \(C_{5}\) induces \(C_{4}\), causing the corresponding operator to appear in the expression of \(\Delta M\) at the hadronic scale. This is given by the analytical formula:
\[\left\langle\overline{\phi}\right|\mathcal{H}_{\text{eff}}\left|\phi\right\rangle_{i}=\sum_{j=1}^{5}\sum_{r=1}^{5}(b_{j}^{(r,i)}+\eta c_{j}^{(r,i)})\eta^{a_{j}}C_{i}(\Lambda)\left\langle\overline{\phi}\right|\mathcal{Q}_{r}\left|\phi\right\rangle, \tag{4.19}\]
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
**Process** & \(\Delta\mathbf{M_{\text{exp}}}\)**(GeV)** & \(\Delta\mathbf{M_{\text{SM}}}\) & **Constraint \((\frac{M_{F}}{1\text{ TeV}})^{2}\)** \\ \hline \hline \(K-\overline{K}\) & \((3.484\pm 0.006)\times 10^{-15}\) & \(2.364\times 10^{-15}\) GeV & \(|(V_{L_{f}}\mathcal{Y}_{D}\mathcal{Y}_{D}^{\dagger}V_{L_{f}}^{\dagger})_{as}| \leq 2.12\times 10^{-4}\) \\ \hline \(D-\overline{D}\) & \((6.2586^{+2.70}_{-2.90})\times 10^{-15}\) & \(3.87\times 10^{-15}\) GeV & \(|(V_{L_{f}}\mathcal{Y}_{U}\mathcal{Y}_{U}^{\dagger}V_{L_{f}}^{\dagger})_{ue}| \leq 9.63\times 10^{-5}\) \\ \hline \(B-\overline{B}\) & \((3.334\pm 0.013)\times 10^{-13}\) & \((0.543\pm 0.029)\text{ps}^{-1}\)[30] & \(|(V_{L_{f}}\mathcal{Y}_{D}\mathcal{Y}_{D}^{\dagger}V_{L_{f}}^{\dagger})_{ab}| \leq 3.76\times 10^{-4}\) \\ \hline \(B_{s}-\overline{B}_{s}\) & \((1.1688\pm 0.0014)\times 10^{-11}\) & \((18.77\pm 0.86)\text{ps}^{-1}\)[30] & \(|(V_{L_{f}}\mathcal{Y}_{D}\mathcal{Y}_{D}^{\dagger}V_{L_{f}}^{\dagger})_{ab}| \leq 4.35\times 10^{-4}\) \\ \hline \end{tabular}
\end{table}
Table 6: Constraints from neutral meson mass differences. The results are quoted as a function of vector-like quark mass \(M_{F}\) (TeV). \(B_{q}-\overline{B_{q}}\) calculations are done using Eq. (4.21) while the other two are evaluated using Eq. (4.19). More details can be found in Appendix. B.
Figure 3: Tree-level diagram contributing to kaon mixing.
where, \(\eta=\alpha_{s}(\Lambda)/\alpha_{s}(m_{t})\), and \(a_{j}\), \(b_{j}^{(r,i)}\) and \(c_{j}^{(r,i)}\) are the magic numbers. The relevant constants and magic numbers are provided in Appendix. B.3.
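The bookkeeping of Eq. (4.19) is illustrated by the following sketch; the exponents, the \(b\) and \(c\) coefficients, the high-scale Wilson coefficients and the matrix elements entered below are placeholders rather than the actual magic numbers of Appendix B.3, so only the structure of the double sum is meaningful.

```python
# Structure-only sketch of the "magic number" evolution in Eq. (4.19).
# All numerical arrays below are placeholders, NOT the Appendix B.3 values.
import numpy as np

n_ops = 5
eta   = 0.79                                        # alpha_s(Lambda)/alpha_s(m_t), placeholder

a      = np.array([0.29, -0.69, 0.79, -1.1, 0.14])  # placeholder exponents a_j
b      = np.full((n_ops, n_ops, n_ops), 0.1)        # placeholder b_j^{(r,i)}
c      = np.zeros((n_ops, n_ops, n_ops))            # placeholder c_j^{(r,i)}
C_high = np.array([1.0e-8, 0.0, 0.0, 0.0, 0.0])     # C_i(Lambda) in GeV^-2, placeholder
Q_me   = np.array([0.05, 0.04, 0.01, 0.08, 0.03])   # <phi-bar|Q_r|phi> in GeV^3, placeholder

def matrix_element(i):
    """Low-scale matrix element generated by the high-scale coefficient C_i, per Eq. (4.19)."""
    total = 0.0
    for j in range(n_ops):
        for r in range(n_ops):
            total += (b[j, r, i] + eta * c[j, r, i]) * eta**a[j] * C_high[i] * Q_me[r]
    return total

# Delta M = 2 Re <phi-bar|H_eff|phi>; all placeholder inputs here are real.
delta_M = 2.0 * sum(matrix_element(i) for i in range(n_ops))
print(f"Delta M (placeholder inputs) = {delta_M:.3e} GeV")
```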
We follow a different approach for obtaining the mass differences in the case of \(B_{q}\)-mesons [33; 34], although the expression in Eq. (4.19) is equally valid. The SM predicts [35]
\[\Delta M_{q}=\frac{2G_{F}^{2}}{4\pi^{2}}M_{W}^{2}\lambda_{q}^{2}\hat{\eta}_{B}S _{0}(x_{t})\frac{\langle\mathcal{O}_{1}\rangle}{2M_{B_{q}}}, \tag{4.20}\]
where, \(q=\{s,d\}\), \(\lambda_{q}=V_{tb}V_{tq}^{*}\), \(S_{0}\) is the Inami-Lim function [36], \(x_{t}=(\overline{m}_{t}(\overline{m}_{t})/M_{W})^{2}\), \(\hat{\eta}_{B}=0.84\) and \(\langle\mathcal{O}_{1}\rangle=\frac{2}{3}M_{B_{q}}^{2}f_{b_{q}}^{2}B_{1}(m_{ b})\). Using this, the SM+NP contribution to mass mixing normalized to the SM one is given by
\[\begin{split}\frac{\Delta M_{q}^{\text{SM+NP}}}{\Delta M_{q}^{ \text{SM}}}=&\bigg{|}1+\frac{\sqrt{2}G_{F}M_{Z_{1}}^{2}}{ \mathcal{C}^{\Delta_{SM}}}\sum_{k=1,2}\bigg{[}\frac{2}{3}\bigg{(}\frac{ \widetilde{C}_{L_{k}}^{ij^{2}}+\widetilde{C}_{R_{k}}^{ij^{2}}}{M_{Z_{k}}^{2} }\eta_{k}^{6/23}\bigg{)}\\ &-\frac{B_{5}}{B_{1}}\bigg{(}\frac{2M_{B_{q}}^{2}}{3(m_{b}+m_{q}) ^{2}}+1\bigg{)}\bigg{(}\frac{\widetilde{C}_{L_{k}}^{ij}\widetilde{C}_{R_{k}}^{ ij}}{M_{Z_{k}}^{2}}\eta_{k}^{3/23}\bigg{)}\\ &+\frac{1}{3}\frac{B_{4}}{B_{1}}\bigg{(}\frac{2M_{B_{q}}^{2}}{(m_ {b}+m_{q})^{2}}+\frac{1}{3}\bigg{)}\bigg{(}\frac{\widetilde{C}_{L_{k}}^{ij} \widetilde{C}_{R_{k}}^{ij}}{M_{Z_{k}}^{2}}\left(\eta_{k}^{3/23}-\eta_{k}^{-24 /23}\right)\bigg{)}\bigg{]}\bigg{|},\end{split} \tag{4.21}\]
with \(\mathcal{C}^{\Delta_{\text{SM}}}=\frac{G_{F}^{2}}{12\pi}\lambda_{q}^{2}M_{W}^{2}S_{0}(x_{t})\hat{\eta}_{B}\). The analysis for \(B_{q}\) mixings is done allowing a \(2\,\sigma\) deviation in the SM predictions and \(1\,\sigma\) in the experimental measurements, while a \(2\,\sigma\) deviation in the experimental measurements is allowed for the \(K\) and \(D\) mass differences. The constraints obtained are tabulated in Table 6 along with the experimental bounds and the SM predictions used.
### Charged Leptonic Decays of Mesons
Neutral mesons decaying to charged leptons of the form \(\phi\to\ell_{i}^{+}\ell_{i}^{-}\) are analyzed here, where \(\phi\) represents the meson and \(\ell\) the lepton. The decay rate, with \(f_{\phi}\) being the meson decay constant, is given by
\[\Gamma=\frac{g_{L}^{4}}{32\pi\cos^{4}\theta_{W}}f_{\phi}^{2}m_{\ell}^{2}m_{ \phi}\sqrt{1-\frac{4m_{\ell}^{2}}{m_{\phi}^{2}}}(|\mathcal{C}_{L}|^{2}+| \mathcal{C}_{R}|^{2}), \tag{4.22}\]
where, \(\mathcal{C}_{X}=\frac{C_{X_{1}}^{\ell\ell}}{M_{Z_{1}}^{2}}(\widetilde{C}_{L_{1}}^{ij}-\widetilde{C}_{R_{1}}^{ij})+\frac{C_{X_{2}}^{\ell\ell}}{M_{Z_{2}}^{2}}(\widetilde{C}_{L_{2}}^{ij}-\widetilde{C}_{R_{2}}^{ij}),\text{ and }X=\{L,R\}.\) The new contributions to the \(Z_{i}\,\overline{\ell}\ell\) couplings are ignored, i.e., \(R_{\ell\ell}=0\), and the allowed parameter space of \(R_{ij}\) from the quark sector is determined. For \(B_{q}\to\mu^{+}\mu^{-}\) decays, the constraints are obtained for the central value of the experimental measurements. The constraints obtained are listed in Table 7 along with the experimental bounds and the decay constants used. The constraints are weak when electrons are the final-state particles, owing to the \(m_{\ell}^{2}\) suppression of the rate, while muon final states give stronger constraints on account of the stringent experimental bounds.
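As an example of how entries of this type arise, the sketch below applies Eq. (4.22) to \(B_{s}\to\mu^{+}\mu^{-}\), using the decay constant quoted in Table 7 and standard mass and lifetime inputs, and inverts the experimental branching ratio into a bound on \(\sqrt{|\mathcal{C}_{L}|^{2}+|\mathcal{C}_{R}|^{2}}\); the mapping onto the Yukawa combination listed in the table is not reproduced here.

```python
# Minimal sketch (standard inputs, not the paper's numerics): Eq. (4.22) plus
# the B_s lifetime turn BR(B_s -> mu mu) into a bound on the effective
# coefficient, which carries dimension GeV^-2 since the Z_{1,2} propagators
# are absorbed into C_X.
import math

g_L, sin2 = 0.652, 0.2312                     # assumed SM inputs
cos4 = (1.0 - sin2)**2
f_Bs, m_Bs, m_mu = 0.200, 5.367, 0.105658     # GeV (f_Bs as quoted in Table 7)
tau_Bs, hbar = 1.52e-12, 6.582e-25            # s, GeV*s

br_central = 3.0e-9                           # central value of the bound in Table 7

prefactor = (g_L**4 / (32.0 * math.pi * cos4)) * f_Bs**2 * m_mu**2 * m_Bs \
            * math.sqrt(1.0 - 4.0 * m_mu**2 / m_Bs**2)           # Eq. (4.22)
gamma_max = br_central * hbar / tau_Bs
print(f"sqrt(|C_L|^2 + |C_R|^2) <~ {math.sqrt(gamma_max / prefactor):.2e} GeV^-2")
```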
### Semi-Leptonic Meson Decay
Here, we explore the tree-level lepton flavor conserving decays of mesons of the form \(\phi_{H}\to\phi_{L}\ell^{+}\ell^{-}\). We assume that the decay width consists only of the tree-level contribution from the NP model parameters, so the results are less constraining than would be obtained from a more thorough analysis that includes the interference with the SM amplitude. The decay width for such processes is given by
\[\Gamma=\frac{g_{L}^{4}}{64\pi^{3}\cos^{4}\theta_{W}m_{H}}|f_{+}(0)|^{2}{\cal F} \left(|{\cal C}_{L}|^{2}+|{\cal C}_{R}|^{2}\right), \tag{4.23}\]
where, \({\cal F}=\frac{m_{V}^{2}}{48m_{H}^{2}}\left(-2m_{H}^{6}+9m_{H}^{4}m_{V}^{2}-6m _{H}^{2}m_{V}^{4}-6(m_{V}^{3}-m_{H}^{2}m_{V})^{2}\ln\frac{m_{V}^{2}-m_{H}^{2}}{ m_{V}^{2}}\right)\), \(m_{H}\) is the parent meson mass and \(m_{V}\) is the corresponding vector meson mass. The input parameters used are given in Appendix. C. The coefficients \({\cal C}_{L,R}\) are given by
\[{\cal C}_{X}=\frac{C_{X_{1}}^{\ell\ell}(\widetilde{C}_{L_{1}}^{ij}+\widetilde{C}_{R_{1}}^{ij})}{M_{Z_{1}}^{2}}+\frac{C_{X_{2}}^{\ell\ell}(\widetilde{C}_{L_{2}}^{ij}+\widetilde{C}_{R_{2}}^{ij})}{M_{Z_{2}}^{2}},\quad X=\{L,R\}. \tag{4.24}\]
The NP contribution to diagonal couplings from the lepton sector is set to zero since the quark vertex has the leading contribution. When the final state particles involve neutrinos,
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
**Process** & **Exp. Bound** & **Constraint** & \(f_{\phi}\) **(GeV)[37]** \\ \hline \(D\to e^{+}e^{-}\) & \(<7.9\times 10^{-8}\) & \(|(V_{L_{f}}{\cal Y}_{U}{\cal Y}_{U}^{\dagger}V_{L_{f}}^{\dagger})_{uc}|<35.96( \frac{M_{F}}{1\,\mathrm{TeV}})^{2}\) & 0.200 \\ \hline \(D\to\mu^{+}\mu^{-}\) & \(<6.2\times 10^{-9}\) & \(|(V_{L_{f}}{\cal Y}_{U}{\cal Y}_{U}^{\dagger}V_{L_{f}}^{\dagger})_{uc}|<4.88 \times 10^{-2}(\frac{M_{F}}{1\,\mathrm{TeV}})^{2}\) & 0.200 \\ \hline \(B\to e^{+}e^{-}\) & \(<8.3\times 10^{-8}\) & \(|(V_{L_{f}}{\cal Y}_{D}{\cal Y}_{D}^{\dagger}V_{L_{f}}^{\dagger})_{bd}|<12.70( \frac{M_{F}}{1\,\mathrm{TeV}})^{2}\) & 0.180 \\ \hline \(B\to\mu^{+}\mu^{-}\) & \((1.1^{+1.4}_{-1.3})\times 10^{-10}\) & \(|(V_{L_{f}}{\cal Y}_{D}{\cal Y}_{D}^{\dagger}V_{L_{f}}^{\dagger})_{bd}|\leq 2.24 \times 10^{-3}(\frac{M_{F}}{1\,\mathrm{TeV}})^{2}\) & 0.180 \\ \hline \(B\to\tau^{+}\tau^{-}\) & \(<2.1\times 10^{-3}\) & \(|(V_{L_{f}}{\cal Y}_{D}{\cal Y}_{D}^{\dagger}V_{L_{f}}^{\dagger})_{bd}|<0.676( \frac{M_{F}}{1\,\mathrm{TeV}})^{2}\) & 0.180 \\ \hline \(B_{s}\to e^{+}e^{-}\) & \(<2.8\times 10^{-7}\) & \(|(V_{L_{f}}{\cal Y}_{D}{\cal Y}_{D}^{\dagger}V_{L_{f}}^{\dagger})_{bs}|<20.92( \frac{M_{F}}{1\,\mathrm{TeV}})^{2}\) & 0.200 \\ \hline \(B_{s}\to\mu^{+}\mu^{-}\) & \(3.0\pm 0.4\times 10^{-9}\) & \(|(V_{L_{f}}{\cal Y}_{D}{\cal Y}_{D}^{\dagger}V_{L_{f}}^{\dagger})_{bs}|\leq 1.05 \times 10^{-2}(\frac{M_{F}}{1\,\mathrm{TeV}})^{2}\) & 0.200 \\ \hline \(B_{s}\to\tau^{+}\tau^{-}\) & \(<6.8\times 10^{-3}\) & \(|(V_{L_{f}}{\cal Y}_{D}{\cal Y}_{D}^{\dagger}V_{L_{f}}^{\dagger})_{bs}|<1.08( \frac{M_{F}}{1\,\mathrm{TeV}})^{2}\) & 0.200 \\ \hline \end{tabular}
\end{table}
Table 7: Constraints from flavor conserving charged leptonic decays of neutral mesons. The results are quoted as a function of vector-like quark mass \(M_{F}\).
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Process** & **Exp. Bound** & **Constraint** \\ \hline \(K^{+}\to\pi^{+}e^{+}e^{-}\) & \((3\pm 0.09)\times 10^{-7}\) & \(|(V_{L_{f}}{\cal Y}_{D}{\cal Y}_{D}^{\dagger}V_{L_{f}}^{\dagger})_{sd}|\leq 3.83 \times 10^{-2}(\frac{M_{F}}{1\,{\rm TeV}})^{2}\) \\ \hline \(K^{+}\to\pi^{+}\mu^{+}\mu^{-}\) & \((9.4\pm 0.6)\times 10^{-8}\) & \(|(V_{L_{f}}{\cal Y}_{D}{\cal Y}_{D}^{\dagger}V_{L_{f}}^{\dagger})_{sd}|\leq 2.15 \times 10^{-2}(\frac{M_{F}}{1\,{\rm TeV}})^{2}\) \\ \hline \(K^{+}\to\pi^{+}\nu\overline{\nu}\) & \((1.7\pm 1.1)\times 10^{-10}\) & \(|(V_{L_{f}}{\cal Y}_{D}{\cal Y}_{D}^{\dagger}V_{L_{f}}^{\dagger})_{sd}|\leq 3.66 \times 10^{-4}(\frac{M_{F}}{1\,{\rm TeV}})^{2}\) \\ \hline \hline \(B^{+}\to\pi^{+}\ell^{+}\ell^{-}\) & \(<4.9\times 10^{-8}\) & \(|(V_{L_{f}}{\cal Y}_{D}{\cal Y}_{D}^{\dagger}V_{L_{f}}^{\dagger})_{bd}|<6.49 \times 10^{-3}(\frac{M_{F}}{1\,{\rm TeV}})^{2}\) \\ \hline \(B^{+}\to\pi^{+}e^{+}e^{-}\) & \(<8.0\times 10^{-8}\) & \(|(V_{L_{f}}{\cal Y}_{D}{\cal Y}_{D}^{\dagger}V_{L_{f}}^{\dagger})_{bd}|<1.18 \times 10^{-2}(\frac{M_{F}}{1\,{\rm TeV}})^{2}\) \\ \hline \(B^{+}\to\pi^{+}\mu^{+}\mu^{-}\) & \((1.75\pm 0.22)\times 10^{-8}\) & \(|(V_{L_{f}}{\cal Y}_{D}{\cal Y}_{D}^{\dagger}V_{L_{f}}^{\dagger})_{bd}|\leq 5.51 \times 10^{-3}(\frac{M_{F}}{1\,{\rm TeV}})^{2}\) \\ \hline \(B\to\pi\ell^{+}\ell^{-}\) & \(<5.3\times 10^{-8}\) & \(|(V_{L_{f}}{\cal Y}_{D}{\cal Y}_{D}^{\dagger}V_{L_{f}}^{\dagger})_{bd}|<9.6 \times 10^{-3}(\frac{M_{F}}{1\,{\rm TeV}})^{2}\) \\ \hline \(B\to\pi e^{+}e^{-}\) & \(8.4\times 10^{-8}\) & \(|(V_{L_{f}}{\cal Y}_{D}{\cal Y}_{D}^{\dagger}V_{L_{f}}^{\dagger})_{bd}|<1.71 \times 10^{-2}(\frac{M_{F}}{1\,{\rm TeV}})^{2}\) \\ \hline \(B\to\pi\mu^{+}\mu^{-}\) & \(<6.9\times 10^{-8}\) & \(|(V_{L_{f}}{\cal Y}_{D}{\cal Y}_{D}^{\dagger}V_{L_{f}}^{\dagger})_{bd}|<1.55 \times 10^{-2}(\frac{M_{F}}{1\,{\rm TeV}})^{2}\) \\ \hline \(B^{+}\to\pi^{+}\nu\overline{\nu}\) & \(<1.4\times 10^{-5}\) & \(|(V_{L_{f}}{\cal Y}_{D}{\cal Y}_{D}^{\dagger}V_{L_{f}}^{\dagger})_{bd}|<6.24 \times 10^{-2}(\frac{M_{F}}{1\,{\rm TeV}})^{2}\) \\ \hline \(B\to\pi\nu\overline{\nu}\) & \(<9\times 10^{-6}\) & \(|(V_{L_{f}}{\cal Y}_{D}{\cal Y}_{D}^{\dagger}V_{L_{f}}^{\dagger})_{bd}|<7.06 \times 10^{-2}(\frac{M_{F}}{1\,{\rm TeV}})^{2}\) \\ \hline \hline \(B^{+}\to K^{+}\ell^{+}\ell^{-}\) & \((4.51\pm 0.23)\times 10^{-7}\) & \(|(V_{L_{f}}{\cal Y}_{D}{\cal Y}_{D}^{\dagger}V_{L_{f}}^{\dagger})_{bs}|\leq 1.63 \times 10^{-2}(\frac{M_{F}}{1\,{\rm TeV}})^{2}\) \\ \hline \(B^{+}\to K^{+}e^{+}e^{-}\) & \((5.5\pm 0.7)\times 10^{-7}\) & \(|(V_{L_{f}}{\cal Y}_{D}{\cal Y}_{D}^{\dagger}V_{L_{f}}^{\dagger})_{bs}|\leq 2.55 \times 10^{-2}(\frac{M_{F}}{1\,{\rm TeV}})^{2}\) \\ \hline \(B^{+}\to K^{+}\mu^{+}\mu^{-}\) & \((4.41\pm 0.22)\times 10^{-7}\) & \(|(V_{L_{f}}{\cal Y}_{D}{\cal Y}_{D}^{\dagger}V_{L_{f}}^{\dagger})_{bs}|\leq 2.29 \times 10^{-2}(\frac{M_{F}}{1\,{\rm TeV}})^{2}\) \\ \hline \(B^{+}\to K^{+}\tau^{+}\tau^{-}\) & \(<2.25\times 10^{-3}\) & \(|(V_{L_{f}}{\cal Y}_{D}{\cal Y}_{D}^{\dagger}V_{L_{f}}^{\dagger})_{bs}|<1.63( \frac{M_{F}}{1\,{\rm TeV}})^{2}\) \\ \hline \(B\to K\ell^{+}\ell^{-}\) & \((3.1^{+0.8}_{-0.7})\times 10^{-7}\) & \(|(V_{L_{f}}{\cal Y}_{D}{\cal Y}_{D}^{\dagger}V_{L_{f}}^{\dagger})_{bs}|\leq 1.36 \times 10^{-2}(\frac{M_{F}}{1\,{\rm TeV}})^{2}\) \\ \hline \(B\to Ke^{+}e^{-}\) & \((1.6^{+1.0}_{-0.8})\times 10^{-7}\) & \(|(V_{L_{f}}{\cal Y}_{D}{\cal Y}_{D}^{\dagger}V_{L_{f}}^{\dagger})_{bs}|\leq 1.38 \times 10^{-2}(\frac{M_{F}}{1\,{\rm TeV}})^{2}\) \\ \hline \(B\to K\mu^{+}\mu^{-}\) & \((3.39\pm 
0.34)\times 10^{-7}\) & \(|(V_{L_{f}}{\cal Y}_{D}{\cal Y}_{D}^{\dagger}V_{L_{f}}^{\dagger})_{bs}|\leq 2.00 \times 10^{-2}(\frac{M_{F}}{1\,{\rm TeV}})^{2}\) \\ \hline \(B^{+}\to K^{+}\nu\overline{\nu}\) & \(<1.6\times 10^{-5}\) & \(|(V_{L_{f}}{\cal Y}_{D}{\cal Y}_{D}^{\dagger}V_{L_{f}}^{\dagger})_{bs}|<5.51 \times 10^{-2}(\frac{M_{F}}{1\,{\rm TeV}})^{2}\) \\ \hline \(B\to K\nu\overline{\nu}\) & \(<2.6\times 10^{-5}\) & \(|(V_{L_{f}}{\cal Y}_{D}{\cal Y}_{D}^{\dagger}V_{L_{f}}^{\dagger})_{bs}|<7.03 \times 10^{-2}(\frac{M_{F}}{1\,{\rm TeV}})^{2}\) \\ \hline \hline \(D^{+}\to\pi^{+}e^{+}e^{-}\) & \(<1.1\times 10^{-6}\) & \(|(V_{L_{f}}{\cal Y}_{D}{\cal Y}_{D}^{\dagger}V_{L_{f}}^{\dagger})_{uc}|<0.33( \frac{M_{F}}{1\,{\rm TeV}})^{2}\) \\ \hline \(D^{+}\to\pi^{+}\mu^{+}\mu^{-}\) & \(<7.3\times 10^{-8}\) & \(|(V_{L_{f}}{\cal Y}_{D}{\cal Y}_{D}^{\dagger}V_{L_{f}}^{\dagger})_{uc}|<8.54 \times 10^{-2}(\frac{M_{F}}{1\,{\rm TeV}})^{2}\) \\ \hline \(D\to\pi e^{+}e^{-}\) & \(<4\times 10^{-6}\) & \(|(V_{L_{f}}{\cal Y}_{D}{
only the SM neutrino vertex is taken into account so that the \(C_{X_{2}}\) coefficients vanish and the three flavors of neutrinos are summed together. As in the previous section, the constraints are obtained for the central values of the experimental measurements. They are tabulated in Table 8.
## 5 Constraints on Charged Current Couplings
In this section, we find the constraints on the theory parameters modifying the charged current couplings. The major constraints arise from lepton flavor universality violating processes. We also look at \(W_{L}\) decay to leptons and \(\ell_{1}\to\ell_{2}\gamma\) processes. For simplicity, the part of Eq. (3.26) relevant to the processes being studied is rewritten as
\[-\mathcal{L}_{W_{L}}=\overline{\nu}_{L}\gamma^{\mu}W_{L\mu}^{+}\frac{g_{L}}{ \sqrt{2}}(1-f_{\nu\ell})\ell_{L}+...+\text{h.c.}. \tag{5.1}\]
### \(W_{L}\) Decay to Leptons
Here, we obtain the constraints on the coupling of \(W_{L}\) to leptons by studying its different decay modes. The decay rate of such processes is given by
\[\Gamma=\frac{g_{L}^{2}M_{W}}{48\pi}|1-f_{\nu\ell}|^{2}. \tag{5.2}\]
To accommodate the deviation in the total decay width of \(W_{L}\) due to NP contribution, the branching ratio is computed by turning on the relevant coupling in each case. Since
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Process** & **Exp. Bound** & **Constraint** \\ \hline \(W_{L}^{+}\to e^{+}\nu\) & \((10.71\pm 0.16)\%\) & \(-7.03\times 10^{-3}\leq f_{\nu e}\leq 2.61\times 10^{-2}\) \\ \hline \(W_{L}^{+}\to\mu^{+}\nu\) & \((10.63\pm 0.15)\%\) & \(-1.89\times 10^{-3}\leq f_{\nu\mu}\leq 2.93\times 10^{-2}\) \\ \hline \(W_{L}^{+}\to\tau^{+}\nu\) & \((11.38\pm 0.21)\%\) & \(-3.46\times 10^{-2}\leq f_{\nu\tau}\leq 7.67\times 10^{-3}\) \\ \hline \end{tabular}
\end{table}
Table 9: Constraints from leptonic decays of \(W_{L}\). \(f\)’s are the new contribution to the charged current couplings as defined in Eq. (5.1). A \(2\,\sigma\) deviation is allowed in obtaining the constraints.
Figure 5: Tree-level diagram for \(B^{+}\to K^{+}e^{+}e^{-}\).
\(\Gamma(W_{L}\to q_{i}q_{j})_{\rm SM}=6\Gamma(W_{L}\to\nu_{\ell}\ell)_{\rm SM}\) owing to the apparent unitarity of the CKM matrix, the total decay width of \(W_{L}\) may be written as
\[\Gamma_{W_{L}}=8\Gamma(W_{L}\to\nu_{\ell}\ell)_{\rm SM}+\frac{g_{L}^{2}M_{W}}{48 \pi}|1-f_{\nu\ell}|^{2}. \tag{10}\]
Then, the branching ratio of \(W_{L}^{+}\to e^{+}\nu\), for example, would be \(\Gamma(f_{\nu\ell}=f_{\nu e})/\Gamma_{W_{L}}(f_{\nu\ell}=f_{\nu e})\). We allow a \(2\,\sigma\) deviation above and below the central value in obtaining the constraints. The constraints from \(W_{L}\) decays are given in Table 9.
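The procedure just described can be sketched numerically as follows; the simplified total width \(8\,\Gamma_{\rm SM}+\Gamma(f)\) ignores QCD corrections to the hadronic channels, so the resulting window is only indicative of the Table 9 values.

```python
# Sketch of the 2-sigma scan described above (not the paper's exact numerics):
# the branching ratio is built from Eqs. (5.2)-(5.3) with the NP parameter f
# switched on in the channel under study; Gamma_SM cancels in the ratio.
import numpy as np

def br_W_to_lnu(f):
    """BR(W -> l nu) for one leptonic channel with NP parameter f."""
    return abs(1.0 - f)**2 / (8.0 + abs(1.0 - f)**2)

central, sigma = 0.1071, 0.0016          # BR(W -> e nu) as quoted in Table 9
lo, hi = central - 2*sigma, central + 2*sigma

f_grid  = np.linspace(-0.05, 0.05, 100001)
allowed = f_grid[(br_W_to_lnu(f_grid) >= lo) & (br_W_to_lnu(f_grid) <= hi)]
print(f"f_nu_e roughly in [{allowed.min():.4f}, {allowed.max():.4f}]")
```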
### Radiative Decays of Charged Leptons
Since the charged current couplings can be flavor-changing in this model, \(\ell_{1}\to\ell_{2}\gamma\) processes can occur at one-loop level without a neutrino mass insertion in the internal fermion line. The gauge interaction relevant for the decay is
\[\mathcal{L}=\frac{g_{L}}{\sqrt{2}}\overline{e}\gamma^{\sigma}W_{L\sigma}^{-} \nu_{e}+\frac{g_{L}}{\sqrt{2}}\overline{\mu}\gamma^{\sigma}W_{L\sigma}^{-}\nu _{\mu}+\frac{g_{L}}{\sqrt{2}}f_{e\nu_{\mu}}\overline{e}\gamma^{\sigma}W_{L \sigma}^{-}\nu_{\mu}+\frac{g_{L}}{\sqrt{2}}f_{\mu\nu_{e}}\overline{\mu} \gamma^{\sigma}W_{L\sigma}^{-}\nu_{e}+...+{\rm h.c.} \tag{11}\]
and the decay rate is computed along the same lines as Sec. 4.3. The internal fermion mediators are considered to be SM neutrinos such that one of the vertices is a SM vertex. The constraints from these processes are listed in Table 10.
### Lepton Flavour Universality Tests
In this section, we analyze the constraints arising from various lepton flavor universality (LFU) tests involving the charged current. The ratios of leptonic decays of the \(W\) boson, predicted to be unity in the SM, provide a direct test of lepton universality. The theoretical uncertainties for these processes arise from the masses of the final-state particles,
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Process** & **Exp. Bound** & **Constraint** \\ \hline \(\mu^{-}\to e^{-}\gamma\) & \(<4.2\times 10^{-13}\) & \(|f_{e\nu}+f_{\mu\nu}^{*}|<2.59\times 10^{-6}\) \\ \hline \(\tau^{-}\to e^{-}\gamma\) & \(<3.3\times 10^{-8}\) & \(|f_{e\nu}+f_{\tau\nu}^{*}|<1.73\times 10^{-3}\) \\ \hline \(\tau^{-}\to\mu^{-}\gamma\) & \(<4.4\times 10^{-8}\) & \(|f_{\mu\nu}+f_{\tau\nu}^{*}|<2.0\times 10^{-3}\) \\ \hline
\end{table}
Table 10: Constraints from \(l_{1}\to l_{2}\gamma\) processes. The \(f\)’s are the new contributions to charged current interaction.
Figure 6: One-loop diagram of \(\mu\to e\gamma\) mediated by \(W_{L}\). Keeping one of the vertices to be SM-like will eliminate \(\nu_{\tau}\) from appearing in this process.
which are small compared to the experimental uncertainties. Other theory parameters cancel out in the ratio. Using Eq. (10), the decay rate ratios given in Table 14 take the form \(1-2(f_{\nu\ell_{i}}-f_{\nu\ell_{j}})\) to the first order in both the parameters, where, \(i\) is lepton flavor in the numerator, and \(j\) is the one in the denominator. This formula is also valid for \(\Gamma(K\to\pi\mu\nu)/\Gamma(K\to\pi e\nu)\), and for \(\Gamma(\tau^{-}\to\mu^{-}\overline{\nu}_{\mu}\nu_{\tau})/\Gamma(\tau^{-}\to e ^{-}\overline{\nu}_{e}\nu_{\tau})\) which is given in Table 15.
Another set of LFU-violating constraints come from the purely leptonic decays of charged mesons. The ratios of such decays will be of the form
\[R=\frac{(m_{i}^{3}-m_{i}m_{\phi}^{2})^{2}\left(1-2(f_{\nu i}-f_{\nu j})\right)}{(m_{j}^{3}-m_{\phi}^{2}m_{j})^{2}} \tag{11}\]
where, \(i\) is lepton in the numerator and \(j\) is the one in the denominator. The constraints from these are given in Table 16. Yet another set of constraints come from the three-body decays of charged leptons. The theoretical prediction to the lowest order for
\[\frac{\Gamma(\tau^{-}\to e^{-}\overline{\nu}_{e}\nu_{\tau})}{\Gamma(\mu^{-}\to e^{-}\overline{\nu}_{e}\nu_{\mu})}=\frac{m_{\tau}^{5}}{m_{\mu}^{5}}\left(1-2(f_{\nu\tau}-f_{\nu\mu})\right). \tag{12}\]
Finally, we also look at ratios of the form \(\Gamma(\tau\to M\nu_{\tau})/\Gamma(M\to\ell\nu_{\ell})\) which are given by
\[R_{\tau/M}=\frac{\Gamma(\tau\to M\nu_{\tau})}{\Gamma(M\to\ell\nu_{\ell})}= \frac{1}{2}\frac{m_{\tau}^{3}}{m_{M}m_{\ell}^{2}}\frac{(1-m_{M}^{2}/m_{\tau}^ {2})^{2}}{(1-m_{\ell}^{2}/m_{M}^{2})^{2}}\left(1-2(f_{\nu\tau}-f_{\nu\ell}) \right)\text{\@@cite[cite]{[\@@bibref{}{Aguilar:2019wvj}{}{}]}}. \tag{13}\]
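As a concrete example of this kind of inversion, the sketch below applies the leptonic-decay ratio to \(\Gamma(K\to e\nu)/\Gamma(K\to\mu\nu)\) and recovers, up to neglected radiative corrections, the \(2\,\sigma\) window on \(f_{\nu\mu}-f_{\nu e}\) quoted in Table 15; the masses used are standard inputs assumed for the illustration.

```python
# Worked example (illustrative): the leptonic-decay ratio R of a charged meson,
# with lepton i in the numerator and j in the denominator, inverted to give the
# allowed window on the NP parameters.  Radiative corrections are ignored.
m_K, m_e, m_mu = 0.4937, 0.000511, 0.105658      # GeV

def ratio_SM(mi, mj, m_phi):
    """Helicity-suppressed SM ratio of leptonic widths."""
    return (mi**3 - mi*m_phi**2)**2 / (mj**3 - m_phi**2*mj)**2

R_SM       = ratio_SM(m_e, m_mu, m_K)
R_exp, sig = 2.488e-5, 0.009e-5                  # experimental value from Table 15

# R = R_SM * (1 - 2*(f_nu_e - f_nu_mu))  =>  f_nu_mu - f_nu_e = (R/R_SM - 1)/2
window = [((R_exp + s*2*sig)/R_SM - 1.0)/2.0 for s in (-1, +1)]
print(f"R_SM = {R_SM:.3e}")
print(f"{min(window):.3f} <= f_nu_mu - f_nu_e <= {max(window):.3f}")   # cf. Table 15: -0.019 .. -0.012
```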
The results obtained from these sets of LFU-violating decays are given in Table 17. It
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Process** & **Exp. Bound** & **Constraint** \\ \hline \(\Gamma(W_{L}\to\mu\nu)/\Gamma(W_{L}\to e\nu)\) & \((0.996\pm 0.008)\) & \(-0.006\leq f_{\nu\mu}-f_{\nu e}\leq 0.010\) \\ \hline \(\Gamma(W_{L}\to\tau\nu)/\Gamma(W_{L}\to e\nu)\) & \((1.043\pm 0.024)\) & \(-0.046\leq f_{\nu\tau}-f_{\nu e}\leq 0.003\) \\ \hline \(\Gamma(W_{L}\to\tau\nu)/\Gamma(W_{L}\to\mu\nu)\) & \((1.070\pm 0.026)\) & \(-0.061\leq f_{\nu\tau}-f_{\nu\mu}\leq-0.009\) \\ \hline \(\Gamma(K\to\pi\mu\nu)/\Gamma(K\to\pi e\nu)\) & \((0.6608\pm 0.003)\) & \(0.167\leq f_{\nu\mu}-f_{\nu e}\leq 0.173\) \\ \hline \end{tabular}
\end{table}
Table 14: Lepton flavor universality violation in \(W\) decays and semileptonic decay of K-meson. The \(f\)’s are the new contributions to charged current interactions. A \(2\,\sigma\) deviation is allowed for obtaining the constraints.
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Process** & **Exp. Bound** & **Constraint** \\ \hline \(\Gamma(K\to e\nu)/\Gamma(K\to\mu\nu)\) & \((2.488\pm 0.009)\times 10^{-5}\) & \(-0.019\leq f_{\nu\mu}-f_{\nu e}\leq-0.012\) \\ \hline \(\Gamma(\pi\to e\nu)/\Gamma(\pi\to\mu\nu)\) & \((1.23\pm 0.0023)\times 10^{-4}\) & \(-0.026\leq f_{\nu\mu}-f_{\nu e}\leq-0.019\) \\ \hline \(\Gamma(D_{s}\to\tau\nu)/\Gamma(D_{s}\to\mu\nu)\) & \((10.73\pm 0.69)\) & \(-0.12\leq f_{\nu\tau}-f_{\nu\mu}\leq 0.020\) \\ \hline \end{tabular}
\end{table}
Table 15: LFU from leptonic decays of mesons. The \(f\)’s are the new contributions to charged current interaction.
should be noted that some of the constraints listed in this section appear to disagree with (or exclude) the SM. These correspond to experimental measurements that are not consistent with the SM prediction within the \(2\,\sigma\) uncertainty, as in the case of \(\Gamma(W_{L}\to\tau\nu)/\Gamma(W_{L}\to\mu\nu)\) in Table 14, which does not allow a vanishing NP parameter. Combining all such constraints, we also see that there is no common region of parameter space that can explain all the LFU violating processes simultaneously.
### Mass Difference of Neutral Mesons
In the SM, neutral meson mixing occurs at the one-loop level through the familiar box diagram involving the \(W_{L}\) boson. The presence of the \(W_{R}\) boson and heavy vector-like quarks gives additional contributions to these processes, which can yield significant constraints on the NP parameters, including the mass of \(W_{R}\). The diagrams contributing to kaon mixing are shown in Fig. 7. We only consider the first two diagrams since the contribution from the one with two \(W_{R}\)'s is extremely small.
The \(W_{L}-W_{L}\) box diagram contributes [39]
\[\mathcal{H}_{LL}=\frac{G_{F}^{2}M_{L}^{2}}{4\pi^{2}}(\bar{d}\gamma^{\mu}P_{L}s)^{2}\sum_{i,j}\lambda_{i}^{LL}\lambda_{j}^{LL}\big{\{}(1+\frac{x_{i}x_{j}}{4})I_{2}(x_{i},x_{j};1)-2x_{i}x_{j}I_{1}(x_{i},x_{j};1)\big{\}} \tag{10}\]
while the contribution from \(W_{L}-W_{R}\) diagram is
\[\mathcal{H}_{LR}=\frac{G_{F}^{2}M_{L}^{2}\beta}{2\pi^{2}}\bar{d}P_{L}s\,\bar{d}P_{R}s\sum_{i,j}\lambda_{i}^{LR}\lambda_{j}^{RL}\sqrt{x_{i}x_{j}}\big{\{}(4+\beta x_{i}x_{j})I_{1}(x_{i},x_{j};\beta)-(1+\beta)I_{2}(x_{i},x_{j};\beta)\big{\}} \tag{11}\]
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Process** & **Exp. Bound** & **Constraint** \\ \hline \(\frac{\Gamma(\tau^{-}\to\mu^{-}\overline{\nu}_{\mu}\nu_{\tau})}{\Gamma(\tau^{-} \to e^{-}\overline{\nu}_{e}\nu_{\tau})}\) & \((0.9764\pm 0.003)\) & \(8.8\times 10^{-3}\leq f_{\nu\mu}-f_{\nu e}\leq 1.48\times 10^{-2}\) \\ \hline \(\frac{\Gamma(\tau^{-}\to e^{-}\overline{\nu}_{e}\nu_{\tau})}{\Gamma(\mu^{-} \to e^{-}\overline{\nu}_{e}\nu_{\mu})}\) & \((1.349\pm 0.004)\) & \(-4.42\times 10^{-3}\leq f_{\nu\tau}-f_{\nu\mu}\leq 1.52\times 10^{-3}\) \\ \hline \(\frac{\Gamma(\tau^{+}\to\pi^{+}\nu_{\tau})}{\Gamma(\pi^{+}\to\mu^{+}\nu_{\mu})}\) & \((9704\pm 56)\) & \(-3.07\times 10^{-3}\leq f_{\nu\tau}-f_{\nu\mu}\leq 8.41\times 10^{-3}\) \\ \hline \(\frac{\Gamma(\tau^{+}\to\pi^{+}\nu_{\tau})}{\Gamma(\pi^{+}\to e^{+}\nu_{e})}\) & \((7.89\pm 0.05)\times 10^{7}\) & \(-2.35\times 10^{-2}\leq f_{\nu\tau}-f_{\nu e}\leq 1.04\times 10^{-2}\) \\ \hline \(\frac{\Gamma(\tau^{+}\to K^{+}\nu_{\tau})}{\Gamma(K^{+}\to e^{+}\nu_{e})}\) & \((1.89\pm 0.03)\times 10^{7}\) & \(-2.41\times 10^{-2}\leq f_{\nu\tau}-f_{\nu e}\leq 8.13\times 10^{-3}\) \\ \hline \end{tabular}
\end{table}
Table XIII: Constraints from other LFUV processes. The \(f\)’s are the new contributions to charged current interaction.
where,
\[x_{i}=m_{i}^{2}/M_{L}^{2},\qquad\qquad\beta=M_{L}^{2}/M_{R}^{2} \tag{5.10}\] \[I_{1}(x_{i},x_{j};\beta)= \frac{x_{i}\ln x_{i}}{(1-x_{i})(1-x_{i}\beta)(x_{i}-x_{j})}+(i \leftrightarrow j)-\frac{\beta\ln\beta}{(1-\beta)(1-x_{i}\beta)(1-x_{j}\beta)},\] \[I_{2}(x_{i},x_{j};\beta)= \frac{x_{i}^{2}\ln x_{i}}{(1-x_{i})(1-x_{i}\beta)(x_{i}-x_{j})}+( i\leftrightarrow j)-\frac{\ln\beta}{(1-\beta)(1-x_{i}\beta)(1-x_{j}\beta)}.\]
In the above equations, we need to implement the GIM cancellation as well as the simplification \(m_{u}\to 0\), and \(m_{c}^{2}/M_{L,R}^{2}\) is kept to the first order [40]. The expressions for \(\lambda_{i,j}\) are given in Appendix. D. Among the meson mixing processes, \(K-\overline{K}\) gave significant constraints on the NP parameters. The theoretical lower limit on the mass of \(W_{R}\) boson was found to be \(M_{W_{R}}\gtrsim 2.3\) TeV.
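For completeness, the loop functions \(I_{1,2}\) of Eq. (5.10) are transcribed below as a small Python sketch; the masses entered in the example call are standard values plus an assumed \(M_{W_{R}}=5\) TeV, and serve only to exercise the functions.

```python
# Direct transcription (as a sketch) of the box-diagram loop functions of
# Eq. (5.10).  Both functions are symmetric under x_i <-> x_j.
import math

def I1(xi, xj, beta):
    return ( xi*math.log(xi) / ((1-xi)*(1-xi*beta)*(xi-xj))
           + xj*math.log(xj) / ((1-xj)*(1-xj*beta)*(xj-xi))
           - beta*math.log(beta) / ((1-beta)*(1-xi*beta)*(1-xj*beta)) )

def I2(xi, xj, beta):
    return ( xi**2*math.log(xi) / ((1-xi)*(1-xi*beta)*(xi-xj))
           + xj**2*math.log(xj) / ((1-xj)*(1-xj*beta)*(xj-xi))
           - math.log(beta) / ((1-beta)*(1-xi*beta)*(1-xj*beta)) )

# Example: charm and top running in the W_L-W_R box with an assumed M_R = 5 TeV.
M_L, M_R = 80.4, 5000.0                      # GeV
beta     = M_L**2 / M_R**2
x_c, x_t = (1.27/M_L)**2, (172.5/M_L)**2
print(I1(x_c, x_t, beta), I2(x_c, x_t, beta))
```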
## 6 Constraints on Higgs Couplings
In this section, we tabulate the constraints on the Higgs couplings to fermions. Since there is a mixing between the light Higgs \(h\) and heavy Higgs \(H\), the parts of the Lagrangian given in Eq. (3.13) and Eq. (3.14) are relevant for the processes considered here. Similar to the parity symmetric approximation done in the case of \(Z_{i}\) couplings, the new contributions to Higgs couplings may be written in terms of
\[\mathcal{R} =V_{L_{f}}\rho_{L_{F}}^{\dagger}\rho_{L_{F}}V_{L_{f}}^{\dagger}m_ {f}, \tag{6.1}\] \[\mathcal{R}^{\prime} =V_{R_{f}}\rho_{R_{F}}^{\dagger}\rho_{R_{F}}V_{R_{f}}^{\dagger}m_ {f}^{\dagger},\]
Using this, the Higgs interactions to charged fermions are
\[\mathcal{L}_{h} =\overline{f}h\left(\left\{\frac{\cos\zeta}{\kappa_{L}}K_{1}- \frac{\sin\zeta}{\kappa_{R}}K_{2}^{\dagger}\right\}P_{R}+\left\{\frac{\cos \zeta}{\kappa_{L}}K_{1}^{\dagger}-\frac{\sin\zeta}{\kappa_{R}}K_{2}\right\}P_ {L}\right)f+..., \tag{6.2}\] \[\mathcal{L}_{H} =\overline{f}H\left(\left\{\frac{\sin\zeta}{\kappa_{L}}K_{1}+ \frac{\cos\zeta}{\kappa_{R}}K_{2}^{\dagger}\right\}P_{R}+\left\{\frac{\sin \zeta}{\kappa_{L}}K_{1}^{\dagger}+\frac{\cos\zeta}{\kappa_{R}}K_{2}\right\}P_ {L}\right)f+...,\]
with,
\[K_{1}=\Big{(}m_{f}-\frac{1}{2}\mathcal{R}\Big{)},\ \ K_{2}=\Big{(}m_{f}-\frac{1}{2} \frac{\kappa_{R}^{2}}{\kappa_{L}^{2}}\mathcal{R}\Big{)}. \tag{6.3}\]
Figure 7: Box diagrams contributing to kaon mixing. The \(F\) in the internal line represents both SM and vector-like up-type quarks. The last diagram is qualitatively insignificant compared to the other two.
The mixing angle is set to zero, with the mass of the heavy Higgs being \(\simeq 6.6\text{ TeV}\) when \(M_{Z_{2}}=5\text{ TeV}\). The first set of constraints comes from the deviation of the charged leptonic decays of the Higgs from their SM values, measured by the signal strength \(\mu_{\ell^{+}\ell^{-}}=BR(h\to\ell^{+}\ell^{-})/BR(h\to\ell^{+}\ell^{-})_{\text{SM}}\), with
\[\Gamma(h\to\ell^{+}\ell^{-})=\frac{\sqrt{m_{h}^{2}-4m_{\ell}^{2}}}{16\pi m_{h} ^{2}}\Bigg{(}\left(|C_{L_{h}}|^{2}+|C_{R_{h}}|^{2}\right)(m_{h}^{2}-2m_{\ell}^{2 })-2m_{\ell}^{2}\left(C_{L_{h}}C_{R_{h}}^{*}+C_{L_{h}}^{*}C_{R_{h}}\right) \Bigg{)}, \tag{100}\]
and the SM contribution being
\[\Gamma(h\to\ell^{+}\ell^{-})_{\text{SM}}=\frac{m_{h}m_{\ell}^{2}}{8\pi\kappa_ {L}^{2}}\Big{(}1-\frac{4m_{\ell}^{2}}{m_{h}^{2}}\Big{)}^{3/2}, \tag{101}\]
where the coefficient of \(P_{L(R)}\) is \(C_{L(R)}\). The constraints from the measured signal strengths are tabulated in Table 13. Since the SM-like Higgs can also induce flavor change, we study the decay of the top quark to an up-type quark and the Higgs. The decay rate is given by
\[\begin{split}\Gamma(t\to hq)&=\frac{\sqrt{m_{h}^{4} -2m_{h}^{2}(m_{q}^{2}+m_{t}^{2})+(m_{q}^{2}-m_{t}^{2})^{2}}}{32\pi m_{t}^{3}} \times\\ &\Bigg{(}\left(|C_{L_{h}}|^{2}+|C_{R_{h}}|^{2}\right)(m_{t}^{2}+m _{q}^{2}-m_{h}^{2})+2m_{q}m_{t}\left(C_{L_{h}}C_{R_{h}}^{*}+C_{L_{h}}^{*}C_{R_ {h}}\right)\Bigg{)}.\end{split} \tag{102}\]
The constraints arising from the top decays are quoted in Table 14.
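A minimal numerical sketch of the \(t\to hq\) width above is given below; the flavor-violating coupling value is hypothetical and chosen only to illustrate how the resulting branching ratio compares with the \(t\to hc\) bound of Table 14.

```python
# Hedged sketch of the t -> h q width given above, normalized to the total top
# width.  The coupling value C is a hypothetical input, not a model prediction,
# and real couplings are assumed.
import math

m_t, m_c, m_h = 172.5, 1.27, 125.25      # GeV
Gamma_top     = 1.42                     # GeV, approximate total top width

def gamma_t_to_hq(C_L, C_R, m_q):
    kallen = math.sqrt(m_h**4 - 2*m_h**2*(m_q**2 + m_t**2) + (m_q**2 - m_t**2)**2)
    spin   = (abs(C_L)**2 + abs(C_R)**2)*(m_t**2 + m_q**2 - m_h**2) \
             + 4*m_q*m_t*C_L*C_R          # = 2 m_q m_t (C_L C_R* + c.c.) for real couplings
    return kallen / (32.0*math.pi*m_t**3) * spin

C = 1.0e-2                               # hypothetical flavor-violating coupling
print(f"BR(t -> h c) ~ {gamma_t_to_hq(C, C, m_c)/Gamma_top:.2e}")
```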
Another set of constraints comes from the tree-level meson mixing processes mediated by \(\{h,H\}\) as shown in Fig. 8. The major constraints arise from \(K-\overline{K}\) and \(B-\overline{B}\) mixing mass difference. The effective Hamiltonian leading to meson mixing may be written as
\[\mathcal{H}_{\text{eff}}=\sum_{k=h,H}\Bigg{(}\frac{C_{L_{k}}^{2}(\Lambda)}{2m_ {k}^{2}}\mathcal{O}_{2}+\frac{C_{R_{k}}^{2}(\Lambda)}{2m_{k}^{2}}\widetilde{ \mathcal{O}}_{2}+\frac{C_{L_{k}}(\Lambda)C_{R_{k}}(\Lambda)}{m_{k}^{2}} \mathcal{O}_{4}\Bigg{)}, \tag{103}\]
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Process** & **Exp. Bound** & **Constraint** \\ \hline \(t\to hc\) & \(<1.1\times 10^{-3}\) & \(|(V_{L_{f}}\mathcal{Y}_{F}\mathcal{Y}_{F}^{\dagger}V_{L_{f}}^{\dagger})_{ct}|<2.94(\frac{M_{f}}{\text{TeV}})^{2}\) \\ \hline \(t\to hu\) & \(<1.2\times 10^{-3}\) & \(|(V_{L_{f}}\mathcal{Y}_{F}\mathcal{Y}_{F}^{\dagger}V_{L_{f}}^{\dagger})_{ut}|<1.53(\frac{M_{f}}{\text{TeV}})^{2}\) \\ \hline \end{tabular}
\end{table}
Table 14: Constraints from top decaying to Higgs and up-type quark. The results are quoted as a function of vector-like quark mass \(M_{F}\) (TeV).
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Process** & **Exp. Bound** & **Constraint** \\ \hline \(\mu_{\mu^{+}\mu^{-}}\) & ATLAS\(=\)\(1.2\pm 0.6\)[41] & \(|(V_{L_{l}}\mathcal{Y}_{E}\mathcal{Y}_{E}^{\dagger}V_{L_{l}}^{\dagger})_{\mu\mu}|\leq 32.33(\frac{M_{f}}{0.7\text{ TeV}})^{2}\) \\ \cline{2-3} & CMS\(=\)\(1.19^{+0.41}_{-0.39}\)[42] & \(|(V_{L_{l}}\mathcal{Y}_{E}\mathcal{Y}_{E}^{\dagger}V_{L_{l}}^{\dagger})_{\mu\mu}|\leq 11.63(\frac{M_{f}}{0.7\text{ TeV}})^{2}\) \\ \hline \(\mu_{\tau^{+}\tau^{-}}\) & ATLAS\(=\)\(1.09^{+0.27}_{-0.26}\)[43] & \(|(V_{L_{l}}\mathcal{Y}_{E}\mathcal{Y}_{E}^{\dagger}V_{L_{l}}^{\dagger})_{\tau\tau}|\leq 7.92(\frac{M_{f}}{0.7\text{ TeV}})^{2}\) \\ \hline
\end{table}
Table 13: Constraint from SM-like Higgs decay to charged leptons. The results are quoted as a function of vector-like lepton mass \(M_{L}\) (TeV).
with reference to the operators given in Sec. 4.4. Both the quantities in Table XVI are computed using the magic numbers given in Appendix. C. The \(\Delta M\)'s obtained this way were constrained using the central value of the experimental results. In the case of \(B-\overline{B}\), we also included an analysis with \(\Delta M^{\rm SM+NP}/\Delta M^{\rm SM}\), since there is an attempt to reduce the theoretical uncertainties of the SM prediction of \(\Delta M_{B}\)[33]. This gives a more stringent constraint, quoted in parentheses in Table XVI.
## 7 Neutral Current B-anomalies
The experimental results on several \(b\to s\mu^{+}\mu^{-}\) processes deviate significantly from the Standard Model predictions. Over the years the discrepancies have been observed in branching ratios of \(B\to K^{(*)}\mu^{+}\mu^{-}\), \(B_{s}\to\phi\mu^{+}\mu^{-}\), and \(B_{s}\to\mu^{+}\mu^{-}\), and in the angular distribution of \(B\to K^{*}\mu^{+}\mu^{-}\). In particular, there is a combined 3.1 \(\sigma\) discrepancy with the measurements of the lepton flavor universality (LFU) ratio \(R_{K^{(*)}}=\Gamma(B\to K^{(*)}\mu^{+}\mu^{-})/\Gamma(B\to K^{(*)}e^{+}e^{-})\). The SM calculations of these observables are devoid of any hadronic uncertainties as they cancel out in the ratio leading to clean and highly accurate predictions [21]:
\[R_{K}^{SM} =1.0004^{+0.0008}_{-0.0007}, \tag{10}\] \[R_{K^{*}}^{SM} =\begin{cases}0.906\pm 0.028&q^{2}\in[0.045,1.1]\ {\rm GeV}^{2},\\ 1.00\pm 0.01&q^{2}\in[1.1,6]\ {\rm GeV}^{2},\end{cases}\]
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Process** & **Exp. Bound (GeV)** & **Constraint** \\ \hline \(K-\overline{K}\) & \((3.484\pm 0.006)\times 10^{-15}\) & \(|(V_{L_{f}}){\cal Y}_{F}{\cal Y}_{F}^{\dagger}V_{L_{f}}^{\dagger})_{ds}|\leq 1.06(\frac{M_{F}}{1{\rm TeV}})^{2}\) \\ \hline \(B-\overline{B}\) & \((3.334\pm 0.013)\times 10^{-13}\) & \(|(V_{L_{f}}){\cal Y}_{F}{\cal Y}_{F}^{\dagger}V_{L_{f}}^{\dagger})_{db}|\leq 0.2 14(\frac{M_{F}}{1{\rm TeV}})^{2}\left(4.21\times 10^{-2}(\frac{M_{F}}{1{\rm TeV}}) ^{2}\right)\) \\ \hline \end{tabular}
\end{table}
Table XVI: Constraints from mass differences of neutral mesons. The results are quoted as a function of vector-like quark mass \(M_{F}\)(TeV). The NP contributions were constrained against the central value of experimental results. The constraint given in the parenthesis is obtained using \(\Delta M^{\rm SM+NP}/\Delta M^{\rm SM}\).
Figure 8: Tree-level diagram of scalar mediated \(K-\overline{K}\) mixing.
where \(q^{2}\) denotes the squared invariant mass of the dilepton pair. The most precise measurements of these ratios so far have been by LHCb:
\[R_{K} =0.846^{+0.042+0.013}_{-0.039-0.012}\quad q^{2}\in[1.1,6]~{}\text{ GeV}^{2},\ [25] \tag{112}\] \[R_{K^{*}} =\begin{cases}0.66^{+0.11}_{-0.07}\pm 0.024&q^{2}\in[0.045,1.1]~{} \text{GeV}^{2},\\ 0.69^{+0.113}_{-0.069}\pm 0.047&q^{2}\in[1.1,6]~{}\text{GeV}^{2}.\end{cases}\quad[22]\]
\(R_{K^{*}}\) measurements have also been done by Belle [23]
\[R_{K^{*}}=\begin{cases}0.52^{+0.36}_{-0.26}\pm 0.05&q^{2}\in[0.045,1.1]~{} \text{GeV}^{2},\\ 0.96^{+0.45}_{-0.29}\pm 0.11&q^{2}\in[1.1,6]~{}\text{GeV}^{2},\\ 0.99^{+0.27}_{-0.21}\pm 0.10&q^{2}\in[0.1,8]~{}\text{GeV}^{2},\\ 1.18^{+0.52}_{-0.32}\pm 0.10&q^{2}\in[15,19]~{}\text{GeV}^{2},\\ 0.94^{+0.17}_{-0.14}\pm 0.08&q^{2}\in[0.045,]~{}\text{GeV}^{2},\end{cases} \tag{113}\]
but with considerably larger errors compared to LHCb. Another important discrepancy has been observed in the branching ratio of \(B_{s}\to\mu^{+}\mu^{-}\) whose theoretical and experimental values are given in Table 7. These deviations of experimental measurements from the SM predictions, referred to as the neutral current B-anomalies, could be clear signals of new physics. In this section, we explore the contribution to these processes from the LRSM with universal seesaw in the phenomenologically interesting parity symmetric version.
The contributions to neutral current B-anomalies in this model arise from the Lagrangian
\[\mathcal{L}_{Z_{i}}=\sum_{i=1}^{2}\frac{1}{2}M_{Z_{i}}^{2}(Z_{i \mu})^{2}+\left(C_{L_{i}}^{bs}(\overline{b}_{L}\gamma^{\mu}s_{L})+C_{R_{i}}^{ bs}(\overline{b}_{R}\gamma^{\mu}s_{R})+C_{L_{i}}^{\ell\ell}(\overline{\ell}_{L} \gamma^{\mu}\ell_{L})+C_{R_{i}}^{\ell\ell}(\overline{\ell}_{R}\gamma^{\mu} \ell_{R})\right)Z_{i_{\mu}}, \tag{114}\]
where \(\ell=\{e,\mu\}\). Integrating out \(Z_{1,2}\) at tree-level gives the effective Lagrangian:
\[\mathcal{L}_{\text{eff}}= -\sum_{i=1}^{2}\frac{1}{M_{Z_{i}}^{2}}\Big{[}C_{L_{i}}^{bs}C_{L_{ i}}^{\ell\ell}(\overline{b}_{L}\gamma^{\mu}s_{L})(\overline{\ell}_{L}\gamma_{ \mu}\ell_{L})+C_{L_{i}}^{bs}C_{R_{i}}^{\ell\ell}(\overline{b}_{L}\gamma^{\mu}s _{L})(\overline{\ell}_{R}\gamma_{\mu}\ell_{R}) \tag{115}\] \[+C_{R_{i}}^{bs}C_{L_{i}}^{\ell\ell}(\overline{b}_{R}\gamma^{\mu} s_{R})(\overline{\ell}_{L}\gamma_{\mu}\ell_{L})+C_{R_{i}}^{bs}C_{R_{i}}^{\ell\ell}( \overline{b}_{R}\gamma^{\mu}s_{R})(\overline{\ell}_{R}\gamma_{\mu}\ell_{R})\] \[+\frac{1}{2}(C_{L_{i}}^{bs})^{2}(\overline{b}_{L}\gamma_{\mu}s_{L })^{2}+\frac{1}{2}(C_{R_{i}}^{bs})^{2}(\overline{b}_{R}\gamma_{\mu}s_{R})^{2} +C_{L_{i}}^{bs}C_{R_{i}}^{bs}(\overline{b}_{L}\gamma_{\mu}s_{L})(\overline{b} _{R}\gamma^{\mu}s_{R})\Big{]}.\]
Compared to Eqns. (114), (112) and Appendix.1, certain changes have been made to keep the notations simple. Firstly, the factor \(g_{L}/\cos\theta_{W}\) has been absorbed into the coefficients \(C_{L,R_{i}}\). Furthermore, \(C_{L,R_{i}}^{bs}\equiv\widetilde{C}_{L,R_{i}}^{bs}\) while \(C_{L,R_{i}}^{\ell\ell}\equiv\widetilde{C}_{L,R_{i}}^{\ell\ell}\) defined in Appendix.1. From the two expressions above, it is evident that the couplings relevant to resolving neutral current B-anomalies also contribute to \(B_{s}-\overline{B}_{s}\) mixing as well as \(Z_{i}\) decays. Therefore, the main constraints arise from
* The \(B_{s}-\overline{B_{s}}\) mass difference: \(\Delta M_{s}^{\text{NP+SM}}/\Delta M_{s}^{\text{SM}}\),
* Lepton flavour universality violation of \(Z\) decays: \(\Gamma(Z\to\mu^{+}\mu^{-})/\Gamma(Z\to e^{+}e^{-})\), and the individual branching ratios.
One would also need to consider the following observables in reconciling the B-anomalies:
* Muonic decay of \(B_{s}\) meson: \(\text{BR}(B_{s}\to\mu^{+}\mu^{-})^{\text{NP+SM}}/\text{BR}(B_{s}\to\mu^{+}\mu^{-} )^{\text{SM}}\),
* Mixing induced \(\mathcal{CP}\) asymmetry given by \(A^{\text{mix}}_{\mathcal{CP}}(B_{s}\to J/\psi\phi)=\sin(\phi_{\Delta}-2\beta_{s})\)[44] with the value \(-0.021\pm 0.031\)[45], where, \(\phi_{\Delta}\) is defined as \(\arg\left(\frac{\Delta M^{\text{NP+SM}}_{s}}{\Delta M^{\text{SM}}_{s}}\right)\) with \(\beta_{s}=0.01843^{+0.00048}_{-0.00034}\)[46].
Assuming the couplings are real, the constraint from \(B_{s}-\overline{B}_{s}\) mixing, allowing \(2\,\sigma\) deviation, is
\[\frac{\Delta M^{\text{NP+SM}}_{s}}{\Delta M^{\text{SM}}_{s}}\leq\frac{\Delta M ^{\text{exp}}_{s}}{\Delta M^{\text{SM}}_{s}}\equiv 0.95\pm 0.04\Rightarrow|R_{bs}| \leq 1.41\times 10^{-5}. \tag{7.6}\]
The LFU violation of Z decays can arise due to NP in either electron- or muon-sector, or both. For simplicity, we assume that the NP appears only in one of these sectors at a time leading to the following constraints, allowing \(2\,\sigma\) deviation:
\[\frac{\Gamma(Z\to\mu^{+}\mu^{-})}{\Gamma(Z\to e^{+}e^{-})}\Rightarrow\begin{cases} |R_{\mu\mu}|\leq 1.74\times 10^{-3},\\ |R_{ee}|\leq 1.81\times 10^{-3}.\end{cases} \tag{7.7}\]
From these constraints, the largest allowed values of the conventional Wilson coefficients are tabulated in Table 11 along with the respective LFU ratios. The NP WCs in terms of the model parameters are [49]
\[\begin{split} C^{\ell}_{9\,(10)}&=\mp\frac{4\sqrt{2}\pi}{8G_{F}\alpha_{\text{em}}V_{tb}V_{ts}^{*}}\sum_{i=1}^{2}\left(\frac{1}{M_{Z_{i}}^{2}}C^{bs}_{L_{i}}\left(C^{\ell\ell}_{L_{i}}\pm C^{\ell\ell}_{R_{i}}\right)\right),\\ C^{{}^{\prime}\ell}_{9\,(10)}&=\mp\frac{4\sqrt{2}\pi}{8G_{F}\alpha_{\text{em}}V_{tb}V_{ts}^{*}}\sum_{i=1}^{2}\left(\frac{1}{M_{Z_{i}}^{2}}C^{bs}_{R_{i}}\left(C^{\ell\ell}_{L_{i}}\pm C^{\ell\ell}_{R_{i}}\right)\right).\end{split} \tag{7.8}\]
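The mapping of Eq. (7.8) can be organized as in the following sketch; the quark- and lepton-sector couplings entered here are placeholders (not fit results), and \(V_{tb}V_{ts}^{*}\) is taken real for simplicity.

```python
# Structure-only sketch of Eq. (7.8): Z_{1,2} couplings mapped onto the
# conventional Wilson coefficients.  All coupling values are placeholders.
import math

G_F, alpha_em = 1.1663787e-5, 1.0/127.9
Vtb_Vts       = 0.0404                          # taken real (assumption)
M             = {1: 91.19, 2: 5000.0}           # GeV, M_Z1 and an assumed M_Z2

C_bs_L = {1: 1.0e-5, 2: 5.0e-4}                 # placeholder quark-sector couplings
C_bs_R = {1: 0.0,    2: 0.0}
C_ll_L = {1: -0.20,  2: 0.05}                   # placeholder lepton-sector couplings
C_ll_R = {1:  0.17,  2: 0.03}

norm = 4.0*math.sqrt(2.0)*math.pi / (8.0*G_F*alpha_em*Vtb_Vts)

C9   = -norm*sum(C_bs_L[i]*(C_ll_L[i] + C_ll_R[i])/M[i]**2 for i in (1, 2))
C10  = +norm*sum(C_bs_L[i]*(C_ll_L[i] - C_ll_R[i])/M[i]**2 for i in (1, 2))
C9p  = -norm*sum(C_bs_R[i]*(C_ll_L[i] + C_ll_R[i])/M[i]**2 for i in (1, 2))
C10p = +norm*sum(C_bs_R[i]*(C_ll_L[i] - C_ll_R[i])/M[i]**2 for i in (1, 2))
print(C9, C10, C9p, C10p)
```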
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Observable** & **SM prediction** & **Experimental Value** \\ \hline \(\frac{\Gamma(Z\to\mu^{+}\mu^{-})}{\Gamma(Z\to e^{+}e^{-})}\) & \(\simeq 1\) & \(1.0001\pm 0.0024\). \\ \hline \(\Delta M_{s}\) & \((18.77\pm 0.86)\text{ps}^{-1}\) & \((17.749\pm 0.020)\text{ps}^{-1}\) \\ \hline \(BR(Z\to e^{+}e^{-})\) & \((3.3663\pm 0.0012)\%\) & \((3.3632\pm 0.0042)\%\) \\ \hline \(BR(Z\to\mu^{+}\mu^{-})\) & \((3.3663\pm 0.0012)\%\) & \((3.3662\pm 0.0066)\%\) \\ \hline \(BR(B_{s}\to\mu^{+}\mu^{-})\) & \((3.65\pm 0.23)\times 10^{-9}\) & \((3.09^{+0.46}_{-0.43}{}^{+0.15}_{-0.11})\times 10^{-9}\)[47, 48] \\ \hline \end{tabular}
\end{table}
Table 11: Observables constraining the model parameters that contribute to resolving neutral current B-anomalies.
We see that the NP in electron-sector can improve the theoretical prediction of \(R_{K^{(*)}}\) slightly due to the large right-handed current (\(C_{9}^{{}^{\prime}e}=C_{10}^{{}^{\prime}e}\)) (see also Refs. [50; 51; 52]) contribution in the denominator. This, however, is not sufficient to explain all the \(b\to s\mu^{+}\mu^{-}\) related anomalies which points towards NP in the muonic sector. The NP in the muon-sector in this model worsens the \(R_{K^{(*)}}\) prediction due to the large right-handed current \(C_{9}^{{}^{\prime}\mu}=C_{10}^{{}^{\prime}\mu}\) appearing in the numerator, although it can explain the observed \(\text{BR}(B_{s}\to\mu^{+}\mu^{-})\). Since the global fit to the neutral current B-anomalies alludes to a muon-specific left-handed current [49], box diagram contribution mediated by left-handed scalar field \(\sigma_{L}\) (assuming no mixing with \(\sigma_{R}\) such that \((m_{\sigma_{L}}=m_{h})\ll(m_{\sigma_{R}}=m_{H})\)), as shown in Fig. 9 (a), leading to \(C_{9}^{\mu}=-C_{10}^{\mu}\) was also explored. The NP WC from the box diagram is
\[C_{9}^{\mu}=-C_{10}^{\mu}=-\frac{\sqrt{2}}{128\pi G_{F}\alpha_{\text{em}}}\frac {1}{V_{tb}V_{ts}^{*}}\frac{|y_{\mu}|^{2}y_{s}y_{b}^{*}}{4m_{\sigma_{L}}^{2}} \mathcal{F}(x_{i},x_{j}), \tag{7.9}\]
where,
\[\mathcal{F}(x_{i},x_{j})=\frac{x_{i}^{2}\ln x_{i}}{(x_{i}-1)^{2}(x_{i}-x_{j})} -\frac{x_{j}^{2}\ln x_{j}}{(x_{j}-1)^{2}(x_{i}-x_{j})}+\frac{1}{(x_{j}-1)(x_{i }-1)}, \tag{7.10}\]
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Wilson coefficient** & **Value** & **Observable** \\ \hline \hline \(C_{9}^{e}=-C_{10}^{e}\) & \(-2.74\times 10^{-4}\) & \(R_{K}=0.95\) \\ \hline \(C_{9}^{{}^{\prime}e}=C_{10}^{{}^{\prime}e}\) & \(-9.03\times 10^{-1}\) & \(\text{BR}(B_{s}\to\mu^{+}\mu^{-})=3.65\times 10^{-9}\) \\ \hline \hline \(C_{9}^{\mu}=-C_{10}^{\mu}\) & \(-2.63\times 10^{-4}\) & \(R_{K}=1.04\) \\ \hline \(C_{9}^{{}^{\prime}\mu}=C_{10}^{{}^{\prime}\mu}\) & \(-8.68\times 10^{-1}\) & \(R_{K^{*}}=1.03\) \\ \hline \(C_{9}^{{}^{\prime}\mu}=C_{10}^{{}^{\prime}\mu}\) & \(-8.68\times 10^{-1}\) & \(\text{BR}(B_{s}\to\mu^{+}\mu^{-})=2.35\times 10^{-9}\) \\ \hline \end{tabular}
\end{table}
Table 15: Maximum values of NP Wilson coefficients (Eq. (7.8)) allowed by the model parameters consistent with \(B_{s}-\overline{B}_{s}\) mixing (Eq. (7.6)) and LFUV of \(Z\) decays (Eq. (7.7)) and the corresponding extremum values of the observable. Note that NP from only one of the electron or muon sectors is turned on at a time.
Figure 9: Box diagram contributing to \(C_{9}\) from \(\sigma_{L}\) mediator (a) and \(B_{s}-\overline{B}_{s}\) mixing arising from the same mediator (b). \(F\) represents VL-down type quark whereas \(E\) represents VL-lepton.
with \(x_{i}=\frac{m_{F_{i}}}{m_{\sigma_{L}}}\) and \(x_{j}=\frac{m_{E_{j}}}{m_{\sigma_{L}}}\). Here, \(y_{\mu}\), \(y_{s}\) and \(y_{b}\) are Yukawa couplings of \(\mu_{L}\), \(s_{L}\) and \(b_{L}\), respectively, to \(\sigma_{L}\). The coupling of quarks \(b\) and \(s\) to \(\sigma_{L}\) and down-type VLFs lead to \(B_{s}-\overline{B}_{s}\) mixing as shown in Fig. 9 (b). The mass difference (procedure for the calculation is described in Sec. 4.4 and the magic numbers are given in Appendix. B.3) is given by
\[\Delta M_{s}=\frac{0.86}{32\pi^{2}}\frac{|y_{b}|^{2}|y_{s}|^{2}}{4m_{\sigma_{L }}^{2}}\frac{8}{3}f_{B_{s}}^{2}M_{B_{s}}B_{1}(\mu_{b})\mathcal{F}(x_{i},x_{j}), \tag{7.11}\]
with \(x_{i}=\frac{m_{F_{i}}}{m_{\sigma_{L}}}\) and \(x_{j}=\frac{m_{F_{j}}}{m_{\sigma_{L}}}\). The constraint arising from \(\Delta M_{s}\) is too severe to allow the large values of the WCs required to explain the observed \(R_{K^{(*)}}\). We conclude that this model, although it has room to improve the \(R_{K^{(*)}}\) prediction, cannot completely resolve the \(b\to s\mu^{+}\mu^{-}\) anomaly.
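For illustration, the box-diagram Wilson coefficient and its loop function are transcribed below; the Yukawa couplings and vector-like fermion masses are hypothetical inputs used only to exercise Eqs. (7.9)-(7.10), with the \(x_{i,j}\) defined as mass ratios as stated above.

```python
# Sketch of Eqs. (7.9)-(7.10): the sigma_L box contribution to C_9 = -C_10.
# y_mu, y_s, y_b and the VLF masses below are hypothetical, not fit values.
import math

def F_box(xi, xj):
    """Loop function of Eq. (7.10)."""
    return ( xi**2*math.log(xi)/((xi-1)**2*(xi-xj))
           - xj**2*math.log(xj)/((xj-1)**2*(xi-xj))
           + 1.0/((xj-1)*(xi-1)) )

G_F, alpha_em, Vtb_Vts = 1.1663787e-5, 1.0/127.9, 0.0404
m_sigmaL = 125.25                              # GeV, sigma_L taken degenerate with h
m_F, m_E = 1500.0, 1200.0                      # GeV, hypothetical VL quark / lepton masses
y_mu, y_s, y_b = 0.5, 0.1, 0.1                 # hypothetical Yukawa couplings

xi, xj = m_F/m_sigmaL, m_E/m_sigmaL            # mass ratios, as defined in the text
C9 = ( -math.sqrt(2.0)/(128.0*math.pi*G_F*alpha_em)
       / Vtb_Vts * abs(y_mu)**2*y_s*y_b / (4.0*m_sigmaL**2) * F_box(xi, xj) )
print(f"C_9^mu = -C_10^mu = {C9:.3f}")
```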
## 8 Anomalous Magnetic Moment of Muon
The anomalous magnetic moment of the muon, \(a_{\mu}=(g_{\mu}-2)/2\), is predicted to be \(a_{\mu}^{\rm SM}=116591810(43)\times 10^{-11}\)[53; 54; 55; 56]. The measurement of \(a_{\mu}\) at the Fermilab National Accelerator Laboratory (FNAL), \(a_{\mu}({\rm FNAL})=116592040(54)\times 10^{-11}\)[57], which agrees with the previous Brookhaven National Laboratory (BNL) E821 measurement [58; 59], is at odds with the SM prediction. The difference, \(\Delta a_{\mu}=a_{\mu}^{\rm exp}-a_{\mu}^{\rm th}\simeq(251\pm 59)\times 10^{-11}\), is a \(4.2\,\sigma\) discrepancy. (An ambiguity emerged recently with the results from the BMW collaboration [60], which agree with the experimental measurement within 1.6 \(\sigma\).) The discrepancy is another major hint towards BSM physics. The neutral scalars \(h\) and \(H\), and the neutral gauge bosons \(Z_{1}\) and \(Z_{2}\), in the LRSM with universal seesaw can mediate significant chirally enhanced one-loop corrections to \(a_{\mu}\), as shown in Fig. 10, potentially resolving the muon AMM anomaly.
The corrections to AMM arise from the following Lagrangian:
\[\mathcal{L} \supset g_{L}\cos\theta_{W}\overline{\mu}\gamma^{\mu}Z_{1_{\mu}} \left[\left\{\frac{\sin\xi}{2\sqrt{\cos 2\theta_{W}}}\frac{g_{Y}^{2}}{g_{L}^{2}}- \cos\xi\left(\frac{1}{2}+\frac{1}{2}\frac{g_{Y}^{2}}{g_{L}^{2}}\right)\right\} V_{L}\rho_{L}^{\dagger}\,{\rm P}_{\rm L}+\frac{\kappa_{R}\sin\xi}{2\kappa_{L} \sqrt{\cos 2\theta_{W}}}V_{L}\rho_{L}^{\dagger}\,{\rm P}_{\rm R}\right]F\] \[-g_{L}\cos\theta_{W}\overline{\mu}\gamma^{\mu}Z_{2_{\mu}}\left[ \left\{\frac{\cos\xi}{2\sqrt{\cos 2\theta_{W}}}\frac{g_{Y}^{2}}{g_{L}^{2}}+ \sin\xi\left(\frac{1}{2}+\frac{1}{2}\frac{g_{Y}^{2}}{g_{L}^{2}}\right)\right\} V_{L}\rho_{L}^{\dagger}{\rm P}_{\rm L}-\frac{\kappa_{R}\cos\xi}{2\kappa_{L}\sqrt{ \cos 2\theta_{W}}}V_{L}\rho_{L}^{\dagger}{\rm P}_{\rm R}\right]F\]
Figure 10: Diagrams leading to correction to AMM.
\[+\overline{\mu}\,h\left[\frac{\cos\zeta}{\sqrt{2}}\,\left(\left\{V_{L} \mathcal{Y}-\frac{1}{2}(V_{L}\mathcal{Y}\,\rho_{L}\rho_{L}^{\dagger}\frac{\kappa _{R}^{2}}{\kappa_{L}^{2}}+V_{L}\rho_{L}^{\dagger}\rho_{L}\mathcal{Y})\right\} \mathrm{P}_{\mathrm{R}}-\left\{V_{L}\rho_{L}^{\dagger}\mathcal{Y}^{\dagger} \rho_{L}^{\dagger}\frac{\kappa_{R}}{\kappa_{L}}\right\}\mathrm{P}_{\mathrm{L}}\right)\] \[-\frac{\sin\zeta}{\sqrt{2}}\left(\left\{V_{L}\mathcal{Y}-\frac{1 }{2}(V_{L}\mathcal{Y}\,\rho_{L}\rho_{L}^{\dagger}+V_{L}\frac{\kappa_{R}^{2}}{ \kappa_{L}^{2}}\rho_{L}^{\dagger}\rho_{L}\mathcal{Y})\right\}\mathrm{P}_{ \mathrm{L}}-\left\{V_{L}\rho_{L}^{\dagger}\mathcal{Y}^{\dagger}\rho_{L}^{ \dagger}\frac{\kappa_{R}}{\kappa_{L}}\right\}\mathrm{P}_{\mathrm{R}}\right) \right]F\] \[+\overline{\mu}\,H\left[\frac{\sin\zeta}{\sqrt{2}}\,\left(\left\{ V_{L}\mathcal{Y}-\frac{1}{2}(V_{L}\mathcal{Y}\,\rho_{L}\rho_{L}^{\dagger}\frac{ \kappa_{R}^{2}}{\kappa_{L}^{2}}+V_{L}\rho_{L}^{\dagger}\rho_{L}\mathcal{Y}) \right\}\mathrm{P}_{\mathrm{R}}-\left\{V_{L}\rho_{L}^{\dagger}\mathcal{Y}^{ \dagger}\rho_{L}^{\dagger}\frac{\kappa_{R}}{\kappa_{L}}\right\}\mathrm{P}_{ \mathrm{L}}\right)\right.\] \[\left.+\frac{\cos\zeta}{\sqrt{2}}\left(\left\{V_{L}\mathcal{Y}- \frac{1}{2}(V_{L}\mathcal{Y}\,\rho_{L}\rho_{L}^{\dagger}+V_{L}\frac{\kappa_{R} ^{2}}{\kappa_{L}^{2}}\rho_{L}^{\dagger}\rho_{L}\mathcal{Y})\right\}\mathrm{P} _{\mathrm{L}}-\left\{V_{L}\rho_{L}^{\dagger}\mathcal{Y}^{\dagger}\rho_{L}^{ \dagger}\frac{\kappa_{R}}{\kappa_{L}}\right\}\mathrm{P}_{\mathrm{R}}\right) \right]F, \tag{113}\]
where \(\mathrm{P}_{\mathrm{L,R}}\) are the left- and right-handed projection operators, and \(F\) represents the heavy VL-leptons. Comparing with the general form of the interaction Lagrangian
\[\mathcal{L}\supset\sum_{F,X}\overline{\mu}\left[C_{V}\gamma^{\mu}+C_{A}\gamma ^{\mu}\gamma^{5}\right]F\,X_{\mu}+\sum_{F,H}\overline{\mu}\left[C_{S}+C_{P} \gamma^{5}\right]F\,H\,, \tag{114}\]
the corrections to the AMM [61], arising from the VL-lepton mass enhancements and computed under the assumption \(m_{\mu}\to 0\), are:
\[\left[a_{\mu}\right]_{X}= \frac{m_{\mu}m_{F}}{4\pi^{2}}(C_{A}^{2}-C_{V}^{2})\mathcal{F}(m_{ F},m_{X}) \tag{115}\] \[\left[a_{\mu}\right]_{H}= \frac{m_{\mu}m_{F}}{8\pi^{2}}(C_{S}^{2}-C_{P}^{2})\mathcal{G}(m_{ F},m_{H})\]
with
\[\mathcal{F}(m_{F},m_{X})=\frac{\left[m_{F}^{6}-4m_{X}^{6}+3m_{X}^{4}m_{F}^{2}+6m_{X}^{4}m_{F}^{2}\ln\frac{m_{X}^{2}}{m_{F}^{2}}\right]}{4m_{X}^{2}(m_{X}^{2}-m_{F}^{2})^{3}} \tag{116}\] \[\mathcal{G}(m_{F},m_{H})=\frac{\left[m_{F}^{4}+3m_{H}^{4}-4m_{H}^{2}m_{F}^{2}+2m_{H}^{4}\ln\frac{m_{F}^{2}}{m_{H}^{2}}\right]}{2(m_{F}^{2}-m_{H}^{2})^{3}}\]
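For concreteness, the loop functions and the chirally enhanced contributions above can be evaluated numerically. The following is a minimal Python sketch (our own illustration; the function names and the choice of GeV units are ours, not the paper's):

```python
import numpy as np

def F_gauge(mF, mX):
    """Gauge-boson loop function F(m_F, m_X) defined above (masses in GeV)."""
    num = (mF**6 - 4*mX**6 + 3*mX**4*mF**2
           + 6*mX**4*mF**2*np.log(mX**2/mF**2))
    return num / (4*mX**2*(mX**2 - mF**2)**3)

def G_scalar(mF, mH):
    """Scalar loop function G(m_F, m_H) defined above (masses in GeV)."""
    num = (mF**4 + 3*mH**4 - 4*mH**2*mF**2
           + 2*mH**4*np.log(mF**2/mH**2))
    return num / (2*(mF**2 - mH**2)**3)

def a_mu_gauge(m_mu, mF, mX, CV, CA):
    """Chirally enhanced gauge-boson contribution [a_mu]_X."""
    return m_mu*mF/(4*np.pi**2) * (CA**2 - CV**2) * F_gauge(mF, mX)

def a_mu_scalar(m_mu, mF, mH, CS, CP):
    """Chirally enhanced scalar contribution [a_mu]_H."""
    return m_mu*mF/(8*np.pi**2) * (CS**2 - CP**2) * G_scalar(mF, mH)
```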
A single-family mixing between the muon and the corresponding VLF is not enough to resolve the anomaly in the AMM due to the constraint from the muon mass. To avoid this, we consider a case where the muon mixes with two VL-leptons in a basis where the muon mass is negligible:
\[\left(\overline{\mu}_{L}\ \overline{E}_{e_{L}}\ \overline{E}_{\mu_{L}}\right) \begin{pmatrix}0&0&y_{\mu}\kappa_{L}\\ 0&0&M_{1}\\ y_{\mu}\kappa_{R}&M_{1}&M_{2}\end{pmatrix}\begin{pmatrix}\mu_{R}\\ E_{e_{R}}\\ E_{\mu_{R}}\end{pmatrix}, \tag{117}\]
such that \(y_{\mu}\kappa_{L}\ll\{M_{1},\,M_{2}\}\). The mixing matrix \(\mathcal{M}_{\mu}\) is diagonalized by a bi-unitary transform of the form
\[\mathrm{Diag}(0,\,M_{E},\,M_{M})=U_{L}\mathcal{M}_{\mu}U_{R}^{\dagger} \tag{118}\]
where,
\[U_{X=\{L,\,R\}}=\begin{pmatrix}c_{X_{1}}&s_{X_{1}}&0\\ -c_{X_{2}}s_{X_{1}}&c_{X_{1}}c_{X_{2}}&s_{X_{2}}\\ -s_{X_{1}}s_{X_{2}}&-c_{X_{1}}s_{X_{2}}&c_{X_{2}}\end{pmatrix}. \tag{119}\]
Here, \(c\left(s\right)\) stand for \(\cos\left(\sin\right)\). The mixing angles and the mass eigenstates are
\[\begin{split} L_{1}&=\arctan\biggl(\frac{-y_{\mu}\kappa_{L}}{M_{1}}\biggr),\qquad L_{2}=\frac{1}{2}\arctan\left(-\frac{2M_{2}\sqrt{M_{1}^{2}+y_{\mu}^{2}\kappa_{L}^{2}}}{M_{2}^{2}+(\kappa_{R}^{2}-\kappa_{L}^{2})y_{\mu}^{2}}\right)\\ R_{1}&=\arctan\biggl(\frac{-y_{\mu}\kappa_{R}}{M_{1}}\biggr),\qquad R_{2}=\frac{1}{2}\arctan\left(-\frac{2M_{2}\sqrt{M_{1}^{2}+y_{\mu}^{2}\kappa_{R}^{2}}}{M_{2}^{2}-(\kappa_{R}^{2}-\kappa_{L}^{2})y_{\mu}^{2}}\right)\\ M_{E,\,M}^{2}&=\frac{1}{2}\left(2M_{1}^{2}+M_{2}^{2}+(\kappa_{L}^{2}+\kappa_{R}^{2})y_{\mu}^{2}\mp\sqrt{4M_{2}^{2}(M_{1}^{2}+\kappa_{L}^{2}y_{\mu}^{2})+\left(M_{2}^{2}+(\kappa_{R}^{2}-\kappa_{L}^{2})y_{\mu}^{2}\right)^{2}}\right)\end{split} \tag{8.8}\]
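The closed-form angles and masses are easy to cross-check numerically: a bi-unitary (singular value) decomposition of the mass matrix above must return one vanishing singular value (the massless muon in this basis) and the two heavy masses \(M_{E},M_{M}\). A minimal sketch with illustrative parameter values of our own choosing (in TeV):

```python
import numpy as np

y_mu, kL, kR = 1.0, 0.246, 10.0          # illustrative values; kL set to the SM-like VEV scale
M1, M2 = 2.0, 3.0                        # bare VL-lepton masses (TeV)

Mmat = np.array([[0.0,      0.0, y_mu*kL],
                 [0.0,      0.0, M1     ],
                 [y_mu*kR,  M1,  M2     ]])

# Bi-unitary diagonalization Diag = U_L M U_R^dagger, obtained here via an SVD.
sing = np.linalg.svd(Mmat, compute_uv=False)
print(sorted(sing))                      # smallest value ~ 0 (massless muon)

# Closed-form heavy masses from the expression above (note the squares on the left-hand side).
S = 2*M1**2 + M2**2 + (kL**2 + kR**2)*y_mu**2
D = np.sqrt(4*M2**2*(M1**2 + kL**2*y_mu**2)
            + (M2**2 + (kR**2 - kL**2)*y_mu**2)**2)
ME, MM = np.sqrt((S - D)/2), np.sqrt((S + D)/2)
print(ME, MM)                            # matches the two non-zero singular values
```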
The mixing between \(Z_{L}-Z_{R}\) can be ignored such that \(Z_{1\left(2\right)}\equiv Z_{L\left(R\right)}\), whereas \(\sigma_{L}-\sigma_{R}\) mixing as of Eq. (2.6) is required to obtain a chiral enhancement. The total correction to AMM is
\[\begin{split} a_{\mu}=&-\frac{m_{\mu}}{4\pi^{2}} \biggl{[}\frac{g_{L}^{4}\tan^{2}\theta_{W}}{4(g_{L}^{2}-g_{Y}^{2})}c_{L_{1}}c_ {R_{1}}s_{L_{1}}s_{R_{1}}\left\{c_{L_{2}}c_{R_{2}}M_{E}\mathcal{F}(M_{E},m_{Z_ {R}})+s_{L_{2}}s_{R_{2}}M_{M}\mathcal{F}(M_{M},m_{Z_{R}})\right\}\\ &\qquad\qquad\frac{y_{\mu}^{2}\sin 2\zeta}{8}c_{L_{1}}c_{R_{1}} \left\{s_{L_{2}}s_{R_{2}}M_{E}\mathcal{G}(M_{E},m_{h})+c_{L_{2}}c_{R_{2}}M _{E}\mathcal{G}(M_{M},m_{h})\right.\\ &\qquad\qquad\qquad\qquad\left.-s_{L_{2}}s_{R_{2}}M_{E}\mathcal{ G}(M_{E},m_{H})-c_{L_{2}}c_{R_{2}}M_{E}\mathcal{G}(M_{M},m_{H})\right\} \biggl{]}\end{split} \tag{8.10}\]
The scatter plot in Fig. 11 shows that the correction to AMM is not large enough to explain the \(4.2\,\sigma\) discrepancy for scalar masses in the experimentally accessible (\(\leq\mathcal{O}\) (TeV)) range.
Figure 11: Scatter plot of the correction to the AMM as a function of the \(Z_{R}\) mass for different values of the heavy Higgs mass. The Yukawa coupling is fixed to \(y_{\mu}=1\) and the bare masses \(M_{1,2}\) were varied between \(0.5\) TeV and \(10\) TeV. The shaded regions lie outside the required \(2\,\sigma\) range of \(a_{\mu}\). The region to the left of the vertical red line is excluded by the lower bound on the \(Z_{R}\) mass [27].
## 9 Conclusion
This paper presents a comprehensive description of the left-right symmetric model with a universal seesaw mechanism for generating fermion masses. The direct coupling of SM fermions with their VLF singlet partners leads to tree-level flavor-changing neutral current interactions and leaves room for flavor- and flavor-universality-violating processes, making this a compelling model with which to explore the various flavor anomalies that have come up in recent years. The full tree-level Lagrangian in the physical basis has been explored, allowing us to obtain constraints on the theory parameters from neutral and charged current mediated processes. The parity symmetric version, motivated by the axionless solution to the strong \(\mathcal{CP}\) problem, was explored in great detail. The model was applied to find a solution to the neutral current B-anomalies and the AMM of the muon. The constraints on the contributions from the model were found to be severe, ruling out the possibility of resolving the two anomalies. The contribution to \(R_{K^{(*)}}\) is restricted by the stringent constraint arising from the mass difference in \(B_{s}-\overline{B}_{s}\) mixing. Although the model allows VLF mass-enhanced corrections to the AMM of the muon, the constraint from the muon mass prevents any substantial correction from surviving. We also found that the discrepancies observed in various LFU-violating processes mediated by charged current interactions could not be simultaneously explained by the parameter space of the model.
## Acknowledgements
RD would like to thank K.S. Babu for his valuable guidance that led to the completion of this work, and Ajay Kaladharan, Vishnu P.K., and Anil Thapa for useful discussions. This work is supported by the U.S. Department of Energy under grant number DE-SC 0016013. Some computing for this project was performed at the High-Performance Computing Center at Oklahoma State University, supported in part through the National Science Foundation grant OAC-1531128.
## Appendix A Neutral Current Interaction
The coefficients of neutral current interaction of \(Z_{1}\) with SM fermions as defined in Eq. (4.1) are
\[C_{L_{1}} =\cos^{2}\theta_{W}\left[\cos\xi\left(T_{3L}-\frac{g_{Y}^{2}}{g _{L}^{2}}\frac{Y_{f_{L}}}{2}\right)-\frac{\sin\xi}{\sqrt{\cos 2\theta_{W}}} \left(-\frac{g_{Y}^{2}}{g_{L}^{2}}\frac{Y_{f_{L}}}{2}\right)\right],\] \[\widetilde{C}_{L_{1}} =-\cos^{2}\theta_{W}\left[\cos\xi\left(T_{3L}-\frac{g_{Y}^{2}}{g _{L}^{2}}\frac{Y_{f_{L}}-Y_{F}}{2}\right)+\frac{\sin\xi}{\sqrt{\cos 2\theta_{W}}} \left(\frac{g_{Y}^{2}}{g_{L}^{2}}\frac{Y_{f_{L}}-Y_{F}}{2}\right)\right]\times R _{ij},\] \[C_{R_{1}} =\cos^{2}\theta_{W}\left[\cos\xi\left(-\frac{g_{Y}^{2}}{g_{L}^{ 2}}\frac{Y_{f_{R}}}{2}\right)-\frac{\sin\xi}{\sqrt{\cos 2\theta_{W}}} \left(T_{3R}-\frac{g_{Y}^{2}}{g_{L}^{2}}\frac{Y_{f_{R}}}{2}\right)\right],\] \[\widetilde{C}_{R_{1}} =\cos^{2}\theta_{W}\frac{\sin\xi}{\sqrt{\cos 2\theta_{W}}} \left(T_{3R}\right)\times\frac{\kappa_{R}^{2}}{\kappa_{L}^{2}}R_{ij},\] (A.1)
whereas those of \(Z_{2}\) with SM fermions as in Eq. (4.2) are as follows:
\[\begin{split} C_{L_{2}}&=\cos^{2}\theta_{W}\left[\frac{ \cos\xi}{\sqrt{\cos 2\theta_{W}}}\left(-\frac{g_{Y}^{2}}{g_{L}^{2}}\frac{Y_{f_{L}}}{2} \right)+\sin\xi\left(T_{3L}-\frac{g_{Y}^{2}}{g_{L}^{2}}\frac{Y_{f_{L}}}{2} \right)\right],\\ \widetilde{C}_{L_{2}}&=\cos^{2}\theta_{W}\left[\frac {\cos\xi}{\sqrt{\cos 2\theta_{W}}}\left(\frac{g_{Y}^{2}}{g_{L}^{2}}\frac{Y_{f_{L}}-Y_{F}}{2} \right)-\sin\xi\left(T_{3L}-\frac{g_{Y}^{2}}{g_{L}^{2}}\frac{Y_{f_{L}}-Y_{F}} {2}\right)\right]\times R_{ij},\\ C_{R_{2}}&=\cos^{2}\theta_{W}\left[\frac{\cos\xi}{ \sqrt{\cos 2\theta_{W}}}\left(T_{3R}-\frac{g_{Y}^{2}}{g_{L}^{2}}\frac{Y_{f_{R}}}{2} \right)+\sin\xi\left(-\frac{g_{Y}^{2}}{g_{L}^{2}}\frac{Y_{f_{R}}}{2}\right) \right],\\ \widetilde{C}_{R_{2}}&=-\cos^{2}\theta_{W}\frac{ \cos\xi}{\sqrt{\cos 2\theta_{W}}}\left(T_{3R}\right)\times\frac{\kappa_{R}^{2}}{ \kappa_{L}^{2}}R_{ij}.\end{split}\] (A.2)
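For reference, the flavor-diagonal \(Z_{1}\) couplings of Eq. (A.1) are simple enough to evaluate directly. The following sketch is a literal transcription (ours): the weak mixing angle \(\theta_{W}\) and the \(Z\)-mixing angle \(\xi\) are passed in as inputs, and the flavor-violating \(\widetilde{C}\) pieces, which carry the extra factor \(R_{ij}\), are omitted here.

```python
import numpy as np

def z1_diagonal_couplings(T3L, T3R, YfL, YfR, xi, thetaW, gL, gY):
    """Flavor-diagonal couplings C_{L_1}, C_{R_1} of Z_1, transcribed from Eq. (A.1)."""
    cW2 = np.cos(thetaW)**2
    c2W = np.cos(2*thetaW)
    r = gY**2/gL**2
    CL1 = cW2*(np.cos(xi)*(T3L - r*YfL/2)
               - np.sin(xi)/np.sqrt(c2W)*(-r*YfL/2))
    CR1 = cW2*(np.cos(xi)*(-r*YfR/2)
               - np.sin(xi)/np.sqrt(c2W)*(T3R - r*YfR/2))
    return CL1, CR1
```

The \(Z_{2}\) couplings of Eq. (A.2) follow from the same pattern with \(\cos\xi\) and \(\sin\xi\) interchanged (up to signs), so they are not repeated here.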
## Appendix B Input Parameters for Meson Mixing
Here, we provide various input parameters useful in computing the meson mixing mass difference. The strong coupling strengths at high scales [62], \(\alpha_{s}(\Lambda)\), are given in Table XIX.
### \(K-\overline{K}\) Mixing
The meson mixing mass difference is given by \(\Delta M_{K}=2\text{Re}\left\langle K\right|\mathcal{H}\left|\overline{K}\right\rangle\). The matrix element at low energy scale is obtained as:
\[\left\langle\overline{K}\right|\mathcal{H}_{\text{eff}}\left|K\right\rangle_{i}=\sum_{j=1}^{5}\sum_{r=1}^{5}(b_{j}^{(r,i)}+\eta c_{j}^{(r,i)})\,\eta^{a_{j}}C_{i}(\Lambda)R_{r}\left\langle\overline{K}\right|\mathcal{O}_{1}\left|K\right\rangle,\] (B.1)
with \(\left\langle\mathcal{O}_{1}\right\rangle=\frac{1}{3}M_{K}f_{K}^{2}B_{1}(\mu)\). The non-vanishing entries of the magic numbers are [63]:
\[a_{i}=(0.29,-0.69,0.79,-1.1,0.14)\]
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline
**Constants** & \(\mu\) & \(f_{K}\) & \(M_{K}\) & \(m_{s}\) & \(m_{d}\) \\ \hline
**Values** & 2 GeV & 159.8 MeV & 467.611 MeV & 93 MeV & 4.67 MeV \\ \hline \hline
**Constants** & \(R_{1}\) & \(R_{2}\) & \(R_{3}\) & \(R_{4}\) & \(R_{5}\) \\ \hline
**Values** & 1 & -12.9 & 3.98 & 20.8 & 5.2 \\ \hline \end{tabular}
\end{table}
Table XX. Values of input parameters used in \(K-\overline{K}\) mixing.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline
**Scale** & \(M_{Z_{1}}\) & \(M_{Z_{2}}\) & \(M_{W_{L}}\) & \(M_{W_{R}}\) & \(M_{h}\) & \(M_{H}\) & \(M_{t}\) \\ \hline \(\Lambda\) & 91.187 GeV & 5 TeV & 80.379 GeV & 4.219 TeV & 125.10 GeV & 10 TeV & 172.9 GeV \\ \hline \(\mathbf{\alpha_{s}}\) & 0.1183 & 0.0824 & 0.1206 & 0.0837 & 0.1129 & 0.0774 & 0.1079 \\ \hline \end{tabular}
\end{table}
Table XIX. Values of the strong coupling constant at different energy scales.
\[b_{i}^{(11)} =(0.82,0,0,0,0), c_{i}^{(11)} =(-0.016,0,0,0,0), \tag{115}\] \[b_{i}^{(22)} =(0,2.4,0.011,0,0), c_{i}^{(22)} =(0,-0.23,-0.002,0,0),\] \[b_{i}^{(23)} =(0,-0.63,0.17,0,0), c_{i}^{(23)} =(0,-0.018,0.0049,0,0),\] \[b_{i}^{(32)} =(0,-0.019,0.028,0,0), c_{i}^{(32)} =(0,0.0028,-0.0093,0,0),\] \[b_{i}^{(33)} =(0,0.0049,0.43,0,0), c_{i}^{(33)} =(0,0.00021,0.023,0,0),\] \[b_{i}^{(44)} =(0,0,0,4.4,0), c_{i}^{(44)} =(0,0,0,-0.68,0.0055),\] \[b_{i}^{(45)} =(0,0,0,1.5,-0.17), c_{i}^{(45)} =(0,0,0,-0.35,-0.0062),\] \[b_{i}^{(54)} =(0,0,0,0.18,0), c_{i}^{(54)} =(0,0,0,-0.026,-0.016),\] \[b_{i}^{(55)} =(0,0,0,0.061,0.82), c_{i}^{(55)} =(0,0,0,-0.013,0.018).\]
The coefficients and input parameters relevant to this calculation are given in Table XX.
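In practice, Eq. (B.1) is just a double sum over the magic-number blocks. The sketch below implements it literally (our helper code): the Wilson coefficients \(C_{i}(\Lambda)\), the evolution variable \(\eta\) (a ratio of strong couplings between the matching scale and the low scale, not spelled out in this appendix), and the bag parameter \(B_{1}(\mu)\) must be supplied by the user, and the superscript of each block is read as \((r,i)\), as in Eq. (B.1).

```python
import numpy as np

# Magic numbers for K-Kbar mixing; only the non-vanishing (r, i) blocks are listed.
a = np.array([0.29, -0.69, 0.79, -1.1, 0.14])
b = {(1, 1): [0.82, 0, 0, 0, 0],      (2, 2): [0, 2.4, 0.011, 0, 0],
     (2, 3): [0, -0.63, 0.17, 0, 0],  (3, 2): [0, -0.019, 0.028, 0, 0],
     (3, 3): [0, 0.0049, 0.43, 0, 0], (4, 4): [0, 0, 0, 4.4, 0],
     (4, 5): [0, 0, 0, 1.5, -0.17],   (5, 4): [0, 0, 0, 0.18, 0],
     (5, 5): [0, 0, 0, 0.061, 0.82]}
c = {(1, 1): [-0.016, 0, 0, 0, 0],       (2, 2): [0, -0.23, -0.002, 0, 0],
     (2, 3): [0, -0.018, 0.0049, 0, 0],  (3, 2): [0, 0.0028, -0.0093, 0, 0],
     (3, 3): [0, 0.00021, 0.023, 0, 0],  (4, 4): [0, 0, 0, -0.68, 0.0055],
     (4, 5): [0, 0, 0, -0.35, -0.0062],  (5, 4): [0, 0, 0, -0.026, -0.016],
     (5, 5): [0, 0, 0, -0.013, 0.018]}
R = [1.0, -12.9, 3.98, 20.8, 5.2]
mK, fK = 0.467611, 0.1598                 # GeV, from Table XX

def kkbar_matrix_element(C, eta, B1):
    """Sum of Eq. (B.1) over all (r, i) blocks and j; Delta M_K = 2 Re of this."""
    O1 = mK*fK**2*B1/3.0
    total = 0.0
    for (r, i), bvec in b.items():
        evo = sum((bvec[j] + eta*c[(r, i)][j])*eta**a[j] for j in range(5))
        total += evo * C[i - 1] * R[r - 1] * O1
    return total
```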
### \(D-\overline{D}\) Mixing
The mass difference is given by \(\Delta M_{D}=2|\left\langle D\right|\mathcal{H}\left|\overline{D}\right\rangle|\) with the renormalised operators being:
\[\left\langle\mathcal{O}_{1}\right\rangle =\frac{1}{3}M_{D}f_{D}^{2}B_{1}(\mu), \tag{116}\] \[\left\langle\mathcal{O}_{2}\right\rangle =-\frac{5}{24}\left(\frac{M_{D}}{m_{u}(\mu)+m_{c}(\mu)}\right)^{2 }M_{D}f_{D}^{2}B_{2}(\mu),\] \[\left\langle\mathcal{O}_{3}\right\rangle =\frac{1}{24}\left(\frac{M_{D}}{m_{u}(\mu)+m_{c}(\mu)}\right)^{2 }M_{D}f_{D}^{2}B_{3}(\mu),\] \[\left\langle\mathcal{O}_{4}\right\rangle =\frac{1}{4}\left(\frac{M_{D}}{m_{u}(\mu)+m_{c}(\mu)}\right)^{2 }M_{D}f_{D}^{2}B_{4}(\mu),\] \[\left\langle\mathcal{O}_{5}\right\rangle =\frac{1}{12}\left(\frac{M_{D}}{m_{u}(\mu)+m_{c}(\mu)}\right)^{2 }M_{D}f_{D}^{2}B_{5}(\mu).\]
The coefficients and input parameter values are listed in Table XXI and the non-vanishing entries of the magic numbers are [31]:
\[a_{i}=(0.286,-0.692,0.787,-1.143,0.143)\]
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline
**Constants** & \(\mu\) & \(f_{D}\) & \(M_{D}\) & \(m_{u}\) & \(m_{c}\) \\ \hline
**Values** & 2.8 GeV & 201 MeV & 1.864 GeV & 2.01 MeV & 1.01 GeV \\ \hline \hline
**Constants** & \(B_{1}\) & \(B_{2}\) & \(B_{3}\) & \(B_{4}\) & \(B_{5}\) \\ \hline
**Values** & 0.865 & 0.82 & 1.07 & 1.08 & 1.455 \\ \hline \end{tabular}
\end{table}
Table XXI. Values of input parameters used in \(D-\overline{D}\) mixing.
\[\begin{split} b_{i}^{(11)}&=(0.837,0,0,0,0), \qquad c_{i}^{(11)}=(-0.016,0,0,0,0),\\ b_{i}^{(22)}&=(0,2.163,0.012,0,0), \qquad c_{i}^{(22)}=(0,-0.20,-0.002,0,0),\\ b_{i}^{(23)}&=(0,-0.567,0.176,0,0), \qquad c_{i}^{(23)}=(0,-0.016,0.006,0,0),\\ b_{i}^{(32)}&=(0,-0.032,0.031,0,0), \qquad c_{i}^{(32)}=(0,-0.004,-0.010,0,0),\\ b_{i}^{(33)}&=(0,0.008,0.474,0,0), \qquad c_{i}^{(33)}=(0,0,0.025,0,0),\\ b_{i}^{(44)}&=(0,0,0,3.63,0), \qquad c_{i}^{(44)}=(0,0,0,-0.56,0.006),\\ b_{i}^{(45)}&=(0,0,0,1.21,-0.19), \qquad c_{i}^{(45)}=(0,0,0,-0.29,-0.006),\\ b_{i}^{(54)}&=(0,0,0,0.14,0), \qquad c_{i}^{(54)}=(0,0,0,-0.019,-0.016),\\ b_{i}^{(55)}&=(0,0,0,0.045,0.839), \qquad c_{i}^{(55)}=(0,0,0,-0.009,0.018).\end{split} \tag{115}\]
### \(B_{q}-\overline{B}_{q}\) Mixing
The operators used in evaluating \(\Delta M_{B_{q}}=2|\left\langle B_{q}\right|\mathcal{H}\left|\overline{B}_{q} \right\rangle|\) (in Sec. 4.4) are
\[\left\langle\mathcal{O}_{1}\right\rangle =f_{B_{q}}^{2}M_{B_{q}}\frac{8}{3}B_{1}(\mu_{b}), \tag{124}\] \[\left\langle\mathcal{O}_{2}\right\rangle =f_{B_{q}}^{2}M_{B_{q}}\frac{-5M_{B_{q}}^{2}}{3(\overline{m}_{b}( \mu_{b})+\overline{m}_{q}(\mu_{b}))^{2}}B_{2}(\mu_{b}),\] \[\left\langle\mathcal{O}_{3}\right\rangle =f_{B_{q}}^{2}M_{B_{q}}\frac{M_{B_{q}}^{2}}{3(\overline{m}_{b}( \mu_{b})+\overline{m}_{q}(\mu_{b}))^{2}}B_{3}(\mu_{b}),\] \[\left\langle\mathcal{O}_{4}\right\rangle =f_{B_{q}}^{2}M_{B_{q}}\left[\frac{2M_{B_{q}}^{2}}{(\overline{m}_{ b}(\mu_{b})+\overline{m}_{q}(\mu_{b}))^{2}}+\frac{1}{3}\right]B_{4}(\mu_{b}),\] \[\left\langle\mathcal{O}_{5}\right\rangle =f_{B_{q}}^{2}M_{B_{q}}\left[\frac{2M_{B_{q}}^{2}}{3(\overline{m}_{ b}(\mu_{b})+\overline{m}_{q}(\mu_{b}))^{2}}+1\right]B_{5}(\mu_{b}).\]
The central values of the combinations \(f_{B}^{2}B_{i}(\mu_{b}=4.18\ \text{GeV})\) used are as follows [33]:
\[\begin{split} f_{B_{s}}^{2}B_{1}^{s}(\mu_{b}) &= 0.0452\ \text{GeV}^{2}, \qquad f_{B_{d}}^{2}B_{1}^{d}(\mu_{b}) = 0.0305\ \text{GeV}^{2},\\ f_{B_{s}}^{2}B_{2}^{s}(\mu_{b}) &= 0.0441\ \text{GeV}^{2}, \qquad f_{B_{d}}^{2}B_{2}^{d}(\mu_{b}) = 0.0288\ \text{GeV}^{2},\\ f_{B_{s}}^{2}B_{3}^{s}(\mu_{b}) &= 0.0454\ \text{GeV}^{2}, \qquad f_{B_{d}}^{2}B_{3}^{d}(\mu_{b}) = 0.0281\ \text{GeV}^{2},\\ f_{B_{s}}^{2}B_{5}^{s}(\mu_{b}) &= 0.0507\ \text{GeV}^{2}, \qquad f_{B_{d}}^{2}B_{5}^{d}(\mu_{b}) = 0.0387\ \text{GeV}^{2}.\end{split} \tag{125}\]
The masses of quarks at \(\mu_{b}\) are \(m_{s}=77.9\) MeV, \(m_{d}=3.94\) MeV and \(m_{b}=4.18\) GeV.
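As a small worked example, the \(B_{d}\) matrix elements follow directly from the operator definitions above together with the \(f_{B_{d}}^{2}B_{i}^{d}\) values just quoted (the \(i=4\) combination is not listed here, so it is skipped); all inputs below are taken from this appendix, with \(M_{B_{d}}\) from Table XXII:

```python
# <O_i> for B_d - Bbar_d mixing, in GeV^3, from the operator definitions above.
MB, mb, md = 5.279, 4.18, 0.00394                   # GeV
f2B = {1: 0.0305, 2: 0.0288, 3: 0.0281, 5: 0.0387}  # f_{B_d}^2 B_i^d in GeV^2
r = MB**2 / (mb + md)**2

O = {1: (8.0/3.0)*MB*f2B[1],
     2: -(5.0/3.0)*MB*r*f2B[2],
     3: (1.0/3.0)*MB*r*f2B[3],
     5: MB*(2.0*r/3.0 + 1.0)*f2B[5]}
print(O)
```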
In Sec. 6, the \(B_{d}-\overline{B}_{d}\) mass difference was calculated using magic numbers. The renormalized operators in terms of the \(B_{i}(\mu)\) parameters are the same as in Eq. (116), with \(\{M_{D}\to M_{B_{d}},\,f_{D}\to f_{B_{d}},\,m_{u}\to m_{b},\,m_{c}\to m_{d}\}\). Other relevant input parameters are given in Table XXII and the non-vanishing entries of the magic numbers are [64]:
\[a_{i}=(0.286,-0.692,0.787,-1.143,0.143) \tag{129}\]
\[\begin{split} b_{i}^{(11)}&=(0.865,0,0,0,0), \qquad c_{i}^{(11)}=(-0.017,0,0,0,0),\\ b_{i}^{(22)}&=(0,1.879,0.012,0,0), \qquad c_{i}^{(22)}=(0,-0.18,-0.003,0,0),\\ b_{i}^{(23)}&=(0,-0.493,0.18,0,0), \qquad c_{i}^{(23)}=(0,-0.014,0.008,0,0),\\ b_{i}^{(32)}&=(0,-0.044,0.035,0,0), \qquad c_{i}^{(32)}=(0,-0.005,-0.012,0,0),\\ b_{i}^{(33)}&=(0,0.011,0.54,0,0), \qquad c_{i}^{(33)}=(0,0,0.028,0,0),\\ b_{i}^{(44)}&=(0,0,0,2.87,0), \qquad c_{i}^{(44)}=(0,0,0,-0.48,0.005),\\ b_{i}^{(45)}&=(0,0,0,0.961,-0.22), \qquad c_{i}^{(45)}=(0,0,0,-0.25,-0.006),\\ b_{i}^{(54)}&=(0,0,0,0.09,0), \qquad c_{i}^{(54)}=(0,0,0,-0.013,-0.016),\\ b_{i}^{(55)}&=(0,0,0,0.029,0.863), \qquad c_{i}^{(55)}=(0,0,0,-0.007,0.019).\end{split} \tag{114}\]
## Appendix C Form Factors for Meson Decay
The form factor that appears in Eq. (4.23), describing the decay width of a meson to a lighter meson and charged leptons, takes the form [37]
\[f_{+}(q^{2})=f_{+}(0)/(1-\frac{q^{2}}{M_{V}^{2}}). \tag{115}\]
The values of the form factors and the vector meson masses are given in Table XXIII.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline
**Constants** & \(\mu\) & \(f_{B_{d}}\) & \(M_{B_{d}}\) & \(m_{b}\) & \(m_{d}\) \\ \hline
**Values** & 4.6 GeV & 200 MeV & 5.279 GeV & 4.61 GeV & 5.4 MeV \\ \hline \hline
**Constants** & \(B_{1}\) & \(B_{2}\) & \(B_{3}\) & \(B_{4}\) & \(B_{5}\) \\ \hline
**Values** & 0.87 & 0.82 & 1.02 & 1.16 & 1.91 \\ \hline \end{tabular}
\end{table}
Table XXII: Values of input parameters used in \(B_{d}-\overline{B}_{d}\) mixing.
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Transition** & \(f_{+}(0)\) & \(M_{V}(\text{GeV})\) \\ \hline \(K\to\pi\) & 0.9709 [65] & 0.892 \\ \hline \(B\to\pi\) & 0.29 & 5.32 \\ \hline \(B\to K\) & 0.36 & 5.42 \\ \hline \(D\to\pi\) & 0.69 & 2.01 \\ \hline \(D_{s}\to K\) & 0.72 & 2.01 \\ \hline \end{tabular}
\end{table}
Table XXIII: Parameter values used in the calculation of semileptonic decays of heavy mesons.
## Appendix D Kaon Mixing Box Diagram Expressions
The \(\lambda\) couplings contributing to kaon mixing box diagrams mediated by \(W_{L,\,R}\) in Sec. 5.4 take the form
\[\begin{split}\lambda^{LL}_{i=(u,c,t)}&=(\mathcal{V}^{*}_{i,1}-\delta\mathcal{V}^{\dagger}_{i,1})(\mathcal{V}_{i,2}-\delta\mathcal{V}_{i,2}),\\ \lambda^{LR}_{i=(u,c,t)}&=(\mathcal{V}^{*}_{i,1}-\delta\mathcal{V}^{\dagger}_{i,1})(\mathcal{V}_{i,2}-\frac{\kappa_{R}^{2}}{\kappa_{L}^{2}}\delta\mathcal{V}_{i,2}),\\ \lambda^{RL}_{i=(u,c,t)}&=(\mathcal{V}^{*}_{i,1}-\frac{\kappa_{R}^{2}}{\kappa_{L}^{2}}\delta\mathcal{V}^{\dagger}_{i,1})(\mathcal{V}_{i,2}-\delta\mathcal{V}_{i,2}),\\ \lambda^{LL}_{i=(U,C,T)}&=(V_{L_{d}}\rho^{\dagger}_{L_{U}})_{1,i}(\rho_{L_{U}}V^{\dagger}_{L_{d}})_{i,2},\\ \lambda^{LR}_{i=(U,C,T)}&=\lambda^{RL}_{i=(U,C,T)}=\frac{\kappa_{L}}{\kappa_{R}}\lambda^{LL}_{i=(U,C,T)}.\end{split} \tag{105}\]
From Eq. (3.10), the script \(\mathcal{V}=V_{X_{u}}V^{\dagger}_{X_{d}}\) is interpreted as the \(V_{CKM}\) matrix elements, and \(\delta\mathcal{V}=\frac{1}{2}(V_{X_{u}}\rho^{\dagger}_{X_{D}}\rho_{X_{D}}V^{ \dagger}_{X_{d}}+V_{X_{u}}\rho^{\dagger}_{X_{U}}\rho_{X_{U}}V^{\dagger}_{X_{d}})\).
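A direct transcription of the light-quark couplings above is given in the sketch below (ours, for illustration). We read \(\delta\mathcal{V}^{\dagger}_{i,1}\) as the \((i,1)\) element of \(\delta\mathcal{V}^{\dagger}\), and the column indices \(1,2\) select the external down- and strange-quark flavors; both readings are our assumptions.

```python
import numpy as np

def lambda_couplings(V, dV, kL, kR):
    """lambda^{LL, LR, RL}_i for the light up-type quarks i = u, c, t."""
    dVd = dV.conj().T                    # delta-V dagger
    x = (kR/kL)**2
    lam = {"LL": [], "LR": [], "RL": []}
    for i in range(3):
        lam["LL"].append((np.conj(V[i, 0]) - dVd[i, 0])*(V[i, 1] - dV[i, 1]))
        lam["LR"].append((np.conj(V[i, 0]) - dVd[i, 0])*(V[i, 1] - x*dV[i, 1]))
        lam["RL"].append((np.conj(V[i, 0]) - x*dVd[i, 0])*(V[i, 1] - dV[i, 1]))
    return lam
```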
|
2305.02675
|
Collages of String Diagrams
|
We introduce collages of string diagrams as a diagrammatic syntax for glueing
multiple monoidal categories. Collages of string diagrams are interpreted as
pointed bimodular profunctors. As the main examples of this technique, we
introduce string diagrams for bimodular categories, string diagrams for functor
boxes, and string diagrams for internal diagrams.
|
Dylan Braithwaite, Mario Román
|
2023-05-04T09:38:14Z
|
http://arxiv.org/abs/2305.02675v2
|
# Collages of String Diagrams
###### Abstract
We introduce collages of string diagrams as a diagrammatic syntax for glueing multiple monoidal categories. Collages of string diagrams are interpreted as pointed bimodular profunctors. As the main examples of this technique, we introduce string diagrams for bimodular categories, string diagrams for functor boxes, and string diagrams for internal diagrams.
## 1 Introduction
String diagrams are a convenient and intuitive, sound and complete syntax for monoidal categories [21]. Monoidal categories are algebras of processes composing in parallel and sequentially [35]; string diagrams formalize the process diagrams of engineering [7, 9]. Formalization is not only of conceptual interest: it means we can sharpen our reasoning, scale our diagrams, or explain them to a computer [43].
However, the formal syntax of monoidal categories is not enough for all applications and, sometimes, we need to extend it. Functor boxes allow us to reason about translations between theories of processes [16, 38], ownership [40], higher-order processes [2], or programming effects [44]. Quantum combs not only model some classes of supermaps [13, 17, 24], but they coincide with the monoidal lenses of functional programming [6, 14, 51] and compositional game theory [23, 8]. Premonoidal categories, which appear in Moggi's semantics of programming effects [39, 52, 30], are now within the realm of string diagrammatic reasoning [47]. Internal diagrams extend the syntax of monoidal categories allowing us to draw diagrams inside tubular cobordisms and reason about topological quantum field theories [4], but also coends [48] and traces [27].
These extensions showcase the expressive power of string diagrams in surprisingly diverse application domains. At the same time, these different ideas could be regarded as separate ad-hoc extensions: they belong to different fields; they use different categorical formalisms. The overhead of learning and combining each one of them prevents the exchange of ideas between the different domains of application: e.g. an idea about topological quantum field diagrams does not transfer to premonoidal diagrams.
Figure 1: Examples from the literature. From left to right: functor boxes [38], premonoidal categories [47], internal diagrams [4], and combs or optics [13, 14, 24].
Collages. This manuscript claims that this division is only apparent and that all these extensions are particular instances of the same encompassing idea: that of glueing multiple string diagrams into what we call a _collage of string diagrams_. We introduce a formal notion of collage (Section 4.4) and employ string diagrammatic syntaxes for them, based on the calculus of bicategories (Sections 2.1, 3.1 and 5).
Even though collages of string diagrams are our novel contribution, collages themselves are not a new concept in category theory. "Collage" was Bob Walters' term for a lax colimit in a module-like category [52]. This can be considered as a glueing of objects together along the action of a scalar. For example, given two sets \(A\) and \(B\), with an action of a monoid \(M\), we can construct their tensor product \(A\otimes_{M}B\), where \((a\cdot m)\otimes b=a\otimes(m\cdot b)\) for any scalar \(m\in M\). Categorifying this idea in one possible direction, we obtain monoidal categories acting on _bimodular categories_. The following is the takeaway of this work.
Collages of string diagrams consist of multiple string diagrams of different monoidal categories glued together. Collages can be interpreted as _pointed bimodular profunctors_ between _bimodular categories_.
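The set-level tensor \(A\otimes_{M}B\) mentioned above can be computed explicitly for finite examples. The following sketch (our illustration, not part of the paper) quotients the pairs by \((a\cdot m)\otimes b\sim a\otimes(m\cdot b)\) with a small union-find pass, for a toy \(\mathbb{Z}/2\) action:

```python
from itertools import product

M = [0, 1]                                  # the monoid Z/2 under addition mod 2
A = [0, 1, 2, 3]                            # right M-set:  a . m = (a + 2m) mod 4
B = [0, 1]                                  # left  M-set:  m . b = (m + b) mod 2
act_r = lambda a, m: (a + 2*m) % 4
act_l = lambda m, b: (m + b) % 2

# Union-find over A x B, glueing (a.m, b) with (a, m.b).
parent = {p: p for p in product(A, B)}
def find(p):
    while parent[p] != p:
        p = parent[p]
    return p

for a, b, m in product(A, B, M):
    parent[find((act_r(a, m), b))] = find((a, act_l(m, b)))

classes = {find(p) for p in product(A, B)}
print(len(classes))   # 4 equivalence classes, i.e. |A (x)_M B| = 4 in this toy example
```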
A bimodular category, sometimes referred to as a biactegory [10], is to a bimodule what a monoidal category is to a monoid. That is, a plain category \(\mathbb{A}\) endowed with a left action of a monoidal category \((\triangleright)\colon\mathbb{M}\times\mathbb{A}\to\mathbb{A}\) and a right action of another, possibly different, monoidal category \((\triangleleft)\colon\mathbb{A}\times\mathbb{N}\to\mathbb{A}\). We can _collage_ two bimodular categories along a common monoidal category that acts on both. Later in the paper, exploiting a second axis of categorification, we pass from bimodular categories to _bimodular profunctors_, which are a kind of 2-dimensional bimodule, and we define their collage. This structure facilitates glueing categories together in 2-dimensions: we can represent complexes of morphisms from different categories and glue them together. Collages of string diagrams are the syntactic representations of this glueing, in the same sense that ordinary string diagrams represent tensors in monoidal categories.
We observe that collages of bimodular categories embed into a tricategory of pointed bimodules. This provides a versatile setting where we can interpret many syntaxes already present in the literature.
Contributions. We introduce string diagrams of bimodular categories and we prove they construct the free bimodular category on a signature (Theorem 2.7). We introduce novel string diagrammatic syntax for _functor boxes_ and we prove it constructs the free lax monoidal functor on a suitable signature (Theorem 3.4). We describe the tricategory of pointed bimodular profunctors (Definition 4.6) and, in terms of it, we explain the semantics of functor boxes (Proposition 4.9) and internal diagrams (Theorem 5.3), for which we also provide a novel explicit formal syntax (Definition 5.2).
## 2 String Diagrams of Bimodular Categories
In algebra, a _bimodule_ is a structure that has both a left and a right action such that they are compatible. _Bimodular categories are to bimodules what monoidal categories are to monoids_. This means that a bimodular category is a category, \(\mathbb{C}\), acted on by two monoidal categories, \(\mathbb{M}\) and \(\mathbb{N}\)[53]. Bimodular categories are also known as "biactegories" [10, 36], while the name "bimodule category" has been reserved for actions of vector enriched categories with extra properties [19]. For our purposes, bimodular categories, \(\mathbb{C}\), glue the string diagrams of their two monoidal categories, \(\mathbb{M}\) and \(\mathbb{N}\).
**Definition 2.1**.: A _bimodular category_\((\mathbb{C},\mathbb{M},\mathbb{N})\) is a category \(\mathbb{C}\) endowed with a left monoidal action \((\triangleright)\colon\mathbb{M}\times\mathbb{C}\to\mathbb{C},\) and a right monoidal action \((\triangleleft)\colon\mathbb{C}\times\mathbb{N}\to\mathbb{C}\). These two actions must be compatible, meaning that there exists a natural isomorphism, \(\gamma_{M,N,X}:M\triangleright(X\triangleleft N)\longrightarrow(M\triangleright X )\triangleleft N,\) such that all formal equations between these isomorphisms and the coherence isomorphisms of both monoidal categories and monoidal actions hold.
A bimodular category is a _strict bimodular category_ whenever the two monoidal categories are strict, their two actions are strict and, moreover, the compatibility isomorphism is an identity. Every monoidal category \((\mathbb{C},\otimes,I)\) is a \((\mathbb{C},\mathbb{C})\)-bimodular category with its own tensor product defining the two actions.
**Proposition 2.2**.: _Strict bimodular categories over arbitrary strict monoidal categories form a category,_ **sBimod**_._ _Morphisms \((F,H,K)\colon(\mathbb{C},\mathbb{M},\mathbb{N})\to(\mathbb{D},\mathbb{P}, \mathbb{Q})\) consist of two strict monoidal functors \(H\colon\mathbb{M}\to\mathbb{P}\) and \(K\colon\mathbb{N}\to\mathbb{Q}\) and a functor \(F\colon\mathbb{C}\to\mathbb{D}\) that strictly preserves monoidal actions according to \(H\) and \(K\)._
### Signature of a Bimodular Category
The next sections prove that a variant of string diagrams is a sound and complete syntax for bimodular categories. String diagrams for bimodular categories consist of two monoidal regions glued by a bimodular wire. We first introduce a notion of bimodular signature (Definition 2.3) and then construct an adjunction (Theorem 2.8) using the notion of _collages_.
**Definition 2.3**.: A _bimodular graph_\((\mathcal{A},\mathcal{M},\mathcal{N})\) (the bimodular analogue of a multigraph [48]) is given by three sets of objects \((\mathcal{A}_{obj},\mathcal{M}_{obj},\mathcal{N}_{obj})\) and three different types of edges:
* the left-acting edges, a set \(\mathcal{M}(M_{0},\dots,M_{m};P_{0},\dots,P_{p})\) for each \(M_{0},\dots,M_{m},P_{0},\dots,P_{p}\in\mathcal{M}_{obj}\); and
* the right-acting edges, a set \(\mathcal{N}(N_{0},\dots,N_{n};Q_{0},\dots,Q_{q})\) for each \(N_{0},\dots,N_{n},Q_{0},\dots,Q_{q}\in\mathcal{N}_{obj}\);
* the _central edges_, a set of edges \(\mathcal{A}(M_{0},\dots,M_{m},A,N_{0},\dots,N_{n};P_{0},\dots,P_{p},B,Q_{0},\dots,Q_{q})\), for each \(M_{0},\dots,M_{m},P_{0},\dots,P_{p}\in\mathcal{M}_{obj}\); each \(N_{0},\dots,N_{n},Q_{0},\dots,Q_{q}\in\mathcal{N}_{obj}\) and each \(A,B\in\mathcal{A}_{obj}\).
**Proposition 2.4**.: _Bimodular graphs form a category_ **bmGraph**_. We define a morphism of bimodular graphs \((l,f,g)\colon(\mathcal{A},\mathcal{M},\mathcal{N})\to(\mathcal{A}^{\prime},\mathcal{M}^{\prime},\mathcal{N}^{\prime})\) to be a triple of functions on objects, \((l_{obj},f_{obj},g_{obj})\), that extend to the morphism sets. There exists a forgetful functor \(U\colon\mathrm{sBimod}\to\mathrm{bmGraph}\)._
Proof.: See Appendix, Proposition B.3.
So far we have described a syntactic presentation of strict bimodular categories. We would like to, additionally, go the other way and construct a free model from a syntactic presentation. Our approach is to note that the central edges in a bimodular graph can be considered as dividing the graph into two regions: one containing the left-acting vertices and edges and one containing the right-acting vertices and edges. Diagrams of this sort with multiple labelled regions can naturally be considered as string diagrams for bicategories: explicitly, the diagrams of the _collage_ of the bimodular category.
### The Collage of a Bimodular Category
Each profunctor induces a _collage category_; in an analogous fashion, a bimodular category induces a _collage bicategory_. This section proves that constructing the collage of a bimodular category is left adjoint to considering the bimodular hom-category between any two cells of a 2-category.
Figure 2: Left, right, and central edges of a bimodular graph.
**Definition 2.5**.: The _collage_ of an \((\mathbb{M},\mathbb{N})\)-bimodular category \(\mathbb{C}\) is a bicategory, \(\mathsf{Coll}_{\mathbb{C}}\). This bicategory has two 0-cells, \(M\) and \(N\), and it is defined by the following hom-categories. Endocells on \(M\) are given by the monoidal category \(\mathsf{Coll}_{\mathbb{C}}(M,M)=\mathbb{M}\); likewise, endocells on \(N\) are given by the monoidal category, \(\mathsf{Coll}_{\mathbb{C}}(N,N)=\mathbb{N}\). The 1-cells from \(M\) to \(N\) are given by the category \(\mathsf{Coll}_{\mathbb{C}}(M,N)=\mathbb{C}\); and composition of 1-cells is given by the monoidal actions. Finally, \(\mathsf{Coll}_{\mathbb{C}}(N,M)\) is the empty category.
**Definition 2.6**.: The category of strict bipointed 2-categories, \(\mathbf{2Cat}_{2}\), has objects, \((\mathbb{A},M,N)\), given by a strict 2-category \(\mathbb{A}\) and two chosen 0-cells on it, \(M\in\mathbb{A}\) and \(N\in\mathbb{A}\). A morphism of bipointed 2-categories is a strict 2-functor preserving the two chosen 0-cells.
**Theorem 2.7**.: _There exists an adjunction between strict bimodular categories and bipointed 2-categories given by the collage, \(\mathsf{Coll}_{\mathbb{C}}\colon\mathrm{sBimod}\to 2\mathrm{Cat}_{2}\), and picking the hom-category between the chosen 0-cells, \(\mathsf{Chosen}\colon 2\mathrm{Cat}_{2}\to\mathrm{sBimod}\). Moreover, the unit of this adjunction is a natural isomorphism._
Proof.: See Appendix, Theorem B.7.
### String Diagrams of Bimodular Categories, via Collages
We have the two ingredients for string diagrams of bimodular categories: string diagrams for bicategories, and collages, a way of embedding a bimodular category into a bicategory. This section combines both results to provide an adjunction from bimodular graphs to bimodular categories.
**Theorem 2.8**.: _There exists an adjunction between bimodular graphs and strict bimodular categories. The left side of this adjunction is given by finding the bimodular category whose collage is the free 2-category on the bimodular graph, \(\mathsf{bmStr}\colon\mathrm{bmGraph}\to\mathrm{sBimod}\). The right side of the adjunction is the previously mentioned forgetful functor \(\mathsf{U}\colon\mathrm{sBimod}\to\mathrm{bmGraph}\)._
Proof.: See Appendix, Theorem 2.8, the proof follows Figure 3.
_Remark 2.9_.: The string diagrams of bimodular categories particularize into the string diagrams of premonoidal and effectful categories. See the Appendix B.2 for details.
We have presented string diagrams for bimodular categories via the string diagrams of bicategories, and we will now give an example. We take inspiration from this first result to now address other syntaxes that depend on string diagrams of bicategories: the next section proposes string diagrams for functor boxes.
Figure 3: Summary of adjunctions for the string diagrams of bimodular categories.
### Example: Shared State
In the same way that premonoidal categories are particularly well-suited to describe stateful computations, bimodular categories are particularly well-suited to describe shared state between two processes. These two processes can be different and even live on different categories. As an example, consider the generators in Figure 4. They represent two different process theories (two different monoidal categories, \(\mathbb{A}\) and \(\mathbb{B}\)) that access a common state with get and put operations.
In the same way that monoidal categories are a good setting in which to define monoids and comonoids, bimodular categories are a good setting in which to define bimodules. In order to capture interacting shared state, the generators of Figure 4 are quotiented by the equations of a pair of semifrobenius modules with compatible comonoid actions and semimonoid actions (see Appendix, Figure 14, for details).
This setup is enough to exhibit one of the most salient features of shared state: _race conditions_. Race conditions were first studied by Huffman in 1954, who used diagrams to show how the behaviour of shared state is dependent on the relative timing of the actions of the parties [27]. We employ string diagrams of bimodular categories to show how two different timings of the actions - the leftmost and rightmost sides of the equation in Figure 5 - result in two different executions: even when the two get statements are compatible _(i)_, the two put statements interact causing the earlier of the two to be discarded _(ii,iii,iv)_; this causes the discrepancy with the intended protocol _(v)_.
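The same race can be mimicked in code by executing the two get/put programs of Figure 5 under the two schedules. This toy interpreter (ours, purely illustrative) shows that the final state depends on the interleaving, with the earlier put being discarded in the interleaved run:

```python
def run(schedule):
    """Run a schedule of (process, operation) steps over one shared memory cell."""
    state = 0
    local = {"A": None, "B": None}
    for proc, op in schedule:
        if op == "get":
            local[proc] = state          # read the shared state into a local register
        else:                            # "put": write back the local value plus one
            state = local[proc] + 1
    return state

sequential  = [("A", "get"), ("A", "put"), ("B", "get"), ("B", "put")]
interleaved = [("A", "get"), ("B", "get"), ("A", "put"), ("B", "put")]
print(run(sequential), run(interleaved))   # 2 vs 1: A's earlier put is overwritten
```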
Race conditions have a commonly accepted workaround: the _binary semaphore_[49]. Dijkstra described general semaphores with the aid of flow diagrams [18]; we use instead string diagrams of bimodular categories to implement a binary semaphore (Figure 6). We consider two different object generators for our bimodular category (free and locked): each operation must suitably lock or unlock the semaphore.
Figure 4: Signature for shared state.
Figure 5: Race condition in bimodular string diagrams.
This renders race conditions ill-typed and makes most of the interaction equations unnecessary (in the Appendix, Figure 14).
## 3 String Diagrams of Functor Boxes
Functor boxes are an extension of the string diagrammatic notation that represents plain functors and lax, oplax and strong monoidal functors. Functor boxes were introduced by Cockett and Seely [15] and later studied by Melliès [37]. We introduce here a syntactic presentation of (op)lax functor boxes that has the advantage of treating each piece of the box as a separate entity in a bicategory, so that the string diagrammatic calculus of bicategories applies.
### Functor box signatures
**Definition 3.1**.: A _functor box signature_\(\mathcal{F}=(\mathcal{A},\mathcal{X},\mathcal{F}_{\bullet},\mathcal{F}^{ \bullet})\) consists of a pair of sets, \(\mathcal{A}_{obj}\) and \(\mathcal{X}_{obj}\), and four different types of edges:
* the plain edges, \(\mathcal{A}(A_{0},\ldots,A_{n};B_{0},\ldots,B_{m})\) for any objects \(A_{0},\ldots,A_{n},B_{0},\ldots,B_{m}\in\mathcal{A}_{obj}\);
* the functor box edges, \(\mathcal{X}(X_{0},...,X_{n};Y_{0},...,Y_{m})\) for any objects \(X_{0},\ldots,X_{n},Y_{0},\ldots,Y_{m}\in\mathcal{X}_{obj}\);
* the in-box edges, \(\mathcal{F}_{\bullet}(A_{0},...,A_{n};Y_{0},...,Y_{m})\) for any \(A_{0},...,A_{n}\in\mathcal{A}_{obj}\) and \(Y_{0},...,Y_{m}\in\mathcal{X}_{obj}\)
* the out-box edges, \(\mathcal{F}^{\bullet}(X_{0},...,X_{n};B_{0},...,B_{m})\) for any \(B_{0},...,B_{m}\in\mathcal{A}_{obj}\) and \(X_{0},...,X_{n}\in\mathcal{X}_{obj}\).
A _functor box signature morphism_ \((h,k,l)\colon(\mathcal{A},\mathcal{X},\mathcal{F})\to(\mathcal{B},\mathcal{Y},\mathcal{G})\) is a pair of functions between the object sets, \(h_{obj}\colon\mathcal{A}_{obj}\to\mathcal{B}_{obj}\) and \(k_{obj}\colon\mathcal{X}_{obj}\to\mathcal{Y}_{obj}\), that extend to functions between the edge sets:
* \(h\colon\mathcal{A}(A_{0},...,A_{n};B_{0},...,B_{m})\to\mathcal{B}(h(A_{0}),..., h(A_{n});h(B_{0}),...,h(B_{m}))\);
* \(k\colon\mathcal{X}(X_{0},...,X_{n};Y_{0},...,Y_{m})\to\mathcal{Y}(k(X_{0}),...,k(X_{n});k(Y_{0}),...,k(Y_{m}))\);
* \(l_{\bullet}\colon\mathcal{F}_{\bullet}(A_{0},...,A_{n};Y_{0},...,Y_{m})\to \mathcal{G}_{\bullet}(h(A_{0}),...,h(A_{n});k(Y_{0}),...,k(Y_{m}))\);
* \(l^{\bullet}\colon\mathcal{F}^{\bullet}(X_{0},...,X_{n};B_{0},...,B_{m})\to \mathcal{G}^{\bullet}(k(X_{0}),...,k(X_{n});h(B_{0}),...,h(B_{m}))\).
Functor box signatures and homomorphisms form a category, **Fbox**.
Figure 7: Syntactic bicategory of a lax monoidal functor box signature.
**Definition 3.2**.: The syntactic bicategory of a functor box signature \(\mathcal{F}=(\mathcal{A},\mathcal{X},\mathcal{F}_{\bullet},\mathcal{F}^{\bullet})\) is the bicategory freely presented by Figure 7, which we call \(\mathbb{S}_{\mathcal{A},\mathcal{X},\mathcal{F}}\).
In other words, the bicategory \(\mathbb{S}_{\mathcal{A},\mathcal{X},\mathcal{F}}\) contains exactly two 0-cells, labelled \(\mathcal{A}\) and \(\mathcal{X}\); it contains a 1-cell \(A\colon\mathcal{A}\to\mathcal{A}\) for each \(A\in\mathcal{A}_{obj}\); a 1-cell \(X\colon\mathcal{X}\to\mathcal{X}\) for each \(X\in\mathcal{X}_{obj}\) and, moreover, a pair of adjoint 1-cells \(F^{\uparrow}\colon\mathcal{A}\to\mathcal{X}\) and \(F^{\downarrow}\colon\mathcal{X}\to\mathcal{A}\). Finally, it contains a pair of 2-cells witnessing the adjunction \(F^{\uparrow}\dashv F^{\downarrow}\), given by \(n\colon\operatorname{id}\to F^{\uparrow}\,\sharp F^{\downarrow}\) and \(e\colon F^{\downarrow}\,\sharp F^{\uparrow}\to\operatorname{id}\) which additionally satisfy the snake equations; and it also contains
* a 2-cell, \(f\in\mathbb{S}(\mathcal{A},\mathcal{A})(A_{0}\,\sharp\ldots\,\sharp A_{n};\ B_{0}\,\sharp\ldots\,\sharp B_{m})\), for each _plain edge_;
* a 2-cell, \(g\in\mathbb{S}(\mathcal{X},\mathcal{X})(X_{0}\,\sharp\ldots\,\sharp\,X_{n};\ Y_{0}\,\sharp\ldots\,\sharp\,Y_{m})\), for each _functor box edge_;
* a 2-cell, \(u\in\mathbb{S}(\mathcal{A},\mathcal{A})(A_{0}\,\sharp\ldots\,\sharp A_{n};\ F^{ \uparrow}\,\sharp Y_{0}\,\sharp\ldots\,\sharp Y_{m}\,\sharp F^{\downarrow})\) for each _in-box edge_; and
* a 2-cell, \(v\in\mathbb{S}(\mathcal{A},\mathcal{A})(F^{\uparrow}\,\sharp X_{0}\,\sharp \ldots\,\sharp\,X_{n}\,\sharp F^{\downarrow};\ B_{0}\,\sharp\ldots\,\sharp B_{m})\) for each _out-box edge_.
### Lax Monoidal Functor Semantics
**Definition 3.3** (Lax functors category).: An object of the _lax functors category_, \(\mathbf{Lax}\), is a pair of strict monoidal categories \((\mathbb{A},\mathbb{X})\) together with a lax monoidal functor between them, \((F,\varepsilon,\mu)\); that is, a functor \(F\colon\mathbb{X}\to\mathbb{A}\) endowed with two natural transformations \(\varepsilon\colon I\to FI,\) and \(\mu\colon FX\otimes FY\to F(X\otimes Y),\) satisfying associativity \((\mu\otimes id)\,\sharp\,\mu=(id\otimes\mu)\,\sharp\,\mu\), left unitality \((\varepsilon\otimes id)\,\sharp\,\mu=id\) and right unitality \((id\otimes\varepsilon)\,\sharp\,\mu=id\).
A morphism of the _lax functors category_, from \((\mathbb{A},\mathbb{X},F,\varepsilon_{F},\mu_{F})\) to \((\mathbb{B},\mathbb{Y},G,\varepsilon_{G},\mu_{G})\), is a pair of strict monoidal functors \(H\colon\mathbb{X}\to\mathbb{Y}\) and \(K\colon\mathbb{A}\to\mathbb{B}\) such that \(F\,\sharp\,K=H\,\sharp\,G\) and such that \(K(\varepsilon_{F})=\varepsilon_{G}\) and \(K(\mu_{F})=\mu_{G}\).
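A down-to-earth instance of Definition 3.3 (our illustration, not an example from the paper) is the option/Maybe functor on sets: \(\varepsilon\) picks the element \(\mathrm{Just}\,()\) and \(\mu\) pairs two optional values, failing if either is missing. The sketch below spot-checks associativity up to re-bracketing and left unitality up to the isomorphism \(1\times X\cong X\):

```python
# Maybe/Option as a lax monoidal functor on (Set, x, 1): F X = X + {None}.
eps = ()                                           # the point of F 1, i.e. Just ()
def mu(fx, fy):
    """Laxator mu : F X x F Y -> F (X x Y)."""
    return None if fx is None or fy is None else (fx, fy)

def rebracket(t):                                  # ((x, y), z)  |->  (x, (y, z))
    return None if t is None else (t[0][0], (t[0][1], t[1]))

for fx in [1, None]:
    for fy in ["a", None]:
        for fz in [3.0, None]:
            assert rebracket(mu(mu(fx, fy), fz)) == mu(fx, mu(fy, fz))

assert mu(eps, 5) == ((), 5)   # left unit, up to the iso 1 x X ~ X
print("lax monoidal laws spot-checked")
```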
**Theorem 3.4**.: _There exists an adjunction between the category of functor box signatures, Fbox, and the category of pairs of strict monoidal categories with a lax monoidal functor between them, Lax. The free side of this adjunction is given by the syntax of Figure 7._
Proof.: See Appendix, Theorem C.3.
Collages, by themselves, explained the 2-region diagrams of bimodular categories; collages will also explain the two-region diagrams of functor boxes in Section 4.5. However, as currently defined, collages are only sufficient to encode the vertical boundaries. In order to additionally represent boundaries along the horizontal axis we can make use of profunctors between bimodular categories and extend our notion of collage to these structures (described in Appendix F). Following this thread we find that collages embed into a tricategory of pointed bimodular profunctors, described in the next section, which we consider a universe of interpretation for all of the graphical theories described.
## 4 Bimodular Profunctors
Where can we interpret all these string diagrams and provide compositional semantics for them? In this section, we introduce a single structure where all the previous calculi take semantics.
We will need two different ingredients: _coends_ and _bimodularity_. Coends and profunctors [32, 33], far from being an obscure concept of category theory, can be seen as the right tool to glue together morphisms from different categories [17, 47]; we follow an explicitly _pointed_ version of coend calculus, which keeps track of the transformation between profunctors we are constructing (Section 4.3). In a similar sense, bimodular categories tensor together objects from different monoidal categories. Both ideas combine into the calculus of pointed bimodular profunctors.
### Bimodular Profunctors
Consider \(\mathbb{C}\) and \(\mathbb{D}\), both \((\mathbb{M},\mathbb{N})\)-bimodular categories. A natural notion of morphism between them is a functor \(\mathbb{C}\to\mathbb{D}\) which is linear in both actions. However, there is another notion of morphism between them, which is a generalisation of a profunctor between categories to this bimodular setting. Bimodular profunctors are a generalized reformulation of the Tambara modules of Pastro and Street [42].
**Definition 4.1**.: Let \(\mathbb{M}\) and \(\mathbb{N}\) be two monoidal categories and let \(\mathbb{C}\) and \(\mathbb{D}\) be two \((\mathbb{M},\mathbb{N})\)-bimodular categories. A _bimodular profunctor_ from \(\mathbb{C}\) to \(\mathbb{D}\) is a profunctor \(T\colon\mathbb{C}^{op}\times\mathbb{D}\to\mathsf{Set}\) with a natural family of strengths,
\[t_{M}:T(X,Y)\to T(M\triangleright X,M\triangleright Y),\quad\text{ and }\quad t_{N}:T(X,Y)\to T(X \triangleleft N,Y\triangleleft N),\]
such that the actions are associative, \(t_{M}\,\sharp\,t_{N}=t_{M\otimes N}\), unital, \(t_{I}=id\), and compatible, \(t_{M}\,\sharp\,t_{N}=t_{N}\,\sharp\,t_{M}\), up to the coherence isomorphisms of the monoidal category. See Appendix B for details.
**Proposition 4.2**.: _For any pair of monoidal categories, \(\mathbb{M}\) and \(\mathbb{N}\), there is a bicategory \({}_{\mathbb{M}}\mathbf{Mod}_{\mathbb{N}}\) of \((\mathbb{M},\mathbb{N})\)-bimodular categories, bimodular profunctors, and natural transformations between them._
These will form the hom-bicategories of the tricategory we later define. The other significant piece of data we require is a family of tensors \(\otimes\colon{}_{\mathbb{M}}\mathbf{Mod}_{\mathbb{N}}\times{}_{\mathbb{N}}\mathbf{Mod}_{\mathbb{O}}\to{}_{\mathbb{M}}\mathbf{Mod}_{\mathbb{O}}\), which we now study.
### Tensor of Bimodular Profunctors
The tensor of bimodular categories is similar to the tensor of modules over a monoid in classical algebra: we consider pairs of elements and we quotient out the action of a common scalar. In this case, the quotienting is substituted by an appropriate structural isomorphism: the _equilibrator_.
**Definition 4.3** (Tensor of bimodular categories).: Let \(\mathbb{C}\) be a \((\mathbb{M},\mathbb{N})\)-bimodular category and let \(\mathbb{D}\) be a \((\mathbb{N},\mathbb{O})\)-bimodular category. Their tensor product, \(\mathbb{C}\otimes_{\mathbb{N}}\mathbb{D}\), is a category with the same objects as \(\mathbb{C}\times\mathbb{D}\): we write them as \(X\otimes_{\mathbb{N}}Y\). The category is presented by the morphisms of \(\mathbb{C}\times\mathbb{D}\) and a free family of natural isomorphisms, called the _equilibrators_,
\[\tau_{X,N,Y}\colon(X\triangleleft N)\otimes_{N}Y\to X\otimes_{N}(N \triangleright Y),\text{ for each }N\in\mathbb{N},X\in\mathbb{C},Y\in\mathbb{D},\]
which are additionally quotiented by the following equations up to the structure isomorphisms of the monoidal actions, \(\tau_{X,M\otimes N,Y}=\tau_{X\triangleleft M,N,Y}\,\sharp\,\tau_{X,M,N\triangleright Y}\), and \(\tau_{X,J,Y}=\mathrm{id}\).
**Definition 4.4**.: Let \(\mathbb{C}\) and \(\mathbb{C}^{\prime}\) be two \((\mathbb{M},\mathbb{N})\)-bimodular categories and let \(\mathbb{D}\) and \(\mathbb{D}^{\prime}\) be a \((\mathbb{N},\mathbb{O})\)-bimodular categories. Given two bimodular profunctors, \(T\colon\mathbb{C}\to\mathbb{C}^{\prime}\) and \(R\colon\mathbb{D}\to\mathbb{D}^{\prime}\), their tensor is a bimodular profunctor, \(T\otimes_{\mathbb{N}}R\colon\mathbb{C}\otimes_{\mathbb{N}}\mathbb{D}\to \mathbb{C}^{\prime}\otimes_{\mathbb{N}}\mathbb{D}^{\prime}\), defined by
\[T\otimes_{\mathbb{N}}R(X\otimes_{N}Y;X^{\prime}\otimes_{N}Y^{\prime})=T(X;X^{ \prime})\times R(Y,Y^{\prime})/(\sim),\]
where \((\sim)\) is the equivalence relation generated by \((t_{N}(x),y)\sim(x,t_{N}(y))\).
### Pointed Profunctors
Profunctors deal with families of morphisms, and their natural isomorphisms determine correspondences between these families. However, when we use profunctors for the semantics of string diagrams, we most often want to single out a particular morphism between a particular pair of objects. A simple technique to achieve this is to use _pointed profunctors_ instead of plain profunctors: this technique was explicitly described by the second author [48], although it has implicit appearances in the literature [4, 27].
**Definition 4.5**.: A pointed profunctor \((P,p)\colon(\mathbb{A},X)\to(\mathbb{B},Y)\) between two pointed categories, with chosen objects \(X\in\mathbb{A}_{obj}\) and \(Y\in\mathbb{B}_{obj}\), is a profunctor \(P\colon\mathbb{A}\to\mathbb{B}\) together with an element \(p\in P(X,Y)\) of the profunctor evaluated at the chosen objects of the two categories.
From now on, we work using pointed profunctors instead of plain profunctors, see the Appendix D.1 for a short reference on "pointed coend calculus".
### The Tricategory of Pointed Bimodular Profunctors
We call the diagrams of the tricategory of pointed bimodular profunctors _collages of string diagrams_.
**Definition 4.6**.: The tricategory of pointed bimodular profunctors, \(\mathbb{B}\mathsf{m}\mathsf{P}\mathsf{r}\mathsf{o}\mathsf{f}_{\mathsf{pt}}\), has as \(0\)-cells the monoidal categories, \(\mathbb{M},\mathbb{N},\mathbb{O},\dots\). The \(1\)-cells between two monoidal categories \(\mathbb{M}\) and \(\mathbb{N}\) are _pointed bimodular categories_, \((\mathbb{A},\triangleright,\triangleleft,A)\), consisting of a \((\mathbb{M},\mathbb{N})\)-bimodular category with two actions \((\mathbb{A},\triangleright,\triangleleft)\) and some object of that category, \(A\in\mathbb{A}\). Pointed bimodular categories compose by the tensor of bimodular categories,
\[(\mathbb{A},\triangleright,\triangleleft,A)\otimes_{\mathbb{N}}(\mathbb{B}, \triangleright,\triangleleft,B)=(\mathbb{A}\otimes_{\mathbb{N}}\mathbb{B}, \triangleright,\triangleleft,A\otimes_{\mathbb{N}}B).\]
The \(2\)-cells between two pointed bimodular categories \((\mathbb{A},\triangleright,\triangleleft,A)\) and \((\mathbb{B},\triangleright,\triangleleft,B)\) are _pointed bimodular profunctors_\((P,t,p)\), consisting of a profunctor \(P\colon\mathbb{A}\to\mathbb{B}\) together with a point \(p\in P(A,B)\) that are moreover bimodular with compatible natural transformations \(t_{M}\colon P(A;B)\to P(M\triangleright A;M\triangleright B)\), and \(t_{N}\colon P(A;B)\to P(A\triangleleft N;B\triangleleft N)\). These \(2\)-cells compose by profunctor composition and by the tensor of bimodular profunctors.
Finally, the \(3\)-cells between two pointed bimodular profunctors \((P,t,p)\) and \((Q,r,q)\) are bimodular natural transformations that preserve the point, consisting of a natural transformation \(\alpha\colon P\to Q\) such that \(\alpha(p)=q\) and, moreover, \(t_{M}\,\circ\,\alpha=\alpha\,\circ\,r_{M}\) and \(t_{N}\,\circ\,\alpha=\alpha\,\circ\,r_{N}\).
_Remark 4.7_.: At the moment of writing, it is unclear to the authors whether a string diagrammatic calculus for tricategories, described by transformations of the string diagrammatic calculus of bicategories, has been fully described and proved sound and complete. However, there seems to be consensus in that this would be the right language for tricategories: much literature assumes it. Let us close this section by tracking explicitly the assumptions we need to employ a diagrammatic syntax for bimodular profunctors.
**Conjecture 4.8**.: _The previous data satisfies all coherence conditions of a tricategory. Moreover, we can reason with tricategories using the calculus of deformations of string diagrams, extending the string diagrams for quasistrict monoidal 2-categories of Bartlett [2]._
### Functor Boxes via Collages of String Diagrams
The following Figure 8 details how to interpret functor boxes as collages of string diagrams. The colored region represents the domain of the lax monoidal functor; the white region represents the codomain. Morphisms of both categories are interpreted as elements of their respective hom-profunctors; and the laxators are used to merge colored regions. The only element that we will explicitly detail is the bimodular category that appears in the closing and opening wires of a functor box.
**Proposition 4.9** (Bimodular categories of a lax monoidal functor).: _Let \(\mathbb{X}\) and \(\mathbb{A}\) be two monoidal categories and let \(F\colon\mathbb{X}\to\mathbb{A}\) be a lax monoidal functor between them, endowed with natural transformations \(\psi_{0}\colon J\to F1\) and \(\psi_{2}\colon FX\otimes FY\to F(X\otimes Y)\). The following profunctors, \(\mathbb{A}\rtimes_{F}\mathbb{X}\colon\mathbb{A}\times\mathbb{X}\to\mathbb{A}\times\mathbb{X}\) and
\(\mathbb{X}\ltimes_{F}\mathbb{A}\colon\mathbb{X}\times\mathbb{A}\to\mathbb{X}\times \mathbb{A}\) determine two promonads, and therefore two Kleisli categories._
\[\mathbb{A}\rtimes_{F}\mathbb{X}(A,X;B,Y)=\int^{M\in\mathbb{X}}\mathbb{A}(A;B\otimes FM)\times\mathbb{X}(M\otimes X;Y);\] \[\mathbb{X}\ltimes_{F}\mathbb{A}(X,A;Y,B)=\int^{M\in\mathbb{X}}\mathbb{A}(A;FM\otimes B)\times\mathbb{X}(M\otimes X;Y);\]
_These two Kleisli categories are \((\mathbb{A},\mathbb{X})\) and \((\mathbb{X},\mathbb{A})\)-bimodular, respectively._
Proof.: See Appendix, Proposition D.4. The construction uses the laxity of the monoidal functor.
## 5 String Diagrams of Internal Diagrams
The tubular 3-dimensional cobordisms of internal diagrams were first described as a Frobenius algebra by Bartlett, Douglas, Schommer-Pries and Vicary [3]. We are indebted to this first introduction, which made internal diagrams into a convenient graphical notation in topological quantum field theory [3]. Internal diagrams themselves were later given an explicit semantics in a monoidal bicategory of pointed profunctors; this was the subject of the second author's contribution to _Applied Category Theory 2020_ [45]. An important aspect of the syntax of internal diagrams is their 3-dimensional nature: the syntax not only contains string diagrams, but also reductions between them.
We introduce here a novel syntactic presentation of _internal diagrams_ that has the advantage of treating each piece of an internal diagram (including the closing and opening of tubes) as a separate entity in a tricategory. As a consequence, we are later able to introduce for the first time a more refined semantics in terms of a tricategory of _pointed bimodular profunctors_.
**Definition 5.1**.: A _polygraph_, \(\mathcal{G}\), is the signature for the string diagrams of a monoidal category. It consists of a set of objects, \(\mathcal{G}_{obj}\), and a set of morphisms \(\mathcal{G}(A_{0},...,A_{n};B_{0},...,B_{m})\) between any two lists of objects, \(A_{0},...,A_{n},B_{0},...,B_{m}\in\mathcal{G}_{obj}\).
**Definition 5.2**.: The _syntactic tricategory of internal diagrams_ over a polygraph \(\mathcal{G}\) is the tricategory \(\mathbf{G}\) presented by the cells in Figure 9. In other words, it contains two 0-cells, \(\mathcal{I}\) and \(\mathcal{G}\), in white and blue in the figure, respectively. It contains a 1-cell \(A\colon\mathcal{G}\to\mathcal{G}\) for each object \(A\in\mathcal{G}_{obj}\) and two 1-cells,
Figure 8: Semantics for functor boxes in terms of pointed bimodular profunctors.
\(L_{\bullet}\colon\mathcal{I}\to\mathcal{G}\) and \(R_{\bullet}\colon\mathcal{G}\to\mathcal{I}\) forming two \(2\)-adjunctions \((L_{\bullet})\dashv(R_{\bullet})\) and \((R_{\bullet})\dashv(L_{\bullet})\) up to a \(3\)-cell. It contains the following \(2\)-cells,
* see Vicary and Heunen [24] for a reference on \(2\)-adjunctions and the swallowtail equations;
* two \(2\)-cells, \(A^{\pm}\colon L_{\bullet}\,\sharp A\,\sharp R_{\bullet}\to\operatorname{id}\) and \(A_{\pm}\colon\operatorname{id}\to L_{\bullet}\,\sharp A\,\sharp R_{\bullet}\), for each object \(A\in\mathcal{G}_{obj}\);
## 6 Conclusions
Collages of string diagrams provide an abundant graphical calculus. Functor boxes, tensors of bimodular categories and internal diagrams all exist in the graphical calculus of collages. Their technical underpinning is complex: we characterized them as diagrams of pointed bimodular profunctors, but these arrange themselves into a tricategory, which may be difficult to reason about.
Apart from introducing the technique of collages and formalizing multiple extensions to string diagrams, we would like to call attention to the techniques we use: most of our results on soundness and completeness of diagrams are arranged into adjunctions, which allows us to prove them by reusing the better-known results on soundness and completeness for monoidal categories and bicategories.
Related work. An important line of research revolves around _module categories_ and _fusion categories_, certain enriched categories with actions, with applications in topological quantum field theories [19, 20, 40]. Especially relevant and recent is Hoek's work, which constructs diagrams for a bimodule category [25, Theorem 3.5.2]. We follow the more elementary notion of bimodular category, called "_biactegory_" in the taxonomy of Capucci and Gavranovic [10]. Cockett and Pastro [14] have used instead _linear actions_ for concurrency, and even though we take inspiration from their work, their approach is more sophisticated and expressive than our toy example demonstrating bimodular categories (Figure 5).
Most work has been presented for some particular cases of collages: functor boxes have been extensively employed, but never reduced to string diagrams [15, 37]; internal diagrams have served both quantum theory and category theory [3, 26, 31], and can be given semantics into pointed profunctors [45], but again a presentation as string diagrams was missing. A convenient algebra of lenses [4, 5], a particular type of incomplete diagram, has been recently introduced [21], but this is still independent of the semantics of arbitrary internal diagrams.
Finally, the first author has published a blog post that accompanies this manuscript [9].
Further work. It should be possible to "destrictify" many of the results of this paper. We have only presented a 1-adjunction between strict bimodular categories and bipointed 2-categories; a higher adjunction would allow us to reuse coherence for bicategories to automatically obtain coherence for bimodular categories. We have marked throughout the paper the conjectures where further work is warranted.
We conjecture that pointed bimodular profunctors also form a compact closed tricategory, with the dual of each monoidal category being the _reverse monoidal category_, \(A\otimes_{\mathit{rev}}B=B\otimes A\). Even though it may be conceptually clear what a compact closed tricategory should be, it is technically challenging to come up with a concrete definition for it in terms of coherence equations.
Figure 10: Evaluating a comb in terms of internal string diagrams.
## Acknowledgements
The authors want to thank David A. Dalrymple for discussion on the string diagrammatic interpretation of functor boxes; and Matteo Capucci for several insightful conversations about notions of 2-dimensional profunctor, that helped us understand how to tie disparate aspects of this story together.
Dylan Braithwaite was supported by an Industrial CASE studentship from the UK Engineering and Physical Sciences Research Council (EPSRC) and the National Physical Laboratory. Mario Roman was supported by the European Union through the ESF Estonian IT Academy research measure (2014-2020.4.05.19-0001).
|
2307.03312
|
Reconstruction of generic anisotropic stiffness tensors from partial
data around one polarization
|
We study inverse problems in anisotropic elasticity using tools from
algebraic geometry. The singularities of solutions to the elastic wave equation
in dimension $n$ with an anisotropic stiffness tensor have propagation
kinematics captured by so-called slowness surfaces, which are hypersurfaces in
the cotangent bundle of $\mathbb{R}^n$ that turn out to be algebraic varieties.
Leveraging the algebraic geometry of families of slowness surfaces we show
that, for tensors in a dense open subset in the space of all stiffness tensors,
a small amount of data around one polarization in an individual slowness
surface uniquely determines the entire slowness surface and its stiffness
tensor. Such partial data arises naturally from geophysical measurements or
geometrized versions of seismic inverse problems. Additionally, we explain how
the reconstruction of the stiffness tensor can be carried out effectively,
using Gr\"obner bases. Our uniqueness results fail for very symmetric (e.g.,
fully isotropic) materials, evidencing the counterintuitive claim that inverse
problems in elasticity can become more tractable with increasing asymmetry.
|
Maarten V. de Hoop, Joonas Ilmavirta, Matti Lassas, Anthony Várilly-Alvarado
|
2023-07-06T21:51:55Z
|
http://arxiv.org/abs/2307.03312v1
|
# Reconstruction of generic anisotropic stiffness tensors from partial data around one polarization
###### Abstract.
We study inverse problems in anisotropic elasticity using tools from algebraic geometry. The singularities of solutions to the elastic wave equation in dimension \(n\) with an anisotropic stiffness tensor have propagation kinematics captured by so-called slowness surfaces, which are hypersurfaces in the cotangent bundle of \(\mathbb{R}^{n}\) that turn out to be algebraic varieties. Leveraging the algebraic geometry of families of slowness surfaces we show that, for tensors in a dense open subset in the space of all stiffness tensors, a small amount of data around one polarization in an individual slowness surface uniquely determines the entire slowness surface and its stiffness tensor. Such partial data arises naturally from geophysical measurements or geometrized versions of seismic inverse problems. Additionally, we explain how the reconstruction of the stiffness tensor can be carried out effectively, using Grobner bases. Our uniqueness results fail for very symmetric (e.g., fully isotropic) materials, evidencing the counterintuitive claim that inverse problems in elasticity can become more tractable with increasing asymmetry.
2020 Mathematics Subject Classification: Primary 86-10, 86A22, 14D06; Secondary 53Z05, 14P25, 14-04
## 1. Introduction
Inverse problems in anisotropic elasticity are notoriously challenging: a lack of natural symmetry leaves one with few tools to approach them. In this paper we embrace and harness asymmetry with the help of algebraic geometry, and develop a method to address inverse problems around the reconstruction of anisotropic stiffness tensors from a relatively small amount of empirical data. Along the way we prove a surprisingly strong uniqueness result in anisotropic elastic inverse problems, aided by the specific properties of albite, an abundant feldspar mineral in Earth's crust. We view our results as the beginning of a fruitful interaction between the fields of inverse problems and modern algebraic geometry.
Microlocal analysis, describing the geometry of wave propagation, and algebraic geometry, describing the geometry of zero sets of polynomials, become linked through the slowness polynomial, which is the determinant of the principal symbol of the elastic wave operator. The vanishing set of this polynomial is the slowness surface, which describes the velocities of differently polarized waves in different directions. Notably, we show that for a _generic_ anisotropic material
1. the polarizations of waves travelling through the material, corresponding to different sheets of the slowness surface, are coupled: a small Euclidean open subset of the slowness surface for a single polarization determines the whole slowness surface for all polarizations;
2. one can reconstruct the stiffness tensor field of the material from a slowness polynomial.
### The model
We work in \(\mathbb{R}^{n}\) for any \(n\); physical applications arise typically when \(n=2\) or \(3\).
#### 1.1.1. Waves in anisotropic linear elasticity
Linear elasticity posits that when a material is strained from a state of equilibrium, the force returning the system to equilibrium depends linearly on the displacement experienced. The displacement is described by the strain tensor (a symmetric \(n\times n\) matrix) and the restoring force by a stress tensor (also a symmetric \(n\times n\) matrix). Hooke's law, valid for small displacement to good accuracy, states that stress depends linearly on strain, and the coefficients of proportionality are gathered in a stiffness tensor. To be within the framework of linear elasticity, the displacement should be small, but one can also see the linear theory as a linearization of a more complicated underlying model.
Thus, the stiffness tensor of a material at a point \(x\in\mathbb{R}^{n}\) is a linear map \(\mathbf{c}=\mathbf{c}(x)\colon\mathbb{R}^{n\times n}\to\mathbb{R}^{n\times n}\) mapping strain \(\varepsilon\in\mathbb{R}^{n\times n}\) (describing infinitesimal deformations) to stress \(\sigma\in\mathbb{R}^{n\times n}\) (describing the infinitesimal restoring force), given in components as
\[\sigma_{ij}=\sum_{k,l=1}^{n}c_{ijkl}\varepsilon_{kl}.\]
Since both \((\sigma_{ij})\) and \((\varepsilon_{kl})\) are symmetric \(n\times n\) matrices, we must have
\[c_{ijkl}=c_{jikl}=c_{ijlk}. \tag{1}\]
This is the so-called minor symmetry of the stiffness tensor. In addition, the stiffness tensor is itself a symmetric linear map between symmetric matrices, which can be encoded in components as
\[c_{ijkl}=c_{klij}. \tag{2}\]
This condition is known as the major symmetry of the stiffness tensor. Scaling by the density \(\rho=\rho(x)\) of a material does not affect any of these properties, which leads to the reduced stiffness tensor \(\mathbf{a}=\mathbf{a}(x)\), whose components are \(a_{ijkl}:=\rho^{-1}c_{ijkl}\). Finally, a stiffness tensor is positive definite; combining the symmetries (1) and (2) and positivity leads to the following definition formalizing the properties just described.
**Definition 1**.: We say that \(\mathbf{a}=(a_{ijkl})\in\mathbb{R}^{n\times n\times n\times n}\) is a stiffness tensor if
\[a_{ijkl}=a_{jikl}=a_{klij}. \tag{3}\]
If, in addition, for every non-zero symmetric matrix \(A\in\mathbb{R}^{n\times n}\) we have
\[\sum_{i,j,k,l=1}^{n}a_{ijkl}A_{ij}A_{kl}>0, \tag{4}\]
then we say that \(\mathbf{a}\) is positive.
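As a quick numerical sanity check of Definition 1 (our addition, not part of the paper), one can test the symmetries (3) and the positivity condition (4) directly; the helper names below are hypothetical, and (4) is checked via the Gram matrix of the quadratic form on symmetric matrices.

```python
import numpy as np

def is_stiffness_tensor(a, tol=1e-12):
    """Check the symmetries (3): a_ijkl = a_jikl = a_klij."""
    return (np.allclose(a, a.transpose(1, 0, 2, 3), atol=tol)
            and np.allclose(a, a.transpose(2, 3, 0, 1), atol=tol))

def is_positive(a, tol=1e-12):
    """Check (4): A -> sum a_ijkl A_ij A_kl is positive definite on symmetric matrices."""
    n = a.shape[0]
    basis = []
    for i in range(n):
        for j in range(i, n):
            E = np.zeros((n, n))
            E[i, j] = E[j, i] = 1.0
            basis.append(E)
    G = np.array([[np.einsum('ijkl,ij,kl->', a, B, C) for C in basis] for B in basis])
    return bool(np.all(np.linalg.eigvalsh(G) > tol))

# Example: a (density-reduced) isotropic tensor with Lame parameters lam, mu > 0.
n, lam, mu = 3, 2.0, 1.0
d = np.eye(n)
a = (lam * np.einsum('ij,kl->ijkl', d, d)
     + mu * (np.einsum('ik,jl->ijkl', d, d) + np.einsum('il,jk->ijkl', d, d)))
print(is_stiffness_tensor(a), is_positive(a))  # expected: True True
```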
**Notation 2**.: The set of stiffness tensors in \(\mathbb{R}^{n\times n\times n\times n}\) is denoted \(E(n)\), and the subset of positive ones is denoted \(E^{+}(n)\). Both sets carry a natural Euclidean topology.
Denote the displacement from equilibrium of a material with stiffness tensor \(\mathbf{c}\) at point \(x\in\mathbb{R}^{n}\) and time \(t\in\mathbb{R}\) by \(u(x,t)\in\mathbb{R}^{n}\). The time evolution of \(u(x,t)\) is governed by the elastic wave equation
\[\sum_{j,k,l=1}^{n}\frac{\partial}{\partial x_{j}}\bigg{(}c_{ijkl}(x)\frac{ \partial}{\partial x_{k}}u_{l}(x,t)\bigg{)}-\rho(x)\frac{\partial^{2}}{ \partial t^{2}}u_{i}(x,t)=0, \tag{5}\]
which can also be written as \(\Box u=0\), where \(\Box=\Box_{\mathrm{c},\rho}\) is the matrix-valued elastic wave operator.
#### 1.1.2. Singularities and the principal symbol
Let \(\xi\in T_{x}^{*}\mathbb{R}^{n}\) be the momentum variable dual to \(x\in\mathbb{R}^{n}\) and let \(\omega\in\mathbb{R}\) be the dual variable of time \(t\in\mathbb{R}\). Following the terminology of microlocal analysis, a function \(u(x,t)\) is said to be singular at a point \((x_{0},t_{0})\) if \(u(x,t)\) is not a \(C^{\infty}\)-smooth function in any neighborhood of the point \((x_{0},t_{0})\). A more precise description of singularities is given by the wave front set \(\mathrm{WF}(u)\) of the function \(u(x,t)\), which consists of the points \((x,\xi,t,\omega)\) for which \(u(x,t)\) is non-smooth at \((x,t)\in\mathbb{R}^{n}\times\mathbb{R}\) in the direction \((\xi,\omega)\in\mathbb{R}^{n}\times\mathbb{R}\). See [10] for more details.
The singularities of \(u(x,t)\) propagate by the null bicharacteristic flow of the matrix-valued principal symbol of \(\square\):
\[\sigma(\square)_{il}(x,\xi,t,\omega)=-\sum_{j,k=1}^{n}c_{ijkl}(x)\xi_{j}\xi_{k }+\rho(x)\delta_{il}\omega^{2}.\]
A propagating singularity is annihilated by the principal symbol, so a point
\[(x,t,\xi,\omega)\in(\mathbb{R}^{n}\times\mathbb{R})\times(\mathbb{R}^{n} \times\mathbb{R})\]
can be in the wave front set of a solution \(u(x,t)\) of the elastic wave equation \(\square u(x,t)=0\) only when
\[\det(\sigma(\square)(x,\xi,t,\omega))=0. \tag{6}\]
Due to the homogeneity of the equation of motion, the frequency of oscillation has no effect on the propagation of singularities. It is therefore convenient to replace the momentum \(\xi\in T_{x}^{*}\mathbb{R}^{n}\) with the slowness vector \(\mathbf{p}=(p_{1},\dots,p_{n}):=\omega^{-1}\xi\in T_{x}^{*}\mathbb{R}^{n}\).
To make (6) explicit, we recall that the Christoffel matrix \(\Gamma(x,\mathbf{p})\) is the \(n\times n\) matrix whose \(il\)-th entry is
\[\Gamma(x,\mathbf{p})_{il}=\sum_{j,k=1}^{n}a_{ijkl}(x)p_{j}p_{k}. \tag{7}\]
By (1) and (2), the Christoffel matrix is symmetric. If the stiffness tensor is positive, then the Christoffel matrix is positive definite. With this notation, the principal symbol becomes simply
\[\sigma(\square)=\omega^{2}\rho(x)(\Gamma(x,\mathbf{p})-I_{n}),\]
where \(I_{n}\) is the \(n\times n\) identity matrix, and condition (6) can be rewritten as
\[\det(\Gamma(x,\mathbf{p})-I_{n})=0. \tag{8}\]
See Figure 1 for an example of the set of \(\mathbf{p}\)'s that satisfy this condition.
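To illustrate (7) and (8) concretely (a sketch we add here; the helper names are ours), the Christoffel matrix and the slowness polynomial can be built symbolically from any array of stiffness components:

```python
import sympy as sp

def christoffel(a, p):
    """Gamma(p)_il = sum_{j,k} a[i][j][k][l] * p_j * p_k, as in (7)."""
    n = len(p)
    return sp.Matrix(n, n, lambda i, l: sum(a[i][j][k][l] * p[j] * p[k]
                                            for j in range(n) for k in range(n)))

def slowness_polynomial(a, p):
    """det(Gamma(p) - I_n), whose zero set is the slowness surface, as in (8)."""
    return sp.expand((christoffel(a, p) - sp.eye(len(p))).det())

# Toy example in n = 2 (symmetric but not positive): only a_1111 and a_2222 nonzero.
p1, p2 = sp.symbols('p1 p2')
b11, b33 = sp.symbols('b11 b33')
a = [[[[0] * 2 for _ in range(2)] for _ in range(2)] for _ in range(2)]
a[0][0][0][0], a[1][1][1][1] = b11, b33
print(slowness_polynomial(a, [p1, p2]))
# b11*b33*p1**2*p2**2 - b11*p1**2 - b33*p2**2 + 1
```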
_Remark 3_.: Equation (8) can also be argued physically by freezing the stiffness tensor \(c(x)\) and density \(\rho(x)\) to constant values \(c(x_{0})\) and \(\rho(x_{0})\) and writing a plane wave Ansatz for the displacement field. This is less rigorous but relies on the same underlying ideas and leads to the same condition. The principal symbol can also be understood as a description of how the operator acts on plane waves.
#### 1.1.3. Polarization, slowness, and velocity
The set of points
\[S_{x}=\{\mathbf{p}\in T_{x}^{*}\mathbb{R}^{n}:\det(\Gamma(x,\mathbf{p})-I_{n} )=0\}\]
is called the slowness (hyper)surface at the point \(x\). A point \(\mathbf{p}\in T_{x}^{*}\mathbb{R}^{n}\) belongs to the slowness surface exactly when \(1\) is an eigenvalue of \(\Gamma(x,\mathbf{p})\); since the Christoffel matrix is \(2\)-homogeneous
in \(\mathbf{p}\), the slowness surface encodes the eigenvalue information of the Christoffel matrix. See Figure 1 for an example of a slowness surface when \(n=2\), where \(S_{x}\) is a curve.
_A priori_, the slowness surface contains no information about the eigen_vectors_ of the Christoffel matrix. These vectors are the polarizations of singularities and correspond to the direction of oscillation, whereas the slowness vector corresponds roughly to the direction of propagation. Our method is suited for situations where we observe only the singularities in space but not their polarizations. Singularities without polarization can aptly be called unpolarized phonons; the phonon is the particle (or wave packet) corresponding to the displacement field in wave-particle duality. The eigenvalues of the Christoffel matrix give rise to Hamiltonians -- one for each polarization -- that determine the time evolution of unpolarized phonons. The slowness surface is the union of the unit level sets of these Hamiltonians; it might not split cleanly into \(n\) branches and \(n\) well-defined and smooth Hamiltonians, due to degenerate eigenvalues, so we treat the slowness surface as a single object.
#### 1.1.4. An algebraic view
Considering the slowness polynomial \(\det(\Gamma(x,\mathbf{p})-I_{n})\) as a polynomial of degree \(2n\) in the variables \(p_{1},\ldots,p_{n}\), the slowness surface is an object of a very algebraic nature. Our core algebraic result, Theorem C, is that for generic \(\mathbf{a}\) the slowness surface is an _irreducible algebraic variety_.
As our focus is on the analysis on a fiber of the cotangent bundle rather than the whole bundle, from now on we consider the point \(x\in\mathbb{R}^{n}\) fixed and drop it from the notation. Thus, for example, the Christoffel matrix will henceforth be denoted \(\Gamma(\mathbf{p})\). Similarly, we call the reduced stiffness tensor \(\mathbf{a}=\rho^{-1}\mathbf{c}\) the stiffness tensor for simplicity.
### Departure point
The goal of a typical inverse problem in anisotropic linear elasticity is to reconstruct the stiffness tensor field from some kind of boundary measurement -- or to prove that the field is uniquely determined by ideal boundary data. A good first step is to analyze the propagation of singularities microlocally from travel time data or hyperbolic Cauchy data1. This turns the analytic inverse problem into a geometric one, where the task is to recover the geometry governing the propagation of singularities.
Footnote 1: The hyperbolic Cauchy data is the set of all Dirichlet and Neumann boundary values of all solutions to the elastic wave equation. It is the graph of the Dirichlet-to-Neumann map.
In the anisotropic elastic setting this geometry is quite complicated. The innermost branch of the slowness surface, called the qP or quasi-pressure branch (see Figure 1), determines a Finsler geometry when the highest eigenvalue of the Christoffel matrix is non-degenerate. Finding the Finsler function on some subset of the tangent bundle amounts to finding a subset of the slowness surface at some or all points. For other polarizations (qS), the unit cosphere -- which is a branch of the slowness surface -- may fail to be convex. It is also common for the slowness surface to have singular points where two branches meet (see [Ilm] for details), which is an obstruction to having a smooth and globally defined Finsler geometry for slower polarizations.
Despite these issues, at most points \(x\in\mathbb{R}^{n}\) and in most directions \(\mathbf{p}\in T_{x}^{*}\mathbb{R}^{n}\) the Christoffel matrix \(\Gamma(x,\mathbf{p})\) has \(n\) different eigenvalues. In the neighborhood of such a point \((x,\mathbf{p})\) on the cotangent bundle the elastic wave equation (5) can be diagonalized microlocally and any solution splits nicely into \(n\) different polarizations. It is also this non-degenerate setting where our description of propagation of singularities is valid without additional caveats.
Ideally, the solution of such a geometric inverse problem produces a qP Finsler geometry in full. The unit cosphere of this geometry at each point is the qP branch of the slowness surface, e.g., [1, 1]. In some cases the recovery is not full but only a part of the slowness surface can be reconstructed; see [1, 1, 1]. In several applications one measures only the arrival times of the fastest waves which give the travel times related to the qP polarized waves. Further investigation will surely lead to mathematical results that provide full or partial information of the slowness surface of a single polarization. For our purposes, it is irrelevant where the partial knowledge of the slowness surface comes from, only that this information is indeed accessible.
Our contribution is to take the next step: We prove that generically a small subset of one branch2 of the slowness surface determines uniquely the entire slowness surface (with all branches) and the stiffness tensor field. No polarization information is required as an input to our methods. Taking the determinant of the principal symbol amounts to ignoring polarization
information. However, we can fill in the polarizations of the singularities _after_ reconstructing the stiffness tensor field.
### Algebraic goals
The \(n\) different branches of the slowness surface, each corresponding at least locally to a polarization, are coupled together by the simple fact that they are all in the vanishing set of the same polynomial -- the slowness polynomial. If we can show that this polynomial is irreducible (i.e., cannot be written as a product of two polynomials in a non-trivial way), then a small open subset of the slowness surface can be completed to the whole slowness surface by taking its Zariski closure, i.e., taking the vanishing set of a polynomial of the right shape that interpolates the known small set. Physically, this means that a small (Euclidean) open subset of the slowness surface, e.g., an open subset of the sheet of the slowness surface associated to qP-waves, determines both the entire sheet, as well as the sheets of the slowness surface associated to qS-polarized waves. Even though the sheet associated to qP-polarized waves and the sheets associated to qS-polarized waves are disjoint in the Euclidean topology, we will show that for stiffness tensors in a generic set these sheets all lie in the same connected component in the complex Zariski topology of the slowness surface.
We note that determining all the sheets of a slowness surface from a small open subset of the qP-sheet is impossible for _some_ stiffness tensors. For example, fully isotropic stiffness tensors parametrized by the two Lame parameters, which describe many real materials, give rise to slowness polynomials of the form
\[P(\mathbf{p})=(c_{P}^{2}\left|\mathbf{p}\right|^{2}-1)(c_{S}^{2}\left| \mathbf{p}\right|^{2}-1)^{n-1}, \tag{9}\]
where \(c_{P}\) and \(c_{S}\) are the pressure and shear wave speeds of the material respectively. Such a polynomial is manifestly reducible. The slowness surface for such materials consists of two concentric spheres, the inner one of which is degenerate if \(n>2\). The two radii are independent. If we know a small subset of the pressure sphere, taking the Zariski closure completes it into the whole sphere but contains no information about the other sphere. This, however, is an exceptional property of a highly symmetric stiffness tensor. We therefore set out to prove that the slowness polynomial is irreducible for _most_ stiffness tensors.
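For completeness, here is the short computation behind (9) (our addition): writing the density-reduced isotropic tensor in terms of the Lame parameters \(\lambda\) and \(\mu\) (our notation), the Christoffel matrix diagonalizes explicitly,

\[a_{ijkl}=\lambda\,\delta_{ij}\delta_{kl}+\mu\,(\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk})\quad\Longrightarrow\quad\Gamma(\mathbf{p})_{il}=\sum_{j,k=1}^{n}a_{ijkl}p_{j}p_{k}=\mu\left|\mathbf{p}\right|^{2}\delta_{il}+(\lambda+\mu)\,p_{i}p_{l}.\]

The eigenvalues of \(\Gamma(\mathbf{p})\) are \((\lambda+2\mu)\left|\mathbf{p}\right|^{2}\) (eigenvector \(\mathbf{p}\)) and \(\mu\left|\mathbf{p}\right|^{2}\) (multiplicity \(n-1\), eigenvectors orthogonal to \(\mathbf{p}\)), so

\[\det(\Gamma(\mathbf{p})-I_{n})=\big((\lambda+2\mu)\left|\mathbf{p}\right|^{2}-1\big)\big(\mu\left|\mathbf{p}\right|^{2}-1\big)^{n-1},\]

which is (9) with \(c_{P}^{2}=\lambda+2\mu\) and \(c_{S}^{2}=\mu\).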
Once the whole slowness surface or slowness polynomial is recovered, we can use it to reconstruct the stiffness tensor. The reason why this works is less obvious, but it will be explained in SS2 once we lay out some preliminaries.
From the point of view of inverse problems in analysis, geometry, and linear elasticity, we need two algebraic results that are straightforward to state but less straightforward to prove. These key results are given in SS1.4 below, with the definitions for Theorem B detailed in SS3. The algebraic results mentioned below hinge on the more technical results given in SS5.3.
### Main results
We have two main results on inverse problems:
**Theorem A** (Uniqueness of stiffness tensor from partial data).: _Let the dimension of the space be \(n\in\{2,3\}\). There is an open and dense subset \(U\subset E^{+}(n)\) of the set of positive stiffness tensors so that the following holds: If \(\mathbf{a}\in U\), then any non-empty Euclidean relatively open subset of the slowness surface corresponding to \(\mathbf{a}\) determines the stiffness tensor \(\mathbf{a}\) uniquely._
The next theorem states roughly that, generically, a two-layer model of a planet with piecewise constant stiffness tensor field is uniquely determined by geometric travel time data for rays traversing the interior of the planet. The two layers are a highly simplified model of the Earth with a mantle and a core, both with homogeneous but anisotropic materials.
We consider two data types for a two-layer model where in the outer layer \(\Omega\setminus\omega\subset\mathbb{R}^{n}\) the stiffness tensor is equal to \(\mathbf{A}\) and in the inner layer \(\omega\subset\mathbb{R}^{n}\) the stiffness tensor is equal to \(\mathbf{a}\). The first data type, denoted \(\mathcal{T}=\mathcal{T}(\omega,\Omega,\mathbf{a},\mathbf{A})\), contains only travel time information between boundary points, while the second data type, denoted \(\mathcal{D}=\mathcal{D}(\omega,\Omega,\mathbf{a},\mathbf{A})\), contains also directional information at the boundary. The precise definitions of these data sets are given below in Section 3. By \(X_{1}\approx X_{2}\) for two subsets \(X_{i}\) of the same Euclidean space, we mean that there are dense open subsets \(U_{i}\subset X_{i}\) for \(i=1,2\) so that \(U_{1}=U_{2}\). See SS3 for details of the various kinds of rays and data, and the notion of admissibility.
**Theorem B** (Two-layer model).: _Let \(n\in\{2,3\}\). For both \(i=1,2\), let \(\omega_{i}\), \(\Omega_{i}\subset\mathbb{R}^{n}\) be nested domains such that \(\Omega_{1}=\Omega_{2}\eqqcolon\Omega\). There is an open and dense subset \(U\) of stiffness tensors in the space of admissible pairs such that the following holds._
_For both \(i=1,2\), suppose that \(\mathbf{a}_{i}\), \(\mathbf{A}_{i}\in U\) are admissible nested stiffness tensors._
_If \(\mathcal{T}(\omega_{1},\Omega,\mathbf{a}_{1},\mathbf{A}_{1})\approx\mathcal{T} (\omega_{2},\Omega,\mathbf{a}_{2},\mathbf{A}_{2})\), then \(\mathbf{A}_{1}=\mathbf{A}_{2}\) and \(\omega_{1}=\omega_{2}\)._
_If \(\mathcal{D}(\omega_{1},\Omega,\mathbf{a}_{1},\mathbf{A}_{1})\approx\mathcal{D} (\omega_{2},\Omega,\mathbf{a}_{2},\mathbf{A}_{2})\), then \(\mathbf{A}_{1}=\mathbf{A}_{2}\), \(\omega_{1}=\omega_{2}\), and additionally \(\mathbf{a}_{1}=\mathbf{a}_{2}\)._
_Remarks 4_.:
1. The equivalence \(\approx\) ensures that exceptional rays play no role -- possible exceptions include grazing rays, zero transmission or reflection coefficients, and cancellation after multipathing. All these issues are typically rare; e.g. the transmission and reflection coefficients are in many cases analytic functions with isolated zeros [13]. If one assumes that the full data sets are equal, then one might prove results like ours by only comparing which rays are missing from the data or behave exceptionally. To ensure that the conclusion is reached using well-behaved rays and that the omission or inclusion of a small set of rays (whether well or ill behaved) is irrelevant, we only assume that the data sets are almost equal.
2. The second part of the statement should be seen in light of the first one. Namely, the second assumption implies the first one, so the stiffness tensor \(\mathbf{A}\) is uniquely determined by the data. Given this stiffness tensor in the outer layer, the knowledge of the slowness vector is almost the same as knowing the velocity vector or the tangential component or length of either one. Full directional data can be used to find which slowness vectors are admissible, and that in turn generically determines the stiffness tensor. The exact details are unpleasant, so the statement of Theorem B has been optimized for readability rather than strength. The result as stated above can be adapted to other measurement scenarios.
3. All polarizations are included in the data of Theorem B. The proof is based only on the fastest one (qP), but ignoring the other ones is not trivial. Even if incoming and outgoing waves at the surface are qP, there can be segments in other polarizations due to mode conversions inside.
4. Theorem B was stated geometrically. Geometric data of this kind may be obtained from boundary data for the elastic wave equation (5). It is not uncommon in geophysics to work directly with geometric ray data; see e.g. [10].
These results on inverse problems hinge on two algebraic results:
**Theorem C** (Generic irreducibility).: _The slowness polynomial associated to a generic stiffness tensor in dimension \(n\in\{2,3\}\) is irreducible over \(\mathbb{C}\)._
The word generic in Theorem C is used in the sense of algebraic geometry: Let \(m\) be the number of distinct components of a reduced stiffness tensor. A generic set of stiffness tensors is a subset of \(\mathbb{R}^{m}\) whose complement is a finite union of algebraic subsets of \(\mathbb{R}^{m}\) of dimension \(\leq m-1\), each of which is defined by a finite set of polynomials. In our case, this complement parametrizes the collection of stiffness tensors giving rise to reducible slowness surfaces; it is not empty: we already know that the slowness polynomial (9) associated to a fully isotropic stiffness tensor is reducible.
In the spirit of modern algebraic geometry, we prove Theorem C by considering _all_ slowness polynomials at once, in a family
\[f\colon\mathbf{S}\to\mathbb{A}_{\mathbb{R}}^{k},\]
where the coordinates at a point \(y=y(x)\in\mathbb{R}^{k}=\mathbb{A}^{k}(\mathbb{R})\) record the coefficients of a single slowness polynomial at \(x\in\mathbb{R}^{n}\), and the corresponding fiber \(f^{-1}(y)\subset\mathbf{S}\) is the slowness surface \(S_{x}\). The principle of _generic geometric integrality_, due to Grothendieck, ensures that if the map \(f\) satisfies a few technical hypotheses, then there is a Zariski open subset \(Y\subset\mathbb{A}_{\mathbb{R}}^{k}\) such that the individual slowness surfaces in \(f^{-1}(Y)\) are irreducible, even over \(\mathbb{C}\) (equivalently, their corresponding slowness polynomials are \(\mathbb{C}\)-irreducible). A Zariski open subset of \(\mathbb{A}_{\mathbb{R}}^{k}\) is dense for the Euclidean topology, as long as it is not empty. Thus, we must check by hand the existence of a single \(\mathbb{C}\)-irreducible slowness polynomial to conclude. In the case \(n=3\), we use an explicit stiffness tensor, modelling a specific physical mineral, albite, to verify the non-emptiness of the set \(Y\). This task is accomplished by reduction modulo a suitably chosen prime (see Lemma 14--we have included a proof for lack of a reference to this tailored lemma, although we hope it will be useful in other inverse-problem contexts).
Our second algebraic result shows that the correspondence between stiffness tensors and slowness polynomials is generically one-to-one:
**Theorem D** (Generic Unique Reconstruction).: _The slowness polynomial associated to a generic stiffness tensor in dimension \(n\in\{2,3\}\) determines the stiffness tensor._
We give two proofs of Theorem D in the case \(n=2\). The second proof ensues from studying the following question: Given a polynomial with real coefficients, what conditions must its coefficients satisfy for it to be a slowness polynomial? In other words: can we characterize slowness polynomials among all polynomials? We answer this question when \(n=2\). When \(n=3\), we give a proof of Theorem D that does not rely on a characterization of slowness polynomials among all polynomials, because we lack sufficient computational power to crunch through the symbolic calculations required to complete this characterization.
Theorems C and D are proved in SS5; however, for the benefit of readers without much exposure to algebraic geometry, we explain the principles involved in SS2 in the case \(n=2\), to avoid clutter.
### Orthorhombic stiffness tensors
All told, at one end of the symmetry spectrum, a fully isotropic stiffness tensor gives rise to a reducible slowness polynomial, and at the other end, a general fully anisotropic stiffness tensor gives rise to an irreducible slowness polynomial. What happens with stiffness tensors endowed with some symmetry that lies somewhere in between full isotropy and full anisotropy? In other words: Are there classes of stiffness tensors endowed with a small amount of symmetry whose generic member still gives rise to an irreducible slowness polynomial? In SS5.7, we show that a slowness surface in dimension \(3\) associated to a generic orthorhombic stiffness tensor (Definition 19) is irreducible: see Theorem 20.
In contrast with a general fully anisotropic slowness polynomial, a slowness polynomial associated to a material with orthorhombic symmetry can arise from more than one stiffness tensor,
as had already been observed by Helbig and Carcione in [10]. They gave sufficient conditions for the existence of what they called "anomalous companions" of an orthorhombic stiffness tensor. We tighten their results to show that their conditions are also generically necessary:
**Theorem E**.: _For a generic orthorhombic slowness polynomial \(\tilde{P}(\mathbf{p})\) (22) there are exactly four (not necessarily positive) orthorhombic stiffness tensors that give rise to \(\tilde{P}(\mathbf{p})\)._
Following Helbig and Carcione [10], we explain how to verify if an anomalous companion of an orthorhombic stiffness tensor satisfies the positivity condition required by the physical world. Surprisingly, the criterion involves Cayley's cubic surface, a central object in the classical algebro-geometric canon.
### Related results
Motivated by seismological considerations, inverse boundary value problems in elasticity have been studied since 1907, when Wiechert and Zoeppritz posed them in their paper "Uber Erdbebenwellen"(On Earthquake Waves) [11]; see also [1, 2]. The first breakthrough results in _elastostatics_ for isotropic media were by Nakamura and Uhlmann [13], followed by results by Eskin and Ralston [1] for full boundary data and Imanuvilov, Uhlmann and Yamamoto [12] for partial boundary data. Stefanov, Uhlmann and Vasy [14] studied recovery of smooth \(P\)- and \(S\)-wave speeds in the elastic wave equation from knowledge of the Dirichlet-to-Neumann map in the isotropic case, see also [1] on the reconstruction of the density tensor. Beretta, Francini and Vessella [1] studied the stability of solutions to inverse problems. Uniqueness results for the tomography problem with interfaces, again, in the isotropic case, in the spirit of Theorem B, were considered by Caday, de Hoop, Katsnelson and Uhlmann [1], as well as by Stefanov, Uhlmann and Vasy [14].
The related inverse travel-time problem (for the corresponding Riemannian metric) has been studied in isotropic media using integral geometry in [15, 16, 17, 18] and metric geometry in [1].
Anisotropic versions of the _dynamic_ inverse boundary value problem have been studied in various different settings. Rachele and Mazzucato studied the geometric invariance of elastic inverse problems in [19]. In [19, 19], they showed that for certain classes of transversely isotropic media, the slowness surfaces of which are ellipsoidal, two of the five material parameters are partially determined by the dynamic Dirichlet-to-Neumann map. Before that, Sacks and Yakhno [14] studied the inverse problem for a layered anisotropic half space using the Neumann-to-Dirichlet map as input data, observing that only a subset of the components of the stiffness tensor can be determined expressed by a "structure" condition. De Hoop, Nakamura and Zhai [10] studied the recovery of piecewise analytic density and stiffness tensor of a three-dimensional domain from the local dynamic Dirichlet-to-Neumann map. They give global uniqueness results if the material is transversely isotropic with known axis of symmetry or orthorhombic with known symmetry planes on each subdomain. They also obtain uniqueness of a fully anisotropic stiffness tensor, assuming that it is piecewise constant and that the interfaces which separate the subdomains have curved portions. Their method of proof requires the use of the (finite in time) Laplace transform. Following this transform, some of the techniques are rooted in the proofs of analogous results for the inverse boundary value problem in the elastostatic case [19, 18]. Carstea, Nakamura and Oksanen [1] avoid the use of the Laplace transform and obtain uniqueness, in the piecewise constant case, closer to the part of the boundary where the measurements are taken for shorter observation times and further away from that part of the boundary for longer times.
Under certain conditions, the dynamic Dirichlet-to-Neumann map determines the scattering relation, allowing a transition from analytic to geometric data. Geometric inverse problems in anisotropic elasticity have received increasing attention over the past few years. In the case of transversely anisotropic media the elastic parameters are determined by the boundary travel times of all the polarizations [11, 12]. A compact Finsler manifold is determined by its boundary distance map [11], a foliated and reversible Finsler manifold by its broken scattering relation [11], and one can reconstruct the Finsler geometry along a geodesic from sphere data [11].
Linearizing about the isotropic case, that is, assuming "weak" anisotropy, leads to the mixed ray transform for travel times between boundary points. De Hoop, Saksala, Uhlmann and Zhai [11] proved "generic" uniqueness and stability for this transform on a three-dimensional compact simple Riemannian manifold with boundary, characterizing its kernel. Before that De Hoop, Saksala and Zhai [11] studied the mixed ray transform on simple 2-dimensional Riemannian manifolds. Linearizing about an isotropic case but only with conformal perturbations leads to a scalar geodesic ray transform problem on a reversible Finsler manifold, and the injectivity of that transform was established in [10] in spherical symmetry.
Assuming lack of symmetry naturally leads to the occurrence of singular points in the slowness surface. This is inherent in exploiting algebraic geometry to obtain the results in this paper. However, the singular points lead to fundamental complications in the application of microlocal analysis to a parametrix construction revealing the geometry of elastic wave propagation, see [12, 13, 14]. The points are typically associated with conical refraction [15, 16, 17, 18, 19, 20].
### Outline
The paper is organized as follows. In SS2 we explain the general algebro-geometric framework underlying the proofs of Theorems C and D in dimension 2, where the number of parameters is small, making it easier to digest the ideas involved. We pivot in SSSS3-4 to the study of inverse problems, setting up precise definitions for Theorems A and B in SS3 and giving proofs for these theorems in SS4. In SS5 we prove Theorems C and D, as well as a version of Theorem C for stiffness tensors with orthorhombic symmetry.
### Acknowledgements
We thank Mohamed Barakat, Olivier Benoist, Daniel Erman, Bjorn Poonen, and Karen Smith for useful discussions around the algebro-geometric content of the paper. M.V. de H. was supported by the Simons Foundation under the MATH + X program, the National Science Foundation under grant DMS-2108175, and the corporate members of the Geo-Mathematical Imaging Group at Rice University. J. I. was supported by Academy of Finland grants 332890 and 351665. M. L. was supported by Academy of Finland grants 284715 and 303754. J. I. and M. L. were also supported by the Centre of Inverse Modelling and Imaging. A. V.-A. was partially supported by NSF grants DMS-1902274 and DMS-2302231, as well as NSF Grant DMS-1928930 while in residence at MSRI/SLMath in Berkeley (Spring 2023). He thanks Francois Charles for hosting him at the Departement de Mathematiques et Applications of the Ecole Normale Superieur in Summer 2022, where part of this paper was written.
## 2. Algebro-geometric principles: a case study in dimension 2
To illustrate how algebraic geometry bears on inverse problems in anisotropic elasticity, we consider a two-dimensional model, where a slowness surface is in fact a curve. An anisotropic stiffness tensor in this case is determined by six general real parameters: although such a tensor
\(\mathbf{a}=(a_{ijkl})\in\mathbb{R}^{16}\) has 16 components, once we take into account the major and minor symmetries of the tensor, only 6 distinct parameters are left. Following Voigt's notation (see SS5.1), they are
\[\begin{aligned} b_{11}&=a_{1111}, & b_{22}&=a_{2222},\\ b_{12}&=a_{1122}=a_{2211}, & b_{23}&=a_{2212}=a_{2221}=a_{1222}=a_{2122},\\ b_{13}&=a_{1112}=a_{1121}=a_{1211}=a_{2111}, & b_{33}&=a_{1212}=a_{2112}=a_{1221}=a_{2121}.\end{aligned}\]
The corresponding Christoffel matrix is
\[\Gamma(\mathbf{p})=\begin{pmatrix}b_{11}p_{1}^{2}+2b_{13}p_{1}p_{2}+b_{33}p_{2 }^{2}&b_{13}p_{1}^{2}+(b_{33}+b_{12})p_{1}p_{2}+b_{23}p_{2}^{2}\\ b_{13}p_{1}^{2}+(b_{33}+b_{12})p_{1}p_{2}+b_{23}p_{2}^{2}&b_{33}p_{1}^{2}+2b_{2 3}p_{1}p_{2}+b_{22}p_{2}^{2}\end{pmatrix}. \tag{10}\]
The slowness curve is the vanishing set of \(\det\left(\Gamma(\mathbf{p})-I_{2}\right)\) in \(\mathbb{R}^{2}\):
\[S:=\{\mathbf{p}\in\mathbb{R}^{2}\mid\det\left(\Gamma(\mathbf{p})-I_{2}\right) =0\}.\]
The polynomial \(\det\left(\Gamma(\mathbf{p})-I_{2}\right)\) has degree 4 in the variables \(p_{1}\) and \(p_{2}\), but not every monomial of degree \(\leq 4\) in \(p_{1}\) and \(p_{2}\) appears in it. In fact,
\[\det\left(\Gamma(\mathbf{p})-I_{2}\right)=c_{1}p_{1}^{4}+c_{2}p_{1}^{3}p_{2}+c _{3}p_{1}^{2}p_{2}^{2}+c_{4}p_{1}^{2}+c_{5}p_{1}p_{2}^{3}+c_{6}p_{1}p_{2}+c_{ 7}p_{2}^{4}+c_{8}p_{2}^{2}+c_{9}, \tag{11}\]
for some constants \(c_{i}\in\mathbb{R}\), \(i=1\ldots,9\). These constants are not arbitrary; they have to satisfy relations like \(c_{9}=1\), or the more vexing
\[-4c_{1}^{2} +4c_{1}c_{4}c_{8}-c_{1}c_{6}^{2}+8c_{1}c_{7}-4c_{1}c_{8}^{2}-c_{ 2}^{2}-2c_{2}c_{5}\] \[+2c_{2}c_{6}c_{8}-c_{3}c_{6}^{2}-4c_{4}^{2}c_{7}+2c_{4}c_{5}c_{6} +4c_{4}c_{7}c_{8}-c_{5}^{2}-c_{6}^{2}c_{7}-4c_{7}^{2}=0 \tag{12}\]
in order to arise from a stiffness tensor (see SS2.4 and SS5.6).
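To make the link between (10), (11) and the relations recorded later in (14) concrete, here is a small SymPy sketch (ours, not the authors'); it expands \(\det(\Gamma(\mathbf{p})-I_{2})\) and reads off two of the coefficients \(c_{i}\).

```python
import sympy as sp

b11, b12, b13, b22, b23, b33, p1, p2 = sp.symbols('b11 b12 b13 b22 b23 b33 p1 p2')

# Christoffel matrix (10)
off = b13*p1**2 + (b33 + b12)*p1*p2 + b23*p2**2
Gamma = sp.Matrix([[b11*p1**2 + 2*b13*p1*p2 + b33*p2**2, off],
                   [off, b33*p1**2 + 2*b23*p1*p2 + b22*p2**2]])

P = sp.expand((Gamma - sp.eye(2)).det())          # the slowness polynomial (11)
print(sp.factor(P.coeff(p1, 4)))                  # c_1: b11*b33 - b13**2
print(sp.expand(P.coeff(p1, 2).coeff(p2, 2)))     # c_3: b11*b22 - b12**2 - 2*b12*b33 + 2*b13*b23
print(P.subs({p1: 0, p2: 0}))                     # c_9 = 1
```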
### Goals
We want to know two things:
1. For general choices of the parameters \(b_{ij}\), the curve \(S\subset\mathbb{R}^{2}\) is irreducible, even over the complex numbers.
2. For general choices of \(c_{1},\ldots,c_{9}\) corresponding to a slowness polynomial, there is a unique set of \(b_{ij}\)'s giving rise to the polynomial (11), and this polynomial can be explicitly computed if we approximate \(c_{1}\ldots,c_{9}\) by rational numbers.
We accomplish both of these goals by leveraging powerful results in both the theory of schemes3, as developed by Alexander Grothendieck, and the application of computational techniques under the banner of Grobner bases.
Footnote 3: Schemes over a field form a category that is richer and more flexible than the corresponding category of varieties.
### Generic Irreducibility
To realize our first goal, we must compactify a slowness curve and consider _all slowness curves at once_, in a family. This allows us to apply a suite of results from scheme theory, including "general geometric integrality". So think now of the parameters \(b_{ij}\) as indeterminates, and the entries of \(\Gamma(\mathbf{p})\) as belonging to the polynomial ring \(A[p_{0},p_{1},p_{2}]\) with coefficients in the polynomial ring \(A:=\mathbb{R}[b_{11},b_{12},b_{13},b_{22},b_{23},b_{33}]\). Then the homogenized slowness polynomial is given by
\[\begin{split}\tilde{P}(\mathbf{p})&=\det(\Gamma( \mathbf{p})-p_{0}^{2}I_{2})\\ &=(b_{11}b_{33}-b_{13}^{2})p_{1}^{4}+2(b_{11}b_{23}-b_{12}b_{13}) p_{1}^{3}p_{2}+(b_{11}b_{22}-b_{12}^{2}-2b_{12}b_{33}+2b_{13}b_{23})p_{1}^{2}p_{2}^{2} \\ &\quad-(b_{11}+b_{33})p_{1}^{2}p_{0}^{2}+2(b_{13}b_{22}-b_{12}b_{ 23})p_{1}p_{2}^{3}-2(b_{13}+b_{23})p_{1}p_{2}p_{0}^{2}+(b_{22}b_{33}-b_{23}^{2} )p_{2}^{4}\\ &\quad-(b_{22}+b_{33})p_{2}^{2}p_{0}^{2}+p_{0}^{4}.\end{split} \tag{13}\]
It is a homogeneous polynomial of degree \(4\) in the variables \(p_{0}\), \(p_{1}\) and \(p_{2}\), with coefficients in the polynomial ring \(A\). Its zero-locus thus traces a curve in the projective plane \(\mathbb{P}^{2}_{A}\) with homogeneous coordinates \((p_{0}:p_{1}:p_{2})\) and coefficient ring \(A\). Since \(\mathbb{P}^{2}_{A}\) is naturally isomorphic, as an \(\mathbb{R}\)-scheme, to the fiber product \(\mathbb{A}^{6}_{\mathbb{R}}\times\mathbb{P}^{2}_{\mathbb{R}}\), the vanishing locus of \(\tilde{P}(\mathbf{p})\) is also naturally a hypersurface in the product of an affine space over \(\mathbb{R}\) with coordinates \(b_{11},\ldots,b_{33}\) and the projective plane over \(\mathbb{R}\) with homogeneous coordinates \((p_{0}:p_{1}:p_{2})\). Let
\[\mathbf{S}:= \left\{\mathbf{p}=(p_{0}:p_{1}:p_{2})\in\mathbb{P}^{2}_{A}: \tilde{P}(\mathbf{p})=0\right\}\] \[\simeq \left\{((b_{11},\ldots,b_{33}),(p_{0}:p_{1}:p_{2}))\in\mathbb{A} ^{6}_{\mathbb{R}}\times\mathbb{P}^{2}_{\mathbb{R}}:\tilde{P}(\mathbf{p})=0 \right\}.\]
We call \(\mathbf{S}\) the slowness bundle. Let \(\iota\colon\mathbf{S}\hookrightarrow\mathbb{A}^{6}_{\mathbb{R}}\times \mathbb{P}^{2}_{\mathbb{R}}\) be the inclusion map. Composing \(\iota\) with the projection \(\pi_{1}\colon\mathbb{A}^{6}_{\mathbb{R}}\times\mathbb{P}^{2}_{\mathbb{R}} \rightarrow\mathbb{A}^{6}_{\mathbb{R}}\) gives a fibration
\[f:=\pi_{1}\circ\iota\colon\mathbf{S}\rightarrow\mathbb{A}^{6}_{\mathbb{R}}\]
that we call the slowness curve fibration. For a point \(\mathbf{b}=(b_{11},\ldots,b_{33})\in\mathbb{A}^{6}(\mathbb{R})=\mathbb{R}^{6}\), the fiber \(f^{-1}(\mathbf{b})\) is the curve of degree \(4\) in the projective plane \(\mathbb{P}^{2}_{\mathbb{R}}\) obtained by specializing the parameters \(b_{ij}\) in \(\tilde{P}(\mathbf{p})\) according to the coordinates of \(\mathbf{b}\).
A theorem of Grothendieck known as "generic geometric integrality" [EGAIV-3, Theoreme 12.2.4(viii)] allows us to conclude that the set of points \(\mathbf{b}\in\mathbb{R}^{6}\) such that the fiber \(f^{-1}(\mathbf{b})\) is irreducible, even over \(\mathbb{C}\), is an open subset for the Zariski topology of \(\mathbb{A}^{6}_{\mathbb{R}}\). This leaves two tasks for us: to show that the map \(f\) satisfies the hypotheses of [EGAIV-3, Theoreme 12.2.4(viii)] (i.e., that it is proper, flat, and of finite presentation), and that the open set in \(\mathbb{A}^{6}_{\mathbb{R}}\) furnished by generic geometric irreducibility is not empty! For the latter, because the target \(\mathbb{A}^{6}_{\mathbb{R}}\) is an irreducible variety, it suffices to produce a single choice of parameters \(b_{ij}\) such that the corresponding slowness curve is irreducible over \(\mathbb{C}\).
For this final step, we use a standard number-theoretic strategy: reduction modulo a well-chosen prime. To wit, we choose a slowness polynomial \(\tilde{P}(\mathbf{p})\) with all \(b_{ij}\in\mathbb{Z}\); to check it is irreducible over \(\mathbb{C}\), it suffices to show it is irreducible over a fixed algebraic closure \(\overline{\mathbb{Q}}\) of \(\mathbb{Q}\) (see [20, Tag 020J]). Furthermore, a putative factorization would have to occur already over a finite Galois field extension \(K\subset\overline{\mathbb{Q}}\) of \(\mathbb{Q}\), because all the coefficients involved in such a factorization would be algebraic numbers, and therefore have finite degree over \(\mathbb{Q}\). Reducing the polynomial modulo a nonzero prime ideal \(\mathfrak{p}\) in the ring of integers \(\mathcal{O}_{K}\) of \(K\), by applying the unique ring homomorphism \(\mathbb{Z}\rightarrow\mathcal{O}_{K}/\mathfrak{p}\mathcal{O}_{K}=:\mathbb{F}_ {\mathfrak{p}}\) to its coefficients, we would see a factorization of \(\tilde{P}(\mathbf{p})\) in the residue polynomial ring \(\mathbb{F}_{\mathfrak{p}}[\mathbf{p}]\), namely, the reduction of the factorization that occurs over \(K\). The finite field \(\mathbb{F}_{\mathfrak{p}}\) is an extension of the finite field \(\mathbb{F}_{p}\) with \(p\) elements, where \(\mathfrak{p}\cap\mathbb{Z}=(p)\). In Lemma 14, we show that if \(\tilde{P}(\mathbf{p})\) is irreducible in the finite field \(\mathbb{F}_{p^{d}}\) of cardinality \(p^{d}\), where \(d=\deg\left(\tilde{P}(\mathbf{p})\right)\), then it is also irreducible over the finite field \(\mathbb{F}_{\mathfrak{p}}\), and hence is irreducible over \(K\), hence over \(\overline{\mathbb{Q}}\), hence over \(\mathbb{C}\). (In fact, we effectively show that \(K=\mathbb{Q}\).) What makes this strategy compelling is that \(\mathbb{F}_{p^{d}}\) is _finite_, so checking whether \(\tilde{P}(\mathbf{p})\) is irreducible in \(\mathbb{F}_{p^{d}}[\mathbf{p}]\) is a finite, fast computation in any modern computer algebra system.
_Remark 5_.: Readers versed in algebraic geometry might wonder whether it would not be easier to use "generic smoothness" [11, Corollary III.10.7] to prove that a generic slowness polynomial is irreducible. Unfortunately, in dimensions \(n\notin\{2,4,8\}\), a slowness surface is _always_ singular [Ilm], and since \(n=3\) is the only interesting case from a physical point of view, we must avoid using "generic smoothness".
### Irreducibility over \(\mathbb{C}\) vs. connectedness over \(\mathbb{R}\)
It is possible for the set \(S(\mathbb{R})\subseteq\mathbb{R}^{2}\) of real points of a slowness curve \(S\) to be disconnected in the Euclidean topology, even if the algebraic variety \(S\), considered over the complex numbers, is connected in the Zariski topology. For example, taking
\[b_{11}=10,\quad b_{12}=2,\quad b_{13}=3,\quad b_{22}=12,\quad b_{23}=5,\quad b_{ 33}=20,\]
we obtain the slowness curve (using coordinates \(x=p_{1}\) and \(y=p_{2}\)):
\[S:\quad 1-30x^{2}+191x^{4}-16xy+88x^{3}y-32y^{2}+66x^{2}y^{2}+52xy^{3}+215y^{4} =0.\]
This curve has two real connected components (see Figure 1). However, as a complex algebraic variety, \(S\) is irreducible, hence connected. Its natural compactification in the complex projective plane is a smooth genus \(3\) complex curve, which is a \(3\)-holed connected \(2\)-dimensional real manifold.
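The displayed quartic can be reproduced by substituting these parameter values into the Christoffel matrix (10) and expanding; the following SymPy sketch (an illustration we add) does exactly that.

```python
import sympy as sp

p1, p2 = sp.symbols('p1 p2')
b11, b12, b13, b22, b23, b33 = 10, 2, 3, 12, 5, 20
off = b13*p1**2 + (b33 + b12)*p1*p2 + b23*p2**2
Gamma = sp.Matrix([[b11*p1**2 + 2*b13*p1*p2 + b33*p2**2, off],
                   [off, b33*p1**2 + 2*b23*p1*p2 + b22*p2**2]])
print(sp.expand((Gamma - sp.eye(2)).det()))
# 191*p1**4 + 88*p1**3*p2 + 66*p1**2*p2**2 - 30*p1**2 + 52*p1*p2**3 - 16*p1*p2
#   + 215*p2**4 - 32*p2**2 + 1   (i.e. the curve S with x = p1, y = p2)
```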
### Unique reconstruction
Our second goal, unique reconstruction of generic stiffness tensors, has both a theoretical facet and a computational facet, which are in some sense independent. Comparing the coefficients of (13) and (11), after dehomogenizing by setting \(p_{0}=1\), we ideally want to solve the system of simultaneous equations
\[\begin{split} c_{1}&=(b_{11}b_{33}-b_{13}^{2}),& c_{5}=2(-b_{12}b_{23}+b_{13}b_{22}),\\ c_{2}&=2(b_{11}b_{23}-b_{12}b_{13}),& c_{6} =-2(b_{13}+b_{23}),\\ c_{3}&=(b_{11}b_{22}-b_{12}^{2}-2b_{12}b_{33}+2b_{13 }b_{23}),& c_{7}=(b_{22}b_{33}-b_{23}^{2}),\\ c_{4}&=-(b_{11}+b_{33}),& c_{8}=-(b_{22}+b_{33}). \end{split} \tag{14}\]
That is, given constants \(c_{1},\dots,c_{8}\), we would like to determine all \(6\)-tuples \((b_{11},\dots,b_{33})\) that satisfy (14). To this end, we homogenize the system in a slightly different way than before with a new variable \(r\), so that all the right hand sides are homogeneous polynomials of degree \(2\):
\[\begin{split} c_{1}&=(b_{11}b_{33}-b_{13}^{2}),& c_{5}=2(-b_{12}b_{23}+b_{13}b_{22}),\\ c_{2}&=2(b_{11}b_{23}-b_{12}b_{13}),& c_{6} =-2r(b_{13}+b_{23}),\\ c_{3}&=(b_{11}b_{22}-b_{12}^{2}-2b_{12}b_{33}+2b_{1 3}b_{23}),& c_{7}=(b_{22}b_{33}-b_{23}^{2}),\\ c_{4}&=-r(b_{11}+b_{33}),& c_{8}=-r(b_{22}+b_{33}), \\ c_{9}&=r^{2}.\end{split} \tag{15}\]
This homogenization allows us to define a _rational map_ of complex projective spaces
\[\begin{split} g\colon\mathbb{P}^{6}_{[b_{11},\dots,b_{33},r]}&\dashrightarrow\mathbb{P}^{8}_{[c_{1},\dots,c_{8},c_{9}]},\\ [b_{11},\dots,b_{33},r]&\mapsto[b_{11}b_{33}-b_{13}^{2},\dots,-r(b_{22}+b_{33}),r^{2}].\end{split} \tag{16}\]
We would like to show that a general nonempty fiber of this map consists of exactly one point; this would mean that among all tuples \((c_{1},\dots,c_{9})\) that are possibly the coefficients of a slowness polynomial, most tuples arise from exactly one stiffness tensor.
The map \(g\) is not defined at points where the right hand sides of (15) simultaneously vanish. Call this locus \(\Pi\subset\mathbb{P}^{6}\). The algebro-geometric operation of _blowing up_\(\mathbb{P}^{6}\) at \(\Pi\) gives a scheme \(X:=\operatorname{Bl}_{\Pi}(\mathbb{P}^{6})\) together with a projection map \(X\to\mathbb{P}^{6}\) that resolves the indeterminacy locus of \(g\), in the sense that the composition \(X\to\mathbb{P}^{6}\dashrightarrow\mathbb{P}^{8}\) can be extended to a full morphism
\(\tilde{g}\colon X\to\mathbb{P}^{8}\), so that the triangle formed by the blow-up map \(X\to\mathbb{P}^{6}\), the rational map \(g\colon\mathbb{P}^{6}\dashrightarrow\mathbb{P}^{8}\) (the dashed arrow), and \(\tilde{g}\) commutes.
Let \(Y\) be the image of \(\tilde{g}\). Using upper semi-continuity of the fiber dimension function for the surjective map \(\tilde{g}\colon X\to Y\), we show there is a Zariski open subset of \(Y\) whose fibers are zero-dimensional. Then, using upper semi-continuity of the degree function for finite morphisms, we show there is a Zariski open subset of \(Y\) whose fibers consist of precisely one point. This will complete the proof of generic unique reconstruction of stiffness tensors.
As a bonus, we can use an effective version of Chevalley's Theorem [1], implemented in the package ZariskiFrames [1], to compute equations for the image of \(\tilde{g}\). This, for example, explains how we arrived at the constraint (12) for the tuple \((c_{1},\dots,c_{9})\). Tuples of coefficients \((c_{1},\dots,c_{9})\) in the image of \(\tilde{g}\) are said to give rise to admissible slowness polynomials.
### In practice: Use Grobner bases
As a computational matter, given a specific tuple \((c_{1},\dots,c_{9})\) stemming from an admissible slowness polynomial, reconstructing its stiffness tensor can be done essentially instantaneously using Grobner bases. We work over the field \(\mathbb{Q}\) so that we can use any one of several computational algebra systems with Grobner bases packages, e.g., magma, Macaulay2, Singular, Maple, or Sage. Thanks to Buchberger's criterion (see [1, SS2.6]), it is possible to check the result of our calculations by hand, albeit laboriously.
A (reduced) Grobner basis for an ideal \(I\subset\mathbb{Q}[b_{11},b_{12},b_{13},b_{22},b_{23},b_{33}]\) under the lexicographic ordering \(b_{11}>\dots>b_{33}\) is a basis for \(I\) whose leading terms generate the ideal consisting of _all_ leading terms of all polynomials in \(I\). In the event that there is exactly one tuple \((b_{11},b_{12},b_{13},b_{22},b_{23},b_{33})\in\mathbb{Q}^{6}\) that satisfies the relations defining \(I\), this Grobner basis will consist of the linear polynomials that cut out precisely this tuple. For example, given the admissible slowness polynomial
\[\tilde{P}(\mathbf{p})=-3625p_{1}^{4}+1590p_{1}^{3}p_{2}+7129p_{1}^{2}p_{2}^{2} -50p_{1}^{2}p_{0}^{2}+8866p_{1}p_{2}^{3}+304p_{1}p_{2}p_{0}^{2}-8049p_{2}^{4} -14p_{2}^{2}p_{0}^{2}+p_{0}^{4}\]
the following short piece of magma code [1] reconstructs the components of the stiffness tensor:
the following short piece of magma code [1] reconstructs the components of the stiffness tensor:

    P<b11,b12,b13,b22,b23,b33> := PolynomialRing(Rationals(),6);
    relations := [
        -3625 - (b11*b33 - b13^2),
        1590 - 2*(b11*b23 - b13*b12),
        7129 - (b11*b22 + 2*b13*b23 - b12^2 - 2*b33*b12),
        -50 + (b11 + b33),
        8866 - 2*(b13*b22 - b23*b12),
        304 + 2*(b13 + b23),
        -8049 - (b33*b22 - b23^2),
        -14 + (b33 + b22)
    ];
    I := ideal<P | relations>;
    GroebnerBasis(I);

The computation takes less than a millisecond, and returns

    [
        b11 - 20, b12 - 39, b13 + 65, b22 + 16, b23 + 87, b33 - 30
    ]
indicating the parameters \((b_{11},b_{12},b_{13},b_{22},b_{23},b_{33})=(20,39,-65,-16,-87,30)\) of the _only_ stiffness tensor that gives rise to the specific polynomial \(\tilde{P}(\mathbf{p})\) above, as the reader can check.
As mentioned above, this kind of Grobner basis computation is independent of the theoretical result asserting that a generic slowness polynomial arises from a unique stiffness tensor. In fact, these results complement each other nicely: Theorem D implies that a Grobner basis computation will succeed when applied to a generic slowness polynomial.
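The same reconstruction can also be reproduced with open-source tools. The SymPy sketch below (ours, not part of the paper) mirrors the magma code above and should return the same linear generators.

```python
import sympy as sp

b11, b12, b13, b22, b23, b33 = sp.symbols('b11 b12 b13 b22 b23 b33')
relations = [
    -3625 - (b11*b33 - b13**2),
    1590 - 2*(b11*b23 - b13*b12),
    7129 - (b11*b22 + 2*b13*b23 - b12**2 - 2*b33*b12),
    -50 + (b11 + b33),
    8866 - 2*(b13*b22 - b23*b12),
    304 + 2*(b13 + b23),
    -8049 - (b33*b22 - b23**2),
    -14 + (b33 + b22),
]
G = sp.groebner(relations, b11, b12, b13, b22, b23, b33, order='lex')
print(list(G))
# the same generators as the magma output above (possibly in a different order):
# [b11 - 20, b12 - 39, b13 + 65, b22 + 16, b23 + 87, b33 - 30]
```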
## 3. The two-layer model
### Nested domains and stiffness tensors
We say that \(\omega\), \(\Omega\subset\mathbb{R}^{n}\) are nested domains if they are smooth, strictly convex, and bounded domains such that \(\bar{\omega}\subset\Omega\). Let \(\mathbf{a}\) and \(\mathbf{A}\in E^{+}(n)\) be two stiffness tensors associated to the regions \(\omega\) and \(\Omega\setminus\omega\), respectively. We call these tensors admissible nested stiffness tensors if the following conditions hold:
1. For both tensors the largest eigenvalue of the Christoffel matrix \(\Gamma(\mathbf{p})\) is simple for all \(\mathbf{p}\neq 0\). We refer to the corresponding subset of the slowness surface as the qP-branch.
2. The qP-branch of the slowness surface of \(\mathbf{a}\) is inside that of \(\mathbf{A}\). (In other words, the slowness surfaces \(s\), \(S\subset\mathbb{R}^{n}\) of \(\mathbf{a}\) and \(\mathbf{A}\) are the boundaries of nested domains in the sense defined above.)
The two domains and the stiffness tensors are illustrated in Figure 2.
If the stiffness tensors \(\mathbf{a}\) and \(\mathbf{A}\) are isotropic, then the nestedness condition above simply means that the qP wave speed of \(\mathbf{a}\) is strictly higher than that of \(\mathbf{A}\). If \(\omega\) and \(\Omega\) are concentric balls, then the condition is equivalent to the Herglotz condition interpreted in a distributional sense; cf. [1]. The Herglotz condition is widely used in the theory of geometric inverse problems as a generalization of the condition that a Riemannian manifold with boundary has no trapped geodesics.
The piecewise constant stiffness tensor field corresponding to a pair of nested domains \(\omega,\Omega\) and admissible nested stiffness tensors \(\mathbf{a},\mathbf{A}\) is the function \(\Omega\to E^{+}(n)\) taking the value \(\mathbf{a}\) in \(\omega\) and \(\mathbf{A}\) in \(\Omega\setminus\omega\).
### Admissible rays
Intuitively speaking, among all physically realizable piecewise linear ray paths, admissible rays are geometrically convenient ray paths. We make no claims about the amplitudes of the corresponding waves, but we expect most admissible rays to have non-zero amplitudes. We will describe separately the behaviour where the stiffness tensor is smooth and the behaviour at the interfaces \(\partial\omega\) and \(\partial\Omega\). Admissible ray paths will be piecewise linear paths satisfying certain conditions.
Suppose first that the stiffness tensor \(\mathbf{a}(x)\in E^{+}(n)\) is smooth. For every \((x,\mathbf{p})\in T^{*}\Omega\) the Christoffel matrix \(\Gamma_{\mathbf{a}}(x,\mathbf{p})\) has \(n\) positive eigenvalues, possibly with repetitions. For any \(m\in\{1,\ldots,n\}\), let \(G_{\mathbf{a}}^{m}\subset T^{*}M\) denote the subset where the \(m\)-th eigenvalue \(\lambda_{\mathbf{a}}^{m}(x,\mathbf{p})\) of \(\Gamma_{\mathbf{a}}(x,\mathbf{p})\)
is simple. In this set the eigenvalue defines a smooth Hamiltonian \(H^{m}_{\mathbf{a}}(x,\mathbf{p})=\frac{1}{2}[\lambda^{m}_{\mathbf{a}}(x,\mathbf{p})^{2}-1]\). An admissible ray path is the projection of an integral curve of the Hamiltonian flow from \(T^{*}\Omega\) to the base \(\Omega\). (We refer to the cotangent vector in the fiber as the momentum.)
In our setting \(\mathbf{a}\) is constant, so these integral curves are straight lines with constant speed parametrization. The speed depends on direction and polarization (or the eigenvalue index \(m\) or the branch of the slowness surface -- these are all equivalent).
At an interface two ray paths meet. We set two conditions for the incoming and outgoing _paths_:
1. Neither path is tangent to the interface. (This is convenient but ultimately unimportant.)
2. The component of the momentum tangent to the interface is the same for both incoming and outgoing rays.
The two meeting rays can be on the same or opposite sides of the interface, corresponding to reflected and refracted rays, respectively. The polarization is free to change.
The outer boundary \(\partial\Omega\) is also an interface. There the rays may either terminate ("refract to/from outside \(\Omega\)") or be reflected back in.
An admissible ray is a piecewise linear path, and we refer to the linear segments as legs.
_Remark 6_.: Our definition of an admissible ray path excludes degenerate polarizations (which correspond to singular points on the slowness surface) and rays travelling along an interface. In the proof of Theorem B it is irrelevant whether these are included; their exclusion is not used nor would their inclusion be an issue. Rays tangent to an interface are irrelevant in the same way, as are the rare cases where the reflection or transmission coefficient is zero despite there being a kinematically possible ray.
### Data
We consider two kinds of data: pure travel time data (to be denoted by \(\mathcal{T}\)) and travel time data decorated with direction information (to be denoted by \(\mathcal{D}\)).
The full data set corresponding to the four parameters \((\omega,\Omega,\mathbf{a},\mathbf{A})\) is the set
\[\mathcal{D}(\omega,\Omega,\mathbf{a},\mathbf{A})=\{(t,x,p,y,q);\,x,y\in\partial\Omega,\,\text{there is an admissible ray path from }x\text{ to }y\text{ with initial momentum }p\text{, final momentum }q\text{, and total length }t\}.\]
The pure travel time data set without directional information is
\[\mathcal{T}(\omega,\Omega,\mathbf{a},\mathbf{A})=\{(t,x,y);(t,x,p,y,q)\in \mathcal{D}(\omega,\Omega,\mathbf{a},\mathbf{A})\text{ for some }p\in T^{*}_{x}\bar{\Omega}\text{ and }q\in T^{*}_{y}\bar{\Omega}\}.\]
These two sets may be seen as subsets: \(\mathcal{D}(\omega,\Omega,\mathbf{a},\mathbf{A})\subset\mathbb{R}\times( \partial T^{*}\bar{\Omega})^{2}\) and \(\mathcal{T}(\omega,\Omega,\mathbf{a},\mathbf{A})\subset\mathbb{R}\times( \partial\Omega)^{2}\).
## 4. Inverse problems proofs
This section is devoted to the proof of the inverse problems results, Theorems A and B. We will make use of Theorems C and D; besides them, we only need very basic algebraic geometry.
### Proof of Theorem A
The first result follows easily from our algebraic results, Theorems C and D, which we prove in §5.
Proof of Theorem A.: Theorem C implies that there is an open and dense (in the Zariski sense) set \(W_{1}\subset E(n)\) so that the slowness polynomial \(P_{\mathbf{a}}\) is irreducible for all \(\mathbf{a}\in W_{1}\). Theorem D
implies that there is an open and dense set \(W_{2}\subset E(n)\) so that if \(P_{\mathbf{a}_{1}}=P_{\mathbf{a}_{2}}\) for \(\mathbf{a}_{1},\mathbf{a}_{2}\in W_{1}\), then \(\mathbf{a}_{1}=\mathbf{a}_{2}\).
The set \(W\coloneqq W_{1}\cap W_{2}\) is open and dense (in the Zariski sense) in \(E(n)\). If \(\mathbf{a}\in W\), then the slowness polynomial \(P_{\mathbf{a}}\) is irreducible. The Zariski-closure of the relatively open (in the Euclidean sense) subset of the slowness surface is a subvariety of the slowness surface, and it is of full dimension. Due to irreducibility this closure is the whole slowness surface. Thus for \(\mathbf{a}\in W\) a small subset of the slowness surface determines the whole slowness surface, which in turn determines the stiffness tensor.
The positivity property of the stiffness tensor was irrelevant. The claim remains true in the physically relevant open subset \(E^{+}(n)\subset E(n)\) by taking \(U=W\cap E^{+}(n)\).
### Proof of Theorem B
This proof will rely on Theorem A, but will not otherwise use any algebraic geometry. We split the proof into three parts, proven separately below:
1. If \(\mathcal{T}(\omega_{1},\Omega,\mathbf{a}_{1},\mathbf{A}_{1})\approx\mathcal{T }(\omega_{2},\Omega,\mathbf{a}_{2},\mathbf{A}_{2})\), then \(\mathbf{A}_{1}=\mathbf{A}_{2}\).
2. If \(\mathcal{T}(\omega_{1},\Omega,\mathbf{a}_{1},\mathbf{A}_{1})\approx\mathcal{T }(\omega_{2},\Omega,\mathbf{a}_{2},\mathbf{A}_{2})\), then \(\omega_{1}=\omega_{2}\).
3. If \(\mathcal{D}(\omega_{1},\Omega,\mathbf{a}_{1},\mathbf{A}_{1})\approx\mathcal{D }(\omega_{2},\Omega,\mathbf{a}_{2},\mathbf{A}_{2})\), then \(\mathbf{a}_{1}=\mathbf{a}_{2}\).
Roughly speaking, we will prove the first part by studying the travel times of nearby points, the second part by varying a line segment and detecting when it hits \(\partial\omega_{i}\), and the third part by peeling off the top layer to get a problem on \(\partial\omega\) that is similar to the first step. These parts are illustrated in Figure 2.
For any \(x\in\partial\Omega\), let \(\nu(x)\) denote the inward-pointing unit normal vector to the boundary \(\partial\Omega\). Given any direction \(d\in\mathbb{S}^{n-1}\), we denote
\[\partial_{d}\Omega=\{x\in\partial\Omega;d\cdot\nu(x)>0\}.\]
This is the subset of the boundary where \(d\) points inwards. The boundary of this set, \(\partial\partial_{d}\Omega\subset\partial\Omega\), is the set where \(d\) is tangent to the boundary.
Due to strict convexity of \(\Omega\) there is a unique \(\tau(x,d)>0\) for every \(x\in\partial_{d}\Omega\) so that \(x+\tau(x,d)d\in\partial\Omega\). This is the distance through \(\Omega\) starting at \(x\) in the direction \(d\). For \(x\in\partial\partial_{d}\Omega\) we set \(\tau(x,d)=0\), and we do not define the function at all where \(d\) points outwards.
For both \(i=1,2\), we denote
\[v_{i}(x,d)=\frac{\tau(x,d)}{\inf\{t>0;(t,x,x+\tau(x,d)d)\in\mathcal{T}(\omega_{ i},\Omega,\mathbf{a}_{i},\mathbf{A}_{i})\}}. \tag{17}\]
This can be seen as a speed through \(\Omega\) starting from \(x\) and ending in the direction \(d\) from \(x\), but not knowing the initial and final directions of the minimizing ray or whether the ray has reflected from the interfaces \(\partial\omega\) or \(\partial\Omega\), or whether it has met \(\partial\omega\) tangentially. This function \(v_{i}\) is a suitable form of data for the first two steps of the proof.
**Lemma 7**.: _Let \(n\geq 2\). Let \(\omega,\Omega\subset\mathbb{R}^{n}\) be nested domains. Take any \(d\in\mathbb{S}^{n-1}\) and \(x\in\partial_{d}\Omega\). Denote \(y\coloneqq x+\tau(x,d)d\)._
_Let \(\mathbf{a},\mathbf{A}\in E^{+}(n)\) be two stiffness tensors whose Christoffel matrices \(\Gamma_{\mathbf{a}}(p)\) and \(\Gamma_{\mathbf{A}}(p)\) have a simple largest eigenvalue for all \(p\neq 0\)._
1. _If_ \(\mathbf{a}=\mathbf{A}\) _or_ \((x+d\mathbb{R})\cap\bar{\omega}=\emptyset\)_, then the fastest admissible ray path between_ \(x\) _and_ \(y\) _is the qP polarized ray travelling along the straight line between the points._
2. _If_ \(\mathbf{a}\) _and_ \(\mathbf{A}\) _are admissible nested stiffness tensors and_ \((x+d\mathbb{R})\cap\omega\neq\emptyset\)_, then the shortest travel time between_ \(x\) _and_ \(y\) _is strictly larger than it would be if_ \(\mathbf{a}\) _were equal to_ \(\mathbf{A}\).
Proof.: a) The qP slowness surface is strictly convex as observed in [1], so the integral curves of the Hamiltonian flow do indeed minimize length. With a constant stiffness tensor this minimization property is global.
b) The nestedness property of the qP branches of the slowness surfaces imply that all ray paths for the tensor \(\mathbf{a}\) are slower than those of \(\mathbf{A}\) in the same direction. Therefore for every admissible ray path that meets \(\omega\) the total travel time is strictly bigger than the length of that piecewise smooth curve measured in the qP Finsler geometry of \(\mathbf{A}\). Therefore the shortest travel time of an admissible ray path has to be longer than it would be if \(\mathbf{a}\) were changed to be equal to \(\mathbf{A}\).
We will denote the data sets by \(\mathcal{D}_{i}\coloneqq\mathcal{D}(\omega_{i},\Omega,\mathbf{a}_{i},\mathbf{A}_{i})\) and \(\mathcal{T}_{i}\) similarly.
Proof of Theorem B, part 1.: The set \(U\) is taken to be that provided by Theorem A.
The functions \(v_{i}(x,d)\) of (17) are defined in the subset
\[\{(x,d)\in\partial\Omega\times\mathbb{S}^{n-1};x\in\overline{\partial_{d} \Omega}\}\]
and are continuous in a neighborhood \(\mathcal{U}\) of the subset \(\tau^{-1}(0)\), corresponding to short geodesics that do not meet \(\bar{\omega}_{i}\).
The assumption \(\mathcal{T}_{1}\approx\mathcal{T}_{2}\) implies that the functions \(v_{1}\) and \(v_{2}\) agree in an open and dense subset of \(\mathcal{U}\), and by continuity they agree on all of \(\mathcal{U}\). Thus near the boundary we may work as if \(\mathcal{T}_{1}=\mathcal{T}_{2}\).
Figure 2. A cartoon of the proof of Theorem B. In the first step, we use short geodesics near the boundary, depicted as the blue line segments to the right. In the second step, we vary this family of line segments until they hit \(\partial\omega_{i}\) and no longer exist, depicted as the dashed parts of the blue line segments. In the third step we take two points on \(\partial\omega_{i}\) and find their qP distance, depicted as the solid red line, hitting them with all possible rays through the now known mantle, depicted as the dashed red lines.
In fact, these functions \(v_{i}\) only depend on \(d\) in \(\mathcal{U}\). Fix any direction \(d\in\mathbb{S}^{n-1}\). By strict convexity of the nested domains \(\omega\) and \(\Omega\), there is a neighborhood \(Y\subset\partial\Omega\) of \(\partial\partial_{d}\Omega\) so that for all \(x\in\hat{Y}\coloneqq Y\cap\partial_{d}\Omega\) the ray starting at \(x\) in the direction \(d\) does not meet \(\bar{\omega}_{1}\cup\bar{\omega}_{2}\). (We remind the reader that the direction \(d\) is tangent to the boundary precisely in the set \(\partial\partial_{d}\Omega\). Therefore in a small neighborhood of this set the line segments in the direction \(d\) through \(\Omega\) are short.) By Lemma 7 a qP polarized ray travelling from \(x\) in the direction \(d\) minimizes the travel time between \(x\) and \(x+\tau(x,d)d\).
This implies that both functions \(v_{i}(\,\cdot\,,d)\) are constant in \(\hat{Y}\). By the assumption of the agreement of the data \(\mathcal{T}\), these two functions agree. Let us denote the shared constant value by \(v(d)\). Therefore the two models give rise to the same surfaces
\[S^{*}=\{v(d)d;d\in\mathbb{S}^{n-1}\}.\]
The surface \(S^{*}\) is the strictly convex unit sphere of the Finsler geometry corresponding to the qP polarized waves; cf. [1, § 2]. By taking the Legendre transform, the set \(S^{*}\) determines the dual sphere \(S\subset\mathbb{R}^{n}\) in the usual sense of dual norms. This cosphere \(S\) is exactly the qP branch of the slowness surface.
By assumption each \(\mathbf{A}_{i}\) is in \(U\), the open and dense set provided by Theorem A. Therefore this branch of the slowness surface determines the stiffness tensor, and so \(\mathbf{A}_{1}=\mathbf{A}_{2}\).
Proof of Theorem B, part 2.: We denote \(\mathbf{A}\coloneqq\mathbf{A}_{1}=\mathbf{A}_{2}\).
Again, fix any direction \(d\in\mathbb{S}^{n-1}\). Let \(Y_{i}^{d}\subset\partial_{d}\Omega\) be the subset where \(v_{i}(x,d)\) takes the constant value \(v(d)\); cf. part 1 of the proof. Let us denote \(\mathcal{U}^{\prime}_{i}=\{(x,d);x\in Y_{i}^{d}\}\). As the data is defined as subsets of (the real axis and two copies of) the set \(\partial\Omega\times\mathbb{S}^{n-1}\), it follows from approximate equality of the data (the assumption \(\mathcal{T}_{1}\approx\mathcal{T}_{2}\)) that
\[\overline{\mathcal{U}^{\prime}_{1}}=\overline{\mathcal{U}^{\prime}_{2}}\eqqcolon\overline{\mathcal{U}^{\prime}}.\]
We will use this set to describe the inner domains \(\omega_{i}\).
It follows from Lemma 7 and the definition of \(v_{i}\) as a directed travel time that if the line \(x+d\mathbb{R}\) meets \(\omega_{i}\), then \(x\notin Y_{i}^{d}\), and that if it does not meet \(\bar{\omega}_{i}\), then \(x\in Y_{i}^{d}\). The line \(x+d\mathbb{R}\) is tangent to \(\partial\omega_{i}\) if and only if \(x\in\partial Y_{i}^{d}\). We thus know which lines meet \(\omega_{i}\), and can write the smaller domain as
\[\omega_{i}=\Omega\setminus\overline{\bigcup_{(x,d)\in\mathcal{U}^{\prime}_{i} }(x+d\mathbb{R})}=\Omega\setminus\bigcup_{(x,d)\in\overline{\mathcal{U}^{ \prime}}}(x+d\mathbb{R}).\]
Therefore \(\omega_{1}=\omega_{2}\) as claimed.
We can rephrase the proof above loosely as follows. We may think that \(Y_{1}^{d}=Y_{2}^{d}\) (although this was not assumed to hold perfectly and for all \(d\)) and say that the two strictly convex and smooth domains \(\omega_{1}\) and \(\omega_{2}\) have the same tangent lines so they are equal.
Proof of Theorem B, part 3.: As in the previous proofs, we can essentially replace the assumption \(\mathcal{D}_{1}\approx\mathcal{D}_{2}\) with the stronger one \(\mathcal{D}_{1}=\mathcal{D}_{2}\) because we are using open subsets of the data sets rather than relying on rare features. We omit the details in this instance for clarity.
By the previous parts of the theorem, we now know that \(\mathbf{A}_{1}=\mathbf{A}_{2}\eqqcolon\mathbf{A}\) and \(\omega_{1}=\omega_{2}\eqqcolon\omega\). It remains to show that \(\mathbf{a}_{1}=\mathbf{a}_{2}\). As in part 1, it suffices to prove that some non-empty open subsets of the qP branches of the two slowness surfaces agree.
Each point of \(\partial\Omega\times\mathbb{S}^{n-1}\) defines a ray starting at the given point on \(\partial\Omega\) in the given direction in \(\mathbb{S}^{n-1}\). For any \(x\in\Omega\), there is a subset \(F_{x}\subset\partial\Omega\times\mathbb{S}^{n-1}\) so that the corresponding rays meet \(x\)
The set \(F_{x}\) may be thought of as the graph of the unit vector field on \(\partial\Omega\) pointing towards \(x\). Let \(F^{\prime}_{x}\subset F_{x}\) be the subset corresponding to rays that do not meet \(\omega\) before \(x\).
Let \(f_{\mathbf{A}}\colon\mathbb{R}^{n}\to[0,\infty)\) be the smooth and strictly convex norm whose unit sphere is the qP branch of the slowness surface corresponding to the stiffness tensor \(\mathbf{A}\). Let \(f^{*}_{\mathbf{A}}\) be the dual norm and let \(\varphi_{\mathbf{A}}\colon\mathbb{R}^{n}\to\mathbb{R}^{n}\) be the norm-preserving and homogeneous (but possibly non-linear) Legendre transformation satisfying \(f_{\mathbf{A}}(p)^{2}=\langle\varphi_{\mathbf{A}}(p),p\rangle\). For a direction \(v\in\mathbb{S}^{n-1}\), let us denote \(Q_{\mathbf{A}}(v)=\varphi_{\mathbf{A}}^{-1}(v/f^{*}_{\mathbf{A}}(v))\). In words, \(Q_{\mathbf{A}}(v)\) is the momentum corresponding to the qP polarized wave travelling in the direction \(v\) in a material given by \(\mathbf{A}\). The Legendre transform is depicted in Figure 1, where points (or arrows) on the cotangent space correspond to arrows (or points, respectively) on the tangent space.
Let us then define
\[F^{\prime\prime}_{x}=\{(z,Q_{\mathbf{A}}(v));(z,v)\in F^{\prime}_{x}\}.\]
This is the set of qP momenta (instead of directions) on the boundary so that the corresponding rays meet \(x\) without hitting \(\bar{\omega}\) first.
For each \((z,p)\in F^{\prime\prime}_{x}\) the travel time (according to the Hamiltonian flow) of the qP wave from \(z\) to \(x\) is \(f^{*}_{\mathbf{A}}(x-z)\).
Now let \(x,y\in\partial\omega\) be two distinct points and define
\[T_{i}(x,y) =\inf\{t-f^{*}_{\mathbf{A}}(x-z)-f^{*}_{\mathbf{A}}(y-w);\] \[\quad(t,z,p,w,-q)\in\mathcal{D}_{i}\] \[\quad\text{and }(z,p)\in F^{\prime\prime}_{x}\text{ and }(w,q)\in F ^{\prime\prime}_{y}\}.\]
Each ray considered here starts with a qP polarized leg from a point \(z\in\partial\Omega\) to \(x\in\partial\omega\) and ends in a similar leg from \(y\) to \(w\). As the travel times of the first and last legs are removed from the total travel time, our \(T(x,y)\) is the shortest travel time between the points \(x\) and \(y\) with an admissible ray path.
Because the qP branches of the slowness surfaces of \(\mathbf{a}_{i}\) and \(\mathbf{A}\) are nested by assumption, all momenta are available for the segments of the ray path in \(\omega\) starting at \(x\) and \(y\).
All the travel times and the geometry between \(z\) and \(x\) and also between \(y\) and \(w\) are the same between the two models by the previous two steps, and the only remaining dependence on \(i\) is what happens between \(x\) and \(y\).
We claim that when \(x\) and \(y\) are sufficiently close to each other,
\[T_{i}(x,y)=f^{*}_{\mathbf{a}_{i}}(x-y). \tag{18}\]
This means that the shortest admissible ray path between \(x\) and \(y\) is the direct qP ray within \(\omega\). This is seen as follows: If a ray path has a leg in the outer layer \(\Omega\setminus\bar{\omega}\) between \(x\) and \(y\) (which may well happen, as we do not a priori know the geometry of the rays we are looking at), then by strict convexity of \(\omega\) this leg must come all the way to the outer boundary \(\partial\Omega\). If \(x\) and \(y\) are so close to each other that \(f^{*}_{\mathbf{a}_{i}}(x-y)\) is less than the \(f^{*}_{\mathbf{A}}\)-distance between \(\partial\omega\) and \(\partial\Omega\), then any leg joining \(\partial\omega\) to \(\partial\Omega\) takes a longer time than the straightest option through \(\omega\), despite the waves being slower in \(\omega\) than in \(\Omega\setminus\bar{\omega}\). Within \(\bar{\omega}\) the shortest travel time is clearly achieved by going in a straight line with the fastest polarization; cf. the Lemma 7.
If we fix \(x\in\partial\omega\), we have found that equation (18) holds for all \(y\) in a small punctured neighborhood of \(x\) on \(\partial\omega\) for both \(i=1,2\). Because \(\mathcal{D}_{1}=\mathcal{D}_{2}\) implies \(T_{1}(x,y)=T_{2}(x,y)\), we have found that there is an open set \(Y_{x}\subset\partial\omega\) so that
\[f^{*}_{\mathbf{a}_{1}}(x-y)=f^{*}_{\mathbf{a}_{2}}(x-y)\]
for all \(y\in Y_{x}\). By strict convexity of \(\partial\omega\) the set \(Y_{x}\) contains an open set of directions, so the unit spheres of \(f^{*}_{\mathbf{a}_{1}}\) and \(f^{*}_{\mathbf{a}_{2}}\) agree on an open set. The same thus holds for \(f_{\mathbf{a}_{1}}\) and \(f_{\mathbf{a}_{2}}\) as well, and the claim \(\mathbf{a}_{1}=\mathbf{a}_{2}\) follows from Theorem A.
## 5. The algebraic geometry of families of slowness surfaces
This section contains the technical algebro-geometric arguments needed to prove Theorems C and D; it demands more expertise from the reader than §2. Our arguments use only material typically covered in a first course on scheme-theoretic algebraic geometry. Standard references for this material include [1, 1, 10, 11]. We provide copious references to specific propositions and theorems to help orient readers less familiar with schemes.
### Independent components of a stiffness tensor: Voigt notation
The major and minor symmetries of a (reduced) stiffness tensor allow for a simplification of notation that eliminates clutter, following Voigt. In dimension \(2\), one replaces pairs of indices \(ij\) by a single index \(k\) according to the rule
\[11\leadsto 1,\quad 22\leadsto 2,\quad 12\leadsto 3. \tag{19}\]
To avoid confusion, when we contract indices following this convention, we also replace the letter \(a\) with the letter \(b\): for example, the reduced stiffness tensor component \(a_{1112}=a_{(11)(12)}\) is replaced by \(b_{13}\).
In dimension \(3\) one replaces pairs of indices \(ij\) by a single index \(k\) according to the rule
\[11\leadsto 1,\quad 22\leadsto 2,\quad 33\leadsto 3,\quad 23\leadsto 4,\quad 13 \leadsto 5,\quad 12\leadsto 6. \tag{20}\]
Thus, for example, the reduced stiffness tensor component \(a_{2312}=a_{(23)(12)}\) is replaced by \(b_{46}\).
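The contraction rule (20) for an unordered pair of distinct indices can be encoded compactly. The following small magma helper is ours (the name `voigt` is hypothetical, for illustration only); it reproduces, for example, the contraction \(a_{2312}\leadsto b_{46}\) mentioned above:

```
// Voigt contraction in dimension 3: a pair {i,j} maps to i when i = j,
// and to 9 - i - j otherwise (so 23 -> 4, 13 -> 5, 12 -> 6).
voigt := func< i, j | i eq j select i else 9 - i - j >;
voigt(2,3);  // 4
voigt(1,2);  // 6
```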
Next, we count the number of independent parameters of the form \(a_{ijkl}\), or equivalently, the number of independent parameters of the form \(b_{rs}\), once we take the symmetries (3) into account. The set of distinct \(a_{ijkl}\) is in bijection with a set of unordered pairs of undirected pairs of indices \(\{1,\dots,n\}\): more precisely, a set whose elements have the form \(a_{(ij)(kl)}\), where the indices belong to \(\{1,\dots,n\}\) and one can freely commute the indices within a pair of parentheses or commute the pairs, but one cannot freely move indices from one pair to another. The number of unordered pairs of indices \(1,\dots,n\) is \(\psi(n)\coloneqq\frac{1}{2}n(n+1)\). Therefore the number of independent components of a stiffness tensor is
\[\psi(\psi(n))=\frac{1}{8}n(n+1)(n^{2}+n+2).\]
When \(n=2\), we obtain \(\psi(\psi(2))=6\), which matches our work in §2, where we saw the six independent parameters \(b_{11}\), \(b_{12}\), \(b_{13}\), \(b_{22}\), \(b_{23}\), and \(b_{33}\). In dimension \(n=3\), there are \(\psi(\psi(3))=21\) independent stiffness tensor components.
### Algebro-geometric set-up
In this section, we omit the positivity condition that a stiffness tensor satisfies (Definition 1), in order to import ideas from the scheme-theoretic formulation of algebraic geometry, following Grothendieck.
#### 5.2.1. The slowness polynomial
Let \(A\) be a finitely generated \(\mathbb{Q}\)-algebra, and let \(R:=A[p_{0},\dots,p_{n}]\) be a polynomial ring in \(n+1\) variables with coefficients in \(A\). We view the Christoffel matrix (7) as a symmetric \(n\times n\) matrix \(\Gamma(\mathbf{p})\) with entries in \(R\), whose \(il\)-th entry is
\[\Gamma(\mathbf{p})_{il}=\sum_{1\leq j,k\leq n}a_{ijkl}p_{j}p_{k},\]
and where the parameters \(a_{ijkl}\in A\) are subject to the symmetry relations (3). Denoting by \(I_{n}\) the \(n\times n\) identity matrix over \(A\), the slowness polynomial\(P(\mathbf{p})\in R\) is
\[P(\mathbf{p}):=\det(\Gamma(\mathbf{p})-I_{n}).\]
This is a polynomial of total degree \(d=2n\) in \(p_{1},\ldots,p_{n}\). The homogenized slowness polynomial \(\tilde{P}(\mathbf{p})\in R\) is obtained by setting
\[\tilde{P}(\mathbf{p}):=\det(\Gamma(\mathbf{p})-p_{0}^{2}I_{n}).\]
The completed slowness hypersurface\(\tilde{S}\) is the algebraic hypersurface in the projective space \(\mathbb{P}_{A}^{n}\) where \(\tilde{P}\) vanishes. More precisely, the quotient ring homomorphism \(A[v_{0},\ldots,v_{n}]\to A[v_{0},\ldots,v_{n}]/(\tilde{P})\) describes a closed embedding \(\tilde{S}\hookrightarrow\mathbb{P}_{A}^{n}\) via the Proj construction (see, for example, [10, SSII.2 and Exercise II.3.12]).
From now on, we specialize to the case
\[A=\mathbb{Q}[a_{ijkl}:1\leq i,j,k,l\leq n],\]
where the \(a_{ijkl}\) are indeterminates subject to the symmetry relations (3). By §5.1, the ring \(A\) is a free \(\mathbb{Q}\)-algebra on \(m:=\psi(\psi(n))=\frac{1}{8}n(n+1)(n^{2}+n+2)\) generators.
**Example 8**.: Let \(n=2\). Then \(A=\mathbb{Q}[a_{ijkl}:1\leq i,j,k,l\leq 2]\), and there are only \(\psi(\psi(2))=6\) distinct \(a_{ijkl}\)'s, which we relabel \(b_{11}\), \(b_{12}\), \(b_{13}\), \(b_{22}\), \(b_{23}\), and \(b_{33}\) using Voigt notation (19) as we did in §2. Thus, \(A\) is the polynomial ring \(\mathbb{Q}[b_{11},b_{12},b_{13},b_{22},b_{23},b_{33}]\), and the Christoffel matrix \(\Gamma(\mathbf{p})\) is given as in (10), with associated homogenized slowness polynomial \(\tilde{P}(\mathbf{p})\) as in (13). The associated completed slowness hypersurface \(\mathbf{S}\) is a quartic curve on \(\mathbb{P}_{A}^{2}\) defined by the condition \(\tilde{P}(\mathbf{p})=0\).
#### 5.2.2. The slowness fibration
Generalizing Example 8, the homogenized slowness polynomial can be viewed as a homogeneous polynomial of degree \(2n\) in the graded ring \(A[v_{0},\ldots,v_{n}]\), where \(A=\mathbb{Q}[b_{ij}:1\leq i\leq j\leq\psi(n)]\), a polynomial ring in \(m\) variables. From this perspective, the completed slowness hypersurface may be viewed as a hypersurface in the product of an affine space and a projective space:
\[\mathbf{S}=\left\{\tilde{P}(\mathbf{p})=0\right\}\subset\mathbb{P}_{A}^{n}\simeq\mathbb{A}_{\mathbb{Q}}^{m}\times\mathbb{P}_{\mathbb{Q}}^{n}=\operatorname{Spec}A\times_{\operatorname{Spec}\mathbb{Q}}\operatorname{Proj}\mathbb{Q}[v_{0},\ldots,v_{n}].\]
We call \(\mathbf{S}\) the slowness bundle, and denote this closed immersion by \(\iota\colon\mathbf{S}\hookrightarrow\mathbb{A}_{\mathbb{Q}}^{m}\times\mathbb{ P}_{\mathbb{Q}}^{n}\). Composing \(\iota\) with the projection \(\pi_{1}\colon\mathbb{A}_{\mathbb{Q}}^{m}\times\mathbb{P}_{\mathbb{Q}}^{n} \rightarrow\mathbb{A}_{\mathbb{Q}}^{m}\) onto the first factor gives a fibration
\[f:=\pi_{1}\circ\iota\colon\mathbf{S}\rightarrow\mathbb{A}_{\mathbb{Q}}^{m}\]
that we call the slowness surface fibration. The fiber \(f^{-1}(\mathbf{b})\) of \(f\) above a rational point \(\mathbf{b}\in\mathbb{A}^{m}(\mathbb{Q})=\mathbb{Q}^{m}\) is the hypersurface of degree \(2n\) in \(\mathbb{P}_{\mathbb{Q}}^{n}\) obtained by specializing the parameters \(b_{ij}\) according to the coordinates of \(\mathbf{b}\).
For a field extension \(K/\mathbb{Q}\), we write \(f_{K}\colon S_{K}\rightarrow\mathbb{A}_{K}^{m}\) for the slowness surface fibration obtained as above after replacing \(\mathbb{Q}\) by \(K\) everywhere. This is known in algebraic geometry as the "base-extension of the morphism \(f\) by the map \(\operatorname{Spec}K\rightarrow\operatorname{Spec}\mathbb{Q}\)". We are mostly interested in the cases \(K=\mathbb{R}\) and \(K=\mathbb{C}\). We call \(f_{\mathbb{C}}\colon\mathbf{S}_{\mathbb{C}}\rightarrow\mathbb{A}_{\mathbb{C}}^{m}\) the complexified slowness surface fibration.
### Key results
The precise result underpinning Theorem C is the following.
**Theorem 9**.: _The set_
\[\operatorname{Irr}(f):=\{\mathbf{b}\in\mathbb{A}_{\mathbb{R}}^{m}:f_{\mathbb{C }}^{-1}(\mathbf{b})\text{ is an irreducible hypersurface}\}\]
_is Zariski-open in \(\mathbb{A}_{\mathbb{R}}^{m}\). Consequently, it is either empty, or it is the complement of a finite union of algebraic varieties, each of dimension \(\leq m-1\)._
Theorem 9 follows from the following result, due to Grothendieck, which uses the full power of scheme theory.
**Theorem 10** (Generic Geometric Irreducibility).: _Let \(f\colon X\to Y\) be a morphism of schemes. Assume that \(f\) is proper, flat, and of finite presentation. Then the set of \(y\in Y\) such that the fiber \(X_{y}:=f^{-1}(y)\) is geometrically irreducible is Zariski open in \(Y\)._
Proof.: See [1, Théorème 12.2.4(viii)].
_Remark 11_.: In the event that \(Y\) is a locally Noetherian scheme, one can replace the condition "of finite presentation" with "of finite type"[1, Tag 01TX]. However, this condition is in turn subsumed by the properness condition (by definition of properness!).
Since the coordinate ring \(A\) of the affine space \(\mathbb{A}_{\mathbb{Q}}^{m}\) is Noetherian, the scheme \(\mathbb{A}_{\mathbb{Q}}^{m}\) is locally Noetherian. By Remark 11, to deduce Theorem 9 from Theorem 10, we must show that the slowness surface fibration \(f\colon\mathbf{S}\to\mathbb{A}_{\mathbb{Q}}^{m}\) is a proper, flat morphism. We say a few words about what these conditions mean first.
In algebraic geometry, the notion of properness mimics the analogous notion between complex analytic spaces: the preimage of a compact set is compact. In particular, a proper morphism takes closed sets to closed sets; see [1, Section 01W0]. Flatness is an algebraic condition that, in conjunction with properness and local Noetherianity of the target, guarantees the nonempty fibers of \(f\) vary nicely (e.g., they all have the same Euler characteristic); see [1, Section 01U2]. To prove that the slowness surface fibration \(f\colon\mathbf{S}\to\mathbb{A}_{\mathbb{Q}}^{m}\) is flat, we shall use the "miracle flatness" criterion.
**Theorem 12** (Miracle Flatness).: _Let \(f\colon X\to Y\) be a morphism of finite type, equidimensional schemes over a field. Suppose that \(X\) is Cohen-Macaulay, \(Y\) is regular, and the fibers of \(f\) have dimension \(\dim X-\dim Y\). Then \(f\) is a flat morphism._
Proof.: See [20, 26.2.11].
**Proposition 13**.: _The slowness surface fibration \(f\colon\mathbf{S}\to\mathbb{A}_{\mathbb{Q}}^{m}\) is a proper flat morphism. If \(K/\mathbb{Q}\) is a field extension, the same conclusion holds for the base-extension \(f_{K}\colon S_{K}\to\mathbb{A}_{K}^{m}\)._
Proof.: First, we prove that \(f\) is proper. The scheme \(\mathbb{P}_{\mathbb{Q}}^{n}\) being projective, its structure morphism \(\mathbb{P}_{\mathbb{Q}}^{n}\to\operatorname{Spec}\mathbb{Q}\) is proper. Consider the fibered product diagram
Proper morphisms are stable under base change [1, II Corollary 4.8(c)], and hence \(\pi_{1}\) is proper. Closed immersions being proper [1, II Corollary 4.8(a)], the morphism \(\iota\colon S\hookrightarrow\mathbb{P}_{A}^{n}\) is also proper. Finally, a composition of proper morphisms is proper [1, II Corollary 4.8(b)], whence \(f=\pi_{1}\circ\iota\) is proper.
Next, we show that the morphism \(f\colon\mathbf{S}\to\mathbb{A}_{\mathbb{Q}}^{m}\) is flat via Theorem 12. The schemes \(\mathbf{S}\) and \(\mathbb{A}_{\mathbb{Q}}^{m}\) are of finite type over a field and equidimensional (\(\mathbf{S}\) is a hypersurface in \(\mathbb{P}_{A}^{n}\)), so it suffices to verify that \(\mathbf{S}\) is a Cohen-Macaulay scheme, that \(\mathbb{A}_{\mathbb{Q}}^{m}\) is regular, and that the fibers of \(f\) all
have dimension \(n-1\). The surface \(\mathbf{S}\) is a hypersurface in a projective space, so it is a local complete intersection, hence a Cohen-Macaulay scheme [13, Tag 00SA]. The affine space \(\mathbb{A}_{\mathbb{Q}}^{m}\) is smooth, hence regular; the fibers of \(f\) are all hypersurfaces of \(\mathbb{P}_{\mathbb{Q}}^{n}\), because the coefficient of \(p_{0}^{2}\) in the defining equation of every fiber is non-zero, and hence all have dimension \(n-1\). Hence, the morphism \(f\) is flat.
The claim for the base-extension \(f_{K}\colon S_{K}\to\mathbb{A}_{K}^{m}\) follows either by replacing \(\mathbb{Q}\) with \(K\) in the above arguments, or by noting that proper and flat are properties of morphisms that are stable under base-extension (see, e.g., [13, Lemma 01U9]).
Proof of Theorem 9.: The conclusion that \(\operatorname{Irr}(f)\) is Zariski open in \(\mathbb{A}_{\mathbb{R}}^{m}\) follows from Theorem 10 and Proposition 13, taking into account Remark 11. It follows that \(\operatorname{Irr}(f)\) is either empty, or it is the complement of a proper closed subset of \(\mathbb{A}_{\mathbb{R}}^{m}\). Such a set is determined by an ideal \(I\subseteq\mathbb{R}[b_{ij}:1\leq i,j\leq\psi(n)]\) [10, Corollary II.5.10]. The base-extension \(I_{\mathbb{C}}=I\otimes_{\mathbb{R}}\mathbb{C}\) determines a proper closed subset of \(\mathbb{A}_{\mathbb{C}}^{m}\), whose finitely many irreducible components have dimension \(\leq m-1\). This closed subset descends to finitely many irreducible components in \(\mathbb{A}_{\mathbb{R}}^{m}\), consisting of complex conjugate pairs of irreducible varieties in \(\mathbb{A}_{\mathbb{C}}^{m}\).
### Ex uno plura
All our work so far does not preclude the possibility that the Zariski open subset \(\operatorname{Irr}(f)\) of \(\mathbb{A}_{\mathbb{R}}^{m}\) defined in Theorem 9 is empty! We verify that this is not the case in dimensions \(n\in\{2,3\}\) by giving examples of slowness polynomials that are irreducible over \(\mathbb{C}\). We use a standard arithmetic trick: reduction modulo a prime. The principle involved is simple: if a polynomial \(F(x_{0},\ldots,x_{n})\) with coefficients in \(\mathbb{Z}\) factors nontrivially, then it also factors when we reduce its coefficients modulo any prime \(p\). Thus, if a polynomial with coefficients in \(\mathbb{Z}\) is irreducible when considered over the finite field \(\mathbb{F}_{p}\), then it must be irreducible over \(\mathbb{Z}\). This principle is extraordinarily useful, because by finiteness of \(\mathbb{F}_{p}\) checking whether the reduction \(\bar{F}(x_{0},\ldots,x_{n})\in\mathbb{F}_{p}[x_{0},\ldots,x_{n}]\) is irreducible is a finite, fast computation. Guaranteeing that the polynomial remains irreducible when considered over \(\mathbb{C}\) requires working over a finite extension \(\mathbb{F}_{p^{d}}\) of \(\mathbb{F}_{p}\) with controlled degree \(d\). We make this idea explicit in the following lemma, whose proof we include for lack of a good reference.
**Lemma 14**.: _Let \(F(x_{0},\ldots,x_{n})\in\mathbb{Z}[x_{0},\ldots,x_{n}]\) be a homogeneous polynomial of degree \(d\). Suppose there is a prime \(p\) such that the reduction \(\bar{F}(x_{0},\ldots,x_{n})\in\mathbb{F}_{p}[x_{0},\ldots,x_{n}]\) of \(F\) modulo \(p\) is irreducible in the finite field \(\mathbb{F}_{p^{d}}\) of cardinality \(p^{d}\). Then \(F(x_{0},\ldots,x_{n})\) is irreducible in \(\mathbb{C}[x_{0},\ldots,x_{n}]\)._
Proof.: By [13, Tag 020J], to prove that the polynomial \(F\) is irreducible over \(\mathbb{C}\), it suffices to show that it is irreducible in \(\overline{\mathbb{Q}}[x_{0},\ldots,x_{n}]\), where \(\overline{\mathbb{Q}}\) denotes a fixed algebraic closure of \(\mathbb{Q}\). The field \(\overline{\mathbb{Q}}\) consists of all algebraic numbers: the roots of single-variable polynomials with rational coefficients; it is countable.
There is a Galois field extension \(K/\mathbb{Q}\) of _finite_ degree where \(F\) already factors into \(\overline{\mathbb{Q}}\)-irreducible polynomials. To see this, note that each coefficient of each factor in a \(\overline{\mathbb{Q}}\) factorization is an algebraic number, hence has finite degree over \(\mathbb{Q}\); we can let \(K\) be the Galois closure of the field obtained from \(\mathbb{Q}\) by adjoining all the coefficients of all the factors of \(F\) over \(\overline{\mathbb{Q}}\) (see also [13, Tag 04KZ]).
Let \(\mathfrak{p}\) be a prime ideal in the ring of integers \(\mathcal{O}_{K}\) of \(K\) lying over \(p\), i.e., \(\mathfrak{p}\cap\mathbb{Z}=(p)\). The field \(\mathbb{F}_{\mathfrak{p}}:=\mathcal{O}_{K}/\mathfrak{p}\mathcal{O}_{K}\) is a finite field extension of \(\mathbb{F}_{p}\). Let \(\bar{F}=g_{1}\cdots g_{m}\) be a factorization of \(\bar{F}\) into irreducibles in \(\mathbb{F}_{\mathfrak{p}}[x_{0},\ldots,x_{n}]\). The Galois group \(G:=\operatorname{Gal}(\mathbb{F}_{\mathfrak{p}}/\mathbb{F}_{p})\) acts on the set \(\{g_{1},\ldots,g_{m}\}\). The orbits of this action correspond to the irreducible factors of the reduction of \(\bar{F}\in\mathbb{F}_{p}[x_{0},\ldots,x_{n}]\).
This reduction is irreducible, because by hypothesis \(\bar{F}\) is irreducible over the larger field \(\mathbb{F}_{p^{d}}\), so the action of \(G\) on \(\{g_{1},\ldots,g_{m}\}\) is transitive. It follows that the factors \(\{g_{1},\ldots,g_{m}\}\) of \(\bar{F}\) must all have the same degree, and hence \(m\mid d\). By the orbit-stabilizer theorem, the stabilizer \(H_{g_{i}}\leq G\) of \(g_{i}\) has index \(m\); it is a normal subgroup of \(G\) because \(G\) is cyclic. By Galois theory, the polynomial \(\bar{F}\) already factors over the fixed field \(K(\mathfrak{p})^{H_{g_{i}}}\simeq\mathbb{F}_{p^{m}}\) as \(g_{i}\cdot h_{i}\) for some \(h_{i}\in\mathbb{F}_{p^{m}}[x_{0},\ldots,x_{n}]\). However, by hypothesis, the polynomial \(\bar{F}\in\mathbb{F}_{p}[x_{0},\ldots,x_{n}]\) is irreducible over \(\mathbb{F}_{p^{d}}\), and hence is irreducible over \(\mathbb{F}_{p^{m}}\), because \(\mathbb{F}_{p^{m}}\subset\mathbb{F}_{p^{d}}\) as \(m\mid d\). This implies that \(m=1\), i.e., \(\bar{F}\) is irreducible in \(\mathbb{F}_{\mathfrak{p}}[x_{0},\ldots,x_{n}]\), and therefore \(F\) irreducible in \(K[x_{0},\ldots,x_{n}]\). By definition of the field \(K\), we conclude that \(F\) is irreducible in \(\overline{\mathbb{Q}}[x_{0},\ldots,x_{n}]\).
_Remark 15_.: The hypothesis that \(F\in\mathbb{Z}[x_{0},\ldots,x_{n}]\) is homogeneous can be weakened. We used this hypothesis tacitly above: we assumed that the reduction \(\bar{F}\in\mathbb{F}_{\mathfrak{p}}[x_{0},\ldots,x_{n}]\) has degree \(d\). This is certainly the case if \(F\) is homogeneous and \(\bar{F}\) is irreducible (and hence nonzero).
**Example 16**.: Let \(n=2\). Using the notation of Example 8, consider the stiffness tensor with components
\[b_{11}=20,\quad b_{12}=39,\quad b_{13}=-65,\quad b_{22}=-16,\quad b_{23}=-87, \quad b_{33}=30.\]
The corresponding homogenized slowness polynomial
\[\tilde{P}(\mathbf{p})=-3625p_{1}^{4}+1590p_{1}^{3}p_{2}+7129p_{1}^{2}p_{2}^{2 }-50p_{1}^{2}p_{0}^{2}+8866p_{1}p_{2}^{3}+304p_{1}p_{2}p_{0}^{2}-8049p_{2}^{4} -14p_{2}^{2}p_{0}^{2}+p_{0}^{4}\]
is irreducible over \(\mathbb{C}\): apply Lemma 14 with \(d=4\) and \(p=7\): a magma calculation shows that the reduction of this polynomial modulo \(7\) is irreducible in the finite field \(\mathbb{F}_{7^{4}}\); see [1].
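A reduction-mod-\(p\) check of this kind takes only a few lines; the snippet below is a sketch of ours and may differ from the published code [1]:

```
// Lemma 14 with p = 7 and d = 4: reduce the slowness polynomial of Example 16
// modulo 7 and test irreducibility over the field with 7^4 elements.
K := GF(7^4);
R<p0,p1,p2> := PolynomialRing(K, 3);
P := -3625*p1^4 + 1590*p1^3*p2 + 7129*p1^2*p2^2 - 50*p1^2*p0^2
     + 8866*p1*p2^3 + 304*p1*p2*p0^2 - 8049*p2^4 - 14*p2^2*p0^2 + p0^4;
IsIrreducible(P);  // expected to print: true
```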
**Example 17**.: Let \(n=3\). Using Voigt notation (20), consider the stiffness tensor of albite, an abundant feldspar mineral in the Earth's crust, which has the components [1]
\[b_{11}=691, b_{12}=340, b_{13}=308, b_{14}=51, b_{15}=-24, b_{16}=-9,\] \[b_{22}=1835, b_{23}=55, b_{24}=-39, b_{25}=-77, b_{26}=-58, b_{33}=1795,\] \[b_{34}=-87, b_{35}=71, b_{36}=-98, b_{44}=249, b_{45}=-24, b_{46}=-72,\] \[b_{55}=268, b_{56}=5, b_{66}=335.\]
The corresponding homogenized slowness polynomial
\[\tilde{P}(\mathbf{p}) =61808197p_{1}^{6}-29183112p_{1}^{5}p_{2}+12224260p_{1}^{5}p_{3}+ 295917556p_{1}^{4}p_{2}^{2}-121937730p_{1}^{4}p_{2}p_{3}\] \[\quad+348169660p_{1}^{4}p_{3}^{2}-505771p_{1}^{4}p_{0}^{2}-7097562 6p_{1}^{3}p_{2}^{3}+152129354p_{1}^{3}p_{2}^{2}p_{3}-119421358p_{1}^{3}p_{2}p _{3}^{2}\] \[\quad+155018p_{1}^{3}p_{2}p_{0}^{2}-174550934p_{1}^{3}p_{3}^{3}- 8528p_{1}^{3}p_{2}p_{0}^{2}+383486749p_{1}^{2}p_{2}^{4}-300844962p_{1}^{2}p_{ 2}^{3}p_{3}\] \[\quad+1468226482p_{1}^{2}p_{2}^{2}p_{3}^{2}-1740692p_{1}^{2}p_{2} ^{2}p_{0}^{2}-272180462p_{1}^{2}p_{2}p_{3}^{3}+436994p_{1}^{2}p_{2}p_{3}p_{0}^ {2}\] \[\quad+404800725p_{1}^{2}p_{3}^{4}-1875763p_{1}^{2}p_{3}^{2}p_{0}^ {2}+1294p_{1}^{2}p_{0}^{4}-13416750p_{1}p_{2}^{5}+282989760p_{1}p_{2}^{4}p_{3}\] \[\quad+154078108p_{1}p_{2}^{3}p_{3}^{2}+134674p_{1}p_{2}^{3}p_{0}^ {2}+67718200p_{1}p_{2}^{2}p_{3}^{3}-536166p_{1}p_{2}^{2}p_{3}p_{0}^{2}+82126914p _{1}p_{2}p_{3}^{4}\] \[\quad+44930p_{1}p_{2}p_{2}^{2}p_{3}^{2}-182p_{1}p_{2}p_{0}^{4}-993 441366p_{1}p_{5}^{3}+422102p_{1}p_{3}^{3}p_{0}^{2}-50p_{1}p_{3}p_{0}^{4}+14188 0986p_{2}^{6}\] \[\quad-135372072p_{2}^{5}p_{3}+1205554155p_{2}^{4}p_{3}^{2}-1144986p _{2}^{4}p_{0}^{2}-88997104p_{2}^{3}p_{3}^{3}+413432p_{2}^{3}p_{3}p_{0}^{2}\] \[\quad+959527532p_{2}^{2}p_{3}^{4}-4481283p_{2}^{2}p_{3}^{2}p_{0}^ {2}+2419p_{2}^{2}p_{0}^{4}-38299560p_{2}p_{3}^{5}+167364p_{2}p_{3}^{3}p_{0}^{2}\] \[\quad-242p_{2}p_{3}p_{0}^{4}+115762815p_{3}^{6}-981561p_{3}^{4}p_{ 0}^{2}+2312p_{3}^{2}p_{0}^{4}-p_{0}^{6}\]
is irreducible over \(\mathbb{C}\): apply Lemma 14 with \(d=6\) and \(p=5\): a magma calculation shows that the reduction of this polynomial modulo \(5\) is irreducible in the finite field \(\mathbb{F}_{5^{6}}\); see [1].
Proof of Theorem C.: By Theorem 9 we know that the subset \(\operatorname{Irr}(f)\) of the parameter space of stiffness tensors whose corresponding homogenized slowness polynomials are irreducible over \(\mathbb{C}\) is a Zariski open subset of \(\mathbb{A}_{\mathbb{R}}^{m}\). Since \(\mathbb{A}_{\mathbb{R}}^{m}\) is irreducible, a Zariski open subset is dense, as long as it is not empty. Example 16 shows that \(\operatorname{Irr}(f)\) is not empty when \(n=2\), and Example 17 shows that it is not empty when \(n=3\).
### Generic unique reconstruction of stiffness tensors
We prove Theorem D, i.e., that a generic stiffness tensor in dimensions 2 and 3 is uniquely associated to its slowness polynomial. While the _proof_ of this fact uses heavy-duty machinery from algebraic geometry, we may perform the reconstruction of a stiffness tensor from a particular slowness polynomial _quickly in practice_, using simple ideas from the theory of Grobner bases.
Proof of Theorem D.: We begin with the case \(n=2\) and explain the necessary modifications for the \(n=3\) case at the end of the proof. Using the notation of §2, we define the rational map (16) between complex projective spaces
\[g\colon\mathbb{P}^{6}_{[b_{11},\dots,b_{33},r]}\dashrightarrow \mathbb{P}^{8}_{[c_{1},\dots,c_{9}]},\] \[[b_{11},\dots,b_{33},r]\mapsto[b_{11}b_{33}-b_{13}^{2},\dots,-r( b_{22}+b_{33}),r^{2}].\]
The closed subset \(\Pi\) of \(\mathbb{P}^{6}\) where \(g\) is not defined is 2-dimensional, although we will not use this fact explicitly. Let \(X:=\operatorname{Bl}_{\Pi}(\mathbb{P}^{6})\) be the blow-up of \(\mathbb{P}^{6}\) along \(\Pi\)[10, Example II.7.17.3]. This scheme comes with a morphism \(\pi\colon X\to\mathbb{P}^{6}\) such that the composition
\[f:=g\circ\pi\colon X\to\mathbb{P}^{8}\]
is a proper morphism, and such that \(X\setminus\pi^{-1}(\Pi)\simeq\mathbb{P}^{6}\setminus\Pi\). Let \(Y\) be the image of \(f\), so that the map \(f\colon X\to Y\) is a surjective proper morphism. Properness ensures that the fiber dimension function
\[h\colon Y \to\mathbb{R}\] \[y \mapsto\dim f^{-1}(y)\]
is upper semi-continuous, i.e., for each \(x\in\mathbb{R}\) the set \(h^{-1}((-\infty,x))\) is Zariski open [11, Tag 0D4I]. In particular, if there is a point \(y\in Y\) such that \(\dim f^{-1}(y)=0\), then there is a nonempty Zariski open subset \(V\subseteq Y\) over which all fibers are 0-dimensional. (Note that both \(X\) and \(Y\) are irreducible varieties.) The Grobner basis calculation in dimension 2 in §2 shows precisely that such a point \(y\in Y\) exists.
The fibers over \(V\) have finite cardinality, so the induced morphism \(f\colon U:=f^{-1}(V)\to V\) is quasi-finite. It is also proper, as it is a base-extension of a proper morphism. A proper, quasifinite morphism is finite [11, Tag 02OG]. Finally, the fiber degree function is also an upper semi-continuous function on the target of a finite morphism [14, 13.7.5]. Our Grobner basis calculation also shows that there is a point \(u\in U\) such that \(f^{-1}(f(u))\) consists of a _single_ point. So by upper semi-continuity, there is a Zariski open subset \(V^{\prime}\subseteq V\) such that, for all \(y\in V^{\prime}\), the fiber \(f^{-1}(y)\) consists of exactly one point.
We conclude that the map \(f\colon X\to Y\) is generically injective. Note that the locus where \(r=1\) is the distinguished dense open affine chart \(D_{+}(r)\subset\mathbb{P}^{6}\), and that \(f\) and \(g\) coincide on \(D_{+}(r)\cap(\mathbb{P}^{6}\setminus\Pi)\), so \(f\) is still generically injective after "dehomogenizing \(r\)". This concludes the proof of the Theorem in the case \(n=2\).
The argument in dimension 3 is analogous, but there are more parameters to the stiffness tensor, as well as coefficients in the corresponding slowness polynomial. The map (16) is thus
replaced by a higher-dimensional version \(g\colon\mathbb{P}^{21}\dashrightarrow\mathbb{P}^{49}\). We need only check that there is a point \(x\in X\) in the domain of the corresponding map \(f\colon X\to Y\) such that \(f^{-1}(f(x))\) consists of a single point. We use the slowness polynomial \(\tilde{P}(\mathbf{p})\) of Example 17: we give code in Appendix A that shows there is exactly one stiffness tensor associated to \(\tilde{P}(\mathbf{p})\).
_Remark 18_.: The proof of Theorem D works in dimension \(n\) provided one has a single example of a slowness polynomial in dimension \(n\) that arises from a unique stiffness tensor.
### Which polynomials are slowness polynomials?
In dimension \(2\), we have seen (11) that the slowness polynomial has the form
\[c_{1}p_{1}^{4}+c_{2}p_{1}^{3}p_{2}+c_{3}p_{1}^{2}p_{2}^{2}+c_{4}p_{1}^{2}+c_{5} p_{1}p_{2}^{3}+c_{6}p_{1}p_{2}+c_{7}p_{2}^{4}+c_{8}p_{2}^{2}+c_{9} \tag{21}\]
for some \((c_{1},\dots,c_{9})\in\mathbb{R}^{9}\). However, not every polynomial of this kind arises from a stiffness tensor. For example, a close inspection of (11) shows that we must have \(c_{9}=1\). Furthermore, the remaining coefficients \(c_{1},\dots,c_{8}\) are subject to the relations (14). We can use elimination theory to compute the exact set of constraints that must be satisfied by \(c_{1},\dots,c_{8}\) (implicitly, from now on we simply take for granted that \(c_{9}=1\)). As a by-product, we shall obtain a _second proof_ of Theorem D in dimension \(n=2\). While in principle a similar argument could be used in the case \(n=3\), the required computations are currently infeasible.
Let \(X\) be the variety in the affine space \(\mathbb{A}^{14}_{\mathbb{C}}\) with coordinates \(b_{11},\dots,b_{33},c_{1},\dots,c_{8}\) cut out by the equations (14). More precisely, \(X\) is \(\operatorname{Spec}\mathbb{C}[b_{11},\dots,b_{33},c_{1},\dots,c_{8}]/I\), where \(I\) is the ideal of \(\mathbb{C}[b_{11},\dots,b_{33},c_{1},\dots,c_{8}]\) given by
\[I:= \langle c_{1}-(b_{11}b_{33}-b_{13}^{2}),c_{2}-2(b_{11}b_{23}-b_{1 2}b_{13}),\] \[c_{3}-(b_{11}b_{22}-b_{12}^{2}-2b_{12}b_{33}+2b_{13}b_{23}),c_{4} +(b_{11}+b_{33}),\] \[c_{5}-2(-b_{12}b_{23}+b_{13}b_{22}),c_{6}+2(b_{13}+b_{23}),\] \[c_{7}-(b_{22}b_{33}-b_{23}^{2}),c_{8}+(b_{22}+b_{33})\rangle.\]
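Eliminating \(b_{11},\dots,b_{33}\) from \(I\) produces the ideal \(J\) displayed further below; the computation can be reproduced along the following lines. This is a minimal sketch of ours, and the published code [11] may organize the computation differently:

```
// Build the ideal I of relations (14) and eliminate the six b-variables,
// leaving an ideal in the coefficient variables c1,...,c8 only.
P<b11,b12,b13,b22,b23,b33,c1,c2,c3,c4,c5,c6,c7,c8> := PolynomialRing(Rationals(), 14);
I := ideal< P |
    c1 - (b11*b33 - b13^2),
    c2 - 2*(b11*b23 - b12*b13),
    c3 - (b11*b22 - b12^2 - 2*b12*b33 + 2*b13*b23),
    c4 + (b11 + b33),
    c5 - 2*(-b12*b23 + b13*b22),
    c6 + 2*(b13 + b23),
    c7 - (b22*b33 - b23^2),
    c8 + (b22 + b33) >;
J := EliminationIdeal(I, 6);  // generators involve only c1,...,c8
GroebnerBasis(J);
```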
We consider the two projections
and by a slight abuse of notation, we also denote their restrictions to \(X\) by \(p\colon X\to\mathbb{A}^{8}_{\mathbb{C}}\) and \(q\colon X\to\mathbb{A}^{6}_{\mathbb{C}}\). An elementary but important observation is that \(q\colon X\to\mathbb{A}^{6}_{\mathbb{C}}\) is an isomorphism, because the ring map
\[\mathbb{C}[b_{11},\dots,b_{33},c_{1},\dots,c_{8}]\to\mathbb{C}[b_{11},\dots,b _{33}]\]
that sends \(b_{ij}\) to itself and maps \(c_{i}\) according to the relations (14) (so, e.g., \(c_{1}\) maps to \(b_{11}b_{33}-b_{13}^{2}\)) is surjective and has kernel \(I\). This tells us that \(X\) is a \(6\)-dimensional complex algebraic variety.
We now turn to the projection \(p\colon X\to\mathbb{A}^{8}_{\mathbb{C}}\). The image \(p(X)\) consists of \(8\)-tuples \((c_{1},\dots,c_{8})\) that, together with \(c_{9}=1\), give a set of coefficients of a polynomial that is the slowness polynomial of at least one stiffness tensor (not necessarily positive) in dimension \(2\). By [3, §4.4, Theorem 4], the Zariski _closure_ of the image of \(p\) is cut out by the elimination ideal
\[J:=I\cap\mathbb{C}[c_{1},\dots,c_{8}]\subseteq\mathbb{C}[c_{1},\dots,c_{8}],\]
and a basis for this ideal can be extracted from an appropriate Grobner basis for \(I\) by elimination theory (e.g., [11, §3.1, Theorem 2]). A magma calculation [11] shows that
\[J=\langle\ -16c_{1}^{2}c_{3}+4c_{1}c_{2}^{2}-8c_{1}c_{2}c_{5}+16c_{1}c_{3}c _{4}c_{8}-4c_{1}c_{3}c_{6}^{2}+32c_{1}c_{3}c_{7}-16c_{1}c_{3}c_{8}^{2}-12c_{1}c _{5}^{2}\] \[\qquad\qquad\ \
\[2c_{2}-c_{4}c_{6}+2c_{5}-c_{6}c_{8},4c_{1}-2c_{4}c_{8}+c_{6}^{2}-4c_{7}+2c_{8}^{ 2},\]
\[8c_{4}c_{7}-2c_{4}c_{8}^{2}-2c_{5}c_{6}+c_{6}^{2}c_{8}-8c_{7}c_{8}+2c_{8}^{3},\]
\[2c_{4}c_{5}-c_{4}c_{6}c_{8}-2c_{5}c_{8}+8c_{6}c_{7}-c_{6}c_{8}^{2},\]
\[4c_{5}^{2}-4c_{5}c_{6}c_{8}+c_{6}^{2}c_{8}^{2}+64c_{7}^{2}-32c_{7}c_{8}^{2}+4c_{ 8}^{4}),\]
\[I_{3}=\langle c_{8}^{2}-4c_{7},c_{6}c_{8}-2c_{5},c_{4}c_{8}-c_{1}-c_{3}-c_{7}, 2c_{6}c_{7}-c_{5}c_{8},\]
\[c_{6}^{2}+2c_{1}-2c_{3}+2c_{7},c_{4}c_{6}-2c_{2},c_{4}^{2}-4c_{1}\rangle,\]
\[J_{3}=\langle 1\rangle.\]
Note that \(V(J_{3})=\emptyset\).
Second proof of Theorem D when \(n=2\).: Since \(p\colon X\to Y\) is a dominant morphism of integral schemes of finite type over a field, both of the same dimension, Chevalley's theorem [11, Exercise II.3.22e] implies that there is a Zariski open subset \(U\subset Y\) such that the fiber \(p^{-1}(u)\) for \(u\in U\) is a finite set. In other words, for each \(u\in U\), there are only finitely many possible values of \(b_{11},\dots,b_{33}\) such that the relations (14) hold; more plainly, there are only finitely many stiffness tensors associated to a slowness polynomial corresponding to a point \(u\in U\). It is possible to choose \(U\) so that the number of stiffness tensors is _constant_ as one varies \(u\in U\). This constant is the degree of the map \(p\), which is equal to the degree of the function field extension \([\mathbb{C}(X):\mathbb{C}(Y)]\). We use magma to compute this quantity and show that it is \(1\); see [10]. The computation in fact gives explicit expressions for \(b_{11},\dots,b_{33}\) in terms of \(c_{1},\dots,c_{8}\). It shows that the map \(p\colon X\to Y\) is a surjective, birational morphism, i.e., \(p\) has an inverse defined on a Zariski open subset of \(Y\).
The case \(n=3\) of Theorem D can in principle be proved using the same template as above. However, the symbolic computations required when computing Grobner bases are well beyond the capabilities of modern-day desktop computers. The slowness polynomials involved have 50 monomials, with coefficients \(c_{1},\dots,c_{50}\), and the stiffness tensor has 21 components \(b_{11},\dots,b_{66}\). The analogous correspondence diagram for \(n=3\) has the form
\[\mathbb{A}_{\mathbb{C}}^{71}=\operatorname{Spec}\mathbb{C}[b_{11},\dots,b_{66},c_{1},\dots,c_{50}]\]
Using the map \(q\) as before we can show that the variety \(X\subset\mathbb{A}_{\mathbb{C}}^{71}\) parametrizing slowness polynomials in terms of stiffness tensors has dimension 21. As before the closure \(Y=\overline{p(X)}\) of the image of \(p\) could in principle be computed using elimination theory. This would give a set of polynomials generating an ideal \(J\) describing the closure of the image \(\overline{p(X)}\).
### Stiffness tensors with orthorhombic symmetry
Full anisotropy of a stiffness tensor is not an essential hypothesis in the algebro-geometric content of this paper. We illustrate this principle by showing that a slowness surface corresponding to a generic stiffness tensor of a material with orthorhombic symmetries is irreducible. In contrast to the case of a generic fully anisotropic tensor, a slowness surface associated to a generic orthorhombic tensor can have up to four stiffness tensors associated with it. In [10], Helbig and Carcione give sufficient conditions for this phenomenon to occur. We show here their conditions are also necessary in
the generic case. As with triclinic media, Grobner bases can be used to perform the explicit reconstruction of the possible stiffness tensors.
**Definition 19**.: An orthorhombic stiffness tensor is a stiffness tensor \(\mathbf{a}=(a_{ijkl})\in\mathbb{R}^{3\times 3\times 3\times 3}\) such that
\[a_{1123} =a_{1113}=a_{1112}=a_{2223}=a_{2213}=a_{2212}\] \[=a_{3323} =a_{3313}=a_{3312}=a_{2313}=a_{2312}=a_{1312}=0.\]
Using Voigt notation (20), such a tensor has 21 components \(b_{ij}\), \(1\leq i\leq j\leq 6\), but
\[b_{14}=b_{15}=b_{16}=b_{24}=b_{25}=b_{26}=b_{34}=b_{35}=b_{36}=b_{45}=b_{46}=b_{56}=0,\]
leaving at most 9 independent components \(b_{11}\), \(b_{12}\), \(b_{13}\), \(b_{22}\), \(b_{23}\), \(b_{33}\), \(b_{44}\), \(b_{55}\), \(b_{66}\).
The Christoffel matrix of an orthorhombic stiffness tensor is
\[\Gamma(\mathbf{p})=\begin{pmatrix}b_{11}p_{1}^{2}+b_{66}p_{2}^{2}+b_{55}p_{3}^ {2}&(b_{12}+b_{66})p_{1}p_{2}&(b_{13}+b_{55})p_{1}p_{3}\\ (b_{12}+b_{66})p_{1}p_{2}&b_{66}p_{1}^{2}+b_{22}p_{2}^{2}+b_{44}p_{3}^{2}&(b_{23 }+b_{44})p_{2}p_{3}\\ (b_{13}+b_{55})p_{1}p_{3}&(b_{23}+b_{44})p_{2}p_{3}&b_{55}p_{1}^{2}+b_{44}p_{2} ^{2}+b_{33}p_{3}^{2}\end{pmatrix}\]
We modify this polynomial by multiplying its terms by powers of a new variable \(p_{0}\) so that all terms have the same degree. The homogenized slowness polynomial of such a tensor has the form
\[\begin{split}\tilde{P}(\mathbf{p})&=\det(\Gamma(\mathbf{p})-p_{0}^{2}I_{3})\\ &=c_{1}p_{1}^{6}+c_{2}p_{1}^{4}p_{2}^{2}+c_{3}p_{1}^{4}p_{3}^{2}+c_{4}p_{1}^{4}p_{0}^{2}+c_{5}p_{1}^{2}p_{2}^{4}+c_{6}p_{1}^{2}p_{2}^{2}p_{3}^{2}+c_{7}p_{1}^{2}p_{2}^{2}p_{0}^{2}\\ &\quad+c_{8}p_{1}^{2}p_{3}^{4}+c_{9}p_{1}^{2}p_{3}^{2}p_{0}^{2}+c_{10}p_{1}^{2}p_{0}^{4}+c_{11}p_{2}^{6}+c_{12}p_{2}^{4}p_{3}^{2}+c_{13}p_{2}^{4}p_{0}^{2}\\ &\quad+c_{14}p_{2}^{2}p_{3}^{4}+c_{15}p_{2}^{2}p_{3}^{2}p_{0}^{2}+c_{16}p_{2}^{2}p_{0}^{4}+c_{17}p_{3}^{6}+c_{18}p_{3}^{4}p_{0}^{2}+c_{19}p_{3}^{2}p_{0}^{4}+c_{20}p_{0}^{6},\end{split} \tag{22}\]
where, for example, we have
\[c_{7}=-b_{11}b_{22}-b_{11}b_{44}+b_{12}^{2}+2b_{12}b_{66}-b_{22}b_{55}-b_{44}b_ {66}-b_{55}b_{66}. \tag{23}\]
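Relations of this form can be generated by expanding the determinant symbolically. The following minimal magma sketch is ours, not the authors' code; it prints the coefficient of \(p_{1}^{2}p_{2}^{2}p_{0}^{2}\), which should agree with the right-hand side of (23):

```
// Expand det(Gamma(p) - p0^2*I_3) for a symbolic orthorhombic tensor and
// extract the coefficient of p1^2*p2^2*p0^2.
B<b11,b12,b13,b22,b23,b33,b44,b55,b66> := PolynomialRing(Rationals(), 9);
R<p0,p1,p2,p3> := PolynomialRing(B, 4);
G := Matrix(R, 3, 3, [
    b11*p1^2 + b66*p2^2 + b55*p3^2, (b12 + b66)*p1*p2,              (b13 + b55)*p1*p3,
    (b12 + b66)*p1*p2,              b66*p1^2 + b22*p2^2 + b44*p3^2, (b23 + b44)*p2*p3,
    (b13 + b55)*p1*p3,              (b23 + b44)*p2*p3,              b55*p1^2 + b44*p2^2 + b33*p3^2 ]);
P := Determinant(G - ScalarMatrix(R, 3, p0^2));
MonomialCoefficient(P, p1^2*p2^2*p0^2);
```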
The slowness bundle \(\mathbf{S}\) is naturally a hypersurface in the product of a 9-dimensional affine space \(\mathbb{A}_{\mathbb{R}}^{9}\) with coordinates \(b_{11},\dots,b_{66}\) and the projective space \(\mathbb{P}_{\mathbb{R}}^{3}\) with homogeneous coordinates \((p_{0}:p_{1}:p_{2}:p_{3})\)
\[\mathbf{S}:=\left\{\tilde{P}(\mathbf{p})=0\right\}\subset\mathbb{A}_{\mathbb{ R}}^{9}\times\mathbb{P}_{\mathbb{R}}^{3}.\]
As before, the composition of the inclusion \(\iota\colon\mathbf{S}\hookrightarrow\mathbb{A}_{\mathbb{R}}^{9}\times \mathbb{P}_{\mathbb{R}}^{3}\) with the projection \(\pi_{1}\colon\mathbb{A}_{\mathbb{R}}^{9}\times\mathbb{P}_{\mathbb{R}}^{3} \to\mathbb{A}_{\mathbb{R}}^{9}\) gives rise to the slowness surface fibration
\[f:=\pi_{1}\circ\iota\colon\mathbf{S}\to\mathbb{A}_{\mathbb{R}}^{9}\]
**Theorem 20**.: _The slowness polynomial associated to a generic orthorhombic stiffness tensor is irreducible over \(\mathbb{C}\)._
Proof.: A generic geometric integrality argument, following the proof of Theorem 9 shows that the set of \(\mathbf{b}\in\mathbb{A}_{\mathbb{R}}^{9}\) such that the complexified fiber \(f_{\mathbb{C}}^{-1}(\mathbf{b})\subset\mathbb{P}_{\mathbb{C}}^{3}\) is an irreducible surface forms a Zariski open subset of the parameter space \(\mathbb{A}_{\mathbb{R}}^{9}\). All that remains to show is that this set is not empty, by producing a single orthorhombic stiffness tensor with an associated slowness polynomial that is irreducible. Consider the orthorhombic stiffness tensor obtained by rounding out values for the stiffness tensor of olivine [1], a common mineral in the Earth's mantle:
\[\begin{split} b_{11}=321,\quad b_{12}&=68,\quad b_{13 }=72,\quad b_{22}=197,\quad b_{23}=77,\\ b_{33}&=234,\quad b_{44}=64,\quad b_{55}=77,\quad b_{66 }=79.\end{split} \tag{24}\]
Its corresponding homogenized slowness polynomial is
\[\tilde{P}(\mathbf{p}) =1952643p_{1}^{6}+5308889p_{1}^{4}p_{2}^{2}+6230406p_{1}^{4}p_{3}^{ 2}-56159p_{1}^{4}p_{0}^{2}+4261967p_{1}^{2}p_{2}^{4}\] \[\quad+9884047p_{1}^{2}p_{2}^{2}p_{3}^{2}-94721p_{1}^{2}p_{2}^{2}p_ {0}^{2}+5189310p_{1}^{2}p_{3}^{4}-108883p_{1}^{2}p_{3}^{2}p_{0}^{2}+477p_{1}^{2 }p_{0}^{4}\] \[\quad+996032p_{2}^{6}+3365543p_{2}^{4}p_{3}^{2}-33227p_{2}^{4}p_{0} ^{2}+3517205p_{2}^{2}p_{3}^{4}-73952p_{2}^{2}p_{3}^{2}p_{0}^{2}\] \[\quad+340p_{2}^{2}p_{0}^{4}+1153152p_{3}^{6}-37922p_{3}^{4}p_{0}^{ 2}+375p_{3}^{2}p_{0}^{4}-p_{0}^{6}. \tag{25}\]
which is irreducible over \(\mathbb{C}\) by Lemma 14, applied with \(d=6\) and \(p=5\); see [1].
As mentioned in §1.5, a general orthorhombic slowness polynomial can arise in more than one way from an orthorhombic stiffness tensor. We make this idea precise by proving Theorem E.
Proof of Theorem E.: Inspection of the relations of the form (23) for \(c_{1},\ldots,c_{20}\) suggests that, up to a global scalar, the nine coefficients
\[c_{1},c_{4},c_{10},c_{11},c_{13},c_{16},c_{17},c_{18},c_{19}\]
uniquely determine the quantities
\[b_{11},b_{22},b_{33},b_{44},b_{55},b_{66}.\]
More precisely, we have
\[c_{1} =b_{11}b_{55}b_{66},\] \[c_{4} =-(b_{11}b_{55}+b_{11}b_{66}+b_{55}b_{66}),\] \[c_{10} =b_{11}+b_{55}+b_{66},\] \[c_{11} =b_{22}b_{44}b_{66},\] \[c_{13} =-(b_{22}b_{44}+b_{22}b_{66}+b_{44}b_{66}),\] \[c_{16} =b_{22}+b_{44}+b_{66},\] \[c_{17} =b_{33}b_{44}b_{55},\] \[c_{18} =-(b_{33}b_{44}+b_{33}b_{55}+b_{44}b_{55}),\] \[c_{19} =b_{33}+b_{44}+b_{55}.\]
Homogenizing the right hand sides above to make sure they all have degree \(3\), by introducing an extra variable \(r\), we obtain
\[\tilde{c}_{1} =b_{11}b_{55}b_{66},\] \[\tilde{c}_{4} =-(b_{11}b_{55}+b_{11}b_{66}+b_{55}b_{66})r,\] \[\tilde{c}_{10} =(b_{11}+b_{55}+b_{66})r^{2},\] \[\tilde{c}_{11} =b_{22}b_{44}b_{66},\] \[\tilde{c}_{13} =-(b_{22}b_{44}+b_{22}b_{66}+b_{44}b_{66})r,\] \[\tilde{c}_{16} =(b_{22}+b_{44}+b_{66})r^{2},\] \[\tilde{c}_{17} =b_{33}b_{44}b_{55},\] \[\tilde{c}_{18} =-(b_{33}b_{44}+b_{33}b_{55}+b_{44}b_{55})r,\] \[\tilde{c}_{19} =(b_{33}+b_{44}+b_{55})r^{2}.\]
This allows us to define a rational map of projective spaces
\[g\colon\mathbb{P}^{6}\dashrightarrow\mathbb{P}^{8}\] \[[b_{11},b_{22},b_{33},b_{44},b_{55},b_{66},r]\mapsto[\tilde{c}_{1},\tilde{c}_{4},\tilde{c}_{10},\tilde{c}_{11},\tilde{c}_{13},\tilde{c}_{16}, \tilde{c}_{17},\tilde{c}_{18},\tilde{c}_{19}]\]
Now we proceed as in the proof of Theorem D: after resolving the indeterminacy locus\({}^{4}\) \(\Pi\) of \(g\) through a blow-up process to get a surjective proper morphism \(f\colon X\to Y\), upper semi-continuity of fiber dimension together with upper semi-continuity of degree for finite morphisms show there is a Zariski open subset of \(Y\) over which all fibers consist of a single point. This subset is not empty (and therefore is Zariski dense) because a Grobner basis calculation shows that the nine coefficients of (25)
Footnote 4: This locus has dimension \(3\), as one can verify with magma, for example.
\[c_{1} =1952643, c_{11} =996032, c_{17} =1153152,\] \[c_{4} =-56159, c_{13} =-33227, c_{18} =-37922,\] \[c_{10} =477, c_{16} =340, c_{19} =375\]
give rise to a unique set of values of \(b_{11},b_{22},b_{33},b_{44},b_{55},b_{66}\), namely those in (24); see [1].
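Concretely, each of the three triples of relations above expresses the elementary symmetric functions of a triple of diagonal stiffnesses, so each triple is recovered, as an unordered set, as the roots of a monic cubic \(t^{3}-c_{\mathrm{sum}}t^{2}-c_{\mathrm{pair}}t-c_{\mathrm{prod}}=0\). A small numerical sketch of this reconstruction for the nine coefficients above (illustrative code and names, not from [1]):

```python
# Recover the unordered sets {b11,b55,b66}, {b22,b44,b66}, {b33,b44,b55} from the c_i's.
import numpy as np

def diag_roots(c_prod, c_pair, c_sum):
    # {x, y, z} with x+y+z = c_sum, xy+xz+yz = -c_pair, xyz = c_prod
    # are the roots of t^3 - c_sum*t^2 - c_pair*t - c_prod.
    return sorted(np.roots([1, -c_sum, -c_pair, -c_prod]).real.round(6))

print(diag_roots(1952643, -56159, 477))   # [77.0, 79.0, 321.0]  = {b55, b66, b11}
print(diag_roots(996032,  -33227, 340))   # [64.0, 79.0, 197.0]  = {b44, b66, b22}
print(diag_roots(1153152, -37922, 375))   # [64.0, 77.0, 234.0]  = {b44, b55, b33}
```

Generically, the three sets overlap pairwise in exactly one element each (\(b_{66}\), \(b_{55}\), \(b_{44}\)), which is what pins down the individual values rather than just the unordered sets.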
Next, we note that
\[c_{15}=b_{23}^{2}+2b_{23}b_{44}-b_{22}b_{33}-b_{22}b_{55}-b_{33}b_{66}-b_{44}b_ {55}-b_{44}b_{66}, \tag{26}\]
so if we know \(c_{15}\) and \(b_{11},b_{22},b_{33},b_{44},b_{55},b_{66}\), then there are two possible values for \(b_{23}\), obtained by solving the above equation, interpreted as a quadratic in the single variable \(b_{23}\). Similarly, the relations
\[c_{7} =b_{12}^{2}+2b_{12}b_{66}-b_{11}b_{22}-b_{11}b_{44}-b_{22}b_{55}-b_{44}b_{66}-b_{55}b_{66}, \tag{27}\] \[c_{9} =b_{13}^{2}+2b_{13}b_{55}-b_{11}b_{33}-b_{11}b_{44}-b_{33}b_{66}-b_{44}b_{55}-b_{55}b_{66} \tag{28}\]
show that there are two possible values each for \(b_{13}\) and \(b_{12}\).
This seems to suggest that there are up to eight different stiffness tensors that can give rise to an orthorhombic slowness surface. However, the solutions to the three quadratic equations above are coupled, and there are only four possible triples \((b_{12},b_{13},b_{23})\) for a given set of coefficients \(c_{1},\dots,c_{20}\). Put differently, \(b_{12}\) is determined by the values of \(b_{13}\) and \(b_{23}\). To see this, we consider the ideal generated by the relations of the form (23) for \(c_{2}\), \(c_{5}\), \(c_{6}\) and \(c_{7}\)
\[\langle c_{2}-(b_{11}b_{22}b_{55}+b_{11}b_{44}b_{66}-b_{12}^{2}b_{5 5}-2b_{12}b_{55}b_{66}),\] \[c_{5}-(b_{11}b_{22}b_{44}-b_{12}^{2}b_{44}-2b_{12}b_{44}b_{66}+b_{ 22}b_{55}b_{66}),\] \[c_{6}-(b_{11}b_{22}b_{33}-b_{11}b_{23}^{2}-2b_{11}b_{23}b_{44}-b_ {12}^{2}b_{33}+2b_{12}b_{13}b_{23}+2b_{12}b_{13}b_{44}\] \[\qquad+2b_{12}b_{23}b_{55}-2b_{12}b_{33}b_{66}+2b_{12}b_{44}b_{55} -b_{13}^{2}b_{22}-2b_{13}b_{22}b_{55}\] \[\qquad+2b_{13}b_{23}b_{66}+2b_{13}b_{44}b_{66}+2b_{23}b_{55}b_{66 }+4b_{44}b_{55}b_{66}),\] \[c_{7}-(-b_{11}b_{22}-b_{11}b_{44}+b_{12}^{2}+2b_{12}b_{66}-b_{22} b_{55}-b_{44}b_{66}-b_{55}b_{66})\rangle\]
in the polynomial ring \(A[b_{12},c_{2},c_{5},c_{6},c_{7}]\), where \(A=\mathbb{Q}(b_{11},b_{22},b_{33},b_{44},b_{55},b_{66},b_{13},b_{23})\), and compute a Grobner basis [1] for it under the lexicographic order
\[b_{12}>c_{2}>c_{5}>c_{6}>c_{7}.\]
Inspection of the basis gives the equality
\[b_{12}=\frac{1}{2\beta}c_{6}+\frac{b_{33}}{2\beta}c_{7}-\frac{\alpha}{2\beta},\]
where
\[\alpha :=-b_{11}b_{33}b_{44}-2b_{11}b_{44}b_{23}-b_{11}b_{23}^{2}-b_{22}b_ {33}b_{55}-2b_{22}b_{55}b_{13}-b_{22}b_{13}^{2}\] \[\quad-b_{33}b_{44}b_{66}-b_{33}b_{55}b_{66}+4b_{44}b_{55}b_{66}+2b _{44}b_{66}b_{13}+2b_{55}b_{66}b_{23}+2b_{66}b_{13}b_{23}\] \[\beta :=(b_{44}b_{55}+b_{44}b_{13}+b_{55}b_{23}+b_{13}b_{23}),\]
showing that \(b_{12}\) is determined by \(c_{6}\), \(c_{7}\), and the stiffnesses \(b_{11},b_{22},b_{33},b_{44},b_{55},b_{66},b_{13},b_{23}\).
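This last identity can also be verified by direct expansion; the following minimal sympy check (not part of the paper) confirms that \(c_{6}+b_{33}c_{7}-\alpha=2\beta\,b_{12}\), with \(c_{6}\) and \(c_{7}\) taken to be the polynomial expressions displayed in the ideal above.

```python
# Verify 2*beta*b12 = c6 + b33*c7 - alpha as a polynomial identity in the stiffnesses.
import sympy as sp

b11, b12, b13, b22, b23, b33, b44, b55, b66 = sp.symbols('b11 b12 b13 b22 b23 b33 b44 b55 b66')

c6 = (b11*b22*b33 - b11*b23**2 - 2*b11*b23*b44 - b12**2*b33 + 2*b12*b13*b23
      + 2*b12*b13*b44 + 2*b12*b23*b55 - 2*b12*b33*b66 + 2*b12*b44*b55
      - b13**2*b22 - 2*b13*b22*b55 + 2*b13*b23*b66 + 2*b13*b44*b66
      + 2*b23*b55*b66 + 4*b44*b55*b66)
c7 = -b11*b22 - b11*b44 + b12**2 + 2*b12*b66 - b22*b55 - b44*b66 - b55*b66

alpha = (-b11*b33*b44 - 2*b11*b44*b23 - b11*b23**2 - b22*b33*b55 - 2*b22*b55*b13
         - b22*b13**2 - b33*b44*b66 - b33*b55*b66 + 4*b44*b55*b66
         + 2*b44*b66*b13 + 2*b55*b66*b23 + 2*b66*b13*b23)
beta = b44*b55 + b44*b13 + b55*b23 + b13*b23

assert sp.expand(c6 + b33*c7 - alpha - 2*beta*b12) == 0
```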
The proof of Theorem E shows that the coefficients \(c_{1},\ldots,c_{20}\) of an orthorhombic slowness surface determine the stiffnesses \(b_{11},b_{22},b_{33},b_{44},b_{55},b_{66}\), and that there are four possible triples for the remaining stiffnesses
\[(b_{12},b_{13},b_{23}),\quad(b_{12},b_{13}^{*},b_{23}^{*}),\quad(b_{12}^{*},b_ {13}^{*},b_{23}),\quad(b_{12}^{*},b_{13},b_{23}^{*}), \tag{29}\]
where
\[b_{12}+b_{12}^{*} =-2b_{66}\] \[b_{13}+b_{13}^{*} =-2b_{55}\] \[b_{23}+b_{23}^{*} =-2b_{44},\]
reflecting that roots of the quadratic equations (26), (27), and (28) must add up to minus the coefficient of the linear term. We note that the three solutions \((b_{12},b_{13}^{*},b_{23}^{*})\), \((b_{12}^{*},b_{13}^{*},b_{23})\), and \((b_{12}^{*},b_{13},b_{23}^{*})\) are exactly the "anomalous companions" in [10, SS3]. Helbig and Carcione arrive at the existence of anomalous companions by making three quite reasonable assumptions that stiffness coefficients might satisfy in order for there to exist more than one set of stiffnesses that gives rise to the same slowness surface. In other words, their conditions give sufficient conditions for the existence of anomalous companions. Our work shows that for a generic orthorhombic slowness polynomial, the anomalous companions in [10] are the _only_ possible anomalous companions.
#### 5.7.1. Positivity of anomalous companions
For completeness, we summarize here the analysis in [10] characterizing which triples of stiffnesses (29) give rise to positive orthorhombic stiffness tensors. Positivity requires that the \(6\times 6\) matrix of stiffnesses
\[\begin{pmatrix}b_{11}&b_{12}&b_{13}\\ b_{12}&b_{22}&b_{23}\\ b_{13}&b_{23}&b_{33}\\ &&b_{44}\\ &&&b_{55}\\ &&&&b_{66}\end{pmatrix}\]
be positive definite. By Sylvester's criterion [11, §7.6], this implies that
\[b_{11}>0,\quad b_{22}>0,\quad b_{33}>0,\quad b_{44}>0,\quad b_{55}>0,\quad b_{66 }>0,\]
and that the \(2\times 2\) minors
\[b_{11}b_{22}-b_{12}^{2},\quad b_{11}b_{33}-b_{13}^{2},\quad b_{22}b_{33}-b_{23} ^{2}\]
are also positive, implying the inequalities
\[-\sqrt{b_{11}b_{22}}<b_{12}<\sqrt{b_{11}b_{22}},\quad-\sqrt{b_{11}b_{33}}<b_{13}< \sqrt{b_{11}b_{33}},\quad-\sqrt{b_{22}b_{33}}<b_{23}<\sqrt{b_{22}b_{33}}. \tag{30}\]
In addition, the \(3\times 3\) leading principal minor must also be positive:
\[b_{11}b_{22}b_{33}+2b_{12}b_{13}b_{23}-b_{11}b_{23}^{2}-b_{22}b_{13}^{2}-b_{33} b_{12}^{2}>0. \tag{31}\]
Let
\[x:=\frac{b_{12}}{\sqrt{b_{11}b_{22}}},\quad y:=\frac{b_{13}}{\sqrt{b_{11}b_{33} }},\quad\text{and}\quad z:=\frac{b_{23}}{\sqrt{b_{22}b_{33}}}\]
Then the conditions (30) and (31) become, respectively,
\[-1<x<1,\quad-1<y<1,\quad-1<z<1\]
and
\[1+2xyz-x^{2}-y^{2}-z^{2}>0.\]
The affine surface \(1+2xyz-x^{2}-y^{2}-z^{2}=0\) is the ubiquitous Cayley cubic surface! Positivity of an anomalous companion is equivalent to having the point corresponding to the companion lying inside the finite "tetrahedral" region in \(\mathbb{R}^{3}\) determined by the four singularities of the cubic surface. See Figure 3.
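This criterion is straightforward to evaluate numerically. The following sketch (an illustration, not taken from [10]) applies it to the rounded olivine stiffnesses (24) and to the three anomalous companions in (29):

```python
# Positivity test of the four candidate triples (b12, b13, b23) for the olivine values (24).
import numpy as np

b11, b22, b33, b44, b55, b66 = 321, 197, 234, 64, 77, 79
b12, b13, b23 = 68, 72, 77
b12s, b13s, b23s = -2*b66 - b12, -2*b55 - b13, -2*b44 - b23   # companion values from (29)

def is_positive(t12, t13, t23):
    x = t12 / np.sqrt(b11*b22)
    y = t13 / np.sqrt(b11*b33)
    z = t23 / np.sqrt(b22*b33)
    inside_cube = max(abs(x), abs(y), abs(z)) < 1              # condition (30)
    return inside_cube and 1 + 2*x*y*z - x*x - y*y - z*z > 0   # condition (31)

for triple in [(b12, b13, b23), (b12, b13s, b23s), (b12s, b13s, b23), (b12s, b13, b23s)]:
    print(triple, is_positive(*triple))
```

For these rounded values only the original triple \((68,72,77)\) passes both tests, i.e. none of the three anomalous companions of (24) corresponds to a positive definite stiffness matrix.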
## Appendix A An example of unique reconstruction in dimension 3
To perform the unique reconstruction of the stiffness tensor parameters for the slowness polynomial (17), we perform a Grobner basis calculation in magma, as follows:
P<b11,b12,b13,b14,b15,b16,b22,b23,b24,b25,b26,b33,b34,b35,b36,
b44,b45,b46,b55,b56,b66> := PolynomialRing(Rationals(),21);
Q<p1,p2,p3> := PolynomialRing(P,3);
Figure 3. The Cayley cubic surface \(1+2xyz-x^{2}-y^{2}-z^{2}=0\) in \(\mathbb{R}^{3}\). Points on the roughly tetrahedral shape in the middle correspond to anomalous companions of an orthorhombic stiffness tensor that are physically realizable, while points on the four cone-shaped regions extending to infinity do not.
// Construct the entries of the Christoffel matrix.
Gamma11 := b11*p1^2 + 2*b16*p1*p2 + 2*b15*p1*p3 + b66*p2^2 + 2*b56*p2*p3 + b55*p3^2;
Gamma12 := b16*p1^2 + (b12 + b66)*p1*p2 + (b14 + b56)*p1*p3 + b26*p2^2 + (b46 + b25)*p2*p3 + b45*p3^2;
Gamma13 := b15*p1^2 + (b14 + b56)*p1*p2 + (b13 + b55)*p1*p3 + b46*p2^2 + (b36 + b45)*p2*p3 + b35*p3^2;
Gamma21 := Gamma12;
Gamma22 := b66*p1^2 + 2*b26*p1*p2 + 2*b46*p1*p3 + b22*p2^2 + 2*b24*p2*p3 + b44*p3^2;
Gamma23 := b56*p1^2 + (b46 + b25)*p1*p2 + (b36 + b45)*p1*p3 + b24*p2^2 + (b23 + b44)*p2*p3 + b34*p3^2;
Gamma31 := Gamma13;
Gamma32 := Gamma23;
Gamma33 := b55*p1^2 + 2*b45*p1*p2 + 2*b35*p1*p3 + b44*p2^2 + 2*b34*p2*p3 + b33*p3^2;
// Construct the Christoffel matrix.
Gammap := Matrix(Q,[[Gamma11,Gamma12,Gamma13], [Gamma21,Gamma22,Gamma23], [Gamma31,Gamma32,Gamma33]]);
I := IdentityMatrix(Q,3);
// Construct the general slowness polynomial
SlownessPoly := Determinant(Gammap - I);
// extract the coefficients of the slowness polynomial as
// a polynomial in p1, p2, p3
GeneralCoeffs := Coefficients(SlownessPoly);
// these are the coefficients of the particular slowness polynomial
CoeffsFrom3DExample :=
[ 61808197, -29183112, 12224260, 295917556, -121937730, 348169660, -505771, -70975626, 152129354, -119421358, 155018, -174550934, -8528, 383486749, -300844962, 1468226482, -1740692, -272180462, 436994, 404080725, -1875763, 1294, -13416750, 282989760, 154078108, 134674, 67718200, -536166, 82126914, 44930, -182, -99344136, 422102, -50, 141880986, -135372072, 1205554155, -1144986, -88997104, 413432, 959527532, -4481283, 2419, -38299560, 167364, -242, 115762815, -981561, 2312, -1 ];
// Construct the ideal in the polynomial ring P whose generators
// set the coefficients of general slowness polynomial
// equal to the coefficients of the specific slowness polynomial
I := ideal<P | [GeneralCoeffs[i] - CoeffsFrom3DExample[i] : i in [1..#CoeffsFrom3DExample]]>;
// Compute a Groebner basis of I, hoping this will reconstruct
// the coefficients of the stiffness tensor
time GroebnerBasis(I);
The output of this code is
[ b11 - 691, b12 - 340, b13 - 308, b14 - 51, b15 + 24, b16 + 9, b22 - 1835, b23 - 55, b24 + 39, b25 + 77, b26 + 58, b33 - 1795, b34 + 87, b35 - 71, b36 + 98, b44 - 249, b45 + 24, b46 + 72, b55 - 268, b56 - 5, b66 - 335
]
Time: 0.160
which shows not only that there is exactly one stiffness tensor that gives rise to the slowness polynomial (17), but also recovers the components of this unique stiffness tensor in 0.16 seconds on a 2.3 GHz quad-core Intel Core i7 processor.
|
2306.14547
|
A note on Strichartz estimates for the wave equation with orthonormal
initial data
|
This note is concerned with Strichartz estimates for the wave equation and
orthonormal families of initial data. We provide a survey of the known results
and present what seems to be a reasonable conjecture regarding the cases which
have been left open. We also provide some new results in the maximal-in-space
boundary cases.
|
Neal Bez, Shinya Kinoshita, Shobu Shiraki
|
2023-06-26T09:39:12Z
|
http://arxiv.org/abs/2306.14547v1
|
# A note on Strichartz estimates for the wave equation with orthonormal initial data
###### Abstract.
This note is concerned with Strichartz estimates for the wave equation and orthonormal families of initial data. We provide a survey of the known results and present what seems to be a reasonable conjecture regarding the cases which have been left open. We also provide some new results in the maximal-in-space boundary cases.
This work was supported by JSPS Kakenhi grant numbers 19H00644, 19H01796, 22H00098 and 23H01080 (Bez), 21J00514, 22KJ0446 (Kinoshita), and 19H01796, 22H00098 (Shiraki). The third author is also supported by Centro de Analise Matematica, Geometria e Sistemas Dinamicos (CAMGSD)
## 1. Introduction
### A conjecture
The most pressing issue appears to be closing the gap between the necessary condition (1.5) and the sufficient condition \(\beta\leq\beta_{\frac{d-1}{2}}(q,r)\) given in Theorem 1.1(i). For this it seems reasonable to us to believe that Theorem 1.1(i) can be improved up to the necessary condition in (1.5). This amounts to the following conjecture.
**Conjecture 1.2**.: _Let \(d\geq 2\), \(q\in(2,\infty),r\in[2,\infty)\), and \(\frac{1}{q}\leq\frac{d-1}{2}(\frac{1}{2}-\frac{1}{r})\). Then the estimate (1.1) holds in each of the following cases._
(i)_\(\beta\leq\beta_{\frac{d}{2}}(q,r)\) when \(\frac{d}{r}>\frac{d-1}{q}\)._
(ii)_\(\beta<\frac{q}{2}\) when \(\frac{d}{r}\leq\frac{d-1}{q}\)._
In Figures 1 and 2, the following points also arise:
\[A^{\prime}=\bigg{(}\frac{d-1}{2(d+1)},\frac{d}{2(d+1)}\bigg{)},\quad E^{\prime }=\bigg{(}\frac{d-2}{2d},\frac{1}{2}\bigg{)},\quad F=\bigg{(}\frac{(d-1)^{2}}{ 2(d^{2}+1)},\frac{d(d-1)}{2(d^{2}+1)}\bigg{)}.\]
Conjecture 1.2(i) says that (1.1) holds for \(\beta\leq\beta_{\frac{d}{2}}(q,r)\) in the interior of \(OFC\) and the line segment \([C,F)\), and Conjecture 1.2(ii) says that (1.1) holds for \(\beta<\frac{q}{2}\) in the interior of \(ODEF\), or \((O,F)\), or \([F,E)\).
Observe that the line segment \([C,E^{\prime}]\) corresponds to the sharp admissible line
\[\frac{1}{q}=\frac{d}{2}\bigg{(}\frac{1}{2}-\frac{1}{r}\bigg{)}\]
for the Strichartz estimates associated with the Schrodinger propagator \(e^{it\Delta}\). The analogue of (1.1) for Schrodinger propagator takes the form
\[\bigg{\|}\sum_{j}\lambda_{j}|e^{it\Delta}f_{j}|^{2}\bigg{\|}_{L^{\frac{q}{2}} _{t}L^{\frac{r}{2}}_{x}}\lesssim\|\lambda\|_{\ell^{\beta}} \tag{1.6}\]
where \((f_{j})_{j}\) is a family of orthonormal functions in the homogeneous Sobolev space \(\dot{H}^{s}(\mathbb{R}^{d})\), \(s=\frac{d}{2}-\frac{d}{r}-\frac{2}{q}\), and interestingly this family of estimates is much better understood than (1.1). In particular, it is known (see [9, 10, 11, 4]) that for \(q\in(2,\infty),r\in[2,\infty)\), and \(\frac{1}{q}\leq\frac{d}{2}(\frac{1}{2}-\frac{1}{r})\), the estimate (1.6) holds in each of the following cases.
_(i) \(\beta\leq\beta_{\frac{d}{2}}(q,r)\) when \(\frac{d}{r}>\frac{d-1}{q}\)_ (i.e. the interior of \(OA^{\prime}C\) and \([C,A^{\prime})\)).
_(ii) \(\beta<\frac{q}{2}\)_ when \(\frac{d}{r}\leq\frac{d-1}{q}\) (i.e. the interior of \(ODE^{\prime}A^{\prime}\) and the line segments \((O,A^{\prime}]\), \([A^{\prime},E^{\prime})\)).
Furthermore, the restriction in (1.5) is also known to be necessary for (1.6). Thus, apart from certain critical/boundary cases, we have an almost complete understanding of when the estimates (1.6) are valid. We refer the reader to Frank _et al._[9] for the origin of this line of study for the Schrodinger equation, and to [10, 11, 4] for further developments on (1.6).
### Boundary cases
The boundary cases \(q=\infty\) and \(r=\infty\) have been excluded from the statements of Theorem 1.1 and Conjecture 1.2, and for the remainder of this note we mainly focus on the case \(r=\infty\). When \(q=\infty\), we may simply use the Sobolev inequality for orthonormal functions proved by Lieb [17] to quickly obtain (1.1) with \(q=\infty\), \(r\in(2,\infty)\) and \(\beta<\frac{r}{2}=\beta_{\frac{d}{2}}(\infty,r)\); in light of the necessary condition (1.5), we see that this result is sharp up to the critical case \(\beta=\frac{r}{2}\).
When \(r=\infty\), matters appear to be less simple and for the remainder of this note we shall be concerned with the estimate
\[\bigg{\|}\sum_{j}\lambda_{j}|Uf_{j}|^{2}\bigg{\|}_{L^{\frac{q}{2}}_{t}L^{\infty }_{x}}\lesssim\|\lambda\|_{\ell^{\beta}}, \tag{1.7}\]
where \((f_{j})_{j}\) is a family of orthonormal functions in \(\dot{H}^{\frac{d}{2}-\frac{1}{q}}(\mathbb{R}^{d})\).

Figure 1. The case \(d\geq 3\) (note that \(D=E\) when \(d=3\)). On the lines \([C,E]\) and \([O,A]\) we have \(\frac{1}{q}=\frac{d-1}{2}(\frac{1}{2}-\frac{1}{r})\) and \(\frac{d-2}{q}=\frac{d-1}{r}\), respectively.

Figure 3. For \(d\geq 3\), the gap between the necessary condition (1.5) (represented by the dark gray region) and Theorem 1.1(i) (represented by the light gray region) in the sharp admissible case \(\frac{1}{q}=\frac{d-1}{2}(\frac{1}{2}-\frac{1}{r})\). The exponent \(r_{d}\) is given by \(r_{d}=\frac{2(d^{2}+1)}{(d-1)^{2}}\).

Figure 4. For \(d=2\), the gap between the necessary condition (1.5) (represented by the dark gray region) and Theorem 1.1(i) (represented by the light gray region) in the sharp admissible case \(\frac{1}{q}=\frac{1}{2}(\frac{1}{2}-\frac{1}{r})\).

Also we recall from our discussion of the classical Strichartz estimates (1.2) that we are interested in the range \(q\in(4,\infty)\) when \(d=2\) and \(q\in(2,\infty)\) when \(d\geq 3\). As such, we introduce the notation
\[q_{d}:=\begin{cases}4&\quad(d=2),\\ 2&\quad(d\geq 3).\end{cases}\]
Our first new result is the following.
**Theorem 1.3**.: _For any \(d\geq 2\), the estimate (1.7) holds with \(q\in(q_{d},\infty)\) and \(\beta<\frac{q}{2}\)._
Although it is unclear to us whether one can extend Theorem 1.3 to the critical case \(\beta=\frac{q}{2}\), we are at least able to show the following restricted weak-type estimates.
**Theorem 1.4**.: _For any \(d\geq 3\) and \(q\in(q_{d},\infty)\), the estimate_
\[\Big{\|}\sum_{j}\lambda_{j}|Uf_{j}|^{2}\Big{\|}_{L^{\frac{q}{2},\infty}_{t}L^{ \infty}_{x}}\lesssim\|\lambda\|_{\ell^{\frac{q}{2},1}} \tag{1.8}\]
_holds whenever \((f_{j})_{j}\) is a family of orthonormal functions in \(\dot{H}^{\frac{d}{2}-\frac{1}{q}}(\mathbb{R}^{d})\)._
The rest of the paper is devoted to proving Theorems 1.3 and 1.4, and this is taken up in Sections 2 and 3, respectively. Prior to that, we introduce a small amount of notation.
### Notation
For non-negative quantities \(A\), \(B\), the notation \(A\lesssim B\) (or \(B\gtrsim A\)) denotes an estimate \(A\leq CB\) for some constant \(C>0\), and we write \(A\sim B\) when both \(A\lesssim B\) and \(B\lesssim A\).
Let \(\varphi\in C_{0}^{\infty}\) be supported in the annulus \(\{\xi\in\mathbb{R}^{d}:2^{-1}<|\xi|<2\}\) and satisfy
\[\sum_{k\in\mathbb{Z}}\varphi(2^{-k}\xi)=1,\qquad\xi\in\mathbb{R}^{d}\setminus \{0\}.\]
Define the family of Littlewood-Paley operators \((P_{k})_{k\in\mathbb{Z}}\) by \(\widehat{P_{k}f}(\xi)=\varphi_{k}(\xi)\widehat{f}(\xi)\), where \(\varphi_{k}(\xi)=\varphi(2^{-k}\xi)\) for each \(k\in\mathbb{Z}\). Also, let \(\chi\in C_{0}^{\infty}\) be supported in the ball \(\{\xi\in\mathbb{R}^{d}:|\xi|<1\}\) with \(\chi(0)=1\).
For \(1\leq p<\infty\), \(1\leq q\leq\infty\), we write \(L^{p,q}(\mathbb{R}^{d})\) for the associated Lorentz function space and \(\ell^{p,q}\) for the associated Lorentz sequence space. We refer the reader to [12, 21] for the definition and wider discussion of these spaces. In particular, it is well known that
\[L^{p,p}(\mathbb{R}^{d})=L^{p}(\mathbb{R}^{d}),\quad L^{p,q_{1}}(\mathbb{R}^{d })\subseteq L^{p,q_{2}}(\mathbb{R}^{d})\quad\text{when $q_{1}\leq q_{2}$}.\]
## 2. A sketch proof of Theorem 1.3
In [3], the analogue of (1.7) for the fractional Schrodinger propagators has been obtained.
**Theorem 2.1** (see [3]).: _Let \(a\in(0,\infty)\setminus\{1\}\) and \(d\geq 1\). For \(q\in(q_{d+1},\infty)\) and \(\beta<\frac{q}{2}\), the estimate_
\[\Big{\|}\sum_{j}\lambda_{j}|e^{it(-\Delta)^{\frac{a}{2}}}f_{j}|^{2}\Big{\|}_{L^{\frac{q}{2}}_{t}L^{\infty}_{x}}\lesssim\|\lambda\|_{\ell^{\beta}} \tag{2.1}\]

_holds whenever \((f_{j})_{j}\) is a family of orthonormal functions in \(\dot{H}^{\frac{d}{2}-\frac{a}{q}}(\mathbb{R}^{d})\)._
One may follow the argument in [3] to prove Theorem 1.3 with very minor modifications, and so we only present a very brief sketch here and refer the reader to [3] for further details.
Key to the argument is the dispersive estimate of the integral kernel associated with the wave propagator:
\[\Big{|}\int e^{i(x\cdot\xi+t|\xi|)}|\xi|^{-d+\frac{2}{q}+i\kappa}\chi(\xi)\, \mathrm{d}\xi\Big{|}\lesssim C(\kappa)|t|^{-\frac{2}{q}}, \tag{2.2}\]
where \(d\geq 2\), \(\frac{4}{d-1}<q<\infty\), \(\kappa\in\mathbb{R}\), and \(C(\kappa)>0\) satisfies \(C(\kappa)\lesssim e^{\varepsilon|\kappa|}\) for an arbitrary small \(\varepsilon>0\). The corresponding estimate for the fractional Schrodinger propagator
\[\Big{|}\int e^{i(x\cdot\xi+t|\xi|^{a})}|\xi|^{-d+\frac{2a}{q}+i\kappa}\chi(\xi )\,\mathrm{d}\xi\Big{|}\lesssim C(\kappa)|t|^{-\frac{2}{q}} \tag{2.3}\]
holds for \(d\geq 1\), \(a\in(0,\infty)\backslash\{1\}\), \(\frac{4}{d}\leq q<\infty\), \(\kappa\in\mathbb{R}\), and \(C(\kappa)>0\) as above (for a proof of (2.2) and (2.3), see for example [14] and [13]). Notice that the condition on \(q\) in (2.2) corresponds to the one for (2.3), but \(d\) is replaced by \(d-1\) and the endpoint \(q=\frac{4}{d-1}\) is excluded; the lack of an endpoint estimate here turns out to be harmless since our goal is \(\beta<\frac{q}{2}\).
The argument proceeds using a duality principle observed by Frank-Sabin [10, Lemma 3] in an abstract setting. In our case, we see that (1.7) with \(U\) replaced by \(\chi(|D|)U\) is equivalent to
\[\|WU\chi^{2}(|D|)|D|^{-(d-1+\frac{2}{q})}U^{*}\overline{W}\|_{\mathcal{C}^{ \beta^{\prime}}}\lesssim\|W\|_{L^{\widetilde{q}}_{t}L^{2}_{x}}^{2}, \tag{2.4}\]
where \(\beta^{\prime}\) denotes the (standard) conjugate of \(\beta\), and \(\widetilde{q}\) denotes the (half-)conjugate of \(q\) given by
\[\frac{1}{\beta}+\frac{1}{\beta^{\prime}}=1,\qquad\frac{1}{q}+\frac{1}{ \widetilde{q}}=\frac{1}{2}.\]
Here, \(\mathcal{C}^{\beta^{\prime}}\) with \(\beta^{\prime}\geq 1\) denotes the Schatten space on \(L^{2}(\mathbb{R}^{d+1})\) defined as
\[\mathcal{C}^{\beta^{\prime}}=\{T\in\mathrm{Com}(L^{2}(\mathbb{R}^{d+1})):\|T\| _{\mathcal{C}^{\beta^{\prime}}}:=\|(\mu_{j}(T))_{j\in\mathbb{N}}\|_{\ell^{ \beta^{\prime}}}<\infty\},\]
where \(\mathrm{Com}(L^{2}(\mathbb{R}^{d+1}))\) denotes the set of compact operators on \(L^{2}(\mathbb{R}^{d+1})\) and \((\mu_{j}(T))_{j}\) denotes the family of the singular values of \(T\) (i.e. non-zero eigenvalues of \(\sqrt{T^{*}T}\)). Note that establishing (1.7) with \(\chi(|D|)U\) suffices since one can rescale to deduce the same estimate with \(\chi(\varepsilon|D|)U\) for any \(\varepsilon>0\), and then take a limit \(\varepsilon\to 0\).
The space \(\mathcal{C}^{1}\) consists of trace-class operators and \(\mathcal{C}^{\infty}\) consists of compact operators (endowed with the standard operator norm). Hence \(\mathcal{C}^{\beta^{\prime}}\) with \(1<\beta^{\prime}<\infty\) is regarded as an intermediate space. Of particular importance to the analysis in this note is that the class \(\mathcal{C}^{2}\) consists of Hilbert-Schmidt operators and that there is a very useful expression of the \(\mathcal{C}^{2}\) norm; if \(T\) is given
\[Tf(x)=\int_{\mathbb{R}^{d+1}}K(x,y)f(y)\,\mathrm{d}y,\qquad f\in L^{2}(\mathbb{ R}^{d+1}),\]
then \(\|T\|_{\mathcal{C}^{2}}=\|K\|_{L^{2}(\mathbb{R}^{d+1}\times\mathbb{R}^{d+1})}\). For more details about Schatten spaces, see Simon's textbook [20].
Our goal is to prove (2.4) for \(\widetilde{q}\in(2,\widetilde{q}_{d})\) and \(\beta^{\prime}>\frac{\widetilde{q}}{2}\). We write \(\mathcal{U}=\chi(|D|)U\) and divide into two cases depending on \(\widetilde{q}\).
**Case 1: \(d\geq 3\) and \(4\leq\widetilde{q}<\infty\):** We aim to get the following \(\mathcal{C}^{2}\) and \(\mathcal{C}^{\infty}\) bilinear estimates: if \(2<\widetilde{q}_{0}<4\), \(2<\widetilde{q}_{1}<\infty\),
\[\|W_{1}\mathcal{U}|D|^{-(d-1+\frac{2}{q_{0}})+i\kappa}\mathcal{U}^{*}\overline{W}_{2}\|_{\mathcal{C}^{2}}\lesssim C(\kappa)\|W_{1}\|_{L^{\widetilde{q}_{0},4}_{t}L^{2}_{x}}\|W_{2}\|_{L^{\widetilde{q}_{0},4}_{t}L^{2}_{x}}, \tag{2.5}\] \[\|W_{1}\mathcal{U}|D|^{-(d-1+\frac{2}{q_{1}})+i\kappa}\mathcal{U}^{*}\overline{W}_{2}\|_{\mathcal{C}^{\infty}}\lesssim C(\kappa)\|W_{1}\|_{L^{\widetilde{q}_{1},\infty}_{t}L^{2}_{x}}\|W_{2}\|_{L^{\widetilde{q}_{1},\infty}_{t}L^{2}_{x}}, \tag{2.6}\]
where \(C(\kappa)\lesssim e^{\varepsilon|\kappa|}\) for an arbitrary small \(\varepsilon>0\). Then, by applying a bilinear version of Stein's analytic interpolation to the above estimates, we may obtain (2.4) for the case \(4\leq\widetilde{q}<\infty\). For this purpose, the complex power \(i\kappa\) is essential in (2.5) and (2.6). One may also notice that, thanks to the Lorentz improvements on the right-hand sides of (2.5) and (2.6), the outcome is also slightly stronger than (2.4), namely, in Case 1 and for \(\beta^{\prime}>\frac{\widetilde{q}}{2}\) one obtains
\[\|W\mathcal{U}|D|^{-(d-1+\frac{2}{q})}\mathcal{U}^{*}\overline{W}\|_{\mathcal{ C}^{\beta^{\prime}}}\lesssim\|W\|_{L^{\widetilde{q}}_{t}L^{2,\beta^{\prime}}_{t}L^{2}_ {x}}^{2}.\]
The proofs of (2.5) and (2.6) are based on the dispersive estimate for the wave propagator (2.2) and the Lorentz refinement of the Hardy-Littlewood-Sobolev inequality due to O'Neil [19]. The interested
reader is encouraged to visit [3] where further details regarding the bilinear analytic interpolation of (2.5) and (2.6) can also be found.
**Case 2: \(d\geq 2\) and \(2<\widetilde{q}<4\):** We approach this case by first establishing the somewhat weaker estimates
\[\|W_{1}\mathcal{U}|D|^{-(d-1+\frac{2}{q})+i\kappa}\mathcal{U}^{*}\overline{W}_{ 2}\|_{\mathcal{C}^{\beta^{\prime}}}\lesssim C(\kappa)\|W_{1}\|_{L_{t}^{ \widetilde{q},2}L_{x}^{2}}\|W_{2}\|_{L_{t}^{\widetilde{q},2}L_{x}^{2}}, \tag{2.7}\]
where \(\beta^{\prime}>\frac{\widetilde{q}}{2}\) and \(C(\kappa)\lesssim e^{\varepsilon|\kappa|}\) for an arbitrary small \(\varepsilon>0\). In fact, we can offset the loss in (2.7) (in the sense of \(L_{t}^{\widetilde{q},2}\subsetneq L_{t}^{\widetilde{q}}\) when \(\widetilde{q}>2\)) by capitalizing on the gain in (2.5) (in the sense of \(L_{t}^{\widetilde{q}_{0}}\subsetneq L_{t}^{\widetilde{q}_{0},4}\) if \(\widetilde{q}_{0}<4\)), and use bilinear analytic interpolation once again to obtain the desired estimate (2.4) in Case 2 (see [3] for further details of this step).
To prove (2.7), we use a frequency-localization argument and a different bilinear interpolation argument in the spirit of Keel-Tao [15] based on the estimates
\[\|W_{1}\mathcal{U}P_{k}|D|^{-2s+i\kappa}P_{k}\mathcal{U}^{*}\overline{W}_{2}\|_{\mathcal{C}^{1}} \lesssim(2^{k})^{d-2s}\|W_{1}\|_{L_{t}^{2}L_{x}^{2}}\|W_{2}\|_{L_{t}^{2}L_{x}^{2}}, \tag{2.8}\] \[\|W_{1}\mathcal{U}P_{k}|D|^{-2s+i\kappa}P_{k}\mathcal{U}^{*}\overline{W}_{2}\|_{\mathcal{C}^{2}} \lesssim C(\kappa)(2^{k})^{d-2s-1+\frac{1}{\widetilde{q}_{1}}+\frac{1}{\widetilde{q}_{2}}}\|W_{1}\|_{L_{t}^{\widetilde{q}_{1}}L_{x}^{2}}\|W_{2}\|_{L_{t}^{\widetilde{q}_{2}}L_{x}^{2}}, \tag{2.9}\]
where \(\widetilde{q}_{1},\widetilde{q}_{2}\in[2,\infty)\) are such that \(\frac{1}{\widetilde{q}_{1}}+\frac{1}{\widetilde{q}_{2}}>\frac{1}{2}\), and \(C(\kappa)\lesssim e^{\varepsilon|\kappa|}\) for an arbitrary small \(\varepsilon>0\). As in Case 1, (2.9) readily follows from a stronger form of the dispersive estimate (2.2) with decay \((1+|t|)^{-d/2}\). The estimate (2.8) is a manifestation of the frequency localization\({}^{1}\) and Bessel's inequality for orthonormal sequences, and for its proof, it is convenient to make use of the representation
Footnote 1: Note that (2.8) without the frequency localization is not permitted due to the failure of the associated Sobolev embedding.
\[\|T\|_{\mathcal{C}^{\beta^{\prime}}}=\sup_{(\phi,\psi)\in\mathcal{B}}\|(T\phi_ {j},\psi_{j})_{L^{2}(\mathbb{R}^{d+1})}\|_{\ell^{\beta^{\prime}}}\]
for Schatten norms (see [20, Proposition 2.6]). Here, \(\mathcal{B}\) denotes the set of pairs of orthonormal sequences in \(L^{2}(\mathbb{R}^{d+1})\).
## 3. Proof of Theorem 1.4
Key to our proof of (1.8) is a clever summation idea going back to Bourgain [6] (which has been used in several papers; see also [7, Section 6.2] and [16, Lemma 2.3] for example) which allows us to pass from frequency-local to frequency-global estimates.
**Lemma 3.1** (see [4]).: _Let \(q_{0},q_{1},r\in[2,\infty]\), \(\beta_{0},\beta_{1}\in[2,\infty]\) and \((g_{j})_{j}\) be a uniformly bounded sequence in \(L_{t}^{q_{0}}L_{x}^{r}\cap L_{t}^{q_{1}}L_{x}^{r}\). Suppose there exist \(\varepsilon_{0}\), \(\varepsilon_{1}>0\) such that_
\[\Big{\|}\sum_{j}\lambda_{j}|P_{k}g_{j}|^{2}\Big{\|}_{L_{t}^{\frac{q_{0}}{2}, \infty}L_{x}^{\frac{r}{2}}}\lesssim 2^{-\varepsilon_{0}k}\|\lambda\|_{\ell^{ \beta_{0}}}\]
_and_
\[\Big{\|}\sum_{j}\lambda_{j}|P_{k}g_{j}|^{2}\Big{\|}_{L_{t}^{\frac{q_{1}}{2}, \infty}L_{x}^{\frac{r}{2}}}\lesssim 2^{\varepsilon_{1}k}\|\lambda\|_{\ell^{ \beta_{1}}}\]
_for all \(k\in\mathbb{Z}\), then_
\[\Big{\|}\sum_{j}\lambda_{j}|g_{j}|^{2}\Big{\|}_{L_{t}^{\frac{q}{2},\infty}L_{ x}^{\frac{r}{2}}}\lesssim\|\lambda\|_{\ell^{\beta,1}},\]
_where \(\frac{1}{q}=\frac{1-\theta}{q_{0}}+\frac{\theta}{q_{1}}\), \(\frac{1}{\beta}=\frac{1-\theta}{\beta_{0}}+\frac{\theta}{\beta_{1}}\) and \(\theta=\frac{\varepsilon_{0}}{\varepsilon_{0}+\varepsilon_{1}}\)._
The above lemma was proved in [4, Proposition 2.2]. Let us give a sketch of the argument when \(q_{1}=\infty\) (since this case does not immediately follow from the argument in [4] as it stands).
Proof of Lemma 3.1 when \(q_{1}=\infty\).: It is enough to show the desired inequality for \(\lambda=\mathbf{1}_{E}\) (here, \(\mathbf{1}_{E}(j)=1\) if \(j\in E\), and zero otherwise), so that we aim for
\[|J_{\nu}|\lesssim\left(\frac{(\#E)^{\frac{1}{\beta}}}{\nu}\right)^{\frac{q}{2}}, \tag{3.1}\]
where \(\nu>0\) and
\[J_{\nu}:=\Big{\{}t\in\mathbb{R}:\Big{\|}\sum_{j\in E}|g_{j}(t,\cdot)|^{2}\Big{\|}_{L^{\frac{r}{2}}_{x}}>\nu\Big{\}}.\]
For a fixed \(M\in\mathbb{Z}\) (chosen later), set
\[J_{\nu,M}^{0}:=\Big{\{}t\in\mathbb{R}:\Big{\|}\sum_{j\in E}|\sum_{k\geq M+1}P_{k}g_{j}(t,\cdot)|^{2}\Big{\|}_{L^{\frac{r}{2}}_{x}}>\frac{\nu}{2}\Big{\}}\]
and
\[J_{\nu,M}^{1}:=\Big{\{}t\in\mathbb{R}:\Big{\|}\sum_{j\in E}|\sum_{k\leq M}P_{k}g_{j}(t,\cdot)|^{2}\Big{\|}_{L^{\frac{r}{2}}_{x}}>\frac{\nu}{2}\Big{\}}.\]
Regarding \(J_{\nu,M}^{0}\), let us observe that
\[\Big{\|}\sum_{j\in E}|\sum_{k\geq M+1}P_{k}g_{j}(t,\cdot)|^{2}\Big{\|}_{L^{\frac{q_{0}}{2},\infty}_{t}L^{\frac{r}{2}}_{x}}\lesssim 2^{-\varepsilon_{0}M}(\#E)^{\frac{1}{\beta_{0}}}.\]
In fact, applying Minkowski's inequality twice gives
\[\Big{\|}\sum_{j\in E}|\sum_{k\geq M+1}P_{k}g_{j}(t,\cdot)|^{2}\Big{\|}_{L^{\frac{q_{0}}{2},\infty}_{t}L^{\frac{r}{2}}_{x}}\lesssim\Big{(}\sum_{k\geq M+1}\Big{\|}\sum_{j}|P_{k}g_{j}|^{2}\Big{\|}_{L^{\frac{q_{0}}{2},\infty}_{t}L^{\frac{r}{2}}_{x}}^{\frac{1}{2}}\Big{)}^{2},\]
which is further bounded by (up to some constant) \(2^{-\varepsilon_{0}M}(\#E)^{\frac{1}{\beta_{0}}}.\) Hence, one readily sees that
\[|J_{\nu,M}^{0}|\lesssim\nu^{-\frac{q_{0}}{2}}\Big{\|}\sum_{j\in E}|\sum_{k\geq M+1}P_{k}g_{j}(t,\cdot)|^{2}\Big{\|}_{L^{\frac{q_{0}}{2},\infty}_{t}L^{\frac{r}{2}}_{x}}^{\frac{q_{0}}{2}}\lesssim 2^{-\frac{\varepsilon_{0}Mq_{0}}{2}}\nu^{-\frac{q_{0}}{2}}(\#E)^{\frac{q_{0}}{2\beta_{0}}}.\]
A similar calculation reveals that there exists \(C>0\) such that
\[\Big{\|}\sum_{j\in E}|\sum_{k\leq M}P_{k}g_{j}(t,\cdot)|^{2}\Big{\|}_{L^{\infty}_{t}L^{\frac{r}{2}}_{x}}\leq C2^{\varepsilon_{1}M}(\#E)^{\frac{1}{\beta_{1}}},\]
and now we select \(M\) to be the largest integer such that
\[C2^{\varepsilon_{1}M}(\#E)^{\frac{1}{\beta_{1}}}<\frac{\nu}{2}.\]
With this choice, we have \(J_{\nu,M}^{1}=\emptyset\) and \(2^{M}\sim\nu^{\frac{1}{\varepsilon_{1}}}(\#E)^{-\frac{1}{\beta_{1}\varepsilon_ {1}}}.\) Since \(|J_{\nu}|\leq|J_{\nu,M}^{0}|\), we obtain (3.1) after a straightforward computation.
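Spelled out, the computation runs as follows: with \(q_{1}=\infty\) we have \(\frac{1}{q}=\frac{1-\theta}{q_{0}}\) and \(\frac{1}{\beta}=\frac{1-\theta}{\beta_{0}}+\frac{\theta}{\beta_{1}}\), so that

\[|J_{\nu}|\leq|J_{\nu,M}^{0}|\lesssim 2^{-\frac{\varepsilon_{0}Mq_{0}}{2}}\nu^{-\frac{q_{0}}{2}}(\#E)^{\frac{q_{0}}{2\beta_{0}}}\sim\nu^{-\frac{q_{0}}{2}(1+\frac{\varepsilon_{0}}{\varepsilon_{1}})}(\#E)^{\frac{q_{0}}{2}(\frac{1}{\beta_{0}}+\frac{\varepsilon_{0}}{\beta_{1}\varepsilon_{1}})}=\bigg{(}\frac{(\#E)^{\frac{1}{\beta}}}{\nu}\bigg{)}^{\frac{q}{2}},\]

where the last equality uses \(\frac{q}{2}=\frac{q_{0}}{2}\cdot\frac{\varepsilon_{0}+\varepsilon_{1}}{\varepsilon_{1}}\) and \(\frac{q}{2\beta}=\frac{q_{0}}{2}\big{(}\frac{1}{\beta_{0}}+\frac{\varepsilon_{0}}{\beta_{1}\varepsilon_{1}}\big{)}\), both consequences of \(\theta=\frac{\varepsilon_{0}}{\varepsilon_{0}+\varepsilon_{1}}\); this is exactly (3.1).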
Proof of Theorem 1.4.: Let \((f_{j})_{j\in\mathbb{N}}\) be a family of orthonormal functions in \(L^{2}(\mathbb{R}^{d})\). First note that
\[\Big{\|}\sum_{j}\lambda_{j}|U|D|^{-s}P_{k}f_{j}|^{2}\Big{\|}_{L^{\infty}_{t}L^ {\infty}_{x}}\lesssim(2^{k})^{d-2s}\|\lambda\|_{\ell^{\infty}}. \tag{3.2}\]
This is the estimate dual to (2.8) (which can easily be verified directly by an application of Bessel's inequality as in [4]).
First we consider the case \(d\geq 4\), in which case it is known (see for example [8, Theorem 1]) that the classical Strichartz estimate (1.3) holds with \(f\) replaced by \(P_{0}f\). Thus, by rescaling,
\[\|U|D|^{-s}P_{k}f_{j}\|_{L^{2}_{t}L^{\infty}_{x}}\lesssim(2^{k})^{\frac{d-1}{2 }-s} \tag{3.3}\]
holds for each \(f_{j}\), and therefore the triangle inequality implies
\[\Big{\|}\sum_{j}\lambda_{j}|U|D|^{-s}P_{k}f_{j}|^{2}\Big{\|}_{L^{1}_{t}L^{\infty} _{x}}\lesssim(2^{k})^{d-1-2s}\|\lambda\|_{\ell^{1}}. \tag{3.4}\]
Although (3.2) and (3.3) are valid for all \(s\in\mathbb{R}\), if we fix \(q\in(2,\infty)\) and apply these estimates with \(s_{q}=\frac{d}{2}-\frac{1}{q}\), then Lemma 3.1 immediately gives the desired estimate
\[\Big{\|}\sum_{j}\lambda_{j}|U|D|^{-s_{q}}f_{j}|^{2}\Big{\|}_{L^{\frac{q}{2}, \infty}_{t}L^{\infty}_{x}}\lesssim\|\lambda\|_{\ell^{\frac{q}{2},1}}.\]
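Here Lemma 3.1 is applied, for each fixed \(q\), with
\[q_{0}=2,\quad\beta_{0}=1,\quad\varepsilon_{0}=1-\tfrac{2}{q}\ \text{(from (3.4))},\qquad q_{1}=\infty,\quad\beta_{1}=\infty,\quad\varepsilon_{1}=\tfrac{2}{q}\ \text{(from (3.2))},\]
so that \(\theta=1-\frac{2}{q}\), \(\frac{1}{q}=\frac{1-\theta}{2}\) and \(\frac{1}{\beta}=\frac{2}{q}\), which produces exactly the exponents \(L_{t}^{\frac{q}{2},\infty}\) and \(\ell^{\frac{q}{2},1}\) above.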
The case when \(d=3\) requires a bit more work to show (1.8) for \(2<q<\infty\) since (3.3) with \(d=3\) is known to be false (see [18]). Thus, instead of (3.4), we aim for
\[\Big{\|}\sum_{j}\lambda_{j}|U|D|^{-s}P_{k}f_{j}|^{2}\Big{\|}_{L^{\frac{q}{2}}_ {t}L^{\infty}_{x}}\lesssim(2^{k})^{3-\frac{2}{q}-2s}\|\lambda\|_{\ell^{\frac{ q}{2}}} \tag{3.5}\]
with \(q\) larger than but arbitrarily close to \(2\) (here the implicit constant blows up as \(q\to 2\)). It is clear that Lemma 3.1 combined with (3.2) and (3.5) yields (1.8).
Let us check (3.5). First, by rescaling, we may assume \(k=0\). Using duality [10, Lemma 3], it suffices to show
\[\|WS\overline{W}\|_{\mathcal{C}^{2^{m}}}\lesssim\|W\|^{2}_{L^{2^{m+1}}_{t}L^{2}_{x}}, \tag{3.6}\]
where \(S:=UP_{0}P_{0}^{*}U^{*}\) and \(m\) is a sufficiently large natural number. Recall that the \(\mathcal{C}^{2}\)-norm can be expressed by the \(L^{2}\)-norm of its integral kernel, and in addition, from the definition of singular values, it holds that \(\|T\|^{2}_{\mathcal{C}^{2^{m+1}}}=\|T^{*}T\|_{\mathcal{C}^{2^{m}}}\). Therefore, since \(S\) is self-adjoint, we have
\[\|WS\overline{W}\|^{2^{m-1}}_{\mathcal{C}^{2m}}=\|(WS\overline{W})^{2^{m-1}} \|_{\mathcal{C}^{2}}.\]
Hence our task is to get an \(L^{2}\) bound for the integral kernel \(\mathcal{K}_{2^{m-1}}\) of \((WS\overline{W})^{2^{m-1}}\). The notation for the variables in the forthcoming argument requires a little care and our choices will become clear as we proceed.
Let us write \(\zeta_{j}=(t_{j},x_{j})\in\mathbb{R}^{1+3}\). Then
\[(WS\overline{W})F(\zeta_{1})=\int_{\mathbb{R}^{4}}F(\zeta_{2})\mathcal{K}_{1} (\zeta_{1},\zeta_{2})\,\mathrm{d}\zeta_{2}\]
with \(\mathcal{K}_{1}(\zeta_{1},\zeta_{2})=W(\zeta_{1})\overline{W}(\zeta_{2})K( \zeta_{1},\zeta_{2})\) and
\[K(\zeta_{1},\zeta_{2})=\int_{\mathbb{R}^{3}}|\varphi(\xi)|^{2}e^{i((x_{1}-x_{2 })\cdot\xi+(t_{1}-t_{2})|\xi|)}\,\mathrm{d}\xi.\]
To handle this, we shall invoke the following frequency-localized dispersive estimate (see e.g. [14]):
\[\sup_{x_{1},x_{2}}|K(\zeta_{1},\zeta_{2})|\lesssim\langle t_{1}-t_{2}\rangle^{ -1}, \tag{3.7}\]
where \(\langle t\rangle:=1+|t|\).
For \(m=2\) we have
\[(WS\overline{W})^{2}G(\zeta_{1})=\int_{\mathbb{R}^{4}}G(\zeta_{3})\Big{(}\int _{\mathbb{R}^{4}}W(\zeta_{1})|W(\zeta_{2})|^{2}\overline{W}(\zeta_{3})K(\zeta _{1},\zeta_{2})K(\zeta_{2},\zeta_{3})\,\mathrm{d}\zeta_{2}\Big{)}\mathrm{d} \zeta_{3}\]
and from this we have an expression for \(\mathcal{K}_{2}(\zeta_{1},\zeta_{3})\). Inductively,
\[\mathcal{K}_{2^{m}}(\zeta_{1},\zeta_{2^{m}+1}) =\int_{\mathbb{R}^{4}}\mathcal{K}_{2^{m-1}}(\zeta_{1},\zeta_{2^{m-1}+1})\mathcal{K}_{2^{m-1}}(\zeta_{2^{m-1}+1},\zeta_{2^{m}+1})\,\mathrm{d}\zeta_{2^{m-1}+1}\] \[=\int_{(\mathbb{R}^{4})^{2^{m}-1}}W(\zeta_{1})\prod_{j=2}^{2^{m}}|W(\zeta_{j})|^{2}\overline{W}(\zeta_{2^{m}+1})\prod_{k=1}^{2^{m}}K(\zeta_{k},\zeta_{k+1})\,\prod_{\ell=2}^{2^{m}}\mathrm{d}\zeta_{\ell}\]
for all \(m\geq 1\), and consequently
\[|\mathcal{K}_{2^{m}}(\zeta_{1},\zeta_{2^{m}+1})|\lesssim|W(\zeta_{1})|\bigg{(}\int_{\mathbb{R}^{2^{m}-1}}\prod_{j=2}^{2^{m}}h(t_{j})\prod_{k=1}^{2^{m}}\langle t_{k}-t_{k+1}\rangle^{-1}\,\prod_{\ell=2}^{2^{m}}\mathrm{d}t_{\ell}\bigg{)}|W(\zeta_{2^{m}+1})|,\]
where \(h(t):=\|W(t,\cdot)\|_{L_{x}^{2}}^{2}\).
Now we are ready to estimate the relevant Schatten norm of \(WS\overline{W}\):
\[\|WS\overline{W}\|_{\mathcal{C}^{2^{m}}}^{2^{m}}\] \[=\int_{(\mathbb{R}^{4})^{2}}|\mathcal{K}_{2^{m-1}}(\zeta_{1}, \zeta_{2^{m-1}+1})|^{2}\,\mathrm{d}\zeta_{1}\mathrm{d}\zeta_{2^{m-1}+1}\] \[\lesssim\int_{\mathbb{R}^{2}}h(t_{1})\bigg{(}\int_{\mathbb{R}^{2 ^{m-1}-1}}\prod_{j=2}^{2^{m-1}}h(t_{j})\prod_{k=1}^{2^{m-1}}\langle t_{k}-t_{k +1}\rangle^{-1}\,\prod_{\ell=2}^{2^{m-1}}\mathrm{d}t_{\ell}\bigg{)}^{2}h(t_{2^ {m-1}+1})\,\mathrm{d}t_{1}\mathrm{d}t_{2^{m-1}+1}\]
By expanding the square and an appropriate choice of labelling\({}^{2}\), we see that
Footnote 2: If we use \(t_{2},\ldots,t_{2^{m-1}},t_{2}^{*},\ldots,t_{2^{m-1}}^{*}\) for the variables arising from expanding the square, then the claimed formula follows by relabelling \(t_{j}=t^{*}_{2^{m}+2-j}\) for \(j=2^{m-1}+2,\ldots,2^{m}\).
\[\|WS\overline{W}\|_{\mathcal{C}^{2^{m}}}^{2^{m}}\lesssim\int_{\mathbb{R}^{2^{ m}}}\prod_{j=1}^{2^{m+1}}f_{j}(L_{j}\mathbf{t})\,\mathrm{d}\mathbf{t}, \tag{3.8}\]
where \(L_{j}:\mathbb{R}^{2^{m}}\ni\mathbf{t}\mapsto v_{j}\cdot\mathbf{t}\in\mathbb{R}\) with
\[v_{j}=\begin{cases}e_{j}&\text{ if }j=1,\ldots,2^{m},\\ e_{j-2^{m}}-e_{j+1-2^{m}}&\text{ if }j=2^{m}+1,\ldots,2^{m+1},\end{cases}\]
under the relation \(e_{2^{m}+1}=e_{1}\), and
\[f_{j}(t)=\begin{cases}h(t)&\text{ if }j=1,\ldots,2^{m},\\ \langle t\rangle^{-1}&\text{ if }j=2^{m}+1,\ldots,2^{m+1}.\end{cases}\]
One may notice that the term on the right-hand side of (3.8) has the structure of the left-hand side of the Brascamp-Lieb inequality and, in fact, we shall invoke the following characterization of finiteness of the Brascamp-Lieb constant (in the rank-one case) due to Barthe [1]. Let \(n\), \(M\in\mathbb{N}\) and, for a subset \(I\subseteq\{1,\ldots,M\}\), we denote by \(\mathbf{1}_{I}\) the vector of \(\mathbb{R}^{M}\) whose \(j\)-th element is \(1\) if \(j\in I\) and \(0\) elsewhere.
**Lemma 3.2** (see Proposition 3 in [1]).: _There exists \(C<\infty\) such that_
\[\int_{\mathbb{R}^{n}}\prod_{j=1}^{M}f_{j}(x\cdot v_{j})\,\mathrm{d}x\leq C\prod _{j=1}^{M}\|f_{j}\|_{L^{p_{j}}(\mathbb{R})}\]
_holds for all non-negative \(f_{j}\) in \(L^{p_{j}}(\mathbb{R})\) if and only if the following holds. The vector \(\frac{1}{p}=(\frac{1}{p_{1}},\ldots,\frac{1}{p_{M}})\) belongs to the convex hull of the vectors \(\mathbf{1}_{I}\) for subsets \(I\) such that \((v_{i})_{i\in I}\) forms a basis of \(\mathbb{R}^{n}\)._
We will use this lemma with \(n=2^{m}\) and \(M=2^{m+1}\). Let \(I_{0}=\{2^{m}+1,2^{m}+2,\ldots,2^{m+1}\}\) so that \(\mathbf{1}_{I_{0}}=(0,\ldots,0;1,\ldots,1)\); i.e. the first half of the elements are all \(0\) and the rest are all \(1\) (the separator ";" is positioned so that the components have been divided into exactly two equal-sized parts). For integers \(i\), \(j\) with \(1\leq i\leq 2^{m}<j\leq 2^{m+1}\), the transposition operator swapping the \(i\)-th element and the \(j\)-th element, denoted by \([i,j]\), provides
\[[i,j]\mathbf{1}_{I_{0}}=(0,\ldots,0,1,0,\ldots,0;1,\ldots,1,0,1,\ldots,1).\]
From the definition of \((v_{j})_{j}\), if \(1\leq i\leq 2^{m}\), the vectors
\[(v_{j})_{j\in(\{i\}\cup(I_{0}\setminus\{2^{m}+i\}))}\]
form a basis of \(\mathbb{R}^{2^{m}}\). In \(\mathbb{R}^{2^{m+1}}\), considering the \(2^{m}\) points \([1,2^{m}+1]\mathbf{1}_{I_{0}},\ldots,[2^{m},2^{m+1}]\mathbf{1}_{I_{0}}\), one may discover that the barycenter of those points, namely,
\[(2^{-m},\ldots,2^{-m};1-2^{-m},\ldots,1-2^{-m})\]
lies on the boundary of the convex hull of characteristic vectors. Hence, if we set
\[(p_{1}^{-1},\ldots,p_{2^{m}}^{-1};p_{2^{m}+1}^{-1},\ldots,p_{2^{m+1}}^{-1})=(2^ {-m},\ldots,2^{-m};1-2^{-m},\ldots,1-2^{-m}),\]
then Lemma 3.2 implies that
\[\int_{\mathbb{R}^{2^{m}}}\prod_{j=1}^{2^{m+1}}f_{j}(L_{j}\mathbf{t})\,\mathrm{ d}\mathbf{t}\lesssim\prod_{j=1}^{2^{m+1}}\|f_{j}\|_{L_{t}^{p_{j}}}\lesssim\|W\|_{L_{t} ^{2^{m+1}}L_{x}^{2}}^{2^{m+1}},\]
which ends the proof.
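To illustrate the last step, consider the case \(m=1\): then \(n=2\), \(M=4\), the maps are \(L_{1}\mathbf{t}=t_{1}\), \(L_{2}\mathbf{t}=t_{2}\), \(L_{3}\mathbf{t}=t_{1}-t_{2}\), \(L_{4}\mathbf{t}=t_{2}-t_{1}\), and all the exponents \(p_{j}\) equal \(2\), so the inequality reads
\[\int_{\mathbb{R}^{2}}h(t_{1})h(t_{2})\langle t_{1}-t_{2}\rangle^{-2}\,\mathrm{d}t_{1}\mathrm{d}t_{2}\lesssim\|h\|_{L^{2}}^{2}\|\langle\cdot\rangle^{-1}\|_{L^{2}}^{2}\lesssim\|W\|_{L_{t}^{4}L_{x}^{2}}^{4},\]
which, by (3.8), gives \(\|WS\overline{W}\|_{\mathcal{C}^{2}}\lesssim\|W\|_{L_{t}^{4}L_{x}^{2}}^{2}\), the \(m=1\) instance of (3.6).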
## Appendix: Necessary conditions
We provide a justification of the necessary condition (1.5) for (1.1). The proof here is not new and may be found in [5, Proposition 9].
**The necessity of \(\beta\leq\frac{q}{2}\).** The argument we give here is the same as in the proof of [5, Proposition 9]; since \(r<\infty\) was a blanket assumption throughout [5], here we consider only the case \(r=\infty\).
Consider the family \((f_{j})_{j\geq 1}\) given by
\[f_{j}(x)=Ug(-j,x)\]
where the function \(g\) satisfies
\[\widehat{g}(\xi)=c\frac{1_{[\pi,3\pi]}(|\xi|)}{|\xi|^{s+\frac{d-1}{2}}}\]
and the constant \(c>0\) will be chosen shortly. Parseval's identity and changing to polar coordinates reveals
\[\langle f_{j},f_{k}\rangle_{\dot{H}^{s}}=\frac{1}{(2\pi)^{d}}\int_{\mathbb{R} ^{d}}e^{i(j-k)|\xi|}|\widehat{g}(\xi)|^{2}|\xi|^{2s}\,\mathrm{d}\xi=\frac{c^{2 }|\mathbb{S}^{d-1}|}{(2\pi)^{d}}\int_{\pi}^{3\pi}e^{i(k-j)\rho}\,\mathrm{d}\rho\]
and so \((f_{j})_{j\geq 1}\) is an orthonormal family in \(\dot{H}^{s}(\mathbb{R}^{d})\) upon an appropriate choice of \(c\).
Also, assuming \(\lambda_{j}>0\) for each \(j\) we have
\[\bigg{\|}\sum_{j}\lambda_{j}|Uf_{j}|^{2}\bigg{\|}_{L_{t}^{\frac{q}{2}}L_{x}^{\infty}}^{\frac{q}{2}} \geq\int_{\mathbb{R}}\bigg{(}\sum_{j}\lambda_{j}|Ug(t-j,0)|^{2}\bigg{)}^{\frac{q}{2}}\mathrm{d}t\] \[\geq\sum_{n\geq 1}\lambda_{n}^{\frac{q}{2}}\int_{n}^{n+\varepsilon_{0}}|Ug(t-n,0)|^{q}\,\mathrm{d}t\] \[=C\sum_{n\geq 1}\lambda_{n}^{\frac{q}{2}}\]
where \(\varepsilon_{0}>0\) is a sufficiently small constant and
\[C=\int_{0}^{\varepsilon_{0}}|Ug(t,0)|^{q}\,\mathrm{d}t=c^{q}\int_{0}^{\varepsilon _{0}}\bigg{|}\int_{|\xi|\in[\pi,3\pi]}e^{it|\xi|}\frac{\mathrm{d}\xi}{|\xi|^{s+ \frac{d-1}{2}}}\bigg{|}^{q}\mathrm{d}t.\]
The constant \(C\) is clearly finite, and a sufficiently small choice of \(\varepsilon_{0}\) guarantees \(|t|\xi||\leq\frac{1}{10}\) for \(|\xi|\in[\pi,3\pi]\) and thus
\[C \geq c^{q}\int_{0}^{\varepsilon_{0}}\bigg{|}\int_{|\xi|\in[\pi,3 \pi]}\cos(t|\xi|)\frac{\mathrm{d}\xi}{|\xi|^{s+\frac{d-1}{2}}}\bigg{|}^{q} \mathrm{d}t\] \[\geq\bigg{(}\frac{c}{2}\bigg{)}^{q}\int_{0}^{\varepsilon_{0}} \bigg{(}\int_{|\xi|\in[\pi,3\pi]}\frac{\mathrm{d}\xi}{|\xi|^{s+\frac{d-1}{2}} }\bigg{)}^{q}\mathrm{d}t>0.\]
Hence, if (1.1) holds we deduce \(\|\lambda\|_{\frac{q}{2}}\lesssim\|\lambda\|_{\beta}\) and this implies \(\beta\leq\frac{q}{2}\).
### The necessity of \(\beta\leq\beta_{\frac{d}{2}}(q,r)\)
Let \(B(x,r)\) be the ball in \(\mathbb{R}^{d}\) centred at \(x\) with radius \(r\). Consider the family \((f_{j})_{j=1}^{N}\) given by
\[\widehat{f}_{j}(\xi)=1_{B(0,1)}(R(\xi-v_{j})).\]
Here, \(v_{1},\ldots,v_{N}\in\mathbb{R}^{d}\) are fixed vectors satisfying \(|v_{j}|\in[1,2]\) and chosen so that the balls \(\{B(v_{j},\frac{1}{R})\}_{j=1}^{N}\) are disjoint. In particular, this means that the functions \((\widehat{f}_{j})_{j=1}^{N}\) have disjoint support and thus \((f_{j})_{j=1}^{N}\) is an orthogonal family. Moreover, we choose a maximal collection of vectors \(v_{1},\ldots,v_{N}\) under the above constraints, which means \(N\sim R^{d}\).
Next we claim that \(|Uf_{j}|\gtrsim R^{-d}1_{B(0,c^{\prime}R)}\) for an appropriately small choice of the constant \(c^{\prime}\). To see this, first assume that \(v_{j}=\rho e_{d}\) for some \(\rho\in[1,2]\), and write
\[x\cdot\xi+t|\xi| =\bar{x}\cdot\bar{\xi}+(x_{d}+t)(\xi_{d}-\rho)+t(|\xi|-\xi_{d})+ \rho(x_{d}+t)\] \[=\bar{x}\cdot\bar{\xi}+(x_{d}+t)(\xi_{d}-\rho)+t\frac{|\bar{\xi} |^{2}}{|\xi|+\xi_{d}}+\rho(x_{d}+t)=:\Phi_{t,x}(\xi)+\rho(x_{d}+t).\]
Here, we use the notation \(x=(\bar{x},x_{d})\). If \(c^{\prime}\) is chosen sufficiently small, then for \((t,x)\in B(0,c^{\prime}R)\) and \(\xi\) in the support of \(\widehat{f}_{j}\) (in particular, \(|\bar{\xi}|\leq\frac{1}{R},|\xi_{d}-\rho|\leq\frac{1}{R}\)) we get
\[|\Phi_{t,x}(\xi)|\leq\frac{1}{10}\]
and thus
\[|Uf_{j}(t,x)|=\bigg{|}\int_{B(\rho e_{d},\frac{1}{R})}e^{i\Phi_{t,x}(\xi)}\, \mathrm{d}\xi\bigg{|}\gtrsim\frac{1}{R^{d}}\]
as claimed. Taking \(\lambda_{j}=\|f_{j}\|_{\dot{H}^{s}}^{2}\) and \(g_{j}=\|f_{j}\|_{\dot{H}^{s}}^{-1}f_{j}\) for each \(j=1,\ldots,N\), and recalling that \(N\sim R^{d}\), we easily deduce that
\[\bigg{\|}\sum_{j=1}^{N}\lambda_{j}|Ug_{j}|^{2}\bigg{\|}_{L_{t}^{\frac{q}{2}}L_{x}^{\frac{r}{2}}}=\bigg{\|}\sum_{j=1}^{N}|Uf_{j}|^{2}\bigg{\|}_{L_{t}^{\frac{q}{2}}L_{x}^{\frac{r}{2}}}\gtrsim R^{\frac{2}{q}+\frac{2d}{r}-d}.\]
If we assume (1.1) then
\[R^{\frac{2}{q}+\frac{2d}{r}-d}\lesssim\bigg{(}\sum_{j=1}^{N}\|f_{j}\|_{\dot{H}^ {s}}^{2\beta}\bigg{)}^{\frac{1}{\beta}}\lesssim R^{\frac{d}{\beta}-d}\]
since \(\|f_{j}\|_{\dot{H}^{s}}^{2}\sim R^{-d}\). Therefore \(\frac{2}{q}+\frac{2d}{r}\leq\frac{d}{\beta}\) and this yields the necessary condition \(\beta\leq\beta_{\frac{d}{2}}(q,r)\).
_Acknowledgements._ The first author would like to express his appreciation to Sanghyuk Lee and Shohei Nakamura for a number of extremely helpful discussions related to this work. The second author shows his deep gratitude to Yutaka Terasawa and Satoshi Masaki for the opportunity at the RIMS conference "Harmonic Analysis and Partial Differential Equations" in 2022. The third author also thanks Mitsuru Sugimoto for his continuous encouragement.
|
2301.01714
|
Canonical steering ellipsoids of pure symmetric multiqubit states with
two distinct spinors and volume monogamy of steering
|
Quantum steering ellipsoid formalism provides a faithful representation of
all two-qubit states and helps in obtaining correlation properties of the state
through the steering ellipsoid. The steering ellipsoids corresponding to the
two-qubit subsystems of permutation symmetric $N$-qubit states is analysed
here. The steering ellipsoids of two-qubit states that have undergone local
operations on both the qubits so as to bring the state to its canonical form
are the so-called canonical steering ellipsoids. We construct and analyze the
geometric features of the canonical steering ellipsoids corresponding to pure
permutation symmetric $N$-qubit states with two distinct spinors. Depending on
the degeneracy of the two spinors in the pure symmetric $N$-qubit state, there
arise several families which cannot be converted into one another through
Stochastic Local Operations and Classical Communications (SLOCC). The canonical
steering ellipsoids of the two-qubit states drawn from the pure symmetric
$N$-qubit states with two distinct spinors allow for a geometric visualization
of the SLOCC-inequivalent class of states. We show that the states belonging to
the W-class correspond to oblate spheroid centered at $(0,0,1/(N-1))$ with
fixed semiaxes lengths $1/\sqrt{N-1}$ and $1/(N-1)$. The states belonging to
all other SLOCC inequivalent families correspond to ellipsoids centered at the
origin of the Bloch sphere. We also explore volume monogamy relations of states
belonging to these families, mainly the W-class of states.
|
B G Divyamani, I Reena, Prasanta K Panigrahi, A R Usha Devi, Sudha
|
2023-01-01T19:46:21Z
|
http://arxiv.org/abs/2301.01714v2
|
Canonical steering ellipsoids of pure symmetric multiqubit states with two distinct spinors and volume monogamy of steering
###### Abstract
Quantum steering ellipsoid formalism provides a faithful representation of all two-qubit states and is useful in obtaining their correlation properties. The steering ellipsoids of two-qubit states that have undergone local operations on both the qubits so as to bring the state to its canonical form are the so-called _canonical steering ellipsoids_. The steering ellipsoids corresponding to the two-qubit subsystems of permutation symmetric \(N\)-qubit states are considered here. We construct and analyze the geometric features of the canonical steering ellipsoids corresponding to pure permutation symmetric \(N\)-qubit states with two distinct spinors. Depending on the degeneracy of the two spinors in the pure symmetric \(N\)-qubit state, there arise several families which cannot be converted into one another through Stochastic Local Operations and Classical Communications (SLOCC). The canonical steering ellipsoids of the two-qubit states drawn from the pure symmetric \(N\)-qubit states with two distinct spinors allow for a geometric visualization of the SLOCC equivalent class of states. We show that the states belonging to the W-class correspond to oblate spheroids centered at \((0,0,1/(N-1))\) with fixed semiaxes lengths \(1/\sqrt{N-1}\) and \(1/(N-1)\). The states belonging to all other SLOCC inequivalent families correspond to ellipsoids centered at the origin of the Bloch sphere. We also explore volume monogamy relations of states belonging to these families, mainly the W-class of states.
pacs: 03.65.Ud, 03.67.Bg
## I Introduction
The Bloch sphere representation of a single qubit contains valuable geometric information needed for quantum information processing tasks. A natural generalization and an analogous picture for a two-qubit system is provided by the _quantum steering ellipsoid_[1; 2; 3] and is helpful in understanding correlation properties such as quantum discord [4; 5], volume monogamy of steering [2; 3], etc. The quantum steering ellipsoid is the set of all Bloch vectors to which one party's qubit could be 'steered' when all possible measurements are carried out on the qubit belonging to the other party. The volume of the steering ellipsoids [1] corresponding to the two-qubit subsystems of an \(N\)-qubit state, \(N>3\), captures monogamy properties of the state effectively [2; 3] and provides insightful information about two-qubit entanglement.
While the quantum steering ellipsoid [1; 2; 3] is the set of all Bloch vectors of first qubit steered by local operations on second qubit, the so-called _canonical steering ellipsoid_[6; 7; 8] is the steering ellipsoid of a two-qubit state that has attained a canonical form under suitable SLOCC operations on _both the qubits_. It has been shown that the SLOCC canonical forms of a two-qubit state can either be a Bell diagonal form or a nondiagonal one (when the two-qubit state is rank-deficient) [6; 8]. The canonical steering ellipsoids corresponding to the two-qubit states can thus have only two distinct forms [6; 8] and provide a much simpler geometric picture representing the set of all SLOCC equivalent two-qubit states.
The canonical steering ellipsoids corresponding to the two-qubit subsystems of pure three-qubit permutation symmetric states are analyzed in Ref. [9]. It has been shown that [9] the two SLOCC inequivalent families of pure three-qubit permutation symmetric states, the W-class of states (with two distinct spinors) and the GHZ class of states (with three distinct spinors) correspond to distinct canonical steering ellipsoids. While an ellipsoid centered at the origin of the Bloch sphere is the canonical steering ellipsoid for the GHZ class of states, an oblate spheroid with its center shifted along the polar axis is the one for W-class of states. Using these, the volume monogamy relations are established and the obesity of the steering ellipsoids is made use of to obtain expressions for concurrence of states belonging to these two SLOCC inequivalent families in Ref. [9].
In this paper, we extend the analysis to a class of \(N\)-qubit pure states which are symmetric under exchange of qubits. Through the SLOCC canonical forms of the two-qubit reduced state, extracted from pure symmetric _multiqubit_ states with two distinct spinors and the Lorentz canonical forms of their real representative, we examine the features of _canonical steering ellipsoids_ associated with them. We identify the special features of the canonical steering ellipsoid representing \(N\)-qubit states of the W-class and these features distinguish this class from all other SLOCC inequivalent families of pure symmetric \(N\)-qubit states. We discuss the volume monogamy of steering for pure permutation symmetric \(N\)-qubit states and obtain the volume monogamy relation satisfied by W-class of states. An expression for obesity of the steering ellipsoid and thereby an expression for concurrence of two-qubit subsystems of \(N\)-qubit states belonging to the W-class is obtained.
Contents of this paper are organized as follows: In Sec.II, we give a brief review on SLOCC classification of pure permutation symmetric multiqubit states based on Majorana representation [10; 11; 12; 13] and obtain the two-qubit subsystems of the states belonging to SLOCC inequivalent families of pure symmetric multiqubit states with two distinct spinors. Sec. III provides an outline of the real matrix representation of a two-qubit density matrix and their Lorentz canonical forms under SLOCC transformation of the two-qubit density matrix. We also obtain the Lorentz canonical forms of two-qubit subsystems corresponding to SLOCC inequivalent families, in Sec. III. In Sec.IV, we analyse the nature of steering ellipsoids associated with the distinct Lorentz canonical forms obtained in Sec. III. The volume monogamy of steering for pure symmetric multiqubit states with two distinct spinors is discussed along with illustration for W-class of states, in Sec. V. Summary of our results is presented in Sec. VI.
## II Majorana geometric representation of pure symmetric \(N\)-qubit states with two distinct spinors
Ettore Majorana, in his novel 1932 paper [10] proposed that a pure spin \(j=\frac{N}{2}\) quantum state can be represented as a _symmetrized_ combination of \(N\) constituent spinors as follows:
\[|\Psi_{\rm sym}\rangle={\cal N}\,\sum_{P}\,\hat{P}\,\{|\epsilon_{1},\epsilon _{2},\ldots\epsilon_{N}\rangle\}, \tag{1}\]
where
\[|\epsilon_{l}\rangle=\left(\cos(\alpha_{l}/2)\,|0\rangle+\sin(\alpha_{l}/2)\,| 1\rangle\right)e^{i\beta_{l}/2},\ \ l=1,\,2,\ldots,\,N. \tag{2}\]
The symbol \(\hat{P}\) corresponds to the set of all \(N!\) permutations of the spinors (qubits) and \({\cal N}\) corresponds to an overall normalization factor. The name Majorana _geometric_ representation is owing to the fact that it leads to an intrinsic geometric picture of the state in terms of \(N\) points on the unit sphere. In fact, the spinors \(|\epsilon_{l}\rangle\), \(l=1,\,2,\ldots,\,N\) of (2) correspond geometrically to \(N\) points on the unit sphere \(S^{2}\), with the pair of angles \((\alpha_{l},\beta_{l})\) determining the orientation of each point on the sphere.
The pure symmetric \(N\)-qubit states characterized by _two_ distinct qubits are given by [11; 12; 13],
\[|D_{N-k,k}\rangle={\cal N}\,\sum_{P}\,\hat{P}\,\{|\underbrace{\epsilon_{1}, \epsilon_{1},\ldots,\epsilon_{1}}_{N-k};\ \underbrace{\epsilon_{2},\epsilon_{2},\ldots,\epsilon_{2}}_{k}\rangle\}. \tag{3}\]
Here, one of the spinors say \(|\epsilon_{1}\rangle\) occurs \(N-k\) times whereas the other spinor \(|\epsilon_{2}\rangle\) occurs \(k\) times in each term of the symmetrized combination. Under identical local unitary transformations, the pure symmetric \(N\)-qubit states with two distinct spinors can be brought to the canonical form [13],
\[|D_{N-k,k}\rangle \equiv \sum_{r=0}^{k}\,\beta_{r}^{(k)}\,\left|\frac{N}{2},\frac{N}{2}- r\right\rangle,\ \ \ \ k=1,\,2,\,3,\ldots\left[\frac{N}{2}\right] \tag{4}\] \[\beta_{r}^{(k)} = {\cal N}\,\sqrt{\frac{N!(N-r)!}{r!}}\,\frac{a^{k-r}\,b^{r}}{(N-k )!(k-r)!},\ \ \ \ 0\leq a<1,\ \ b=\sqrt{1-a^{2}}. \tag{5}\]
Notice that \(\left|\frac{N}{2},\frac{N}{2}-r\right\rangle\), \(r=0,\,1,\,2\ldots\), are the Dicke states, which are common eigenstates of collective angular momentum operators \(J^{2}\) and \(J_{z}\). They are basis states of the \(N+1\) dimensional symmetric subspace of collective angular momentum space of \(N\) qubits. The states \(|D_{N-k,k}\rangle\) (see (4), (5)) are characterized by only one real parameter '\(a\)' and thus form one parameter family of states \(\{{\cal D}_{N-k,k}\}\)[13; 14]. When \(a=0\), the states \(|D_{N-k,k}\rangle\) reduce to the Dicke states \(|N/2,\,N/2-k\rangle\)[13; 14] in which \(|\epsilon_{1}\rangle=|0\rangle\) and \(|\epsilon_{2}\rangle=|1\rangle\) (see (3)). When \(a\longrightarrow 1\), \(|D_{N-k,k}\rangle\) becomes a separable state consisting of only one spinor \(|\epsilon_{1}\rangle\) or \(|\epsilon_{2}\rangle\).
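A minimal numerical sketch (not taken from the original references) of the symmetrized construction (3): with \(|\epsilon_{1}\rangle=|0\rangle\) and \(|\epsilon_{2}\rangle=|1\rangle\) it must reproduce the Dicke state \(\left|\frac{N}{2},\frac{N}{2}-k\right\rangle\), i.e. the equal superposition of all computational basis states containing exactly \(k\) ones.

```python
# Build |D_{N-k,k}> by explicit symmetrization, cf. (3), and compare with the Dicke state.
import numpy as np
from itertools import permutations

def symmetric_state(spinors):
    N = len(spinors)
    psi = np.zeros(2**N, dtype=complex)
    for perm in permutations(range(N)):          # sum over all N! orderings of the spinors
        term = np.array([1.0 + 0j])
        for i in perm:
            term = np.kron(term, spinors[i])
        psi += term
    return psi / np.linalg.norm(psi)             # overall normalization factor

N, k = 4, 1
ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
psi = symmetric_state([ket0]*(N - k) + [ket1]*k)

# Dicke state |N/2, N/2-k>: equal superposition of basis states with exactly k ones.
dicke = np.array([1.0 if bin(n).count('1') == k else 0.0 for n in range(2**N)])
dicke /= np.linalg.norm(dicke)
assert np.allclose(psi, dicke)
```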
It is important to notice that in the family \(\{{\cal D}_{N-k,k}\}\), different values of \(k\), (\(k=1,\,2,\,3,\ldots\left[\frac{N}{2}\right]\)), correspond to different SLOCC inequivalent classes [13]. That is, a state \(|D_{N-k,k}\rangle\) cannot be converted into \(|D_{N-k^{\prime},k^{\prime}}\rangle\), \(k\neq k^{\prime}\) through any choice of local unitary (identical) transformations. In fact, different values of \(k\) lead to different _degeneracy configurations_[13] of the two spinors \(|\epsilon_{1}\rangle\), \(|\epsilon_{2}\rangle\) in the state \(|D_{N-k,k}\rangle\). When \(k=1\), one gets the W-class of states \(\{{\cal D}_{N-1,1}\}\), where one of the spinors, say \(|\epsilon_{2}\rangle\), occurs only once in each term of the symmetrized combination (see (3)) and the other spinor \(|\epsilon_{1}\rangle\) occurs \(N-1\) times. The N-qubit W-state
\[|W_{N}\rangle=\frac{1}{\sqrt{N}}\left[|000\ldots 1\rangle+|000\ldots 10 \rangle+\cdots+|100\ldots 00\rangle\right]\equiv\left|\frac{N}{2},\frac{N}{2}-1\right\rangle\]
belongs to the family \(\{{\cal D}_{N-1,1}\}\) and hence the name _W-class of states_. The Dicke state
\[\left|\frac{N}{2},\frac{N}{2}-2\right\rangle=\sqrt{\frac{2}{N(N-1)}}\left[|000\ldots 011\rangle+|000\ldots 0110\rangle+\cdots+|110\ldots 00\rangle\right]\]
is a typical state of the family \(\{{\cal D}_{N-2,2}\}\). In all, there are \(\left[\frac{N}{2}\right]\) SLOCC inequivalent families in the set of all pure permutation symmetric \(N\)-qubit states with two distinct spinors [15].
### Two-qubit reduced density matrices of the states \(|D_{N-k,\,k}\rangle\)
The two-qubit marginal \(\rho^{(k)}\) corresponding to any random pair of qubits in the pure symmetric \(N\)-qubit state \(|D_{N-k,\,k}\rangle\in\{{\cal D}_{N-k,k}\}\) is obtained by tracing over the remaining \(N-2\) qubits in it. In Ref. [16], it has been shown, using the algebra of addition of angular momenta, \(j_{1}=1\) (corresponding to two-qubit marginal) and \(j_{2}=(N-2)/2\) (corresponding to the remaining \(N-2\) qubits), that the two-qubit reduced density matrix \(\rho^{(k)}\) has the form
\[\rho^{(k)}=\left(\begin{array}{cccc}A^{(k)}&B^{(k)}&B^{(k)}&C^{(k)}\\ B^{(k)}&D^{(k)}&D^{(k)}&E^{(k)}\\ B^{(k)}&D^{(k)}&D^{(k)}&E^{(k)}\\ C^{(k)}&E^{(k)}&E^{(k)}&F^{(k)}\end{array}\right). \tag{6}\]
The elements \(A^{(k)}\), \(B^{(k)}\), \(C^{(k)}\), \(D^{(k)}\), \(E^{(k)}\) and \(F^{(k)}\) are real and are explicitly given by [16]
\[A^{(k)}=\sum_{r=0}^{k}\,\left(\beta_{r}^{(k)}\right)^{2}\left(c_{1}^{(r)}\right)^{2},\,\,\,B^{(k)}=\frac{1}{\sqrt{2}}\sum_{r=0}^{k-1}\,\beta_{r}^{(k)}\,\beta_{r+1}^{(k)}\,c_{1}^{(r)}c_{0}^{(r+1)}\]
\[C^{(k)}=\sum_{r=0}^{k-2}\,\,\beta_{r}^{(k)}\beta_{r+2}^{(k)}\,\,c_{1}^{(r)}c_{ -1}^{(r+2)},\,\,\,D^{(k)}=\frac{1}{2}\sum_{r=1}^{k}\,\left(\beta_{r}^{(k)} \right)^{2}\left(c_{0}^{(r)}\right)^{2} \tag{7}\]
\[E^{(k)}=\frac{1}{\sqrt{2}}\sum_{r=0}^{k-1}\,\beta_{r}^{(k)}\beta_{r+1}^{(k)}\, c_{0}^{(r)}c_{-1}^{(r+1)},\,\,\,\,\,\,\,\,F^{(k)}=\sum_{r=0}^{k}\,\left(\beta_{r}^{(k)} \right)^{2}\left(c_{-1}^{(r)}\right)^{2}.\]
where, \(\beta_{r}^{(k)}\) are given as functions of the parameter '\(a\)' in (5) and
\[c_{1}^{(r)}=\sqrt{\frac{(N-r)(N-r-1)}{N(N-1)}},\,\,\,\,\,c_{-1}^{(r )}=\sqrt{\frac{r\,(r-1)}{N(N-1)}},\] \[c_{0}^{(r)}=\sqrt{\frac{2r\,(N-r)}{N(N-1)}} \tag{8}\]
are the Clebsch-Gordan coefficients \(c_{m_{2}}^{(r)}\,\,=\,\,C\left(\frac{N}{2}-1,\,1,\,\frac{N}{2};m-m_{2},\,m_ {2},m\right)\), \(m\,\,=\,\,\frac{N}{2}-r\), \(m_{2}=1,\,0,\,-1\)[17]. In particular, for W-class of states i.e., when \(k=1\), we have
\[\rho^{(1)}={\rm Tr}_{N-2}\left(|D_{N-1,\,1}\rangle\langle D_{N-1, \,1}|\right)\] \[=\left(\left(\beta_{0}^{(1)}\right)^{2}+\left(\beta_{1}^{(1)}\,c_ {1}^{(1)}\right)^{2}\right)|1,\,1\rangle\langle 1,\,1|\] \[\quad+\left(\beta_{1}^{(1)}\,c_{0}^{(1)}\right)^{2}|1,\,0\rangle \langle 1,\,0|+\beta_{0}^{(1)}\beta_{1}^{(1)}\,c_{0}^{(1)}|1,\,1\rangle\langle 1,\,0|\] \[\quad+\beta_{0}^{(1)}\beta_{1}^{(1)}\,c_{0}^{(1)}|1,\,0\rangle \langle 1,\,1| \tag{9}\]
Here (see (5)) we have \(\beta_{0}^{(1)}={\cal N}N\,a\), \(\beta_{1}^{(1)}={\cal N}\,\sqrt{N(1-a^{2})}\) with \({\cal N}=\frac{1}{\sqrt{N^{2}\,a^{2}+N(1-a^{2})}}\) and the associated non-zero Clebsch-Gordan coefficients (see (8)) are given by
\[c_{1}^{(1)}=\sqrt{\frac{N-2}{N}},\quad c_{0}^{(1)}=\sqrt{\frac{2}{N}}. \tag{10}\]
In the standard two-qubit basis \(\{|0_{A},0_{B}\rangle,|0_{A},1_{B}\rangle,|1_{A},0_{B}\rangle,|1_{A},1_{B}\rangle\}\), the two-qubit density matrix \(\rho^{(1)}\) drawn from the states \(|D_{N-1,1}\rangle\) takes the form
\[\rho^{(1)}=\left(\begin{array}{cccc}A^{(1)}&B^{(1)}&B^{(1)}&0\\ B^{(1)}&D^{(1)}&D^{(1)}&0\\ B^{(1)}&D^{(1)}&D^{(1)}&0\\ 0&0&0&0\end{array}\right) \tag{11}\]
where
\[A^{(1)} = \frac{N^{2}a^{2}+(N-2)(1-a^{2})}{N^{2}\,a^{2}+N(1-a^{2})},\ \ B^{(1)}=\frac{a\sqrt{1-a^{2}}}{1+a^{2}(N-1)},\] \[D^{(1)} = \frac{1-a^{2}}{N^{2}\,a^{2}+N(1-a^{2})}. \tag{12}\]
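As a quick numerical check of (11)-(12) (not part of the derivation above; the values \(N=6\), \(a=0.4\) and the helper name are ours), the following Python sketch builds \(|D_{N-1,1}\rangle\) from its two-term Dicke expansion, traces out \(N-2\) qubits, and compares the result with the closed-form entries.

```python
import numpy as np

def w_class_two_qubit_marginal(N, a):
    """Two-qubit marginal of |D_{N-1,1}> obtained by tracing out N-2 qubits of the pure state."""
    norm = 1.0 / np.sqrt(N ** 2 * a ** 2 + N * (1 - a ** 2))
    beta0, beta1 = norm * N * a, norm * np.sqrt(N * (1 - a ** 2))
    psi = np.zeros(2 ** N)
    psi[0] = beta0                                   # Dicke state |N/2, N/2>
    for q in range(N):                               # Dicke state |N/2, N/2 - 1>
        psi[1 << q] = beta1 / np.sqrt(N)
    M = psi.reshape(4, -1)                           # split: first two qubits | remaining N-2 qubits
    return M @ M.T                                   # partial trace of the pure-state projector

N, a = 6, 0.4
A = (N**2 * a**2 + (N - 2) * (1 - a**2)) / (N**2 * a**2 + N * (1 - a**2))
B = a * np.sqrt(1 - a**2) / (1 + a**2 * (N - 1))
D = (1 - a**2) / (N**2 * a**2 + N * (1 - a**2))
rho_closed = np.array([[A, B, B, 0], [B, D, D, 0], [B, D, D, 0], [0, 0, 0, 0]])
print(np.allclose(w_class_two_qubit_marginal(N, a), rho_closed))   # True
```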
In a similar manner, the two-qubit subsystems of pure symmetric \(N\)-qubit states \(|D_{N-k,k}\rangle\) belonging to each SLOCC inequivalent family \(\{{\cal D}_{N-k,\,k}\}\), \(k=2,\,3,\ldots,\left[\frac{N}{2}\right]\) can be obtained as a function of \(N\) and '\(a\)' using Eqs. (6), (7), (8). As is shown in Refs. [8; 9], the real representative \(\Lambda^{(k)}\) of the two-qubit subsystem \(\rho^{(k)}\) and its Lorentz canonical form \(\widetilde{\Lambda}^{(k)}\) are essential in obtaining the geometric representation of the states \(|D_{N-k,k}\rangle\), for all \(k\). We thus proceed to obtain \(\Lambda^{(k)}\) and its Lorentz canonical form \(\widetilde{\Lambda}^{(k)}\) in the following.
## III The real representation of \(\rho^{(k)}\) and its Lorentz canonical forms
The real representative \(\Lambda^{(k)}\) of the two-qubit state \(\rho^{(k)}\) is a \(4\times 4\) real matrix with its elements given by
\[\Lambda^{(k)}_{\mu\,\nu}={\rm Tr}\,\left[\rho^{(k)}\left(\sigma_{\mu}\otimes \sigma_{\nu}\right)\right] \tag{13}\]
That is, \(\Lambda^{(k)}_{\mu\,\nu}\), \(\mu,\nu=0,\,1,\,2,\,3\) are the coefficients of expansion of \(\rho^{(k)}\), expanded in the Hilbert-Schmidt basis \(\{\sigma_{\mu}\otimes\sigma_{\nu}\}\):
\[\rho^{(k)}=\frac{1}{4}\,\sum_{\mu,\,\nu=0}^{3}\,\Lambda^{(k)}_{\mu\,\nu}\, (\sigma_{\mu}\otimes\sigma_{\nu})\,, \tag{14}\]
Here, \(\sigma_{i}\), \(i=1\), \(2\), \(3\) are the Pauli spin matrices and \(\sigma_{0}\) is the \(2\times 2\) identity matrix;
\[\sigma_{0}=\left(\begin{array}{cc}1&0\\ 0&1\end{array}\right),\ \ \sigma_{1}=\left(\begin{array}{cc}0&1\\ 1&0\end{array}\right),\ \ \sigma_{2}=\left(\begin{array}{cc}0&-i\\ i&0\end{array}\right),\ \ \sigma_{3}=\left(\begin{array}{cc}1&0\\ 0&-1\end{array}\right). \tag{15}\]
It can be readily seen that (see (13), (14)) the real \(4\times 4\) matrix \(\Lambda^{(k)}\) has the form
\[\Lambda^{(k)}=\left(\begin{array}{cccc}1&r_{1}&r_{2}&r_{3}\\ s_{1}&t_{11}&t_{12}&t_{13}\\ s_{2}&t_{21}&t_{22}&t_{23}\\ s_{3}&t_{31}&t_{32}&t_{33}\end{array}\right), \tag{16}\]
where \({\bf r}=(r_{1},\,r_{2},\,r_{3})^{T}\), \({\bf s}=(s_{1},\,s_{2},\,s_{3})^{T}\) are Bloch vectors of the individual qubits and \(T=(t_{ij})\) is the correlation matrix;
\[r_{i}=\Lambda^{(k)}_{i\,0}={\rm Tr}\,\left[\rho^{(k)}\left(\sigma _{i}\otimes\sigma_{0}\right)\right] \tag{17}\] \[s_{j}=\Lambda^{(k)}_{0\,j}={\rm Tr}\,\left[\rho^{(k)}\left( \sigma_{0}\otimes\sigma_{j}\right)\right]\] (18) \[t_{ij}=\Lambda^{(k)}_{i\,j}={\rm Tr}\,\left[\rho^{(k)}\left( \sigma_{i}\otimes\sigma_{j}\right)\right],\ \ \ \ i,\,j=1,\,2,\,3. \tag{19}\]
For a symmetric two-qubit density matrix, the Bloch vectors \({\bf r}\) and \({\bf s}\) are identical and hence \(r_{i}=s_{i}\), \(i=1,\,2,\,3\). From the structure of \(\rho^{(k)}\) in (6) and using (17), (18), (19), we obtain the general form of the real matrix \(\Lambda^{(k)}\) as
\[\Lambda^{(k)}=\left(\begin{array}{cccc}1&\frac{2(B^{(k)}+E^{(k)})}{A^{(k)}+2 D^{(k)}+F^{(k)}}&0&\frac{A^{(k)}-F^{(k)}}{A^{(k)}+2D^{(k)}+F^{(k)}}\\ \frac{2(B^{(k)}+E^{(k)})}{A^{(k)}+2D^{(k)}+F^{(k)}}&\frac{2(C^{(k)}+D^{(k)})}{A ^{(k)}+2D^{(k)}+F^{(k)}}&0&\frac{2(B^{(k)}-E^{(k)})}{A^{(k)}+2D^{(k)}+F^{(k)}}\\ 0&0&\frac{2(D^{(k)}-C^{(k)})}{A^{(k)}+2D^{(k)}+F^{(k)}}&0\\ \frac{A^{(k)}-F^{(k)}}{A^{(k)}+2D^{(k)}+F^{(k)}}&\frac{2(B^{(k)}-E^{(k)})}{A^{ (k)}+2D^{(k)}+F^{(k)}}&0&1-\frac{4D^{(k)}}{A^{(k)}+2D^{(k)}+F^{(k)}}\end{array} \right). \tag{20}\]
The elements of \(\Lambda^{(k)}\), for different \(k\), can be evaluated using (7) and (8).
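For completeness, a minimal Python helper implementing (13) is sketched below (the function name and the Bell-state sanity check are ours); it returns the \(4\times 4\) real representative of any two-qubit density matrix.

```python
import numpy as np

sigma = [np.eye(2), np.array([[0, 1], [1, 0]]),
         np.array([[0, -1j], [1j, 0]]), np.array([[1, 0], [0, -1]])]

def real_representative(rho):
    """Lambda_{mu,nu} = Tr[rho (sigma_mu (x) sigma_nu)], Eq. (13)."""
    return np.array([[np.real(np.trace(rho @ np.kron(sigma[m], sigma[n])))
                      for n in range(4)] for m in range(4)])

# sanity check: the Bell state (|00> + |11>)/sqrt(2) gives Lambda = diag(1, 1, -1, 1)
phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
print(np.round(real_representative(np.outer(phi, phi)), 10))
```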
### Lorentz canonical forms of \(\Lambda^{(k)}\)
Under SLOCC transformation, the two-qubit density matrix \(\rho^{(k)}\) transforms to \(\widetilde{\rho}^{(k)}\) as
\[\rho^{(k)}\longrightarrow\widetilde{\rho}^{(k)}=\frac{\left(A\otimes B \right)\rho^{(k)}\left(A^{\dagger}\otimes B^{\dagger}\right)}{\text{Tr}\left[ \rho^{(k)}\left(A^{\dagger}\,A\otimes B^{\dagger}\,B\right)\right]}. \tag{21}\]
Here, \(A,B\in\text{SL}(2,\text{C})\) denote \(2\times 2\) complex matrices with unit determinant. A suitable choice of \(A\) and \(B\) takes the two-qubit density matrix \(\rho^{(k)}\) to its canonical form \(\widetilde{\rho}^{(k)}\).
The transformation of \(\rho^{(k)}\) in (21) leads to the transformation [8; 9]
\[\Lambda^{(k)}\longrightarrow\widetilde{\Lambda}^{(k)}=\frac{L_{A}\,\Lambda^{ (k)}\,L_{B}^{T}}{\left(L_{A}\,\Lambda^{(k)}\,L_{B}^{T}\right)_{00}}. \tag{22}\]
of its real representative \(\Lambda^{(k)}\). In (22), \(L_{A},\,L_{B}\in SO(3,1)\) are \(4\times 4\) proper orthochronous Lorentz transformation matrices [18] corresponding respectively to \(A\), \(B\in SL(2,C)\) and the superscript '\(T\)' denotes transpose operation. The Lorentz canonical form \(\widetilde{\Lambda}^{(k)}\) of \(\Lambda^{(k)}\) and thereby the SLOCC canonical form of the two-qubit density matrix \(\rho^{(k)}\) (see (21)) can be obtained by constructing the \(4\times 4\) real symmetric matrix \(\Omega^{(k)}=\Lambda^{(k)}\,G\,\left(\Lambda^{(k)}\right)^{T}\), where \(G=\text{diag}\left(1,-1,-1,-1\right)\) denotes the Lorentz metric. Using the defining property [18]\(L^{T}\,G\,L=G\) of Lorentz transformation \(L\), it can be seen that \(\Omega^{(k)}\) undergoes a _Lorentz congruent transformation_ under SLOCC (up to an overall factor) [8] as
\[\Omega^{(k)}\rightarrow\widetilde{\Omega}^{(k)}_{A} =\widetilde{\Lambda}^{(k)}\,G\,\left(\widetilde{\Lambda}^{(k)} \right)^{T}\] \[=L_{A}\,\Lambda^{(k)}\,L_{B}^{T}\,G\,L_{B}\,{\Lambda^{(k)}}^{T}L_ {A}^{T}\] \[=L_{A}\,\Omega^{(k)}\,L_{A}^{T}. \tag{23}\]
It has been shown in Ref. [8] that \(\widetilde{\Lambda}^{(k)}\) can either be a real \(4\times 4\) diagonal matrix or a non-diagonal matrix with only one off-diagonal element, depending on the eigenvalues, eigenvectors of \(G\,\Omega^{(k)}=G\left(\Lambda^{(k)}\,G\,\left(\Lambda^{(k)}\right)^{T}\right)\).
* The diagonal canonical form \(\widetilde{\Lambda}^{(k)}_{I_{c}}\) results when the eigenvector \(X_{0}\) associated with the highest eigenvalue \(\lambda_{0}\) of \(G\,\Omega^{(k)}\) obeys the Lorentz invariant condition \(X_{0}^{T}\,G\,X_{0}>0\). The diagonal canonical form \(\widetilde{\Lambda}^{(k)}_{I_{c}}\) is explicitly given by \[\Lambda^{(k)}\longrightarrow\widetilde{\Lambda}^{(k)}_{I_{c}} =\frac{L_{A_{1}}\,\Lambda^{(k)}\,L_{B_{1}}^{T}}{\left(L_{A_{1}}\, \Lambda^{(k)}\,L_{B_{1}}^{T}\right)_{00}}\] \[=\text{diag}\,\left(1,\,\sqrt{\frac{\lambda_{1}}{\lambda_{0}}}, \sqrt{\frac{\lambda_{2}}{\lambda_{0}}},\,\pm\,\sqrt{\frac{\lambda_{3}}{\lambda_ {0}}}\right),\] (24) where \(\lambda_{0}\geq\lambda_{1}\geq\lambda_{2}\geq\lambda_{3}>0\) are the _non-negative_ eigenvalues of \(G\,\Omega^{(k)}\). The Lorentz transformations \(L_{A_{1}},\,L_{B_{1}}\in SO(3,1)\) in (24) respectively correspond to \(SL(2,C)\) transformation matrices \(A_{1},\,B_{1}\) which take the
two-qubit density matrix \(\rho^{(k)}\) to its SLOCC canonical form \(\widetilde{\rho}^{(k)}_{I_{c}}\) through the transformation (21). The diagonal form of \(\widetilde{\Lambda}^{(k)}_{I_{c}}\) readily leads, on using (14), to the Bell-diagonal form \[\widetilde{\rho}^{(k)}_{I_{c}}=\frac{1}{4}\,\left(\sigma_{0}\otimes\sigma_{0}+\sum_{i=1,2}\,\sqrt{\frac{\lambda_{i}}{\lambda_{0}}}\,\left(\sigma_{i}\otimes\sigma_{i}\right)\pm\sqrt{\frac{\lambda_{3}}{\lambda_{0}}}\,\left(\sigma_{3}\otimes\sigma_{3}\right)\right)\] (25) as the canonical form of the two-qubit state \(\rho^{(k)}\).
* The Lorentz canonical form of \(\Lambda^{(k)}\) turns out to be a non-diagonal matrix (with only one non-diagonal element) given by \[\Lambda^{(k)}\longrightarrow\widetilde{\Lambda}^{(k)}_{II_{c}}=\frac{L_{A_{2}}\,\Lambda^{(k)}\,L_{B_{2}}^{T}}{\left(L_{A_{2}}\,\Lambda^{(k)}\,L_{B_{2}}^{T}\right)_{00}}=\left(\begin{array}{cccc}1&0&0&0\\ 0&a_{1}&0&0\\ 0&0&-a_{1}&0\\ 1-a_{0}&0&0&a_{0}\end{array}\right)\] (26) when the non-negative eigenvalues of \(G\Omega^{(k)}\) are doubly degenerate with \(\lambda_{0}\geq\lambda_{1}\) and the eigenvector \(X_{0}\) belonging to the highest eigenvalue \(\lambda_{0}\) satisfies the Lorentz invariant condition \(X_{0}^{T}\,G\,X_{0}=0\). In Ref. [8], it has been shown that when the maximum amongst the doubly degenerate eigenvalues of \(G\Omega^{(k)}\) possesses an eigenvector \(X_{0}\) satisfying the condition \(X_{0}^{T}\,G\,X_{0}=0\), the real symmetric matrix \(\Omega^{(k)}=\Lambda^{(k)}G\left(\Lambda^{(k)}\right)^{T}\) attains the non-diagonal Lorentz canonical form given by \[\Omega^{(k)}_{II_{c}}=\widetilde{\Lambda}^{(k)}_{II_{c}}\,G\,\left(\widetilde{\Lambda}^{(k)}_{II_{c}}\right)^{T}=L_{A_{2}}\,\Omega^{(k)}\,L_{A_{2}}^{T}\] (27) \[=\left(\begin{array}{cccc}\phi_{0}&0&0&\phi_{0}-\lambda_{0}\\ 0&-\lambda_{1}&0&0\\ 0&0&-\lambda_{1}&0\\ \phi_{0}-\lambda_{0}&0&0&\phi_{0}-2\lambda_{0}\end{array}\right).\] The parameters \(a_{0}\), \(a_{1}\) in (26) are related to the eigenvalues \(\lambda_{0}\), \(\lambda_{1}\) of \(G\Omega^{(k)}\) and the \(00^{\rm th}\) element of \(\widetilde{\Omega}^{(k)}_{II_{c}}\) (see (27)). It can be seen that [8] \[a_{0}=\frac{\lambda_{0}}{\phi_{0}},\ \ a_{1}=\sqrt{\frac{\lambda_{1}}{\phi_{0}}},\ \ {\rm where}\ \ \phi_{0}=\left(\Omega^{(k)}_{II_{c}}\right)_{00}=\left[\left(L_{A_{2}}\,\Lambda^{(k)}\,L_{B_{2}}^{T}\right)_{00}\right]^{2}.\] (28) The Lorentz matrices \(L_{A_{2}}\), \(L_{B_{2}}\in SO(3,1)\) correspond to the SL(2,C) transformations \(A_{2}\), \(B_{2}\) that transform \(\rho^{(k)}\) to its SLOCC canonical form \(\rho^{(k)}_{II_{c}}\) (see (21)). The non-diagonal canonical form \(\widetilde{\Lambda}^{(k)}_{II_{c}}\) leads to the SLOCC canonical form \(\widetilde{\rho}^{(k)}_{II_{c}}\) of the two-qubit density matrix \(\rho^{(k)}\), on using (14); \[\widetilde{\rho}^{(k)}_{II_{c}}=\frac{1}{2}\left(\begin{array}{cccc}1&0&0&a_{1}\\ 0&1-a_{0}&0&0\\ 0&0&0&0\\ a_{1}&0&0&a_{0}\end{array}\right);\ \ \ 0\leq a_{1}^{2}\leq a_{0}\leq 1.\] (29)
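The two cases above can be distinguished numerically from the spectrum of \(G\,\Omega^{(k)}\). The sketch below is our own helper (not taken from Refs. [8; 9]); degenerate or defective spectra, which occur precisely in the second case, may need a more careful tolerance than the simple one used here.

```python
import numpy as np

G = np.diag([1.0, -1.0, -1.0, -1.0])   # Lorentz metric

def lorentz_canonical_type(Lam, tol=1e-8):
    """Classify the Lorentz canonical form of a real representative Lambda from G Omega,
    Omega = Lambda G Lambda^T: diagonal form (Eq. (24)) if X0^T G X0 > 0 for the eigenvector
    X0 of the largest eigenvalue, non-diagonal form (Eq. (26)) if X0^T G X0 = 0."""
    Omega = Lam @ G @ Lam.T
    w, V = np.linalg.eig(G @ Omega)
    order = np.argsort(w.real)[::-1]
    X0 = np.real(V[:, order[0]])
    invariant = X0 @ G @ X0
    label = "diagonal, Eq. (24)" if invariant > tol else "non-diagonal, Eq. (26)"
    return label, np.sort(w.real)[::-1]

# simple check: a diagonal Lambda is (unsurprisingly) of the diagonal type
print(lorentz_canonical_type(np.diag([1.0, 0.5, 0.4, 0.3]))[0])
```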
### Lorentz canonical form of \(\Lambda^{(1)}\) corresponding to the W-class of states \(\{\mathcal{D}_{N-1,1}\}\)
Using the explicit structure of the two-qubit state \(\rho^{(1)}\) given in (11), (12), its real representative \(\Lambda^{(1)}\) is obtained as (see (13))
\[\Lambda^{(1)}=\left(\begin{array}{cccc}1&\frac{2a\sqrt{1-a^{2}}}{1+a^{2}(N-1)}&0&1+\frac{2a^{2}}{1+a^{2}(N-1)}-\frac{2}{N}\\ \frac{2a\sqrt{1-a^{2}}}{1+a^{2}(N-1)}&\frac{2(1-a^{2})}{N(1+a^{2}(N-1))}&0&\frac{2a\sqrt{1-a^{2}}}{1+a^{2}(N-1)}\\ 0&0&\frac{2(1-a^{2})}{N(1+a^{2}(N-1))}&0\\ 1+\frac{2a^{2}}{1+a^{2}(N-1)}-\frac{2}{N}&\frac{2a\sqrt{1-a^{2}}}{1+a^{2}(N-1)}&0&1+\frac{4a^{2}}{1+a^{2}(N-1)}-\frac{4}{N}\end{array}\right)=\left(\Lambda^{(1)}\right)^{T}. \tag{30}\]
We now construct the \(4\times 4\) symmetric matrix \(\Omega^{(1)}\) and obtain
\[\Omega^{(1)} =\Lambda^{(1)}\,G\,\left(\Lambda^{(1)}\right)^{T}=\Lambda^{(1)}\,G \,\Lambda^{(1)}\] \[=\chi\left(\begin{array}{cccc}N-1&0&0&N-2\\ 0&-1&0&0\\ 0&0&-1&0\\ N-2&0&0&N-3\end{array}\right),\quad\chi=\left[\frac{2(1-a^{2})}{N\,(1+a^{2}(N-1 ))}\right]^{2}. \tag{31}\]
The eigenvalues of the matrix \(G\,\Omega^{(1)},\;G=\mbox{diag}\,(1,\,-1,\,-1,\,-1)\) are readily seen to be four-fold degenerate and are given by
\[\lambda_{0}=\lambda_{1}=\lambda_{2}=\lambda_{3}=\chi=\left[\frac{2(1-a^{2})}{N \,(1+a^{2}(N-1))}\right]^{2}. \tag{32}\]
It can be seen that \(X_{0}=(1,\,0,\,0,\,-1)\) is an eigenvector of \(G\,\Omega^{(1)}\) belonging to the four-fold degenerate eigenvalue \(\lambda_{0}\) and obeys the Lorentz invariant condition \(X_{0}^{T}\,G\,X_{0}=0\). We notice here that \(\Omega^{(1)}\) is already in the canonical form (27). On comparing (31) with (27), we get
\[\phi_{0}=(\Omega^{(1)})_{00}=(N-1)\chi. \tag{33}\]
On substituting the parameters \(a_{0}\), \(a_{1}\) (see (28), (32), (33)) in (26), we arrive at the Lorentz canonical form of the real matrix \(\Lambda^{(1)}\) as
\[\widetilde{\Lambda}^{(1)}=\left(\begin{array}{cccc}1&0&0&0\\ 0&\frac{1}{\sqrt{N-1}}&0&0\\ 0&0&-\frac{1}{\sqrt{N-1}}&0\\ \frac{N-2}{N-1}&0&0&\frac{1}{N-1}\end{array}\right). \tag{34}\]
It can be readily seen that \(\widetilde{\Lambda}^{(1)}\), the Lorentz canonical form corresponding to the W-class of states, is independent of the parameter '\(a\)'.
### Lorentz canonical form of \(\Lambda^{(k)}\), \(k=2,\,3,\ldots,\left[\frac{N}{2}\right]\)
Here, we evaluate the real representative \(\Lambda^{(k)}\) of \(\rho^{(k)}\) for different values of \(k\) (\(k=2,\,3,\ldots,\left[\frac{N}{2}\right]\)) making use of Eqs. (7), (8),(20). We then construct the real symmetric matrix \(\Omega^{(k)}=\Lambda^{(k)}\,G\left(\Lambda^{(k)}\right)^{T}\) for \(k=2,\,3,\ldots,\left[\frac{N}{2}\right]\) and observe that \(G\Omega^{(k)}=G\Lambda^{(k)}\,G\left(\Lambda^{(k)}\right)^{T}\) has _non-degenerate eigenvalues_\(\lambda_{0}\neq\lambda_{1}\neq\lambda_{2}\neq\lambda_{3}\) when \(k=2,3,\,\ldots,\left[\frac{N}{2}\right]\) and the highest eigenvalue \(\lambda_{0}\) possesses an eigenvector \(X_{0}\) satisfying the relation \(X_{0}^{T}\,G\,X_{0}>0\). The Lorentz canonical form \(\widetilde{\Lambda}^{(k)}\), \(k=2,\,3,\ldots,\left[\frac{N}{2}\right]\), is thus given by the diagonal matrix (see (24)).
\[\widetilde{\Lambda}^{(k)}=\mbox{diag}\,\left(1,\,\sqrt{\lambda_{1}/\lambda_{0} },\,\sqrt{\lambda_{2}/\lambda_{0}},\,\pm\sqrt{\lambda_{3}/\lambda_{0}}\right).\]
The eigenvalues \(\lambda_{\mu}\), \(\mu=0,\,1,\,2,\,3\), of \(G\Omega^{(k)}\) depend on the parameters '\(a\)', \(k\) and \(N\) characterizing the state \(|D_{N-k,\,k}\rangle\) when \(k\) takes any integral value from \(2\) up to \(\left[\frac{N}{2}\right]\). Hence the canonical form \(\widetilde{\Lambda}^{(k)}\), \(k=2,3,\,\ldots,\left[\frac{N}{2}\right]\), is different for different states \(|D_{N-k,\,k}\rangle\), unlike in the case of \(\widetilde{\Lambda}^{(1)}\) (see (34)), the canonical form of the W-class of states, which depends only on the number of qubits \(N\).
## IV Geometric representation of the states \(|D_{N-k,k}\rangle\)
In this section, based on the two different canonical forms of \(\Lambda^{(k)}\) obtained in Sec. III, we find the nature of canonical steering ellipsoids associated with the pure symmetric multiqubit states \(|D_{N-k,k}\rangle\) belonging to SLOCC inequivalent families \(\{{\cal D}_{N-k,\,k}\}\). To begin with, we give a brief outline [8; 9] of obtaining the steering ellipsoids of a two-qubit density matrix \(\rho^{(k)}\) based on the form of its real representative \(\Lambda^{(k)}\).
In the two-qubit state \(\rho^{(k)}\), a local projection valued measurement (PVM) \(Q>0\), \(Q=\sum_{\mu=0}^{3}\,q_{\mu}\,\sigma_{\mu}\), \(q_{0}=1\), \(\sum_{i=1}^{3}\,q_{i}^{2}=1\), on Bob's qubit leads to a collapsed state of Alice's qubit characterized by its Bloch vector \({\bf p}_{A}=(p_{1},\,p_{2},\,p_{3})^{T}\) through the transformation [8]
\[\left(1,p_{1},\,p_{2},\,p_{3}\right)^{T}=\Lambda^{(k)}\,\left(1,q_{1},\,q_{2}, \,q_{3}\right)^{T},\,\,\,\,q_{1}^{2}+q_{2}^{2}+q_{3}^{2}=1. \tag{35}\]
Notice that the vector \({\bf q}_{B}=\left(q_{1},\,q_{2},\,q_{3}\right)^{T}\), \(q_{1}^{2}+q_{2}^{2}+q_{3}^{2}=1\) represents the entire Bloch sphere and the steered Bloch vectors \({\bf p}_{A}\) of Alice's qubit constitute an ellipsoidal surface \({\cal E}_{A|\,B}\) enclosed within the Bloch sphere. When Bob employs convex combinations of PVMs i.e., positive operator valued measures (POVMs), to steer Alice's qubit, he can access the points inside the steering ellipsoid. Similar will be the case when Bob's qubit is steered by Alice through local operations on her qubit.
For the Lorentz canonical form \(\widetilde{\Lambda}_{I_{c}}^{(k)}\) (see (24)) of the two-qubit state \(\widetilde{\rho}_{I_{c}}^{(k)}\), it follows from (35) that
\[p_{1}=\sqrt{\frac{\lambda_{1}}{\lambda_{0}}}\,q_{1},\,\,\,p_{2}=\sqrt{\frac{ \lambda_{2}}{\lambda_{0}}}\,q_{2},\,\,\,p_{3}=\pm\sqrt{\frac{\lambda_{3}}{ \lambda_{0}}}q_{3}, \tag{36}\]
are steered Bloch points \({\bf p}_{A}\) of Alice's qubit. They are seen to obey the equation
\[\frac{\lambda_{0}\,p_{1}^{2}}{\lambda_{1}}+\frac{\lambda_{0}\,p_{2}^{2}}{ \lambda_{2}}+\frac{\lambda_{0}\,p_{3}^{2}}{\lambda_{3}}=1 \tag{37}\]
of an ellipsoid with semiaxes \((\sqrt{\lambda_{1}/\lambda_{0}},\,\sqrt{\lambda_{2}/\lambda_{0}},\,\sqrt{ \lambda_{3}/\lambda_{0}})\) and center \((0,0,0)\) inside the Bloch sphere \(q_{1}^{2}+q_{2}^{2}+q_{3}^{2}=1\). We refer to this as the _canonical steering ellipsoid_ representing the set of all two-qubit density matrices which are on the SLOCC orbit of the state \(\widetilde{\rho}_{I_{c}}^{(k)}\) (see (21)).
For the second Lorentz canonical form \(\widetilde{\Lambda}_{II_{c}}\) (see (26)), we get the coordinates of steered Alice's Bloch vector \({\bf p}_{A}\), on using (35);
\[p_{1}=a_{1}q_{1},\,\,\,\,p_{2}=-a_{1}q_{2},\,\,\,p_{3}=\left(1-a_{0}\right)+a_ {0}q_{3},\,\,\,\,\,q_{1}^{2}+q_{2}^{2}+q_{3}^{2}=1 \tag{38}\]
and they satisfy the equation
\[\frac{p_{1}^{2}}{a_{1}^{2}}+\frac{p_{2}^{2}}{a_{1}^{2}}+\frac{\left(p_{3}- \left(1-a_{0}\right)\right)^{2}}{a_{0}^{2}}=1. \tag{39}\]
Eq. (39) represents the canonical steering spheroid (traced by Alice's Bloch vector \({\bf p}_{A}\)) inside the Bloch sphere, with its center at \((0,\,0,\,1-a_{0})\) and semiaxes of lengths \(a_{1}\), \(a_{1}\), \(a_{0}\), where \(a_{0}=\lambda_{0}/\phi_{0}\) and \(a_{1}=\sqrt{\lambda_{1}/\phi_{0}}\) as given in (28). In other words, a shifted spheroid inscribed within the Bloch sphere represents the two-qubit states that are SLOCC equivalent to \(\widetilde{\rho}_{II_{c}}^{(k)}\) (see (29)).
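The geometry described by (35)-(39) is easy to visualize numerically. The sketch below is illustrative only (the sampling, the function name, and the choice \(N=5\) are ours): it pushes randomly sampled Bloch-sphere directions \({\bf q}_{B}\) through the W-class canonical form \(\widetilde{\Lambda}^{(1)}\) of (34) and confirms that the steered points satisfy the spheroid equation (39).

```python
import numpy as np

def steered_points(Lam, n=2000, seed=0):
    """Map unit vectors q through Eq. (35); the 0-component (trivially 1 for the
    canonical forms) is divided out to return Alice's steered Bloch vectors p."""
    rng = np.random.default_rng(seed)
    q = rng.normal(size=(n, 3))
    q /= np.linalg.norm(q, axis=1, keepdims=True)
    out = (Lam @ np.hstack([np.ones((n, 1)), q]).T).T
    return out[:, 1:] / out[:, :1]

N = 5
a1, a0 = 1 / np.sqrt(N - 1), 1.0 / (N - 1)            # W-class values (cf. Eq. (34))
Lam_W = np.array([[1, 0, 0, 0],
                  [0, a1, 0, 0],
                  [0, 0, -a1, 0],
                  [1 - a0, 0, 0, a0]])                 # Eq. (34)
p = steered_points(Lam_W)
lhs = p[:, 0]**2 / a1**2 + p[:, 1]**2 / a1**2 + (p[:, 2] - (1 - a0))**2 / a0**2
print(np.allclose(lhs, 1.0))                           # all steered points lie on the spheroid (39)
```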
### Canonical steering ellipsoids of W-class of states
We have seen in Sec. III B that the Lorentz canonical form of \(\Lambda^{(1)}\), the real representative of the symmetric two-qubit state \(\rho^{(1)}\) drawn from the W-class of states \(|D_{N-1,1}\rangle\), has a _non-diagonal_ form (see (34)). On comparing (34) with the canonical form in (26), we get
\[a_{1}=\frac{1}{\sqrt{N-1}},\,\,\,a_{0}=\frac{1}{N-1}. \tag{40}\]
From (39) and the discussions prior to it, it can be readily seen that the quantum steering ellipsoid associated with \(\widetilde{\Lambda}^{(1)}\) in (34) is a spheroid centered at \((0,0,\frac{N-2}{N-1})\) inside the Bloch sphere, with fixed semiaxes lengths \((\frac{1}{\sqrt{N-1}},\,\frac{1}{\sqrt{N-1}},\,\frac{1}{N-1})\) (see Fig. 1). It is interesting to note that the Lorentz canonical form \(\widetilde{\Lambda}^{(1)}\) is not dependent on the state parameter '\(a\)', \(0\leq a<1\) and hence all states \(|D_{N-1,\,1}\rangle\) in the family \(\{{\cal D}_{N-1,\,1}\}\) are represented by a spheroid, all its parameters such as center, semiaxes, volume etc., dependent only on the number of qubits \(N\).
### Canonical steering ellipsoids of the states \(|D_{N-k,k}\rangle\), \(k=2,\,3,\ldots,\left[\frac{N}{2}\right]\)
As is seen in Sec. III C, the Lorentz canonical form of \(\Lambda^{(k)}\), \(k=2,\,3,\ldots,\left[\frac{N}{2}\right]\), the real representative of the two-qubit states \(\rho^{(k)}\) drawn from the pure symmetric \(N\)-qubit states \(|D_{N-k,k}\rangle\), has the diagonal form (see (24)). The values of \(\lambda_{0}\), \(\lambda_{1}\), \(\lambda_{2}\), \(\lambda_{3}\), the eigenvalues of the matrix \(G\,\Omega^{(k)}\), can be evaluated for each value of \(k\), \(k=2,\,3,\ldots,\left[\frac{N}{2}\right]\), for a chosen \(N\). From (37) and the discussions therein, it follows that the canonical steering ellipsoid of a state \(|D_{N-k,k}\rangle\), \(k=2,\,3,\ldots,\left[\frac{N}{2}\right]\), is an ellipsoid centered at the origin of the Bloch sphere with semiaxes of lengths \(\sqrt{\lambda_{1}/\lambda_{0}},\,\sqrt{\lambda_{2}/\lambda_{0}},\,\sqrt{\lambda_{3}/\lambda_{0}}\). The eigenvalues \(\lambda_{\mu},\,\mu=0,\,1,\,2,\,3\) of \(G\Omega^{(k)}\) depend on the parameter '\(a\)' also, unlike in the case of the W-class of states where they depend only on \(N\), the number of qubits. Thus each state \(|D_{N-k,k}\rangle\) belonging to the family \(\{\mathcal{D}_{N-k,k}\}\), \(k=2,\,3,\ldots,\left[\frac{N}{2}\right]\), is represented by an ellipsoid whose semiaxes depend on the values of \(k\), \(N\) and '\(a\)'. The canonical steering ellipsoids corresponding to the 10-qubit pure symmetric states \(|D_{10-k,k}\rangle\) with chosen values of \(k\) and '\(a\)' are shown in Fig. 2.
In particular, the canonical steering ellipsoids corresponding to Dicke states are _oblate spheroids_ centered at the _origin_ (see Fig.3).
## V Volume monogamy relations for pure symmetric multiqubit states \(|D_{N-k,k}\rangle\)
Monogamy relations restrict shareability of quantum correlations in a multipartite state. They find potential applications in ensuring security in quantum key distribution [19; 20]. Milne _et al._[2; 3] introduced a geometrically intuitive monogamy relation for the volumes of the steering ellipsoids representing the two-qubit subsystems of multiqubit pure states, which is stronger than the well-known Coffman-Kundu-Wootters monogamy relation [21]. In this section we explore how the volume monogamy relation [2] imposes limits on the volumes of the quantum steering ellipsoids representing the two-qubit subsystems \(\rho^{(k)}=\text{Tr}_{N-2}\,[|D_{N-k,k}\rangle\langle D_{N-k,k}|]\) of pure symmetric multiqubit states \(|D_{N-k,k}\rangle\).

Figure 2: (Colour online) Steering ellipsoids centered at the origin of the Bloch sphere representing the Lorentz canonical form of pure symmetric 10-qubit states \(|D_{10-k,k}\rangle\) for \(k=1\) to \(k=5\). The lengths of the semi-axes of the ellipsoids for the 10-qubit states chosen here are (i) (0.91, 0.71, 0.62) (ii) (0.83, 0.59, 0.41) (iii) (0.745, 0.533, 0.279) (iv) (0.656, 0.53, 0.185)
For the two-qubit state \(\rho_{AB}(=\rho^{(k)})\) (see (14)), we denote by \(\mathcal{E}_{A|B}\), the quantum steering ellipsoid containing all steered Bloch vectors of Alice when Bob carries out local operations on his qubit. The volume of \(\mathcal{E}_{A|B}\) is given by [1]
\[V_{A|B}=\left(\frac{4\pi}{3}\right)\,\frac{|\det\Lambda|}{(1-r^{2})^{2}}, \tag{41}\]
where \(r^{2}=\mathbf{r}\cdot\mathbf{r}=r_{1}^{2}+r_{2}^{2}+r_{3}^{2}\) (see (17)). As the steering ellipsoid is constrained to lie within the Bloch sphere, i.e., \(V_{A|B}\leq V_{\text{unit}}=(4\pi/3)\), one can choose to work with the _normalized volumes_\(v_{A|B}=\frac{V_{A|B}}{4\pi/3}\), the ratio of the volume of the steering ellipsoid to the volume of a unit sphere [3].
The volume monogamy relation satisfied by a _pure_ three-qubit state shared by Alice, Bob and Charlie is given by [1; 2; 3]
\[\sqrt{V_{A|B}}+\sqrt{V_{C|B}}\leq\sqrt{\frac{4\pi}{3}}. \tag{42}\]
where \(V_{A|B},\ V_{C|B}\) are respectively the volumes of the ellipsoids corresponding to the steered states of Alice and Charlie when Bob performs all possible local measurements on his qubit. The _normalized_ form of the volume monogamy relation (42) turns out to be
\[\sqrt{v_{A|B}}+\sqrt{v_{C|B}}\leq 1, \tag{43}\]
where \(v_{A|B}=\frac{V_{A|B}}{4\pi/3}\) are the _normalized volumes_.
The monogamy relation (43) is not, in general, satisfied by mixed three-qubit states [3] and it has been shown that
\[\left(v_{A|B}\right)^{\frac{2}{3}}+\left(v_{C|B}\right)^{\frac{2}{3}}\leq 1, \tag{44}\]
is the volume monogamy relation for pure as well as mixed three-qubit states [3].
As there are \(\frac{1}{2}(N-2)(N-1)\) three-qubit subsystems in an \(N\)-qubit state, each of which obeys the monogamy relation (44), on adding these relations and simplifying, one gets [3]
\[\left(v_{A|B}\right)^{\frac{2}{3}}+\left(v_{C|B}\right)^{\frac{2}{3}}+\left(v _{D|B}\right)^{\frac{2}{3}}+\ldots\leq\frac{N-1}{2}. \tag{45}\]
The relation (45) is the volume monogamy relation satisfied by pure as well as mixed \(N\)-qubit states [3]. For \(N=3\), it reduces to (44).
For multiqubit states that are invariant under exchange of qubits, \(v_{A|B}=v_{C|B}=v_{D|B}=\cdots=v_{N}\), where \(v_{N}\) denotes the normalized volume of the steering ellipsoid corresponding to any of the \(N-1\) qubits, the steering being performed by, say, the \(N\)th qubit. Eq. (45) thus reduces to
\[\left(N-1\right)\left(v_{N}\right)^{\frac{2}{3}}\leq\frac{N-1}{2}\Longrightarrow \left(v_{N}\right)^{\frac{2}{3}}\leq\frac{1}{2} \tag{46}\]
implying that \(\left(v_{N}\right)^{\frac{2}{3}}\leq\frac{1}{2}\) is the volume monogamy relation for permutation symmetric multiqubit states.
Figure 3: (Colour online) Oblate spheroids centered at the origin representing the Lorentz canonical form of the \(N\)-qubit Dicke states \(|N/2,N/2-k\rangle\) (equivalently, the states \(|D_{N-k,k}\rangle\), with \(a=0\))
### Volume monogamy relations governing the W-class of states \(\{{\cal D}_{N-1,1}\}\)
On denoting the normalized volume of a steering ellipsoid corresponding to the states \(|D_{N-1,1}\rangle\) by \(v_{N}^{(1)}\), we have (see (41))
\[v_{N}^{(1)}=\frac{|\det\Lambda^{(1)}|}{(1-r^{2})^{2}}, \tag{47}\]
where \(\Lambda^{(1)}\) is given in (30) and
\[r_{1}=\frac{2a\sqrt{1-a^{2}}}{1+a^{2}(N-1)},\ \ r_{2}=0,\ \ r_{3}=1+\frac{2a^{2}}{1+a ^{2}(N-1)}-\frac{2}{N} \tag{48}\]
Under suitable Lorentz transformations, the real matrix \(\Lambda^{(1)}\) (see (30)) associated with the state \(\rho^{(1)}\) (see (11)) gets transformed to its Lorentz canonical form \(\widetilde{\Lambda}^{(1)}\) (see (34)). It follows that (see (28), (32))
\[\left(L_{A}\,\Lambda^{(1)}\,L_{B}^{T}\right)_{00}=\sqrt{\phi_{0}}=2\sqrt{N-1} \left[\frac{1-a^{2}}{N(1+(N-1)\,a^{2})}\right]. \tag{49}\]
Using the property \(\det L_{A}=\det L_{B}=1\) of orthochronous proper Lorentz transformations [18] and substituting \(|\det\widetilde{\Lambda}^{(1)}|=\frac{1}{(N-1)^{2}}\) in (22), we obtain
\[|\det\widetilde{\Lambda}^{(1)}|=\frac{1}{(N-1)^{2}}=|\det L_{A}|\,|\det L_{B}| \,\biggl{|}\det\left(\frac{\Lambda^{(1)}}{\sqrt{\phi_{0}}}\right)\biggr{|}= \frac{|\det\,\Lambda^{(1)}|}{\phi_{0}^{2}}. \tag{50}\]
Eq. (50) leads to \(|\det\,\Lambda^{(1)}|=\phi_{0}^{2}\,|\det\widetilde{\Lambda}^{(1)}|\). The normalized volume \(v_{N}^{(1)}\) of the quantum steering ellipsoid corresponding to W-class of states thus becomes (see (47))
\[v_{N}^{(1)}=|\det\widetilde{\Lambda}^{(1)}|\frac{\phi_{0}^{2}}{(1-r^{2})^{2}} \tag{51}\]
From (48) and (49) it readily follows that \(\phi_{0}^{2}=(1-r^{2})^{2}\) and hence (see (51)) the simple form for the normalized volume of the corresponding steering ellipsoid associated with the two-qubit state \(\rho^{(1)}\) turns out to be
\[v_{N}^{(1)}=\frac{\phi_{0}^{2}}{(N-1)^{2}\,(1-r^{2})^{2}}=\frac{1}{(N-1)^{2}}. \tag{52}\]
The volume monogamy relation \(\left(v_{N}^{(1)}\right)^{\frac{2}{3}}\leq\frac{1}{2}\) (see (46)) takes the form
\[\left(\frac{1}{(N-1)^{2}}\right)^{2/3}\leq\frac{1}{2}\,\Longrightarrow\,(N-1 )^{\frac{-4}{3}}\leq\frac{1}{2} \tag{53}\]
and is readily satisfied for any \(N\geq 3\) as can be seen in Fig. 4.
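A direct numerical cross-check of (52) and the bound (46) can be written as follows (not part of the derivation above; parameter values and helper names are ours): the two-qubit marginal is taken from the closed form (11)-(12), its real representative is computed via (13), and the normalized volume (41) is evaluated.

```python
import numpy as np

sigma = [np.eye(2), np.array([[0, 1], [1, 0]]),
         np.array([[0, -1j], [1j, 0]]), np.array([[1, 0], [0, -1]])]

def rho_w_marginal(N, a):
    """Two-qubit marginal of |D_{N-1,1}>, Eqs. (11)-(12)."""
    A = (N**2 * a**2 + (N - 2) * (1 - a**2)) / (N**2 * a**2 + N * (1 - a**2))
    B = a * np.sqrt(1 - a**2) / (1 + a**2 * (N - 1))
    D = (1 - a**2) / (N**2 * a**2 + N * (1 - a**2))
    return np.array([[A, B, B, 0], [B, D, D, 0], [B, D, D, 0], [0, 0, 0, 0]])

def normalized_volume(rho):
    """v_{A|B} = |det Lambda| / (1 - r^2)^2, i.e. Eq. (41) divided by 4*pi/3."""
    Lam = np.array([[np.real(np.trace(rho @ np.kron(sigma[m], sigma[n])))
                     for n in range(4)] for m in range(4)])
    r2 = np.sum(Lam[1:, 0] ** 2)
    return abs(np.linalg.det(Lam)) / (1 - r2) ** 2

for N in (3, 5, 8):
    v = normalized_volume(rho_w_marginal(N, a=0.3))
    print(np.isclose(v, 1.0 / (N - 1) ** 2), v ** (2.0 / 3.0) <= 0.5)   # Eq. (52) and Eq. (46)
```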
### Relation between obesity of steering ellipsoids and concurrence
We recall here that the _obesity_\({\cal O}(\rho_{AB})=|\det\Lambda|^{1/4}\) of the quantum steering ellipsoid [2] depicting a two-qubit state \(\rho_{AB}\) is an upper bound for the concurrence \(C(\rho_{AB})\):
\[C(\rho_{AB})\leq{\cal O}(\rho_{AB})=|\det\Lambda|^{1/4}. \tag{54}\]
Furthermore, if \(\rho_{AB}\longrightarrow\widetilde{\rho}_{AB}=(A\otimes B)\,\rho_{AB}\,(A^{\dagger}\otimes B^{\dagger})/\,{\rm Tr}\left[\rho_{AB}\left(A^{\dagger}A\otimes B^{\dagger}B\right)\right]\), \(A,B\in SL(2,C)\), it follows that [2]
\[\frac{{\cal O}(\rho_{AB})}{C(\rho_{AB})}=\frac{{\cal O}(\widetilde{\rho}_{AB}) }{C(\widetilde{\rho}_{AB})}. \tag{55}\]
We make use of the relation (55) to obtain a relation for concurrence [22] of a pair of qubits in the symmetric \(N\)-qubit pure states \(|D_{N-k,k}\rangle\), \(k=1,\,2,\ldots,\left[\frac{N}{2}\right]\). For the states \(|D_{N-1,1}\rangle\) belonging to W-class, we readily get (see (30), (34))
\[\det\Lambda^{(1)}=\left(\frac{2(1-a^{2})}{N(1+a^{2}(N-1))}\right)^{4},\quad \det\widetilde{\Lambda}^{(1)}=\left(\frac{1}{N-1}\right)^{2} \tag{56}\]
and thereby the obesities \(\mathcal{O}(\rho^{(1)})\), \(\mathcal{O}(\widetilde{\rho}^{(1)})\):
\[\mathcal{O}(\rho^{(1)})=\frac{2(1-a^{2})}{N(1+a^{2}(N-1))},\quad\mathcal{O}( \widetilde{\rho}^{(1)})=\frac{1}{\sqrt{N-1}} \tag{57}\]
It is not difficult to evaluate the concurrence of the canonical state \(\widetilde{\rho}^{(1)}\) and it is seen that
\[C(\widetilde{\rho}^{(1)})=\mathcal{O}(\widetilde{\rho}^{(1)})=\frac{1}{\sqrt {N-1}}. \tag{58}\]
We thus obtain (see (55),(58))
\[C(\rho^{(1)})=\mathcal{O}(\rho^{(1)})=\frac{2(1-a^{2})}{N(1+a^{2}(N-1))}. \tag{59}\]
The value of concurrence in (59) matches exactly with that obtained using \(C(\rho^{(1)})=\max(0,\mu_{1}-\mu_{2}-\mu_{3}-\mu_{4})\) where \(\mu_{1}\geq\mu_{2}\geq\mu_{3}\geq\mu_{4}\) are square-roots of the eigenvalues of the matrix \(R=\rho^{(1)}\left(\sigma_{2}\otimes\sigma_{2}\right)\rho^{(1)^{*}}\left( \sigma_{2}\otimes\sigma_{2}\right)\)[22]. We have seen that the state \(|D_{N-1,\,1}\rangle\) reduces to W-state when \(a=0\) and hence for the \(N\)-qubit W-state, concurrence of any pair of qubits is given by \(C(\rho^{(1)}_{W})=\frac{2}{N}\) (see (59)).
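The agreement claimed in (59) is also easy to confirm directly with the Wootters formula quoted above; the sketch below (parameter values and helper names are ours) does so for the two-qubit marginal (11)-(12).

```python
import numpy as np

def concurrence(rho):
    """Wootters concurrence C(rho) = max(0, mu1 - mu2 - mu3 - mu4) [22]."""
    yy = np.kron(np.array([[0, -1j], [1j, 0]]), np.array([[0, -1j], [1j, 0]]))
    R = rho @ yy @ rho.conj() @ yy
    mu = np.sort(np.sqrt(np.abs(np.linalg.eigvals(R))))[::-1]
    return max(0.0, mu[0] - mu[1] - mu[2] - mu[3])

def rho_w_marginal(N, a):
    """Two-qubit marginal of |D_{N-1,1}>, Eqs. (11)-(12)."""
    A = (N**2 * a**2 + (N - 2) * (1 - a**2)) / (N**2 * a**2 + N * (1 - a**2))
    B = a * np.sqrt(1 - a**2) / (1 + a**2 * (N - 1))
    D = (1 - a**2) / (N**2 * a**2 + N * (1 - a**2))
    return np.array([[A, B, B, 0], [B, D, D, 0], [B, D, D, 0], [0, 0, 0, 0]])

N, a = 6, 0.5
print(concurrence(rho_w_marginal(N, a)),
      2 * (1 - a**2) / (N * (1 + a**2 * (N - 1))))   # the two numbers agree, Eq. (59)
```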
## VI Summary
In this work, we have analyzed the canonical steering ellipsoids and volume monogamy relations of the pure symmetric \(N\)-qubit states characterized by two distinct Majorana spinors. We have shown that the entire W-class of states has a geometric representation in terms of a _shifted oblate spheroid_ inscribed within the Bloch sphere. The center of the spheroid, the length of its semiaxes and its volume are shown to be dependent only on the number of qubits \(N\) implying that all states in the \(N\)-qubit W-class are characterized by a _single_ spheroid, shifted along the polar axis of the Bloch sphere. All other SLOCC inequivalent families of pure symmetric \(N\)-qubit states with two distinct spinors are shown to be geometrically represented by _ellipsoids centered at the origin_. Except the W-state (and its obverse counterpart) which are represented by a _shifted spheroid_, all other \(N\)-qubit Dicke states are represented by an _oblate spheroid centered at the origin_. A discussion on volume monogamy relations applicable to identical subsystems of a pure symmetric \(N\)-qubit state is given here and a volume monogamy relation applicable for W-class of states is
obtained. A relation connecting concurrence of the two-qubit state and obesity of the associated quantum steering ellipsoid with its canonical counterparts is made use of to obtain concurrence of the states belonging to W-class. It would be interesting to examine the features of canonical steering ellipsoids and volume monogamy relations for the SLOCC inequivalent families of pure symmetric multiqubit states with more than two distinct spinors; in particular, the class of pure symmetric \(N\)-qubit states belonging to GHZ-class (with three distinct spinors).
## Acknowledgements
BGD thanks IASC-INSA-NASI for the award of Summer Research Fellowship-2022, during this work. Sudha, ARU and IR are supported by the Department of Science and Technology (DST), India through Project No. DST/ICPS/QUST/2018/107.
|
2310.17637
|
Positivity-preserving and entropy-bounded discontinuous Galerkin method
for the chemically reacting, compressible Navier-Stokes equations
|
This article concerns the development of a fully conservative,
positivity-preserving, and entropy-bounded discontinuous Galerkin scheme for
the multicomponent, chemically reacting, compressible Navier-Stokes equations
with complex thermodynamics. In particular, we extend to viscous flows the
fully conservative, positivity-preserving, and entropy-bounded discontinuous
Galerkin method for the chemically reacting Euler equations that we previously
introduced. An important component of the formulation is the
positivity-preserving Lax-Friedrichs-type viscous flux function devised by
Zhang [J. Comput. Phys., 328 (2017), pp. 301-343], which was adapted to
multicomponent flows by Du and Yang [J. Comput. Phys., 469 (2022), pp. 111548]
in a manner that treats the inviscid and viscous fluxes as a single flux. Here,
we similarly extend the aforementioned flux function to multicomponent flows
but separate the inviscid and viscous fluxes, resulting in a different
dissipation coefficient. This separation of the fluxes allows for use of other
inviscid flux functions, as well as enforcement of entropy boundedness on only
the convective contribution to the evolved state, as motivated by physical and
mathematical principles. We also detail how to account for boundary conditions
and incorporate previously developed techniques to reduce spurious pressure
oscillations into the positivity-preserving framework. Furthermore, potential
issues associated with the Lax-Friedrichs-type viscous flux function in the
case of zero species concentrations are discussed and addressed. The resulting
formulation is compatible with curved, multidimensional elements and general
quadrature rules with positive weights. A variety of multicomponent, viscous
flows is computed, ranging from a one-dimensional shock tube problem to
multidimensional detonation waves and shock/mixing-layer interaction.
|
Eric J. Ching, Ryan F. Johnson, Sarah Burrows, Jacklyn Higgs, Andrew D. Kercher
|
2023-10-26T17:52:33Z
|
http://arxiv.org/abs/2310.17637v2
|
Positivity-preserving and entropy-bounded discontinuous Galerkin method for the chemically reacting, compressible Navier-Stokes equations
###### Abstract
This article concerns the development of a fully conservative, positivity-preserving, and entropy-bounded discontinuous Galerkin scheme for simulating the multicomponent, chemically reacting, compressible Navier-Stokes equations with complex thermodynamics. In particular, we extend to viscous flows the fully conservative, positivity-preserving, and entropy-bounded discontinuous Galerkin method for the chemically reacting Euler equations that we previously introduced. An important component of the formulation is the positivity-preserving Lax-Friedrichs-type viscous flux function devised by Zhang [_J. Comput. Phys._, 328 (2017), pp. 301-343], which was adapted to multicomponent flows by Du and Yang [_J. Comput. Phys._, 469 (2022), pp. 111548] in a manner that treats the inviscid and viscous fluxes as a single flux. Here, we similarly extend the aforementioned flux function to multicomponent flows but separate the inviscid and viscous fluxes, resulting in a different dissipation coefficient. This separation of the fluxes allows for use of other inviscid flux functions, as well as enforcement of entropy boundedness on only the convective contribution to the evolved state, as motivated by physical and mathematical principles. We also discuss in detail how to account for boundary conditions and incorporate previously developed pressure-equilibrium-preserving techniques into the positivity-preserving framework. Comparisons between the Lax-Friedrichs-type viscous flux function and more conventional flux functions are provided, the results of which motivate an adaptive solution procedure that employs the former only when the element-local solution average has negative species concentrations, nonpositive density, or nonpositive pressure. The resulting formulation is compatible with curved, multidimensional elements of arbitrary shape and general quadrature rules with positive weights. A variety of multicomponent, viscous flows is computed, ranging from a one-dimensional shock tube problem to multidimensional detonation waves and shock/mixing-layer interaction. We find that just as in the inviscid, multicomponent case, the robustness benefits of the enforcement of an entropy bound are much more pronounced than in the monocomponent, calorically perfect setting. Where appropriate, we demonstrate that mass, total energy, and atomic elements are discretely conserved.
keywords: Discontinuous Galerkin method; Combustion; Detonation; Minimum entropy principle; Positivity-preserving; Multicomponent Navier-Stokes equations
## 1 Introduction
In the past two decades, interest in the discontinuous Galerkin (DG) method for fluid flow simulations has surged dramatically. This method benefits from arbitrarily high order of accuracy on unstructured grids, as well as a compact stencil and high arithmetic intensity suited for modern computing systems. However, one of the primary obstacles to widespread use of this numerical scheme is its susceptibility to nonlinear
instabilities in underresolved regions and near non-smooth features. Robustness is an even greater concern when mixtures of thermally perfect gases and chemical reactions are considered [1; 2]. For instance, it is well-known that spurious pressure oscillations are generated in moving interface problems when fully conservative schemes are employed [3; 4; 5], often leading to solver divergence. A number of quasi-conservative methods, such as the double-flux technique [6; 7; 8], have been proposed to circumvent this issue, typically at the expense of energy conservation. Recently, two of the authors introduced a fully conservative DG scheme that does not generate such oscillations in smooth regions of the flow [2]. Strang splitting was employed to decouple the temporal integration of the convective and diffusive operators from that of stiff chemical source terms. Artificial viscosity was used to stabilize the solution near shocks and other non-smooth features. However, artificial viscosity alone is often not sufficient to guarantee stability. To further increase robustness, we made key advancements to this fully conservative DG method, focusing on the inviscid case, to ensure satisfaction of the positivity property (i.e., nonnegative species concentrations, positive density, and positive pressure) and an entropy bound based on the minimum entropy principle for the multicomponent Euler equations [9; 10], which states that the spatial minimum of specific thermodynamic entropy of entropy solutions is a nondecreasing function of time. The main ingredients of this DG formulation [10; 11] are (a) an invariant-region-preserving inviscid flux function [12], (b) a simple linear-scaling limiter [13], (c) satisfaction of a time-step-size constraint for the transport step with a strong-stability-preserving explicit time integrator, (d) incorporation of the pressure-equilibrium-preserving techniques introduced by Johnson and Kercher [2], and (e) an entropy-stable DG discretization in time based on diagonal-norm summation-by-parts operators for the reaction step. It was found that the formulation was capable of robustly and accurately computing complex inviscid, reacting flows using high-order polynomials and relatively coarse meshes. Enforcement of entropy boundedness was critical for stability in simulations of multidimensional detonation waves.
The consideration of viscous flows brings about additional complications. Using conventional viscous flux functions, such as the second form of Bassi and Rebay (BR2) [14] and the symmetric interior penalty method (SIPG) [15], positivity is not guaranteed to be maintained. Specifically, it is possible for the constraint on the time step size to be arbitrarily small for the solution to satisfy said property. To remedy this problem, Zhang [16] introduced a Lax-Friedrichs-type viscous flux function for the monocomponent Navier-Stokes equations, accompanied by a strictly positive upper bound on the time step size to guarantee satisfaction of the positivity property. Although it may be surprising that the viscous flux can be a primary source of negative concentrations, nonpositive density, and/or nonpositive pressure, we call attention to certain numerical challenges specific to multicomponent-flow simulations: not only are nonlinear instabilities more likely to occur, but also species concentrations are typically close or equal to zero, such that small numerical errors can easily lead to negative concentrations. Furthermore, in the monocomponent case, the mass conservation equation is identical between the Euler system and the Navier-Stokes system; therefore, the diffusive operator does not directly contribute to negative densities. However, this is not true in the multicomponent case, which means that the viscous flux can indeed be largely responsible for negative concentrations. Note that many multicomponent-flow codes simply "clip" negative species concentrations, but such an intrusive strategy violates mass conservation and pollutes the solution with low-order errors.
Du and Yang [17] recently extended the aforementioned Lax-Friedrichs-type viscous flux function to multicomponent flows. Specifically, they combined the inviscid and viscous fluxes into a single flux, such that the resulting dissipation coefficient accounts for both fluxes simultaneously. Entropy boundedness was not considered. Instead of operator splitting, they employed an exponential multistage/multistep, explicit time integration scheme [18; 19; 20] that can handle stiff source terms. Although in the present study we use operator splitting since it has proven successful to date and its accuracy is less reliant on "well-prepared" initial conditions [18; 19; 20], exponential multistage/multistep time integrators are indeed worthy of future investigation.
In this work, we develop a fully conservative, positivity-preserving, and entropy-bounded DG method for the compressible, multicomponent, chemically reacting Navier-Stokes equations. We focus on the transport step since the treatment of stiff chemical source terms is identical to that in the inviscid case. Enforcement of a lower bound on the specific thermodynamic entropy is performed on only the convective contribution to the evolved state since the viscous flux function is not fully compatible with said entropy bound. This was also done by Dzanic and Witherden [21], albeit in a different manner, in their entropy-based filtering framework.
Furthermore, at least in the monocomponent, calorically perfect setting, the minimum entropy principle does not hold for the Navier-Stokes equations unless the thermal diffusivity is zero [22; 23]. Although such analysis has not yet been performed for the multicomponent Navier-Stokes equations with the thermally perfect gas model, we do not expect the conclusion to change. Our primary contributions are as follows:
* We extend the aforementioned positivity-preserving Lax-Friedrichs-type viscous flux function [16] to multicomponent flows. Specifically, unlike in [17], we treat the inviscid and viscous fluxes separately, resulting in a different dissipation coefficient. The rationale for separating the fluxes is twofold. First, enforcing a bounded entropy on the convective contribution necessitates isolating the fluxes. Secondly, in our experience, we have found the HLLC inviscid flux function to perform more favorably than the Lax-Friedrichs inviscid flux function. We also discuss the treatment of boundary conditions in more detail.
* Entropy boundedness is enforced on only the convective contribution in a rigorous manner that maintains full compatibility with the positivity property.
* We show that if any of the species concentrations is zero, then the Lax-Friedrichs-type viscous flux function alone cannot guarantee that all species concentrations (or more specifically, their element averages) at the following time step remain nonnegative. This is true regardless of whether the inviscid and viscous fluxes are treated simultaneously or separately. A remedy for this pathological case is proposed.
* We incorporate the pressure-equilibrium-preserving techniques by Johnson and Kercher [2] into the positivity-preserving framework, which imposes an additional constraint on the time step size.
* The performance of the Lax-Friedrichs-type viscous flux function is assessed. Optimal convergence for smooth flows is observed. However, comparisons with the BR2 scheme indicate that when possible, the latter is generally still preferred. As such, we employ an adaptive solution procedure that only employs the Lax-Friedrichs-type viscous flux function when necessary.
* The proposed formulation is compatible with multidimensional elements of arbitrary shape and geometric-approximation order. We first apply it to a series of one-dimensional viscous flows: advection-diffusion of a thermal bubble, a premixed flame, and shock-tube flow. More complex viscous flow problems are then considered, namely a two-dimensional detonation wave enclosed by adiabatic walls and three-dimensional shock/mixing-layer interaction. Discrete conservation of mass, total energy, and atomic elements is demonstrated. Just as in the inviscid case, enforcement of the entropy bound significantly improves the stability of the solution.
The remainder of this article is organized as follows. The governing equations, transport properties, and thermodynamic relations are summarized in Section 2, followed by a review of the basic DG discretization in Section 3. Section 4 presents the positivity-preserving and entropy-bounded DG method for the transport step. Results for a variety of test cases are given in the next section. The paper concludes with some final remarks.
## 2 Governing equations
The compressible, multicomponent, chemically reacting Navier-Stokes equations in \(d\) spatial dimensions are given by
\[\frac{\partial y}{\partial t}+\nabla\cdot\mathcal{F}\left(y,\nabla y\right)- \mathcal{S}\left(y\right)=0, \tag{2.1}\]
where \(y\) is the state vector, \(\nabla y\) is its spatial gradient, \(t\) is time, \(\mathcal{F}\) is the flux, and \(\mathcal{S}=\left(0,\ldots,0,0,\omega_{1},\ldots,\omega_{n_{s}}\right)^{T}\) is the chemical source term, with \(\omega_{i}\) corresponding to the production rate of the \(i\)th species. The physical coordinates are denoted by \(x=\left(x_{1},\ldots,x_{d}\right)\). The vector of state variables is expanded as
\[y=\left(\rho v_{1},\ldots,\rho v_{d},\rho e_{t},C_{1},\ldots,C_{n_{s}}\right)^{T}, \tag{2.2}\]
where \(\rho\) is density, \(v=(v_{1},\ldots,v_{d})\) is the velocity vector, \(e_{t}\) is the specific total energy, \(C=(C_{1},\ldots,C_{n_{s}})\) is the vector of molar concentrations, and \(n_{s}\) is the number of species. The partial density of the \(i\)th species is defined as
\[\rho_{i}=W_{i}C_{i},\]
where \(W_{i}\) is the molecular weight of the \(i\)th species, from which the density can be computed as
\[\rho=\sum_{i=1}^{n_{s}}\rho_{i}.\]
The mole and mass fractions of the \(i\)th species are given by
\[X_{i}=\frac{C_{i}}{\sum_{i=1}^{n_{s}}C_{i}},\quad Y_{i}=\frac{\rho_{i}}{\rho}.\]
The equation of state for the mixture is written as
\[P=R^{0}T\sum_{i=1}^{n_{s}}C_{i}, \tag{2.3}\]
where \(P\) is the pressure, \(T\) is the temperature, and \(R^{0}\) is the universal gas constant. The specific total energy is the sum of the mixture-averaged specific internal energy, \(u\), and the specific kinetic energy, written as
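A minimal sketch of how the mixture quantities above are evaluated from the evolved concentrations is given below (Python; the function name and the two-species placeholder values are ours, not taken from the formulation).

```python
import numpy as np

R0 = 8.314462618  # universal gas constant, J/(mol K)

def mixture_state(C, W, T):
    """Mixture quantities from molar concentrations C_i (mol/m^3), molecular
    weights W_i (kg/mol), and temperature T (K); cf. Eqs. (2.2)-(2.3)."""
    rho_i = W * C                 # partial densities
    rho = rho_i.sum()             # mixture density
    Y = rho_i / rho               # mass fractions
    X = C / C.sum()               # mole fractions
    P = R0 * T * C.sum()          # mixture equation of state, Eq. (2.3)
    return rho, Y, X, P

# hypothetical two-species example (placeholder values)
C = np.array([30.0, 10.0])        # mol/m^3
W = np.array([0.028, 0.032])      # kg/mol (e.g., N2 and O2)
print(mixture_state(C, W, 300.0))
```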
\[e_{t}=u+\frac{1}{2}\sum_{k=1}^{d}v_{k}v_{k},\]
where the former is the mass-weighted sum of the specific internal energies of each species, given by
\[u=\sum_{i=1}^{n_{s}}Y_{i}u_{i}.\]
With the thermally perfect gas model, \(u_{i}\) is defined as
\[u_{i}=h_{i}-R_{i}T=h_{\rm ref,i}+\int_{T_{\rm ref}}^{T}c_{p,i}(\tau)d\tau-R_{i }T,\]
where \(h_{i}\) is the specific enthalpy of the \(i\)th species, \(R_{i}=R^{0}/W_{i}\), \(T_{\rm ref}\) is the reference temperature of 298.15 K, \(h_{\rm ref,i}\) is the reference-state species formation enthalpy, and \(c_{p,i}\) is the specific heat at constant pressure of the \(i\)th species, which is approximated with a polynomial as a function of temperature based on the NASA coefficients [24, 25], i.e.,
\[c_{p,i}=\sum_{k=0}^{n_{p}}a_{ik}T^{k}. \tag{2.4}\]
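For illustration, the caloric relations above can be evaluated as follows (a sketch only; the polynomial coefficients would come from the NASA database [24, 25], the units of \(c_{p,i}\) and \(h_{{\rm ref},i}\) must be chosen consistently, and the numbers below are placeholders).

```python
R0 = 8.314462618   # universal gas constant, J/(mol K)

def cp_i(T, a):
    """Specific heat c_{p,i}(T) from the polynomial fit of Eq. (2.4); a = (a_{i0}, ..., a_{i n_p})."""
    return sum(ak * T**k for k, ak in enumerate(a))

def u_i(T, a, h_ref_i, W_i, T_ref=298.15):
    """u_i = h_ref,i + int_{T_ref}^{T} c_{p,i} dtau - R_i T for a thermally perfect gas."""
    integral = sum(ak * (T**(k + 1) - T_ref**(k + 1)) / (k + 1) for k, ak in enumerate(a))
    return h_ref_i + integral - (R0 / W_i) * T

# hypothetical species: constant c_p = 1000 J/(kg K), zero formation enthalpy, W = 0.028 kg/mol
print(u_i(600.0, a=(1000.0,), h_ref_i=0.0, W_i=0.028))
```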
The mixture-averaged specific thermodynamic entropy is obtained via a mass-weighted sum of the specific entropies of each species as
\[s=\sum_{i=1}^{n_{s}}Y_{i}s_{i},\]
with \(s_{i}\) defined as
\[s_{i}=s_{\rm ref,i}^{o}+\int_{T_{\rm ref}}^{T}\frac{c_{v,i}(\tau)}{\tau}d\tau -R_{i}\log\frac{C_{i}}{C_{\rm ref}},\]
where \(s_{\text{ref},i}^{o}\) is the species formation entropy at the reference temperature and reference pressure, \(P_{\text{ref}}\), of 1 atm, \(c_{v,i}=c_{p,i}-R_{i}\) is the specific heat at constant volume of the \(i\)th species, and \(C_{\text{ref}}=P_{\text{ref}}/R^{0}T_{\text{ref}}\) is the reference concentration.
The flux can be expressed as the difference between the convective flux, \(\mathcal{F}^{c}\), and the viscous flux, \(\mathcal{F}^{v}\), i.e.,
\[\mathcal{F}\left(y,\nabla y\right)=\left(\mathcal{F}^{c}\left(y\right)- \mathcal{F}^{v}\left(y,\nabla y\right)\right),\]
where the \(k\)th spatial components are defined as
\[\mathcal{F}_{k}^{c}\left(y\right)=\left(\rho v_{k}v_{1}+P\delta_{k1},\ldots, \rho v_{k}v_{d}+P\delta_{kd},v_{k}\left(\rho e_{t}+P\right),v_{k}C_{1},\ldots, v_{k}C_{n_{s}}\right)^{T} \tag{2.5}\]
and
\[\mathcal{F}_{k}^{v}\left(y,\nabla y\right)=\left(\tau_{1k},\ldots,\tau_{dk}, \sum_{j=1}^{d}\tau_{kj}v_{j}+\sum_{i=1}^{n_{s}}W_{i}C_{i}h_{i}V_{ik}-q_{k},C_{ 1}V_{1k},\ldots,C_{n_{s}}V_{n_{s}k}\right)^{T}, \tag{2.6}\]
respectively. \(\tau\) is the viscous stress tensor, \(q\) is the heat flux, and \(V_{ik}\) is the \(k\)th spatial component of the diffusion velocity of the \(i\)th species, defined as
\[V_{ik}=\hat{V}_{ik}-\frac{\sum_{l=1}^{n_{s}}W_{l}C_{l}\hat{V}_{lk}}{\rho},\quad \hat{V}_{ik}=\frac{\bar{D}_{i}}{C_{i}}\frac{\partial C_{i}}{\partial x_{k}}- \frac{\bar{D}_{i}}{\rho}\frac{\partial\rho}{\partial x_{k}},\]
which includes a standard correction to ensure mass conservation (i.e., \(\sum_{i=1}^{n_{s}}W_{i}C_{i}V_{ik}=0\)) [26; 27]. \(\bar{D}_{i}\) is the mixture-averaged diffusion coefficient of the \(i\)th species, obtained as [28]
\[\bar{D}_{i}=\frac{1}{\bar{W}}\frac{\sum_{j=1,j\neq i}^{n_{s}}X_{j}W_{j}}{\sum_ {j=1,j\neq i}^{n_{s}}X_{j}/D_{ij}},\]
where \(\bar{W}=\rho/\sum_{i}C_{i}\) is the mixture molecular weight and \(D_{ij}\) is the binary diffusion coefficient between the \(i\)th and \(j\)th species, which is a positive function of temperature and pressure [29; 30]. Note that \(\bar{D}_{i}\) can be nonzero for \(C_{i}=0\). The \(k\)th spatial components of the viscous stress tensor and the heat flux are written as
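The mixture-averaged model and the mass-conservation correction can be illustrated with a short sketch (our own helper names and random placeholder data; note the division by \(C_{i}\) in \(\hat{V}_{ik}\), which is delicate when a concentration vanishes).

```python
import numpy as np

def mixture_averaged_D(X, W, Dij):
    """bar{D}_i = (1/bar{W}) * sum_{j!=i} X_j W_j / sum_{j!=i} X_j / D_ij (mixture-averaged model)."""
    Wbar = float(X @ W)
    n = len(X)
    return np.array([sum(X[j] * W[j] for j in range(n) if j != i) /
                     (Wbar * sum(X[j] / Dij[i, j] for j in range(n) if j != i))
                     for i in range(n)])

def corrected_diffusion_velocity(C, W, Dbar, dCdx, drhodx, rho):
    """V_i = hat{V}_i - (1/rho) sum_l W_l C_l hat{V}_l, so that sum_i W_i C_i V_i = 0."""
    Vhat = Dbar / C * dCdx - Dbar / rho * drhodx    # division by C_i: delicate if C_i -> 0
    return Vhat - np.dot(W * C, Vhat) / rho

rng = np.random.default_rng(1)
n = 3
C = rng.uniform(1.0, 5.0, n)                        # mol/m^3 (placeholders)
W = np.array([0.002, 0.028, 0.032])                 # kg/mol (placeholders)
X, rho = C / C.sum(), float(W @ C)
Dij = rng.uniform(1e-5, 5e-5, (n, n))
Dij = 0.5 * (Dij + Dij.T)                           # symmetric binary coefficients
Dbar = mixture_averaged_D(X, W, Dij)
V = corrected_diffusion_velocity(C, W, Dbar, rng.normal(size=n), rng.normal(), rho)
print(np.isclose(np.dot(W * C, V), 0.0))            # corrected fluxes sum to zero
```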
\[\tau_{k}\left(y,\nabla y\right)=\mu\left(\frac{\partial v_{1}}{\partial x_{k }}+\frac{\partial v_{k}}{\partial x_{1}}-\delta_{k1}\frac{2}{3}\sum_{j=1}^{d} \frac{\partial v_{j}}{\partial x_{j}},\ldots,\frac{\partial v_{d}}{\partial x _{k}}+\frac{\partial v_{k}}{\partial x_{d}}-\delta_{kd}\frac{2}{3}\sum_{j=1}^{d }\frac{\partial v_{j}}{\partial x_{j}}\right),\]
where \(\mu\) is the dynamic viscosity, calculated using the Wilke model [31], and
\[q_{k}\left(y,\nabla y\right) = -\lambda_{T}\frac{\partial T}{\partial x_{k}},\]
where \(\lambda_{T}\) is the thermal conductivity, computed with the Mathur model [32], respectively. The viscous flux can also be written as
\[\mathcal{F}^{v}\left(y,\nabla y\right)=G\left(y\right):\nabla y \tag{2.7}\]
where \(G\left(y\right)\) is the homogeneity tensor [33], obtained by differentiating the viscous flux with respect to the gradient, i.e., \(G(y)=\partial\mathcal{F}^{v}/\partial\nabla y\). Additional information on the thermodynamic relations, transport properties, and chemical reaction rates can be found in [2] and [10].
## 3 Discontinuous Galerkin discretization
This section summarizes the DG discretization of Equation 2.1 and the approach introduced by Johnson and Kercher [2] to suppress spurious pressure oscillations in smooth regions of the flow.
Let the computational domain, \(\Omega\), be partitioned by \(\mathcal{T}\), which consists of non-overlapping cells, \(\kappa\), with boundaries \(\partial\kappa\). Let \(\mathcal{E}=\mathcal{E}_{\mathcal{I}}\cup\mathcal{E}_{\partial}\) be the set of interfaces, \(\epsilon\), such that \(\cup_{\epsilon\in\mathcal{E}}\epsilon=\cup_{\kappa\in\mathcal{T}}\partial\kappa\), comprised of the interior interfaces,
\[\epsilon_{\mathcal{I}}\in\mathcal{E}_{\mathcal{I}}=\left\{\epsilon_{\mathcal{ I}}\in\mathcal{E}\,|\,\epsilon_{\mathcal{I}}\cap\partial\Omega=\emptyset\right\},\]
and boundary interfaces,
\[\epsilon_{\partial}\in\mathcal{E}_{\partial}=\left\{\epsilon_{\partial}\in \mathcal{E}\,|\,\epsilon_{\partial}\subset\partial\Omega\right\}.\]
At a given interior interface, there exists \(\kappa^{+},\kappa^{-}\in\mathcal{T}\) such that \(\epsilon_{\mathcal{I}}=\partial\kappa^{+}\cap\partial\kappa^{-}\). \(n^{+}\) is the outward-facing normal of \(\kappa^{+}\), and \(n^{+}=-n^{-}\). The discrete subspace \(V_{h}^{p}\) over \(\mathcal{T}\) is defined as
\[V_{h}^{p} = \left\{\mathfrak{v}\in\left[L^{2}\left(\Omega\right)\right]^{m} \Big{|}\,\forall\kappa\in\mathcal{T},\left.\mathfrak{v}\right|_{\kappa}\in \left[\mathcal{P}_{p}(\kappa)\right]^{m}\right\}, \tag{3.1}\]
where \(m=n_{s}+d+1\) is the number of state variables and \(\mathcal{P}_{p}(\kappa)\) in one spatial dimension is the space of polynomial functions of degree no greater than \(p\) in \(\kappa\). In multiple dimensions, the choice of polynomial space often depends on the element type [33].
To solve for the discrete solution, we require \(y\in V_{h}^{p}\) to satisfy
\[\sum_{\kappa\in\mathcal{T}}\left(\frac{\partial y}{\partial t}, \mathfrak{v}\right)_{\kappa}-\sum_{\kappa\in\mathcal{T}}\left(\mathcal{F}^{c} \left(y,\nabla y\right),\nabla\mathfrak{v}\right)_{\kappa}+\sum_{\epsilon\in \mathcal{E}}\left(\mathcal{F}^{c\dagger}\left(y,n\right),\left[\mathfrak{v} \right]\right)_{\epsilon}-\sum_{\epsilon\in\mathcal{E}}\left(\left\{\mathcal{F }^{v}\left(y,\nabla y\right)\right\}\cdot n-\delta^{v}\left(y,\nabla y,n\right),\left[\mathfrak{v}\right]\right)_{\epsilon}\] \[+\sum_{\kappa\in\mathcal{T}}\left(G\left(y^{+}\right):\left( \left\{\!\left\{y\right\}\!\right\}-y^{+}\right)\otimes n,\nabla\mathfrak{v} \right)_{\partial\kappa}-\sum_{\kappa\in\mathcal{T}}\left(\mathcal{S}\left(y \right),\mathfrak{v}\right)_{\kappa}=0\qquad\forall\mathfrak{v}\in V_{h}^{p}, \tag{3.2}\]
where \(\left(\cdot,\cdot\right)\) denotes the inner product, \(\mathcal{F}^{c\dagger}\) is the inviscid flux function, \(\left\{\!\left\{\cdot\right\}\!\right\}\) is the average operator, \(\left[\!\left[\cdot\right]\!\right]\) is the jump operator, and \(\delta^{v}\) is a viscous-flux penalty term that depends on the viscous flux function. Note that Equation (3.2) corresponds to a primal formulation [34, 33]; in [16], a flux formulation is used. It is worth mentioning that the penalty term for many conventional viscous flux functions is not a function of the gradient, i.e., \(\delta^{v}=\delta^{v}(y,n)\); however, as will be seen in Section 4.1, the penalty term for the proposed Lax-Friedrichs-type viscous flux function indeed depends on the gradient. In this work, we employ the HLLC inviscid numerical flux [35]. To compute \(\delta^{v}\), we consider the BR2 scheme [14] and the proposed Lax-Friedrichs-type flux function. The jump operator, average operator, inviscid flux function, and penalty term are defined as
\[\left[\!\left[v\right]\!\right]=v^{+}-v^{-}\text{ on }\epsilon \qquad\forall\epsilon\in\mathcal{E}_{\mathcal{I}},\] \[\left\{\!\left\{y\right\}\!\right\}=\frac{1}{2}\left(y^{+}+y^{-} \right)\text{ on }\epsilon \qquad\forall\epsilon\in\mathcal{E}_{\mathcal{I}},\] \[\left\{\!\left\{\mathcal{F}^{\nu}\left(y,\nabla y\right)\right\} \!\right\}=\frac{1}{2}\left(\mathcal{F}^{\nu}\left(y^{+},\nabla y^{+}\right)+ \mathcal{F}^{\nu}\left(y^{-},\nabla y^{-}\right)\right)\text{ on }\epsilon \qquad\forall\epsilon\in\mathcal{E}_{\mathcal{I}},\] \[\mathcal{F}^{c\dagger}\left(y,n\right)=\mathcal{F}^{c\dagger} \left(y^{+},y^{-},n\right)\text{ on }\epsilon \qquad\forall\epsilon\in\mathcal{E}_{\mathcal{I}},\] \[\delta^{v}\left(y,\nabla y,n\right)=\delta^{v}\left(y^{+},y^{-},\nabla y^{+},\nabla y^{-},n\right)\text{ on }\epsilon \qquad\forall\epsilon\in\mathcal{E}_{\mathcal{I}},\]
at interior interfaces and
\[\left[\!\left[v\right]\!\right]=v^{+}\text{ on }\epsilon \qquad\forall\epsilon\in\mathcal{E}_{\partial},\] \[\left\{\!\left\{y\right\}\!\right\}=y_{\partial}\left(y^{+},n^{+ }\right)\text{ on }\epsilon \qquad\forall\epsilon\in\mathcal{E}_{\partial},\] \[\left\{\!\left\{\mathcal{F}^{\nu}\left(y,\nabla y\right)\right\} \!\right\}=\mathcal{F}_{\partial}^{\nu}\left(y_{\partial}\left(y^{+},n^{+} \right),\nabla y^{+}\right)\text{ on }\epsilon \qquad\forall\epsilon\in\mathcal{E}_{\partial},\] \[\mathcal{F}^{c\dagger}\left(y,n\right)=\mathcal{F}_{\partial}^{c \dagger}\left(y^{+},n^{+}\right)\text{ on }\epsilon \qquad\forall\epsilon\in\mathcal{E}_{\partial},\] \[\delta^{v}\left(y,\nabla y,n\right)=\delta^{v}_{\partial}\left(y^ {+},y_{\partial}\left(y^{+},n^{+}\right),\nabla y^{+},n^{+}\right)\text{ on }\epsilon \qquad\forall\epsilon\in\mathcal{E}_{\partial},\]
at boundary interfaces, where \(y_{\partial}\left(y^{+},n^{+}\right)\) is the boundary state, \(\mathcal{F}_{\partial}^{c\dagger}\left(y^{+},n^{+}\right)\) is the inviscid boundary flux function, \(\mathcal{F}_{\partial}^{v}\left(y_{\partial}\left(y^{+},n^{+}\right),\nabla y^{+}\right)\) is the viscous boundary flux, and \(\delta_{\partial}^{v}\left(y^{+},y_{\partial}\left(y^{+},n^{+}\right),\nabla y^{+},n^{+}\right)\) is the boundary penalty term. Appendix A provides a discussion of the prescription of various boundary conditions.
Strang splitting [36] is applied to decouple the temporal integration of the transport operators from that of the stiff chemical source term over a given interval \(\left(t_{0},t_{0}+\Delta t\right]\) as
\[\frac{\partial y}{\partial t}+\nabla\cdot\mathcal{F}\left(y\right) =0\text{ in }\Omega\times\left(t_{0},t_{0}+\nicefrac{{\Delta t}}{{2}}\right], \tag{3.3}\] \[\frac{\partial y}{\partial t}-\mathcal{S}\left(y\right) =0\text{ in }\left(t_{0},t_{0}+\Delta t\right],\] (3.4) \[\frac{\partial y}{\partial t}+\nabla\cdot\mathcal{F}\left(y\right) =0\text{ in }\Omega\times\left(t_{0}+\nicefrac{{\Delta t}}{{2}},t_{0}+\Delta t \right]. \tag{3.5}\]
Equations (3.3) and (3.5) are advanced in time using a strong-stability-preserving Runge-Kutta method (SSPRK) [37, 38], whereas Equation (3.4) is solved using a fully implicit, temporal DG discretization. Since the reaction step is identical between the inviscid and viscous cases, we refer the reader to [10] and [11] for more details on the DG discretization in time for Equation (3.4). Here, we focus on the transport step.
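The ordering of the split steps can be summarized by a short driver loop; `ssprk_step` and `reaction_solve` below are hypothetical stand-ins for the SSPRK transport update and the implicit temporal-DG reaction solve, so this is only a sketch of the sequencing, not of either solver.

```python
def strang_step(y, t0, dt, ssprk_step, reaction_solve):
    """One Strang-split step over (t0, t0 + dt], following Eqs. (3.3)-(3.5)."""
    y = ssprk_step(y, t0, 0.5 * dt)             # transport half step, Eq. (3.3)
    y = reaction_solve(y, t0, dt)               # stiff reaction over the full step, Eq. (3.4)
    y = ssprk_step(y, t0 + 0.5 * dt, 0.5 * dt)  # transport half step, Eq. (3.5)
    return y

# Toy usage with placeholder operators (identity transport, linear decay reaction).
ssprk_step = lambda y, t, dt: y
reaction_solve = lambda y, t, dt: y / (1.0 + 0.1 * dt)   # backward-Euler-like decay
print(strang_step(1.0, 0.0, 0.01, ssprk_step, reaction_solve))
```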
We assume a nodal basis, such that the local solution approximation is given by
\[y_{\kappa}=\sum_{j=1}^{n_{b}}y_{\kappa}(x_{j})\phi_{j}, \tag{3.6}\]
where \(\phi_{j}\) is the \(j\)th basis function, \(n_{b}\) is the number of basis functions, and \(x_{j}\) is the physical coordinate of the \(j\)th node. Unless otherwise specified, the volume and surface integrals in Equation (3.2) are computed using a quadrature-free approach [39, 40]. Furthermore, the flux can be approximated as
\[\mathcal{F}_{\kappa}\left(y\right)\approx\sum_{k=1}^{n_{c}}\mathcal{F}\left(y _{\kappa}\left(x_{k}\right),\nabla y_{\kappa}\left(x_{k}\right)\right)\varphi _{k}, \tag{3.7}\]
where \(n_{c}\geq n_{b}\) and \(\left\{\varphi_{1},\ldots,\varphi_{n_{c}}\right\}\) is a set of basis functions that may be different from those in Equation (3.6). As discussed in [2], pressure equilibrium is maintained in smooth regions of the flow and at material interfaces if \(n_{c}=n_{b}\) and the integration points are in the solution nodal set. However, if over-integration is desired (i.e., \(n_{c}>n_{b}\)), the standard flux interpolation (3.7) results in the generation of spurious pressure oscillations. Therefore, in the case of over-integration, Equation (3.7) is replaced with
\[\mathcal{F}_{\kappa}\left(y\right)\approx\sum_{k=1}^{n_{c}}\mathcal{F}_{\kappa }\left(\widetilde{y}_{\kappa}\left(x_{k}\right),\nabla y_{\kappa}\left(x_{k} \right)\right)\varphi_{k}=\sum_{k=1}^{n_{c}}\mathcal{F}_{\kappa}^{c}\left( \widetilde{y}_{\kappa}\left(x_{k}\right)\right)\varphi_{k}-\sum_{k=1}^{n_{c}} \mathcal{F}_{\kappa}^{v}\left(\widetilde{y}_{\kappa}\left(x_{k}\right), \nabla y_{\kappa}\left(x_{k}\right)\right)\varphi_{k}, \tag{3.8}\]
where
\[\mathcal{F}_{\kappa}^{v}\left(\widetilde{y}_{\kappa}\left(x_{k}\right),\nabla y _{\kappa}\left(x_{k}\right)\right)=G\left(\widetilde{y}_{\kappa}\left(x_{k} \right)\right):\nabla y_{\kappa}\left(x_{k}\right)\]
and \(\widetilde{y}\) is a modified state given by
\[\widetilde{y}\left(y,\widetilde{P}\right)=\left(\rho v_{1},\ldots,\rho v_{d}, \widetilde{\rho u}\left(C_{1},\ldots,C_{n_{s}},\widetilde{P}\right)+\frac{1}{ 2}\sum_{k=1}^{d}\rho v_{k}v_{k},C_{1},\ldots,C_{n_{s}}\right)^{T}. \tag{3.9}\]
\(\widetilde{P}\) in Equation (3.9) is a polynomial in \(\mathcal{P}_{p}(\kappa)\) that approximates the pressure as
\[\widetilde{P}_{\kappa}=\sum_{j=1}^{n_{b}}P\left(y_{\kappa}\left(x_{j}\right) \right)\phi_{j},\]
from which the modified internal energy, \(\widetilde{\rho u}\), is calculated. Furthermore, in Equation (3.2), \(\delta^{v}\left(y,\nabla y,n\right)\) and \(G\left(y^{+}\right):\left(\left\{\!\left\{y\right\}\!\right\}-y^{+}\right)\otimes n\) are replaced with \(\delta^{v}\left(\widetilde{y},\nabla y,n\right)\) and \(G\left(\widetilde{y}^{+}\right):\left(\left\{\!\left\{\widetilde{y}\right\}\!\right\}-\widetilde{y}^{+}\right)\otimes n\), respectively. The modified flux interpolation (3.8) successfully preserves pressure equilibrium within and between elements (unless the solution is not adequately resolved, in which case measurable pressure disturbances are inevitable).
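To make the construction of the modified state concrete, the sketch below interpolates the nodal pressures with the solution basis and rebuilds the total energy at the flux points from the interpolated pressure. A single-species ideal gas with constant \(c_{v}\) is assumed purely for brevity; the paper's thermodynamics (thermally perfect mixtures) are more general, so treat this as an illustration of Equation (3.9) rather than the actual implementation.

```python
import numpy as np

R_GAS, CV = 287.0, 717.5   # toy ideal-gas constants (assumption for this sketch only)

def modified_state_at_points(y_nodes, phi):
    """Build y_tilde (Eq. 3.9) at flux points for a 1D, single-species state.

    y_nodes : (n_b, 3) nodal states ordered as (rho*v, rho*e_t, rho)
    phi     : (n_c, n_b) solution basis functions evaluated at the flux points
    """
    rho_v, rho_et, rho = y_nodes[:, 0], y_nodes[:, 1], y_nodes[:, 2]
    rho_u = rho_et - 0.5 * rho_v**2 / rho       # nodal internal energy rho*u
    P_nodes = (R_GAS / CV) * rho_u              # nodal pressure P(y(x_j))

    y_pts = phi @ y_nodes                       # interpolated conservative state
    P_pts = phi @ P_nodes                       # interpolated pressure P_tilde

    y_tilde = y_pts.copy()
    rho_u_tilde = (CV / R_GAS) * P_pts          # modified internal energy from P_tilde
    y_tilde[:, 1] = rho_u_tilde + 0.5 * y_pts[:, 0]**2 / y_pts[:, 2]
    return y_tilde

# Two-node P1 element evaluated at three flux points (hypothetical basis values).
y_nodes = np.array([[10.0, 2.5e5, 1.20], [12.0, 2.6e5, 1.25]])
phi = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
print(modified_state_at_points(y_nodes, phi))
```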
Since the linear-scaling limiter used to enforce the positivity property and entropy boundedness does not completely eliminate small-scale nonphysical artifacts [13; 12; 41; 42], especially near non-smooth features, we add the artificial dissipation term [33]
\[-\sum_{\kappa\in\mathcal{T}}\left(\nu_{\rm AV}\nabla y,\nabla\mathfrak{v} \right)_{\kappa} \tag{3.10}\]
to the LHS of Equation (3.2), where \(\nu_{\rm AV}\) is the artificial viscosity, calculated as [2]
\[\nu_{\rm AV}=\left(C_{\rm AV}+S_{\rm AV}\right)\left(\frac{h^{2}}{p+1}\left| \frac{\partial T}{\partial y}\cdot\frac{\mathcal{R}\left(y,\nabla y\right)}{T }\right|\right).\]
\(S_{\rm AV}\) is a shock sensor based on solution variations inside a given element [43], \(C_{\rm AV}\) is a user-defined parameter, \(h\) is a length scale associated with the element, and \(\mathcal{R}\left(y,\nabla y\right)\) is the strong form of the residual (2.1). In our previous work, this artificial viscosity formulation effectively mitigated small-scale nonlinear instabilities in various multicomponent-flow problems [2; 10]. However, other types of artificial viscosity or limiters can be employed instead. Additional details on the basic DG discretization and the issue of pressure equilibrium preservation can be found in [2].
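The artificial viscosity above is evaluated element by element; a literal transcription is given below. The shock sensor, the derivative of temperature with respect to the state, and the strong-form residual are supplied by the caller (they depend on details not repeated here), so the numbers in the example call are arbitrary.

```python
import numpy as np

def artificial_viscosity(C_AV, S_AV, h, p, dT_dy, residual, T):
    """nu_AV = (C_AV + S_AV) * (h^2/(p+1)) * | (dT/dy . R(y, grad y)) / T |."""
    return (C_AV + S_AV) * (h**2 / (p + 1)) * abs(np.dot(dT_dy, residual) / T)

# Arbitrary illustrative values for a single element.
print(artificial_viscosity(C_AV=0.5, S_AV=0.0, h=1e-3, p=2,
                           dT_dy=np.array([0.0, 1.2e-3, -4.0]),
                           residual=np.array([1.0e2, 3.0e5, 2.0e1]),
                           T=1500.0))
```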
## 4 Transport step: Positivity-preserving, entropy-bounded discontinuous Galerkin method
Let \(\mathcal{G}_{\sigma}\) denote the following set:
\[\mathcal{G}_{\sigma}=\left\{y\mid\rho>0,\rho u^{*}>0,C_{1}\geq 0,\ldots,C_{n_{s}} \geq 0,\chi_{\sigma}\geq 0\right\}, \tag{4.1}\]
where \(\sigma\in\mathbb{R}\), \(\chi_{\sigma}=\rho s-\rho\sigma\), and \(u^{*}\) is the "shifted" internal energy [44], calculated as
\[u^{*}=u-u_{0},\quad u_{0}=\left.u\right|_{T=0}, \tag{4.2}\]
such that \(u^{*}>0\) if and only if \(T>0\), provided \(c_{v,i}>0\), \(i=1,\ldots,n_{s}\) [45]. Note that \(\rho>0\) and \(u^{*}>0\) imply \(P(y)>0\). The \(\chi_{\sigma}\geq 0\) inequality is associated with entropy boundedness, which will be discussed in more detail later in this section. Let \(\mathcal{G}\) denote a similar set, but without the entropy constraint, i.e.,
\[\mathcal{G}=\left\{y\mid\rho>0,\rho u^{*}>0,C_{1}\geq 0,\ldots,C_{n_{s}}\geq 0 \right\}. \tag{4.3}\]
Since \(\rho u^{*}(y)\) is a concave function of the state [10], \(\mathcal{G}\) is a convex set. If all species concentrations are strictly positive, then for a given \(\sigma\), \(\chi_{\sigma}\) is concave [12; 10; 9] and \(\mathcal{G}_{\sigma}\) is also a convex set. However, if any of the species concentrations is zero, then \(\chi_{\sigma}\) is no longer concave [9; 46]. For the remainder of this paper, in any discussion of entropy, \(\mathcal{G}_{\sigma}\) is always assumed to be a convex set. Note that this assumption does not seem to have any discernible negative effects on the solver [10; 11]. In addition, as will be made clear in the upcoming subsection and Remark 2, positive species concentrations are assumed until Section 4.7, wherein this restriction is relaxed to allow for consideration of zero concentrations.
### Positivity-preserving Lax-Friedrichs-type viscous flux function
In this subsection, we extend the local Lax-Friedrichs-type viscous flux function by Zhang [16] to multicomponent flows with species diffusion. In particular, we consider the viscous flux separately from the inviscid flux, unlike Du and Yang [17], who adapted said flux function to multicomponent flows in a manner that treats both fluxes simultaneously. Unless otherwise specified, we assume this flux function is employed for the remainder of the section. The penalty term takes the form [16]
\[\delta^{v}(y^{+},y^{-},\nabla y^{+},\nabla y^{-},n)=\frac{\beta}{2}\left(y^{ +}-y^{-}\right),\]
where \(\beta>0\) is the dissipation coefficient. The lemma below introduces a constraint on the definition of \(\beta\) that is essential for satisfaction of the positivity property by the DG formulation, as will be discussed in
Section 4.2.2. In Section 5.1, we demonstrate that this viscous flux function achieves optimal convergence for smooth flows. For compatibility with boundary conditions and the aforementioned pressure-equilibrium-preserving techniques, the definition of \(\beta\) is first presented in terms of the following expansion of the viscous flux:
\[\mathcal{F}^{v}=\left(\mathcal{F}^{v}_{\rho v},\mathcal{F}^{v}_{\rho e_{t}}, \mathcal{F}^{v}_{C_{1}},\ldots,\mathcal{F}^{v}_{C_{n_{s}}}\right)^{T},\]
where \(\mathcal{F}^{v}_{\rho v}\) is the viscous momentum flux, \(\mathcal{F}^{v}_{\rho e_{t}}\) is the viscous total-energy flux, and \(\mathcal{F}^{v}_{C_{i}}\) is the viscous molar flux of the \(i\)th species. Furthermore, until Section 4.7, we assume that all species concentrations are strictly positive, unless otherwise specified.
**Lemma 1**.: _Assume that \(y=(\rho v,\rho e_{t},C)^{T}\) is in \(\mathcal{G}\) and that \(C_{i}>0,\,\forall i\). Then \(y\pm\beta^{-1}\mathcal{F}^{v}\cdot n\), where \(n\) is a given unit vector, is also in \(\mathcal{G}\) under the following conditions:_
\[\beta>\beta^{*}\left(y,\mathcal{F}^{v},n\right)=\left.\max\left\{\max_{i=1, \ldots,n_{s}}\frac{\left|\mathcal{F}^{v}_{C_{i}}\cdot n\right|}{C_{i}},\beta_ {T}\right\}\right|_{\left(y,\mathcal{F}^{v},n\right)}, \tag{4.4}\]
_where_
\[\beta_{T}=\frac{\left|b\right|+\sqrt{b^{2}+2\rho^{2}u^{*}\left|\mathcal{F}^{v }_{\rho v}\cdot n\right|^{2}}}{2\rho^{2}u^{*}}, \tag{4.5}\]
_with \(b=\rho\mathcal{F}^{v}_{\rho e_{t}}\cdot n-\rho v\cdot\mathcal{F}^{v}_{\rho v}\cdot n\)._
Proof.: \(y\pm\beta^{-1}\mathcal{F}^{v}\cdot n\) can be expanded as
\[y\pm\beta^{-1}\mathcal{F}^{v}\cdot n= \left(\rho v\pm\beta^{-1}\mathcal{F}^{v}_{\rho v}\cdot n,\rho e_{ t}\pm\beta^{-1}\mathcal{F}^{v}_{\rho e_{t}}\cdot n,C_{1}\pm\beta^{-1} \mathcal{F}^{v}_{C_{1}}\cdot n,\ldots,C_{n_{s}}\pm\beta^{-1}\mathcal{F}^{v}_{C _{n_{s}}}\cdot n\right)^{T}.\]
First, we focus on positivity of density and species concentrations. For the \(i\)th species, \(C_{i}\pm\beta^{-1}\mathcal{F}^{v}_{C_{i}}\cdot n>0\) if and only if \(\beta>\left|\mathcal{F}^{v}_{C_{i}}\cdot n\right|/C_{i}\). Accounting for all species yields
\[\beta>\max_{i=1,\ldots,n_{s}}\frac{\left|\mathcal{F}^{v}_{C_{i}}\cdot n \right|}{C_{i}}. \tag{4.6}\]
Density is then also positive.
Next, we focus on positivity of temperature. For a given \(y=\left(\rho v,\rho e_{t},C\right)^{T}\), let \(Z(y)\) be defined as
\[Z(y)=\rho^{2}u^{*}(y)=\rho(y)\rho e_{t}-\left|\rho v\right|^{2}/2-\rho^{2}u_{0 }(y). \tag{4.7}\]
Note that \(Z(y)>0\) implies \(T(y)>0\). \(Z\left(y\pm\beta^{-1}\mathcal{F}^{v}\cdot n\right)\) can be expressed as
\[Z\left(y\pm\beta^{-1}\mathcal{F}^{v}\cdot n\right)= \sum_{i=1}^{n_{s}}W_{i}\left(C_{i}\pm\beta^{-1}\mathcal{F}^{v}_{C_ {i}}\cdot n\right)\left(\rho e_{t}\pm\beta^{-1}\mathcal{F}^{v}_{\rho e_{t}} \cdot n\right)\] \[-\frac{1}{2}\left|\rho v\pm\beta^{-1}\mathcal{F}^{v}_{\rho v} \cdot n\right|^{2}-\left[\sum_{i=1}^{n_{s}}W_{i}\left(C_{i}\pm\beta^{-1} \mathcal{F}^{v}_{C_{i}}\cdot n\right)\right]^{2}u_{0},\]
which, after multiplying both sides by \(\beta^{2}\) and some algebraic manipulation, can be rewritten as
\[\beta^{2}Z\left(y\pm\beta^{-1}\mathcal{F}^{v}\cdot n\right)= \rho^{2}u^{*}\beta^{2}\pm b\beta+g, \tag{4.8}\]
where \(b=\rho e_{t}M+\rho\mathcal{F}^{v}_{\rho e_{t}}\cdot n-\rho v\cdot\mathcal{F}^ {v}_{\rho v}\cdot n-2\rho u_{0}M\), \(g=M\mathcal{F}^{v}_{\rho e_{t}}\cdot n-\frac{1}{2}\left|\mathcal{F}^{v}_{\rho v }\cdot n\right|^{2}-u_{0}M^{2}\), and \(M=\sum_{i=1}^{n_{s}}W_{i}\mathcal{F}^{v}_{C_{i}}\cdot n\). By mass conservation, \(M=0\), such that \(b=\rho\mathcal{F}^{v}_{\rho e_{t}}\cdot n-\rho v\cdot\mathcal{F}^{v}_{\rho v}\cdot n\) and \(g=-\frac{1}{2}\left|\mathcal{F}^{v}_{\rho v}\cdot n\right|^{2}\). Setting the RHS of Equation (4.8) equal to zero yields two quadratic equations with \(\beta\) as the unknowns. Since \(\rho^{2}u^{*}\) is positive, the two quadratic equations are convex. Furthermore, since \(b^{2}+2\rho^{2}u^{*}\left|\mathcal{F}^{v}_{\rho v}\cdot n\right|^{2}\geq 0\), for each of the two quadratic equations, the roots are real and at least one is nonnegative. A sufficient condition to ensure \(Z\left(y\pm\beta^{-1}\mathcal{F}^{v}\cdot n\right)>0\) is \(\beta>\beta_{T}\geq 0\), where \(\beta_{T}\), given by Equation (4.5), is the largest of all roots of the quadratic equations. Combining this with the inequality (4.6) yields (4.4).
_Remark 2_.: Lemma 1 and the inequality (4.6) assume that the species concentrations are positive. If \(C_{i}=0\) and \(\mathcal{F}^{v}_{C_{i}}\cdot n\neq 0\), then there exists no finite value of \(\beta\) such that \(C_{i}\pm\beta^{-1}\mathcal{F}^{v}_{C_{i}}\cdot n\geq 0\) since \(\mathcal{F}^{v}_{C_{i}}\) is not directly proportional to \(C_{i}\). Specifically, the \(k\)th spatial component of \(\mathcal{F}^{v}_{C_{i}}(y,\nabla y)\) can be written as
\[\mathcal{F}^{v}_{C_{i},k}(y,\nabla y) =C_{i}V_{ik}\] \[=C_{i}\hat{V}_{ik}-\frac{C_{i}\sum_{l=1}^{n_{s}}W_{l}C_{l}\hat{V}_ {lk}}{\rho}\] \[=\bar{D}_{i}\frac{\partial C_{i}}{\partial x_{k}}-\frac{C_{i}\bar {D}_{i}}{\rho}\frac{\partial\rho}{\partial x_{k}}-\frac{C_{i}}{\rho}\sum_{l=1 }^{n_{s}}W_{l}\left(\bar{D}_{l}\frac{\partial C_{l}}{\partial x_{k}}-\frac{C_ {l}\bar{D}_{l}}{\rho}\frac{\partial\rho}{\partial x_{k}}\right), \tag{4.9}\]
such that \(\mathcal{F}^{v}_{C_{i},k}\) can be nonzero even if \(C_{i}=0\). As previously mentioned, however, it is crucial to account for zero concentrations. In Section 4.7, we relax this restriction and discuss how to ensure nonnegative species concentrations, which is done in a different manner from how positive density and temperature are guaranteed.
_Remark 3_.: The constraint on \(\beta\) in (4.4) is left in abstract form, i.e., in terms of \(\mathcal{F}^{v}=\left(\mathcal{F}^{v}_{\rho v},\mathcal{F}^{v}_{\rho e_{t}}, \mathcal{F}^{v}_{C_{1}},\ldots,\mathcal{F}^{v}_{C_{n_{s}}}\right)^{T}\). This is to allow for consideration of, for example, \(\widetilde{y}\pm\mathcal{F}^{v}\left(\widetilde{y},\nabla y\right)\cdot n\), where \(y\neq\widetilde{y}\), which is necessary for the modified flux interpolation (3.8) and for boundary conditions. If we take \(\mathcal{F}^{v}=\mathcal{F}^{v}\left(y,\nabla y\right)\) and substitute the definitions of each component of \(\mathcal{F}^{v}\), the constraint on \(\beta\) reduces to
\[\beta>\max\left\{\max_{i=1,\ldots,n_{s}}\left|V_{i}\cdot n\right|,\beta_{T} \right\}, \tag{4.10}\]
where Equation (4.5) is now given by
\[\beta_{T}=\frac{\left|b\right|+\sqrt{b^{2}+2\rho^{2}u^{*}\left|\tau\cdot n \right|^{2}}}{2\rho^{2}u^{*}},\]
with
\[b=\rho q\cdot n+\rho\sum_{i=1}^{n_{s}}W_{i}C_{i}h_{i}V_{i}\cdot n.\]
If species diffusion is neglected, these expressions recover those in [16] for the monocomponent case.
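When implementing the flux function, the bound (4.4)-(4.5) can be transcribed directly. The sketch below takes the viscous-flux components and thermodynamic quantities as given arrays (their evaluation is solver-specific) and assumes strictly positive concentrations, consistent with the discussion above; in practice the dissipation coefficient actually used would be taken strictly larger than the returned bound.

```python
import numpy as np

def beta_lower_bound(C, rho, u_star, rho_v, F_rho_v, F_rho_et, F_C, n):
    """Lower bound beta* of Eq. (4.4)-(4.5).

    C        : (n_s,) species concentrations, assumed strictly positive
    rho      : density;  u_star : shifted internal energy (positive)
    rho_v    : (d,) momentum;  F_rho_v : (d, d) viscous momentum flux
    F_rho_et : (d,) viscous total-energy flux;  F_C : (n_s, d) viscous molar fluxes
    n        : (d,) unit normal
    """
    beta_species = np.max(np.abs(F_C @ n) / C)           # species part of (4.4)

    Fv_n = F_rho_v @ n                                    # momentum flux dotted with n
    b = rho * (F_rho_et @ n) - rho_v @ Fv_n               # b as defined in Lemma 1
    beta_T = (abs(b) + np.sqrt(b**2 + 2.0 * rho**2 * u_star * np.dot(Fv_n, Fv_n))) \
             / (2.0 * rho**2 * u_star)                    # Eq. (4.5)
    return max(beta_species, beta_T)
```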
_Remark 4_.: \(\sum_{i}W_{i}\left[C_{i}\pm\beta^{-1}\mathcal{F}^{v}_{C_{i}}(C,\nabla C) \cdot n\right]\) recovers \(\rho\) since
\[\sum_{i=1}^{n_{s}}W_{i}\left[C_{i}\pm\beta^{-1}\mathcal{F}^{v}_{ C_{i}}(C,\nabla C)\cdot n\right] =\sum_{i=1}^{n_{s}}W_{i}C_{i}\pm\beta^{-1}\sum_{i=1}^{n_{s}}W_{i }C_{i}V_{i}\cdot n\] \[=\sum_{i=1}^{n_{s}}W_{i}C_{i}\] \[=\rho,\]
where the second line is due to mass conservation, i.e., \(\sum_{i=1}^{n_{s}}W_{i}C_{i}V_{ik}=0,\;k=1,\ldots,d\).
_Remark 5_.: Combining the convective and diffusive fluxes into a single flux, as done by Zhang [16] and Du and Yang [17], results in a different constraint on \(\beta\). As discussed in Section 1, in this work, we elect to use the HLLC inviscid flux function since in our experience, it typically produces more accurate solutions than the Lax-Friedrichs inviscid flux function. As such, the inviscid and viscous fluxes are treated separately in our formulation.
### One-dimensional case
In this subsection, we consider the one-dimensional case. We first focus on \(p=0\) before proceeding to \(p\geq 1\). Without loss of generality, we assume a uniform grid with element size \(h\).
#### 4.2.1 First-order DG scheme in one dimension
Consider the following \(p=0\), element-local DG discretization with forward Euler time stepping:
\[y_{\kappa}^{j+1}= y_{\kappa}^{j}-\frac{\Delta t}{h}\left[\mathcal{F}^{c\dagger} \left(y_{\kappa}^{j},y_{\kappa_{L}}^{j},-1\right)+\mathcal{F}^{c\dagger}\left(y _{\kappa}^{j},y_{\kappa_{R}}^{j},1\right)\right]\] \[+\frac{\Delta t}{h}\left[-\mathcal{F}^{v}\left(y_{\kappa_{L}}^{j}, \nabla y_{\kappa_{L}}^{j}\right)+\mathcal{F}^{v}\left(y_{\kappa_{R}}^{j}, \nabla y_{\kappa_{R}}^{j}\right)-\delta^{v}\left(y_{\kappa}^{j},y_{\kappa_{L}}^ {j},\nabla y_{\kappa}^{j},\nabla y_{\kappa_{L}}^{j},-1\right)-\delta^{v}\left( y_{\kappa}^{j},y_{\kappa_{R}}^{j},\nabla y_{\kappa}^{j},\nabla y_{\kappa_{R}}^{j},1 \right)\right], \tag{4.11}\]
where \(\Delta t\) is the time step size, \(j\) is the time step index, and \(\kappa_{L}\) and \(\kappa_{R}\) are the elements to the left and right of \(\kappa\), respectively. Equation (4.11) can be rearranged to split the convective and diffusive contributions as [16]
\[y_{\kappa}^{j+1} =\frac{1}{2}\left(y_{\kappa,c}^{j+1}+y_{\kappa,v}^{j+1}\right), \tag{4.12}\] \[y_{\kappa,c}^{j+1} =y_{\kappa}^{j}-\frac{\Delta t^{*}}{h}\left[\mathcal{F}^{c\dagger}\left(y_{\kappa}^{j},y_{\kappa_{L}}^{j},-1\right)+\mathcal{F}^{c\dagger}\left(y_{\kappa}^{j},y_{\kappa_{R}}^{j},1\right)\right], \tag{4.13}\] \[y_{\kappa,v}^{j+1} =y_{\kappa}^{j}+\frac{\Delta t^{*}}{h}\left[-\frac{1}{2}\mathcal{F}^{v}\left(y_{\kappa_{L}}^{j},\nabla y_{\kappa_{L}}^{j}\right)+\frac{1}{2}\mathcal{F}^{v}\left(y_{\kappa_{R}}^{j},\nabla y_{\kappa_{R}}^{j}\right)-\delta^{v}\left(y_{\kappa}^{j},y_{\kappa_{L}}^{j},\nabla y_{\kappa}^{j},\nabla y_{\kappa_{L}}^{j},-1\right)-\delta^{v}\left(y_{\kappa}^{j},y_{\kappa_{R}}^{j},\nabla y_{\kappa}^{j},\nabla y_{\kappa_{R}}^{j},1\right)\right], \tag{4.14}\]
where \(\Delta t^{*}=2\Delta t\).
First taking into account the convective contribution, let \(\lambda\) be an upper bound on the maximum wave speed of the system. \(y_{\kappa}^{j},y_{\kappa_{L}}^{j},y_{\kappa_{R}}^{j}\in\mathcal{G}_{\sigma}\) implies \(y_{\kappa,c}^{j+1}\in\mathcal{G}_{\sigma}\) if an _invariant-region-preserving_ flux function [12] is employed and the time step size satisfies
\[\frac{\Delta t^{*}\lambda}{h}\leq\frac{1}{2}. \tag{4.15}\]
The Godunov, Lax-Friedrichs, HLL, and HLLC inviscid flux functions are invariant-region-preserving [12]. Since the focus of this paper is the diffusive contribution, we refer the reader to [10] and the references therein for additional information on the convective contribution.
For \(p=0\), \(\mathcal{F}^{v}\left(y_{\kappa},\nabla y_{\kappa}\right)=G(y_{\kappa}):\nabla y _{\kappa}=0\) since \(\nabla y_{\kappa}=0\). As such, Equation (4.14) reduces to
\[y_{\kappa,v}^{j+1} =y_{\kappa}^{j}+\frac{\Delta t^{*}}{h}\left[-\delta^{v}\left(y_{ \kappa}^{j},y_{\kappa_{L}}^{j},\nabla y_{\kappa}^{j},\nabla y_{\kappa_{L}}^{ j},-1\right)-\delta^{v}\left(y_{\kappa}^{j},y_{\kappa_{R}}^{j},\nabla y_{ \kappa}^{j},\nabla y_{\kappa_{R}}^{j},1\right)\right]\] \[=\left[1-\frac{\Delta t^{*}}{2h}\left(\beta_{\kappa_{L}}+\beta_{ \kappa_{R}}\right)\right]y_{\kappa}^{j}+\frac{\Delta t^{*}}{2h}\beta_{\kappa_{ L}}y_{\kappa_{L}}^{j}+\frac{\Delta t^{*}}{2h}\beta_{\kappa_{R}}y_{\kappa_{R}}^{j}. \tag{4.16}\]
Under the time-step-size constraint
\[\frac{\Delta t^{*}}{2h}\max_{\kappa}\left(\beta_{\kappa_{L}}+\beta_{\kappa_{R }}\right)\leq 1,\]
the RHS of Equation (4.16) is a convex combination of \(y_{\kappa}^{j}\), \(y_{\kappa_{L}}^{j}\), and \(y_{\kappa_{R}}^{j}\) for any \(\kappa\). \(y_{\kappa}^{j},y_{\kappa_{L}}^{j},y_{\kappa_{R}}^{j}\in\mathcal{G}\) then implies \(y_{\kappa,v}^{j+1}\in\mathcal{G}\). This holds even for zero species concentrations. Finally, since \(y_{\kappa}^{j+1}\) is a convex combination of \(y_{\kappa,c}^{j+1}\) and \(y_{\kappa,v}^{j+1}\), \(y_{\kappa}^{j},y_{\kappa_{L}}^{j},y_{\kappa_{R}}^{j}\in\mathcal{G}\) implies \(y_{\kappa}^{j+1}\in\mathcal{G}\). Note that in principle, this holds for any positive values of \(\beta_{\kappa_{L}}\) and \(\beta_{\kappa_{R}}\).
#### 4.2.2 High-order DG scheme in one dimension
Consider a quadrature rule with \(n_{q}\) points and positive weights denoted with \(x_{q}\) and \(w_{q}\), respectively, such that \(x_{q}\in\kappa=[x_{L},x_{R}]\), \(\sum_{q=1}^{n_{q}}w_{q}=1\), and \(n_{q}\geq n_{b}\). The endpoints, \(x_{L}\) and \(x_{R}\), need not be included in the set of quadrature points, and none of the volumetric integrals in Equation (3.2) need to be evaluated with this quadrature rule. The standard flux interpolation (3.7) is assumed here; the modified flux interpolation (3.8) will be accounted for in Section 4.2.3. As in [41], the element-local solution average
can be expanded as
\[\overline{y}_{\kappa} =\sum_{q=1}^{n_{q}}w_{q}y_{\kappa}(x_{q})\] \[=\sum_{q=1}^{n_{q}}\theta_{q}y_{\kappa}(x_{q})+\theta_{L}y_{\kappa}( x_{L})+\theta_{R}y_{\kappa}(x_{R}), \tag{4.17}\]
where, if the set of quadrature points includes the endpoints,
\[\theta_{q}=\begin{cases}w_{q}&x_{q}\neq x_{L},x_{q}\neq x_{R}\\ 0&\text{otherwise}\end{cases}\]
and
\[\theta_{L}=w_{L},\quad\theta_{R}=w_{R},\]
with \(w_{L}\) and \(w_{R}\) denoting the quadrature weights at the left and right endpoints, respectively. Otherwise, we take
\[\theta_{q}=w_{q}-\theta_{L}\psi_{q}\left(x_{L}\right)-\theta_{R}\psi_{q}\left( x_{R}\right),\]
where \(\psi_{1},\ldots,\psi_{n_{d}}\) form a set of Lagrange basis functions whose nodes are located at \(n_{d}\) points of the set \(\left\{x_{q},q=1,\ldots,n_{q}\right\}\), with \(n_{b}\leq n_{d}\leq n_{q}\), and \(\psi_{n_{d}+1},\ldots,\psi_{n_{q}}\) are equal to zero. As a result, \(\sum_{q=1}^{n_{q}}\theta_{q}y_{\kappa}(x_{q})\) can be written as
\[\sum_{q=1}^{n_{q}}\theta_{q}y_{\kappa}(x_{q}) =\sum_{q=1}^{n_{q}}\left[w_{q}-\theta_{L}\psi_{q}\left(x_{L}\right)-\theta_{R}\psi_{q}\left(x_{R}\right)\right]y_{\kappa}(x_{q})\] \[=\sum_{q=1}^{n_{q}}w_{q}y_{\kappa}(x_{q})-\theta_{L}\sum_{q=1}^{n_{q}}y_{\kappa}(x_{q})\psi_{q}\left(x_{L}\right)-\theta_{R}\sum_{q=1}^{n_{q}}y_{\kappa}(x_{q})\psi_{q}\left(x_{R}\right)\] \[=\sum_{q=1}^{n_{q}}w_{q}y_{\kappa}(x_{q})-\theta_{L}y_{\kappa}(x_{L})-\theta_{R}y_{\kappa}(x_{R}).\]
\(\theta_{L}\) and \(\theta_{R}\) will be related to a time-step-size constraint below (see [41] and [10] for additional details). Note that \(\sum_{q}\theta_{q}+\theta_{L}+\theta_{R}=1\) since \(\sum_{q=1}^{n_{q}}\psi_{q}=1\). Due to the positivity of the quadrature weights, there exist positive \(\theta_{L}\) and \(\theta_{R}\) that yield \(\theta_{q}\geq 0,\ q=1,\ldots,n_{q}\)[41]. Define \(\partial\mathcal{D}_{\kappa}=\left\{x_{L},x_{R}\right\}\), and let \(\mathcal{D}_{\kappa}\) denote the following set of points:
\[\mathcal{D}_{\kappa}=\partial\mathcal{D}_{\kappa}\bigcup\left\{x_{q},q=1, \ldots,n_{q}\right\}=\left\{x_{L},x_{R},x_{q},q=1,\ldots,n_{q}\right\}.\]
Employing the forward Euler time-integration scheme and taking \(\mathfrak{v}\in V_{h}^{0}\) yields the fully discrete scheme satisfied by the element averages,
\[\overline{y}_{\kappa}^{j+1}=\frac{1}{2}\left(\overline{y}_{\kappa,c}^{j+1}+ \overline{y}_{\kappa,v}^{j+1}\right),\]
where
\[\overline{y}_{\kappa,c}^{j+1}= \overline{y}_{\kappa}^{j}-\frac{\Delta t^{*}}{h}\left[\mathcal{F} ^{c\dagger}\left(y_{\kappa}^{j}(x_{L}),y_{\kappa_{L}}^{j}(x_{L}),-1\right)+ \mathcal{F}^{c\dagger}\left(y_{\kappa}^{j}(x_{R}),y_{\kappa_{R}}^{j}(x_{R}),1 \right)\right] \tag{4.18}\] \[= \sum_{q=1}^{n_{q}}\theta_{q}y_{\kappa}^{j}(x_{q})+\theta_{L}y_{ \kappa}^{j}(x_{L})-\frac{\Delta t^{*}}{h}\left[\mathcal{F}^{c\dagger}\left(y_{ \kappa}^{j}(x_{L}),y_{\kappa_{L}}^{j}(x_{L}),-1\right)+\mathcal{F}^{\dagger} \left(y_{\kappa}^{j}(x_{L}),y_{\kappa}^{j}(x_{R}),1\right)\right]\] \[+\theta_{R}y_{\kappa}^{j}(x_{R})-\frac{\Delta t^{*}}{h}\left[ \mathcal{F}^{c\dagger}\left(y_{\kappa}^{j}(x_{R}),y_{\kappa}^{j}(x_{L}),-1 \right)+\mathcal{F}^{\dagger}\left(y_{\kappa}^{j}(x_{R}),y_{\kappa_{R}}^{j}(x_{ R}),1\right)\right],\]
and
\[\overline{y}_{\kappa,v}^{j+1}= \overline{y}_{\kappa}^{j}+\frac{\Delta t^{*}}{h}\left[-\frac{1}{2}\mathcal{F}^{v}\left(y_{\kappa}^{j}(x_{L}),\nabla y_{\kappa}^{j}(x_{L})\right)-\frac{1}{2}\mathcal{F}^{v}\left(y_{\kappa_{L}}^{j}(x_{L}),\nabla y_{\kappa_{L}}^{j}(x_{L})\right)\right.\] \[+\frac{1}{2}\mathcal{F}^{v}\left(y_{\kappa}^{j}(x_{R}),\nabla y_{\kappa}^{j}(x_{R})\right)+\frac{1}{2}\mathcal{F}^{v}\left(y_{\kappa_{R}}^{j}(x_{R}),\nabla y_{\kappa_{R}}^{j}(x_{R})\right)\] \[\left.-\frac{\beta_{\kappa_{L}}}{2}y_{\kappa}^{j}(x_{L})+\frac{\beta_{\kappa_{L}}}{2}y_{\kappa_{L}}^{j}(x_{L})-\frac{\beta_{\kappa_{R}}}{2}y_{\kappa}^{j}(x_{R})+\frac{\beta_{\kappa_{R}}}{2}y_{\kappa_{R}}^{j}(x_{R})\right]\] \[= \sum_{q=1}^{n_{q}}\theta_{q}y_{\kappa}^{j}(x_{q})+\frac{\Delta t^{*}}{2h}\beta_{\kappa_{L}}\left[y_{\kappa_{L}}^{j}(x_{L})-\beta_{\kappa_{L}}^{-1}\mathcal{F}^{v}\left(y_{\kappa_{L}}^{j}(x_{L}),\nabla y_{\kappa_{L}}^{j}(x_{L})\right)\right] \tag{4.19}\] \[+\frac{\Delta t^{*}}{2h}\beta_{\kappa_{R}}\left[y_{\kappa_{R}}^{j}(x_{R})+\beta_{\kappa_{R}}^{-1}\mathcal{F}^{v}\left(y_{\kappa_{R}}^{j}(x_{R}),\nabla y_{\kappa_{R}}^{j}(x_{R})\right)\right]\] \[+\left(\theta_{L}-\frac{\Delta t^{*}}{2h}\beta_{\kappa_{L}}\right)\left[y_{\kappa}^{j}(x_{L})-\frac{\Delta t^{*}}{2h\left(\theta_{L}-\frac{\Delta t^{*}}{2h}\beta_{\kappa_{L}}\right)}\mathcal{F}^{v}\left(y_{\kappa}^{j}(x_{L}),\nabla y_{\kappa}^{j}(x_{L})\right)\right]\] \[+\left(\theta_{R}-\frac{\Delta t^{*}}{2h}\beta_{\kappa_{R}}\right)\left[y_{\kappa}^{j}(x_{R})+\frac{\Delta t^{*}}{2h\left(\theta_{R}-\frac{\Delta t^{*}}{2h}\beta_{\kappa_{R}}\right)}\mathcal{F}^{v}\left(y_{\kappa}^{j}(x_{R}),\nabla y_{\kappa}^{j}(x_{R})\right)\right].\]
The second equality in Equation (4.18) is due to the conservation property of the numerical flux:
\[\mathcal{F}^{\dagger}\left(y_{\kappa}^{j}(x_{L}),y_{\kappa}^{j}(x_{R}),1 \right)=-\mathcal{F}^{\dagger}\left(y_{\kappa}^{j}(x_{R}),y_{\kappa}^{j}(x_{L }),-1\right).\]
Note that Equations (4.18) and (4.19) hold regardless of whether the integrals in Equation (3.2) are computed with conventional quadrature or a quadrature-free approach [39, 40].
The limiting strategy, which is described in Section 4.3, requires that \(\overline{y}_{\kappa,c}^{j+1}\) and \(\overline{y}_{\kappa,v}^{j+1}\) be in \(\mathcal{G}_{s_{b}}\) and \(\mathcal{G}\), respectively, where \(s_{b}\) is a lower bound on the specific thermodynamic entropy. As discussed in [10], we employ a local entropy bound,
\[s_{b,\kappa}^{j+1}(y)=\min\left\{s\left(y^{j}(x)\right)|x\in\mathcal{D}_{ \kappa}\cup\mathcal{D}_{\kappa_{L}}\cup\mathcal{D}_{\kappa_{R}}\right\}, \tag{4.20}\]
which is motivated by the minimum entropy principle satisfied by entropy solutions to the multicomponent Euler equations [9]. It can be shown that if \(y_{\kappa}^{j}(x)\in\mathcal{G}_{s_{b}},\ \forall x\in\mathcal{D}_{\kappa}\), and \(y_{\kappa}^{-,j}\in\mathcal{G}_{s_{b}},\ \forall x\in\partial\mathcal{D}_{\kappa}\), where \(y_{\kappa}^{-}\) denotes the exterior state along \(\partial\kappa\), then \(\overline{y}_{\kappa,c}^{j+1}\) is in \(\mathcal{G}_{s_{b}}\) under the time-step-size constraint
\[\frac{\Delta t^{*}\lambda}{h}\leq\frac{1}{2}\min\left\{\theta_{L},\theta_{R}\right\} \tag{4.21}\]
and the conditions
\[\theta_{L}>0,\theta_{R}>0,\theta_{q}\geq 0,q=1,\ldots,n_{q}. \tag{4.22}\]
More information can be found in [10]. The conditions under which \(\overline{y}_{\kappa,v}^{j+1}\in\mathcal{G}\) are analyzed in the following theorem.
**Theorem 6**.: _If \(y_{\kappa}^{j}(x)\in\mathcal{G},\ \forall x\in\mathcal{D}_{\kappa}\), and \(y_{\kappa}^{-,j}\in\mathcal{G},\ \forall x\in\partial\mathcal{D}_{\kappa}\), then \(\overline{y}_{\kappa,v}^{j+1}\) is also in \(\mathcal{G}\) under the time-step-size constraint_
\[\frac{\Delta t^{*}}{h}\leq\min\left\{\frac{\theta_{L}}{\beta_{\kappa_{L}}}, \frac{\theta_{R}}{\beta_{\kappa_{R}}}\right\}, \tag{4.23}\]
_the constraints on \(\beta\),_
\[\beta_{\kappa_{L}} >\max\left\{\beta^{*}\left(y_{\kappa}^{j}(x_{L}),\mathcal{F}^{v }\left(y_{\kappa}^{j}(x_{L}),\nabla y_{\kappa}^{j}(x_{L})\right),-1\right), \beta^{*}\left(y_{\kappa_{L}}^{j}(x_{L}),\mathcal{F}^{v}\left(y_{\kappa_{L}}^{j }(x_{L}),\nabla y_{\kappa_{L}}^{j}(x_{L})\right),-1\right)\right\}, \tag{4.24}\] \[\beta_{\kappa_{R}} >\max\left\{\beta^{*}\left(y_{\kappa}^{j}(x_{R}),\mathcal{F}^{v }\left(y_{\kappa}^{j}(x_{R}),\nabla y_{\kappa}^{j}(x_{R})\right),1\right), \beta^{*}\left(y_{\kappa_{R}}^{j}(x_{R}),\mathcal{F}^{v}\left(y_{\kappa_{R}}^{j }(x_{R}),\nabla y_{\kappa_{R}}^{j}(x_{R})\right),1\right)\right\}, \tag{4.25}\]
_and the conditions (4.22)._
Proof.: The inequality (4.24) guarantees that \(y^{j}_{\kappa_{L}}(x_{L})-\beta_{\kappa_{L}}^{-1}\mathcal{F}^{v}\left(y^{j}_{ \kappa_{L}}(x_{L}),\nabla y^{j}_{\kappa_{L}}(x_{L})\right)\in\mathcal{G}\). According to the time-step-size constraint (4.23), we have
\[\frac{\Delta t^{*}}{h}\leq\frac{\theta_{L}}{\beta_{\kappa_{L}}},\]
such that
\[\theta_{L}-\frac{\Delta t^{*}}{2h}\beta_{\kappa_{L}}\geq\frac{\Delta t^{*}}{2h }\beta_{\kappa_{L}}.\]
It follows that
\[\frac{\Delta t^{*}}{2h\left(\theta_{L}-\frac{\Delta t^{*}}{2h} \beta_{\kappa_{L}}\right)} \leq\frac{\Delta t^{*}}{2h\left(\frac{\Delta t^{*}}{2h}\beta_{ \kappa_{L}}\right)}\] \[=\beta_{\kappa_{L}}^{-1},\]
which means \(y^{j}_{\kappa}(x_{L})-\frac{\Delta t^{*}}{2h\left(\theta_{L}-\frac{\Delta t^{ *}}{2h}\beta_{\kappa_{L}}\right)}\mathcal{F}^{v}\left(y^{j}_{\kappa}(x_{L}), \nabla y^{j}_{\kappa}(x_{L})\right)\in\mathcal{G}\). Moreover, we have \(\frac{\Delta t^{*}}{2h}\beta_{\kappa_{L}}\leq\theta_{L}\leq 1\). The same arguments can be applied to show
\[y^{j}_{\kappa_{R}}(x_{R})+\beta_{\kappa_{R}}^{-1}\mathcal{F}^{v }\left(y^{j}_{\kappa_{R}}(x_{R}),\nabla y^{j}_{\kappa_{R}}(x_{R})\right)\in \mathcal{G},\] \[y^{j}_{\kappa}(x_{R})+\frac{\Delta t^{*}}{2h\left(\theta_{R}- \frac{\Delta t^{*}}{2h}\beta_{\kappa_{R}}\right)}\mathcal{F}^{v}\left(y^{j}_{ \kappa}(x_{R}),\nabla y^{j}_{\kappa}(x_{R})\right)\in\mathcal{G},\] \[\frac{\Delta t^{*}}{2h}\beta_{\kappa_{R}}\leq\theta_{R}\leq 1.\]
Therefore, \(\overline{y}^{j+1}_{\kappa,v}\) is a convex combination of states in \(\mathcal{G}\), such that \(\overline{y}^{j+1}_{\kappa,v}\in\mathcal{G}\).
_Remark 7_.: Though forward Euler time stepping is employed for demonstration purposes, any time integration scheme that can be expressed as a convex combination of forward Euler steps, such as strong-stability-preserving Runge-Kutta (SSPRK) methods, can be used.
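For a single element in one dimension, the two time-step-size constraints can be combined as in the sketch below; in a full solver the minimum is of course taken over all elements and interfaces, and a safety factor is typically applied, so this is only meant to make the relationship between \(\Delta t\) and \(\Delta t^{*}=2\Delta t\) explicit.

```python
def admissible_dt_1d(h, lam, theta_L, theta_R, beta_L, beta_R, safety=1.0):
    """Largest dt satisfying the convective constraint (4.21) and the viscous
    constraint (4.23) for one element, given the wave-speed bound lam and the
    interface dissipation coefficients beta_L, beta_R."""
    dt_star_conv = 0.5 * min(theta_L, theta_R) * h / lam         # from (4.21)
    dt_star_visc = h * min(theta_L / beta_L, theta_R / beta_R)   # from (4.23)
    dt_star = safety * min(dt_star_conv, dt_star_visc)
    return 0.5 * dt_star                                         # dt = dt*/2

# Example with arbitrary values.
print(admissible_dt_1d(h=1e-3, lam=800.0, theta_L=0.1, theta_R=0.1,
                       beta_L=50.0, beta_R=65.0))
```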
As previously mentioned, the final ingredient of the positivity-preserving, entropy-bounded DG scheme is a limiting strategy (described in Section 4.3) to ensure \(y^{j+1}_{\kappa,c}(x)\in\mathcal{G}_{s_{b}}\) and \(y^{j+1}_{\kappa,v}(x)\in\mathcal{G}\), for all \(x\in\mathcal{D}_{\kappa}\), where \(y^{j+1}_{c}\) satisfies
\[\sum_{\kappa\in\mathcal{T}}\left(\frac{y^{j+1}_{c}-y^{j}}{\Delta t^{*}}, \mathfrak{v}\right)_{\kappa}-\sum_{\kappa\in\mathcal{T}}\left(\mathcal{F}^{c} \left(y^{j},\nabla y^{j}\right),\nabla\mathfrak{v}\right)_{\kappa}+\sum_{ \epsilon\in\mathcal{E}}\left(\mathcal{F}^{c\dagger}\left(y^{j},n\right), \llbracket\mathfrak{v}\rrbracket\right)_{\epsilon}=0\qquad\forall\mathfrak{v} \in V^{p}_{h},\]
and \(y^{j+1}_{v}\) satisfies
\[\sum_{\kappa\in\mathcal{T}}\left(\frac{y^{j+1}_{v}-y^{j}}{\Delta t ^{*}},\mathfrak{v}\right)_{\kappa}-\sum_{\epsilon\in\mathcal{E}}\left( \left\{\!\left\{\mathcal{F}^{v}\left(y^{j},\nabla y^{j}\right)\right\}\! \right\}\!\right)\cdot n-\delta^{v}\left(y^{j},\nabla y^{j},n\right),\llbracket \mathfrak{v}\rrbracket\right)_{\epsilon}\] \[\qquad+\sum_{\kappa\in\mathcal{T}}\left(G\left(y^{j,+}\right): \left(\left\{\!\left\{y^{j}\right\}\!\right\}-y^{j,+}\right)\otimes n,\nabla \mathfrak{v}\right)_{\partial\kappa}=0\qquad\forall\mathfrak{v}\in V^{p}_{h},\]
such that \(y^{j+1}=\frac{1}{2}\left(y^{j+1}_{c}+y^{j+1}_{v}\right)\). \(y^{j+1}_{\kappa}(x)\) is then in \(\mathcal{G}\), for all \(x\in\mathcal{D}_{\kappa}\), since \(y^{j+1}_{\kappa}\) is a convex combination of \(y^{j+1}_{\kappa,c}\) and \(y^{j+1}_{\kappa,v}\). Entropy boundedness is enforced on only the convective contribution since the viscous flux function is not fully compatible with an entropy constraint and, at least in the monocomponent, calorically perfect setting, the minimum entropy principle does not hold for the Navier-Stokes equations unless the thermal diffusivity is zero [22, 23]. The limiting strategy here relies on a simple linear-scaling limiter that is conservative, maintains stability, and in general preserves order of accuracy for smooth solutions [13, 16, 47, 41, 12]. However, it is not expected to suppress all small-scale instabilities, which is why artificial viscosity is employed in tandem.
#### 4.2.3 Modified flux interpolation
In this subsection, we discuss how to account for the modified flux interpolation (3.8). In [11], we already discussed the inviscid case; therefore, we only consider \(y_{\kappa,v}\) here. The scheme satisfied by the element averages becomes
\[\begin{split}\overline{y}_{\kappa,v}^{j+1}=& \overline{y}_{\kappa}^{j}+\frac{\Delta t^{*}}{h}\left[-\frac{1}{2} \mathcal{F}^{v}\left(\widetilde{y}_{\kappa}^{j}(x_{L}),\nabla y_{\kappa}^{j}(x _{L})\right)-\frac{1}{2}\mathcal{F}^{v}\left(\widetilde{y}_{\kappa_{L}}^{j}(x _{L}),\nabla y_{\kappa_{L}}^{j}(x_{L})\right)\right.\\ &+\frac{1}{2}\mathcal{F}^{v}\left(\widetilde{y}_{\kappa}^{j}(x_{R} ),\nabla y_{\kappa}^{j}(x_{R})\right)+\frac{1}{2}\mathcal{F}^{v}\left( \widetilde{y}_{\kappa_{R}}^{j}(x_{R}),\nabla y_{\kappa_{R}}^{j}(x_{R})\right) \\ &\left.-\frac{\beta_{\kappa_{L}}}{2}\widetilde{y}_{\kappa}^{j}(x_ {L})+\frac{\beta_{\kappa_{L}}}{2}\widetilde{y}_{\kappa_{L}}^{j}(x_{L})-\frac {\beta_{\kappa_{R}}}{2}\widetilde{y}_{\kappa}^{j}(x_{R})+\frac{\beta_{\kappa_{ R}}}{2}\widetilde{y}_{\kappa_{R}}^{j}(x_{R})\right].\end{split} \tag{4.26}\]
If the nodal set includes the endpoints (e.g., equidistant or Gauss-Lobatto points), then \(y_{\kappa}^{j}(x_{L})=\widetilde{y}_{\kappa}^{j}(x_{L})\) and \(y_{\kappa}^{j}(x_{R})=\widetilde{y}_{\kappa}^{j}(x_{R})\), in which case both Equation (4.19) and Theorem 6 hold and the modified flux interpolation does not require any additional modifications to the formulation.
### Limiting procedure
Here, we describe the positivity-preserving and entropy limiters to ensure \(y_{\kappa,c}^{j+1}(x)\in\mathcal{G}_{s_{b}}\) and \(y_{\kappa,v}^{j+1}(x)\in\mathcal{G}\), respectively, for all \(x\in\mathcal{D}_{\kappa}\). We assume that \(\overline{y}_{\kappa,c}^{j+1}\in\mathcal{G}_{s_{b}}\) and \(\overline{y}_{\kappa,v}^{j+1}\in\mathcal{G}\). The \(j+1\) superscript and \(\kappa\) subscript are dropped for brevity. The limiting procedure is identical across one, two, and three dimensions.
_Positivity-preserving limiter_
The positivity-preserving limiter enforces \(\rho>0\), \(C_{i}\geq 0,\;\forall i\), and \(\rho u^{*}>0\) via the following steps:
* If \(\rho(y\left(x\right))>\varepsilon\), \(\forall x\in\mathcal{D}_{\kappa}\), where \(\varepsilon\) is a small positive number, such as \(10^{-10}\), then set \(C_{i}^{(1)}=C_{i}=\sum_{j=1}^{n_{b}}C_{i}(x_{j})\phi_{j},i=1,\ldots,n_{s}\); if not, set \[C_{i}^{(1)}=\overline{C}_{i}+\omega^{(1)}\left(C_{i}-\overline{C}_{i}\right), \quad\omega^{(1)}=\frac{\rho(\overline{y})-\epsilon}{\rho(\overline{y})- \min_{x\in\mathcal{D}}\rho(y(x))}.\] for \(i=1,\ldots,n_{s}.\) Let \(y^{(1)}=\left(\rho v_{1},\ldots,\rho v_{d},\rho e_{t},C_{1}^{(1)},\ldots,C_{n_ {s}}^{(1)}\right)\). This is referred to as the "density limiter" in Section 4.7.
* For \(i=1,\ldots,n_{s}\), if \(C_{i}^{(1)}(x)\geq 0,\;\forall x\in\mathcal{D}_{\kappa}\), then set \(C_{i}^{(2)}=C_{i}^{(1)}\); if not, set \[C_{i}^{(2)}=\overline{C}_{i}+\omega^{(2)}\left(C_{i}^{(1)}-\overline{C}_{i} \right),\quad\omega^{(2)}=\frac{\overline{C}_{i}}{\overline{C}_{i}-\min_{x\in \mathcal{D}}C_{i}^{(1)}(x)}.\] Let \(y^{(2)}=\left(\rho v_{1},\ldots,\rho v_{d},\rho e_{t},C_{1}^{(2)},\ldots,C_{n_ {s}}^{(2)}\right)\).
* If \(\rho u^{*}\left(y^{(2)}(x)\right)>\epsilon\), \(\forall x\in\mathcal{D}_{\kappa}\), then set \(y^{(3)}=y^{(2)}\); if not, set \[y^{(3)}=\overline{y}+\omega^{(3)}\left(y^{(2)}-\overline{y}\right),\quad \omega^{(3)}=\frac{\rho u^{*}(\overline{y})-\epsilon}{\rho u^{*}(\overline{y})- \min_{x\in\mathcal{D}}\rho u^{*}(y^{(2)}(x))}.\] Since \(\rho u^{*}(y)\) is a concave function of \(y\)[10], \(\rho u^{*}(y^{(3)}(x))>0\), \(\forall x\in\mathcal{D}_{\kappa}\)[48; 16].
The positivity-preserving limiter is applied to both \(y_{c}\) and \(y_{v}\).
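A sketch of the three steps above is given below. It operates directly on the solution values at the points of \(\mathcal{D}_{\kappa}\); in the actual solver the same scaling is applied to the polynomial coefficients, and the density and shifted-internal-energy evaluations are supplied by the thermodynamics module, so the callables here are placeholders. Since only point values and the element average enter, the same three scalings carry over unchanged to two and three dimensions.

```python
import numpy as np

EPS = 1e-10   # small positive threshold, as in the text

def positivity_limiter(y_pts, y_bar, density, internal_energy, n_s):
    """Linear-scaling positivity limiter (steps 1-3 above).

    y_pts  : (n_pts, m) solution at the points of D_kappa,
             ordered as (rho*v_1, ..., rho*v_d, rho*e_t, C_1, ..., C_ns)
    y_bar  : (m,) element average, assumed to already satisfy the constraints
    density, internal_energy : callables returning rho(y) and rho*u*(y)
    """
    y = y_pts.copy()
    C_cols = slice(y.shape[1] - n_s, y.shape[1])

    # Step 1: "density limiter" -- scale the concentrations toward the average.
    rho_min = min(density(yp) for yp in y)
    if rho_min <= EPS:
        w1 = (density(y_bar) - EPS) / (density(y_bar) - rho_min)
        y[:, C_cols] = y_bar[C_cols] + w1 * (y[:, C_cols] - y_bar[C_cols])

    # Step 2: nonnegativity of each species concentration.
    for col in range(y.shape[1] - n_s, y.shape[1]):
        c_min = y[:, col].min()
        if c_min < 0.0:
            w2 = y_bar[col] / (y_bar[col] - c_min)
            y[:, col] = y_bar[col] + w2 * (y[:, col] - y_bar[col])

    # Step 3: positivity of the shifted internal energy -- scale the full state.
    e_min = min(internal_energy(yp) for yp in y)
    if e_min <= EPS:
        w3 = (internal_energy(y_bar) - EPS) / (internal_energy(y_bar) - e_min)
        y = y_bar + w3 * (y - y_bar)

    return y
```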
_Entropy limiter_
The entropy limiter, which is applied only to \(y_{c}\), enforces \(\chi\geq 0\) as follows: if \(\chi\left(y^{(3)}(x)\right)\geq 0,\;\forall x\in\mathcal{D}_{\kappa}\), then set \(y^{(4)}=y^{(3)}\); if not, set
\[y^{(4)}=\overline{y}+\omega^{(4)}\left(y^{(3)}-\overline{y}\right),\quad\omega^ {(4)}=\frac{\chi(\overline{y})}{\chi(\overline{y})-\min_{x\in\mathcal{D}}\! \chi(y^{(3)}(x))}.\]
Since \(\chi(y)\) is a concave function of \(y\)[12; 42], \(s\left(y^{(4)}(x)\right)\geq s_{b},\;\forall x\in\mathcal{D}_{\kappa}\).
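The corresponding entropy scaling admits an equally short sketch, with `chi` standing in for \(\chi(y)=\rho s-\rho s_{b}\) evaluated by the thermodynamics module; as stated above, it is applied only to the convective contribution.

```python
def entropy_limiter(y_pts, y_bar, chi):
    """Linear-scaling entropy limiter: enforce chi(y) >= 0 at the points of
    D_kappa (numpy arrays), assuming chi(y_bar) >= 0 for the element average."""
    chi_min = min(chi(yp) for yp in y_pts)
    if chi_min >= 0.0:
        return y_pts
    w4 = chi(y_bar) / (chi(y_bar) - chi_min)
    return y_bar + w4 * (y_pts - y_bar)
```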
The solution is then replaced as
\[y\leftarrow\frac{1}{2}\left(y_{c}^{(4)}+y_{v}^{(3)}\right).\]
This limiting procedure is applied at the end of every RK stage. Note that if \(y\) is instead split in a different manner, as in [21],
\[y\gets y_{\sharp}^{(3)},\]
where \(y_{\sharp}\) satisfies
\[\sum_{\kappa\in\mathcal{T}}\left(\frac{y_{\sharp}^{j+1}-y_{c}^{(4),j+1}}{\Delta t},\mathfrak{v}\right)_{\kappa}-\sum_{\epsilon\in\mathcal{E}}\left(\left\{\!\left\{\mathcal{F}^{v}\left(y^{j},\nabla y^{j}\right)\right\}\!\right\}\cdot n-\delta^{v}\left(y^{j},\nabla y^{j},n\right),\llbracket\mathfrak{v}\rrbracket\right)_{\epsilon}\] \[+\sum_{\kappa\in\mathcal{T}}\left(G\left(y^{j,+}\right):\left(\left\{\!\left\{y^{j}\right\}\!\right\}-y^{j,+}\right)\otimes n,\nabla\mathfrak{v}\right)_{\partial\kappa}=0\qquad\forall\mathfrak{v}\in V_{h}^{p},\]
and \(y_{c}\) satisfies
\[\sum_{\kappa\in\mathcal{T}}\left(\frac{y_{\mathsf{c}}^{j+1}-y^{j}}{\Delta t}, \mathfrak{v}\right)_{\kappa}-\sum_{\kappa\in\mathcal{T}}\left(\mathcal{F}^{c} \left(y^{j},\nabla y^{j}\right),\nabla\mathfrak{v}\right)_{\kappa}+\sum_{ \epsilon\in\mathcal{E}}\left(\mathcal{F}^{c\dagger}\left(y^{j},n\right), \left[\![\mathfrak{v}]\right]\!\right)_{\epsilon}=0\qquad\forall\mathfrak{v} \in V_{h}^{p},\]
then \(\overline{y}_{\kappa,\sharp}^{j+1}\) may not be in \(\mathcal{G}\) in the case that \(y_{\kappa,c}^{j+1}(x)\notin\mathcal{G}_{s_{b}}\) for some \(x\in\mathcal{D}_{\kappa}\), i.e., \(y_{\kappa,c}^{(4),j+1}\neq y_{\kappa,c}^{j+1}\).
### Multidimensional case
In this section, the one-dimensional positivity-preserving, entropy-bounded DG method presented in the previous subsection is extended to two and three dimensions. Before doing so, we first review the geometric mapping, as well as volume and surface quadrature rules. For conciseness, any key ideas already presented in Section 4.2 are only briefly mentioned here.
#### 4.4.1 Preliminaries
Geometric mappingLet \(\xi=(\xi_{1},\ldots,\xi_{d})\) denote the reference coordinates and \(\widehat{\kappa}\) denote the reference element. The mapping \(x(\xi):\widehat{\kappa}\to\kappa\) is defined as
\[x(\xi)=\sum_{m=1}^{n_{g}}x_{\kappa,m}\Phi_{m}(\xi),\]
where \(\left\{x_{\kappa,1},\ldots,x_{\kappa,n_{g}}\right\}\) is the set of geometric nodes of \(\kappa\), \(\left\{\Phi_{1},\ldots,\Phi_{n_{g}}\right\}\) is the set of geometric basis functions, and \(n_{g}\) is the number of basis functions. Let \(J_{\kappa}\) denote the geometric Jacobian and \(|J_{\kappa}|\) denote its determinant, which is allowed to vary with \(\xi\). \(y_{\kappa}\) can be expressed as
\[y_{\kappa}=\sum_{j=1}^{n_{b}}y_{\kappa}(x_{j})\phi(\xi),\quad x=x(\xi)\in \kappa,\;\forall\xi\in\widehat{\kappa}.\]
Let \(\kappa^{(f)}\) be the \(f\)th neighbor of \(\kappa\) and \(\partial\kappa^{(f)}\) be the \(f\)th face of \(\kappa\), such that \(\partial\kappa=\bigcup_{f=1}^{n_{f}}\partial\kappa^{(f)}\), where \(n_{f}\) is the number of faces. Note that \(n_{f}\) can vary across elements, but we slightly abuse notation for brevity.
\(\widehat{\epsilon}\) denotes the reference face. We define \(x\left(\zeta^{(f)}\right):\widehat{\epsilon}\rightarrow\partial\kappa^{(f)}\), with \(\zeta^{(f)}=\left(\zeta_{1}^{(f)},\ldots,\zeta_{d-1}^{(f)}\right)\) denoting the reference coordinates, as
\[x\left(\zeta^{(f)}\right)=\sum_{m=1}^{n_{g,f}^{\partial}}x_{\kappa,m}^{(f)} \Phi_{m}^{(f)}\left(\zeta^{(f)}\right),\]
where \(\left\{x_{\kappa,1}^{(f)},\ldots x_{\kappa,n_{g,f}^{\partial}}^{(f)}\right\}\) is the set of geometric nodes of \(\partial\kappa^{(f)}\), \(\left\{\Phi_{1}^{(f)},\ldots,\Phi_{n_{g,f}^{\partial}}^{(f)}\right\}\) is the set of basis functions, and \(n_{g,f}^{\partial}\) is the number of basis functions. \(\xi\left(\zeta^{(f)}\right):\widehat{\epsilon}\rightarrow\widehat{\kappa}\) is the mapping from the reference face to the reference element. The surface Jacobian is denoted \(J_{\partial\kappa}^{(f)}\), which can vary with \(\zeta^{(f)}\).
_Quadrature rules._ Consider a volume quadrature rule with \(n_{q}\) points and positive weights, denoted \(\xi_{q}\) and \(w_{q}\), \(q=1,\ldots,n_{q}\), respectively, with \(n_{q}\geq n_{b}\). The weights are appropriately scaled such that \(\sum_{q=1}^{n_{q}}w_{q}=|\widehat{\kappa}|\), where \(|\widehat{\kappa}|\) is the volume of \(\widehat{\kappa}\). The quadrature rule can be used to evaluate the volume integral over \(\kappa\) of a generic function, \(g(x)\), as
\[\int_{\kappa}g(x)dx=\int_{\widehat{\kappa}}g(x(\xi))\left|J_{\kappa}(\xi)\right|d\xi\approx\sum_{q=1}^{n_{q}}g\left(x(\xi_{q})\right)\left|J_{\kappa}(\xi_{q})\right|w_{q}.\]
If \(g(x)\) is a polynomial, then quadrature with sufficiently high \(n_{q}\) gives the exact value.
Similarly, consider a surface quadrature rule with \(n_{q}^{\partial}\) points and positive weights, denoted \(\zeta_{l}\) and \(w_{l}^{\partial}\), \(l=1,\ldots,n_{q}^{\partial}\), respectively. The weights are scaled such that \(\sum_{l=1}^{n_{q}^{\partial}}w_{l}^{\partial}=|\widehat{\epsilon}|\), where \(|\widehat{\epsilon}|\) is the surface area of \(\widehat{\epsilon}\). The surface quadrature rule can be used to evaluate the surface integral over \(\partial\kappa^{(f)}\) of a generic function as
\[\int_{\partial\kappa^{(f)}}g(x)ds=\int_{\widehat{\epsilon}}g\left(x\left(\zeta^{(f)}\right)\right)\left|J_{\partial\kappa}^{(f)}\left(\zeta^{(f)}\right)\right|d\zeta\approx\sum_{l=1}^{n_{q}^{\partial}}g\left(x\left(\zeta_{l}^{(f)}\right)\right)\left|J_{\partial\kappa}^{(f)}\left(\zeta_{l}^{(f)}\right)\right|w_{f,l}^{\partial}=\sum_{l=1}^{n_{q}^{\partial}}g\left(x\left(\zeta_{l}^{(f)}\right)\right)\nu_{f,l}^{\partial},\]
where \(\nu_{f,l}^{\partial}=\left|J_{\partial\kappa}^{(f)}\left(\zeta_{l}^{(f)} \right)\right|w_{f,l}^{\partial}\). If \(g(x)\) is a polynomial, then quadrature with sufficiently high \(n_{q}^{\partial}\) yields the exact value. The closed surface integral over \(\partial\kappa\) can be computed as
\[\int_{\partial\kappa}g(x)ds=\sum_{f=1}^{n_{f}}\int_{\partial\kappa^{(f)}}g(x )ds=\sum_{f=1}^{n_{f}}\int_{\widehat{\epsilon}}g\left(x\left(\zeta^{(f)} \right)\right)\left|J_{\partial\kappa}^{(f)}\left(\zeta^{(f)}\right)\right|d \zeta\approx\sum_{f=1}^{n_{f}}\sum_{l=1}^{n_{q,f}^{\partial}}g\left(x\left( \zeta_{l}^{(f)}\right)\right)\nu_{f,l}^{\partial},\]
where we allow a different quadrature rule to be used for each face.
_Additional considerations._ In the following, assume that the surface integrals in Equation (3.2) are computed using \(\left\{\zeta_{1}^{(f)},\ldots,\zeta_{n_{q,f}^{\partial}}^{(f)}\right\}_{f=1}^{n _{f}}\) as integration points. Define \(\partial\mathcal{D}_{\kappa}\) and \(\mathcal{D}_{\kappa}\) as
\[\partial\mathcal{D}_{\kappa}=\bigcup_{f=1}^{n_{f}}\left\{x\left(\zeta_{l}^{(f) }\right),l=1,\ldots,n_{q,f}^{\partial}\right\},\]
and
\[\mathcal{D}_{\kappa}=\partial\mathcal{D}_{\kappa}\bigcup\left\{x(\xi_{q}),q=1, \ldots,n_{q}\right\}=\bigcup_{f=1}^{n_{f}}\left\{x\left(\zeta_{l}^{(f)}\right),l =1,\ldots,n_{q,f}^{\partial}\right\}\bigcup\left\{x(\xi_{q}),q=1,\ldots,n_{q} \right\},\]
respectively. The points in \(\left\{x(\xi_{q}),q=1,\ldots,n_{q}\right\}\) need not be used in the evaluation of any volume integrals in Equation (3.2). Without loss of generality, we define \(\nu_{f,l}^{\partial}\) as
\[\nu_{f,l}^{\partial}=\begin{cases}\left|J_{\partial\kappa}^{(f)}(\zeta_{l}) \right|w_{f,l}^{\partial},&l=1,\ldots,n_{q,f}^{\partial}\\ 0,&l=n_{q,f}^{\partial}+1,\ldots,N\end{cases}, \tag{4.27}\]
where the faces are ordered such that \(N=\max_{f}\left\{n_{q,f}^{\partial}\right\}=n_{q,n_{f}}^{\partial}\). As a result, we have
\[\sum_{f=1}^{n_{f}}\sum_{l=1}^{N}\nu_{f,l}^{\partial}=\sum_{f=1}^{n_{f}}\sum_{l=1 }^{n_{q,f}^{\partial}}\nu_{f,l}^{\partial}=\sum_{f=1}^{n_{f}}\left|\partial \kappa^{(f)}\right|=\left|\partial\kappa\right|,\]
where \(\left|\partial\kappa\right|\) is the surface area of \(\kappa\) and \(\left|\partial\kappa^{(f)}\right|\) is the surface area of the \(f\)th face.
Note that although a quadrature-free implementation [39; 40] is used in this work to compute the integrals in Equation (3.2), recall from Section 4.2.2 that the analysis is performed on the scheme satisfied by the element averages, which is identical between quadrature-based and quadrature-free approaches. Nevertheless, the scheme satisfied by the element averages is presented in terms of a quadrature-based approach for consistency with previous studies.
#### 4.4.2 First-order DG scheme in multiple dimensions
Consider the following \(p=0\), element-local DG discretization with forward Euler time stepping:
\[y_{\kappa}^{j+1}= y_{\kappa}^{j}-\sum_{f=1}^{n_{f}}\sum_{l=1}^{n_{q,f}^{\partial}}\frac{\Delta t\nu_{f,l}^{\partial}}{|\kappa|}\mathcal{F}^{c\dagger}\left(y_{\kappa}^{j},y_{\kappa^{(f)}}^{j},n\left(\zeta_{l}^{(f)}\right)\right) \tag{4.28}\] \[+\sum_{f=1}^{n_{f}}\sum_{l=1}^{n_{q,f}^{\partial}}\frac{\Delta t\nu_{f,l}^{\partial}}{|\kappa|}\left[\frac{1}{2}\mathcal{F}^{v}\left(y_{\kappa}^{j},\nabla y_{\kappa}^{j}\right)\cdot n\left(\zeta_{l}^{(f)}\right)+\frac{1}{2}\mathcal{F}^{v}\left(y_{\kappa^{(f)}}^{j},\nabla y_{\kappa^{(f)}}^{j}\right)\cdot n\left(\zeta_{l}^{(f)}\right)\right.\] \[\left.-\delta^{v}\left(y_{\kappa}^{j},y_{\kappa^{(f)}}^{j},\nabla y_{\kappa}^{j},\nabla y_{\kappa^{(f)}}^{j},n\left(\zeta_{l}^{(f)}\right)\right)\right],\]
which can be rearranged to split the convective and diffusive contributions as
\[y_{\kappa}^{j+1}= \frac{1}{2}\left(y_{\kappa,c}^{j+1}+y_{\kappa,v}^{j+1}\right), \tag{4.29}\] \[y_{\kappa,c}^{j+1}= y_{\kappa}^{j}-\sum_{f=1}^{n_{f}}\sum_{l=1}^{n_{q,f}^{ \partial}}\frac{\Delta t^{*}\nu_{f,l}^{\partial}}{|\kappa|}\mathcal{F}^{c \dagger}\left(y_{\kappa}^{j},y_{\kappa^{(f)}}^{j},n\left(\zeta_{l}^{(f)} \right)\right),\] (4.30) \[y_{\kappa,v}^{j+1}= y_{\kappa}^{j}+\sum_{f=1}^{n_{f}}\sum_{l=1}^{n_{q,f}^{\partial}} \frac{\Delta t^{*}\nu_{f,l}^{\partial}}{|\kappa|}\left[\frac{1}{2}\mathcal{F} ^{v}\left(y_{\kappa}^{j},\nabla y_{\kappa}^{j}\right)\cdot n\left(\zeta_{l}^{ (f)}\right)+\frac{1}{2}\mathcal{F}^{v}\left(y_{\kappa^{(f)}}^{j},\nabla y_{ \kappa^{(f)}}^{j}\right)\cdot n\left(\zeta_{l}^{(f)}\right)\right.\] \[\left.-\delta^{v}\left(y_{\kappa}^{j},y_{\kappa^{(f)}}^{j}, \nabla y_{\kappa}^{j},\nabla y_{\kappa^{(f)}}^{j},n\left(\zeta_{l}^{(f)} \right)\right)\right], \tag{4.31}\]
where \(|\kappa|\) is the volume of the element. Since \(\mathcal{F}^{v}\left(y_{\kappa},\nabla y_{\kappa}\right)=0\), Equation (4.31) reduces to
\[y_{\kappa,v}^{j+1}= y_{\kappa}^{j}-\sum_{f=1}^{n_{f}}\sum_{l=1}^{n_{q,f}^{\partial}} \frac{\Delta t^{*}\nu_{f,l}^{\partial}}{|\kappa|}\delta^{v}\left(y_{\kappa}^ {j},y_{\kappa^{(f)}}^{j},\nabla y_{\kappa}^{j},\nabla y_{\kappa^{(f)}}^{j},n \left(\zeta_{l}^{(f)}\right)\right)\] \[= y_{\kappa}^{j}-\sum_{f=1}^{n_{f}}\frac{\Delta t^{*}\left| \partial\kappa^{(f)}\right|}{|\kappa|}\frac{\beta_{f}}{2}\left[y_{\kappa}^{ j}-y_{\kappa^{(f)}}^{j}\right]\] \[= \left[1-\sum_{f=1}^{n_{f}}\frac{\Delta t^{*}\left|\partial\kappa^ {(f)}\right|}{2\left|\kappa\right|}\beta_{f}\right]y_{\kappa}^{j}+\sum_{f=1}^{n _{f}}\frac{\Delta t^{*}\left|\partial\kappa^{(f)}\right|}{2|\kappa|}\beta_{f}y _{\kappa^{(f)}}^{j} \tag{4.32}\]
Under the time-step-size constraint
\[\sum_{f=1}^{n_{f}}\frac{\Delta t^{*}\left|\partial\kappa^{(f)}\right|}{2 \left|\kappa\right|}\beta_{f}\leq 1,\]
the RHS of Equation (4.32) is a convex combination of \(y_{\kappa}^{j}\) and \(y_{\kappa^{(f)}}^{j},f=1,\ldots,n_{f}\). As such, \(y_{\kappa}^{j}\in\mathcal{G}\) and \(y_{\kappa^{(f)}}^{j}\in\mathcal{G},f=1,\ldots,n_{f}\) imply \(y_{\kappa,v}^{j+1}\in\mathcal{G}\).
#### 4.4.3 High-order DG scheme in multiple dimensions
As in the one-dimensional case, the element-local solution average can be expanded as [41]
\[\overline{y}_{\kappa} =\sum_{q=1}^{n_{q}}\frac{\left|J_{\kappa}(\xi_{q})\right|w_{q}}{ \left|\kappa\right|}y_{\kappa}\left(\xi_{q}\right),\] \[=\sum_{q=1}^{n_{q}}\theta_{q}y_{\kappa}\left(\xi_{q}\right)+\sum_ {f=1}^{n_{f}}\sum_{l=1}^{n_{q,f}^{\partial}}\theta_{f,l}y_{\kappa}\left(\xi \left(\zeta_{l}^{(f)}\right)\right). \tag{4.33}\]
where, if \(\partial\mathcal{D}_{\kappa}\subseteq\{x(\xi_{q}),q=1,\ldots,n_{q}\}\),
\[\theta_{q}=\begin{cases}\frac{\left|J_{\kappa}(\xi_{q})\right|w_{q}}{\left| \kappa\right|}&x\left(\xi_{q}\right)\notin\partial\mathcal{D}_{\kappa}\\ 0&x\left(\xi_{q}\right)\in\partial\mathcal{D}_{\kappa}\end{cases}\]
and
\[\theta_{f,l}=\frac{\left|J_{\kappa}\left(\xi\left(\zeta_{l}^{(f)}\right) \right)\right|w_{f,l}}{\left|\kappa\right|N_{f,l}},\]
with \(w_{f,l}\) denoting the volume quadrature weight corresponding to the quadrature point that satisfies \(\xi_{q}=\xi\left(\zeta_{l}^{(f)}\right)\) and \(N_{f,l}\) denoting the number of faces of \(\kappa\) that contain the given point. Otherwise, we take
\[\theta_{q}=\frac{\left|J_{\kappa}(\xi_{q})\right|w_{q}}{\left|\kappa\right|}- \sum_{f=1}^{n_{f}}\sum_{l=1}^{n_{q,f}^{\partial}}\theta_{f,l}\psi_{q}\left(\xi \left(\zeta_{l}^{(f)}\right)\right),\]
where \(\psi_{1},\ldots,\psi_{n_{d}}\) form a set of Lagrange basis functions whose nodes are located at \(n_{d}\) points of the set \(\{x_{q},q=1,\ldots,n_{q}\}\), with \(n_{b}\leq n_{d}\leq n_{q}\), and \(\psi_{n_{d}+1},\ldots,\psi_{n_{q}}\) are equal to zero. As a result, \(\sum_{q=1}^{n_{q}}\theta_{q}y_{\kappa}\left(\xi_{q}\right)\) can be written as
\[\sum_{q=1}^{n_{q}}\theta_{q}y_{\kappa}\left(\xi_{q}\right) =\sum_{q=1}^{n_{q}}\left[\frac{\left|J_{\kappa}(\xi_{q})\right|w _{q}}{\left|\kappa\right|}-\sum_{f=1}^{n_{f}}\sum_{l=1}^{n_{q,f}^{\partial}} \theta_{f,l}\psi_{q}\left(\xi\left(\zeta_{l}^{(f)}\right)\right)\right]y_{ \kappa}\left(\xi_{q}\right)\] \[=\sum_{q=1}^{n_{q}}\frac{\left|J_{\kappa}(\xi_{q})\right|w_{q}}{ \left|\kappa\right|}y_{\kappa}\left(\xi_{q}\right)-\sum_{f=1}^{n_{f}}\sum_{l=1 }^{n_{q,f}^{\partial}}\theta_{f,l}y_{\kappa}\left(\xi\left(\zeta_{l}^{(f)} \right)\right).\]
\(\theta_{f,l}\) will be related to a constraint on the time step size (see [41] and [11] for additional details). Since \(w_{q}>0,q=1,\ldots n_{q}\), there exist positive values of \(\theta_{f,l}\) that yield \(\theta_{q}\geq 0\)[41]. Furthermore, we have \(\sum_{q=1}^{n_{q}}\theta_{q}+\sum_{f=1}^{n_{f}}\sum_{l=1}^{n_{q,f}^{\partial} }\theta_{f,l}=1\).
Employing the forward Euler time-integration scheme and taking \(\mathfrak{v}\in V_{h}^{0}\) yields the fully discrete scheme satisfied by the element averages,
\[\overline{y}_{\kappa}^{j+1}=\frac{1}{2}\left(\overline{y}_{\kappa,c}^{j+1}+ \overline{y}_{\kappa,v}^{j+1}\right),\]
where
\[\overline{y}_{\kappa,c}^{j+1} = \overline{y}_{\kappa}^{j}-\sum_{f=1}^{n_{f}}\sum_{l=1}^{n_{q,f}^{ \partial}}\frac{\Delta t^{*}\nu_{f,l}^{\partial}}{|\kappa|}\mathcal{F}^{\dagger }\left(y_{\kappa}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right),y_{\kappa^{(f )}}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right),n\left(\zeta_{l}^{(f)} \right)\right)\] \[= \sum_{q=1}^{n_{q}}\theta_{q}y_{\kappa}^{j}\left(\xi_{q}\right)+ \sum_{f=1}^{n_{f}}\sum_{l=1}^{n_{q,f}^{\partial}}\left[\theta_{f,l}y_{\kappa}^ {j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right)-\frac{\Delta t^{*}\nu_{f,l}^{ \partial}}{|\kappa|}\mathcal{F}^{\dagger}\left(y_{\kappa}^{j}\left(\xi\left( \zeta_{l}^{(f)}\right)\right),y_{\kappa^{(f)}}^{j}\left(\xi\left(\zeta_{l}^{( f)}\right)\right),n\left(\zeta_{l}^{(f)}\right)\right)\right]\]
and
\[\overline{y}_{\kappa,v}^{j+1} = \overline{y}_{\kappa}^{j}+\sum_{f=1}^{n_{f}}\sum_{l=1}^{n_{q,f}^{\partial}}\frac{\Delta t^{*}\nu_{f,l}^{\partial}}{|\kappa|}\left[\frac{1}{2}\mathcal{F}^{v}\left(y_{\kappa}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right),\nabla y_{\kappa}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right)\right)\cdot n\left(\zeta_{l}^{(f)}\right)\right. \tag{4.34}\] \[\left.+\frac{1}{2}\mathcal{F}^{v}\left(y_{\kappa^{(f)}}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right),\nabla y_{\kappa^{(f)}}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right)\right)\cdot n\left(\zeta_{l}^{(f)}\right)\right.\] \[\left.-\delta^{v}\left(y_{\kappa}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right),y_{\kappa^{(f)}}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right),\nabla y_{\kappa}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right),\nabla y_{\kappa^{(f)}}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right),n\left(\zeta_{l}^{(f)}\right)\right)\right]\] \[= \sum_{q=1}^{n_{q}}\theta_{q}y_{\kappa}^{j}\left(\xi_{q}\right)+\sum_{f=1}^{n_{f}}\sum_{l=1}^{n_{q,f}^{\partial}}\left[\frac{\Delta t^{*}\nu_{f,l}^{\partial}}{2|\kappa|}\beta_{f,l}y_{\kappa^{(f)}}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right)\right.\] \[\left.+\frac{\Delta t^{*}\nu_{f,l}^{\partial}}{2|\kappa|}\mathcal{F}^{v}\left(y_{\kappa^{(f)}}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right),\nabla y_{\kappa^{(f)}}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right)\right)\cdot n\left(\zeta_{l}^{(f)}\right)+\theta_{f,l}y_{\kappa}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right)\right.\] \[\left.-\frac{\Delta t^{*}\nu_{f,l}^{\partial}}{2|\kappa|}\beta_{f,l}y_{\kappa}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right)+\frac{\Delta t^{*}\nu_{f,l}^{\partial}}{2|\kappa|}\mathcal{F}^{v}\left(y_{\kappa}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right),\nabla y_{\kappa}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right)\right)\cdot n\left(\zeta_{l}^{(f)}\right)\right]\] \[= \sum_{q=1}^{n_{q}}\theta_{q}y_{\kappa}^{j}\left(\xi_{q}\right)+\sum_{f=1}^{n_{f}}\sum_{l=1}^{n_{q,f}^{\partial}}\left\{\frac{\Delta t^{*}\nu_{f,l}^{\partial}}{2|\kappa|}\beta_{f,l}\left[y_{\kappa^{(f)}}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right)+\beta_{f,l}^{-1}\mathcal{F}^{v}\left(y_{\kappa^{(f)}}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right),\nabla y_{\kappa^{(f)}}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right)\right)\cdot n\left(\zeta_{l}^{(f)}\right)\right]\right. \tag{4.35}\] \[\left.+\Lambda_{f,l}\left[y_{\kappa}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right)+\frac{\Delta t^{*}\nu_{f,l}^{\partial}}{2|\kappa|\Lambda_{f,l}}\mathcal{F}^{v}\left(y_{\kappa}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right),\nabla y_{\kappa}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right)\right)\cdot n\left(\zeta_{l}^{(f)}\right)\right]\right\},\]
with \(\Lambda_{f,l}=\theta_{f,l}-\frac{\Delta t^{*}\nu_{f,l}^{\partial}}{2|\kappa|}\beta_{f,l}\). Standard flux interpolation, as in Equation (3.7), is assumed here; the modified flux interpolation (3.8) will be accounted for in Section 4.4.4. Note that Equations (4.34) and (4.35) still hold for the quadrature-free implementation [39, 40] used to evaluate the integrals in Equation (3.2) since the integrals of the basis functions over the reference element (required in the quadrature-free implementation) can be considered the weights of a generalized Newton-Cotes quadrature rule [49].
It can be shown that if \(y_{\kappa}^{j}(x)\in\mathcal{G}_{s_{b}},\ \forall x\in\mathcal{D}_{\kappa}\), and \(y_{\kappa}^{-,j}\in\mathcal{G}_{s_{b}},\ \forall x\in\partial\mathcal{D}_{\kappa}\), then \(\overline{y}_{\kappa,c}^{j+1}\) is in \(\mathcal{G}_{s_{b}}\) under the
time-step-size constraint [11]
\[\frac{\Delta t^{*}\lambda}{|\kappa|} \leq\frac{1}{2}\min\left\{L_{A},L_{B},L_{C}\right\}, \tag{4.36}\] \[L_{A} =\min\left\{\left.\frac{\theta_{f,l}}{\nu_{f,l}^{\partial}} \right|f=1,\ldots,n_{f}-1,\;l=1,\ldots,n_{q,f}^{\partial}\right\},\] \[L_{B} =\min\left\{\left.\frac{\theta_{n_{f,l}}}{\nu_{f,l}^{\partial}} \frac{\left|\partial\kappa^{(f)}\right|}{\left|\partial\kappa\right|}\right|,f =1,\ldots,n_{f},\;l=1,\ldots,\min\left\{n_{q,f}^{\partial},N-1\right\}\right\},\] \[L_{C} =\frac{\theta_{n_{f},N}}{\left|\partial\kappa\right|},\]
and the conditions
\[\begin{cases}\theta_{q}\geq 0,&q=1,\ldots,n_{q}\\ \theta_{f,l}>0,&f=1,\ldots,n_{f},\;l=1,\ldots,n_{q,f}^{\partial}.\end{cases} \tag{4.37}\]
The entropy bound, \(s_{b}\), is computed as
\[s_{b,\kappa}^{j+1}(y)=\min\left\{s\left(y^{j}(x)\right)\left|x\in\bigcup_{f=1 }^{n_{f}}\mathcal{D}_{\kappa^{(f)}}\bigcup\mathcal{D}_{\kappa}\right.\right\}. \tag{4.38}\]
In the following theorem, we analyze the conditions under which \(\overline{y}_{\kappa,v}^{j+1}\in\mathcal{G}\).
**Theorem 8**.: _If \(y_{\kappa}^{j}(x)\in\mathcal{G},\;\forall x\in\mathcal{D}_{\kappa}\), and \(y_{\kappa}^{-,j}\in\mathcal{G},\;\forall x\in\partial\mathcal{D}_{\kappa}\), then \(\overline{y}_{\kappa,v}^{j+1}\) is also in \(\mathcal{G}\) under the time-step-size constraint_
\[\frac{\Delta t^{*}}{|\kappa|}\leq\min\left\{\left.\frac{\theta_{f,l}}{\beta_{f,l}\nu_{f,l}^{\partial}}\right|f=1,\ldots,n_{f},\;l=1,\ldots,n_{q,f}^{\partial }\right\}, \tag{4.39}\]
_the constraints on \(\beta\),_
\[\beta_{f,l} >\max\left\{\beta_{f,l}^{(1)},\beta_{f,l}^{(2)}\right\}, \tag{4.40}\] \[\beta_{f,l}^{(1)} =\beta^{*}\left(y_{\kappa}^{j}\left(\xi\left(\zeta_{l}^{(f)} \right)\right),\mathcal{F}^{v}\left(y_{\kappa}^{j}\left(\xi\left(\zeta_{l}^{(f )}\right)\right),\nabla y_{\kappa}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right) \right)\right),n\left(\zeta_{l}^{(f)}\right)\right),\] \[\beta_{f,l}^{(2)} =\beta^{*}\left(y_{\kappa^{(f)}}^{j}\left(\xi\left(\zeta_{l}^{( f)}\right)\right),\mathcal{F}^{v}\left(y_{\kappa^{(f)}}^{j}\left(\xi\left(\zeta_{l}^{(f )}\right)\right),\nabla y_{\kappa^{(f)}}^{j}\left(\xi\left(\zeta_{l}^{(f)} \right)\right)\right),n\left(\zeta_{l}^{(f)}\right)\right),\]
_and the conditions (4.37)._
Proof.: The proof follows similar logic to that for Theorem 6. The inequality (4.40) guarantees that
\[y_{\kappa^{(f)}}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right)+\beta_{f,l}^{ -1}\mathcal{F}^{v}\left(y_{\kappa^{(f)}}^{j}\left(\xi\left(\zeta_{l}^{(f)} \right)\right),\nabla y_{\kappa^{(f)}}^{j}\left(\xi\left(\zeta_{l}^{(f)} \right)\right)\right)\cdot n\left(\zeta_{l}^{(f)}\right)\in\mathcal{G}.\]
According to the time-step-size constraint (4.39), we have
\[\frac{\Delta t^{*}}{|\kappa|}\leq\frac{\theta_{f,l}}{\beta_{f,l}\nu_{f,l}^{ \partial}}\]
such that
\[\theta_{f,l}-\frac{\Delta t^{*}\nu_{f,l}^{\partial}}{2\left|\kappa\right|} \beta_{f,l}\geq\frac{\Delta t^{*}\nu_{f,l}^{\partial}}{2\left|\kappa\right|} \beta_{f,l},\]
for \(f=1,\ldots,n_{f},\,\,l=1,\ldots,n_{q,f}^{\partial}\). It follows that
\[\frac{\Delta t^{*}\nu_{f,l}^{\partial}}{2|\kappa|}\Lambda_{f,l}^{-1} =\frac{\Delta t^{*}\nu_{f,l}^{\partial}}{2|\kappa|\left(\theta_{f,l }-\frac{\Delta t^{*}\nu_{f,l}^{\partial}}{2|\kappa|}\beta_{f,l}\right)}\] \[\leq\frac{\Delta t^{*}\nu_{f,l}^{\partial}}{2|\kappa|\left(\frac{ \Delta t^{*}\nu_{f,l}^{\partial}}{2|\kappa|}\beta_{f,l}\right)}\] \[=\beta_{f,l}^{-1},\]
which means
\[y_{\kappa}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right)+\frac{\Delta t^{*} \nu_{f,l}^{\partial}}{2|\kappa|}\Lambda_{f,l}^{-1}\mathcal{F}^{v}\left(y_{ \kappa}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right),\nabla y_{\kappa}^{j} \left(\xi\left(\zeta_{l}^{(f)}\right)\right)\right)\cdot n\left(\zeta_{l}^{( f)}\right)\in\mathcal{G}.\]
Moreover, we have \(\frac{\Delta t^{*}\nu_{f,l}^{\partial}}{2|\kappa|}\beta_{f,l}\leq\theta_{f,l}\leq 1\). Therefore, \(\overline{y}_{\kappa,v}^{j+1}\) is a convex combination of states in \(\mathcal{G}\), such that \(\overline{y}_{\kappa,v}^{j+1}\in\mathcal{G}\).
_Remark 9_.: The same limiting strategy as in the one-dimensional case is employed to ensure \(y_{\kappa,c}^{j+1}(x)\in\mathcal{G}_{s_{b}}\) and \(y_{\kappa,v}^{j+1}(x)\in\mathcal{G}\), for all \(x\in\mathcal{D}_{\kappa}\), such that \(y_{\kappa}^{j+1}(x)\in\mathcal{G},\;\forall x\in\mathcal{D}_{\kappa}\).
_Remark 10_.: The multidimensional formulation is compatible with curved elements of arbitrary shape, provided that appropriate quadrature rules exist. Note that the consideration of non-constant geometric Jacobians is significantly more straightforward for the Lax-Friedrichs-type viscous flux function than for invariant-region-preserving inviscid flux functions since the former _algebraically_ satisfies the positivity property while the latter relies on the notion of a Riemann problem. It is worth mentioning, however, that the Lax-Friedrichs inviscid flux function also satisfies the positivity property algebraically [13; 16].
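As an aside on implementation, the following minimal Python sketch evaluates the viscous time-step bound of Equation (4.39) for a single element. The coefficient arrays are assumed to have been assembled already, and all names are illustrative rather than taken from any particular code base.

```python
import numpy as np

def viscous_time_step_bound(theta_fl, beta_fl, nu_fl, volume):
    """Largest dt* satisfying (4.39): dt*/|kappa| <= min_{f,l} theta_{f,l}/(beta_{f,l} nu_{f,l}).

    theta_fl, beta_fl, nu_fl : arrays of shape (n_faces, n_face_quad_points)
        Convex-combination coefficients, penalty parameters, and scaled surface
        quadrature weights at each face quadrature point.
    volume : element volume |kappa|.
    """
    return volume * np.min(theta_fl / (beta_fl * nu_fl))

# Illustrative values for one element with two faces and two face quadrature points.
theta = np.full((2, 2), 0.05)
beta = np.array([[2.0e3, 1.5e3], [1.0e3, 2.5e3]])
nu = np.full((2, 2), 0.25)
print(viscous_time_step_bound(theta, beta, nu, volume=1.0e-6))
```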
#### 4.4.4 Modified flux interpolation
With the modified flux interpolation (3.8), the scheme satisfied by the element averages (for the viscous contribution) is obtained from Equation (4.35) by evaluating the interface states and viscous fluxes with the modified interpolated values \(\widetilde{y}\) and \(\dot{y}\), where

\[\Delta\widetilde{y}_{\kappa}^{j}=\widetilde{y}_{\kappa}^{j}-y_{\kappa}^{j}.\]
Under the time-step-size constraint (4.39) and the conditions (4.37), we have
\[\widetilde{y}_{\kappa}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right) \right)+\beta_{f,l}^{-1}\mathcal{F}\left(\widetilde{y}_{\kappa^{(f)}}^{j} \left(\xi\left(\zeta_{l}^{(f)}\right)\right),\nabla\widetilde{y}_{\kappa^{(f)} }^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right)\right)\cdot n\left(\zeta_{l}^ {(f)}\right)\in\mathcal{G},\] \[\dot{y}_{\kappa}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right) \in\mathcal{G},\]
provided that the constraints on \(\beta\) are modified as
\[\beta_{f,l}>\max\left\{\beta_{f,l}^{(1)},\beta_{f,l}^{(2)}\right\}, \tag{4.42}\] \[\beta_{f,l}^{(1)}=\beta^{*}\left(y_{\kappa}^{j}\left(\xi\left( \zeta_{l}^{(f)}\right)\right),\mathcal{F}^{v}\left(\widetilde{y}_{\kappa}^{j} \left(\xi\left(\zeta_{l}^{(f)}\right)\right),\nabla y_{\kappa}^{j}\left(\xi \left(\zeta_{l}^{(f)}\right)\right)\right),n\left(\zeta_{l}^{(f)}\right) \right),\] \[\beta_{f,l}^{(2)}=\beta^{*}\left(\widetilde{y}_{\kappa^{(f)}}^{j }\left(\xi\left(\zeta_{l}^{(f)}\right)\right),\mathcal{F}^{v}\left(\widetilde{ y}_{\kappa^{(f)}}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right),\nabla y_{ \kappa^{(f)}}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right)\right),n\left( \zeta_{l}^{(f)}\right)\right).\]
By Lemma 17 in Appendix B, \(\dot{y}_{\kappa}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right)-\frac{\Delta t ^{*}\nu_{f,l}^{\partial}}{2|\kappa|}\beta_{f,l}\Lambda_{f,l}^{-1}\Delta \widetilde{y}_{\kappa}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right)\in \mathcal{G}\) if
\[\frac{\Delta t^{*}}{2\left|\kappa\right|}\beta_{f,l}<\frac{\theta_{f,l}}{\nu_ {f,l}^{\partial}\left(1+\alpha^{*}\left(\dot{y}_{\kappa}^{j}\left(\xi\left( \zeta_{l}^{(f)}\right)\right),\Delta\widetilde{y}_{\kappa}^{j}\left(\xi\left( \zeta_{l}^{(f)}\right)\right)\right)\right)},\]
where \(\alpha^{*}\) is defined as in (B.1). Therefore, \(\overline{y}_{\kappa,v}^{j+1}\) remains a convex combination of states in \(\mathcal{G}\) if, in addition to the time-step-size constraint (4.39), the following additional condition is satisfied:
\[\frac{\Delta t^{*}}{2\left|\kappa\right|}<\min_{f,l}\left\{\frac{\theta_{f,l}}{\beta_{f,l}\nu_{f,l}^{\partial}\left(1+\alpha^{*}\left(\dot{y}_{\kappa}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right),\Delta\widetilde{y}_{\kappa}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right)\right)\right)}\right\}.\]
We then have \(\overline{y}_{\kappa,v}^{j+1}\in\mathcal{G}\).
### Boundary conditions
Thus far, \(\partial\kappa\) has been assumed to be in \(\mathcal{E}_{\mathcal{I}}\), the set of interior interfaces. Here, we discuss how to enforce boundary conditions, focusing on the viscous contribution in the multidimensional case. For simplicity, but without loss of generality, we assume \(\partial\kappa\in\mathcal{E}_{\partial}\) (i.e., all faces of \(\kappa\) are boundary faces). We also assume \(y_{\partial}^{j}\in\mathcal{G},\;\forall x\in\partial\mathcal{D}_{\kappa}\). The boundary penalty term takes the form
\[\delta_{\partial}^{v}\left(y^{+},y_{\partial},\nabla y^{+},n^{+}\right)=\frac {\beta}{2}\left(y^{+}-y_{\partial}\right).\]
The scheme satisfied by the element averages (for the viscous contribution) becomes
\[\overline{y}_{\kappa,v}^{j+1} = \overline{y}_{\kappa}^{j}+\sum_{f=1}^{n_{f}}\sum_{l=1}^{n_{q,f}^{ \partial}}\frac{\Delta t^{*}\nu_{f,l}^{\partial}}{\left|\kappa\right|}\left[ \frac{1}{2}\mathcal{F}_{\partial}^{v}\left(y_{\partial}^{j}\left(\xi\left( \zeta_{l}^{(f)}\right)\right),\nabla y_{\kappa}^{j}\left(\xi\left(\zeta_{l}^{ (f)}\right)\right)\right)\cdot n\left(\zeta_{l}^{(f)}\right)\right.\] \[\left.-\delta_{\partial}^{v}\left(y_{\kappa}^{j}\left(\xi\left( \zeta_{l}^{(f)}\right)\right),y_{\partial}^{j}\left(\xi\left(\zeta_{l}^{(f)} \right)\right),\nabla y_{\kappa}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right) \right),\nabla y_{\partial}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right),n \left(\zeta_{l}^{(f)}\right)\right)\right]\] \[= \sum_{q=1}^{n_{q}}\theta_{q}y_{\kappa}^{j}\left(\xi_{q}\right)\] \[+\sum_{f=1}^{n_{f}}\sum_{l=1}^{n_{q,f}^{\partial}}\frac{\Delta t ^{*}\nu_{f,l}^{\partial}}{2|\kappa|}\beta_{f,l}\left[y_{\partial}^{j}\left( \xi\left(\zeta_{l}^{(f)}\right)\right)+\beta_{f,l}^{-1}\mathcal{F}_{\partial} ^{v}\left(y_{\partial}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right),\nabla y _{\kappa}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right)\right)\cdot n\left( \zeta_{l}^{(f)}\right)\right]\] \[+\sum_{f=1}^{n_{f}}\sum_{l=1}^{n_{q,f}^{\partial}}\Lambda_{f,l} \Bigg{[}y_{\kappa}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right)+\frac{ \Delta t^{*}\nu_{f,l}^{\partial}}{2|\kappa|}\Lambda_{f,l}^{-1}\mathcal{F}_{ \partial}^{v}\left(y_{\partial}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right) \right),\nabla y_{\kappa}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right) \right)\cdot n\left(\zeta_{l}^{(f)}\right)\Bigg{]}.\]
Under the time-step-size constraint (4.39) and the conditions (4.37), we have
\[y_{\partial}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right)+ \beta_{f,l}^{-1}\mathcal{F}_{\partial}^{v}\left(y_{\partial}^{j}\left(\xi \left(\zeta_{l}^{(f)}\right)\right),\nabla y_{\kappa}^{j}\left(\xi\left(\zeta_ {l}^{(f)}\right)\right)\right)\cdot n\left(\zeta_{l}^{(f)}\right)\in\mathcal{G},\] \[y_{\kappa}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right)+\frac {\Delta t^{*}\nu_{f,l}^{\partial}}{2|\kappa|}\Lambda_{f,l}^{-1}\mathcal{F}_{ \partial}^{v}\left(y_{\partial}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right),\nabla y_{\kappa}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right)\right)\cdot n \left(\zeta_{l}^{(f)}\right)\in\mathcal{G},\]
provided that the constraints on \(\beta\) are modified as
\[\beta_{f,l} >\max\left\{\beta_{f,l}^{(1)},\beta_{f,l}^{(2)}\right\}, \tag{4.44}\] \[\beta_{f,l}^{(1)} =\beta^{*}\left(y_{\kappa}^{j}\left(\xi\left(\zeta_{l}^{(f)} \right)\right),\mathcal{F}_{\partial}^{v}\left(y_{\partial}^{j}\left(\xi\left( \zeta_{l}^{(f)}\right)\right),\nabla y_{\kappa}^{j}\left(\xi\left(\zeta_{l}^{( f)}\right)\right)\right),n\left(\zeta_{l}^{(f)}\right)\right),\] \[\beta_{f,l}^{(2)} =\beta^{*}\left(y_{\partial}^{j}\left(\xi\left(\zeta_{l}^{(f)} \right)\right),\mathcal{F}_{\partial}^{v}\left(y_{\partial}^{j}\left(\xi\left( \zeta_{l}^{(f)}\right)\right),\nabla y_{\kappa}^{j}\left(\xi\left(\zeta_{l}^{( f)}\right)\right)\right),n\left(\zeta_{l}^{(f)}\right)\right).\]
\(\overline{y}_{\kappa,v}^{j+1}\) is then in \(\mathcal{G}\), and the same limiting strategy can be applied to ensure \(y_{\kappa,v}^{j+1}\in\mathcal{G},\;\forall x\in\mathcal{D}_{\kappa}\).
### Adaptive time stepping
As discussed by Zhang [16], the time-step-size constraints (4.36) and (4.39) are sufficient but not necessary for \(\overline{y}_{\kappa,c}^{j+1}\) and \(\overline{y}_{\kappa,v}^{j+1}\) to be in \(\mathcal{G}_{s_{b}}\) and \(\mathcal{G}\), respectively. Furthermore, the latter constraint can sometimes be very restrictive. In addition, as will be demonstrated in Section 5.4, provided that the positivity property remains satisfied, the BR2 viscous flux function is often preferred to the Lax-Friedrichs-type viscous flux function. As such, unless otherwise specified, we employ the following adaptive time stepping procedure similar to that in [16], except with additional steps to switch between the two viscous flux functions:
1. Select \(\Delta t\) according to a user-prescribed CFL based on the acoustic time scale.
2. Compute \(y_{c}^{j+1}\) and \(y_{v}^{j+1}\) with the BR2 scheme.
3. If \(\overline{y}_{\kappa,c}^{j+1}\in\mathcal{G}_{s_{b}}\) and \(\overline{y}_{\kappa,v}^{j+1}\in\mathcal{G}\), \(\forall\kappa\), then employ the limiting procedure, proceed to the next time step, and go back to Step 1. If, for some \(\kappa\), \(\overline{y}_{\kappa,c}^{j+1}\notin\mathcal{G}_{s_{b}}\) or \(\overline{y}_{\kappa,v}^{j+1}\notin\mathcal{G}\), then proceed to Step 4.
4. Halve the time step, and recompute \(y_{c}^{j+1}\) and \(y_{v}^{j+1}\) with the BR2 scheme.
5. If \(\overline{y}_{\kappa,c}^{j+1}\in\mathcal{G}_{s_{b}}\) and \(\overline{y}_{\kappa,v}^{j+1}\in\mathcal{G}\), \(\forall\kappa\), then employ the limiting procedure, proceed to the next time step, and go back to Step 1. If, for some \(\kappa\), \(\overline{y}_{\kappa,c}^{j+1}\notin\mathcal{G}_{s_{b}}\) or \(\overline{y}_{\kappa,v}^{j+1}\notin\mathcal{G}\), then proceed to Step 6.
6. Recompute \(y_{c}^{j+1}\) and \(y_{v}^{j+1}\) with the Lax-Friedrichs-type viscous flux function. Go back to Step 3.
The above assumes forward Euler time integration. With SSPRK time integration, the solution is restarted from time step \(j\) (with the time step halved or the viscous flux function switched) if an inadmissible state is encountered at any stage. In our experience, the initial time step size is generally sufficiently small for \(\overline{y}_{\kappa,c}^{j+1}\) and \(\overline{y}_{\kappa,v}^{j+1}\) to be in \(\mathcal{G}_{s_{b}}\) and \(\mathcal{G}\), respectively. Here, when the viscous flux function is switched to the Lax-Friedrichs-type function, it is employed at all interfaces. An alternative approach is to instead use it only at the interfaces belonging to cells with inadmissible states.
In the present study, it is necessary to decrease \(\Delta t\) and/or switch the viscous flux function in typically no more than one percent of time steps. Note that the BR2 scheme can sometimes result in satisfaction of the positivity property with a larger time step size than the Lax-Friedrichs-type viscous flux function. However, the advantage of the latter is that Theorem 8 guarantees a finite time step size. In Sections 5.3 and 5.4, in order to compare the BR2 and Lax-Friedrichs-type viscous flux functions, we employ the adaptive time stepping procedure but fix the viscous flux function to be the latter in certain simulations.
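The following minimal Python sketch illustrates the logic of Steps 1–6 for a single forward-Euler step. The helper routines (`step_br2`, `step_llf`, `averages_admissible`, `apply_limiters`) are placeholders for the corresponding pieces of a DG solver and are not part of any particular code base.

```python
def advance_one_step(y, dt_cfl, step_br2, step_llf, averages_admissible, apply_limiters):
    """Sketch of the adaptive time stepping procedure (forward Euler); helpers are placeholders."""
    # Step 1: select dt from a user-prescribed CFL based on the acoustic time scale.
    dt = dt_cfl
    # Step 2: compute the candidate convective/viscous updates with the BR2 scheme.
    y_c, y_v = step_br2(y, dt)
    while True:
        # Steps 3 and 5: accept if all element averages are admissible
        # (convective part in G_sb, viscous part in G), then limit and return.
        if averages_admissible(y_c, y_v):
            return apply_limiters(y_c, y_v), dt
        # Step 4: halve the time step and recompute with the BR2 scheme.
        dt *= 0.5
        y_c, y_v = step_br2(y, dt)
        if averages_admissible(y_c, y_v):
            return apply_limiters(y_c, y_v), dt
        # Step 6: recompute with the Lax-Friedrichs-type viscous flux function,
        # then return to the admissibility check (Step 3).
        y_c, y_v = step_llf(y, dt)
```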
### Zero species concentrations
All species concentrations have hitherto been assumed to be strictly positive. Following Remark 2, we relax this assumption and illustrate why the formulation described thus far does not ensure nonnegative species concentrations in \(\overline{y}_{\kappa,v}^{j+1}\). Note that the presence of zero species concentrations is very common (and expected) in simulations of chemically reacting flows. We then propose a strategy to address this pathological scenario. To this end, we first rewrite Equation (4.35) in terms of the \(i\)th species-concentration component
as
\[\begin{split}\overline{C}_{i,\kappa,v}^{j+1}=&\sum_{q=1}^{n_{q}}\theta_{q}C_{i,\kappa}^{j}\left(\xi_{q}\right)\\ &+\sum_{f=1}^{n_{f}}\sum_{l=1}^{n_{q,f}^{\partial}}\frac{\Delta t^{*}\nu_{f,l}^{\partial}}{2|\kappa|}\beta_{f,l}\hat{C}_{i,\kappa^{(f)}}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right)\\ &+\sum_{f=1}^{n_{f}}\sum_{l=1}^{n_{q,f}^{\partial}}\Lambda_{f,l}\hat{C}_{i,\kappa}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right),\end{split} \tag{4.45}\]
where
\[\hat{C}_{i,\kappa^{(f)}}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right)=C_{i, \kappa^{(f)}}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right)+\beta_{f,l}^{-1} \mathcal{F}_{\mathcal{C}_{i}}^{v}\left(y_{\kappa^{(f)}}^{j}\left(\xi\left( \zeta_{l}^{(f)}\right)\right),\nabla C_{\kappa^{(f)}}^{j}\left(\xi\left(\zeta _{l}^{(f)}\right)\right)\right)\cdot n\left(\zeta_{l}^{(f)}\right)\]
and
\[\hat{C}_{i,\kappa}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right)=C_{i, \kappa}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right)+\frac{\Delta t^{*}\nu _{f,l}^{\partial}}{2|\kappa|}\Lambda_{f,l}^{-1}\mathcal{F}_{\mathcal{C}_{i}}^ {v}\left(y_{\kappa}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right),\nabla C_{ \kappa}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right)\right)\cdot n\left( \zeta_{l}^{(f)}\right),\]
with \(\mathcal{F}_{\mathcal{C}_{i}}^{v}\), the molar flux of the \(i\)th species, given by Equation (4.9). Note that \(\mathcal{F}_{\mathcal{C}_{i}}^{v}\) depends on the concentration gradients, but not on the momentum gradient or the total-energy gradient. As before, \(y_{\kappa}^{j}\) is assumed to be in \(\mathcal{G},\;\forall x\in\mathcal{D}_{\kappa}\), for all \(\kappa\). We make the following observations.
_Remark 11_.: If \(\overline{C}_{i,\kappa}^{j}=0\), then \(C_{i,\kappa}^{j}=0,\forall x\in\mathcal{D}_{\kappa}\), due to the positivity of the quadrature weights. Furthermore, \(\nabla C_{i,\kappa}^{j}=0,\forall x\in\kappa\). One way to show this is to take \(C_{i,\kappa}^{j}=\sum_{m=1}^{n_{b}}C_{i,\kappa}^{j}(x_{m})\phi_{m}=\sum_{q=1}^{n_{b}}C_{i,\kappa}^{j}(x_{q})\psi_{q}\). Since \(C_{i,\kappa}^{j}(x_{q})=0,q=1,\ldots,n_{b}\), we have \(\nabla C_{i,\kappa}^{j}=0\).
_Remark 12_.: By Equation (4.9) and Remark 11, if \(\overline{C}_{i,\kappa}^{j}=0\), then \(\mathcal{F}_{\mathcal{C}_{i}}^{v}\left(y_{\kappa}^{j}\left(\xi\left(\zeta_{l}^ {(f)}\right)\right),\nabla C_{\kappa}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right) \right)\right)\cdot n\left(\zeta_{l}^{(f)}\right)=0\) and \(\hat{C}_{i,\kappa}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right)=0\), for all \(l,f\). Furthermore, if \(\nabla C_{i,\kappa}^{j}=0,\;\forall x\in\partial\mathcal{D}_{\kappa}\), for all \(i\), then \(\mathcal{F}_{\mathcal{C}_{i}}^{v}\left(y_{\kappa}^{j}\left(\xi\left(\zeta_{l}^ {(f)}\right)\right),\nabla C_{\kappa}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right) \right)\right)\cdot n\left(\zeta_{l}^{(f)}\right)=0\). Similar statements can be made for \(C_{i,\kappa^{(f)}}^{j}\).
_Remark 13_.: As an example scenario in which \(\overline{C}_{i,\kappa,v}^{j+1}<0\) using the formulation described thus far, take \(\overline{C}_{i,\kappa}^{j}=0\), \(C_{i,\kappa^{(f)}}^{j}=0\), \(\forall x\in\partial\mathcal{D}_{\kappa}\), and \(\mathfrak{F}<0\), where
\[\mathfrak{F}=\sum_{f=1}^{n_{f}}\sum_{l=1}^{n_{q,f}^{\partial}}\left[\mathcal{F}_{\mathcal{C}_{i}}^{v}\left(y_{\kappa}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right),\nabla C_{\kappa}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right)\right)+\mathcal{F}_{\mathcal{C}_{i}}^{v}\left(y_{\kappa^{(f)}}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right),\nabla C_{\kappa^{(f)}}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right)\right)\right]\cdot n\left(\zeta_{l}^{(f)}\right).\]
A representative schematic is given in Figure 4.1. In this figure, defining \(\kappa=[x_{L},x_{R}]\), we have
\[\mathfrak{F}=\left.\bar{D}_{i}\frac{\partial C_{i,\kappa^{(2)}}}{\partial x_{k}} \right|_{y_{\kappa^{(2)}}(x_{R})},\]
which is negative. Note that there exists a small region in \(\kappa^{(2)}\) with negative species concentration, which is indeed possible since the positivity-preserving limiter only guarantees \(C_{i}\geq 0\) at a finite set of points. Other situations (that do not occur in the monocomponent case) may also arise in which \(C_{i}\) and \(\mathcal{F}_{\mathcal{C}_{i}}^{v}\) result in an exceedingly large value of \(\beta\) and \(\Delta t^{*}\) must be extremely small to maintain nonnegative species concentrations, rendering the simulation computationally intractable.
_Remark 14_.: In the density limiter (see Section 4.3), if \(\omega^{(1)}=1\), the limiter has no effect. If \(\omega^{(1)}=0\), which corresponds to maximum limiter strength, then \(C_{i}\gets C_{i}^{(1)}=\overline{C}_{i},\ \forall i\), i.e., the species concentrations are projected to a \(p=0\) representation. Suppose that \(\hat{C}_{i,\kappa}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right)<0\), in which case \(\overline{C}_{i,\kappa}^{j}>0\) by Remark 12. Applying the density limiter with \(\omega^{(1)}=0\) then yields \(\hat{C}_{i,\kappa}^{j,(1)}\left(x\right)=\overline{C}_{i,\kappa}^{j}>0,\ \forall x \in\partial\mathcal{D}_{\kappa}\).
_Remark 15_.: Suppose that \(\overline{C}_{i,\kappa,v}^{j+1}<0\). Applying the density limiter to \(y_{\kappa}^{j}\) with \(\omega_{\kappa}^{(1)}=0\) and \(y_{\kappa(f)}^{j}\) with \(\omega_{\kappa(f)}^{(1)}=0,f=1,\ldots,n_{f}\) yields
\[\overline{C}_{i,\kappa,v}^{j+1,(1)}= \sum_{q=1}^{n_{q}}\theta_{q}\overline{C}_{i,\kappa}^{j}\left(\xi_{q}\right)\]
\[+\sum_{f=1}^{n_{f}}\sum_{l=1}^{n_{q,f}^{\partial}}\frac{\Delta t^{*}\nu_{f,l}^{\partial}}{2|\kappa|}\beta_{f,l}\overline{C}_{i,\kappa^{(f)}}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right)\]
\[+\sum_{f=1}^{n_{f}}\sum_{l=1}^{n_{q,f}^{\partial}}\Lambda_{f,l}\overline{C}_{i,\kappa}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right),\]
which is positive.
_Remark 16_.: \(\mathcal{F}_{C_{i}}^{v}\) is directly proportional to \(\nabla C\), i.e.,
\[\mathcal{F}_{C_{i},k}^{v}\left(y,\omega\nabla C\right) =\bar{D}_{i}\omega\frac{\partial C_{i}}{\partial x_{k}}-\frac{C_{ i}\bar{D}_{i}}{\rho}\omega\frac{\partial\rho}{\partial x_{k}}-\frac{C_{i}}{ \rho}\sum_{l=1}^{n_{s}}W_{l}\left(\bar{D}_{l}\omega\frac{\partial C_{l}}{ \partial x_{k}}-\frac{C_{l}\bar{D}_{l}}{\rho}\omega\frac{\partial\rho}{ \partial x_{k}}\right)\] \[=\omega\left[\bar{D}_{i}\frac{\partial C_{i}}{\partial x_{k}}- \frac{C_{i}\bar{D}_{i}}{\rho}\frac{\partial\rho}{\partial x_{k}}-\frac{C_{i}} {\rho}\sum_{l=1}^{n_{s}}W_{l}\left(\bar{D}_{l}\frac{\partial C_{l}}{\partial x _{k}}-\frac{C_{l}\bar{D}_{l}}{\rho}\frac{\partial\rho}{\partial x_{k}}\right)\right]\] \[=\omega\mathcal{F}_{C_{i},k}^{v}\left(y,\nabla C\right),\]
where \(\omega\) is a scaling factor for \(\nabla C\).
By Remark 15, a foolproof but low-fidelity approach to guarantee nonnegative species concentrations is as follows: if \(\overline{C}_{i,\kappa,v}^{j+1}<0\), then apply a \(p=0\) projection and recalculate \(y_{\kappa,v}^{j+1}\), as well as the neighboring states. However, a higher-fidelity approach is desired. Following Remark 16, one such approach is to modify the gradient in the fourth term in Equation (3.2) as
\[\left(\{\!\!\{\mathcal{F}^{v}\left(y,\nabla y\right)\}\!\!\}\cdot n-\delta^{v}\left(y,\nabla y,n\right),[\![\mathfrak{v}]\!]\right)_{\epsilon}\leftarrow\left(\{\!\!\{\mathcal{F}^{v}\left(y,\omega\nabla y\right)\}\!\!\}\cdot n-\delta^{v}\left(y,\nabla y,n\right),[\![\mathfrak{v}]\!]\right)_{\epsilon}, \tag{4.46}\]
where \(\omega\in[0,1]\) is a pointwise parameter that scales the gradient. Specifically, for a given \(\xi\left(\zeta_{l}^{(f)}\right)\), we have \(\omega_{\kappa,l}^{(f)}\) and \(\omega_{\kappa^{(f)},l}^{(f)}\) for the interior and exterior gradients, respectively, which yields
\[\hat{C}_{i,\kappa}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right)=C_{i,\kappa }^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right)+\frac{\Delta t^{*}\nu_{f,l}^ {\partial}}{2|\kappa|}\Lambda_{f,l}^{-1}\omega_{\kappa,l}^{(f)}\mathcal{F}_{C_ {i}}^{v}\left(y_{\kappa}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right), \nabla C_{\kappa}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right)\right)\cdot n \left(\zeta_{l}^{(f)}\right) \tag{4.47}\]
Figure 4.1: Schematic of an example scenario in which \(\overline{C}_{i,\kappa,v}^{j+1}<0\) using the formulation described thus far.
and
\[\hat{C}_{i,\kappa^{(f)}}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right)=C_{i, \kappa^{(f)}}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right)+\beta_{f,l}^{-1} \omega_{\kappa^{(f)},l}^{(f)}\mathcal{F}_{C_{i}}^{v}\left(y_{\kappa^{(f)}}^{j} \left(\xi\left(\zeta_{l}^{(f)}\right)\right),\nabla C_{\kappa^{(f)}}^{j}\left( \xi\left(\zeta_{l}^{(f)}\right)\right)\right)\cdot n\left(\zeta_{l}^{(f)} \right). \tag{4.48}\]
This is akin to applying the linear-scaling limiter in Section 4.3 to only the gradient. In order to guarantee \(\overline{C}_{i,\kappa,v}^{j+1}\geq 0\), \(\omega_{\kappa,l}^{(f)}\) and \(\omega_{\kappa^{(f)},l}^{(f)}\) in Equations (4.47) and (4.48) can be prescribed as
\[\omega_{\kappa,l}^{(f)}=\min_{i}\omega_{i,\kappa,l}^{(f)},\quad\omega_{i,\kappa,l}^{(f)}=\begin{cases}1,&Q_{i,\kappa,l}^{(f)}\geq 0,\\ -\frac{\frac{1}{2\sum_{f}n_{q,f}^{\partial}}\sum_{q=1}^{n_{q}}\theta_{q}C_{i,\kappa}^{j}(\xi_{q})+\Lambda_{f,l}C_{i,\kappa}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right)}{\frac{\Delta t^{*}\nu_{f,l}^{\partial}}{2|\kappa|}\mathcal{F}_{C_{i}}^{v}\left(y_{\kappa}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right),\nabla C_{\kappa}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right)\right)\cdot n\left(\zeta_{l}^{(f)}\right)},&\text{otherwise},\end{cases} \tag{4.49}\]
and
\[\omega_{\kappa^{(f)},l}^{(f)}=\min_{i}\omega_{i,\kappa^{(f)},l}^{(f)},\quad\omega_{i,\kappa^{(f)},l}^{(f)}=\begin{cases}1,&Q_{i,\kappa^{(f)},l}^{(f)}\geq 0,\\ -\frac{\frac{1}{2\sum_{f}n_{q,f}^{\partial}}\sum_{q=1}^{n_{q}}\theta_{q}C_{i,\kappa}^{j}(\xi_{q})+\frac{\Delta t^{*}\nu_{f,l}^{\partial}}{2|\kappa|}\beta_{f,l}C_{i,\kappa^{(f)}}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right)}{\frac{\Delta t^{*}\nu_{f,l}^{\partial}}{2|\kappa|}\mathcal{F}_{C_{i}}^{v}\left(y_{\kappa^{(f)}}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right),\nabla C_{\kappa^{(f)}}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right)\right)\cdot n\left(\zeta_{l}^{(f)}\right)},&\text{otherwise},\end{cases} \tag{4.50}\]
respectively, where \(Q_{i,\kappa,l}^{(f)}\) and \(Q_{i,\kappa^{(f)},l}^{(f)}\) are defined as
\[Q_{i,\kappa,l}^{(f)}= \frac{1}{2\sum_{f}n_{q,f}^{\partial}}\sum_{q=1}^{n_{q}}\theta_{q}C_{i,\kappa}^{j}\left(\xi_{q}\right)+\Lambda_{f,l}C_{i,\kappa}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right)+\frac{\Delta t^{*}\nu_{f,l}^{\partial}}{2|\kappa|}\mathcal{F}_{C_{i}}^{v}\left(y_{\kappa}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right),\nabla C_{\kappa}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right)\right)\cdot n\left(\zeta_{l}^{(f)}\right),\]
\[Q_{i,\kappa^{(f)},l}^{(f)}= \frac{1}{2\sum_{f}n_{q,f}^{\partial}}\sum_{q=1}^{n_{q}}\theta_{q}C_{i,\kappa}^{j}\left(\xi_{q}\right)+\frac{\Delta t^{*}\nu_{f,l}^{\partial}}{2|\kappa|}\beta_{f,l}C_{i,\kappa^{(f)}}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right)+\frac{\Delta t^{*}\nu_{f,l}^{\partial}}{2|\kappa|}\mathcal{F}_{C_{i}}^{v}\left(y_{\kappa^{(f)}}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right),\nabla C_{\kappa^{(f)}}^{j}\left(\xi\left(\zeta_{l}^{(f)}\right)\right)\right)\cdot n\left(\zeta_{l}^{(f)}\right),\]
such that \(\overline{C}_{i,\kappa,v}^{j+1}\) in Equation (4.45) can be rewritten as
\[\overline{C}_{i,\kappa,v}^{j+1}=\sum_{f=1}^{n_{f}}\sum_{l=1}^{n_{q,f}^{\partial}}Q_{i,\kappa,l}^{(f)}+\sum_{f=1}^{n_{f}}\sum_{l=1}^{n_{q,f}^{\partial}}Q_{i,\kappa^{(f)},l}^{(f)}.\]
\(\beta_{f,l}\) can then be prescribed using only the \(\beta_{T}\) constraint (4.5) (i.e., the constraint (4.6) can be ignored); furthermore, \(\beta_{f,l}\) does not need to be recomputed since \(\mathcal{F}^{v}\left(y,\omega\nabla y\right)=G\left(y\right):\omega\nabla y= \omega\mathcal{F}^{v}\left(y,\nabla y\right)\) and \(\beta^{*}\left(y,\mathcal{F}^{v}\left(y,\nabla y\right),n\right)\geq\beta^{*} \left(y,\omega\mathcal{F}^{v}\left(y,\nabla y\right),n\right)=\omega\beta^{*} \left(y,\mathcal{F}^{v}\left(y,\nabla y\right),n\right)\), for \(\omega\in[0,1]\). It should also be noted that the constraint (4.6) can be overly restrictive, such that nonnegativity of species concentrations can often be maintained even if the constraint (4.6) is neglected and \(\omega_{\kappa,l}^{(f)}=\omega_{\kappa^{(f)},l}^{(f)}=1\) for all \(l,f\). In general, the need to limit the gradient seems to be extremely rare. In fact, such gradient limiting is not needed for the results presented here (i.e., the adaptive time stepping procedure was sufficient); however, it will quickly become necessary if, for example, insufficient artificial viscosity is applied to the moving-detonation test described in Section 5.4. In practice, an appropriate strategy is to limit the gradient when \(\overline{C}_{i,\kappa,v}^{j+1}<0\) and excessive loops are taken in the adaptive time stepping procedure. An alternative to (4.46) is to instead apply the density limiter in Section 4.3 to the state and modify the fourth term in Equation (3.2) as
\[\left(\{\!\!\{\mathcal{F}^{v}\left(y,\nabla y\right)\}\!\!\}\cdot n-\delta^{v}\left(y,\nabla y,n\right),[\![\mathfrak{v}]\!]\right)_{\epsilon}\leftarrow\left(\{\!\!\{\mathcal{F}^{v}\left(\tilde{y},\nabla\tilde{y}\right)\}\!\!\}\cdot n-\delta^{v}\left(\tilde{y},\nabla\tilde{y},n\right),[\![\mathfrak{v}]\!]\right)_{\epsilon}, \tag{4.51}\]
where
\[\tilde{y}=\left(\rho v_{1},\ldots,\rho v_{d},\rho e_{t},\tilde{C}_{1},\ldots,\tilde{C}_{n_{s}}\right),\quad\tilde{C}_{i}=\overline{C}_{i}+\omega\left(C_{i}-\overline{C}_{i}\right).\]
However, iteration would then be required to determine \(\omega_{\kappa,l}^{(f)}\) and \(\omega_{\kappa^{(f)},l}^{(f)}\) such that \(\overline{C}_{i,\kappa,v}^{j+1}\geq 0\).
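As a rough illustration of the pointwise gradient scaling in Equations (4.49)–(4.50), the following Python sketch computes the scaling factor at one face quadrature point from the three contributions that make up \(Q_{i,\kappa,l}^{(f)}\). The decomposition into `Qa`, `Qb`, and `Qflux` is an assumption made for clarity, not code from the described solver.

```python
import numpy as np

def gradient_scale(Qa, Qb, Qflux):
    """Scaling factor omega in [0, 1] such that Qa + Qb + omega * Qflux >= 0 per species.

    Qa    : element-average contribution to Q (per species, nonnegative)
    Qb    : face-point concentration contribution to Q (per species, nonnegative)
    Qflux : diffusive-flux contribution to Q (per species, any sign)
    """
    Q = Qa + Qb + Qflux
    safe = np.where(Qflux != 0.0, Qflux, 1.0)              # avoid division by zero
    omega_i = np.where(Q >= 0.0, 1.0, -(Qa + Qb) / safe)   # per-species factor, Eq. (4.49)-style
    return float(np.clip(np.min(omega_i), 0.0, 1.0))

# Example: the third species would otherwise drive Q negative, so omega < 1 is returned.
print(gradient_scale(np.array([0.2, 0.1, 0.05]),
                     np.array([0.1, 0.1, 0.0]),
                     np.array([0.3, -0.05, -0.1])))
```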
## 5 Results
We consider three one-dimensional test cases: advection-diffusion of a thermal bubble, a premixed flame, and viscous shock-tube flow. Next, we compute two multidimensional reacting flows: a moving detonation wave enclosed by adiabatic walls and shock/mixing-layer interaction. Unless otherwise specified, the positivity-preserving, entropy-bounded DG formulation presented in the previous section, along with the adaptive time stepping strategy described in Section 4.6, is employed. All simulations are performed using a modified version of the JENRE® Multiphysics Framework [50, 2] that incorporates the developments and extensions described in this work. Unless otherwise specified, the second-order strong-stability-preserving Runge-Kutta method (SSPRK2) [37, 38] is employed.
### One-dimensional thermal bubble advection-diffusion
In this problem, we assess the order of accuracy of the positivity-preserving and entropy-bounded DG formulation (without artificial viscosity). The computational domain is \(\Omega=[-25,25]\) m. Periodicity is imposed at the left and right boundaries. The initial conditions are given by
\[v_{1} = 1\text{ m/s},\] \[Y_{H_{2}} = \frac{1}{2}\left[1-\tanh\left(|x|-10\right)\right],\] \[Y_{O_{2}} = 1-Y_{H_{2}}, \tag{5.1}\] \[T = 1200-900\tanh\left(|x|-10\right)\text{ K},\] \[P = 1\text{ bar}.\]
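For concreteness, a minimal Python sketch of the initial condition (5.1) is given below; it simply evaluates the stated profiles and is not part of the solver.

```python
import numpy as np

def thermal_bubble_ic(x):
    """Initial condition (5.1) for the 1D thermal bubble; x in metres."""
    v1   = np.ones_like(x)                               # 1 m/s
    Y_H2 = 0.5 * (1.0 - np.tanh(np.abs(x) - 10.0))
    Y_O2 = 1.0 - Y_H2
    T    = 1200.0 - 900.0 * np.tanh(np.abs(x) - 10.0)    # K
    P    = 1.0e5 * np.ones_like(x)                       # 1 bar in Pa
    return v1, Y_H2, Y_O2, T, P

x = np.linspace(-25.0, 25.0, 11)
print(thermal_bubble_ic(x)[3])   # temperature profile at a few sample points
```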
In [2], optimal convergence without any additional stabilization, including limiting, was demonstrated. In [10], we showed optimal convergence from \(p=1\) to \(p=3\) using the positivity-preserving, entropy-bounded DG method for _inviscid_, reacting flows. Four element sizes were considered: \(h\), \(h/2\), \(h/4\), and \(h/8\), where \(h=2\) m. The limiters were not activated when finer meshes were employed. Here, we repeat this investigation in the viscous setting. Instead of the adaptive time stepping strategy described in Section 4.6, we separately consider both viscous flux functions with fixed \(\text{CFL}=0.1\). The "exact" solution is obtained with \(p=3\) and \(h/256\). The \(L^{2}\) error at \(t=5\) s is computed in terms of the normalized state variables,
\[\widehat{\rho}\widehat{v}_{k}=\frac{1}{\sqrt{\rho_{r}P_{r}}}\rho v_{k},\quad \widehat{\rho}\widehat{e}_{t}=\frac{1}{P_{r}}\rho e_{t},\quad\widehat{C}_{i}= \frac{R^{0}T_{r}}{P_{r}}C_{i},\]
where \(\rho_{r}=1\) kg\(\cdot\)m\({}^{-3}\), \(T_{r}=1000\) K, and \(P_{r}=101325\) Pa. Figure 5.1 shows the convergence results for both viscous flux functions. The theoretical convergence rates are denoted with dashed lines. The "\(\times\)" symbol indicates that the positivity-preserving limiter is activated, the "\(\bigcirc\)" symbol indicates that the entropy limiter is activated, and the "\(\bigtriangleup\)" symbol indicates that neither limiter is activated. If both limiters are activated, then the corresponding symbols are superimposed as "\(\otimes\)". The results are extremely similar between the two viscous flux functions. Apart from the coarser grids with \(p=1\), which are likely outside the asymptotic regime, optimal convergence is demonstrated. For \(h\) and \(h/2\), both limiters are activated across all \(p\); for \(h/4\) and \(p=1\), only the positivity-preserving limiter is activated. At higher resolutions, the limiters are not engaged since the solutions are fairly well-resolved.
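The normalization and error measure used above can be summarized by the following short Python sketch; the quadrature-based \(L^{2}\) norm shown here is a generic discrete approximation and is only meant to illustrate the definitions.

```python
import numpy as np

R0 = 8.31446261815324                  # universal gas constant [J/(mol K)]
rho_r, T_r, P_r = 1.0, 1000.0, 101325.0

def normalize_state(rho_v, rho_et, C):
    """Normalized momentum, total energy, and molar concentrations used for the L2 error."""
    return (rho_v / np.sqrt(rho_r * P_r),
            rho_et / P_r,
            R0 * T_r / P_r * C)

def l2_error(u_h, u_exact, quad_weights):
    """Discrete L2 error: sqrt( sum_q w_q * |u_h(x_q) - u_exact(x_q)|^2 )."""
    diff2 = np.sum((u_h - u_exact) ** 2, axis=-1)   # sum over state components
    return np.sqrt(np.sum(quad_weights * diff2))
```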
### One-dimensional premixed flame
In this problem, we consider a smooth, viscous flow with chemical reactions. A freely propagating flame is calculated in Cantera [30] on a 1 cm long grid using the left state in Equations (5.2) and (5.3) below. The computational domain is \(\Omega=[0,0.01]\) m. For the DG calculations, we generate a mesh that contains a refinement zone between 1.8 mm and 2.5 mm with grid spacing \(h=200\)\(\mu\)m, a target size that is 200 times larger than the smallest grid spacing resulting from the refinement procedure in Cantera. The mesh transitions to a spacing of 500 \(\mu\)m at the boundaries. The objective here is to ignite the flame and establish a solution in which the flame anchors itself in the fine region of the one-dimensional mesh. The initial conditions are given by
\[(v_{1},T,P) = \begin{cases}(9.53\text{ m/s},2122\text{ K},1\text{ atm})\,,&x\geq 0.0025\\ (1.53\text{ m/s},300\text{ K},1\text{ atm})\,,&x<0.0025\end{cases}, \tag{5.2}\]
with mass fractions
\[(Y_{H_{2}},Y_{O_{2}},Y_{N_{2}},Y_{H},Y_{O})= \begin{cases}\left(7\times 10^{-5},0.0572,0.745,4.2\times 10^{-6},2.2\times 10^{-4}\right),&x\geq 0.0025\\ (0.023,0.24,0.737,0,0)\,,&x<0.0025\end{cases}. \tag{5.3}\]
\[(Y_{OH},Y_{H_{2}O},Y_{HO_{2}},Y_{H_{2}O_{2}})= \begin{cases}\left(2.7\times 10^{-4},0.194,3\times 10^{-6},2.1\times 10^{-7}\right),&x\geq 0.0025\\ (0,0,0,0)\,,&x<0.0025\end{cases}.\]
The right state corresponding to \(x\geq 0.0025\) m is the final fully reacted state from the Cantera solution. The left boundary condition is a characteristic inflow condition that allows pressure waves to leave the domain. The right boundary is a reflective outflow condition with the pressure set to 1 atm.
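For reference, a Cantera calculation of the freely propagating flame can be set up along the following lines. This is a hedged sketch that uses Cantera's bundled `h2o2.yaml` mechanism as a stand-in, since the mechanism file and refinement settings used for the reference solution are not specified here.

```python
import cantera as ct

# Unburned mixture: the left state of Equations (5.2)-(5.3).
gas = ct.Solution("h2o2.yaml")        # stand-in hydrogen/oxygen mechanism (assumption)
gas.TPY = 300.0, ct.one_atm, "H2:0.023, O2:0.24, N2:0.737"

flame = ct.FreeFlame(gas, width=0.01)            # 1 cm domain
flame.set_refine_criteria(ratio=3, slope=0.07, curve=0.14)   # illustrative settings
flame.solve(loglevel=0, auto=True)

# The inlet velocity is the laminar flame speed; the last grid point gives the
# fully reacted state used for x >= 0.0025 m.
print("flame speed [m/s]:", flame.velocity[0])
print("burned temperature [K]:", flame.T[-1])
```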
In the beginning of the simulation, the states on both sides of the discontinuity immediately diffuse to form a smooth profile. As the reactions progress, the flame accelerates against the right-moving reactants and then slows down to the flame speed. With sufficient accuracy, the flame should remain stationary inside the refined region. The default CFL is set to 0.4. We perform \(p=1\) and \(p=3\) calculations both with the proposed formulation and with conventional species clipping (instead of the positivity-preserving and entropy limiters), in which negative species concentrations are simply set to zero. Artificial viscosity is not employed in these calculations.
Figure 5.1: Convergence under grid refinement, with \(h=2\,\text{m}\), for the one-dimensional thermal bubble test case. The \(L^{2}\) error of the normalized state with respect to the exact solution at \(t=5\,\text{s}\) is computed. The dashed lines represent the theoretical convergence rates. The “\(\times\)” symbol indicates that the positivity-preserving limiter is activated, the “\(\bigcirc\)” symbol indicates that the entropy limiter is activated, and the “\(\bigtriangleup\)” symbol indicates that neither limiter is activated. If both limiters are activated, then the corresponding symbols are superimposed as “\(\bigotimes\)”.
Figure 5.2 shows instantaneous solutions at \(t=0.012566\) s for \(p=1\), obtained with species clipping. Clear discrepancies between the solution and the Cantera solution are observed. The mass fractions of the intermediate species, namely \(HO_{2}\) and \(H_{2}O_{2}\), are over-predicted. Additionally, the pressure deviates from the expected ambient pressure by a relative deviation of approximately \(50\times 10^{-4}\), with sharp structural changes prior to the flame front. Beyond the poor predictive abilities with species clipping, the solution fails to maintain a stable profile; by \(t=0.02\) s, the flame drifts out of the refinement zone.
The results obtained for \(p=1\) and \(p=3\) with the proposed positivity-preserving, entropy-bounded methodology are given in Figures 5.3a and 5.3b, respectively. The \(p=1\) solution does not fully capture the species profiles, and the pressure oscillates through the flame; nevertheless, it agrees much more closely with the Cantera solution than the \(p=1\) solution obtained with species clipping. The \(p=3\) solution at the given time is in much better agreement with the species profiles of the Cantera solution than the \(p=1\) solution. The \(p=3\) pressure solution smoothly transitions through the flame without the spikes found in the \(p=1\) solution. A slight pressure change through the flame is expected given a compressible formulation. A major improvement over the clipping solution is that, in both the \(p=1\) and \(p=3\) simulations, the flame is held in the refinement zone, achieving the desired steady-state flame at the expected flame speed. These results illustrate the significant benefits of employing the proposed positivity-preserving, entropy-bounded DG formulation.
Figure 5.2: Solution for a one-dimensional premixed flame at \(t=0.012566\) s obtained with species clipping. The \(p=1\) solutions drift out of the refinement zone by \(t=0.02\) s.
### One-dimensional shock tube
This test case was computed without viscous effects by Houim and Kuo [27], by Johnson and Kercher [2], and in our previous work [10], where we showed that (a) instabilities in the multicomponent, thermally perfect case are much greater than in the monocomponent, calorically perfect case and (b) enforcement of an entropy bound suppresses large-scale nonphysical oscillations much more effectively than enforcement of the positivity property. Our goals here are to investigate whether these observations hold in the viscous setting and to further compare the BR2 and Lax-Friedrichs-type viscous flux functions. The computational domain is \(\Omega=[0,1]\) m, and the final time is \(t=300\)\(\mu\)s. Walls are imposed at the left and right boundaries. The initial conditions are written as
\[(v_{1},T,P,Y_{N_{2}},Y_{He})=\begin{cases}(0\text{ m/s},300\text{ K},1\text{ atm},1,0)\,,&x\geq 0.4\\ (0\text{ m/s},300\text{ K},10\text{ atm},0,1)\,,&x<0.4\end{cases}. \tag{5.4}\]
The default CFL is set to \(0.1\). For the remainder of this subsection, "BR2" refers to the adaptive time stepping strategy exactly as described in Section 4.6, whereas "LLF" refers to a similar time stepping strategy, but with the viscous flux function fixed to be the local Lax-Friedrichs-type flux function. In addition, "PPL" corresponds to only the positivity-preserving limiter, while "EL" corresponds to both the positivity-preserving and entropy limiters. Based on [2] and [10], a reference solution is computed using \(p=2\), \(2000\) elements, artificial viscosity, BR2, and EL. All other solutions are computed using \(p=3\) and \(200\) elements.
Figure 5.4 shows the mass fraction, pressure, temperature, and entropy profiles obtained with BR2. Except for the reference solution, artificial viscosity is not employed in order to isolate the effects of the
Figure 5.3: Solutions for a one-dimensional premixed flame at \(t=0.012566\) s obtained with the proposed positivity-preserving, entropy-bounded formulation.
limiters. Note that the linear-scaling limiters alone are not expected to eliminate small-scale spurious oscillations [13, 12, 41, 42]. The results are very similar to those in the inviscid case [10]. The species profiles are well-captured using both types of limiting. The entropy limiter dampens large-scale instabilities in the pressure, temperature, and entropy distributions significantly better than the positivity-preserving limiter. Furthermore, just as observed in [10], the instabilities still present with the positivity-preserving limiter are substantially larger than those usually present in monocomponent, calorically perfect shock-tube solutions computed with the positivity-preserving limiter [13, 47, 16], and the relative advantage of applying the entropy limiter is much greater. The addition of artificial viscosity would greatly suppress the small-scale instabilities; for brevity, such results are not included here, but they are very similar to those in [10]. At the same time, artificial viscosity alone (without the limiters) results in negative concentrations and other instabilities, thus motivating a combination of the two stabilization mechanisms. The corresponding LLF results are given in Figure 5.5, which are very similar to the BR2 results. However, the temperature overshoot at the shock is noticeably smaller in the LLF case, indicating that the Lax-Friedrichs-type viscous flux function can sometimes have better stabilization properties than the BR2 scheme. Regardless, the results in the following subsection suggest that the latter is still the preferred viscous flux function, provided that the positivity property is satisfied.
Figure 5.4: Results for \(p=3\) solutions computed using BR2 on 200 elements without artificial viscosity for the one-dimensional, multicomponent shock-tube problem with initialization in Equation (5.4). “PPL” corresponds to the positivity-preserving limiter by itself, and “EL” refers to both the positivity-preserving and entropy limiters with the local entropy bound in Equation (4.20).
Figure 5.6 presents the percent error in mass, energy, and atom conservation for the BR2, EL solution as a representative example, calculated every \(0.3~{}\mu\)s (for a total of 1000 samples). \(\mathsf{N}_{N}\) and \(\mathsf{N}_{He}\) denote the total numbers of nitrogen and helium atoms in the mixture. The error remains close to machine precision, verifying that the developed DG framework is conservative. Also shown is the error in mass conservation (calculated every time step) for a solution computed with conventional species clipping. The error rises considerably before the solver diverges.
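The atom-conservation check reported above can be expressed, in sketch form, as follows; the array shapes and helper names are illustrative assumptions rather than code from the solver.

```python
import numpy as np

def element_totals(C_bar, volumes, atoms_per_species):
    """Total number of atoms of each chemical element in the mixture.

    C_bar             : (n_cells, n_species) element-averaged molar concentrations
    volumes           : (n_cells,) cell volumes
    atoms_per_species : (n_species, n_chem_elements) atom counts per species molecule
    """
    moles = volumes @ C_bar            # total moles of each species in the domain
    return moles @ atoms_per_species   # total atoms of each chemical element

def percent_error(current, initial):
    """Percent drift of a conserved quantity relative to its initial value."""
    return 100.0 * np.abs(current - initial) / np.abs(initial)
```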
Figure 5.5: Results for \(p=3\) solutions computed using LLF on 200 elements without artificial viscosity for the one-dimensional, multicomponent shock-tube problem with initialization in Equation (5.4). “PPL” corresponds to the positivity-preserving limiter by itself, and “EL” refers to both the positivity-preserving and entropy limiters with the local entropy bound in Equation (4.20).
### Two-dimensional detonation wave
This test case involves a moving hydrogen-oxygen detonation wave diluted in Argon with initial conditions
\[\begin{array}{rcl}(v_{1},v_{2})&=&(0,0)\,\,\,\text{m/s},\\ X_{Ar}:X_{H_{2}O}:X_{OH}:X_{O_{2}}:X_{H_{2}}&=&\begin{cases}8:2:0.1:0:0&x_{1}<0.015 \,\,\text{m},x\in\mathcal{C}_{1},x\in\mathcal{C}_{2}\\ 7:0:0:1:2&\text{otherwise}\end{cases}\end{array}\,,\\ P&=&\begin{cases}5.50\text{e}5&\text{Pa}&x_{1}<0.015\,\,\text{m},x\in \mathcal{C}_{1},x\in\mathcal{C}_{2}\\ 6.67\text{e}3&\text{Pa}&\text{otherwise}\end{cases}\,,\\ T&=&\begin{cases}3500&\text{K}&x_{1}<0.015\,\,\text{m},x\in \mathcal{C}_{1},x\in\mathcal{C}_{2}\\ 300&\text{K}&\text{otherwise}\end{cases}\,,\end{array} \tag{5.5}\]
where
\[\mathcal{C}_{1} =\left\{x\left|\sqrt{\left(x_{1}-0.021\right)^{2}+\left(x_{2}-0. 015\right)^{2}}<0.0025\,\,\text{m}\right.\right\},\] \[\mathcal{C}_{2} =\left\{x\left|\sqrt{\left(x_{1}-0.022\right)^{2}+\left(x_{2}-0. 044\right)^{2}}<0.0025\,\,\text{m}\right.\right\},\]
which represent two high-pressure/high-temperature regions to perturb the flow. The computational domain is \(\Omega=(0,0.45)\,\text{m}\times(0,0.06)\,\text{m}\), with adiabatic, no-slip walls at the left, right, bottom, and top boundaries. The Westbrook mechanism [51] is employed.
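A minimal Python sketch of the initial condition (5.5) is given below. It reads the three conditions defining the high-pressure/high-temperature region as a union, and the sharp interfaces produced here are the ones that are subsequently smoothed with hyperbolic tangent functions, as described below.

```python
import numpy as np

def in_driver_region(x1, x2):
    """True where the high-pressure/high-temperature state of Eq. (5.5) is applied."""
    c1 = np.hypot(x1 - 0.021, x2 - 0.015) < 0.0025
    c2 = np.hypot(x1 - 0.022, x2 - 0.044) < 0.0025
    return (x1 < 0.015) | c1 | c2

def detonation_ic(x1, x2):
    """Pressure, temperature, and mole fractions (Ar, H2O, OH, O2, H2) per Eq. (5.5)."""
    driver = in_driver_region(x1, x2)
    P = np.where(driver, 5.50e5, 6.67e3)     # Pa
    T = np.where(driver, 3500.0, 300.0)      # K
    X_driver = np.array([8.0, 2.0, 0.1, 0.0, 0.0]) / 10.1
    X_fresh  = np.array([7.0, 0.0, 0.0, 1.0, 2.0]) / 10.0
    X = np.where(driver[..., None], X_driver, X_fresh)
    return P, T, X
```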
Johnson and Kercher computed this flow without viscous effects with \(p=1\) and a very fine mesh with spacing \(h=9\times 10^{-5}\) m [2]. In [11], we simulated this flow (also without viscous effects) using a series of triangular grids ranging from very coarse to fine. Stability was maintained across all resolutions. The finer cases predicted the correct diamond-like cellular structure, with a cell length of \(0.055\) m and a cell height of \(0.03\) m [52; 53]. In particular, there were two cells in the vertical direction. Here, we recompute this flow with viscous effects and quadrilateral elements. Specifically, we use Gmsh [54] to first generate structured-type, uniform grids with element sizes of \(2h\), \(8h\), and \(32h\); the cells are then clustered near the top and bottom walls, resulting in smaller mesh spacing in the vertical direction at said walls. Since the grids do not
Figure 5.6: Percent error in mass, energy, and atom conservation for the “EL” case in Figure 5.4, computed with \(p=3\) on 200 elements. The initial conditions for this one-dimensional, multicomponent shock-tube problem are given in Equation (5.4). Also shown is the error in mass conservation for a solution computed with conventional species clipping.
directly account for the circular perturbations in Equation (5.5), the discontinuities in the initial conditions are slightly smoothed using hyperbolic tangent functions. For the remainder of this subsection, "BR2" refers to the adaptive time stepping strategy exactly as described in Section 4.6, whereas "LLF" refers to a similar time stepping strategy, but with the viscous flux function fixed to be the local Lax-Friedrichs-type flux function. The default CFL is set to \(0.3\).
Figure 5.7 presents the distributions of OH mole fraction and temperature obtained from \(p=2\) solutions at \(t=200\)\(\mu\)s computed with LLF. Unsurprisingly, the \(32h\) solution is extremely smeared behind the shock. Nonphysical oscillations are observed near the top and bottom walls. To more clearly illustrate these oscillations, Figure 5.8 (top) zooms in on the temperature field at the bottom wall. The flow is much better resolved in the \(8h\) case according to Figure 5.7. The near-wall oscillations largely disappear, but spurious oscillations are present in the post-shock region, particularly around \(x_{1}=0.1\) m in the mole-fraction field. Figure 5.9 displays the corresponding distributions of OH mole fraction and temperature obtained with BR2, along with a \(2h\) solution. Figure 5.8 (bottom) gives the near-wall temperature distribution for \(32h\). These \(32h\) and \(8h\) solutions are similar to the LLF solutions, but generally free from the aforementioned oscillations, which is why the BR2 flux function is chosen to be the "default" flux function in the adaptive time stepping strategy proposed in Section 4.6. The detonation-front locations are fairly close across all cases. In the \(2h\) solution, the flow topology, including transverse waves, vortices, and triple points, is well-captured.
Figure 5.7: \(p=2\) solution to a two-dimensional moving detonation wave at \(t=200\)\(\mu\)s computed with LLF. The initial conditions are given in Equation (5.5).
Figure 5.8: Temperature distributions obtained with the \(32h\) mesh zoomed in on the bottom wall.
Figure 5.10 presents the maximum-pressure history, \(P^{*}\), where \(P^{*,j+1}(x)=\max\big\{P^{j+1}(x),P^{*,j}(x)\big\}\), for the \(p=2\), BR2 solutions, which reveals the expected cellular structure, with two cells in the vertical direction. The detonation cells in the \(32h\) solution can hardly be discerned due to the excessive smearing.
Figure 5.9: \(p=2\) solution to a two-dimensional moving detonation wave at \(t=200\)\(\mu\)s computed with BR2. The initial conditions are given in Equation (5.5).
The cells in the \(8h\) solution can be clearly discerned, but begin to dissipate towards the right of the domain. Finally, those in the \(2h\) solution remain sharp throughout.
Figure 5.11 presents the percent error in discrete conservation of mass, energy, and atomic elements for BR2, \(32h\) as a representative example, calculated every \(0.200~{}\mu\)s (for a total of \(1000\) samples). \(\mathsf{N}_{O}\), \(\mathsf{N}_{H}\), and \(\mathsf{N}_{Ar}\) denote the total numbers of oxygen, hydrogen, and argon atoms in the mixture. The errors remain close to machine precision throughout the simulation. The fluctuations and slight increases in the error profiles are largely a result of minor numerical-precision issues due to the different orders of magnitude among the state variables. These results confirm that the proposed DG formulation is fully conservative. Also given in Figure 5.11 is the error in mass conservation (calculated every time step) for a solution obtained with conventional species clipping. Artificial viscosity is still employed. The error increases rapidly until the solver diverges. We further note that if only the positivity-preserving limiter is employed (without the entropy limiter), then large-scale temperature undershoots may appear that can hinder convergence of the nonlinear solver in the reaction step. For example, the \(32h\) calculation with the entropy limiter is nearly three times less expensive than a corresponding \(32h\) calculation with solely the positivity-preserving limiter.
Figure 5.10: Maximum-pressure history, \(P^{*}\), where \(P^{*,j+1}(x)=\max\left\{P^{j+1}(x),P^{*,j}(x)\right\}\), for a two-dimensional moving detonation wave at \(t=200~{}\mu\)s computed with \(p=2\), BR2, and a sequence of meshes, where \(h=9\times 10^{-5}\) m. The initial conditions are given in Equation (5.5).
Finally, we recompute the BR2, \(32h\) case with curved elements of quadratic geometric order. Specifically, high-order geometric nodes are first inserted into the straight-sided mesh, after which the midpoint nodes at interior interfaces are perturbed. These perturbations are performed only for \(x_{1}>0.05\) m to ensure the initial conditions are the same. This low-resolution case is computed in order to guarantee that the limiter is frequently activated. Figure 5.12 displays the distributions of OH mole fraction for the linear and curved meshes, which are superimposed. The solution obtained with the curved mesh is stable and extremely similar to that computed with the linear mesh, demonstrating that the proposed formulation is indeed compatible with curved elements.
Figure 5.11: Percent error in mass, energy, and atom conservation for the \(p=2\), BR2, \(32h\) solution. The initial conditions for this two-dimensional hydrogen detonation problem are given in Equation (5.5). Also given is the error in mass conservation for a solution obtained with conventional species clipping.
Figure 5.12: OH mole-fraction field for a two-dimensional moving detonation wave at \(t=200\)\(\mu\)s computed with \(p=2\), BR2, and \(32h\), where \(h=9\times 10^{-5}\) m, on linear and curved meshes. The curved mesh, which is of quadratic order, is obtained by inserting high-order geometric nodes into the linear mesh and perturbing said nodes. The initial conditions are given in Equation (5.5).
### Three-dimensional shock/mixing-layer interaction
In this section, we compute a three-dimensional chemically reacting mixing layer that intersects an oblique shock. This test case was first presented in [55], which built on the configuration introduced in [56]. The mesh and flow parameters are slightly different from those in [55].
Figure 5.13 displays a two-dimensional schematic of the flow configuration. Supersonic inflow is applied at the left boundary, and extrapolation is applied at the right boundary. Flow parameters for the incoming air and fuel are listed in Table 1. Slip-wall conditions are applied at the top and bottom walls since it is not necessary to capture the boundary layers. We employ the detailed reaction mechanism from Westbrook [51].
To connect the fuel and air streams, we utilize a hyperbolic tangent function for prescribing the species, temperature, and normal direction velocity with a constant pressure specification,
\[Y_{i}(x_{1},\ldots,x_{d},t) = \frac{1}{2}\left(\left(Y_{i,F}+Y_{i,O}\right)+\left(Y_{i,F}-Y_{i,O}\right)\tanh\left(\frac{\left(2\left(x_{2}-h\left(x_{1},\ldots,x_{d},t\right)\right)\right)}{L\left(x_{1},\ldots,x_{d},t\right)}\right)\right)\]
\[T(x_{1},\ldots,x_{d},t) = \frac{1}{2}\left(\left(T_{F}+T_{O}\right)+\left(T_{F}-T_{O}\right)\tanh\left(\frac{\left(2\left(x_{2}-h\left(x_{1},\ldots,x_{d},t\right)\right)\right)}{L\left(x_{1},\ldots,x_{d},t\right)}\right)\right)\]
\[v_{1}(x_{1},\ldots,x_{d},t) = \frac{1}{2}\left(\left(v_{1,F}+v_{1,O}\right)+\left(v_{1,F}-v_{1,O}\right)\tanh\left(\frac{\left(2\left(x_{2}-h\left(x_{1},\ldots,x_{d},t\right)\right)\right)}{L\left(x_{1},\ldots,x_{d},t\right)}\right)\right) \tag{5.6}\]
\[P = 94232.25\ \text{Pa},\]
\begin{table}
\begin{tabular}{|c|c|c|} \hline **Prescribed Quantity** & **Air Boundary** & **Fuel Boundary** \\ \hline \hline Velocity, \(v_{1}\) [m/s] & 1634 & 973 \\ \hline Temperature, \(T\) [K] & 1475 & 545 \\ \hline \(Y_{O_{2}}\) & 0.278 & 0 \\ \hline \(Y_{N_{2}}\) & 0.552 & 0.95 \\ \hline \(Y_{H_{2}}\) & 0 & 0.05 \\ \hline \(Y_{H_{2}O}\) & 0.17 & 0 \\ \hline \(Y_{H}\) & \(5.6\times 10^{-7}\) & 0 \\ \hline \(Y_{O}\) & \(1.55\times 10^{-4}\) & 0 \\ \hline \(Y_{OH}\) & \(1.83\times 10^{-3}\) & 0 \\ \hline \(Y_{HO_{2}}\) & \(5.1\times 10^{-6}\) & 0 \\ \hline \(Y_{H_{2}O_{2}}\) & \(2.5\times 10^{-6}\) & 0 \\ \hline \end{tabular}
\end{table}
Table 1: Inflow parameters for the three-dimensional shock/mixing-layer interaction. These values are taken from [56].
Figure 5.13: Schematic of the three-dimensional shock/mixing-layer interaction test case.
where \((\cdot)_{O}\) denotes air, \((\cdot)_{F}\) denotes fuel, \(L\) is a length scale, and \(h\) is the center of the hyperbolic tangent. Equation (5.6) is also used to initialize the solution. \(L\) is given by
\[L(x_{1},\ldots,x_{d},t) = L_{s}+\sum_{i=1}^{n_{t}}A_{i}\sin\left(\frac{n_{i}2\pi t}{t_{r}} \right)+l(x_{1},\ldots,x_{d})\] \[l\left(x_{1},\ldots,x_{d}\right) = \sum_{i=1}^{n_{v}}\sin\left(\frac{m_{i}2\pi x_{3}}{z_{h}}\right) \sum_{i=1}^{n_{v}}B_{i}\sin\left(\frac{q_{i}2\pi t}{t_{r}}\right), \tag{5.7}\]
where \(L_{s}=0.05\) mm is the ambient length scale; \(A_{i}\) and \(B_{i}\) are amplitudes; \(n_{i}\), \(m_{i}\), and \(q_{i}\) are wavenumbers; \(z_{h}=0.00144\) m is the domain thickness in the \(x_{3}\)-direction; \(t_{r}=1.142\times 10^{-7}\) s is the flow-through time according to the air velocity; and \(n_{t}=4\). This prescription of \(L\) introduces unsteadiness while maintaining reproducibility. To induce variation in this three-dimensional case, we prescribe \(h\) as
\[h(x_{1},\ldots,x_{d},t) = h_{s}+h_{t}\sin\left(\frac{m_{0}2\pi x_{3}}{z_{h}}\right), \tag{5.8}\]
where \(h_{s}\) is the ambient center of the hyperbolic tangent. Additional information can be found in [55].
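The inflow blending of Equations (5.6)–(5.8) can be sketched in Python as follows; the amplitudes, wavenumbers, and the value of \(h_{t}\) used in the example call are illustrative placeholders since their values are not listed here.

```python
import numpy as np

def blend(phi_F, phi_O, x2, h, L):
    """Hyperbolic-tangent blending of fuel (F) and air (O) values, Eq. (5.6)."""
    return 0.5 * ((phi_F + phi_O) + (phi_F - phi_O) * np.tanh(2.0 * (x2 - h) / L))

def length_scale(x3, t, L_s=0.05e-3, z_h=0.00144, t_r=1.142e-7,
                 A=(1e-5, 5e-6), n=(1, 2), B=(1e-5,), m=(1,), q=(1,)):
    """Unsteady length scale in the spirit of Eq. (5.7); amplitudes/wavenumbers are illustrative."""
    L = L_s + sum(Ai * np.sin(ni * 2 * np.pi * t / t_r) for Ai, ni in zip(A, n))
    l = sum(np.sin(mi * 2 * np.pi * x3 / z_h) for mi in m) * \
        sum(Bi * np.sin(qi * 2 * np.pi * t / t_r) for Bi, qi in zip(B, q))
    return L + l

def center(x3, h_s, h_t, m0=1, z_h=0.00144):
    """Spanwise-varying tanh center, Eq. (5.8)."""
    return h_s + h_t * np.sin(m0 * 2 * np.pi * x3 / z_h)

# Example: blend the streamwise velocity (fuel 973 m/s, air 1634 m/s) at one point.
x2, x3, t = 0.0, 0.0007, 0.0
v1 = blend(973.0, 1634.0, x2, center(x3, h_s=0.0, h_t=1e-5), length_scale(x3, t))
print(v1)
```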
Figure 5.14 shows the specifications in Gmsh [54] used to create an unstructured tetrahedral mesh. For each tuple, the first two values are the \(x_{1}\) and \(x_{2}\) locations of the given point. The third value represents the target mesh size. We select \(a=500\times 10^{-6}\) m, \(b=200\times 10^{-6}\) m, \(c=60\times 10^{-6}\) m, and \(d=120\times 10^{-6}\) m. The mesh is extruded in the \(x_{3}\)-direction from \(x_{3}=0\) to \(x_{3}=z_{h}\), with periodicity applied at the resulting \(x_{1}x_{2}\)-planes.
Figure 5.15 shows isosurfaces corresponding to \(Y_{OH}=0.00017\), superimposed on a numerical Schlieren result sampled along an \(x_{1}x_{2}\)-plane. The \(Y_{OH}\) isosurfaces are colored by pressure to highlight the abrupt compression experienced through the oblique shock. The right image provides a zoomed-in perspective to emphasize the three-dimensional flow features. Roll-up is observed upstream of the oblique shock. The interaction between the shock and the mixing layer causes the generation of smaller-scale compression waves. These results demonstrate that the proposed formulation can capture complex flow features in three dimensions.
Figure 5.14: Diagram of geometry with point locations for mesh construction in Gmsh [54]. The third value in each tuple (\(a\) through \(d\)) is the target mesh size at the respective locations.
## 6 Concluding remarks
In this paper, we developed a fully conservative, positivity-preserving, and entropy-bounded DG formulation for the chemically reacting, compressible Navier-Stokes equations. The formulation builds on the fully conservative, positivity-preserving, and entropy-bounded DG formulation for the chemically reacting, compressible Euler equations that we previously introduced [10; 11]. A key ingredient is the positivity-preserving Lax-Friedrichs-type viscous flux function devised by Zhang [16] for the monocomponent case, which we extended to multicomponent flows with species diffusion in a manner that separates the inviscid and viscous fluxes. This is in contrast with the work by Du and Yang [17], who similarly extended said flux function, but treated the inviscid and viscous fluxes together. We discussed in detail the treatment of boundary conditions and the pressure-equilibrium-preserving techniques by Johnson and Kercher [2], introducing additional constraints on the time step size. Entropy boundedness is enforced only on the convective contribution since the minimum entropy principle applies only to the Euler equations [22; 23] and the viscous flux function is not fully compatible with said entropy bound. Drawing from [16], we proposed an adaptive solution procedure that favors large time step sizes and the BR2 viscous flux function, since the Lax-Friedrichs-type viscous flux function was found to be more prone to spurious oscillations. Small time step sizes and/or the Lax-Friedrichs-type viscous flux function are employed only when necessary. However, it should be noted that the Lax-Friedrichs-type viscous flux function guarantees a finite time step size such that the positivity property is maintained. The proposed methodology is compatible with high-order polynomials and curved elements of arbitrary shape.
The DG methodology was applied to a series of test cases. The first two comprised smooth, one-dimensional flows: advection-diffusion of a thermal bubble and a premixed flame. In the former, optimal convergence was demonstrated for both viscous flux functions. In the latter, we obtained a much more accurate solution on a relatively coarse mesh with the proposed methodology than with conventional species clipping. Next, we computed viscous shock-tube flow and found that just as in the inviscid setting, enforcement of entropy boundedness considerably reduces the magnitude of large-scale instabilities that otherwise appear if only the positivity property is enforced. Finally, we computed two-dimensional, moving, viscous detonation waves and three-dimensional shock/mixing-layer interaction, demonstrating that the proposed formulation can accurately and robustly compute complex reacting flows with detailed chemistry using high-order polynomial approximations. Discrete conservation of mass and total energy was verified. Future work will entail the simulation of larger-scale viscous, chemically reacting flows involving more complex geometries.
## Acknowledgments
This work is sponsored by the Office of Naval Research through the Naval Research Laboratory 6.1 Computational Physics Task Area.
|
2307.02193
|
Directed Poincaré Inequalities and $L^1$ Monotonicity Testing of
Lipschitz Functions
|
We study the connection between directed isoperimetric inequalities and
monotonicity testing. In recent years, this connection has unlocked
breakthroughs for testing monotonicity of functions defined on discrete
domains. Inspired by the rich history of isoperimetric inequalities in continuous
settings, we propose that studying the relationship between directed
isoperimetry and monotonicity in such settings is essential for understanding
the full scope of this connection.
Hence, we ask whether directed isoperimetric inequalities hold for functions
$f : [0,1]^n \to \mathbb{R}$, and whether this question has implications for
monotonicity testing. We answer both questions affirmatively. For Lipschitz
functions $f : [0,1]^n \to \mathbb{R}$, we show the inequality
$d^{\mathsf{mono}}_1(f) \lesssim \mathbb{E}\left[\|\nabla^- f\|_1\right]$,
which upper bounds the $L^1$ distance to monotonicity of $f$ by a measure of
its "directed gradient". A key ingredient in our proof is the monotone
rearrangement of $f$, which generalizes the classical "sorting operator" to
continuous settings. We use this inequality to give an $L^1$ monotonicity
tester for Lipschitz functions $f : [0,1]^n \to \mathbb{R}$, and this framework
also implies similar results for testing real-valued functions on the
hypergrid.
|
Renato Ferreira Pinto Jr
|
2023-07-05T10:41:21Z
|
http://arxiv.org/abs/2307.02193v1
|
# Directed Poincaré Inequalities and \(L^{1}\) Monotonicity Testing of Lipschitz Functions
###### Abstract
We study the connection between directed isoperimetric inequalities and monotonicity testing. In recent years, this connection has unlocked breakthroughs for testing monotonicity of functions defined on discrete domains. Inspired by the rich history of isoperimetric inequalities in continuous settings, we propose that studying the relationship between directed isoperimetry and monotonicity in such settings is essential for understanding the full scope of this connection.
Hence, we ask whether directed isoperimetric inequalities hold for functions \(f:[0,1]^{n}\to\mathbb{R}\), and whether this question has implications for monotonicity testing. We answer both questions affirmatively. For Lipschitz functions \(f:[0,1]^{n}\to\mathbb{R}\), we show the inequality \(d_{1}^{\text{mono}}(f)\lesssim\mathbb{E}\left[\|\nabla^{-}f\|_{1}\right]\), which upper bounds the \(L^{1}\) distance to monotonicity of \(f\) by a measure of its "directed gradient". A key ingredient in our proof is the _monotone rearrangement_ of \(f\), which generalizes the classical "sorting operator" to continuous settings. We use this inequality to give an \(L^{1}\) monotonicity tester for Lipschitz functions \(f:[0,1]^{n}\to\mathbb{R}\), and this framework also implies similar results for testing real-valued functions on the hypergrid.
## 1 Introduction
In property testing, algorithms must make a decision about whether a function \(f:\Omega\to R\) has some property \(\mathcal{P}\), or is _far_ (under some distance metric) from having that property, using a small number of queries to \(f\). One of the most well-studied problems in property testing is _monotonicity testing_, the hallmark case being that of testing monotonicity of Boolean functions on the Boolean cube, \(f:\{0,1\}^{n}\to\{0,1\}\). We call \(f\) monotone if \(f(x)\leq f(y)\) whenever \(x\preceq y\), i. e. \(x_{i}\leq y_{i}\) for every \(i\in[n]\).
A striking trend emerging from this topic of research has been the connection between monotonicity testing and _isoperimetric inequalities_, in particular directed analogues of classical results such as Poincare and Talagrand inequalities. We preview that the focus of this work is to further explore this connection by establishing directed isoperimetric inequalities for functions \(f:[0,1]^{n}\to\mathbb{R}\) with continuous domain and range, and as an application obtain monotonicity testers in such settings. Before explaining our results, let us briefly summarize the connection between monotonicity testing and directed isoperimetry.
For a function \(f:\{0,1\}^{n}\to\mathbb{R}\), let \(d_{1}^{\mathsf{const}}(f)\) denote its \(L^{1}\) distance to any constant function \(g:\{0,1\}^{n}\to\mathbb{R}\), and for any point \(x\), define its discrete gradient \(\nabla f(x)\in\mathbb{R}^{n}\) by \((\nabla f(x))_{i}:=f(x^{i\to 1})-f(x^{i\to 0})\) for each \(i\in[n]\), where \(x^{i\to b}\) denotes the point \(x\) with its \(i\)-th coordinate set to \(b\). Then the following inequality1 is usually called the Poincare inequality on the Boolean cube (see e. g. [10]): for every \(f:\{0,1\}^{n}\to\{0,1\}\),
Footnote 1: The left-hand side is usually written \(\operatorname{Var}\left[f\right]\) instead; for Boolean functions, the two quantities are equivalent up to a constant factor, and writing \(d_{1}^{\mathsf{const}}(f)\) is more consistent with the rest of our presentation.
\[d_{1}^{\mathsf{const}}(f)\lesssim\mathbb{E}\left[\|\nabla f\|_{1}\right]\,. \tag{1}\]
(Here and going forward, we write \(f\lesssim g\) to denote that \(f\leq cg\) for some universal constant \(c\), and similarly for \(f\gtrsim g\). We write \(f\approx g\) to denote that \(f\lesssim g\) and \(g\lesssim f\).)
Now, let \(d_{1}^{\mathsf{mono}}(f)\) denote the \(L^{1}\) distance from \(f\) to any monotone function \(g:\{0,1\}^{n}\to\mathbb{R}\), and for each point \(x\) let \(\nabla^{-}f(x)\), which we call the _directed gradient_ of \(f\), be given by \(\nabla^{-}f(x):=\min\{\nabla f(x),0\}\). Then [11] were the first to notice that the main ingredient of the work of [1], who gave a monotonicity tester for Boolean functions on the Boolean cube with query complexity \(O(n/\epsilon)\), was the following "directed analogue" of (1)2: for every \(f:\{0,1\}^{n}\to\{0,1\}\),
Footnote 2: Typically the left-hand side would be the distance to a _Boolean_ monotone function, rather than any real-valued monotone function, but the two quantities are equal; this may be seen via a maximum matching of violating pairs of \(f\), see [12].
\[d_{1}^{\mathsf{mono}}(f)\lesssim\mathbb{E}\left[\|\nabla^{-}f\|_{1}\right]\,. \tag{2}\]
The tester of [1] is the "edge tester", which samples edges of the Boolean cube uniformly at random and rejects if any sampled edge violates monotonicity. Inequality (2) shows that, if \(f\) is far from monotone, then many edges are violating, so the tester stands a good chance of finding one.
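As an illustration of how simple this tester is, here is a short sketch in Python (our own code, with an illustrative query budget); it samples uniformly random edges of the Boolean cube and rejects as soon as a sampled edge violates monotonicity.

```python
import numpy as np

rng = np.random.default_rng(0)

def edge_tester(f, n, num_queries):
    """Accept (True) unless some sampled edge (x^{i->0}, x^{i->1}) satisfies
    f(x^{i->0}) > f(x^{i->1}), i.e. violates monotonicity."""
    for _ in range(num_queries):
        x = rng.integers(0, 2, size=n)
        i = rng.integers(0, n)
        lo, hi = x.copy(), x.copy()
        lo[i], hi[i] = 0, 1
        if f(lo) > f(hi):
            return False
    return True

# The anti-dictator f(x) = 1 - x_1 is far from monotone and is rejected w.h.p.
print(edge_tester(lambda x: 1 - x[0], n=8, num_queries=100))
```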
In their breakthrough work, [11] gave the first monotonicity tester with \(o(n)\) query complexity by showing a directed analogue of Margulis's inequality. This was improved by [10], and eventually the seminal paper of [12] resolved the problem of (nonadaptive) monotonicity testing of Boolean functions on the Boolean cube, up to polylogarithmic factors, by giving a tester with query complexity \(\widetilde{O}(\sqrt{n}/\epsilon^{2})\). The key ingredient was to show a directed analogue of _Talagrand's inequality_. Talagrand's inequality gives that, for every \(f:\{0,1\}^{n}\to\{0,1\}\),
\[d_{1}^{\mathsf{const}}(f)\lesssim\mathbb{E}\left[\|\nabla f\|_{2}\right]\,.\]
Compared to (1), this replaces the \(\ell^{1}\)-norm of the gradient with its \(\ell^{2}\)-norm. [10] showed the natural directed analogue3 up to polylogarithmic factors, which were later removed by [14]: for every \(f:\{0,1\}^{n}\to\{0,1\}\),
Footnote 3: In fact, they require a _robust_ version of this inequality, but we omit that discussion for simplicity.
\[d_{1}^{\mathsf{mono}}(f)\lesssim\mathbb{E}\left[\|\nabla^{-}f\|_{2}\right]\,.\]
Since then, directed isoperimetric inequalities have also unlocked results in monotonicity testing of Boolean functions on the hypergrid [1, 1, 1, 2, 1] (see also [1, 10]) and real-valued functions on the Boolean cube [1].
Our discussion so far has focused on isoperimetric (_Poincare-type_) inequalities on _discrete_ domains. On the other hand, a rich history in geometry and functional analysis, originating in continuous settings, has established an array of isoperimetric inequalities for functions defined on continuous domains, as well as an impressive range of connections to topics such as partial differential equations [13], Markov diffusion processes [1], probability theory and concentration of measure [1], optimal transport [1], polynomial approximation [21], among others. (See Appendix A for a brief background on Poincare-type inequalities.)
As a motivating starting point, we note that for suitably smooth (Lipschitz) functions \(f:[0,1]^{n}\to\mathbb{R}\), an \(L^{1}\) Poincare-type inequality holds [1]:
\[d_{1}^{\mathsf{const}}(f)\lesssim\mathbb{E}\left[\|\nabla f\|_{2}\right]\,. \tag{3}\]
Thus, understanding the full scope of the connection between classical isoperimetric inequalities, their directed counterparts, and monotonicity seems to suggest the study of the continuous setting. In this work, we ask: do _directed_ Poincare-type inequalities hold for functions \(f\) with continuous domain and range? And if so, do such inequalities have any implications for monotonicity testing? We answer both questions affirmatively: Lipschitz functions \(f:[0,1]^{n}\to\mathbb{R}\) admit a directed \(L^{1}\) Poincare-type inequality (Theorem 1.2), and this inequality implies an upper bound on the query complexity of testing monotonicity of such functions with respect to the \(L^{1}\) distance (Theorem 1.4). (We view \(L^{1}\) as the natural distance metric for the continuous setting; see Section 1.3 for a discussion.) This framework also yields results for \(L^{1}\) testing monotonicity of real-valued functions on the hypergrid \(f:[m]^{n}\to\mathbb{R}\). Our testers are _partial derivative testers_, which naturally generalize the classical _edge testers_[1, 1] to continuous domains.
We now introduce our model, and then summarize our results.
### 1.1 \(L^{p}\)-testing
Let \((\Omega,\Sigma,\mu)\) be a probability space (typically for us, the unit cube or hypergrid with associated uniform probability distribution). Let \(R\subseteq\mathbb{R}\) be a range, and \(\mathcal{P}\) a property of functions \(g:\Omega\to R\). Given a function \(f:\Omega\to\mathbb{R}\), we denote the \(L^{p}\) distance of \(f\) to property \(\mathcal{P}\) by \(d_{p}(f,\mathcal{P}):=\inf_{g\in\mathcal{P}}d_{p}(f,g)\), where \(d_{p}(f,g):=\mathop{\mathbb{E}}_{x\sim\mu}\left[|f(x)-g(x)|^{p}\right]^{1/p}\). For fixed domain \(\Omega\), we write \(d_{p}^{\mathsf{const}}(f)\) for the \(L^{p}\) distance of \(f\) to the property of constant functions, and \(d_{p}^{\mathsf{mono}}(f)\) for the \(L^{p}\) distance of \(f\) to the property of monotone functions. (See Definition 2.2 for a formal definition contemplating e. g. the required measurability and integrability assumptions.)
**Definition 1.1** (\(L^{p}\)-testers).: Let \(p\geq 1\). For probability space \((\Omega,\Sigma,\mu)\), range \(R\subseteq\mathbb{R}\), property \(\mathcal{P}\subseteq L^{p}(\Omega,\mu)\) of functions \(g:\Omega\to R\), and proximity parameter \(\epsilon>0\), we say that randomized algorithm \(A\) is an _\(L^{p}\)-tester for \(\mathcal{P}\)_ with query complexity \(q\) if, given _oracle access_ to an unknown input function \(f:\Omega\to R\in L^{p}(\Omega,\mu)\), \(A\) makes at most \(q\) oracle queries and \(1)\) accepts with probability at least \(2/3\) if \(f\in\mathcal{P}\); \(2)\) rejects with probability at least \(2/3\) if \(d_{p}(f,\mathcal{P})>\epsilon\).
We say that \(A\) has _one-sided error_ if it accepts functions \(f\in\mathcal{P}\) with probability \(1\), otherwise we say it has _two-sided error_. It is _nonadaptive_ if it decides all of its queries in advance (i. e. before seeing output from the oracle), and otherwise it is _adaptive_. We consider two types of oracle:
**Value oracle:** Given point \(x\in\Omega\), this oracle outputs the value \(f(x)\).
**Directional derivative oracle:** Given point \(x\in\Omega\) and vector \(v\in\mathbb{R}^{n}\), this oracle outputs the derivative of \(f\) along \(v\) at point \(x\), given by \(\frac{\partial f}{\partial v}(x)=v\cdot\nabla f(x)\), as long as \(f\) is differentiable at \(x\). Otherwise, it outputs a special symbol \(\perp\).
A directional derivative oracle is weaker than a full first-order oracle, which would return the entire gradient [10], and it seems to us like a reasonable model for the high-dimensional setting; for example, obtaining the full gradient costs \(n\) queries, rather than a single query. This type of oracle has also been studied in optimization research, e. g. see [11]. For our applications, only the _sign_ of the result will matter, in which case we remark that, for sufficiently smooth functions (say, functions with bounded second derivatives) each directional derivative query may be simulated using two value queries on sufficiently close together points.
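To make the last remark concrete, the following sketch (our own; the step size is an arbitrary illustrative choice) estimates the sign of an axis-aligned directional derivative from two value queries at nearby points.

```python
import numpy as np

def partial_sign(f, x, i, delta=1e-6):
    """Sign of the i-th partial derivative of f at x, estimated from two
    value queries; reliable when f has bounded second derivatives and the
    true derivative is not vanishingly small compared with delta."""
    e = np.zeros_like(x)
    e[i] = delta
    return np.sign(f(x + e) - f(x - e))

f = lambda x: x[0] ** 2 - 0.5 * x[1]          # a toy Lipschitz function
x = np.array([0.3, 0.7])
print(partial_sign(f, x, 0), partial_sign(f, x, 1))   # 1.0 -1.0
```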
Our definition (with value oracle) coincides with that of [10] when the range is \(R=[0,1]\). On the other hand, for general \(R\), we keep the distance metric unmodified, whereas [10] normalize it by the magnitude of \(R\). Intuitively, we seek testers that are efficient even when \(f\) may take large values as the dimension \(n\) grows; see Section 1.3.3 for more details.
### 1.2 Results and main ideas
#### 1.2.1 Directed Poincare-type inequalities
Our first result is a directed Poincare inequality for Lipschitz functions \(f:[0,1]^{n}\to\mathbb{R}\), which may be seen as the continuous analogue of inequality (2) of [12].
**Theorem 1.2**.: _Let \(f:[0,1]^{n}\to\mathbb{R}\) be a Lipschitz function with monotone rearrangement \(f^{*}\). Then_
\[d_{1}^{\mathsf{mono}}(f)\approx\mathbb{E}\left[|f-f^{*}|\right]\lesssim \mathbb{E}\left[\|\nabla^{-}f\|_{1}\right]\,. \tag{4}\]
As hinted in the statement, a crucial tool for this result is the _monotone rearrangement_\(f^{*}\) of \(f\). We construct \(f^{*}\) by a sequence of axis-aligned rearrangements \(R_{1},\ldots,R_{n}\); each \(R_{i}\) is the _non-symmetric monotone rearrangement_ operator along dimension \(i\), which naturally generalizes the _sorting_ operator of [12] to the continuous case. For each coordinate \(i\in[n]\), the operator \(R_{i}\) takes \(f\) into an equimeasurable function \(R_{i}f\) that is monotone in the \(i\)-th coordinate, at a "cost" \(\mathbb{E}\left[|f-R_{i}f|\right]\) that is upper bounded by \(\mathbb{E}\left[|\partial_{i}^{-}f|\right]\), where \(\partial_{i}^{-}f:=(\nabla^{-}f)_{i}\) is the directed partial derivative along the \(i\)-th coordinate. We show that each application \(R_{i}\) can only decrease the "cost" associated with further applications \(R_{j}\), so that the total cost of obtaining \(f^{*}\) (i. e. the LHS of (4)) may be upper bounded, via the triangle inequality, by the sum of all directed partial derivatives, i. e. the RHS of (4).
A technically simpler version of this argument also yields a directed Poincare inequality for real-valued functions on the hypergrid. We also note that Theorems 1.2 and 1.3 are both tight up to constant factors.
**Theorem 1.3**.: _Let \(f:[m]^{n}\to\mathbb{R}\) and let \(f^{*}\) be its monotone rearrangement. Then_
\[d_{1}^{\mathsf{mono}}(f)\approx\mathbb{E}\left[|f-f^{*}|\right]\lesssim m \mathbb{E}\left[\|\nabla^{-}f\|_{1}\right]\,.\]
Table 1 places our results in the context of existing classical and directed inequalities. In that table and going forward, for any \(p,q\geq 1\) we call the inequalities
\[d_{p}^{\mathsf{const}}(f)^{p}\lesssim\mathbb{E}\left[\|\nabla f\|_{q}^{p}\right] \qquad\text{and}\qquad d_{p}^{\mathsf{mono}}(f)^{p}\lesssim\mathbb{E}\left[\| \nabla^{-}f\|_{q}^{p}\right]\]
a _classical_ and _directed \((L^{p},\ell^{q})\)-Poincare inequality_, respectively. Note that the \(L^{p}\) notation refers to the space in which we take norms, while \(\ell^{q}\) refers to the geometry in which we measure gradients. In this paper, we focus on the \(L^{1}\) inequalities. See also Appendix A for an extended version of Table 1 including other related hypergrid inequalities shown in recent work.
We also note that we have ignored in our discussion the issues of _robust_ inequalities, which seem essential for some of the testing applications (see [10]), and the distinction between _inner_ and _outer boundary_, whereby some inequalities on Boolean \(f\) may be made stronger by setting \(\nabla f(x)=0\) when \(f(x)=0\) (see e. g. [12]). We refer the reader to the original works for the strongest version of each inequality and a detailed treatment of these issues.
#### 1.2.2 Testing monotonicity on the unit cube and hypergrid
Equipped with the results above, we give a monotonicity tester for Lipschitz functions \(f:[0,1]^{n}\to\mathbb{R}\), and the same technique yields a tester for functions on the hypergrid as well. The testers are parameterized by an upper bound \(L\) on the best Lipschitz constant of \(f\) in \(\ell^{1}\) geometry, which we denote \(\mathsf{Lip}_{1}(f)\) (see Definition 2.1 for a formal definition).
Both of our testers are _partial derivative testers_. These are algorithms which only have access to a directional derivative oracle and, moreover, their queries are promised to be axis-aligned vectors. In the discrete case, these are usually called _edge testers_[1, 13, 14].
**Theorem 1.4**.: _There is a nonadaptive partial derivative \(L^{1}\) monotonicity tester for Lipschitz functions \(f:[0,1]^{n}\to\mathbb{R}\) satisfying \(\mathsf{Lip}_{1}(f)\leq L\) with query complexity \(O\left(\frac{nL}{\epsilon}\right)\) and one-sided error._
_Similarly, there is a nonadaptive partial derivative \(L^{1}\) monotonicity tester for functions \(f:[m]^{n}\to\mathbb{R}\) satisfying \(\mathsf{Lip}_{1}(f)\leq L\) with query complexity \(O\left(\frac{nmL}{\epsilon}\right)\) and one-sided error._
The testers work by sampling points \(x\) and coordinates \(i\in[n]\) uniformly at random, and using directional derivative queries to reject if \(\partial_{i}^{-}f(x)<0\). Their correctness is shown using Theorems 1.2 and 1.3, which imply that, when \(f\) is \(\epsilon\)-far from monotone in \(L^{1}\)-distance, the total magnitude of its negative partial derivatives must be large, and since each partial derivative is at most \(L\) by assumption, the values \(\partial_{i}^{-}f(x)\) must be strictly negative on a set of large measure, which the tester stands a good chance of hitting with the given query complexity.
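A minimal sketch of this procedure on the unit cube is given below; the derivative oracle is passed in as a callable, and the constant in the query budget is an illustrative choice rather than the exact constant from our analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

def pd_monotonicity_tester(partial_oracle, n, L, eps, C=2):
    """One-sided partial derivative tester sketch for Lipschitz f on [0,1]^n.
    partial_oracle(x, i) should return the i-th partial derivative of f at x
    (an axis-aligned directional derivative query). The budget C*n*L/eps
    mirrors the O(nL/eps) bound of Theorem 1.4."""
    num_queries = int(np.ceil(C * n * L / eps))
    for _ in range(num_queries):
        x = rng.random(n)             # uniform point in [0,1]^n
        i = rng.integers(0, n)        # uniform coordinate
        if partial_oracle(x, i) < 0:  # a witness that f is not monotone
            return False
    return True

# Toy input: f(x) = sum(x) - 2 x_1 is Lipschitz and far from monotone.
oracle = lambda x, i: -1.0 if i == 0 else 1.0
print(pd_monotonicity_tester(oracle, n=5, L=1.0, eps=0.1))   # rejects w.h.p.
```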
\begin{table}
\begin{tabular}{l|c||c|c|c}
\multicolumn{2}{c||}{**Setting**} & \multicolumn{2}{c|}{**Discrete**} & **Continuous** \\
\multicolumn{2}{c||}{**Inequality**} & \(\{0,1\}^{n}\to\{0,1\}\) & \(\{0,1\}^{n}\to\mathbb{R}\) & \([0,1]^{n}\to\mathbb{R}\) \\ \hline \hline
\multirow{2}{*}{\((L^{1},\ell^{1})\)-Poincaré} & \(d_{1}^{\mathsf{const}}(f)\lesssim\mathbb{E}\left[\|\nabla f\|_{1}\right]\) & * [12] & * [12] & * [12] \\ \cline{2-5}
 & \(d_{1}^{\mathsf{mono}}(f)\lesssim\mathbb{E}\left[\|\nabla^{-}f\|_{1}\right]\) & [13] & [14] & Theorem 1.2 \\ \hline
\multirow{2}{*}{\((L^{1},\ell^{2})\)-Poincaré} & \(d_{1}^{\mathsf{const}}(f)\lesssim\mathbb{E}\left[\|\nabla f\|_{2}\right]\) & * [12] & [12] & [12] \\ \cline{2-5}
 & \(d_{1}^{\mathsf{mono}}(f)\lesssim\mathbb{E}\left[\|\nabla^{-}f\|_{2}\right]\) & [10] & ? & Conjecture 1.8 \\
\end{tabular}
\end{table}
Table 1: Classical and directed Poincaré-type inequalities on discrete and continuous domains. Cells marked with * indicate inequalities that follow from another entry in the table.
#### 1.2.3 Testing monotonicity on the line
The results above, linking a Poincare-type inequality with a monotonicity tester that uses partial derivative queries and has linear dependence on \(n\), seem to suggest a close parallel with the case of the edge tester on the Boolean cube [1, 13]. On the other hand, we also show a strong separation between Hamming and \(L^{1}\) testing. Focusing on the simpler problem of monotonicity testing _on the line_, we show that the tight query complexity of \(L^{1}\) monotonicity testing Lipschitz functions grows with the square root of the size of the (continuous or discrete) domain:
**Theorem 1.5**.: _There exist nonadaptive \(L^{1}\) monotonicity testers for Lipschitz functions \(f:[0,m]\to\mathbb{R}\) and \(f:[m]\to\mathbb{R}\) satisfying \(\mathsf{Lip}_{1}(f)\leq L\) with query complexity \(\widetilde{O}\left(\sqrt{mL/\epsilon}\right)\). The testers use value queries and have one-sided error._
This result (along with the near-tight lower bounds in Section 1.2.4) is in contrast with the case of Hamming testing functions \(f:[m]\to\mathbb{R}\), which has sample complexity \(\Theta(\log m)\)[1, 14, 15]. Intuitively, this difference arises because a Lipschitz function may violate monotonicity with rate of change \(L\), so the area under the curve may grow quadratically on violating regions. The proof is in fact a reduction to the Hamming case, using the Lipschitz assumption to establish a connection between the \(L^{1}\) and Hamming distances to monotonicity.
#### 1.2.4 Lower bounds
We give two types of lower bounds: under no assumptions about the tester and for constant \(n\), we show that the dependence of Theorem 1.4 on \(L/\epsilon\) is close to optimal4. We give stronger bounds for the special case of partial derivative testers (such as the ones from Theorem 1.4), essentially showing that our analysis of the partial derivative tester is tight.
Footnote 4: Note that one may always multiply the input values by \(1/L\) to reduce the problem to the case with Lipschitz constant \(1\) and proximity parameter \(\epsilon/L\), so this is the right ratio to look at.
**Theorem 1.6**.: _Let \(n\) be a constant. Any \(L^{1}\) monotonicity tester (with two-sided error, and adaptive value and directional derivative queries) for Lipschitz functions \(f:[0,1]^{n}\to\mathbb{R}\) satisfying \(\mathsf{Lip}_{1}(f)\leq L\) requires at least \(\Omega\left((L/\epsilon)^{\frac{n}{n+1}}\right)\) queries._
_Similarly, any \(L^{1}\) monotonicity tester (with two-sided error and adaptive queries) for functions \(f:[m]^{n}\to\mathbb{R}\) satisfying \(\mathsf{Lip}_{1}(f)\leq L\) requires at least \(\Omega\left(\min\left\{(mL/\epsilon)^{\frac{n}{n+1}},m^{n}\right\}\right)\) queries._
Notice that the bounds above cannot be improved beyond logarithmic factors, due to the upper bounds for the line in Theorem 1.5. It also follows that adaptivity (essentially) does not help with \(L^{1}\) monotonicity testing on the line, matching the situation for Hamming testing [15, 13, 16].
Theorem 1.6 is obtained via a "hole" construction, which hides a non-monotone region of \(f\) inside an \(\ell^{1}\)-ball \(B\) of radius \(r\). We choose \(r\) such that the violations of monotonicity inside \(B\) are large enough to make \(f\) \(\epsilon\)-far from monotone, but at the same time, the ball \(B\) is hard to find using few queries. However, this construction has poor dependence on \(n\).
To lower bound the query complexity of partial derivative testers with better dependence on \(n\), we employ a simpler "step" construction, which essentially chooses a coordinate \(i\) and hides a small negative-slope region on every line along coordinate \(i\). These functions are far from monotone, but a partial derivative tester must correctly guess both \(i\) and the negative-slope region to detect them. We conclude that Theorem 1.4 is optimal for partial derivative testers on the unit cube, and optimal for edge testers on the hypergrid for constant \(\epsilon\) and \(L\):
**Theorem 1.7**.: _Any partial derivative \(L^{1}\) monotonicity tester for Lipschitz functions \(f:[0,1]^{n}\to\mathbb{R}\) satisfying \(\mathsf{Lip}_{1}(f)\leq L\) (with two-sided error and adaptive queries) requires at least \(\Omega(nL/\epsilon)\) queries._
_For sufficiently small constant \(\epsilon\) and constant \(L\), any partial derivative \(L^{1}\) monotonicity tester for functions \(f:[m]^{n}\to\mathbb{R}\) satisfying \(\mathsf{Lip}_{1}(f)\leq L\) (with two-sided error and adaptive queries) requires at least \(\Omega(nm)\) queries._
Table 2 summarizes our upper and lower bounds for testing monotonicity on the unit cube and hypergrid, along with the analogous Hamming testing results for intuition and bounds for \(L^{1}\) testing from prior works. See Section 1.3.3 and Appendices B and C for a discussion and details of how prior works imply the results in that table, since to our knowledge the problem of \(L^{1}\) monotonicity testing parameterized by the Lipschitz constant has not been explicitly studied before. See also Section 7 for a broader overview of prior works on a spectrum of monotonicity testing models.
### 1.3 Discussion and open questions
#### 1.3.1 Stronger directed Poincare inequalities?
Classical Poincare inequalities are usually of the \(\ell^{2}\) form, which seems natural e. g. due to basis independence. On the other hand, in the directed setting, the weaker \(\ell^{1}\) inequalities (as in [1] and Theorems 1.2 and 1.3) have more straightforward proofs than \(\ell^{2}\) counterparts such as [13]. A perhaps related observation is that monotonicity is _not_ a basis-independent concept, since it is defined in terms of the standard basis. It is not obvious whether directed \(\ell^{2}\) inequalities ought to hold in every (real-valued, continuous) setting. Nevertheless, in light of the parallels and context established thus far, we are hopeful that such an inequality does hold. Otherwise, we believe that the reason should be illuminating. For now, we conjecture:
\begin{table}
\begin{tabular}{c||c||c|c}
**Domain** & **Hamming testing** & \(L^{1}\)**-testing** (prior works) & \(L^{1}\)**-testing** (this work) \\
 & \(f:\Omega\to\mathbb{R}\) & \(f:\Omega\to\mathbb{R}\), \(\mathsf{Lip}_{1}(f)\leq L\) & \(f:\Omega\to\mathbb{R}\), \(\mathsf{Lip}_{1}(f)\leq L\) \\ \hline \hline
\(\Omega=[0,1]^{n}\) & \multirow{2}{*}{Infeasible} & \(\widetilde{O}\left(\frac{n^{2}L}{\epsilon}\right)\) (*) [1] & \(O\left(\frac{nL}{\epsilon}\right)\) p.d.t. \\ \cline{3-4}
 & & & \(\Omega\left(\left(\frac{L}{\epsilon}\right)^{\frac{n}{n+1}}\right)\) const. \(n\) \\
 & & — & \(\Omega\left(\frac{nL}{\epsilon}\right)\) p.d.t. \\ \hline \hline
\(\Omega=[m]^{n}\) & \(O\left(\frac{n\log m}{\epsilon}\right)\) [15] & \(\widetilde{O}\left(\frac{n^{2}mL}{\epsilon}\right)\) (*) [1] & \(O\left(\frac{nmL}{\epsilon}\right)\) p.d.t. \\ \cline{2-4}
 & \(\Omega\left(\frac{n\log(m)-\log(1/\epsilon)}{\epsilon}\right)\) [15] & \(\widetilde{\Omega}\left(\frac{L}{\epsilon}\right)\) n.a. 1-s. [1] & \(\Omega\left(\left(\frac{mL}{\epsilon}\right)^{\frac{n}{n+1}}\right)\) const. \(n\) \\
 & & \(\Omega(n\log m)\) n.a. [1] & \(\Omega(nm)\) p.d.t. \\
\end{tabular}
\end{table}
Table 2: Query complexity bounds for testing monotonicity on the unit cube and hypergrid. Upper bounds are for nonadaptive (n.a.) algorithms with one-sided error (1-s.), and lower bounds are for adaptive algorithms with two-sided error, unless stated otherwise. For \(L^{1}\)-testing, the upper bounds derived from prior works (*) are specialized to the Lipschitz case by us; see the text for details. Our lower bounds hold either for constant (const.) \(n\), or for partial derivative testers (p.d.t.).
**Conjecture 1.8**.: _For every Lipschitz function \(f:[0,1]^{n}\to\mathbb{R}\), it holds that_
\[d_{1}^{\mathsf{mono}}(f)\lesssim\mathbb{E}\left[\|\nabla^{-}f\|_{2}\right]\,.\]
Accordingly, we also ask whether an \(L^{1}\) tester with \(O(\sqrt{n})\) complexity exists, presumably with a dependence on the \(\mathsf{Lip}_{2}(f)\) constant rather than \(\mathsf{Lip}_{1}(f)\) since \(\ell^{2}\) is the relevant geometry above.
#### 1.3.2 Query complexity bounds
Our lower bounds either have weak dependence on \(n\), or only apply to a specific family of algorithms (partial derivative testers). Previous works have established tester-independent lower bounds with strong dependence on \(n\) by using reductions from communication complexity [1, 10], whose translation to the continuous setting is not obvious5, by reduction to comparison-based testers [13], whose connection to the \(L^{1}\) testing setting seems less immediate, or directly via a careful construction [1]. We believe that finding strong tester-independent lower bounds for \(L^{1}\) testing Lipschitz functions on the unit cube is an interesting direction for further study.
Footnote 5: Note that there is no obvious reduction from testing on the hypergrid to testing on the unit cube—one idea is to simulate the unit cube tester on a multilinear interpolation of the function defined on the hypergrid, but the challenge is that simulating each query to the unit cube naively requires an exponential number of queries to the hypergrid.
We also remark that even a tight lower bound matching Theorem 1.4 may not rule out testers with better dependence on \(n\) if, for example, such a tester were parameterized by \(\mathsf{Lip}_{2}(f)\), which can be a factor of \(\sqrt{n}\) larger than \(\mathsf{Lip}_{1}(f)\). We view the possibility of better testers on the unit cube, or otherwise a conceptual separation with [11], as an exciting direction for future work.
#### 1.3.3 Relation to prior work on \(L^{p}\)-testing
[10] initiated the systematic study of \(L^{p}\)-testing and, most relevant to the present work, established the first (and, to our knowledge, only) results on \(L^{p}\) testing of the monotonicity property, on the hypergrid and on the discrete line. While our models are broadly compatible, a subtle but crucial distinction must be explained.
[10] focused their exposition on the case of functions \(f:\Omega\to[0,1]\), and in this regime, \(L^{1}\) testing can only be easier than Hamming testing, which they show via a reduction based on Boolean threshold functions. On the other hand, for functions with other ranges, say \(f:\Omega\to[a,b]\), their definition normalizes the notion of distance by a factor of \(\frac{1}{b-a}\). In our terminology, letting \(r:=b-a\) and \(g:=f/r\), it follows that \(d_{1}(g)=d_{1}(f)/r\), so testing \(f\) with proximity parameter \(\epsilon\) reduces to testing \(g\) with proximity parameter \(\epsilon/r\). For Hamming testers with query complexity that depends linearly on \(1/\epsilon\), this amounts to paying a factor of \(r\) in the reduction to the Boolean case6. This loss is indeed necessary, because by the same reasoning, testing \(g\) with proximity parameter \(\epsilon\) reduces to testing \(f\) with proximity parameter \(r\epsilon\). Therefore the problems of testing \(f\) with proximity parameter \(\epsilon\) and testing \(f/r\) with proximity parameter \(\epsilon/r\) have the same query complexity.
Footnote 6: This factor can also be tracked explicitly in the characterization of the \(L^{1}\) distance to monotonicity of [10]: it arises in Lemmas 2.1 and 2.2, where an integral from \(0\) to \(1\) must be changed to an integral from \(a\) to \(b\), so the best threshold function is only guaranteed to be \(\epsilon/r\)-far from monotone.
In this work, we do not normalize the distance metric by \(r\); we would like to handle functions \(f\) that may take large values as the dimension \(n\) grows, as long as \(f\) satisfies a Lipschitz assumption, and our goal is to beat the query complexity afforded by the reduction to the Boolean case. We derive these benchmarks by assuming that the input \(f\) is Lipschitz, and inferring an upper bound on \(r\) based on the Lipschitz constant and the size of the domain. Combined with the hypergrid tester of [10] and a discretization argument for the unit cube inspired by [1, 1], we establish benchmarks for our testing problem. See Appendix B for details.
With the discussion above in mind, it is instructive to return to Table 2. We note that our upper bounds have polynomially smaller dependence on \(n\) than the benchmarks, suggesting that our use of the Lipschitz assumption--via the directed Poincare inequalities in Theorems 1.2 and 1.3--exploits useful structure underlying the monotonicity testing problem (whereas the benchmark testers must work for every function with bounded range, not only the Lipschitz ones). Our lower bounds introduce an almost-linear dependence on the hypergrid length \(m\); intuitively, this dependence is not implied by the previous bounds in [1, 1] because those construct the violations of monotonicity via Boolean functions, whereas our constructions exploit the fact that a Lipschitz function can "keep growing" along a given direction, which exacerbates the \(L^{1}\) distance to monotonicity in the region where that happens. Our lower bounds for partial derivative testers show that the analysis of our algorithms is essentially tight, so new (upper or lower bound) ideas are required to establish the optimal query complexity for arbitrary testers.
On the choice of \(L^{1}\) distance and Lipschitz assumption. We briefly motivate our choice of distance metric and Lipschitz assumption. For continuous range and domain, well-known counterexamples rule out testing with respect to Hamming distance: given any tester with finite query complexity, a monotone function may be made far from monotone by arbitrarily small, hard-to-detect perturbations. Testing against \(L^{1}\) distance is then a natural choice, since this metric takes into account the magnitude of the change required to make a function monotone ([1] also discuss connections with learning and approximation theory). However, an arbitrarily small region of the input may still have a disproportionate effect on the \(L^{1}\) distance if the function is arbitrary, so again testing is infeasible. Lipschitz continuity seems like a natural enough assumption which, combined with the choice of \(L^{1}\) distance, makes the problem tractable. Another benefit is that Lipschitz functions are differentiable almost everywhere by Rademacher's theorem, so the gradient is well-defined almost everywhere, which enables the connection with Poincare-type inequalities.
Organization. Section 2 introduces definitions and conventions that will be used throughout the paper. In Section 3 we prove our directed Poincare inequalities on the unit cube and hypergrid, and in Section 4 we give our \(L^{1}\) monotonicity testers for these domains. Section 5 gives the upper bound for testing functions on the line, and in Section 6 we prove our lower bounds. Finally, in Section 7 we give a broader overview of prior works on monotonicity testing for the reader's convenience.
## 2 Preliminaries
In this paper, \(\mathbb{N}\) denotes the set of strictly positive integers \(\{1,2,\dots\}\). For \(m\in\mathbb{N}\), we write \([m]\) to denote the set \(\{i\in\mathbb{N}:i\leq m\}\). For any \(c\in\mathbb{R}\), we write \(c^{+}\) for \(\max\{0,c\}\) and \(c^{-}\) for \(-\min\{0,c\}\). We denote the closure of an open set \(B\subset\mathbb{R}^{n}\) by \(\overline{B}\).
For a (continuous or discrete) measure space \((\Omega,\Sigma,\nu)\) and measurable function \(f:\Omega\to\mathbb{R}\), we write \(\int_{\Omega}f\,\mathrm{d}\nu\) for the Lebesgue integral of \(f\) over this space. Then for \(p\geq 1\), the space \(L_{p}(\Omega,\nu)\) is the set of measurable functions \(f\) such that \(|f|^{p}\) is Lebesgue integrable, i.e. \(\int_{\Omega}\!|f|^{p}\,\mathrm{d}\nu<\infty\), and we write the \(L^{p}\) norm of such functions as \(\left\|f\right\|_{L^{p}}=\left\|f\right\|_{L^{p}(\nu)}=\left(\int_{\Omega}\!| f|^{p}\,\mathrm{d}\nu\right)^{1/p}\). We will write \(\nu\) to denote the Lebesgue measure when \(\Omega\subset\mathbb{R}^{n}\) is a continuous domain (in which case we will simply write \(L^{p}(\Omega)\) for \(L^{p}(\Omega,\nu)\)) and the counting measure when \(\Omega\subset\mathbb{Z}^{n}\) is a discrete domain, and reserve \(\mu\) for the special case of probability measures.
### 2.1 Lipschitz functions and \(L^{p}\) distance
We first define Lipschitz functions with respect to a choice of \(\ell^{p}\) geometry.
**Definition 2.1**.: Let \(p\geq 1\). We say that \(f:\Omega\to\mathbb{R}\) is \((\ell^{p},L)\)_-Lipschitz_ if, for every \(x,y\in\Omega\), \(|f(x)-f(y)|\leq L\|x-y\|_{p}\). We say that \(f\) is Lipschitz if it is \((\ell^{p},L)\)-Lipschitz for any \(L\) (in which case this also holds for any other choice of \(\ell^{q}\)), and in this case we denote by \(\mathsf{Lip}_{p}(f)\) the best possible Lipschitz constant:
\[\mathsf{Lip}_{p}(f):=\inf_{L}\left\{f\text{ is }(\ell^{p},L)\text{-Lipschitz} \right\}\,.\]
It follows that \(\mathsf{Lip}_{p}(f)\leq\mathsf{Lip}_{q}(f)\) for \(p\leq q\).
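As an aside, on the hypergrid \([m]^{n}\) the best \(\ell^{1}\) Lipschitz constant is attained on axis-aligned neighbours (this follows from a short path argument), which makes it easy to compute; a small sketch with our own naming:

```python
import numpy as np

def lip1_hypergrid(F):
    """Best (l^1, L)-Lipschitz constant of f : [m]^n -> R, given as an
    n-dimensional array F; it equals the largest |f(x + e_i) - f(x)| over
    all axis-aligned neighbouring pairs."""
    return max(np.max(np.abs(np.diff(F, axis=i))) for i in range(F.ndim))

m = 5
x1, x2 = np.meshgrid(np.arange(m), np.arange(m), indexing="ij")
F = x1 + 0.5 * x2                 # f(x) = x_1 + 0.5 x_2
print(lip1_hypergrid(F))          # 1.0
```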
We now formally define \(L^{p}\) distances, completing the definition of \(L^{p}\)-testers from Section 1.1.
**Definition 2.2** (\(L^{p}\)-distance).: Let \(p\geq 1\), let \(R\subseteq\mathbb{R}\), and let \((\Omega,\Sigma,\mu)\) be a probability space. For a property \(\mathcal{P}\subseteq L^{p}(\Omega,\mu)\) of functions \(g:\Omega\to R\) and function \(f:\Omega\to R\in L^{p}(\Omega,\mu)\), we define the distance from \(f\) to \(\mathcal{P}\) as \(d_{p}(f,\mathcal{P}):=\inf_{g\in\mathcal{P}}d_{p}(f,g)\), where
\[d_{p}(f,g):=\left\|f-g\right\|_{L^{p}(\mu)}=\operatorname*{\mathbb{E}}_{x \sim\mu}\left[|f(x)-g(x)|^{p}\right]^{1/p}\,.\]
For \(p=0\), we slightly abuse notation and, taking \(0^{0}=0\), write \(d_{0}(f,g)\) for the Hamming distance between \(f\) and \(g\) weighted by \(\mu\) (and \(\mathcal{P}\) may be any set of measurable functions on \((\Omega,\Sigma,\mu)\)).
In our applications, we will always take \(\mu\) to be the uniform distribution over \(\Omega\)7. As a shorthand, when \((\Omega,\Sigma,\mu)\) is understood from the context and \(R=\mathbb{R}\), we will write
Footnote 7: More precisely: when \(\Omega=[0,1]^{n}\), \(\mu\) will be the Lebesgue measure on \(\Omega\) (with associated \(\sigma\)-algebra \(\Sigma\)), and when \(\Omega=[m]^{n}\), \(\mu\) will be the uniform distribution over \(\Omega\) (with the power set of \(\Omega\) as the \(\sigma\)-algebra \(\Sigma\)).
1. \(d_{p}^{\mathsf{const}}(f):=d_{p}(f,\mathcal{P}^{\mathsf{const}})\) where \(\mathcal{P}^{\mathsf{const}}:=\{f:\Omega\to\mathbb{R}\in L^{p}(\Omega,\mu):f=c,c\in\mathbb{R}\}\); and
2. \(d_{p}^{\mathsf{mono}}(f):=d_{p}(f,\mathcal{P}^{\mathsf{mono}})\) where \(\mathcal{P}^{\mathsf{mono}}:=\{f:\Omega\to\mathbb{R}\in L^{p}(\Omega,\mu):f \text{ is monotone}\}\).
Going forward, we will also use the shorthand \(d_{p}(f):=d_{p}^{\mathsf{mono}}(f)\).
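For intuition, when \(p=1\) the nearest constant function is attained at a median of \(f\) under \(\mu\), so \(d_{1}^{\mathsf{const}}(f)\) is straightforward to compute on a finite domain; \(d_{1}^{\mathsf{mono}}(f)\) has no such closed form, which is where the monotone rearrangement of Section 3 becomes useful. A small sketch (our own) for the constant case:

```python
import numpy as np

def d1_const(values):
    """L^1 distance of f (given as the array of its values under the uniform
    distribution on a finite domain) to the nearest constant function; the
    minimising constant c is a median of the values."""
    c = np.median(values)
    return np.mean(np.abs(values - c))

print(d1_const(np.array([0.0, 1.0, 4.0, 9.0])))   # 3.0
```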
### 2.2 Directed partial derivatives and gradients
We first consider functions on continuous domains. Let \(B\) be an open subset of \(\mathbb{R}^{n}\), and let \(f:B\to\mathbb{R}\) be Lipschitz. Then by Rademacher's theorem \(f\) is differentiable almost everywhere in \(B\). For each \(x\in B\) where \(f\) is differentiable, let \(\nabla f(x)=(\partial_{1}f(x),\dots,\partial_{n}f(x))\) denote its gradient, where \(\partial_{i}f(x)\) is the partial derivative of \(f\) along the \(i\)-th coordinate at \(x\). Then, let \(\partial_{i}^{-}:=\min\{0,\partial_{i}\}\), i. e. for every \(x\) where \(f\) is differentiable we have \(\partial_{i}^{-}f(x)=-\left(\partial_{i}f(x)\right)^{-}\). We call \(\partial_{i}^{-}\) the _directed partial derivative_ operator in direction \(i\). Then we define the _directed gradient_ operator by \(\nabla^{-}:=(\partial_{1}^{-},\dots,\partial_{n}^{-})\), again defined on every point \(x\) where \(f\) is differentiable.
Now considering the hypergrid domains, let \(f:[m]^{n}\to\mathbb{R}\). Fix \(x\in[m]^{n}\) and \(i\in[n]\), and write \(e_{i}\) for the \(i\)-th basis vector, i. e. \(e_{i}\) takes value \(1\) in its \(i\)-th component and \(0\) elsewhere. We then define the (discrete) partial derivative of \(f\) along the \(i\)-th coordinate at \(x\) by \(\partial_{i}f(x):=f(x+e_{i})-f(x)\) if \(x_{i}<m\), and \(\partial_{i}f(x):=0\) if \(x_{i}=m\). We then define its discrete gradient by \(\nabla:=(\partial_{1},\dots,\partial_{n})\). Their directed counterparts are defined as above: \(\partial_{i}^{-}:=\min\{0,\partial_{i}\}\) and \(\nabla^{-}:=(\partial_{1}^{-},\dots,\partial_{n}^{-})\).
Note that this definition of the discrete gradient on the hypergrid is slightly different from how we introduced the discrete gradient on the Boolean cube in the opening (cf. inequality
(1)) and its use in Table 1, where we allowed each edge \((x,y)\) to "contribute" to both \(\partial_{i}f(x)\) and \(\partial_{i}f(y)\). In contrast, the definition above (which we will use going forward) only allows the "contribution" to \(\partial_{i}f(x)\), since on domain \([m]^{n}\) with \(m=2\), the point \(y\) falls under the case \(y_{i}=m\), so \(\partial_{i}f(y):=0\). The definition we choose seems more natural for the hypergrid settings, but we also remark that for \(\ell^{1}\) inequalities, the choice does not matter up to constant factors (i. e. each edge is counted once or twice). For \(\ell^{2}\) inequalities, this choice is related to the issues of inner/outer boundaries and robust inequalities [14, 10].
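These discrete definitions translate directly into array operations; the sketch below (our own naming) computes the negative part of each \(\partial_{i}f\) and evaluates \(\mathbb{E}\left[\|\nabla^{-}f\|_{1}\right]\) under the uniform distribution on \([m]^{n}\).

```python
import numpy as np

def neg_partial(F, i):
    """|d_i^- f| on [m]^n (array F), with the convention d_i f(x) = 0 when x_i = m."""
    d = np.zeros_like(F, dtype=float)
    sl = [slice(None)] * F.ndim
    sl[i] = slice(0, F.shape[i] - 1)
    d[tuple(sl)] = np.abs(np.minimum(np.diff(F, axis=i), 0.0))
    return d

def mean_neg_gradient_l1(F):
    """E[ ||grad^- f||_1 ] under the uniform distribution on the hypergrid."""
    return np.mean(sum(neg_partial(F, i) for i in range(F.ndim)))

m = 4
x1, x2 = np.meshgrid(np.arange(m), np.arange(m), indexing="ij")
F = (x1 - x2).astype(float)            # increasing in x_1, decreasing in x_2
print(mean_neg_gradient_l1(F))         # 0.75, i.e. (m-1)/m
```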
## 3 Directed Poincare inequalities for Lipschitz functions
In this section, we establish Theorems 1.2 and 1.3. We start with the one-dimensional case, i. e. functions on the line, and then generalize to higher dimensions. In each subsection, we will focus our presentation on the setting where the domain is continuous (corresponding to our results for the unit cube), and then show how the same proof strategy (more easily) yields analogous results for discrete domains (corresponding to our results for the hypergrid).
### 3.1 One-dimensional case
Let \(m>0\), let \(I:=(0,m)\), and let \(f:\overline{I}\to\mathbb{R}\) be a measurable function. We wish to show that \(\left\|f-f^{*}\right\|_{L^{1}}\lesssim m\left\|\partial^{-}f\right\|_{L^{1}}\), where \(f^{*}\) is the monotone rearrangement of \(f\). We first introduce the monotone rearrangement, and then show this inequality using an elementary calculus argument.
#### 3.1.1 Monotone rearrangement
Here, we introduce the (non-symmetric, non-decreasing) monotone rearrangement of a one-dimensional function. We follow the definition of [11], with the slight modification that we are interested in the _non-decreasing_ rearrangement, whereas most of the literature usually favours the non-increasing rearrangement. The difference is purely syntactic, and our choice more conveniently matches the convention in the monotonicity testing literature. Up to this choice, our definition also agrees with that of [1, Chapter 2], and we refer the reader to these two texts for a comprehensive treatment.
We define the (lower) _level sets_ of \(f:\overline{I}\to\mathbb{R}\) as the sets
\[\overline{I}_{c}:=\left\{x\in\overline{I}:f(x)\leq c\right\}\]
for all \(c\in\mathbb{R}\). For nonempty measurable \(S\subset\mathbb{R}\) of finite measure, the _rearrangement_ of \(S\) is the set
\[S^{*}:=[0,\nu(S)]\]
(recall that \(\nu\) stands for the Lebesgue measure here), and we define \(\emptyset^{*}:=\emptyset\). For a level set \(\overline{I}_{c}\), we write \(\overline{I}_{c}^{*}\) to mean \(\left(\overline{I}_{c}\right)^{*}\).
**Definition 3.1**.: The _monotone rearrangement_ of \(f\) is the function \(f^{*}:\overline{I}\to\mathbb{R}\) given by
\[f^{*}(x):=\inf\left\{c\in\mathbb{R}:x\in\overline{I}_{c}^{*}\right\}\,. \tag{5}\]
Note that \(f^{*}\) is always a non-decreasing function.
We note two well-known properties of the monotone rearrangement: equimeasurability and order preservation. Two functions \(f,g\) are called _equimeasurable_ if \(\nu\{f\geq c\}=\nu\{g\geq c\}\) for every \(c\in\mathbb{R}\). A mapping \(u\mapsto u^{*}\) is called _order preserving_ if \(f(x)\leq g(x)\) for all \(x\in\overline{I}\) implies \(f^{*}(x)\leq g^{*}(x)\) for all \(x\in\overline{I}\). See [1, Chapter 2, Proposition 1.7] for a proof of the following:
**Fact 3.2**.: _Let \(f:\overline{I}\to\mathbb{R}\) be a measurable function. Then \(f\) and \(f^{*}\) are equimeasurable._
**Fact 3.3**.: _The mapping \(f\mapsto f^{*}\) is order preserving._
#### 3.1.2 Absolutely continuous functions and the one-dimensional Poincare inequality
Let \(f:\overline{I}\to\mathbb{R}\) be absolutely continuous. It follows that \(f\) has a derivative \(\partial f\) almost everywhere (i.e. outside a set of measure zero), \(\partial f\in L^{1}(I)\) (i.e. its derivative is Lebesgue integrable), and
\[f(x)=f(0)+\int_{0}^{x}\partial f(t)\,\mathrm{d}t\]
for all \(x\in\overline{I}\). It also follows that \(\partial^{-}f\in L^{1}(I)\).
We may now show our one-dimensional inequality:
**Lemma 3.4**.: _Let \(f:\overline{I}\to\mathbb{R}\) be absolutely continuous. Then \(\left\|f-f^{*}\right\|_{L^{1}}\leq 2m\left\|\partial^{-}f\right\|_{L^{1}}\)._
Proof.: Let \(S:=\left\{x\in\overline{I}:f^{*}(x)>f(x)\right\}\), and note that \(S\) is a measurable set because \(f,f^{*}\) are measurable functions (the latter by Fact 3.2). Moreover, since \(f\) and \(f^{*}\) are equimeasurable (by the same result), we have \(\int f\,\mathrm{d}\nu=\int f^{*}\,\mathrm{d}\nu\) and therefore
\[\left\|f-f^{*}\right\|_{L^{1}} =\int_{I}\lvert f-f^{*}\rvert\,\mathrm{d}\nu=\int_{S}(f^{*}-f)\, \mathrm{d}\nu+\int_{I\setminus S}(f-f^{*})\,\mathrm{d}\nu\] \[=\int_{S}(f^{*}-f)\,\mathrm{d}\nu+\left(\int_{I}(f-f^{*})\, \mathrm{d}\nu-\int_{S}(f-f^{*})\,\mathrm{d}\nu\right)=2\int_{S}(f^{*}-f)\, \mathrm{d}\nu\,.\]
Hence our goal is to show that
\[\int_{S}(f^{*}-f)\,\mathrm{d}\nu\leq m\left\|\partial^{-}f\right\|_{L^{1}}\,.\]
Let \(x\in\overline{I}\). We claim that there exists \(x^{\prime}\in[0,x]\) such that \(f(x^{\prime})\geq f^{*}(x)\). Suppose this is not the case. Then since \(f\) is continuous on \([0,x]\), by the extreme value theorem it attains its maximum and therefore there exists \(c<f^{*}(x)\) such that \(f(y)\leq c\) for all \(y\in[0,x]\). Thus \([0,x]\subseteq\overline{I}_{c}\), so \(\nu\left(\overline{I}_{c}\right)\geq x\) and hence \(x\in\overline{I}_{c}^{*}\). Then, by Definition 3.1, \(f^{*}(x)\leq c<f^{*}(x)\), a contradiction. Thus the claim is proved.
Now, let \(x\in S\) and fix some \(x^{\prime}\in[0,x]\) such that \(f(x^{\prime})\geq f^{*}(x)\). Since \(f\) is absolutely continuous, we have
\[f^{*}(x)-f(x)\leq f(x^{\prime})-f(x)=-\int_{x^{\prime}}^{x}\partial f(t)\, \mathrm{d}t\leq-\int_{0}^{m}\partial^{-}f(t)\,\mathrm{d}t=\left\|\partial^{-} f\right\|_{L^{1}}\,.\]
The result follows by applying this estimate to all \(x\):
\[\int_{S}(f^{*}-f)\,\mathrm{d}\nu\leq\int_{S}\left\|\partial^{-}f\right\|_{L^{1 }}\mathrm{d}\nu=\nu(S)\left\|\partial^{-}f\right\|_{L^{1}}\leq m\left\| \partial^{-}f\right\|_{L^{1}}\,.\qed\]
#### 3.1.3 Discrete case
Let \(m\in\mathbb{N}\) and let \(I:=[m]\). We may define the monotone rearrangement \(f^{*}:I\to\mathbb{R}\) of \(f:I\to\mathbb{R}\) as in Definition 3.1 by identifying \(\overline{I}\) with \(I\) and writing \(S^{*}:=[|S|]\) for each finite \(S\subset\mathbb{N}\). More directly, \(f^{*}\) is the function such that \(f^{*}(1)\leq f^{*}(2)\leq\cdots\leq f^{*}(m)\) is the _sorted sequence_ of the values \(f(1),f(2),\ldots,f(m)\). It is easy to show that the discrete analogue of Lemma 3.4 holds, and in fact one may simply repeat the proof of that lemma.
**Lemma 3.5**.: _Let \(f:[m]\to\mathbb{R}\). Then \(\left\|f-f^{*}\right\|_{L^{1}}\leq 2m\left\|\partial^{-}f\right\|_{L^{1}}\)._
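Lemma 3.5 is easy to sanity-check numerically; the following sketch (our own) draws random sequences, computes \(f^{*}\) by sorting, and verifies the inequality under the counting measure.

```python
import numpy as np

rng = np.random.default_rng(2)

def check_lemma_3_5(m, trials=1000):
    """Check ||f - f*||_{L^1} <= 2 m ||d^- f||_{L^1} for random f : [m] -> R,
    with the counting measure and the convention d f(m) = 0."""
    for _ in range(trials):
        f = rng.normal(size=m)
        f_star = np.sort(f)                      # the monotone rearrangement
        lhs = np.sum(np.abs(f - f_star))
        rhs = 2 * m * np.sum(np.abs(np.minimum(np.diff(f), 0.0)))
        assert lhs <= rhs + 1e-12
    return True

print(check_lemma_3_5(m=20))   # True
```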
### 3.2 Multidimensional case
In the continuous case, we ultimately only require an inequality on the unit cube \([0,1]^{n}\). However, we will first work in slightly more generality and consider functions defined on a _box_ in \(\mathbb{R}^{n}\), defined below. This approach makes some of the steps more transparent, and also gives intuition for the discrete case of the hypergrid.
**Definition 3.6**.: Let \(a\in\mathbb{R}^{n}_{>0}\). The _box of size \(a\)_ is the closure \(\overline{B}\subset\mathbb{R}^{n}\) of \(B=(0,a_{1})\times\cdots\times(0,a_{n})\).
Going forward, \(\overline{B}\subset\mathbb{R}^{n}\) will always denote such a box.
Notation. For \(x\in\mathbb{R}^{n}\), \(y\in\mathbb{R}\) and \(i\in[n]\), we will use the notation \(x^{-i}\) to denote the vector in \(\mathbb{R}^{[n]\setminus\{i\}}\) obtained by removing the \(i\)-th coordinate from \(x\) (note that the indexing is not changed), and we will write \((x^{-i},y)\) as a shorthand for the vector \((x_{1},\ldots,x_{i-1},y,x_{i+1},\ldots,x_{n})\in\mathbb{R}^{n}\). We will also write \(x^{-i}\) directly to denote any vector in \(\mathbb{R}^{[n]\setminus\{i\}}\). For function \(f:\overline{B}\to\mathbb{R}\) and \(x^{-i}\in\mathbb{R}^{[n]\setminus\{i\}}\), we will write \(f_{x^{-i}}\) for the function given by \(f_{x^{-i}}(y)=f(x^{-i},y)\) for all \((x^{-i},y)\in\overline{B}\). For any set \(D\subseteq\mathbb{R}^{n}\), we will denote by \(D^{-i}\) the projection \(\{x^{-i}:x\in D\}\), and extend this notation in the natural way to more indices, e. g. \(D^{-i-j}\).
**Definition 3.7** (Rearrangement in direction \(i\)).: Let \(f:\overline{B}\to\mathbb{R}\) be a measurable function and let \(i\in[n]\). The _rearrangement of \(f\) in direction \(i\)_ is the function \(R_{i}f:\overline{B}\to\mathbb{R}\) given by
\[(R_{i}f)_{x^{-i}}:=\left(f_{x^{-i}}\right)^{*} \tag{6}\]
for all \(x^{-i}\in\left(\overline{B}\right)^{-i}\). We call each \(R_{i}\) the _rearrangement operator in direction \(i\)_.
We may put (6) in words as follows: on each line in direction \(i\) determined by point \(x^{-i}\), the restriction of \(R_{i}f\) to that line is the monotone rearrangement of the restriction of \(f\) to that line.
**Proposition 3.8**.: _Let \(\overline{B}\) be the box of size \(a\in\mathbb{R}^{n}\), and let \(f:\overline{B}\to\mathbb{R}\) be Lipschitz continuous. Then for each \(i\in[n]\),_
\[\left\|f-R_{i}f\right\|_{L^{1}}\leq 2a_{i}\left\|\partial_{i}^{-}f\right\|_{L^{1}}\,.\]
Proof.: Since \(f\) is Lipschitz continuous, each \(f_{x^{-i}}:[0,a_{i}]\to\mathbb{R}\) is Lipschitz continuous and _a fortiori_ absolutely continuous. The result follows from Lemma 3.4, using Tonelli's theorem to choose the order of integration.
A key ingredient in our multi-dimensional argument is that the rearrangement operator preserves Lipschitz continuity:
**Lemma 3.9** ([12, Lemma 2.12]).: _If \(f:\overline{B}\to\mathbb{R}\) is Lipschitz continuous (with Lipschitz constant \(L\)), then \(R_{i}f\) is Lipschitz continuous (with Lipschitz constant \(2L\))._
We are now ready to define the (multidimensional) monotone rearrangement \(f^{*}\):
**Definition 3.10**.: Let \(f:\overline{B}\to\mathbb{R}\) be a measurable function. The _monotone rearrangement_ of \(f\) is the function
\[f^{*}:=R_{n}R_{n-1}\cdots R_{1}f\,.\]
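On the hypergrid, Definitions 3.7 and 3.10 amount to sorting the array of function values along one axis at a time; the sketch below (our own) computes \(f^{*}\) this way and empirically compares the two sides of Theorem 1.3 on random inputs (the hidden constant is not asserted, only observed).

```python
import numpy as np

rng = np.random.default_rng(3)

def monotone_rearrangement(F):
    """f* = R_n ... R_1 f on the hypergrid: sort along axis 1, then axis 2,
    and so on; np.sort along an axis is the discrete rearrangement R_i."""
    G = F.copy()
    for i in range(G.ndim):
        G = np.sort(G, axis=i)
    return G

def mean_neg_gradient_l1(F):
    """E[ ||grad^- f||_1 ] with the convention d_i f(x) = 0 at x_i = m."""
    total = np.zeros_like(F, dtype=float)
    for i in range(F.ndim):
        sl = [slice(None)] * F.ndim
        sl[i] = slice(0, F.shape[i] - 1)
        total[tuple(sl)] += np.abs(np.minimum(np.diff(F, axis=i), 0.0))
    return np.mean(total)

m, n = 6, 3
ratios = []
for _ in range(200):
    F = rng.normal(size=(m,) * n)
    lhs = np.mean(np.abs(F - monotone_rearrangement(F)))   # E[|f - f*|]
    rhs = m * mean_neg_gradient_l1(F)                      # m E[||grad^- f||_1]
    ratios.append(lhs / rhs)
print(max(ratios))   # stays below a small constant, as Theorem 1.3 predicts
```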
We first show that \(f^{*}\) is indeed a monotone function:
**Proposition 3.11**.: _Let \(f:\overline{B}\to\mathbb{R}\) be Lipschitz continuous. Then \(f^{*}\) is monotone._
Proof.: Say that \(g:\overline{B}\to\mathbb{R}\) is _monotone in direction \(i\)_ if \(g_{x^{-i}}\) is non-decreasing for all \(x^{-i}\in\big{(}\overline{B}\big{)}^{-i}\). Then \(g\) is monotone if and only if it is monotone in direction \(i\) for every \(i\in[n]\). Note that \(R_{i}f\) is monotone in direction \(i\) by definition of monotone rearrangement. Therefore, it suffices to prove that if \(f\) is monotone in direction \(j\), then \(R_{i}f\) is also monotone in direction \(j\).
Suppose \(f\) is monotone in direction \(j\), and suppose \(i<j\) without loss of generality. Let \(a\in\mathbb{R}^{n}\) be the size of \(B\). Let \(x^{-j}\in\big{(}\overline{B}\big{)}^{-j}\) and \(0\leq y_{1}<y_{2}\leq a_{j}\), so that \((x^{-j},y_{1}),(x^{-j},y_{2})\in\overline{B}\). We need to show that \((R_{i}f)(x^{-j},y_{1})\leq(R_{i}f)(x^{-j},y_{2})\). Let \(\overline{I}_{i}:=[0,a_{i}]\). For each \(k\in\{1,2\}\), let \(g_{k}:\overline{I}_{i}\to\mathbb{R}\) be given by
\[g_{k}(z):=f(x_{1},\ldots,x_{i-1},z,x_{i+1},\ldots,x_{j-1},y_{k},x_{j+1},\ldots,x_{n})\,.\]
Note that
\[g_{k}^{*}(z)=(R_{i}f)(x_{1},\ldots,x_{i-1},z,x_{i+1},\ldots,x_{j-1},y_{k},x_{j +1},\ldots,x_{n})\]
for every \(z\in\overline{I}_{i}\), and therefore our goal is to show that \(g_{1}^{*}(x_{i})\leq g_{2}^{*}(x_{i})\). But \(f\) being monotone in direction \(j\) means that \(g_{1}(z)\leq g_{2}(z)\) for all \(z\in\overline{I}_{i}\), so by the order preserving property (Fact 3.3) of the monotone rearrangement we get that \(g_{1}^{*}(x_{i})\leq g_{2}^{*}(x_{i})\), concluding the proof.
It is well-known that the monotone rearrangement is a non-expansive operator. Actually a stronger fact holds, as we note below.
**Proposition 3.12** ([4]).: _Let \(m>0\) and let \(f,g\in L^{1}[0,m]\). Then \(f^{*},g^{*}\) satisfy_
\[\int_{[0,m]}\left(f^{*}-g^{*}\right)^{-}\mathrm{d}\nu\leq\int_{[0,m]}\left(f- g\right)^{-}\mathrm{d}\nu\]
_and_
\[\int_{[0,m]}\lvert f^{*}-g^{*}\rvert\,\mathrm{d}\nu\leq\int_{[0,m]}\lvert f-g \rvert\,\mathrm{d}\nu\,.\]
The result above is stated for functions on the interval. Taking the integral over the box \(B\) and repeating for each operator \(R_{i}\) yields the non-expansiveness of our monotone rearrangement operator, as also noted by [10]:
**Corollary 3.13**.: _Let \(f,g\in L^{1}(\overline{B})\). Then \(\left\lVert f^{*}-g^{*}\right\rVert_{L^{1}}\leq\left\lVert f-g\right\rVert_{L^ {1}}\)._
We show that the rearrangement operator can only make the norm of the directed partial derivatives smaller, i.e. decrease the violations of monotonicity, which is the key step in this proof.
**Proposition 3.14**.: _Let \(f:\overline{B}\to\mathbb{R}\) be Lipschitz continuous and let \(i,j\in[n]\). Then \(\left\lVert\partial_{j}^{-}(R_{i}f)\right\rVert_{L^{1}}\leq\left\lVert \partial_{j}^{-}f\right\rVert_{L^{1}}\)._
Proof.: We may assume that \(i\neq j\), since otherwise the LHS is zero. We will use the following convention for variable names: \(w\in\mathbb{R}^{n}\) will denote points in \(B\); \(z\in\mathbb{R}^{[n]\setminus\{i,j\}}\) will denote points in \(B^{-i-j}\); \(x\in\mathbb{R}\) will denote points in \((0,a_{i})\) (indexing the \(i\)-th dimension); and \(y\in\mathbb{R}\) will denote points in \((0,a_{j})\) (indexing the \(j\)-th dimension). For each \(i\in[n]\), let \(e_{i}\) denote the \(i\)-th basis vector.
Since \(f\) is Lipschitz, so is \(R_{i}f\) by Lemma 3.9. By Rademacher's theorem, these functions are differentiable almost everywhere. Therefore, let \(D\subseteq B\) be a measurable set such that \(f\) and \(R_{i}f\)
are differentiable in \(D\) and \(\nu(D)=\nu(B)\). We have
\[\left\|\partial_{j}^{-}(R_{i}f)\right\|_{L^{1}} =\int_{D}\left|\partial_{j}^{-}(R_{i}f)\right|\mathrm{d}\nu\] \[=\int_{D}\left[\lim_{h\to 0}\left(\frac{(R_{i}f)(w+he_{j})-(R_{i}f)(w)}{h}\right)^{-}\right]\mathrm{d}\nu(w)\] \[\overset{(BC1)}{=}\lim_{h\to 0}\int_{D}\left(\frac{(R_{i}f)(w+he_{j})-(R_{i}f)(w)}{h}\right)^{-}\mathrm{d}\nu(w)\] \[\overset{(D1)}{=}\lim_{h\to 0}\int_{B}\left(\frac{(R_{i}f)(w+he_{j})-(R_{i}f)(w)}{h}\right)^{-}\mathrm{d}\nu(w)\] \[\overset{(T1)}{=}\lim_{h\to 0}\int_{B^{-i-j}}\int_{(0,a_{j})}\int_{(0,a_{i})}\left(\frac{(R_{i}f)(z,y+h,x)-(R_{i}f)(z,y,x)}{h}\right)^{-}\mathrm{d}\nu(x)\,\mathrm{d}\nu(y)\,\mathrm{d}\nu(z)\] \[\leq\lim_{h\to 0}\int_{B^{-i-j}}\int_{(0,a_{j})}\int_{(0,a_{i})}\left(\frac{f(z,y+h,x)-f(z,y,x)}{h}\right)^{-}\mathrm{d}\nu(x)\,\mathrm{d}\nu(y)\,\mathrm{d}\nu(z)\] \[\overset{(T2)}{=}\lim_{h\to 0}\int_{B}\left(\frac{f(w+he_{j})-f(w)}{h}\right)^{-}\mathrm{d}\nu(w)\] \[\overset{(D2)}{=}\lim_{h\to 0}\int_{D}\left(\frac{f(w+he_{j})-f(w)}{h}\right)^{-}\mathrm{d}\nu(w)\] \[\overset{(BC2)}{=}\int_{D}\left|\partial_{j}^{-}f\right|\mathrm{d}\nu\] \[=\left\|\partial_{j}^{-}f\right\|_{L^{1}}\,.\]
Equalities (BC1) and (BC2) hold by the bounded convergence theorem, which applies because the difference quotients are uniformly bounded by the Lipschitz constants of \(R_{i}f\) and \(f\) (respectively), and because \(R_{i}f\) and \(f\) are differentiable in \(D\) (which gives pointwise convergence of the limits). Equalities (D1) and (D2) hold again by the uniform boundedness of the difference quotients, along with the fact that \(\nu(B\setminus D)=0\). Equalities (T1) and (T2) hold by Tonelli's theorem. Finally, the inequality holds by Proposition 3.12, since \((R_{i}f)(z,y+h,\cdot)\) is the monotone rearrangement of \(f(z,y+h,\cdot)\) and \((R_{i}f)(z,y,\cdot)\) is the monotone rearrangement of \(f(z,y,\cdot)\).
We are now ready to prove our directed \((L^{1},\ell^{1})\)-Poincaré inequality.
**Theorem 3.15**.: _Let \(B\) be the box of size \(a\in\mathbb{R}^{n}\) and let \(f:\overline{B}\to\mathbb{R}\) be Lipschitz continuous. Then_
\[\left\|f-f^{*}\right\|_{L^{1}}\leq 2\sum_{i=1}^{n}a_{i}\left\|\partial_{i}^{-} f\right\|_{L^{1}}\,.\]
Proof.: We have
\[\left\|f-f^{*}\right\|_{L^{1}} \leq\sum_{i=1}^{n}\left\|R_{i-1}\cdots R_{1}f-R_{i}\cdots R_{1}f \right\|_{L^{1}}\] (Triangle inequality) \[\leq 2\sum_{i=1}^{n}a_{i}\left\|\partial_{i}^{-}(R_{i-1}\cdots R_{1 }f)\right\|_{L^{1}}\] (Lemma 3.9 and Proposition 3.8) \[\leq 2\sum_{i=1}^{n}a_{i}\left\|\partial_{i}^{-}f\right\|_{L^{1}}\] (Lemma 3.9 and Proposition 3.14).
Setting \(B=(0,1)^{n}\) yields the inequality portion of Theorem 1.2:
**Corollary 3.16**.: _Let \(B=(0,1)^{n}\) and let \(f:\overline{B}\to\mathbb{R}\) be Lipschitz continuous. Then_
\[\mathbb{E}\left[\left|f-f^{*}\right|\right]=\left\|f-f^{*}\right\|_{L^{1}} \leq 2\int_{B}\left\|\nabla^{-}f\right\|_{1}\mathrm{d}\nu=2\mathbb{E} \left[\left\|\nabla^{-}f\right\|_{1}\right]\,.\]
To complete the proof of Theorem 1.2, we need to show that \(d_{1}(f)\approx\mathbb{E}\left[\left|f-f^{*}\right|\right]\), i.e. that the monotone rearrangement is "essentially optimal" as a target monotone function for \(f\). The inequality \(d_{1}(f)\leq\mathbb{E}\left[\left|f-f^{*}\right|\right]\) is clear from the fact that \(f^{*}\) is monotone. The inequality in the other direction follows from the non-expansiveness of the rearrangement operator, with essentially the same proof as that of [10] for the Boolean cube:
**Proposition 3.17**.: _Let \(f:[0,1]^{n}\to\mathbb{R}\) be Lipschitz continuous. Then \(\mathbb{E}\left[\left|f-f^{*}\right|\right]\leq 2d_{1}(f)\)._
Proof.: Let \(g\in L^{1}([0,1]^{n})\) be any monotone function. It follows that \(g^{*}=g\). By Corollary 3.13, we have that \(\left\|f^{*}-g^{*}\right\|_{L^{1}}\leq\left\|f-g\right\|_{L^{1}}\). Using the triangle inequality, we obtain
\[\left\|f-f^{*}\right\|_{L^{1}}\leq\left\|f-g\right\|_{L^{1}}+\left\|g-f^{*} \right\|_{L^{1}}=\left\|f-g\right\|_{L^{1}}+\left\|f^{*}-g^{*}\right\|_{L^{1}} \leq 2\left\|f-g\right\|_{L^{1}}\,.\]
The claim follows by taking the infimum over the choice of \(g\).
Tightness of the inequality. To check that Corollary 3.16 is tight up to constant factors, it suffices to take the linear function \(f:[0,1]^{n}\to\mathbb{R}\) given by \(f(x)=1-x_{1}\) for all \(x\in[0,1]^{n}\). Then \(f^{*}\) is given by \(f^{*}(x)=x_{1}\), so \(\mathbb{E}\left[\left|f-f^{*}\right|\right]=1/2\) while \(\mathbb{E}\left[\left\|\nabla^{-}f\right\|_{1}\right]=1\), as needed.
#### 3.2.1 Discrete case
The proof above carries over to the case of the hypergrid almost unmodified, as we now outline. We now consider functions \(f:[m]^{n}\to\mathbb{R}\), so the box \(B\) is replaced with \([m]^{n}\) and its dimensions \(a_{i}\) are all replaced with the length \(m\) of the hypergrid. We define the rearrangement in direction \(i\), \(R_{i}f\), as in Definition 3.7 by sorting the restrictions of \(f\) to each line along direction \(i\). We also define \(f^{*}\) as in Definition 3.10 by subsequent applications of each operator \(R_{i}\). Then Proposition 3.8 carries over by applying the one-dimensional Lemma 3.5, and the proof of Proposition 3.11 carries over unmodified.
The non-expansiveness properties Proposition 3.12 and Corollary 3.13 also carry over unmodified, and the key Proposition 3.14 carries over with a more immediate proof: the use of Proposition 3.12 remains the same, but rather than expanding the definition of derivative and reasoning
about the limit, the discrete argument boils down to showing the inequality
\[\int_{[m]^{n}}\left((R_{i}f)(w+e_{j})-(R_{i}f)(w)\right)^{-}\mathrm{d}\nu(w)\leq \int_{[m]^{n}}\left(f(w+e_{j})-f(w)\right)^{-}\mathrm{d}\nu(w)\,,\]
which follows immediately from the discrete version of Proposition 3.12 by summing over all lines in direction \(i\). Then, the hypergrid version of Theorem 3.15 follows by the same application of the triangle inequality, and we conclude the inequality portion of Theorem 1.3:
**Theorem 3.18**.: _Let \(f:[m]^{n}\to\mathbb{R}\). Then \(\mathbb{E}\left[|f-f^{*}|\right]\leq 2m\mathbb{E}\left[\|\nabla^{-}f\|_{1}\right]\)._
The discrete version of Proposition 3.17 follows identically, and we state it here for convenience:
**Proposition 3.19**.: _Let \(f:[m]^{n}\to\mathbb{R}\). Then \(\mathbb{E}\left[|f-f^{*}|\right]\leq 2d_{1}(f)\)._
Finally, the tightness of Theorem 3.18 is most easily verified for the following step function: letting \(m\) be even for simplicity, define \(f:[m]^{n}\to\mathbb{R}\) by
\[f(x)=\begin{cases}1&\text{if }x_{1}\leq m/2\,,\\ 0&\text{if }x_{1}>m/2\,.\end{cases}\]
Then \(f^{*}\) is obtained by flipping this function along the first coordinate, or equivalently swapping the values \(1\) and \(0\) in the definition above. Thus \(\mathbb{E}\left[|f-f^{*}|\right]=1\). On the other hand, \(\|\nabla^{-}f\|_{1}\) takes value \(1\) on exactly one point in each line along the first coordinate, and \(0\) elsewhere. Hence \(\mathbb{E}\left[\|\nabla^{-}f\|_{1}\right]=1/m\), as needed.
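As a sanity check (not part of the proof), the step-function example above is easy to evaluate numerically; the following sketch computes both sides of the inequality in Theorem 3.18 for that \(f\), with an illustrative grid size.

```python
import numpy as np

m, n = 8, 3  # small even grid length and dimension, for illustration only
x1 = np.indices((m,) * n)[0]          # first coordinate of every grid point, in {0, ..., m-1}
f = (x1 < m // 2).astype(float)       # step function: 1 on the first half, 0 on the second

# Monotone rearrangement: sort along every axis (the discrete operators R_i).
f_star = f.copy()
for axis in range(n):
    f_star = np.sort(f_star, axis=axis)

lhs = np.abs(f - f_star).mean()       # E[|f - f*|]; equals 1 since f and f* disagree everywhere

# E[||grad^- f||_1]: negative parts of the forward differences f(x + e_i) - f(x),
# with no contribution from the upper boundary in each direction.
neg_grad = np.zeros_like(f)
for axis in range(n):
    diff = np.diff(f, axis=axis)
    pad = [(0, 0)] * n
    pad[axis] = (0, 1)
    neg_grad += np.pad(np.clip(-diff, 0.0, None), pad)
rhs = 2 * m * neg_grad.mean()         # 2m * E[||grad^- f||_1] = 2m * (1/m) = 2

print(lhs, rhs)                       # 1.0 2.0: the inequality holds, tight up to the factor 2
```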
## 4 Applications to monotonicity testing
In this section, we use the directed Poincaré inequalities on the unit cube and hypergrid to show that the natural partial derivative tester (or edge tester) attains the upper bounds from Theorem 1.4.
Let \(\Omega\) denote either \([0,1]^{n}\) or \([m]^{n}\), and let \(q(\Omega,L,\epsilon)\) denote the query complexity of testers for \((\ell^{1},L)\)-Lipschitz functions on these domains, as follows:
\[q([0,1]^{n},L,\epsilon):=\Theta\left(\frac{nL}{\epsilon}\right)\qquad\text{ and}\qquad q([m]^{n},L,\epsilon):=\Theta\left(\frac{nmL}{\epsilon}\right)\,.\]
The tester is given in Algorithm 1. It is clear that this algorithm is a nonadaptive partial derivative tester, and that it always accepts monotone functions. It suffices to show that it rejects with good probability when \(d_{1}(f)>\epsilon\).
```
Input: Partial derivative oracle access to Lipschitz function \(f:\Omega\to\mathbb{R}\).
Output: Accept if \(f\) is monotone, reject if \(d_{1}(f)>\epsilon\).
Requirement: \(\mathsf{Lip}_{1}(f)\leq L\).
procedure PartialDerivativeTester(\(f,\Omega,L,\epsilon\))
    repeat \(q(\Omega,L,\epsilon)\) times
        Sample \(x\in\Omega\) uniformly at random.
        Sample \(i\in[n]\) uniformly at random.
        Reject if \(\partial_{i}f(x)<0\).
    end repeat
    Accept.
```
**Algorithm 1**\(L^{1}\) monotonicity tester for Lipschitz functions using partial derivative queries
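For concreteness, a direct Python transcription of Algorithm 1 for the hypergrid case might look as follows. This is only a sketch: the partial derivative oracle is passed as a callable, the constant in the number of iterations is illustrative, and the oracle is assumed to return \(0\) at points with no forward difference in the queried direction.

```python
import math
import random

def partial_derivative_tester(partial_deriv, n, m, L, eps, c=10):
    """Algorithm 1 on the hypergrid [m]^n.

    partial_deriv(x, i) should return the discrete partial derivative
    f(x + e_i) - f(x) at the point x (a tuple in {1, ..., m}^n) in
    direction i, and 0 when x_i = m.  The iteration count follows
    q([m]^n, L, eps) = Theta(n * m * L / eps); the constant c is illustrative.
    """
    num_iterations = math.ceil(c * n * m * L / eps)
    for _ in range(num_iterations):
        x = tuple(random.randint(1, m) for _ in range(n))  # uniform point of [m]^n
        i = random.randrange(n)                            # uniform direction
        if partial_deriv(x, i) < 0:
            return "reject"
    return "accept"

# Usage example: f(x) = sum of coordinates is monotone, so the tester accepts.
f = lambda x: sum(x)
deriv = lambda x, i: 0 if x[i] == 5 else f(x[:i] + (x[i] + 1,) + x[i + 1:]) - f(x)
print(partial_derivative_tester(deriv, n=2, m=5, L=1, eps=0.1))
```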
**Lemma 4.1**.: _Let \(\Omega\) be one of \([0,1]^{n}\) or \([m]^{n}\), and let \(f:\Omega\to\mathbb{R}\) be a Lipschitz function satisfying \(\mathsf{Lip}_{1}(f)\leq L\). Suppose \(d_{1}(f)>\epsilon\). Then Algorithm 1 rejects with probability at least \(2/3\)._
Proof.: **Continuous case.** Suppose \(\Omega=[0,1]^{n}\). Let \(D\subseteq[0,1]^{n}\) be a measurable set such that \(f\) is differentiable on \(D\) and \(\mu(D)=1\), which exists by Rademacher's theorem. For each \(i\in[n]\), let \(S_{i}:=\{x\in D:\partial_{i}f(x)<0\}\). A standard argument gives that each \(S_{i}\subset\mathbb{R}^{n}\) is a measurable set. We claim that
\[\sum_{i=1}^{n}\mu(S_{i})>\frac{\epsilon}{2L}\,.\]
Suppose this is not the case. By the Lipschitz continuity of \(f\), we have that \(|\partial_{i}f(x)|\leq L\) for every \(x\in D\) and \(i\in[n]\), and therefore
\[2\sum_{i=1}^{n}\mathbb{E}\left[\left|\partial_{i}^{-}f\right|\right]\leq 2L \sum_{i=1}^{n}\mu(S_{i})\leq\epsilon\,.\]
On the other hand, the assumption that \(d_{1}(f)>\epsilon\) and Corollary 3.16 yield
\[\epsilon<\mathbb{E}\left[\left|f-f^{*}\right|\right]\leq 2\mathbb{E}\left[\left\|\nabla^{-}f\right\|_{1}\right]=2\sum_{i=1}^{n}\mathbb{E}\left[\left|\partial_{i}^{-}f\right|\right]\,,\]
a contradiction. Therefore the claim holds.
Now, the probability that one iteration of the tester rejects is the probability that \(x\in S_{i}\) when \(x\) and \(i\) are sampled uniformly at random. This probability is
\[\mathbb{P}\left[\text{Iteration rejects}\right]=\sum_{j=1}^{n}\mathbb{P}_{i} \left[i=j\right]\mathbb{P}_{x}\left[x\in S_{j}\right]=\sum_{j=1}^{n}\frac{1}{ n}\cdot\mu(S_{j})>\frac{\epsilon}{2nL}\,.\]
Thus \(\Theta\left(\frac{nL}{\epsilon}\right)\) iterations suffice to reject with high constant probability.
**Discrete case.** Suppose \(\Omega=[m]^{n}\). The proof proceeds the same way, but we give it explicitly for convenience. For each \(i\in[n]\), let \(S_{i}:=\{x\in[m]^{n}:\partial_{i}f(x)<0\}\). We then claim that
\[\sum_{i=1}^{n}\mu(S_{i})>\frac{\epsilon}{2mL}\,.\]
Indeed, if this is not the case, then since \(|\partial_{i}f(x)|\leq L\) for every \(i\) and \(x\), we get that
\[2\sum_{i=1}^{n}\mathbb{E}\left[\left|\partial_{i}^{-}f\right|\right]\leq 2L\sum _{i=1}^{n}\mu(S_{i})\leq\frac{\epsilon}{m}\,.\]
On the other hand, the assumption that \(d_{1}(f)>\epsilon\) and Theorem 3.18 yield
\[\frac{\epsilon}{m}<\frac{1}{m}\cdot\mathbb{E}\left[\left|f-f^{*}\right|\right]\leq\frac{1}{m}\cdot 2m\mathbb{E}\left[\left\|\nabla^{-}f\right\|_{1}\right]=2\sum_{i=1}^{n}\mathbb{E}\left[\left|\partial_{i}^{-}f\right|\right]\,,\]
a contradiction. Thus the claim holds, and the probability that one iteration of the tester rejects is
\[\mathbb{P}\left[\text{Iteration rejects}\right]=\sum_{j=1}^{n}\mathbb{P}_{i} \left[i=j\right]\mathbb{P}_{x}\left[x\in S_{j}\right]=\sum_{j=1}^{n}\frac{1}{ n}\cdot\mu(S_{j})>\frac{\epsilon}{2nmL}\,.\]
Thus \(\Theta\left(\frac{nmL}{\epsilon}\right)\) iterations suffice to reject with high constant probability.
## 5 \(L^{1}\)-testing monotonicity on the line
In this section, we show the upper bounds for \(L^{1}\) monotonicity testing on the line from Theorem 1.5. The main idea is to reduce from \(L^{1}\) testing to Hamming testing by using the Lipschitz constant to show that, if the \(L^{1}\) distance to monotonicity is large, then the Hamming distance to monotonicity must be somewhat large as well; combined with the Hamming testers of [1, 1], this yields an \(L^{1}\) tester for the discrete line \([m]\).
To obtain a tester for the continuous line \([0,m]\), we furthermore apply a discretization strategy inspired by the domain reduction and downsampling ideas from [1, 1]. The idea is that, given \(\epsilon\) and \(L\), we may impose a fine enough grid on \([0,m]\) such that the function defined on that grid preserves the \(L^{1}\) distance to monotonicity compared to the continuous function; again, the Lipschitz assumption is essential for this step.
In this section, we will follow the convention of denoting functions on continuous domains by \(f,g\), and those on discrete domains by \(\overline{f},\overline{g}\). Depending on the context, it will be clear whether \(\overline{f}\) is an arbitrary function or one obtained by discretizing a particular function \(f\). We will also write "\(f\) is \(L\)-Lipschitz" without specifying the \(\ell^{p}\) geometry, since all choices are equivalent in one dimension.
**Lemma 5.1** (Discretization preserves distance to monotonicity).: _Let \(m,L,\epsilon>0\) and let \(f:[0,m]\to\mathbb{R}\) be an \(L\)-Lipschitz function. Let the discretized function \(\overline{f}:[m^{\prime}]\to\mathbb{R}\), for suitable choice of \(m^{\prime}=\Theta\left(mL/\epsilon\right)\), be given by \(\overline{f}(i)=f(\delta i)\) for each \(i\in[m^{\prime}]\), where \(\delta:=m/m^{\prime}\). Then if \(d_{1}(f)>\epsilon\), we have \(d_{1}(\overline{f})>\epsilon/4\)._
Proof.: Let \(m^{\prime}\in[cmL/\epsilon,2cmL/\epsilon]\) be an integer, where \(c\) is a sufficiently large universal constant.8 Let \(\overline{f}:[m^{\prime}]\to\mathbb{R}\) be the function given in the statement, and suppose \(d_{1}(f)>\epsilon\).
Footnote 8: We may assume that \(mL/\epsilon>1\), otherwise the problem is trivial: the maximum \(L^{1}\) distance from monotonicity attainable by an \(L\)-Lipschitz function is \(\frac{1}{m}\cdot\frac{m\cdot mL}{2}=mL/2\). Therefore the given interval does contain an integer.
Let \(\overline{g}:[m^{\prime}]\to\mathbb{R}\) be the monotone rearrangement of \(\overline{f}\). It is easy to check that \(\overline{g}\) is Lipschitz with at most the Lipschitz constant of \(\overline{f}\). Let \(g:[0,m]\to\mathbb{R}\) be the following piecewise linear function whose discretization is \(\overline{g}\): for each \(i\in[m^{\prime}]\) we set \(g(\delta i)=\overline{g}(i)\), and \(g\) is the linear spline induced by these points elsewhere (and constant in the segment \([0,\delta]\)). Then clearly \(g\) is monotone, and thus \(d_{1}(f,g)>\epsilon\). Moreover, \(g\) is \(L\)-Lipschitz, since its steepest slope is the same as that of \(\overline{g}\) up to the coordinate changes.9 Hence, we have
Footnote 9: Formally, if \(f\) is \(L\)-Lipschitz, then \(\overline{f}\) is \(L^{\prime}\)-Lipschitz for \(L^{\prime}=Lm/m^{\prime}\), hence so is its monotone rearrangement \(\overline{g}\). Then since the steepest slope of \(g\) must come from two vertices of the spline, \(g\) is Lipschitz with Lipschitz constant \(L^{\prime}m^{\prime}/m=L\).
\[\epsilon <d_{1}(f,g)=\frac{1}{m}\int_{0}^{m}\lvert f(x)-g(x)\rvert\, \mathrm{d}x=\frac{1}{m}\sum_{i=1}^{m^{\prime}}\int_{(i-1)\delta}^{i\delta} \lvert f(x)-g(x)\rvert\,\mathrm{d}x\] \[=\frac{1}{m}\sum_{i=1}^{m^{\prime}}\int_{(i-1)\delta}^{i\delta} \lvert(f(i\delta)\pm L\delta)-(g(i\delta)\pm L\delta)\rvert\,\mathrm{d}x\] (Lipschitz property) \[\leq\frac{1}{m}\sum_{i=1}^{m^{\prime}}\int_{(i-1)\delta}^{i \delta}\left[\left|\overline{f}(i)-\overline{g}(i)\right|+2L\delta\right] \mathrm{d}x\] \[=\frac{1}{m}\Big{[}2m^{\prime}L\delta^{2}+\delta\sum_{i=1}^{m^{ \prime}}\bigl{|}\overline{f}(i)-\overline{g}(i)\bigr{|}\Big{]}=\frac{2mL}{m^{ \prime}}+\frac{1}{m^{\prime}}\sum_{i=1}^{m^{\prime}}\bigl{|}\overline{f}(i)- \overline{g}(i)\bigr{|}\leq\frac{2\epsilon}{c}+d_{1}(\overline{f},\overline{g} )\,,\]
where we used the notation \(a\pm b\) to denote any number in the interval \([a-b,a+b]\).
We may set \(c\geq 4\) so that \(2\epsilon/c\leq\epsilon/2\). Therefore, we obtain \(d_{1}(\overline{f},\overline{g})>\epsilon/2\). Since \(\overline{g}\) is the monotone rearrangement of \(\overline{f}\), Proposition 3.19 implies that \(d_{1}(\overline{f},\overline{g})\leq 2d_{1}(\overline{f})\). We conclude that \(d_{1}(\overline{f})>\epsilon/4\), as desired.
**Observation 5.2**.: _The function \(\overline{f}\) defined in Lemma 5.1 is \(\epsilon\)-Lipschitz: since \(m^{\prime}\geq mL/\epsilon\), we have_
\[\big{|}\overline{f}(i)-\overline{f}(i+1)\big{|}=|f(\delta i)-f\left(\delta(i+ 1)\right)|\leq L\delta=Lm/m^{\prime}\leq\epsilon\,.\]
**Lemma 5.3** (Far in \(L^{1}\) distance implies far in Hamming distance).: _Let \(\overline{f}:[m^{\prime}]\to\mathbb{R}\) be an \(L^{\prime}\)-Lipschitz function. Then \(d_{0}(\overline{f})\geq\sqrt{\frac{d_{1}(\overline{f})}{m^{\prime}L^{\prime}}}\)._
Proof.: Let \(S\subseteq[m^{\prime}]\) be a set such that 1) \(|S|=d_{0}(\overline{f})m^{\prime}\) and 2) it suffices to change \(\overline{f}\) on inputs in \(S\) to obtain a monotone function; note that \(S\) exists by definition of Hamming distance. Write \(S\) as the union of maximal, pairwise disjoint contiguous intervals, \(S=I_{1}\cup\dots\cup I_{k}\).
We define a monotone function \(\overline{g}:[m^{\prime}]\to\mathbb{R}\) as follows. For each \(i\in S\), set \(i^{*}\in[m^{\prime}]\setminus S\) by the following rule: if there exists \(j\in[m^{\prime}]\setminus S\) such that \(j>i\), pick the smallest such \(j\); otherwise, pick the largest \(j\in[m^{\prime}]\setminus S\). In other words, \(i^{*}\) is obtained by picking a direction (right if possible, otherwise left) and choosing the first point outside the maximal interval of \(S\) that contains \(i\). Now, define \(\overline{g}\) by
\[\overline{g}(i)=\begin{cases}\overline{f}(i)&\text{if }i\not\in S\\ \overline{f}(i^{*})&\text{if }i\in S\,.\end{cases}\]
We first claim that \(\overline{g}\) is monotone. Indeed, the sequence of values \(\left(\overline{f}(i)\right)_{i\in[m^{\prime}]\setminus S}\) (taken in order of increasing \(i\)) is monotone by our second assumption on \(S\), and since \(\overline{g}\) is obtained by extending some of these values into flat regions, the resulting function is also monotone. Therefore we can upper bound the \(L^{1}\) distance of \(\overline{f}\) to monotonicity by
\[d_{1}(\overline{f})\leq d_{1}(\overline{f},\overline{g}) =\frac{1}{m^{\prime}}\sum_{i=1}^{m^{\prime}}\lvert\overline{f}(i) -\overline{g}(i)\rvert=\frac{1}{m^{\prime}}\sum_{j=1}^{k}\sum_{i\in I_{j}} \lvert\overline{f}(i)-\overline{f}(i^{*})\rvert\] \[=\frac{1}{m^{\prime}}\sum_{j=1}^{k}\sum_{i\in I_{j}}\big{|}\big{(} \overline{f}(i^{*})\pm L^{\prime}|i-i^{*}|\big{)}-\overline{f}(i^{*})\big{|} \text{(Lipschitz property)}\] \[\leq\frac{L^{\prime}}{m^{\prime}}\sum_{j=1}^{k}\sum_{i\in I_{j}} \lvert i-i^{*}\rvert\leq\frac{L^{\prime}}{m^{\prime}}\sum_{j=1}^{k}\sum_{i\in I _{j}}\lvert I_{j}\rvert=\frac{L^{\prime}}{m^{\prime}}\sum_{j=1}^{k}\lvert I_ {j}\rvert^{2}\] \[\leq\frac{L^{\prime}}{m^{\prime}}\cdot\lvert S\rvert^{2} \text{(Since }\lvert I_{1}\rvert+\dots+\lvert I_{k}\rvert= \lvert S\rvert\text{)}\] \[=L^{\prime}d_{0}(\overline{f})^{2}m^{\prime}\,.\]
The claim follows.
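The construction of \(\overline{g}\) in the proof above is easy to state in code; the following sketch (with illustrative names, and assuming \(S\neq[m^{\prime}]\)) builds \(\overline{g}\) from \(\overline{f}\) and the set \(S\).

```python
def extend_outside_violations(f_vals, S):
    """Build the monotone function g from the proof of Lemma 5.3.

    f_vals: the values of f as a list (0-indexed here); S: a set of indices
    such that f restricted to the complement of S is non-decreasing.  Each
    index in S gets the value of the nearest index outside S to its right,
    or of the largest index outside S if no such index exists.
    """
    outside = [i for i in range(len(f_vals)) if i not in S]  # assumed non-empty
    g = list(f_vals)
    for i in S:
        to_the_right = [j for j in outside if j > i]
        i_star = to_the_right[0] if to_the_right else outside[-1]
        g[i] = f_vals[i_star]
    return g

# Example: f = [0, 5, 1, 2] violates monotonicity; changing it on S = {1} repairs it.
print(extend_outside_violations([0, 5, 1, 2], {1}))  # [0, 1, 1, 2], which is monotone
```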
Combining the two lemmas with the classical Hamming monotonicity tester of [1], the following theorem establishes Theorem 1.5 for the continuous domain \([0,m]\):
**Theorem 5.4**.: _There exists a nonadaptive one-sided \(L^{1}\) monotonicity tester for \(L\)-Lipschitz functions \(f:[0,m]\to\mathbb{R}\) with query complexity \(O\left(\sqrt{\frac{mL}{\epsilon}}\log\left(\frac{mL}{\epsilon}\right)\right)\)._
Proof.: The tester works as follows. It first fixes \(m^{\prime}=\Theta\left(mL/\epsilon\right)\) as given by Lemma5.1. Let \(\overline{f}:[m^{\prime}]\to\mathbb{R}\) be the discretization defined therein (\(\overline{f}\) is not explicitly computed upfront, but will rather be queried as needed). The algorithm then simulates the (nonadaptive, one-sided) monotonicity tester of [1] on the function \(\overline{f}\) with proximity parameter \(\epsilon^{\prime}=\Theta\left(\sqrt{\frac{\epsilon}{mL}}\right)\) (the constant may easily be made explicit), producing \(f(\delta i)=f(im/m^{\prime})\) whenever the simulation queries \(\overline{f}(i)\). The algorithm returns the result produced by the simulated tester. The query complexity claim follows from the fact that the tester of [1] has query complexity \(O\left(\frac{1}{\epsilon^{\prime}}\log m^{\prime}\right)\).
We now show correctness. When \(f\) is monotone, so is \(\overline{f}\), so the algorithm will accept since the tester of [1] has one-sided error. Now, suppose \(d_{1}(f)>\epsilon\). Then \(d_{1}(\overline{f})>\epsilon/4\) by Lemma 5.1. Moreover, since \(\overline{f}\) is \(\epsilon\)-Lipschitz by Observation 5.2, Lemma 5.3 implies that
\[d_{0}(\overline{f})\geq\sqrt{\frac{d_{1}(\overline{f})}{m^{\prime}\epsilon}}> \sqrt{\frac{1}{4m^{\prime}}}=\Omega\left(\sqrt{\frac{\epsilon}{mL}}\right)\,.\]
Since this is the proximity parameter \(\epsilon^{\prime}\) used to instantiate the [1] tester, the algorithm will reject with high constant probability, as needed.
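Putting the pieces together, the tester of Theorem 5.4 has a very short description; the sketch below is a schematic rendering in which the Hamming monotonicity tester is an assumed black-box callable (it is not implemented here) and all constants are illustrative.

```python
import math

def l1_line_tester(f, m, L, eps, hamming_tester):
    """Schematic tester from Theorem 5.4 for an L-Lipschitz f : [0, m] -> R.

    hamming_tester(oracle, m_prime, eps_prime) is assumed to be a black-box
    Hamming-distance monotonicity tester for functions on the discrete line
    [m_prime] (e.g. the classical O((1/eps') log m')-query tester), returning
    "accept" or "reject".
    """
    c = 4                                           # the constant from Lemma 5.1
    m_prime = math.ceil(c * m * L / eps)            # grid size for the discretization
    delta = m / m_prime
    eps_prime = 0.25 * math.sqrt(eps / (m * L))     # proximity parameter (constant illustrative)

    def discretized_oracle(i):
        # The discretization f_bar(i) = f(delta * i), computed only when queried.
        return f(delta * i)

    return hamming_tester(discretized_oracle, m_prime, eps_prime)
```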
Lemma 5.3 itself also implies Theorem 1.5 for the discrete domain \([m]\). This time, we use the Hamming tester of [1] to obtain a slightly more precise query complexity bound.10
Footnote 10: One may check that this refinement would have no effect in Theorem 5.4.
**Theorem 5.5**.: _There exists a nonadaptive one-sided \(L^{1}\) monotonicity tester for \(L\)-Lipschitz functions \(\overline{f}:[m]\to\mathbb{R}\) with query complexity \(O\left(\sqrt{\frac{mL}{\epsilon}}\log\left(\frac{m\epsilon}{L}\right)\right)\) when \(\epsilon/L\geq 4/m\), and \(O(m)\) otherwise._
Proof.: The tester sets \(\epsilon^{\prime}:=\sqrt{\frac{\epsilon}{mL}}\), and then runs the (nonadaptive, one-sided) Hamming monotonicity tester of [1] on the line \([m]\) with proximity parameter \(\epsilon^{\prime}\). That tester has query complexity \(O\left(\frac{1}{\epsilon^{\prime}}\log(\epsilon^{\prime}m)\right)\) when \(\epsilon^{\prime}\geq 2/m\) and (trivially) \(O(m)\) otherwise, which gives the claimed upper bounds. It remains to show correctness.
When \(\overline{f}\) is monotone, the algorithm will accept since the tester of [1] has one-sided error. Now, suppose \(d_{1}(\overline{f})>\epsilon\). Then Lemma5.3 yields
\[d_{0}(\overline{f})>\sqrt{\frac{\epsilon}{mL}}=\epsilon^{\prime}\,,\]
so the tester of [1] will reject with high constant probability.
## 6 Lower bounds
In this section, we prove our lower bounds for testing monotonicity on the unit cube and on the hypergrid. We first show our general lower bounds based on a "hole" construction, which hides a monotonicity violating region inside a randomly placed \(\ell^{1}\)-ball; these bounds imply near tightness of our upper bounds for testing on the line from Section 5. Then we give our lower bounds for partial derivative testers, which show that the analysis of our tester in Section 4 is tight.
**Definition 6.1** (\(\ell^{1}\)-ball).: Let \(\Omega\) be one of \(\mathbb{R}^{n}\) or \(\mathbb{Z}^{n}\), let \(x\in\Omega\) and let \(r>0\) be a real number. The _\(\ell^{1}\)-ball of radius \(r\) centered at \(x\)_ is the set \(B_{1}^{n}(r,x):=\{y\in\Omega:\|x-y\|_{1}\leq r\}\). We will also write \(B_{1}^{n}(r):=B_{1}^{n}(r,0)\).
It will be clear from the context whether the domain should be taken to be continuous or discrete, i.e. whether \(B_{1}^{n}(r,c)\) should be understood under \(\Omega=\mathbb{R}^{n}\) or \(\Omega=\mathbb{Z}^{n}\).
We give the following simple bounds on the volume of continuous and discrete \(\ell^{1}\)-balls. Since we do not require particularly tight bounds, we opt for a simple formulation and elementary proof.
**Proposition 6.2**.: _There exist functions \(c_{1},c_{2}:\mathbb{N}\to\mathbb{R}_{>0}\) satisfying the following. Let \(n\in\mathbb{N}\). Let \(\Omega\) be one of \(\mathbb{R}^{n}\) or \(\mathbb{Z}^{n}\). Let \(r\in\mathbb{R}\) satisfy \(r>0\) if \(\Omega=\mathbb{R}^{n}\), and \(r\geq 1\) if \(\Omega=\mathbb{Z}^{n}\). Then_
\[c_{1}(n)r^{n}\leq\nu\left(B_{1}^{n}(r)\right)\leq c_{2}(n)r^{n}\,.\]
Proof.: First suppose \(\Omega=\mathbb{R}^{n}\). Then we have the following formula for the area of the \(\ell^{1}\)-ball of radius \(r\) (see e.g. [20]):
\[\nu\left(B_{1}^{n}(r)\right)=\frac{(2r)^{n}}{n!}\,.\]
The result follows by letting \(c_{1}(n)\leq 2^{n}/n!\) and \(c_{2}(n)\geq 2^{n}/n!\).
Now, suppose \(\Omega=\mathbb{Z}^{n}\), and suppose \(r\) is an integer without loss of generality (since \(r\geq 1\), there exist integers within a factor of \(2\) above and below \(r\)). We proceed by an inductive argument. For \(n=1\), the volume is
\[\nu\left(B_{1}^{1}(r)\right)=1+2\sum_{d=1}^{r}1=1+2r\,,\]
so the claim holds by letting \(c_{1}(1)\leq 2\) and \(c_{2}(1)\geq 3\). Assuming the claim for some \(n\in\mathbb{N}\), we have
\[\nu\left(B_{1}^{n+1}(r)\right) =|\{x=(x_{1},\ldots,x_{n},x_{n+1}):\|x\|_{1}\leq r\}|=\sum_{y=-r}^ {r}\big{|}\big{\{}x^{\prime}=(x_{1}^{\prime},\ldots,x_{n}^{\prime}):\|x^{ \prime}\|_{1}\leq r-|y|\big{\}}\big{|}\] \[=\nu\left(B_{1}^{n}(r)\right)+2\sum_{d=1}^{r}\nu\left(B_{1}^{n}(r -d)\right)\,.\]
Since the last expression is at most \(3r\cdot\nu\left(B_{1}^{n}(r)\right)\), using the inductive hypothesis we conclude
\[\nu\left(B_{1}^{n+1}(r)\right)\leq 3r\cdot c_{2}(n)r^{n}=3c_{2}(n)r^{n+1}\leq c _{2}(n+1)r^{n+1}\,,\]
the last inequality as long as \(c_{2}(n+1)\geq 3c_{2}(n)\).
For the lower bound, we consider two cases. Note that \(r-d\geq r/2\) for at least \(\lfloor r/2\rfloor\) values of \(d\). When \(r\geq 4\), we have \(\lfloor r/2\rfloor\geq r/3\), and then
\[\nu\left(B_{1}^{n+1}(r)\right)\geq c_{1}(n)r^{n}+2\sum_{d=1}^{r}c_{1}(n)(r-d)^ {n}>\frac{2r}{3}\cdot c_{1}(n)\left(\frac{r}{2}\right)^{n}\geq c_{1}(n+1)r^{n +1}\,,\]
the last inequality as long as \(c_{1}(n+1)\leq\frac{2}{3}\cdot 2^{-n}\cdot c_{1}(n)\). On the other hand, if \(r<4\), the bound follows easily for small enough \(c_{1}(n+1)\), since
\[\nu\left(B_{1}^{n+1}(r)\right)\geq c_{1}(n)r^{n}+2\sum_{d=1}^{r}c_{1}(n)(r-d )^{n}>\frac{c_{1}(n)r^{n+1}}{r}>\frac{c_{1}(n)}{4}r^{n+1}\,.\qed\]
**Remark 6.3**.: Note that the constants \(c_{1}(n)\) and \(c_{2}(n)\) in Proposition 6.2 have poor dependence on \(n\), and in particular this is tight in the continuous case. This fact is essentially the reason why this construction is only efficient for constant dimension \(n\).
We now prove our tester-independent lower bounds. Note that there exists a tester for \((\ell^{1},L)\)-Lipschitz functions with proximity parameter \(\epsilon\) if and only if there exists a tester for \((\ell^{1},1)\)-Lipschitz functions with proximity parameter \(\epsilon/L\) (the reduction consists of simply rescaling the input values). Therefore it suffices to prove the theorems for the case \(L=1\). The following two theorems establish the continuous and discrete cases of Theorem 1.6.
**Theorem 6.4** (Lower bound for constant \(n\) on the unit cube).: _Let \(n\in\mathbb{N}\) be a constant. Any \(L^{1}\) monotonicity tester (with two-sided error, and adaptive value and directional derivative queries) for Lipschitz functions \(f:[0,1]^{n}\to\mathbb{R}\) satisfying \(\mathsf{Lip}_{1}(f)\leq 1\) requires at least \(\Omega\left((1/\epsilon)^{\frac{n}{n+1}}\right)\) queries._
Proof.: We construct a family of functions that are \(\epsilon\)-far from monotone in \(L^{1}\) distance such that any deterministic algorithm cannot reliably distinguish between a function chosen uniformly at random from this family and the constant-\(0\) function with fewer than the announced number of queries; then, the claim will follow from Yao's principle.
Each such function \(f\) is constructed as follows. Let \(c\in[0,1]^{n}\) be a point such that the ball \(B_{1}^{n}(r,c)\) is completely inside \([0,1]^{n}\), for radius \(r\) to be chosen below. Then \(f\) takes value \(0\) everywhere outside \(B_{1}^{n}(r,c)\), and inside this ball, it takes value
\[f(x)=-r+\|x-c\|_{1}\]
for each \(x\in B_{1}^{n}(r,c)\). Then \(\mathsf{Lip}_{1}(f)=1\). We now lower bound \(d_{1}(f)\), its distance to monotonicity. Fix any \(x^{\prime}\in[0,1]^{n-1}\) and consider the line of points \((y,x^{\prime})\) for \(y\in[0,1]\), i. e. the line along the first coordinate with remaining coordinates set to \(x^{\prime}\). Suppose this line intersects \(B_{1}^{n}(r,c)\). Then this intersection occurs on some interval \([a,b]\) of \(y\)-values, and on this interval, \(f\) first decreases from \(f(a,x^{\prime})=0\) to \(f\left(\frac{a+b}{2},x^{\prime}\right)=-\frac{b-a}{2}\) at rate \(1\), and then increases at rate \(1\) back to \(f(b,x^{\prime})=0\). Any monotone function \(g\) is in particular monotone over this line, and it is easy to see that this requires total change to \(f\) proportional to the area under this curve:
\[\int_{0}^{1}\bigl{|}f(y,x^{\prime})-g(y,x^{\prime})\bigr{|}\,\mathrm{d}y \gtrsim\int_{0}^{1}\bigl{|}f(y,x^{\prime})\bigr{|}\,\mathrm{d}y\,.\]
Now, since this holds for any line intersecting \(B_{1}^{n}(r,c)\), and the collection of such lines gives a partition of \(B_{1}^{n}(r,c)\), the total distance between \(f\) and any monotone function \(g\) is lower bounded (up to a constant) by the \(L^{1}\)-norm of \(f\):
\[\int_{[0,1]^{n}}\bigl{|}f-g\bigr{|}\,\mathrm{d}\nu\gtrsim\int_{[0,1]^{n}} \bigl{|}f\bigr{|}\,\mathrm{d}\nu\,,\]
and since this holds for any choice of \(g\), we conclude that
\[d_{1}(f)\gtrsim\int_{[0,1]^{n}}\bigl{|}f\bigr{|}\,\mathrm{d}\nu\,.\]
We now note that this last expression is half the volume of an \(\ell^{1}\)-ball in dimension \(n+1\): for each point \(x\in B_{1}^{n}(r,c)\), the contribution to the integrand is \(|f(x)|=r-\|x-c\|_{1}\), corresponding to the measure of points \((x,z^{\prime})\) for all \(0\leq z^{\prime}\leq z\) where \(z=r-\|x-c\|_{1}\), so that the point \((x,z)\in\mathbb{R}^{n+1}\) satisfies \(\|(x,z)-(c,0)\|_{1}=r\). In other words, the points \((x,z^{\prime})\) are the points of \(B_{1}^{n+1}(r,(c,0))\) with nonnegative last coordinate. Conversely, all such points contribute to the integral above. Therefore, since \(n\) is a constant, using Proposition6.2 and writing \(\nu^{n+1}\) for the Lebesgue measure on \(\mathbb{R}^{n+1}\), we have
\[d_{1}(f)\gtrsim\int_{[0,1]^{n}}\bigl{|}f\bigr{|}\,\mathrm{d}\nu\gtrsim\nu^{n+ 1}\left(B_{1}^{n+1}(r)\right)\gtrsim r^{n+1}\,.\]
We wish this last quantity to be at least \(\Omega(\epsilon)\), so (recalling \(n\) is a constant) it suffices to set
\[r\approx\epsilon^{\frac{1}{n+1}}\,.\]
We have established that each function \(f\), for this choice of \(r\) and any choice of \(c\), is \(\epsilon\)-far from monotone as desired. Our family of functions from which \(f\) will be drawn will be given by choices of \(c\) such that the balls \(B_{1}^{n}(r,c)\) are disjoint, so that each query may only rule out one such choice (because queries outside \(B_{1}^{n}(r,c)\) take value \(0\)). How many disjoint balls \(B_{1}^{n}(r,c)\) can we fit inside \([0,1]^{n}\)? It suffices to divide \([0,1]^{n}\) into a grid of \(n\)-dimensional cells of side \(2r\), each of which can contain one ball. The number of such cells is at least (up to a constant factor)
\[(1/r)^{n}\gtrsim(1/\epsilon)^{\frac{n}{n+1}}\,.\]
Therefore to distinguish some \(f\) uniformly drawn from this family from the constant-\(0\) function with constant probability, any deterministic algorithm must have query complexity at least \(\Omega\left((1/\epsilon)^{\frac{n}{n+1}}\right)\).
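To make the construction concrete, one function from this family can be sampled as follows. This is only a sketch: the constant in the choice of \(r\) is illustrative, and \(\epsilon\) is assumed small enough that \(r\leq 1/2\), so that an admissible center exists.

```python
import numpy as np

def sample_hard_instance(n, eps, rng=None):
    """Sample one 'hidden hole' function from the lower-bound family of Theorem 6.4.

    Returns a callable f : [0,1]^n -> R with Lip_1(f) = 1 that equals 0 outside
    a randomly placed ell^1-ball of radius r and dips linearly to -r at its center.
    """
    if rng is None:
        rng = np.random.default_rng()
    r = eps ** (1.0 / (n + 1))                 # radius chosen so that d_1(f) = Omega(eps)
    c = rng.uniform(r, 1.0 - r, size=n)        # center with B_1^n(r, c) inside [0,1]^n

    def f(x):
        dist = np.abs(np.asarray(x, dtype=float) - c).sum()   # ell^1 distance to the center
        return min(0.0, dist - r)              # -r + ||x - c||_1 inside the ball, 0 outside

    return f

f = sample_hard_instance(n=2, eps=0.01)
print(f([0.0, 0.0]), f([1.0, 1.0]))   # both corners lie outside the hidden ball, so both values are 0
```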
**Theorem 6.5** (Lower bound for constant \(n\) on the hypergrid).: _Let \(n\in\mathbb{N}\) be a constant. Any \(L^{1}\) monotonicity tester (with two-sided error and adaptive queries) for functions \(f:[m]^{n}\to\mathbb{R}\) satisfying \(\mathsf{Lip}_{1}(f)\leq 1\) requires at least \(\Omega\left(\min\left\{(m/\epsilon)^{\frac{n}{n+1}},m^{n}\right\}\right)\) queries._
Proof.: We proceed similarly to Theorem6.4, with small changes for the discrete setting (essentially corresponding to the requirement that \(r\geq 1\) in the discrete case of Proposition6.2).
We will again construct functions \(f\) based on balls \(B_{1}^{n}(r,c)\) for suitable choices of \(r\) and \(c\). For fixed \(r\) and \(c\), \(f\) takes value \(0\) outside the ball and, for each \(x\in B_{1}^{n}(r,c)\),
\[f(x)=-r+\|x-c\|_{1}\,,\]
so that \(\mathsf{Lip}_{1}(f)=1\). Again by a line restriction argument, for any monotone function \(g\) we have
\[\int_{[m]^{n}}\lvert f-g\rvert\,\mathrm{d}\nu\gtrsim\int_{[m]^{n}}\lvert f \rvert\,\mathrm{d}\nu\,,\]
and thus
\[d_{1}(f)\gtrsim\frac{1}{m^{n}}\int_{[m]^{n}}\lvert f\rvert\,\mathrm{d}\nu\,, \tag{7}\]
the normalizing factor due to Definition 2.2.
When \(\epsilon\leq 1/m^{n}\), this construction boils down to setting \(f(x)=-1\) at a single point \(x\), which requires \(\Omega(m^{n})\) queries to identify. Now, assume \(\epsilon>1/m^{n}\).
Again we may identify the integrand of (7) with points on half of \(B_{1}^{n+1}(r,(c,0))\). As long as \(r\geq 1\) and since \(n\) is a constant, Proposition6.2 implies that
\[d_{1}(f)\gtrsim\frac{r^{n+1}}{m^{n}}\,.\]
Thus to have \(d_{1}(f)\geq\epsilon\), it suffices (since \(n\) is a constant) to set
\[r\approx m^{\frac{n}{n+1}}\epsilon^{\frac{1}{n+1}}\,,\]
and indeed this gives \(r\geq 1\) since \(\epsilon>1/m^{n}\). Then, our functions \(f\) are given by choices of \(c\) placed on the hypergrid \([m]^{n}\) inside disjoint cells of side \(2r\), of which there are at least (up to a constant factor)
\[\left(\frac{m}{r}\right)^{n}\gtrsim\left(\frac{m}{\epsilon}\right)^{\frac{n}{n +1}}\,,\]
and thus any deterministic algorithm requires \(\Omega\left((m/\epsilon)^{\frac{n}{n+1}}\right)\) queries to distinguish a uniformly chosen \(f\) from this family from the constant-\(0\) function.
The construction for the partial derivative tester lower bounds is simpler: we start with a "step" one-dimensional construction which is flat everywhere except for a small region of negative slope, and then copy this function onto every line along a randomly chosen coordinate \(i\). Then a partial derivative tester must correctly guess both \(i\) and the negative-slope region to detect such functions. The following two theorems establish the continuous and discrete cases of Theorem1.7.
**Theorem 6.6** (Lower bound for partial derivative testers on the unit cube).: _Any partial derivative \(L^{1}\) monotonicity tester for Lipschitz functions \(f:[0,1]^{n}\to\mathbb{R}\) satisfying \(\mathsf{Lip}_{1}(f)\leq 1\) (with two-sided error and adaptive queries) requires at least \(\Omega(n/\epsilon)\) queries._
Proof.: Let \(\epsilon\leq 1/6\). For any \(z\in\left[\frac{1}{3},\frac{2}{3}-\epsilon\right]\), let \(g_{z}:[0,1]\to\mathbb{R}\) be the function given by
\[g_{z}(x)=\begin{cases}\epsilon&\text{if }x<z\,,\\ \epsilon-(x-z)&\text{if }z\leq x\leq z+\epsilon\,,\\ 0&\text{if }x>z+\epsilon\,.\end{cases}\]
Note that \(g_{z}\) is Lipschitz with \(\mathsf{Lip}_{1}(g)=1\). Moreover, we claim that \(d_{1}(g_{z})\gtrsim\epsilon\). Indeed, for any \(x\in[0,1/3]\), we have that \(g_{z}(x)=\epsilon\) and \(g_{z}(2/3+x)=0\). On the other hand, for any monotone function \(h:[0,1]\to\mathbb{R}\) we must have \(h(x)\leq h(2/3+x)\). Thus, for any such \(h\) we have \(|g_{z}(x)-h(x)|+|g_{z}(2/3+x)-h(2/3+x)|\geq\epsilon\). Since this holds for all \(x\in[0,1/3]\), we conclude that for any such \(h\) we must have \(\mathbb{E}\left[|g_{z}-h|\right]\geq\epsilon/3\), proving the claim.
Now, for any \(i\in[n]\) and \(z\in\left[\frac{1}{3},\frac{2}{3}-\epsilon\right]\), let \(f_{i,z}:[0,1]^{n}\to\mathbb{R}\) be given by copying \(g_{z}\) onto \(f\) along every line in direction \(i\), i.e. setting \(f_{i,z}(x)=g_{z}(x_{i})\) for every \(x\in[0,1]^{n}\). Note that \(\mathsf{Lip}_{1}(f)=1\) (since its partial derivatives are \(0\) along non-\(i\) coordinates), and \(d_{1}(f)\gtrsim\epsilon\) (since the lines in direction \(i\) partition the domain).
We construct a set of \(\Omega(n/\epsilon)\) functions \(f_{i,z}\) as follows. First, \(i\) can be any of the coordinates in \([n]\). Then let \(z_{1},\ldots,z_{k}\) be given by \(z_{j}=\frac{1}{3}+j\epsilon\) for each \(j\in[k]\), where \(k=\Omega(1/\epsilon)\), such that for each \(j\) we have \(z_{j}\in\left[\frac{1}{3},\frac{2}{3}-\epsilon\right]\) and, moreover, for distinct \(j,\ell\in[k]\), the regions where \(f_{i,z_{j}}\) and \(f_{i,z_{\ell}}\) take non-zero slope are disjoint. It follows that each partial derivative query may only rule out one such \(f_{i,z}\), so any partial derivative tester that distinguishes an \(f_{i,z}\) chosen uniformly at random from the constant-\(0\) function must make at least \(\Omega(n/\epsilon)\) queries.
The argument for the hypergrid is similar, except that the construction cannot be made to occupy an arbitrarily small region of the domain when the domain is discrete. We opt to keep the argument simple and give a proof for constant parameter \(\epsilon\).
**Theorem 6.7** (Lower bound for edge testers on the hypergrid).: _For sufficiently small constant \(\epsilon\), any partial derivative \(L^{1}\) monotonicity tester for functions \(f:[m]^{n}\to\mathbb{R}\) satisfying \(\mathsf{Lip}_{1}(f)\leq 1\) (with two-sided error and adaptive queries) requires at least \(\Omega(nm)\) queries._
Proof.: Let \(m\) be a multiple of \(3\) for simplicity. For each \(z\in\left\{\frac{m}{3}+1,\dots,\frac{2m}{3}\right\}\), define \(g_{z}:[m]\to\mathbb{R}\) by
\[g_{z}(x)=\begin{cases}1&\text{if }x<z\,,\\ 0&\text{if }x\geq z\,.\end{cases}\]
Then \(\mathsf{Lip}_{1}(g_{z})=1\) and, as before, we have \(d_{1}(g_{z})=\Omega(1)\). Then for each \(i\in[n]\) and \(z\in\left[\frac{m}{3}+1,\frac{2m}{3}\right]\), we let \(f_{i,z}:[m]^{n}\to\mathbb{R}\) be given by \(f_{i,z}(x)=g_{z}(x_{i})\) for each \(x\in[m]^{n}\); it follows that \(\mathsf{Lip}_{1}(f)=1\) and \(d_{1}(f)=\Omega(1)\). Note that there are \(\Omega(nm)\) such functions. Moreover, each partial derivative query may only rule out one such \(f_{i,z}\), and therefore any edge tester that distinguishes an \(f_{i,z}\) chosen uniformly at random from the constant-\(0\) function must make at least \(\Omega(nm)\) queries.
## 7 Overview of prior works on monotonicity testing
We first summarize results on testing monotonicity with respect to the Hamming distance.
Boolean-valued functions.Among the early works on this problem, [10] gave testers for functions on the hypergrid \([m]^{n}\) with query complexities \(O(n\log(m)/\epsilon)\) and \(O((n/\epsilon)^{2})\); note that the latter bound is independent of \(m\), and the query complexity of testers with this property was subsequently improved to \(O((n/\epsilon)\log^{2}(n/\epsilon))\) by [11] and to \(O((n/\epsilon)\log(n/\epsilon))\) by [10]. For functions on the Boolean cube \(\{0,1\}^{n}\), [12] gave the first \(o(n)\) tester, subsequently improved by [13], culminating in the \(\widetilde{O}(\sqrt{n}/\epsilon^{2})\) tester of [14], which essentially resolved the question for nonadaptive testers. Whether adaptivity helps in monotonicity testing is still an open question; see the lower bounds below, and also [12].
Returning to hypergrid domains \([m]^{n}\), [1, 1] established first testers with \(o(n)\) query complexity and, via a domain reduction technique, also obtained \(o(n)\) testers for product distributions on \(\mathbb{R}^{n}\) (and the alternative proof of [15] improves the number of _samples_ drawn by the tester when the distribution is unknown). Subsequent works [1, 1] attained the optimal dependence on \(n\) at the cost of a dependence on \(m\), with upper bounds of the form \(\widetilde{O}(\sqrt{n}\operatorname{poly}(m))\). Most recently, [1] gave a tester with query complexity \(O(n^{1/2+o(1)}/\epsilon^{2})\), which is almost optimal for nonadaptive algorithms, and again extends to product measures on \(\mathbb{R}^{n}\).
Real-valued functions.[1] gave a tester with query complexity \(O(\log(m)/\epsilon)\) for real-valued functions on the line \([m]\); the tight query complexity of this problem was more recently shown to be \(\Theta(\log(\epsilon m)/\epsilon)\)[1]. As for functions on the hypergrid \([m]^{n}\), [10, 10] also gave testers for larger ranges, but the query complexity depends on the size of the range. Then, [12] gave a nonadaptive tester with one-sided error and (optimal) query complexity \(O(n\log(m)/\epsilon)\). On the Boolean cube, [1] gave a tester with query complexity \(\widetilde{O}\left(\min\left\{r\sqrt{n}/\epsilon^{2},n/\epsilon\right\}\right)\) for real-valued functions \(f\) with image size \(r\), and showed that this is optimal (for constant \(\epsilon\)) for nonadaptive testers with one-sided error.
Lower bounds.We briefly summarize the known lower bounds for these problems; all lower bounds listed are for testers with two-sided error unless noted otherwise. For Boolean functions on the Boolean cube \(\{0,1\}^{n}\), there is a near-optimal lower bound of \(\widetilde{\Omega}(\sqrt{n})\) for nonadaptive testers [10], which improves on prior results of [12, 13, 14]. For adaptive testers, [1] gave the first polynomial lower bound of \(\widetilde{\Omega}(n^{1/4})\), since improved to \(\widetilde{\Omega}(n^{1/3})\) by [10].
Turning to real-valued functions, [15] combined Ramsey theory arguments with a result of [1] to show a \(\Omega(\log m)\) lower bound for adaptive testers on the line \([m]\). On the Boolean cube, [1] gave a \(\Omega(n/\epsilon)\) nonadaptive one-sided lower bound, and [1] gave an adaptive
lower bound of \(\Omega(n)\). On the hypergrid, [13] gave a nonadaptive lower bound of \(\Omega(n\log m)\) by communication complexity arguments, [10] showed the optimal lower bound of \(\Omega(n\log(m)/\epsilon-\log(1/\epsilon)/\epsilon)\) for adaptive testers using Ramsey theory (which involves functions with large range), and [1] gave an alternative proof of this bound that does not use Ramsey theory.
\(L^{p}\)-testing.Finally, moving from Hamming testers to \(L^{p}\) testers, and assuming functions with range \([0,1]\), [13] (who formally introduced this model) gave nonadaptive \(L^{p}\) monotonicity testers with one-sided error on the hypergrid \([m]^{n}\) with query complexity \(O((n/\epsilon^{p})\log(n/\epsilon^{p}))\)--note this is independent of \(m\), bypassing the Hamming testing lower bound--and a lower bound of \(\Omega((1/\epsilon^{p})\log(1/\epsilon^{p}))\) for nonadaptive testers with one-sided error; on the line, they showed there is an \(O(1/\epsilon^{p})\) nonadaptive tester with one-sided error and a matching lower bound for adaptive testers with two-sided error. They also gave a reduction from \(L^{p}\) monotonicity testing to Hamming testing of Boolean functions for nonadaptive one-sided testers, so in particular \(L^{1}\) testing functions with range \([0,1]\) is no harder than Hamming testing functions with Boolean range.
We also remark that our problem, which is parameterized by the upper bound \(L\) on the Lipschitz constant of input functions, lies under the umbrella of parameterized property testing, and refer to [14] for an introduction to, and results on this type of tester.
Acknowledgments. We thank Eric Blais for helpful discussions throughout the course of this project, and for comments and suggestions on preliminary versions of this paper.
|
2310.08717
|
Designing Observables for Measurements with Deep Learning
|
Many analyses in particle and nuclear physics use simulations to infer
fundamental, effective, or phenomenological parameters of the underlying
physics models. When the inference is performed with unfolded cross sections,
the observables are designed using physics intuition and heuristics. We propose
to design targeted observables with machine learning. Unfolded, differential
cross sections in a neural network output contain the most information about
parameters of interest and can be well-measured by construction. The networks
are trained using a custom loss function that rewards outputs that are
sensitive to the parameter(s) of interest while simultaneously penalizing
outputs that are different between particle-level and detector-level (to
minimize detector distortions). We demonstrate this idea in simulation using
two physics models for inclusive measurements in deep inelastic scattering. We
find that the new approach is more sensitive than classical observables at
distinguishing the two models and also has a reduced unfolding uncertainty due
to the reduced detector distortions.
|
Owen Long, Benjamin Nachman
|
2023-10-12T20:54:34Z
|
http://arxiv.org/abs/2310.08717v2
|
# Designing Observables for Measurements with Deep Learning
###### Abstract
Many analyses in particle and nuclear physics use simulations to infer fundamental, effective, or phenomenological parameters of the underlying physics models. When the inference is performed with unfolded cross sections, the observables are designed using physics intuition and heuristics. We propose to design optimal observables with machine learning. Unfolded, differential cross sections in a neural network output contain the most information about parameters of interest and can be well-measured by construction. We demonstrate this idea using two physics models for inclusive measurements in deep inelastic scattering.
## 1 Introduction
Simulations are widely used for parameter estimation in particle and nuclear physics. A typical analysis will follow one of two paths: forward-folding or unfolding. In the forward-folding pipeline, the target physics model must be specified at the time of inference. We focus on unfolding, where detector distortions are corrected in a first step (unfolding) and then the resulting cross section can be analyzed in the context of many models by any end users. In the unfolding pipeline, the first step is to identify observables sensitive to a given parameter(s). These are typically identified using physical reasoning. Then, the differential cross sections of these observables are measured, which includes unfolding with uncertainty quantification. Finally, the measured cross sections are fit to simulation templates with different values of the target parameters. This approach has been deployed to measure fundamental parameters like the top quark mass [1] and the strong coupling constant \(\alpha_{s}(m_{Z})\)[2; 3] as well as parton distribution functions [4; 5; 6; 7] and effective or phenomenological parameters in parton shower Monte Carlo programs [8].
A key drawback of the standard pipeline is that the observables are constructed manually. There is no guarantee that the observables are maximally sensitive to the target parameters. Additionally, the observables are usually chosen based on particle-level information alone and so detector distortions may not be small. Such distortions can reduce the sensitivity to the target parameter once they are corrected for by unfolding. In some cases, the particle-level observable must be chosen manually because it must be calculable precisely in perturbation theory. There have been proposals to optimize the detector-level observable for a given particle-level observable [9] since they do not need to be the same. Alternatively, one could measure the full phase space and project out the desired observable after the fact [10][11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30]. When the inference is directly based on Monte Carlo simulations, the particle-level observable is not fixed by perturbative calculability.
We propose to use machine learning for designing observables that are maximally sensitive to a given parameter(s) while also being minimally sensitive to detector distortions. Simultaneous optimization ensures that we only use regions of phase space that are measurable. A tailored loss function is used to train neural networks. We envision that this approach could be used for any case where simulations are used for parameter estimation. For concreteness, we demonstrate the new technique to the case of differentiating two parton shower Monte Carlo models of deep inelastic scattering. While neither model is expected to match data exactly, the availability of many events with corresponding detailed simulations makes this a useful benchmark problem. We do not focus on the parameter estimation step itself, but there are many proposals for doing this optimally with machine learning [31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41].
This paper is organized as follows. Section 2 introduces our approach to observable construction. The datasets used for demonstrating the new method are introduced in Sec. 3. Results with these datasets are presented in Sec. 4. The paper ends with conclusions and outlook in Sec. 5.
## 2 Methodology
We begin by constructing new observables that are simultaneously sensitive to a parameter while also being minimally sensitive to detector effects. We introduce the method with a toy model for continuous parameter estimation, which demonstrates the essential ideas in a simplified context. This is followed by a more complete binary classification example using simulated deep inelastic scattering events from the H1 experiment at HERA, where the goal is to be maximally sensitive to distinguishing two datasets.
### Toy example for continuous parameter estimation
For each event, we have pairs \((Z,X)\) where \(Z\) represents the particle-level observable and \(X\) represents the detector-level observable. Capital letters represent random variables while lower-case letters represent realizations of the corresponding random variables. We consider the case where \(X\) and \(Z\) have the same structure, i.e. they are both sets of 4-vectors. This is the standard case where \(X\) is a set of energy-flow objects that are meant to correspond to the 4-vectors of particles before being distorted by the detector. Furthermore, we fix the same definition of the observable at particle and detector level. The training samples are generated with a uniform distribution for the parameter of interest \(\mu\), so each event is specified by \((\mu_{i},z_{i},x_{i})\). Then, we parameterize the observable \(f\) as a neural network and optimize the following loss function:
\[L[f]=\sum_{i}(f(x_{i})-\mu_{i})^{2}+\lambda\sum_{i}(f(x_{i})-f(z_{i}))^{2}\,, \tag{1}\]
where the form of both terms is the usual mean squared error loss used in regression tasks. The first term trains the regression to predict the parameter of interest \(\mu\) while the second term trains the network to make the predictions given detector level features \(x\) and particle
level features \(z\) similar. The hyperparameter \(\lambda\) must be tuned and controls the trade off between sensitivity to the parameter of interest \(\mu\) and sensitivity to detector effects.
The loss function in Eq. 1 is similar to the setting of decorrelation, where a classifier is trained to be independent from a given feature [42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59]. One could apply decorrelation techniques in this case to ensure the classifier is not able to distinguish between features at detector level and at particle level. However, this will only ensure that the probability density for \(f\) is the same for particle level and detector level. To be well-measured, we need more than statistical similarity between distributions - we need them to be similar event by event. The final term in Eq. 1 is designed for exactly this purpose.
All deep neural networks are implemented in Keras[60]/TensorFlow[61] and optimized using Adam[62]. The network models use two hidden layers with 50 nodes per layer and Rectified Linear Unit (ReLU) activation functions for intermediate layers and a linear activation function for the last layer.
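A minimal Keras/TensorFlow sketch of this setup, for the two-feature toy example below, could look as follows. This is not the authors' code: the batch handling, input shapes, and the choice of \(\lambda\) are illustrative, while the architecture follows the description above (two hidden layers of 50 ReLU units and a linear output).

```python
import tensorflow as tf

# Architecture described in the text: two hidden layers of 50 ReLU units, linear output.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(50, activation="relu", input_shape=(2,)),
    tf.keras.layers.Dense(50, activation="relu"),
    tf.keras.layers.Dense(1, activation="linear"),
])
optimizer = tf.keras.optimizers.Adam()
lam = 20.0  # the hyperparameter lambda of Eq. (1)

@tf.function
def train_step(x, z, mu):
    """One gradient step on the loss of Eq. (1) for a batch of (mu_i, z_i, x_i).

    x and z have shape (batch, 2); mu has shape (batch, 1).
    """
    with tf.GradientTape() as tape:
        f_x = model(x, training=True)   # prediction from detector-level features
        f_z = model(z, training=True)   # prediction from particle-level features
        loss = (tf.reduce_sum(tf.square(f_x - mu))            # regression term
                + lam * tf.reduce_sum(tf.square(f_x - f_z)))  # detector/particle agreement term
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```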
Figure 1 illustrates the input features and resolution model for the toy study. Two particle-level features \(z_{0}\) and \(z_{1}\) are modeled as normal distributions: \(Z_{0}\sim\mathcal{N}(\mu,0.5)\) and \(Z_{1}\sim\mathcal{N}(\mu,0.1)\), where feature 1 is significantly more sensitive to the parameter of interest \(\mu\). The experimental resolution on the features is given by \(X_{0}\sim\mathcal{N}(Z_{0},0.1)\) and \(X_{1}\sim\mathcal{N}(Z_{1},0.5)\) so that feature 0 is well measured, while feature 1 has a relatively poor resolution. For this model, the net experimental sensitivity to \(\mu\) is the same for both features, but feature 0 is much less sensitive to detector effects. Our proposed method will take this into account in the training of the neural network.
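The toy inputs are fully specified by the distributions above, so they are easy to generate; the sketch below draws \((\mu,z,x)\) triples for either resolution model. The range of \(\mu\) is illustrative, as it is not stated in the text.

```python
import numpy as np

def generate_toy_events(n_events, resolution="A", rng=None):
    """Draw (mu, z, x) triples for the toy study.

    Particle level: Z0 ~ N(mu, 0.5), Z1 ~ N(mu, 0.1).
    Detector level, model A: X0 ~ N(Z0, 0.1), X1 ~ N(Z1, 0.5).
    Model B widens both resolutions by a factor 1.4 and biases x1 by +0.2.
    """
    if rng is None:
        rng = np.random.default_rng()
    mu = rng.uniform(-1.0, 1.0, size=n_events)          # illustrative range for mu
    z = np.stack([rng.normal(mu, 0.5), rng.normal(mu, 0.1)], axis=1)
    scale, bias = (1.0, 0.0) if resolution == "A" else (1.4, 0.2)
    x = np.stack([rng.normal(z[:, 0], scale * 0.1),
                  rng.normal(z[:, 1], scale * 0.5) + bias], axis=1)
    return mu, z, x

mu, z, x = generate_toy_events(100000)   # training sample drawn with resolution model A
```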
To demonstrate the sensitivity to uncertainties associated with detector effects, we make predictions using \(f\) trained with resolution model A on a sample generated with resolution model B, shown in Figure 1, where the width is increased by a factor of 1.4 for both features and a bias of 0.2 is introduced for the \(x_{1}\) feature. Figure 2 shows the results as a function of the \(\lambda\) hyperparameter. With \(\lambda=0\), which corresponds to the usual approach for this type of regression task, the correlation between each of the detector level features \(x_{0}\) and \(x_{1}\) and the network prediction \(f\) is the same, reflecting the fact that \(x_{0}\) and \(x_{1}\) have the same sensitivity to \(\mu\). As \(\lambda\) increases, more emphasis is placed on feature \(x_{0}\), which is well measured. The resolution of the regression, given by the RMS of \(f\), starts at the expected value of about \(\sqrt{0.5^{2}+0.1^{2}}/\sqrt{2}\) for resolution model A and \(\lambda=0\) and increases with \(\lambda\) as the network relies more on \(x_{0}\) for the prediction. The bias in the prediction for resolution model B is large for \(\lambda=0\) but falls significantly with increasing \(\lambda\).
### Full example for binary classification
In the binary case, we have two datasets generated from simulation 1 (sim. 1) and simulation 2 (sim. 2). The loss function for classification is given by
\[L[f]=-\sum_{i\in\text{sim. 1}}\log(f(z_{i}))-\sum_{i\in\text{sim. 2}}\log(1-f(z_{i}))+\lambda\sum_{i\in\text{sim. 1 \& 2}}\left(f(x_{i})-f(z_{i})\right)^{2}, \tag{2}\]
where the first two terms represent the usual binary cross entropy loss function for classification and the third term represents the usual mean squared error loss term for regression
tasks. As in the regression case, the hyperparameter \(\lambda\) must be tuned and controls the trade off between sensitivity to the dataset and sensitivity to detector effects. The network model is the same as in the previous example except that the final layer uses a sigmoid activation function. The binary case is a special case of the previous section where there are only two values of the parameter of interest. It may also be effective to train the binary case for a continuous parameter using two extreme values of the parameter. In this paper, we use high-quality, well-curated datasets from the binary case because of their availability, but it would be interesting to explore the continuous case in the future.
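In the same spirit, the classification loss of Eq. (2) can be written directly in TensorFlow, assuming the network outputs have already been evaluated on the particle- and detector-level feature sets of each simulation; numerical safeguards such as clipping the sigmoid output away from 0 and 1 are omitted from this sketch.

```python
import tensorflow as tf

def classification_loss(f_z1, f_x1, f_z2, f_x2, lam=100.0):
    """Loss of Eq. (2).

    f_z1, f_x1: network outputs on particle- and detector-level features of sim. 1;
    f_z2, f_x2: the same for sim. 2; lam is the lambda hyperparameter.
    """
    cross_entropy = (-tf.reduce_sum(tf.math.log(f_z1))
                     - tf.reduce_sum(tf.math.log(1.0 - f_z2)))
    agreement = (tf.reduce_sum(tf.square(f_x1 - f_z1))
                 + tf.reduce_sum(tf.square(f_x2 - f_z2)))
    return cross_entropy + lam * agreement
```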
To determine the efficacy of the new observable, we unfold it with TUnfold version 17.9 [63] through the interface included in the Root 6.24 [64] distribution. The response matrix is defined from a 2D binning of the NN output, given detector level and particle level
Figure 1: Input features and resolution model for toy regression example. Two different experimental resolution functions, A and B, are shown.
features. The matrix uses 24 and 12 bins for detector and particle inputs, respectively, which gives reasonable stability and cross-bin correlations in the unfolding results. The ultimate test is to show that the difference between detector-level sim. 1 unfolded with the sim. 2 response matrix and particle-level sim. 1 (or vice versa) is smaller than the difference between sim. 1 and sim. 2 at particle level. In other words, this test shows that the ability to distinguish sim. 1 and sim. 2 significantly exceeds the modeling uncertainty from the unfolding.
## 3 Datasets
We use deep inelastic scattering events from high-energy electron-proton collisions to demonstrate the performance of the new approach. These simulated data are from the H1 experiment at HERA [65; 66] and are used in the same way as Ref. [67]. They are briefly described in the following.
Two parton shower Monte Carlo programs provide the particle-level simulation: Rapgap 3.1 [68] or Djangoh 1.4 [69]. The energies of the incoming beams are \(E_{e}=27.6\) GeV and \(E_{p}=920\) GeV, for the lepton and proton, respectively, matching the running conditions of HERA II. Radiation from Quantum Electrodynamic processes is simulated by Heracles routines [70; 71; 72] in both cases. The outgoing particles from these two datasets are then fed into a Geant 3 [73]-based detector simulation.
Following the detector simulation, events are reconstructed with an energy-flow algorithm [74; 75; 76] and the scattered electron is reconstructed using the default H1 approach [77; 24; 78]. Mis-measured backgrounds are suppressed with standard selections [77; 78]. This whole process makes use of the modernized H1 computing environment at DESY [79]. Each dataset is comprised of approximately 10 million events.
Figure 3 shows histograms of the nine features used as input for the neural network training. These features include the energy \(E\), longitudinal momentum \(p_{z}\), transverse momentum \(p_{T}\), and pseudorapidity \(\eta\) of the scattered electron and the total Hadronic Final State (HFS), as well as the difference in azimuthal angle between the two \(\Delta\phi(e,\mathrm{HFS})\). The HFS is quite sensitive to the \(\eta\) acceptance of the detector. In order to have HFS features
Figure 2: Results of toy regression example as a function of the \(\lambda\) hyperparameter.
that are comparable for the particle and detector definitions, we only use generated final state particles with \(|\eta|<3.8\) in the definition of the particle-level HFS 4-vector. Both simulations provide event weights that must be used for physics analysis. In our study, we do not weight the simulated events in order to maximize the effective statistics of the samples in the neural network training. The electron feature distributions agree very well for the two simulations, while there are some visible differences in the HFS features.
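A minimal sketch of the particle-level HFS definition used here, assuming per-particle kinematic arrays for a single event with the scattered electron already removed (an assumption on our part):

```
import numpy as np

def hfs_four_vector(px, py, pz, e, eta, eta_max=3.8):
    """Sum generated final-state particles with |eta| < eta_max into the HFS 4-vector."""
    keep = np.abs(eta) < eta_max
    return np.array([px[keep].sum(), py[keep].sum(), pz[keep].sum(), e[keep].sum()])
```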
## 4 Results
We now apply the method introduced in Sec. 2.2 to the DIS dataset described in Sec. 3. Figure 4 shows the results of four neural network trainings with different values of \(\lambda\), which sets the relative weight of the MSE term in the loss function (Eq. 2) that controls the
Figure 3: Particle level distributions of the nine NN input features for the Djangoh and Rapgap generators.
Figure 4: Neural Network output distributions for four values of the \(\lambda\) hyperparameter, which sets the scale for the Detector - Particle disagreement penalty in the loss function. The top row shows the results for \(\lambda=0\), where there is no penalty if the NN predictions for Detector-level input features and Particle-level input features disagree. The bottom three rows show increasing values of \(\lambda\) : 1, 20, and 100.
sensitivity to detector effects. With \(\lambda=0\), the classification performance for particle-level inputs is strong, while there are significant disagreements between the particle and detector level neural network outputs. As \(\lambda\) increases, the particle and detector level agreement improves at the cost of weaker classification performance. In what follows, we will use the network trained with \(\lambda=100\). For a parameter estimation task, the entire distribution will be used for inference and therefore excellent event-by-event classification is not required.
Next, we investigate how these spectra are preserved after unfolding. Figure 5 shows the results of unfolding the neural network output, given detector-level features, to give the neural network output distribution for particle-level features. The input distribution for the unfolding is \(10^{5}\) events randomly chosen from a histogram of the neural network output for detector-level inputs from the simulation. The unfolding response matrices for the two simulations agree fairly well and are concentrated along the diagonal. The output of the unfolding shows very good agreement with the true distribution of the neural network output given particle-level inputs, demonstrating acceptable closure for the unfolding. The correlations in the unfolding result are mostly between neighboring bins of the distribution.
One of the biggest challenges for the \(\lambda=0\) case is that it is highly sensitive to regions of phase space that are not well-constrained by the detector. As a result, the output of
Figure 5: Results of unfolding the NN output. The top (bottom) row shows the unfolding for the Rapgap (Djangoh) generator. The left column shows the response matrix for the unfolding, where the distribution of the NN output given the Detector-based input features (horizontal axis) is normalized to unit area for each bin of the NN output given the Particle-based input features (vertical axis). The center column shows the unfolded distribution compared to the true distribution. The right column shows the matrix of correlation coefficients from the unfolding.
the unfolding is highly dependent on the simulation used in the unfolding (prior). Figure 6 shows the model dependence of the unfolding and the ability of the neural network to perform the model classification task. The top row of the figure shows the results of the closure tests compared with a measure of the unfolding model dependence, where we perform the unfolding with the response matrix from the other simulation. The unfolded distributions have been normalized using the true distribution from the same simulation, giving an expected flat distribution consistent with 1. The model dependence is small and generally less than 10%. The bottom row shows the unfolded distributions instead normalized by the true distribution from the other simulation. The degree to which the distribution deviates from unity is a measure of the model discrimination power of the network. The results show significant deviations from unity, indicating that the neural network can distinguish the two simulations. The size of the deviations from unity is large
Figure 6: Results of testing the model dependence of the unfolding the **neural network output** distribution by varying the response matrix used in the unfolding. The top row normalizes the unfolded distribution with the true distribution from the same simulation, testing the unfolding closure (points) and unfolding model dependence (red histogram). The bottom row normalizes the unfolded distribution with the true distribution from the other simulation, showing the ability to distinguish the models.
compared to the size of the model dependence of the unfolding.
Figure 3 shows that some of the HFS variables in the input features may be able to distinguish the two models directly. Figure 7 shows the results of running the unfolding procedure using the HFS \(\eta\) distribution for model discrimination, where the model dependence is significantly larger, compared to the neural network output. The shape is distorted, including deviations up to 20%, when the response matrix from the other simulation was used in the unfolding. Since the modeling uncertainty is comparable to or larger than the size of the effect we are trying to probe, such observables are much less useful than the neural network output for the inference task.
## 5 Conclusions and Outlook
Unfolded differential cross section measurements are a standard approach to making data available for downstream inference tasks. While some measurements can be used for a
Figure 7: Results of testing the model dependence of the unfolding the **HFS \(\eta\)** distribution by varying the response matrix used in the unfolding. The top row normalizes the unfolded distribution with the true distribution from the same simulation, testing the unfolding closure (points) and unfolding model dependence (red histogram). The bottom row normalizes the unfolded distribution with the true distribution from the other simulation, showing the ability to distinguish the models.
variety of tasks, often there is a single goal that motivates the result. In these cases, we advocate designing observables that are tailored to the physics goal using machine learning. The output of a neural network trained specifically for the downstream task is an observable, and its differential cross section likely contains more information than classical observables. We have proposed a new loss function for training the network so that the resulting observable can be well measured. We anticipate that our new approach could be useful for a variety of scientific goals, including measurements of fundamental parameters like the top quark mass and tuning Monte Carlo event generators.
There are a number of ways this approach could be extended in the future. We require that the observable have the same definition at particle level and detector level, although additional detector-level information such as resolutions may be useful to improve precision. A complementary strategy would be to use all the available information to unfold the full phase space [10]. Such techniques may improve the precision by integrating all of the relevant information at detector level, but by being broad they may compromise sensitivity to a specific goal, and they place no direct constraints on measurability. It would be interesting to compare our tailored approach to full phase space methods in the future.
## Code availability
The code in this work can be found in: [https://github.com/owen234/designer-obs-paper](https://github.com/owen234/designer-obs-paper).
## Acknowledgments
We thank Miguel Arratia and Daniel Britzger for useful discussions and feedback on the manuscript. Additionally, we thank our colleagues from the H1 Collaboration for allowing us to use the simulated MC event samples. Thanks to DESY-IT and the MPI für Physik for providing some computing infrastructure and supporting the data preservation project of the HERA experiments. B.N. was supported by the Department of Energy, Office of Science under contract number DE-AC02-05CH11231.
|
2306.04813
|
Human in the Loop Novelty Generation
|
Developing artificial intelligence approaches to overcome novel, unexpected
circumstances is a difficult, unsolved problem. One challenge to advancing the
state of the art in novelty accommodation is the availability of testing
frameworks for evaluating performance against novel situations. Recent novelty
generation approaches in domains such as Science Birds and Monopoly leverage
human domain expertise during the search to discover new novelties. Such
approaches introduce human guidance before novelty generation occurs and yield
novelties that can be directly loaded into a simulated environment. We
introduce a new approach to novelty generation that uses abstract models of
environments (including simulation domains) that do not require
domain-dependent human guidance to generate novelties. A key result is a
larger, often infinite space of novelties capable of being generated, with the
trade-off being a requirement to involve human guidance to select and filter
novelties post generation. We describe our Human-in-the-Loop novelty generation
process using our open-source novelty generation library to test baseline
agents in two domains: Monopoly and VizDoom. Our results show the
Human-in-the-Loop method enables users to develop, implement, test, and revise
novelties within 4 hours for both Monopoly and VizDoom domains.
|
Mark Bercasio, Allison Wong, Dustin Dannenhauer
|
2023-06-07T22:30:27Z
|
http://arxiv.org/abs/2306.04813v2
|
# Human in the Loop Novelty Generation
###### Abstract
Developing artificial intelligence approaches to overcome novel, unexpected circumstances is a difficult, unsolved problem. One challenge to advancing the state of the art in novelty accommodation is the availability of testing frameworks for evaluating performance against novel situations. Recent novelty generation approaches in domains such as Science Birds and Monopoly leverage human domain expertise during the search to discover new novelties. Such approaches introduce human guidance before novelty generation occurs and yield novelties that can be directly loaded into a simulated environment. We introduce a new approach to novelty generation that uses abstract models of environments (including simulation domains) that do not require domain-dependent human guidance to generate novelties. A key result is a larger, often infinite space of novelties capable of being generated, with the trade-off being a requirement to involve human guidance to select and filter novelties post generation. We describe our Human-in-the-Loop novelty generation process using our open-source novelty generation library to test baseline agents in two domains: Monopoly and VizDoom. Our results show the Human-in-the-Loop method enables users to develop, implement, test, and revise novelties within 4 hours for both Monopoly and VizDoom domains.
## 1 Introduction
There has been increased interest in artificial intelligence (AI) approaches capable of detecting, characterizing, and accommodating novel situations: situations that violate implicit or explicit assumptions about agents [1]. AI systems trained to perform tasks in a particular environment rely on large amounts of training data or numerous interactions with an environment simulator. When the environment changes in unexpected ways, these AI approaches typically fail, and this is especially true when transitioning an AI system into real-world settings.
Developing AI systems capable of handling novel situations requires new methods and procedures beyond those that currently exist. Current effort is spent on building AI solutions that can solve the original task at hand, achieving robust performance when no novelty is present. However, it is speculated that a range of additional capabilities is needed in AI systems to accommodate novelty, beyond what is needed to learn to solve a task by itself.
In most software development practices, tests are created by human users; consequently, automated test-driven development is gaining interest. Also, the verification and validation community has developed processes to obtain requirements of an AI system from the stakeholders of that system's development in order to understand exactly what the system needs to be able to achieve to be useful. In all these cases, the tests used to evaluate an AI's ability come from humans and the task specification encoded in software, and these test cases are known ahead of time. We have clear approaches to testing AI systems in this manner.
Current approaches use procedural generation to create novelty with the help of a human expert experienced in the domain. This approach has a key advantage in generating more scenarios than hand-written test cases. It remains limited, however, because the generators are guided primarily by human expert knowledge about the domain, and thus the space of novelties they can generate is limited and biased by the scope of the human's imagination. Humans generally have good intuition about bad novelties, i.e., those that are irrelevant, unnoticeable, or uncontrollable [1]. Having a human expert build targeted, automatic scenario generators for domains in which they have expert knowledge helps avoid generating bad or irrelevant novelties.
We developed a new human-in-the-loop novelty generation process that combines human domain experts' intuition about what makes a good novelty with a new automatic novelty generation process that improves on the efficiency of previous solutions. We briefly introduce our novelty generator in the next section. Interested readers may refer to [15] for more details on that system. Additionally, our novelty generator is available on GitHub1.
Footnote 1: [https://github.com/Parallax-Advanced-Research/noveltygen](https://github.com/Parallax-Advanced-Research/noveltygen)
The primary contributions of this paper include:
1. A 6-step process by which a human domain expert with developer skills can leverage our novelty generator tool to save time in generating novelties, and consider a greater range of novelties with a reduced bias from human imagination.
2. An experimental setup showing that novelties generated in this way are capable of challenging agents across multiple environments.
3. Experimental results from two domains: Monopoly and VizDoom
4. A qualitative review of the experience for human developers to use our novelty generator.
Section 2 describes our approach in the context of other similar novelty generation approaches. In Section 3 we briefly cover our novelty generation tool, including how to run it and what the output looks like. In Section 4 we describe the human-in-the-loop process where a human uses the novelty generator. In Section 5 we describe our experimental setup, where we used the novelty generator to generate novelties in two domains: Monopoly and VizDoom. Section 6 presents results from the empirical evaluations of the novelties, while Section 7 describes a qualitative review of the pros and cons of the human-in-the-loop process from our developers who followed the process to generate the novelties described in Sections 5 and 6. Finally, we conclude the paper with a discussion of future work.
## 2 Related Work
Origins of automatic novel environment generators can be traced back to Metagame [1] and EGGG (Orwant 2000). Both were designed to find novel variations of chess-like games. More recent work in game design research decoupled the design of a game space from the exploration of that space to find interesting games [14]. In their work, the language used to design game spaces was large and flexible compared to the prior chess-like games, but still incorporated many assumptions, such as the existence of a rectangular play space. Even newer work on algorithmic procedural content generation for games focuses on bounded spaces such as levels, characters and quests [15]. In these systems some domain knowledge is encoded by humans to guide the search towards producing valuable artifacts.
Current benchmark domains to evaluate AI agents' novelty-handling capability are designed in close alignment with simulated environments. NovelGridworlds [1] is an OpenAI Gym environment that leverages human guidance on grid-based environments. Another grid-based novelty generator uses an ontology of novelties related to sequential decision making, such as distinguishing between object and action novelties [1]. Science Birds (an Angry Birds AI domain; [20]) implements novelties as modifications to the C# codebase. These range from changing physical parameters and colours of existing game objects to more difficult modifications such as introducing a new class of objects (e.g. hostile external agents that hinder the agent). GNOME (Generating Novelty in Open-World Multi-agent Environments; [13]) is a simulation platform developed to test an AI agent's response to novelty in the classic Monopoly board game, using a library of novelty-generating methods. An example of this is replacing existing game event functions (like a player passing GO and collecting $200) with human-inspired functions. Functions maintain a similar signature, enabling them to be easily swapped with the original function (such as a function that takes a wealth tax from the player).
## 3 Novelty Generation Tool
Our approach to novelty generation is originally inspired by Wiggins' 2006 paper on the Creative Systems Framework. Novelties are created using two types of transformations: R-transformations that change the space of possible states and state transitions, and T-transformations that modify the search method for generating starting states in that environment.
R-transformations2 modify a domain file describing the state-transition system of an environment. This includes possible states with symbolic constructs like object type hierarchies, relations between object types, object properties, and fluents. Changes to these structures in effect change the state space. Domains also define the set of state transitions that are valid in an environment using action, event, and process models. Changes to these models change the transition space between states. All changes of this nature (either to the state space or transition space) fall under R-transformations.
Footnote 2: R and T refer to Wiggins’ Creative Systems Framework theory rule sets, otherwise ‘R’ and ‘T’ are simply variable names.
The environment-transformation based novelty generator that we present here is based on domain-independent environment models. Using a formal language inspired by the AI planning community's Planning Domain Definition Language (PDDL), we can generate novelties for any domain capable of being modelled with the language. For the full list of constructs that our Transformation Simulation Abstraction Language (TSAL) supports, refer to [1]. A key benefit of our novelty generator is that it is domain-independent, since it can generate novelties for any environment modeled in the TSAL language; however, the domain-independent novelties it produces need to be guided. We have proposed domain-independent heuristics for filtering novelties on three conditions:
**Relevant:** The novelty affects the agent's performance on the task.
**Noticeable:** The novelty causes different observations for the agent.
**Controllable:** The novelty rewards agent policies differently.
In [1] we show that using a simple re-planning agent and a simulated version of a TSAL environment, novelties can be automatically filtered out based on relevance, with the rest left for future work.
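As a rough illustration of the relevance condition, the sketch below keeps a novelty only if a re-planning agent's average performance changes noticeably between the base and the transformed environment; `base_env`, `novel_env`, `agent`, and `run_episode` are placeholders rather than parts of the noveltygen API, and the threshold is arbitrary.

```
def is_relevant(base_env, novel_env, agent, run_episode, n_episodes=50, threshold=0.05):
    """Relevance filter sketch: a novelty is relevant if it shifts the agent's
    mean episode score by more than `threshold`."""
    def mean_score(env):
        return sum(run_episode(env, agent) for _ in range(n_episodes)) / n_episodes

    return abs(mean_score(novel_env) - mean_score(base_env)) > threshold
```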
Our novelty generator is implemented in python, available on GitHub, and comes with three domains already modeled in the TSAL language. Two of these domains are described in Section 5.
The caveat with having a general, domain-independent novelty generator is ensuring only "high-quality" or "interesting" novelties are found. To alleviate this problem we introduce the approach in this paper where a human software developer works alongside the novelty generator. The effect is that the human is able to spend less time on the creative brainstorming phase and instead focus their energy on the "evaluation" phase. This also reduces the bias from only producing novelties along certain dimensions (such as the
human getting fixated on novel kinds of objects instead of action/event properties).
```
from noveltygen.noveltygenerator import [...] as NG  # imported name elided in the original listing

domain_fn = "domains/blocksworld/domain.tsal"
ng = NG(domain_file=domain_fn)

# choose your RTransformations
r_trans = [RTransformation.ADD_EFFECT,
           RTransformation.ADD_PRECONDITION,
           RTransformation.REMOVE_EFFECT]
# ... see RTransformation.py for full list
```
**Evaluation Setup** To evaluate the HITL novelty generation process, each of the generated novelties was evaluated by its viability, or impact on the baseline agent. This was done by comparing win rates of agents in pre- and post-novelty tournaments. The average wins and standard deviation of the pre-novelty agent performance are shown in Table 1.
The three levels of viability used to evaluate the novelties are defined below:
**Low Viability:** 0.66% < change in Agent 1 win rate < 4%
**Medium Viability:** 4% < change in Agent 1 win rate < 9%
**High Viability:** change in Agent 1 win rate > 9%
The change in win rate used to assess viability of a novelty could either be an increase or decrease in win rate. In other words, a novelty that alters the baseline agent's performance significantly enough could be considered viable.
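A small helper capturing these thresholds is sketched below; the 0.66% lower bound corresponds to the pre-novelty win-rate standard deviation of Agent 1 from Table 1, and the function name is ours.

```
def viability(pre_win_rate, post_win_rate):
    """Classify a novelty by the absolute change in Agent 1's win rate (in %)."""
    delta = abs(post_win_rate - pre_win_rate)
    if delta > 9.0:
        return "high"
    if delta > 4.0:
        return "medium"
    if delta > 0.66:
        return "low"
    return "not viable"


# Example: pre-novelty win rate 69.43% (6,943 / 10,000 from Table 1),
# post-novelty 49.2% -> "high"
print(viability(69.43, 49.2))
```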
For each experiment, we simulated three tournaments with 10,000 instances (games) per tournament. To keep results consistent, we used meta seeds 5, 10, 999 as inputs for the tournaments. The novelties developed using the HITL method for each novelty level are summarized in Table 2.
### VizDoom Domain
VizDoom is an open-source project used as a research platform for RL using the popular first-person shooter game, Doom (1993), as the environment. This project allows researchers and developers to create and test learning and decision-making algorithms through the modular simulation environment. We developed on top of Washington State University's (WSU) VizDoom Novelty Generator GitHub4 to inject our custom novelties and conduct experiments on RL agents.
Footnote 4: [https://github.com/holderlb/WSU-SAILON-NG/tree/master/WSU-Portable-Generator](https://github.com/holderlb/WSU-SAILON-NG/tree/master/WSU-Portable-Generator)
**Agent Setup** WSU provided their state-of-the-art (SOTA) player agent for testing, which we used as our baseline agent. While we are unable to go over the specifics of the baseline player agent's RL logic, we are able to provide a general overview of its functionality.
The agent has eight actions: forward, backward, left, right, turn left 45 degrees, turn right 45 degrees, shoot, and no action. For each turn, a feature vector of the current state of the simulation environment is sent to the agent; including general information about enemies, items, and the player agent. In response, the agent provides an action to be performed. Performance is defined as the amount of time left in the episode divided by the maximum time for the episode.
Training for the agent is done separately from testing. The user is free to indicate the number of training instances the agent should be trained on. When training is complete, the current model for the agent is saved in memory. We have pre-trained the SOTA agent for 100 pre-novelty instances before using it for novelty testing.
**Evaluation Setup** The VizDoom domain consists of a fork of the WSU VizDoom GitHub5, with a change of four total enemies and four of each item (health, ammo, trap) spawned for every instance. We conducted one experiment with 1,000 instances for pre-novelty and 1,000 instances for each post-novelty. As a control and point of comparison, the SOTA agent was able to win 991 out of 1,000 pre-novelty games, a win rate of 99.1 percent.
Footnote 5: [https://github.com/mbercasio94/WSU-SAILON-NG](https://github.com/mbercasio94/WSU-SAILON-NG)
The novelties developed using the HITL method for each novelty level are summarized in Table 3.
## 6 Results
### Novelty Generation Setup
Excluding the TSAL file creation explained in Step 1 of the HITL novelty creation process, the total time to find, implement, and test a new novelty ranges from 2 to 4 hours. Step 1 is excluded from this range since it varies depending on user expertise and the target domain's complexity.
### Monopoly
The performance results of Agent 1 for all conducted novelties are shown in Table 4. The results indicate the majority of
\begin{table}
\begin{tabular}{c|c|c} \hline \hline & **Agent 1** & **Default Background Agent** \\ \hline \hline Average Wins & 6943 & 1019 \\ \hline Raw Std Dev & 45.66 & 30.07 \\ \hline \% Win Rate Std Dev & 0.66 & 2.96 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Results of pre-novelty agent performance over 20 seeds of 10,000 games each.
\begin{table}
\begin{tabular}{c|l} \hline \hline
**ID** & **Description** \\ \hline \hline
0 & Players gain \$1,000 when passing GO \\ \hline
1 & Players lose \$500 when passing GO \\ \hline
2 & Players can only move when dice values \\ are identical \\ \hline
3 & Players gain \$25 at the end of their turn \\ \hline
4 & Player must be in jail to receive a property \\ & after trade is complete \\ \hline
5 & Player must be in jail to receive cash after \\ & trade is complete \\ \hline
6 & Property is automatically sold back to the \\ & bank after rent is paid for that property \\ \hline
7 & Rent is not paid unless the owner of the \\ & space is in jail \\ \hline
8 & The player cannot collect money if they are \\ & the last bidder to an auction \\ \hline \hline \end{tabular}
\end{table}
Table 2: Summary of the Monopoly novelties developed to verify and validate the novelty generation process.
implemented novelties are of high viability, meaning a significant difference of \(>\) 9% in the agent performance. Table 4 contains a summary of the agent's change in win rate over 10,000 games for each novelty.
### VizDoom
The performance results of the SOTA agent for all conducted novelties are shown in Table 5. Since the average SOTA pre-novelty win rate was 99.1%, any novelty below a 90% win rate was considered high viability. Table 5 contains a summary of the agent's win rate over 1,000 games for each novelty.
### Discussion
During our experiments, most novelties produced with the HITL novelty generation process were of medium to high viability, reinforcing our hypothesis that the generated novelties are typically sufficient to thoroughly test various agents. In situations where a generated novelty was categorized as low viability, we were able to regenerate, implement, and retest new novelties in less than 4 hours. Our HITL method is an automated and time-efficient novelty generation process compared to current human-only novelty generation processes.
## 7 Qualitative Review of Human Experience during Novelty Generation
In this section we go over a HITL's initial thoughts on each step of the novelty generation process. This provides a look at the possible advantages and limitations of our current model from the perspective of a first-time user. Table 6 provides an overview of the time it takes to complete each step for both the Monopoly and VizDoom domains.
**Step 1: Construct domain specific TSAL files (Difficulty - High)** For those unfamiliar with PDDL or declarative languages in general, it might take several iterations and possible feedback from an expert in order to properly model the domain. Abstraction of a domain must account for certain nuances that might not be so obvious on the initial iteration.
In the case of the Monopoly domain, an extra turn from dice rolls was modelled in the TSAL file even though it was not part of the actual simulation logic, thus resulting in extra turn novelties that deviated too much from the original domain.
**Step 2: Run novelty generator using TSAL domain file (Difficulty - Low)** Slight training by an expert, or a README, is required in order to know which lines of novelty generator code need to be adjusted to target specific novelty types.
**Step 3: Identify possible novelties from generated files (Difficulty - Low)** As the HITL becomes more familiar with the domain, they should have a better understanding of the novelties that are likely to cause significant impact on agent performance. For example, the HITL may find a pattern where adjusting the preconditions of a specific event has higher likelihood of producing a viable novelty.
**Step 4: Implement the novelty (Difficulty - Medium)** Creating novelty injection code depends on the modularity of the domain - how easy is it for a developer to adjust the rules and environment of the simulation space? In addition, difficulty depends on the type of novelty being implemented. For example, novelties that affect agent goals may require further implementation work when compared to a novelty that simply changes an event-based rule of the world. Similar to step
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline \multicolumn{3}{|c|}{**Post-Novelty Agent Absolute Win Rate**} \\ \cline{2-5}
**ID** & Seed = 5 & Seed = 10 & Seed = 999 & **Viability** \\ \hline \hline
0 & -20.21\% & -19.51\% & -22.05\% & **High** \\ \hline
1 & 5.14\% & 4.55\% & 4.04\% & Medium \\ \hline
2 & -28.61\% & -27.11\% & -26.90\% & **High** \\ \hline
3 & -26.71\% & -26.04\% & -28.01\% & **High** \\ \hline
4 & -20.09\% & -19.35\% & -18.97\% & **High** \\ \hline
5 & -15.59\% & -15.17\% & -16.31\% & **High** \\ \hline
6 & -28.06\% & -30.33\% & -28.42\% & **High** \\ \hline
7 & -99.90\% & -99.87\% & -99.94\% & **High** \\ \hline
8 & -96.48\% & -95.99\% & -96.25\% & **High** \\ \hline
9 & 15.87\% & 14.75\% & 14.41\% & **High** \\ \hline
10 & -49.54\% & -48.39\% & -48.48\% & **High** \\ \hline
11 & -21.89\% & -20.41\% & -20.26\% & **High** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Viability of generated Monopoly novelties
\begin{table}
\begin{tabular}{c|c|c} \hline
**ID** & **Description** \\ \hline \hline
0 & Player loses 1 health point every 7 turns \\ \hline
1 & Player loses one ammo clip every 40 turns \\ \hline
2 & Player is turned 90 degrees every 3 turns \\ \hline
3 & After rotating, the player loses 2 health points \\ \hline
4 & After shifting positions, the player loses 1 health point \\ \hline
5 & After rotating, the player is turned 15 degrees to the left \\ \hline \hline \end{tabular}
\end{table}
Table 3: Summary of the VizDoom novelties developed to verify and validate the novelty generation process.
\begin{table}
\begin{tabular}{|c|c|c|} \hline **ID** & **SOTA Agent Win Rate** & **Viability** \\ \hline \hline
0 & 52.6\% & **High** \\ \hline
1 & 46.0\% & **High** \\ \hline
2 & 39.8\% & **High** \\ \hline
3 & 0.4\% & **High** \\ \hline
4 & 31.5\% & **High** \\ \hline
5 & 0.0\% & **High** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Viability of generated VizDoom novelties.
3, as the HITL familiarizes themselves with the codebase, implementation of further novelties should be relatively easier.
**Step 5: Test the novelty**
**(Difficulty - Low)** In terms of manual labor, a HITL will need to simply create a batch script in order to run experiments on the implemented novelties. Running experiments with more instances will take longer to complete, but results in a more accurate view of novelty viability. The threshold for the minimum change in performance required for a novelty to be deemed viable will depend on the judgement of the HITL.
**Step 6: Revise the novelty**
**(Difficulty - Low)** A HITL will need to simply change parameter(s) in the generated novelty file or novelty generator. As an example, for _Experiment 0_ (Players gain $1000 when passing GO), the value of $1000 chosen by the novelty generator may not cause a significant change in agent performance. A user may opt to manually change the value to $500 which may end up resulting in a more significant change in agent performance. Of course, some novelties will never be viable even with HITL revision. In that case, it is safe to discard the novelty and try a new one.
## 8 Conclusions and Future Work
We introduced a 6-step process by which a human with domain expertise can work alongside our open-source novelty generator to discover and implement novelties and to score AI approaches against them. Our novelty generator is domain-independent, as demonstrated by our evaluation in two substantially different domains: Monopoly and VizDoom. Any environment that is defined in the Planning Domain Definition Language (PDDL) can be easily modified to work with our novelty generator, since our TSAL language includes most PDDL constructs.
We hope to apply our novelty generator to more environments and develop a more complete testbed environment where agents can be tested directly against the state-transition system, without requiring human developer effort to implement novelties in an external simulator. The TSAL language is expressive enough, and PDDL simulators already exist; however, at this time a TSAL simulator has not yet been built. A TSAL simulator would enable novelties produced by our method to be used in other frameworks like PDDLGym (Silver and Chitnis 2020), a testbed for reinforcement learning approaches in PDDL domains.
Additionally, leveraging human guidance would be easier with a better user interface. Currently, our system writes novelties to a plain text file that a human can scan line-by-line. An improved user interface should offer features such as syntax highlighting and other visual indicators to aid the human in picking out good novelties, which we leave for future work.
While the results demonstrated our generator's effectiveness in these areas, a significant limitation arises from its restricted scope. Our current model may not generalize well to other domains or interdisciplinary contexts. By limiting the generator's capabilities to only two domains and two novelty levels, we may have inadvertently curtailed its potential to create groundbreaking ideas across a broader range of disciplines. Future research should focus on expanding the generator's reach to encompass multiple domains, thus allowing
\begin{table}
\begin{tabular}{|p{113.8pt}|c|c|} \hline
**HITL Novelty Generation Step Description** & \multicolumn{2}{c|}{**Avg. Minutes to Complete Step**} \\ \hline \hline & **VizDoom** & **Monopoly** \\ \hline
**Step 1: Construct TSAL file** specific to target domain & Variable & Variable \\ \hline
**Step 2: Run novelty generator using TSAL domain file.** This process includes editing of the novelty generator parameters to target specific novelty levels. Novelty file generation - target to generate at least 100 files. & 5 minutes & 5 minutes \\ \hline
**Step 3: Identify possible novelties from generated files.** Manually parse through generated files for novelties that may prove viable for the target novelty level. Post-processing of certain conditions and effects sometimes required. & 30 minutes per novelty & 30 minutes per novelty \\ \hline
**Step 4: Implement the novelty.** Time to implement a novelty depends on the level of that novelty. Initial implementation for a specific novelty level is slow (requires planning of software architecture). Subsequent implementations significantly faster for already created novelty levels. & 45 minutes for the first novelty, 20 minutes for subsequent novelties & 60 minutes for the first novelty, 20 minutes for subsequent novelties \\ \hline
**Step 5: Test the novelty.** Run experiment to test for novelty viability. Dependent on computation speed of the simulation machine. & 30-60 minutes & 1-2 hours \\ \hline
**Step 6: Revise the novelty.** Manually revise novelty values as needed. May also discard novelty if no changes work. & 5 minutes & 5 minutes \\ \hline \hline & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \end{tabular}
\end{table}
Table 6: Novelty Generator Experiment Results with HITL completion time
for a more comprehensive exploration of novel concepts and enhancing its applicability in various fields.
**Acknowledgements:** This work was supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR001121C0236. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Defense Advanced Research Projects Agency (DARPA).
|
2301.11507
|
Semi-Parametric Video-Grounded Text Generation
|
Efficient video-language modeling should consider the computational cost
because of a large, sometimes intractable, number of video frames. Parametric
approaches such as the attention mechanism may not be ideal since its
computational cost quadratically increases as the video length increases.
Rather, previous studies have relied on offline feature extraction or frame
sampling to represent the video efficiently, focusing on cross-modal modeling
in short video clips. In this paper, we propose a semi-parametric
video-grounded text generation model, SeViT, a novel perspective on scalable
video-language modeling toward long untrimmed videos. Treating a video as an
external data store, SeViT includes a non-parametric frame retriever to select
a few query-relevant frames from the data store for a given query and a
parametric generator to effectively aggregate the frames with the query via
late fusion methods. Experimental results demonstrate our method has a
significant advantage in longer videos and causal video understanding.
Moreover, our model achieves the new state of the art on four video-language
datasets, iVQA (+4.8), Next-QA (+6.9), and Activitynet-QA (+4.8) in accuracy,
and MSRVTT-Caption (+3.6) in CIDEr.
|
Sungdong Kim, Jin-Hwa Kim, Jiyoung Lee, Minjoon Seo
|
2023-01-27T03:00:43Z
|
http://arxiv.org/abs/2301.11507v1
|
# Semi-Parametric Video-Grounded Text Generation
###### Abstract
Efficient video-language modeling should consider the computational cost because of a large, sometimes intractable, number of video frames. Parametric approaches such as the attention mechanism may not be ideal since its computational cost quadratically increases as the video length increases. Rather, previous studies have relied on offline feature extraction or frame sampling to represent the video efficiently, focusing on cross-modal modeling in short video clips. In this paper, we propose a semi-parametric video-grounded text generation model, SeViT, a novel perspective on scalable video-language modeling toward long untrimmed videos. Treating a video as an external data store, SeViT includes a non-parametric frame retriever to select a few query-relevant frames from the data store for a given query and a parametric generator to effectively aggregate the frames with the query via late fusion methods. Experimental results demonstrate our method has a significant advantage in longer videos and causal video understanding. Moreover, our model achieves the new state of the art on four video-language datasets, iVQA (+4.8), Next-QA (+6.9), and Activitynet-QA (+4.8) in accuracy, and MSRVTT-Caption (+3.6) in CIDEr.
## 1 Introduction
Recently, there has been the impressive success of vision-language models (Lu et al., 2019; Radford et al., 2021; Li et al., 2022; Alayrac et al., 2022) demonstrating a remarkable transferability on video-language tasks including video retrieval, video captioning, and video question answering (Video QA). Considering video input's spatio-temporal aspects, it is often more challenging to process than other modalities, especially combining video and language modalities. Moreover, modeling video information requires heavy computations since it comprises lengthy image sequences (frames). Conventionally, many video-language works have relied on the pre-trained vision or video encoder as an offline feature extractor to represent video densely but efficiently (Sun et al., 2019; Li et al., 2020; Yang et al., 2021).
A line of research has shown the effectiveness of sparse video representation for video-language modeling (Lei et al., 2021). The sparse video representation approximates a video with sparsely sampled frames instead of all frames while allowing gradient updates of the visual encoder with computationally feasible implementation. Lei et al. (2021) argue that randomly selected sparse frames work well on various video-language tasks even with very few frames, _e.g_., 1-20 frames for 3-180 seconds clips. Recent video-language studies outperform the performance of the models trained in the single task by massive video-language pre-training, employing the sparse frame paradigm (_e.g_., uniform sampling) (Zellers et al., 2021; Wang et al., 2022; Zellers et al., 2022; Yang et al., 2022).
However, these studies overlook the limitations of the sparse video representation based on naive frame sampling. The
Figure 1: Overview of semi-parametric video-grounded text generation. Treating an untrimmed long video as an external data store, it first retrieves top-\(k\) relevant frames from the data store with a given input query. Then, a vision-language (VL) transformer encodes each frame and the input independently and generates textual output by performing late fusion over top-\(k\) frames.
pre-trained models have been tested only on benchmarks with short video clips that are usually less than a minute. We are curious whether the benefits of sparse frame sampling are still valid if the length of the source video gets longer. In particular, we hypothesize that a model relying on a few sampled frames might fail on long untrimmed videos, since the scene changes frequently and a longer video enlarges the population of frames to sample from. Retaining performance then requires more frames to be sampled, resulting in an increase in the computational cost, _i.e_., an efficiency-accuracy trade-off.
On the other hand, recent semi-parametric NLP models show success on knowledge-intensive tasks having a similar challenge regarding large search space of external knowledge (Lewis et al., 2020; Izacard and Grave, 2021). The semi-parametric models often consist of a non-parametric retriever and a parametric generator. The non-parametric retrieval drastically reduces the search space of large knowledge sources (millions of text such as Wikipedia) to a manageable size, _e.g_., less than 100, allowing the parametric model to ground relevant knowledge for a given query effectively. Also, it provides controllability of the external knowledge and explainability over model decisions with their provenance. Motivated by their success, we explore _semi-parametric video-grounded text generation_ as depicted in Figure 1, another way for the scalable sparse video representation toward long videos with minutes and even hours.
In this paper, we propose the **S**emi-parametric **V**ideo-grounded **T**ext generation model (SeViT) to take benefits from both efficiency of the sparse frame paradigm and scalability over long-form videos. SeViT consists of a non-parametric frame retriever and a parametric video-grounded text generator. In particular, we treat a video as an external data store and perform cross-modal retrieval to get top-\(k\) query-relevant frames from the data store with a given query. The video-grounded text generator independently encodes each frame with the query. Then, late fusion methods are followed to produce the final output by aggregating the separately encoded query-aware frames, _e.g_., marginalization in the final decoding step or cross-attention in the decoder layer (Lewis et al., 2020; Izacard and Grave, 2021).
In our experiments, SeViT achieves competitive or even better performance on five Video QA (Xu et al., 2017; Yu et al., 2019; Yang et al., 2021; Xiao et al., 2021) and two video captioning (Chen and Dolan, 2011; Xu et al., 2016) tasks compared to previous baseline models, which are massively pre-trained on video-text pairs, without any video-language pre-training. Our analysis demonstrates that SeViT has a significant advantage on longer videos and on questions requiring causal video understanding. In particular, SeViT achieves new state-of-the-art performance on three Video QA benchmarks with relatively long source videos, iVQA, Next-QA, and Activitynet-QA, improving accuracy by 4.8-6.9% points, and on one video captioning benchmark, MSRVTT-Caption, improving CIDEr by 3.6% points.
Our contributions are three folds:
* To the best of our knowledge, SeViT is the first semi-parametric architecture in the video-language domain, treating a video as an external data store.
* We demonstrate that SeViT based on retrieval-augmented generation shows strong performance in long videos and causal video understanding compared to its baseline relying on frame sampling.
* SeViT achieves the new state of the art on three Video QA with longer videos, iVQA, Next-QA, Activitynet-QA, and one video captioning, MSRVTT-Caption without any video-language pre-training.
## 2 Related Work
### Video-Language Models
Previous video-language models (Sun et al., 2019; Li et al., 2020; Yang et al., 2021) often rely on offline feature extraction leveraging pre-trained 2D/3D vision encoders such as ResNet (He et al., 2016), S3D (Xie et al., 2018) and SlowFast (Feichtenhofer et al., 2019) to efficiently represent video frames, while adopting pre-trained language models like BERT or RoBERTa (Devlin et al., 2019; Liu et al., 2019) for the textual representations of subtitles or captions. Recently, some studies adopt end-to-end trainable video-specific transformer (Liu et al., 2022; Arnab et al., 2021) for video captioning tasks (Lin et al., 2022; Seo et al., 2022).
Contrary to the models relying on the feature extraction for densely sampled frames, Lei et al. (2021) propose ClipBERT representing a video with sparsely sampled frames. It allows end-to-end training of the pre-trained vision and text encoders, leading to comprehensive performances on video-language downstream tasks, _i.e_., text-to-video retrieval and Video QA. Recent video-language studies use a few uniformly sampled frames per video to pre-training video-language models (Zellers et al., 2021; Wang et al., 2022; Zellers et al., 2022; Yang et al., 2022). They leverage millions of video-text pairs utilizing automatically generated subtitles via automatic speech recognition API for their pre-training procedure (Miech et al., 2019; Bain et al., 2021). The massive pre-training on the large video-text pairs boosts the performance of downstream video-language tasks, achieving state-of-the-art. Our approach shares the sparse frame strategy, but is more scalable toward long videos. Also, we focus on fine-tuning our semi-parametric model rather than video-language pre-training.
### Semi-Parametric Language Models
Semi-parametric language models show impressive success on many knowledge-intensive NLP tasks such as open-domain question answering and fact verification (Guu et al., 2020; Lewis et al., 2020; Izacard and Grave, 2021; Izacard et al., 2022), or language modeling (Khandelwal et al., 2019; Borgeaud et al., 2021). The semi-parametric model often consists of a non-parametric module, _i.e_., a retriever, and a parametric generator. This approach assumes a large external knowledge source such as Wikipedia or other large text corpora. The non-parametric retriever returns the top-\(k\) relevant pieces of knowledge for a given input from the large data store. The retrieval is often based on a maximum inner product search (MIPS) within pre-computed vectors of the data store (Johnson et al., 2019). Then, the parametric generator effectively aggregates the knowledge with the given input. The non-parametric module has several useful properties like controllability, explainability, and debuggability by providing the origin of model decisions. Meanwhile, the parametric model provides better empirical performance than the non-parametric model. Combining the best of both worlds, semi-parametric architectures have been adopted, to a limited extent, for protein structure prediction (Jumper et al., 2021), image generation (Blattmann et al., 2022), and image-text QA (Chen et al., 2022; Lin and Byrne, 2022). Inspired by these studies, we adapt the retrieval-augmented generation framework to the video-language domain for the first time.
### Informative Frame Selection
Focusing on the fact that a video contains many redundant and uninformative frames, many works have tried to select informative frames from the video (Chen et al., 2018; Yu et al., 2019). Chen et al. (2018) introduce informative frame selection with a reinforcement learning algorithm for the video captioning task. Yu et al. (2019) show that informative frame selection leveraging an off-the-shelf action proposal network is effective for Video QA with long untrimmed videos. Dense video captioning (Krishna et al., 2017; Zhou et al., 2018) also contains a frame proposal procedure, but many works evaluate their models with ground-truth proposals (Seo et al., 2022). On the other hand, Lei et al. (2018) introduce TVQA, which requires temporal localization for answering questions in the TV show domain. However, it often needs relevant segment selection with start and end positions, requiring additional consideration of the corresponding subtitles, _i.e_., multi-channel tasks. Moreover, questions containing "before" and "after" are difficult to handle with query-based retrieval because the segment needed to infer the answer and the segment that corresponds to the query are often different. We hope our work will be extended with better frame segment selection in future work.
## 3 Method
In this section, we introduce SeViT, a **S**emi-parametric **V**ideo-grounded **T**ext generator. As illustrated in Figure 2, it includes informative frame selection leveraging cross-modal retrieval and effective late fusion methods such as
Figure 2: Detailed illustration for mechanism of SeViT. 1) It first retrieves top-\(k\) query-relevant frames from a video via maximum inner product search (MIPS) between a query vector and (pre-computed) \(|V|\) frame vectors, where \(k\ll|V|\). 2) Each frame is encoded with the query \(q\) independently by the encoder of VGT-Generator. It produces \(k\) query-aware representations. 3) We explore two late fusion methods, 3-(a) Marginalization (Lewis et al., 2020) and 3-(b) Fusion-in-Decoder (Izacard and Grave, 2021) to produce the final output \(\hat{a}\) by aggregating the \(k\) query-aware frames in the decoder.
marginalization and fusion-in-decoder for video-grounded text generation. We start by defining task formulation and then explain each method and training details.
### Overview: Video-Grounded Text Generation
Let \(V=\{f_{1},f_{2},...,f_{|V|}\}\) be a video clip consisting of \(|V|\) number of sequential frames and \(q\) be a textual query. Video-grounded text generation is a task that produces the textual output \(\hat{a}\) conditioned on \(V\) and \(q\), _i.e._, \(p(a\mid V,q)\). Video captioning and video question answering (Video QA) are popular examples of video-grounded text generation. Basically, we inherit the sparse frame paradigm (Lei et al., 2021) representing a video with sparsely selected frames \(V_{k}\subset V\) to approximate a video \(V\) where \(|V_{k}|=k\) and \(k\ll|V|\). However, in contrast to previous approaches that uniformly select \(V_{k}\), we propose a semi-parametric model which dynamically retrieves the \(k\) query-relevant frames using a non-parametric retriever and aggregates them with a parametric generator.
### Frame Retriever
Conventionally, many studies perform random uniform sampling to choose \(V_{k}\)(Lei et al., 2021; Zellers et al., 2021; Wang et al., 2022; Yang et al., 2022). In other words, the random sampling selects \(k\) frames from \(V\) regardless of \(q\).
Contrary to the prior studies, we select the relevant frames conditioned on \(q\) by introducing a frame retriever. The frame retriever \(\eta\) takes \(V\) and \(q\), and returns a subset \(V_{k}\subset V\), modeled as: \(p_{\eta}(V_{k}\mid V,q)\). In particular, the frame retriever consists of two separated query and frame transformer encoders, \(E_{\text{Q}}\) and \(E_{\text{F}}\), respectively. Each encoder takes \(q\) and \(f\) to represent embedding vectors, respectively, where \(f_{i}\) is the \(i\)-th frame in \(V\). Then, the cosine similarity between the two vectors is used for the relevance score between \(q\) and \(f_{i}\) as follows:
\[\text{sim}(q,f_{i})=\frac{E_{\text{Q}}(q)^{T}E_{\text{F}}(f_{i})}{\|E_{\text{ Q}}(q)\|_{2}\|E_{\text{F}}(f_{i})\|_{2}} \tag{1}\]
where \(\|\cdot\|_{2}\) denotes the \(l\)-2 normalization. Frame retriever returns the top-\(k\) frames based on the relevance scores among \(q\) and all frames in \(V\) as follows:
\[V_{k}\leftarrow\operatorname*{argsort}_{f_{i}\in V}(\text{sim}(q,f_{i}))[:k]. \tag{2}\]
Also, we compute the relative importance of each selected frame \(f_{j}\in V_{k}\) by performing softmax over the cosine similarities (Equation 1) where \(\tau\) is a temperature hyper-parameter. The frame score is computed as follows:
\[p_{\eta}(f_{j}\mid q)=\frac{e^{\text{sim}(f_{j},q)/\tau}}{\sum_{k}e^{\text{ sim}(f_{k},q)/\tau}} \tag{3}\]
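Equations 1-3 amount to a cosine-similarity top-\(k\) search followed by a softmax over the selected frames; a minimal PyTorch sketch is shown below, where the function name and tensor shapes are our assumptions rather than the authors' implementation.

```
import torch
import torch.nn.functional as F

def retrieve_frames(query_vec, frame_vecs, k, tau=1.0):
    """Select the top-k query-relevant frames and their normalized scores.

    query_vec  -- (d,) query embedding from the query encoder E_Q
    frame_vecs -- (|V|, d) pre-computed frame embeddings from the frame encoder E_F
    Returns the indices of the top-k frames and p_eta(f_j | q) over those frames.
    """
    q = F.normalize(query_vec, dim=-1)
    f = F.normalize(frame_vecs, dim=-1)
    sim = f @ q                                       # cosine similarities, shape (|V|,)
    topk_sim, topk_idx = sim.topk(k)                  # Eq. 2: keep the k best frames
    frame_scores = F.softmax(topk_sim / tau, dim=-1)  # Eq. 3: relative frame importance
    return topk_idx, frame_scores
```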
### Video-Grounded Text Generator
A video-grounded text (VGT) generator \(\theta\) takes \(V_{k}\) and \(q\), and it outputs \(a\). For \(\theta\), we leverage the transformer encoder-decoder architecture taking both image and text together to generate textual output (Vaswani et al., 2017; Wang et al., 2021, 2022). Specifically, it first embeds each frame \(f_{j}\in V_{k}\) and a text query \(q\) with convolution blocks such as ResNet (He et al., 2016) and embedding matrix lookup corresponding to subwords from byte-pair encoding (Sennrich et al., 2015), respectively. Then, the frame patches and subword tokens vectors are combined and fed into the multi-modal transformer encoder to produce \(k\) query-aware frame representations. Beyond the single frame and query interaction, we investigate two effective late fusion methods, Marginalization (Lewis et al., 2020) and Fusion-in-Decoder (Izacard and Grave, 2021), to aggregate the independently encoded \(k\) query-aware frames for generating target text \(a\) in the decoder.
Marginalization (MAR). It integrates the \(k\) query-aware frames by marginalization (Lewis et al., 2020). First, the decoder produces \(k\) independent predictions. Then, it aggregates the \(k\) predictions by marginalizing over the frames, weighting each by the frame score \(p_{\eta}(f_{j}\mid q)\), resulting in the output \(a=\{w_{1},w_{2},...,w_{N}\}\), where \(w\) is a subword token of \(a\).
\[p(a\mid V,q)=\prod_{i}^{N}\sum_{f_{j}\in V_{k}}p_{\eta}(f_{j} \mid q)p_{\theta}(w_{i}\mid q,f_{j},w_{1:i-1}) \tag{4}\]
The marginalization procedure allows joint optimization of the cross-modal retriever and generator. In other words, it enables gradient updates of encoders in a cross-modal retriever to select query-relevant frames with Equation 4, while not requiring explicit supervision for ground-truth query-relevant frame pairs.
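As a minimal sketch of Equation 4 (assuming the \(k\) decoder runs have already produced per-frame token distributions; the tensor shapes and names below are illustrative):

```python
import torch

def marginalized_token_logprobs(per_frame_token_probs, frame_scores):
    """Late fusion by marginalization (Equation 4).

    per_frame_token_probs: (k, N, vocab) tensor of p_theta(w_i | q, f_j, w_{<i}),
        one decoder run per retrieved frame.
    frame_scores: (k,) tensor of p_eta(f_j | q) from the retriever (Equation 3).
    Returns log p(w_i | V, q) with shape (N, vocab).
    """
    # Weight each frame's token distribution by its frame score and
    # sum over frames, i.e., marginalize out the frame variable.
    mixed = (frame_scores[:, None, None] * per_frame_token_probs).sum(dim=0)
    return torch.log(mixed + 1e-9)
```

The negative log-likelihood of the target tokens under this mixed distribution is then the training loss, and gradients reach the query encoder through `frame_scores`.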
Fusion-in-Decoder (FiD). It relies purely on cross-attention between the hidden states of the encoder and decoder for the fusion (Izacard and Grave, 2021). Like the marginalization, it encodes \(q\) with each \(f_{j}\in V_{k}\) independently. However, it aggregates the encoder outputs jointly in the decoder with cross-attention as illustrated in Figure 2. Specifically, the encoder produces the hidden states \(H\in\mathbb{R}^{k\times L\times d}\), where \(L\) is the length of the combined frame and query outputs, and \(d\) is the hidden dimension. The \(k\) hidden outputs are concatenated into a single sequence of length \(k\cdot L\), _i.e._, reshaped to \(\mathbb{R}^{k\cdot L\times d}\), before being fed into the decoder. Finally, the decoder can consider the \(k\) query-aware frames at the same time for target text generation.
\[p(a\mid V,q)=\prod_{i}^{N}p_{\theta}(w_{i}\mid q,V_{k},w_{1:i-1}) \tag{5}\]
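A minimal sketch of the FiD fusion (Equation 5); `encoder` and `decoder` are placeholders for the multi-modal encoder and text decoder of the VGT-generator, and the call signatures are illustrative:

```python
import torch

def fid_forward(encoder, decoder, retrieved_frames, query_tokens, target_tokens):
    """Fusion-in-Decoder: encode each (frame, query) pair independently,
    then let the decoder cross-attend over the concatenated hidden states."""
    hidden = []
    for frame in retrieved_frames:            # k independent encoder passes
        h = encoder(frame, query_tokens)      # (L, d) query-aware frame states
        hidden.append(h)
    H = torch.cat(hidden, dim=0)              # (k*L, d) single long sequence

    # The decoder attends over all k query-aware frames at once (Equation 5).
    return decoder(target_tokens, encoder_hidden_states=H)
```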
### Training
The VGT-generator is trained by minimizing the negative log-likelihood of \(p(a\mid V,q)\) with either Equation 4 or 5. For efficient implementation, we pre-compute the frame vectors in advance for all training videos of the target dataset using the frame encoder of the frame retriever. Then, an efficient search algorithm, _i.e._, Maximum Inner Product Search (MIPS), can be applied, which becomes especially beneficial as the source video gets longer. We further describe some training techniques concerning the frame retriever.
Query-side Fine-Tuning. With the objective of Marginalization (Equation 4), we can jointly optimize the frame retriever and the VGT-generator. However, re-computation of frame vectors is required for all videos whenever we update the frame encoder of the frame retriever. Lewis et al. (2020); Izacard et al. (2022) report that updating the context encoder does not bring significant advantages for knowledge-intensive NLP tasks despite the heavy computation. Thus, we keep the frame encoder \(E_{\text{F}}\) fixed during training while only updating the query encoder \(E_{\text{Q}}\) for efficiency (Lewis et al., 2020; Izacard et al., 2022).
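A minimal sketch of this training setup, assuming frame vectors are stored as a single matrix that acts as the search index; the function names are illustrative, and a dedicated MIPS library could replace the plain inner-product search:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def precompute_frame_index(frame_encoder, all_frames):
    """Embed every frame once with the frozen frame encoder E_F."""
    frame_encoder.eval()
    return F.normalize(frame_encoder(all_frames), dim=-1)   # (num_frames, d)

def retrieve(query_encoder, frame_index, query_tokens, k):
    """Query-side retrieval: only the query encoder E_Q receives gradients."""
    q = F.normalize(query_encoder(query_tokens), dim=-1)    # (d,)
    scores = frame_index @ q                                 # inner products (MIPS)
    return scores.topk(k)                                    # top-k scores and indices
```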
Retriever Warm-up for FiD. On the other hand, joint training of the frame retriever with the objective of FiD is not straightforward. Even though Izacard et al. (2022) propose various methods for joint training of retriever and generator in the FiD fusion scheme, they did not work well in our preliminary experiments. Thus, we initialize the frame retriever with a retriever fine-tuned by Marginalization. Then, we fix the frame retriever while training the VGT-generator in the FiD manner. This is similar to FiD-RAG in Shuster et al. (2021).
Top-k Annealing. We promote diverse top-\(k\) frame selection with the fixed retriever in FiD training. Basically, we choose frames from \(V\) in order of decreasing relevance score (Equation 2). However, the VGT-generator might show lower generalization ability if trained with the same \(k\) frames from the fixed retriever for every training instance. Thus, we set a window size \(u\) and prevent the subset \(\{f_{i-u},f_{i-u+1},...,f_{i},...,f_{i+u-1},f_{i+u}\}\) from being selected once \(f_{i}\) is selected as one of the top-\(k\) frames. We gradually decrease \(u\) toward \(0\) at every training epoch, resulting in diverse top-\(k\) frame selections.
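A minimal sketch of the windowed selection, assuming frames are indexed in temporal order; `u0` and the per-epoch schedule are illustrative:

```python
import torch

def diverse_topk(sim_scores, k, u):
    """Greedy top-k selection that masks a +-u temporal window around each pick."""
    scores = sim_scores.clone()
    picked = []
    for _ in range(k):
        i = int(torch.argmax(scores))
        picked.append(i)
        lo, hi = max(0, i - u), min(len(scores), i + u + 1)
        scores[lo:hi] = float("-inf")   # block temporal neighbours of the pick
    return picked

# Annealing: shrink the window every epoch, e.g., u = max(0, u0 - epoch),
# so that selection gradually converges to the plain top-k of Equation 2.
```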
## 4 Experiments
In this section, we demonstrate the effectiveness of SeViT compared to its baselines on eight video-language datasets. We denote our models as SeViT\({}_{\text{MAR}}\) and SeViT\({}_{\text{FiD}}\) according to their training objectives, Marginalization and Fusion-in-Decoder, respectively.
### Main Baseline: SeViT with Frame Sampling
Although there are several baseline models, we would like to strictly compare the effect of employing frame retrieval while controlling other factors such as model size, pre-training steps, and fusion methods. To this end, we introduce a strong baseline utilizing uniform frame sampling instead of frame retrieval to choose the \(k\) frames. We refer to this baseline as SeViT-with-frame-sampling, denoted simply SeViT\({}^{\otimes}\). When we train SeViT\({}^{\otimes}\) with Marginalization, we use a uniform prior \(1/k\) as the frame score instead of Equation 3 in the late fusion (Equation 4). We describe details of the other baselines in our experiments in Appendix C.
### Dataset
We evaluate our models on six Video QA datasets, TGIF-QA (Jang et al., 2017), MSVD-QA (Xu et al., 2017), MSRVTT-QA (Xu et al., 2017), iVQA (Yang et al., 2021), Next-QA (Xiao et al., 2021), and Activitynet-QA (Yu et al., 2019), and two video captioning datasets, MSVD-Caption (Chen and Dolan, 2011) and MSRVTT-Caption (Xu et al., 2016). We mainly report top-1 accuracy for the Video QA and CIDEr (Vedantam et al., 2015) for the video captioning. More details including statistics are in Appendix A.
### Implementation Details
We use pre-trained CLIP-base/16 (Radford et al., 2021) for our frame retriever and pre-trained OFA-Base (Wang et al., 2022b) for the VGT-generator. CLIP is a bi-encoder pre-trained on large-scale image-text pairs. OFA is a vision-language transformer pre-trained on multiple tasks consisting of
\begin{table}
\begin{tabular}{l c c c} \hline \hline \multirow{2}{*}{Dataset} & \# Frame & \multirow{2}{*}{MAR} & \multirow{2}{*}{FiD} \\ & train/test & & \\ \hline _Video QA_ & & & \\ TGIF-Action & 3/6 & **94.9** & 94.8 \\ TGIF-Transition & 3/6 & **98.2** & 98.0 \\ TGIF-Frame & 3/6 & 70.6 & **71.1** \\ MSVD-QA & 5/10 & **49.3** & **49.3** \\ MSRVTT-QA & 5/10 & 41.9 & **42.3** \\ iVQA & 5/10 & 35.3 & **36.4** \\ Next-QA & 5/10 & 54.4 & **54.6** \\ Activitynet-QA & 5/10 & 46.3 & **47.1** \\ \hline _Video Captioning_ & & & \\ MSVD-Caption & 5/10 & 127.4 & **134.9** \\ MSRVTT-Caption & 5/10 & 58.6 & **61.8** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison between two fusion methods, Marginalization (MAR) and Fusion-in-Decoder (FiD) on Video QA and captioning tasks based on our baseline with uniform frame sampling, SeViT\({}^{\otimes}\) explained in Section 4.1, to identify their differences apart from frame retrieval. We report top-1 accuracy for Video QA and CIDEr for video captioning.
image-text, text-only, and image-only tasks, by unifying the input and output protocol. As described in Section 3.4, we pre-compute frame vectors of all videos in the target dataset in advance to perform an efficient search, MIPS. The temperature \(\tau\) is empirically set to 1. First, we train \(\text{SeViT}_{\text{MAR}}\) with the Marginalization objective, _i.e._, joint optimization of the retriever and generator on the target dataset. Then, the fine-tuned retriever is reused for training \(\text{SeViT}_{\text{FiD}}\) on the same dataset, as described in Section 3.4. Also, we set \(k\) to 5 for training and 10 at test time for all datasets except for TGIF-QAs, where we set \(k\) to 3 and 6 for training and test, respectively. For multiple-choice QA, TGIF-Action, TGIF-Transition, and Next-QA, we concatenate the answer options and the query together, introducing a separation token. For video captioning tasks, we use a null query, "_What does the image describe?_", used for image captioning tasks by Wang et al. (2022b). All models are trained with {1e-5, 3e-5} learning rate and {16, 32} batch size for 5 epochs on 1-2 NVIDIA A100 GPUs. We mainly use PyTorch and Huggingface's Transformers library for our implementation (Paszke et al., 2019; Wolf et al., 2020). Please see Appendix B for more details.
### Comparison between Late Fusion Methods
Before we discuss the benefits of frame retrieval, we compare our two late fusion methods based on our baseline model, \(\text{SeViT}^{\otimes}\). Table 1 shows the results on ten downstream tasks. Both methods show comparable performance, but FiD performs slightly better than MAR on most Video QA datasets, especially on TGIF-Frame, iVQA, and Activitynet-QA, which contain descriptive QA pairs. We find that the late fusion methods perform surprisingly well on the datasets requiring temporal reasoning, _e.g._, TGIF-Action, TGIF-Transition, and Next-QA, even though they do not consider the temporal order among frames explicitly. Also, FiD shows better performance than MAR on the two video captioning tasks, indicating that FiD is better at generating longer text.
### Benefits from Frame Retrieval
Table 2 shows the effect of frame retrieval on the Video QA datasets. The frame retrieval consistently improves the performance on all datasets except for MSRVTT-QA, regardless of video length. Notably, it improves the performance on the longer Video QA datasets, iVQA, Next-QA, and Activitynet-QA. We find that the gains are slightly larger with MAR fusion, improving accuracy by 1.0 and 0.9 points on iVQA and Activitynet-QA, respectively. We presume that joint training of the frame retriever boosts the gain. In Table 12 of Appendix D, we find that frame retrieval consistently improves video captioning performance as well, even though the null query is used for the retrieval. We think the null query effectively filters out uninformative frames, resulting in performance gains.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Model & \begin{tabular}{c} Causal \\ Original / Hard \\ \end{tabular} &
\begin{tabular}{c} Temporal \\ Original / Hard \\ \end{tabular} & Descriptive & All \\ \hline HGA & 46.3 / 43.3 & 50.7 / 45.3 & 59.3 & 49.7 \\ HGQA & 48.5 / - & 51.2 / - & 61.7 & 51.4 \\ ATP & 48.3 / 19.6 & 46.7 / 22.6 & 58.9 & 49.2 \\ Temp[ATP] & 48.6 / 38.4 & 49.3 / 36.5 & 65.0 & 51.5 \\ + ATP & 53.1 / - & 50.2 / - & 66.8 & 54.3 \\ VGT & 52.3 / - & **55.1** / - & 64.1 & 55.0 \\ \hline \(\text{SeViT}^{\otimes}_{\text{MAR}}\) & 52.3 / 41.9 & 52.3 / 44.7 & 71.2 & 55.2 \\ \(\text{SeViT}_{\text{MAR}}\) & 53.5 / 43.2 & 54.0 / 46.3 & 69.2 & 56.1 \\ \(\text{SeViT}^{\otimes}_{\text{FiD}}\) & 53.0 / 42.7 & 54.1 / 46.4 & **71.9** & 56.3 \\ \(\text{SeViT}_{\text{FiD}}\) & **54.0** / **43.3** & 54.1 / **46.5** & 71.3 & **56.7** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Evaluation results on Next-QA validation set. In addition to the original validation set, we include a hard subset requiring video-level understanding identified by ATP (Buch et al., 2022).
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Dataset &
\begin{tabular}{c} Frame \\ Retrieval \\ \end{tabular} & MAR & FiD \\ \hline MSVD-QA (10s) & ✗ & 49.3 & 49.3 \\ & ✓ & **49.5** & **49.7** \\ \hline MSRVTT-QA (15s) & ✗ & **41.9** & **42.3** \\ & ✓ & 41.7 & 42.1 \\ \hline iVQA (18s) & ✗ & 35.3 & 36.4 \\ & ✓ & **36.4** & **36.9** \\ \hline Next-QA (44s) & ✗ & 54.4 & 54.6 \\ & ✓ & **54.8** & **55.2** \\ \hline Activitynet-QA (180s) & ✗ & 46.3 & 47.1 \\ & ✓ & **47.2** & **47.6** \\ \hline \hline \end{tabular}
\end{table}
Table 2: We compare our \(\text{SeViT}\) leveraging frame retriever with its counterpart baseline, \(\text{SeViT}^{\otimes}\) relying on uniform frame sampling instead of frame retrieval, on Video QA datasets.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{3}{c}{Next-QA} & \multicolumn{3}{c}{Activtivynet-QA} \\ \cline{2-7} & Short & Long & All & Short & Long & All \\ \hline \(\text{SeViT}^{\otimes}_{\text{MAR}}\) & 54.6 & 53.4 & 54.4 & 47.3 & 45.9 & 46.3 \\ \(\text{SeViT}_{\text{MAR}}\) & **55.1** & **53.9** & **54.8** & **47.4** & **47.1** & **47.2** \\ \hline \(\text{SeViT}^{\otimes}_{\text{FID}}\) & **54.9** & 53.4 & 54.6 & **47.9** & 46.6 & 47.1 \\ \(\text{SeViT}_{\text{FD}}\) & 54.8 & **56.6** & **55.2** & 47.5 & **47.6** & **47.6** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Evaluation breakdown on Next-QA and Activitynet-QA by their source video length. We split the test set of Next-QA and Activitynet-QA into long and short subsets according to whether the source video length is longer than 60 seconds.
Results on Long Video Subset. In Table 3, we further break down the evaluation results according to source video length to identify the benefits of frame retrieval on longer videos. Specifically, we divide the original test sets of Next-QA and Activitynet-QA into long and short sub-splits according to whether the source video length is longer than 60 seconds. We find that most of the improvement from frame retrieval comes from the long videos in both datasets. In particular, \(\text{SeViT}_{\text{FiD}}\) improves accuracy on the long-video subset of Next-QA by 3.2 percentage points when employing frame retrieval.
Results by Question Types. Table 4 shows that the performance gains from frame retrieval on Next-QA come from the causal and temporal question types for both fusion methods. In contrast, the performance on the descriptive type is slightly degraded by frame retrieval. It is notable that the improvements are significant in \(\text{SeViT}_{\text{MAR}}\) for both causal and temporal types, while the improvement is limited to the causal type in \(\text{SeViT}_{\text{FiD}}\). This again highlights the importance of joint retriever training. Nevertheless, \(\text{SeViT}_{\text{FiD}}\) consistently performs better than \(\text{SeViT}_{\text{MAR}}\) in all three types. Moreover, it outperforms previous best-performing models utilizing sophisticated graph representations, HGA (Jiang and Han, 2020), HGQA (Xiao et al., 2022a), and VGT (Xiao et al., 2022b), especially on the causal and descriptive types. More results by question types are in Tables 10 and 11 of Appendix D.
Analysis on Untrimmed Long Videos. Finally, we further analyze the advantages of frame retrieval on the longest video benchmark, Activitynet-QA. Figure 3 illustrates the advantages from two perspectives. First, we divide the test set into subsets according to more fine-grained video lengths, as shown in Figure 3 (a). As hypothesized, the performance gap between models with and without frame retrieval becomes more pronounced on longer videos. Specifically, \(\text{SeViT}_{\text{MAR}}^{\otimes}\) and \(\text{SeViT}_{\text{FiD}}^{\otimes}\) both drop in performance significantly on videos longer than 180 seconds, whereas \(\text{SeViT}_{\text{MAR}}\) and \(\text{SeViT}_{\text{FiD}}\) successfully retain their performance on the longer videos. Second, we investigate the sample efficiency in terms of the number of frames at inference time. We believe that if the frame retriever selects informative frames well, the model works better with fewer frames than with non-informative frames from the uniform frame sampler. Figure 3 (b) shows such a tendency, as we hypothesized. Even though performance decreases gradually with fewer frames, the performance gap between SeViT and SeViT\({}^{\otimes}\) becomes more significant. This implies that the frames obtained by the frame retriever are more informative than those from random uniform sampling. We also find the strength of SeViT on long videos in a qualitative analysis, as shown in Figure 4 of Appendix E.
In the absence of query-side (QS) fine-tuning, the final performance of \(\text{SeViT}_{\text{MAR}}\) drops by 0.6 and 1.2 accuracy points on Next-QA and Activitynet-QA, respectively. Similarly, using the frame retriever warmed up by \(\text{SeViT}_{\text{MAR}}\) boosts the final performance of \(\text{SeViT}_{\text{FiD}}\) on both datasets. We find that diverse retrieval by top-k annealing also contributes to the final performance. Furthermore, a larger backbone model for the VGT-generator significantly improves performance.
### Comparison with State-of-the-arts
We also compare ours with previous state-of-the-art models, as shown in Table 6. Notably, our models show competitive performance on the short-video Video QA datasets compared to baselines pre-trained on large video-text pairs, JustAsk (Yang et al., 2021), MERLOT (Zellers et al., 2021), All-in-One (Wang et al., 2022a), FrozenBiLM (Yang et al., 2022), and LAVENDER (Li et al., 2022b), even without any video-text pre-training. Our model also outperforms other baselines utilizing graph representations, HQGA (Xiao et al., 2022a), IGV (Li et al., 2022c), and VGT (Xiao et al., 2022b), as well as baselines pre-trained on image-text pairs, ClipBERT (Lei et al., 2021) and SINGULARITY (Lei et al., 2022). Moreover, our models excel on the relatively longer Video QA datasets, Next-QA and Activitynet-QA. In particular, our FiD-based model achieves new state-of-the-art performance on iVQA, Next-QA, and Activitynet-QA when using a large-sized backbone, _i.e._, OFA-Large, for the VGT-generator. In Table 12 of Appendix D, our model also shows competitive performance on the video captioning datasets compared to end-to-end video-transformer-based baselines, SwinBERT (Lin et al., 2022), MV-GPT (Seo et al., 2022), and LAVENDER (Li et al., 2022b). Notably, \(\text{SeViT}_{\text{FiD}}\) based on OFA-Large achieves a new state-of-the-art performance in terms of CIDEr (Vedantam et al., 2015) on the MSRVTT-Caption dataset, even without video-text pre-training.
## 5 Conclusion
In this work, we present SeViT for scalable video representation toward untrimmed long videos. In particular, we regard a video as an external data store and leverage the non-parametric retriever to get relevant frames. Then, a parametric generator focuses on the effective aggregation of the frames. We find SeViT has significant advantages, especially in longer videos and questions requiring causal video understanding. Furthermore, SeViT achieves state-of-the-art performances on Video QA and captioning tasks without any video-language pre-training. We believe SeViT will promote future research into longer video understanding, _e.g_., minutes or even hours.
|
2302.10349
|
Principal blocks with six ordinary irreducible characters
|
We classify Sylow $p$-subgroups of finite groups whose principal $p$-blocks
have precisely six ordinary irreducible characters.
|
Nguyen N. Hung, A. A. Schaeffer Fry, Carolina Vallejo
|
2023-02-20T22:32:00Z
|
http://arxiv.org/abs/2302.10349v2
|
# Principal blocks with six ordinary irreducible characters
###### Abstract.
We classify Sylow \(p\)-subgroups of finite groups whose principal \(p\)-blocks have precisely six ordinary irreducible characters.
Key words and phrases: Principal block, defect group, irreducible character, Sylow subgroup.

2010 Mathematics Subject Classification: Primary 20C20, 20C15, 20C33.

The first author is grateful for the support of a UA Faculty Research Grant. The second-named author is grateful for the support of a grant from the National Science Foundation, Award No. DMS-2100912. The third author, as part of the GNSAGA, is grateful for the support of the _Istituto Nazionale di Alta Matematica_ (INDAM). The authors also thank Gunter Malle for comments on an earlier draft.
and to obtaining a \(p\)-local lower bound for the number of height-zero characters in principal blocks [14].
The purpose of this paper is to advance on the determination of the structures of defect groups of blocks with a given number of ordinary characters. Our main result classifies the defect groups of principal blocks with six characters.
**Theorem A**.: Let \(G\) be a finite group, \(p\) a prime, and \(P\in\operatorname{Syl}_{p}(G)\). Suppose that the principal \(p\)-block of \(G\) has precisely six irreducible ordinary characters. Then \(|P|=9\).
Let \(B_{0}(G)\), or sometimes just \(B_{0}\), denote the principal \(p\)-block of \(G\), and let \(k(B_{0})\) denote the number of ordinary irreducible characters of \(B_{0}\). Our strategy for proving Theorem A, which is somewhat different from the above-mentioned previous work on \(k(B_{0})\in\{4,5\}\), is to analyse the number \(k_{0}(B_{0})\) of those characters in \(B_{0}\) of _height zero_. One of the main results of [14] already classifies the Sylow structure of finite groups with \(k_{0}(B_{0})\leq 5\), and perhaps surprisingly, none of these possibilities could occur when \(k(B_{0})=6\). Therefore, we are left to deal with \(k(B_{0})=k_{0}(B_{0})=6\), in which case the Sylow subgroup \(P\) must be abelian, by the recent proof of Brauer's height zero conjecture for principal blocks by Malle and Navarro [15] (note that the more general case of the conjecture has now also been proved [15]).
Our proof makes use of the Classification of Finite Simple Groups. In particular, we have to find lower bounds for the number of \(\operatorname{Aut}(S)\)-orbits of irreducible characters in the principal block of a non-abelian simple group \(S\). This has been recently studied in connection with other problems on character degrees and character bounds [16, 17]. With this, we are able to restrict ourselves to studying the \(\operatorname{Aut}(S)\)-orbits of characters in the principal \(3\)-blocks of such an \(S\) with an abelian Sylow \(3\)-subgroup, see Theorem 2.1.
There are two isomorphism classes of groups of order \(9\), namely \(\mathsf{C}_{9}\) and \(\mathsf{C}_{3}\times\mathsf{C}_{3}\). Either of them can occur as the defect group of a principal block with \(6\) ordinary characters, as shown by the semidirect products in which \(\mathsf{C}_{2}\) acts on \(\mathsf{C}_{9}\) and on \(\mathsf{C}_{3}\times\mathsf{C}_{3}\) by inversion. That being said, the full converse of Theorem A does not hold: if \(|P|=9\), then \(k(B_{0})\) could be either \(6\) or \(9\) (see the proof of Theorem 3.9). The next result, which fully characterizes finite groups with \(k(B_{0})=6\) in terms of \(p\)-local structure, offers a more complete version of Theorem A.
**Theorem B**.: Let \(G\) be a finite group, \(p\) a prime, and \(P\in\operatorname{Syl}_{p}(G)\). Let \(B_{0}\) denote the principal \(p\)-block of \(G\). Then \(k(B_{0})=6\) if and only if precisely one of the following holds:
1. \(P=\mathsf{C}_{9}\) and \(|\mathbf{N}_{G}(P):\mathbf{C}_{G}(P)|=2\).
2. \(P=\mathsf{C}_{3}\times\mathsf{C}_{3}\) and either \(\mathbf{N}_{G}(P)/\mathbf{C}_{G}(P)\in\{\mathsf{C}_{4},\mathsf{Q}_{8}\}\) or \(\mathbf{N}_{G}(P)/\mathbf{C}_{G}(P)\cong\mathsf{C}_{2}\) acts fixed-point freely on \(P\).
Proofs of the main theorems are contained in Section 3 and the necessary results on finite simple groups are proved in Section 2. At the end of the paper, we provide a summary of what is known about the structure of defect groups of small blocks in Table 1.
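The two order-18 examples mentioned before Theorem B can be checked mechanically: both have trivial \(\mathbf{O}_{3^{\prime}}\), so by Fong's theorem (used in Section 3) the principal \(3\)-block is the unique block and \(k(B_{0})=k(G)\). The brute-force class count below is only an illustrative sanity check, not part of the proofs.

```python
def conjugacy_class_count(elements, multiply, inverse):
    """Count conjugacy classes of a small group given by explicit operations."""
    remaining = set(elements)
    count = 0
    while remaining:
        x = next(iter(remaining))
        cls = {multiply(multiply(g, x), inverse(g)) for g in elements}
        remaining -= cls
        count += 1
    return count

# C_9 : C_2 with C_2 acting by inversion; elements (a, s), a in Z/9, s = +-1.
els1 = [(a, s) for a in range(9) for s in (1, -1)]
mul1 = lambda x, y: ((x[0] + x[1] * y[0]) % 9, x[1] * y[1])
inv1 = lambda x: ((-x[1] * x[0]) % 9, x[1])

# (C_3 x C_3) : C_2 with C_2 acting by inversion; elements ((a, b), s).
els2 = [((a, b), s) for a in range(3) for b in range(3) for s in (1, -1)]
mul2 = lambda x, y: (((x[0][0] + x[1] * y[0][0]) % 3,
                      (x[0][1] + x[1] * y[0][1]) % 3), x[1] * y[1])
inv2 = lambda x: (((-x[1] * x[0][0]) % 3, (-x[1] * x[0][1]) % 3), x[1])

print(conjugacy_class_count(els1, mul1, inv1))  # 6 classes, so k(B_0) = 6
print(conjugacy_class_count(els2, mul2, inv2))  # 6 classes, so k(B_0) = 6
```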
## 2. Principal blocks of simple groups
In this section, we prove the following statements on simple groups, which will be needed for the proof of our main theorems.
**Theorem 2.1**.: _Let \(S\) be a non-abelian simple group. Let \(p=3\) and let \(B_{0}:=B_{0}(S)\) be the principal 3-block of \(S\). Assume that \(Q\in\operatorname{Syl}_{p}(S)\) is abelian and \(|Q|\geqslant 9\). If \(S\leqslant A\leqslant\operatorname{Aut}(S)\), then one of the following holds:_
1. _The action of_ \(A\) _defines at least_ \(4\) _orbits on_ \(\operatorname{Irr}(B_{0})\backslash\{\mathbf{1}_{S}\}\)_._
2. \(S\in\{\operatorname{PSL}_{2}(q),\operatorname{PSL}_{3}(q),\operatorname{PSU }_{3}(q)\}\) _with_ \((3,q)=1\)_,_ \((|A:S|,3)=3\)_, and the action of_ \(A\) _induces_ \(3\) _orbits on_ \(\operatorname{Irr}(B_{0}(S))\backslash\{\mathbf{1}_{S}\}\)_. In this case,_ \(k(B_{0}(A))>6\)_._
3. \(S=\operatorname{Alt}_{6}=\operatorname{PSL}_{2}(3^{2})\)_,_ \(A\) _has a subgroup isomorphic to_ \(\operatorname{M}_{10}\)_, and the action of_ \(A\) _induces_ \(3\) _orbits on_ \(\operatorname{Irr}(B_{0}(S))\backslash\{\mathbf{1}_{S}\}\)_._
**Theorem 2.2**.: _Let \(S\) be a non-abelian simple group. Let \(p\) be an odd prime and let \(B_{0}:=B_{0}(S)\) be the principal \(p\)-block of \(S\). Assume that \(P\in\operatorname{Syl}_{p}(S)\) is abelian. If \(k(B_{0})=6\), then \(p=3\) and \(|P|=9\)._
We refer the reader to [20, Chapter 9] for basics on the block theory involving normal subgroups and quotient groups. Recall that if \(N\) is a normal subgroup of \(G\) and \(B\) and \(b\) are blocks of \(G\) and \(N\) respectively, then \(B\) is said to _cover_\(b\) if there are \(\chi\in\operatorname{Irr}(B)\) and \(\theta\in\operatorname{Irr}(b)\) such that \(\theta\) is an irreducible constituent of the restriction \(\chi_{N}\). It is clear that \(B_{0}(G)\) covers \(B_{0}(N)\). For \(\theta\in\operatorname{Irr}(N)\), we write \(\operatorname{Irr}(G|\theta)\), respectively \(\operatorname{Irr}(B|\theta)\), for the set of those characters of \(G\), respectively \(B\), containing \(\theta\) as a constituent when restricted to \(N\).
**Lemma 2.3**.: _Let \(G\) be a finite group, \(N\trianglelefteqslant
[11, Theorem (11.1)]). Now, we see that there are at least \(5\) non-conjugate partitions with \(3\)-core \((r)\) for \(m\geq 2\), yielding the claim.
We say that \(\theta\in\operatorname{Irr}(N)\), where \(N\vartriangleleft G\), _extends_ to \(G\) (or is _extendable_ to \(G\)) if there is some \(\chi\in\operatorname{Irr}(G)\) such that \(\chi_{N}=\theta\). In that case,
\[\operatorname{Irr}(G|\theta)=\{\beta\chi\ |\ \beta\in\operatorname{Irr}(G/N)\}\]
by a theorem of Gallagher [10, Corollary 6.17].
The next observation will be useful in some of the remaining cases.
**Lemma 2.5**.: _Let \(X\), \(Y\), \(\widetilde{X}\), and \(E\) be finite groups such that \(X\lhd\widetilde{X}\lhd\widetilde{X}E\); \(X\lhd Y\leq\widetilde{X}E\); and \(E\) is abelian. Suppose further that \(p\) is a prime such that \([\widetilde{X}:X]=p\) and \(p\mid[Y:X]\). Let \(\widetilde{\chi}\in\operatorname{Irr}(B_{0}(\widetilde{X}))\) be a character in the principal \(p\)-block of \(\widetilde{X}\) that is extendable to \(\widetilde{X}E\) and restricts to \(\chi\in\operatorname{Irr}(X)\). Then there exist at least two characters in \(\operatorname{Irr}(B_{0}(Y))\) lying above \(\chi\)._
Proof.: Note that our assumptions imply that \(Y/(\widetilde{X}\cap Y)\) is abelian and \(\widetilde{X}\cap Y\in\{\widetilde{X},X\}\). Let \(X\lhd Y_{p}\lhd Y\) such that \(Y_{p}=\widetilde{X}\) if \(\widetilde{X}\cap Y=\widetilde{X}\) and such that \(|Y_{p}/X|=p\) if \(\widetilde{X}\cap Y=X\). Then in either case, we have \(|Y_{p}/X|=p\) and \(Y/Y_{p}\) is abelian.
Note that \(\chi\) has \(p\) distinct extensions \(\chi_{1},\ldots,\chi_{p}\) to \(Y_{p}\), which all must lie in \(B_{0}(Y_{p})\) by Lemma 2.3(iv) and the fact that \([Y_{p}:X]=p\), so \(B_{0}(Y_{p})\) is the unique \(p\)-block of \(Y_{p}\) above \(B_{0}(X)\). Since \(\chi\) extends to \(\widetilde{X}E\), and hence to \(Y\), at least one of these, say \(\chi_{1}\), extends to \(Y\). In the case that \(Y_{p}=\widetilde{X}\), we may even specify \(\chi_{1}:=\widetilde{\chi}\). Since \(Y/Y_{p}\) is abelian, it follows that every character in \(\operatorname{Irr}(Y|\chi_{1})\) is an extension, by Gallagher's theorem [10, Corollary 6.17]. In particular, \(\chi_{1}\) extends to some member of \(B_{0}(Y)\) lying above \(\chi\), by Lemma 2.3(iii). By Lemma 2.3(iii) again, there is also a member of \(\operatorname{Irr}(B_{0}(Y)|\chi_{2})\) lying above \(\chi\), and this character cannot lie above \(\chi_{1}\) since \(\chi_{1}\) and \(\chi_{2}\) are not \(Y\)-conjugate. This yields at least two distinct members of \(\operatorname{Irr}(B_{0}(Y)|\chi)\), as desired.
In what follows, for \(\epsilon\in\{\pm 1\}\), we will use \(\operatorname{PSL}_{n}^{\epsilon}(q)\) to denote the group \(\operatorname{PSL}_{n}(q)\) of type \(\operatorname{A}_{n-1}\) if \(\epsilon=1\) and the group \(\operatorname{PSU}_{n}(q)\) of type \({}^{2}\operatorname{A}_{n-1}\) if \(\epsilon=-1\), and we will use analogous notation for the related groups \(\operatorname{SL}_{n}^{\epsilon}(q)\), \(\operatorname{GL}_{n}^{\epsilon}(q)\), and \(\operatorname{PGL}_{n}^{\epsilon}(q)\).
**Lemma 2.6**.: _Theorem 2.1 holds when \(S\) is one of the groups \(\operatorname{PSL}_{2}(q)\) or \(\operatorname{PSL}_{3}^{\epsilon}(q)\) with \(q=q_{0}^{f}\) a power of a prime \(q_{0}\neq 3\)._
Proof.: Let \(S=\operatorname{PSL}_{n}^{\epsilon}(q)\) with \(n\in\{2,3\}\) and \(q=q_{0}^{f}\) with \(q_{0}\neq 3\) a prime. Write
\[G:=\operatorname{SL}_{n}^{\epsilon}(q),\widetilde{S}:=\operatorname{PGL}_{n}^{ \epsilon}(q),\text{ and }\widetilde{G}:=\operatorname{GL}_{n}^{\epsilon}(q)\]
for the appropriate choice of \(n,\epsilon\). In this case, the dual group \(\widetilde{G}^{*}\) is isomorphic to \(\widetilde{G}\), and we identify the two groups. Note that \(\operatorname{Aut}(S)=\widetilde{S}\rtimes D\), where \(D\) is an appropriate group generated by field and graph automorphisms. (See e.g. [10, Theorem 2.5.12].) In this case, \(D\) is further abelian. Let \(e\in\{1,2\}\) be the order of \(\epsilon q\) modulo \(3\). The unipotent characters of \(\widetilde{G}\) (or \(\widetilde{S}\), \(S\)) are in bijection with partitions of \(n\), and by [10], two such characters lie in the same block if and only if they correspond to partitions with the same \(e\)-core. In the case \(e=1\) and \(S=\operatorname{PSL}_{3}^{\epsilon}(q)\), we see from this that there are two nontrivial
unipotent characters in \(B_{0}(S)\), and these are \(\operatorname{Aut}(S)\)-invariant (see e.g. [10, Theorem 2.5]). In the remaining cases, there is one nontrivial unipotent character in \(B_{0}(S)\) (namely, the Steinberg character \(\mathbf{St}_{S}\)), which is again \(\operatorname{Aut}(S)\)-invariant.
(I) Suppose first that \(|Q|>9\).
(Ia) If \(S=\operatorname{PSL}_{3}^{\epsilon}(q)\) with \(e=1\), then this means that \(9\mid(q-\epsilon)\) and all three unipotent characters lie in the principal block. Let \(a_{1},a_{2}\in C_{q-\epsilon}\leqslant\mathbb{F}_{q^{2}}^{\times}\) such that \(|a_{1}|=3\) and \(|a_{2}|=9\). Then for \(i=1,2\), let
\[s_{i}:=\operatorname{diag}(a_{i},a_{i}^{-1},1)\in\widetilde{G}.\]
Then each \(s_{i}\) defines a semisimple character \(\chi_{s_{i}}\) of \(\widetilde{G}\) that lies in \(B_{0}(\widetilde{G})\) using [1, Theorem 9.12], is trivial on \(\mathbf{Z}(\widetilde{G})\) since \(s_{i}\in[\widetilde{G},\widetilde{G}]=G\) (see e.g. [1, Prop. 2.7]), and such that \(s_{1}^{\alpha}z\) is not \(\widetilde{G}\)-conjugate to \(s_{2}\) for any \(z\in\mathbf{Z}(\widetilde{G})\) and \(\alpha\in D\) (since semisimple classes in \(\widetilde{G}\) are determined by their eigenvalues and the eigenvalues of \(s_{1}^{\alpha}\) still have order \(3\)). Further, there is an isomorphism \(z\mapsto\hat{z}\) between \(\mathbf{Z}(\widetilde{G})\) and \(\operatorname{Irr}(\widetilde{G}/G)\), such that \(\chi_{sz}=\chi_{s}\hat{z}\) in this situation for \(s\in\widetilde{G}\) semisimple. (See [1, (8.19) and Proposition 8.26]). Then since \(\chi_{s_{1}}^{\alpha}=\chi_{s_{1}^{\alpha}}\) by [13, Corollary 2.5], we see \(\chi_{s_{1}}^{\alpha}\) and \(\chi_{s_{2}}\) must necessarily have distinct restrictions to \(S=G/\mathbf{Z}(G)\cong G\mathbf{Z}(\widetilde{G})/\mathbf{Z}(\widetilde{G})\), and we have obtained at least two additional \(\operatorname{Aut}(S)\)-orbits in \(\operatorname{Irr}(S)\) by restriction.
(Ib) Now let \(S=\operatorname{PSL}_{3}^{\epsilon}(q)\) with \(e=2\) or \(S=\operatorname{PSL}_{2}(q)\), so that \(Q\) is cyclic and the condition \(|Q|>9\) means \(27\mid(q^{2}-1)\). Let \(a_{1},a_{2},a_{3}\in\mathbb{F}_{q^{2}}^{\times}\) with orders \(|a_{i}|=3^{i}\) for \(i=1,2,3\). Then considering \(s_{i}\in\widetilde{G}\) whose nontrivial eigenvalues are \(\{a_{i},a_{i}^{-1}\}\), we again obtain \(\chi_{s_{i}}\in\operatorname{Irr}(B_{0}(\widetilde{G}))\), trivial on \(\mathbf{Z}(\widetilde{G})\), and hence \(\chi_{s_{i}}\) may be identified with a character in \(\operatorname{Irr}(B_{0}(\widetilde{S}))\). Suppose now for a contradiction that \(\chi_{s_{i}}^{\alpha}\hat{z}\) restricts to the same character of \(S\) as \(\chi_{s_{j}}\) for \(i\neq j\in\{1,2,3\}\), some \(\alpha\in D\), and \(z\in\mathbf{Z}(\widetilde{G})\). This means that \(s_{i}^{\alpha}z\) is \(\widetilde{G}\)-conjugate to \(s_{j}\) and that the character \(\hat{z}\) must be trivial on \(\mathbf{Z}(\widetilde{G})\). Then we see that the corresponding \(z\) (and hence \(\hat{z}\)) must have order a nontrivial power of \(3\), contradicting that \(3\) does not divide \(|\widetilde{S}/S|\) in the cases being considered. This yields our additional \(3\)\(\operatorname{Aut}(S)\)-orbits in this case.
(II) Finally, assume that we are in the last situation: \(S=\operatorname{PSL}_{2}(q)\) or \(\operatorname{PSL}_{3}^{\epsilon}(q)\) and \(|Q|=9\), and further assume that \(A\) defines fewer than \(4\) orbits on \(\operatorname{Irr}(B_{0}(S))\backslash\mathbf{1}_{S}\).
(IIa) First, if \(e=1\) and \(S=\operatorname{PSL}_{3}^{\epsilon}(q)\), this means that \(3\mid\mid(q-\epsilon)\). In this case, we still have two nontrivial, \(\operatorname{Aut}(S)\)-invariant, unipotent characters in \(\operatorname{Irr}(B_{0})\). Taking \(s_{1}=\operatorname{diag}(a_{1},a_{1}^{-1},1)\) as before with \(|a_{1}|=3\), the corresponding semisimple character \(\chi_{s_{1}}\) of \(\widetilde{G}\) is trivial on \(\mathbf{Z}(\widetilde{G})\), lies in \(B_{0}(\widetilde{S})\), and restricts to the sum of three characters in \(B_{0}(S)\). So, if \(S\lhd A\leq\operatorname{Aut}(S)\) with \(3\nmid[A:S]\), these characters must also be invariant under \(A\), giving more than \(4\)\(A\)-orbits on \(\operatorname{Irr}(B_{0}(S))\). So, assume that \(3\mid[A:S]\). By [10, Theorems 2.4 and 2.5], the unipotent characters extend to \(\operatorname{Aut}(S)\). Now, by applying Lemma 2.5 with \((X,\widetilde{X},Y,E)=(S,\widetilde{S},A,D)\) to each unipotent character, we obtain at least \(6\) characters just from those above the three unipotent characters, and hence more than \(6\) in total. Hence we are in the situation of (b).
(IIb) We are left with the case that \(9\mid\mid(q^{2}-1)\), so that \(9\mid\mid(q-\eta)\) for some \(\eta\in\{\pm 1\}\) and \(Q\) is cyclic of size \(9\). (In particular, we have \(e=2\) and \(\eta=-\epsilon\) in case \(S=\operatorname{PSL}_{3}^{\epsilon}(q)\).) Here the only nontrivial unipotent character in \(B_{0}(S)\) is the Steinberg character \(\mathbf{St}\), and there is a unique unipotent block of \(\widetilde{G}\) with positive defect. Let \(a_{1},a_{2}\in\mathbb{F}_{q^{2}}^{\times}\) and \(s_{1},s_{2}\in\widetilde{G}\) be defined exactly as in the case (Ib) above. Then \(\chi_{s_{i}}\in\operatorname{Irr}(B_{0}(\widetilde{S}))\) and \(\chi_{s_{1}}\) and \(\chi_{s_{2}}\) lie in distinct \(\operatorname{Aut}(S)\)-orbits, as before. Note that each \(\chi_{s_{i}}\) is irreducible on \(S\) using the same arguments as before, and that \(\chi_{s_{1}}\) is further \(D\)-invariant, since any element of \(D\) either inverts or stabilizes the eigenvalues of order \(3\). From the restrictions of \(\chi_{s_{1}}\) and \(\chi_{s_{2}}\) to \(S\), in addition to \(\mathbf{St}\), this yields three \(\operatorname{Aut}(S)\)-orbits on \(\operatorname{Irr}(B_{0}(S))\backslash\{\mathbf{1}_{S}\}\). Then since we have assumed we do not have four \(A\)-orbits on this set, the remaining characters in \(\operatorname{Irr}(B_{0}(S))\) must be \(A\)-conjugate to the restriction of \(\chi_{s_{2}}\) to \(S\). But note that this means the three choices of pairs \(\{a_{2},a_{2}^{-1}\}\) with \(|a_{2}|=9\) must yield \(A\)-conjugate characters, say \(\chi_{s_{2}},\chi_{s_{2}^{\prime}},\chi_{s_{2}^{\prime\prime}}\), and hence \(3\) divides \(|A/S|\).
Hence, we see that if \(A\) is an almost simple group with socle \(S\) permitting only \(3\) orbits on \(\operatorname{Irr}(B_{0})\backslash\{\mathbf{1}_{S}\}\), then \(3\mid|A/S|\) and \(A/(\widetilde{S}\cap A)\) is abelian. Another application of Lemma 2.5 applied to \(\mathbf{1}_{S}\), \(\mathbf{St}\), and \(\chi_{s_{1}}\) now forces at least \(6\) characters in \(\operatorname{Irr}(B_{0}(A))\), along with at least one more above \(\chi_{s_{2}}\).
**Corollary 2.7**.: _Theorem 2.1 holds when \(S\) is one of the groups \(\operatorname{PSL}_{2}(q)\) or \(\operatorname{PSL}_{3}^{\epsilon}(q)\) with \(q=q_{0}^{f}\) a power of a prime \(q_{0}\)._
Proof.: From Lemma 2.6, we may assume that \(q_{0}=3\). Then the condition that \(Q\) is abelian and \(|Q|\geqslant 9\) means that \(S=\operatorname{PSL}_{2}(3^{f})\) with \(f\geqslant 2\). (See, e.g. [1, Theorem].) Since the case \(f=2\) is covered by Lemma 2.4, we assume that \(f>2\).
Here \(B_{0}(S)\) contains all irreducible characters, aside from the Steinberg character (see [1, Theorems 1.18 and 3.3]). Then we see from the well-known character table for \(S\) that there are three distinct character degrees in \(\operatorname{Irr}(B_{0}(S))\backslash\{\mathbf{1}_{S}\}\), and it suffices to show that there are two semisimple characters of the same degree that are not \(\operatorname{Aut}(S)\)-conjugate. We will employ similar strategies to the second paragraph of the proof of Lemma 2.6.
Note that since \(q=3^{f}\geqslant 27\), at least one of \(q-\eta\) for \(\eta\in\{\pm 1\}\) is a composite number of the form \(4m\) with \(m\geqslant 7\). Then \(C_{q-\eta}\leqslant\mathbb{F}_{q^{2}}^{\times}\) contains two elements \(\zeta_{1},\zeta_{2}\) of order larger than \(4\) and satisfying \(|\zeta_{1}|\notin\{|\zeta_{2}|,2|\zeta_{2}|\}\). For \(i=1,2\), let \(s_{i}\) be a semisimple element of \(\operatorname{GL}_{2}(q)\) with eigenvalues \(\{\zeta_{i},\zeta_{i}^{-1}\}\). Then the semisimple characters \(\chi_{s_{i}}\) of \(\operatorname{GL}_{2}(q)\) corresponding to the \(s_{i}\) will restrict irreducibly to \(\operatorname{SL}_{2}(q)\) (since \(s_{i}z\) cannot be conjugate to \(s_{i}\) for any \(1\neq z\in\mathbf{Z}(\operatorname{GL}_{2}(q))\)) and be trivial on the center (since \(s_{i}\in\operatorname{SL}_{2}(q)=[\operatorname{GL}_{2}(q),\operatorname{GL}_{ 2}(q)]\)). Further, the restrictions of \(\chi_{s_{1}}\) and \(\chi_{s_{2}}\) to \(\operatorname{PSL}_{2}(q)\) cannot be conjugate under field automorphisms since \(\chi_{s_{1}}^{\alpha}=\chi_{s_{1}^{\alpha}}\) for \(\alpha\in D\), where we write \(\operatorname{Aut}(S)=\widetilde{S}\rtimes D\) as in the proof of Lemma 2.6, using [11, Corollary 2.5], and \({s_{1}}^{\alpha}\) cannot be conjugate to \(s_{2}z\) for \(z\in\mathbf{Z}(\operatorname{GL}_{2}(q))\) of order dividing \(2\). (Recall that the \(z\in\mathbf{Z}(\operatorname{GL}_{2}(q))\) are in bijection with \(\hat{z}\in\operatorname{Irr}(\operatorname{GL}_{2}(q)/\operatorname{SL}_{2}(q))\), and that \(\chi_{s_{2}z}=\chi_{s_{2}}\hat{z}\). Further, if \(|z|>2\), then \(\hat{z}\) is not trivial on \(\mathbf{Z}(\operatorname{GL}_{2}(q))\).) This shows that the restrictions of \(\chi_{s_{1}}\) and \(\chi_{s_{2}}\) to \(S\) are not \(\operatorname{Aut}(S)\)-conjugate, as desired.
We now complete the proof of Theorem 2.1, which essentially follows from the observations in [11, Section 3].
Proof of Theorem 2.1.: From Lemma 2.4 and Corollary 2.7, we may assume that \(S\) is a simple group of Lie type defined over \(\mathbb{F}_{q}\) with nonexceptional Schur multiplier, where \(q\) is a power \(q=q_{0}^{f}\) of a prime \(q_{0}\) and that \(S\notin\{\operatorname{PSL}_{2}(q),\operatorname{PSL}_{3}^{\epsilon}(q)\}\). Further, \(q_{0}\neq 3\), as otherwise \(Q\) is not abelian by [11, Theorem], since \(S\neq\operatorname{PSL}_{2}(q)\).
Assume first that \(S\) is of exceptional type, including \({}^{2}\operatorname{F}_{4}(q)\) and \({}^{3}\operatorname{D}_{4}(q)\). Then the proof of [10, Lemma 3.7] yields more than \(4\) orbits under \(\operatorname{Aut}(S)\) in \(\operatorname{Irr}(B_{0}(S))\), and we are done in this case. (Note that we do not need to consider the cases \({}^{2}\operatorname{G}_{2}(q)\) and \({}^{2}\operatorname{B}_{2}(q)\), since we would have \(q_{0}=3\) in the first case and \(3\) does not divide \(|S|\) in the second.)
We are left with the case that \(S\) is of classical type \(\operatorname{A}_{n-1}\) or \({}^{2}\operatorname{A}_{n-1}\) with \(n\geq 4\), \(\operatorname{B}_{n}\) with \(n\geq 3\), \(\operatorname{C}_{n}\) with \(n\geq 2\), or \(\operatorname{D}_{n}\) or \({}^{2}\operatorname{D}_{n}\) with \(n\geq 4\). Here, the proof of [10, Lemma 3.8] shows that we can assume \(me\leq 4\) when \(S=\operatorname{PSL}_{n}^{\epsilon}(q)\), where \(e\in\{1,2\}\) is the order of \(\epsilon q\) modulo \(3\) and \(n=me+r\) with \(0\leq r<e\). Similarly, the proof of [10, Lemma 3.10] shows that there are at least \(5\)\(\operatorname{Aut}(S)\)-orbits in \(B_{0}(S)\) in the remaining classical cases, except possibly if \(S=\operatorname{C}_{2}(q)\). (Note that since \(p=3\), we have \(e=1\) and \(n=m\) for these remaining classical cases in the notation of loc. cit.) If \(S=\operatorname{C}_{2}(q)\), then there is at worst one pair of unipotent characters in \(B_{0}(S)\) that are interchanged by elements of \(\operatorname{Aut}(S)\) (see [13, Theorem 2.5]). Then the same arguments as in [10, Lemma 3.10] again finish this case, since \(k(2,2)=5\) (in the notation of loc. cit).
Finally, we assume \(S=\operatorname{PSL}_{4}^{\epsilon}(q)\) or \(\operatorname{PSL}_{5}^{\epsilon}(q)\) with \(em=4\). Recall that by [10], unipotent characters lie in the same block if and only if they correspond to partitions with the same \(e\)-core. We see that there are \(5\) partitions of \(4\) with trivial \(e\)-core and similarly \(5\) partitions of \(5\) with \(e\)-core (1), and hence there exist \(5\) unipotent characters in \(B_{0}(S)\), which are \(\operatorname{Aut}(S)\)-invariant by [13, Theorem 2.5].
We are now ready to prove Theorem 2.2.
Proof of Theorem 2.2.: First, assume \(P\) is cyclic. Then by Dade's cyclic-defect theory [1, Theorem 1], we have \(k(B_{0})=f+\frac{|P|-1}{f}\), where \(f:=[\mathbf{N}_{S}(P):\mathbf{C}_{S}(P)]\). This forces \(|P|=9\) when \(k(B_{0})=6\).
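Since the inertial quotient \(\mathbf{N}_{S}(P)/\mathbf{C}_{S}(P)\) is a \(p^{\prime}\)-group embedding into \(\operatorname{Aut}(P)\), we also have \(f\mid p-1\) here; solving \(f+(|P|-1)/f=6\), i.e. \(|P|=f(6-f)+1\), under this constraint leaves only \(|P|=9\). The short enumeration below is merely an illustration of this arithmetic.

```python
def prime_power(n):
    """Return (p, a) if n = p**a for a prime p and a >= 1, else None."""
    if n < 2:
        return None
    p = 2
    while p * p <= n:
        if n % p == 0:
            a = 0
            while n % p == 0:
                n //= p
                a += 1
            return (p, a) if n == 1 else None
        p += 1
    return (n, 1)

# k(B_0) = f + (|P| - 1)/f with f dividing p - 1 (cyclic defect group P).
for f in range(1, 6):                    # f <= 5 since f < k(B_0) = 6
    order = f * (6 - f) + 1              # candidate |P|
    pp = prime_power(order)
    if pp and (pp[0] - 1) % f == 0:
        print(f"f = {f}: |P| = {order} = {pp[0]}^{pp[1]}")
# Prints only: f = 2: |P| = 9 = 3^2
```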
Hence we assume that \(P\) is non-cyclic, and for contradiction, we assume that \(k(B_{0})=6\) and \(|P|\neq 9\). By [10, Theorem 1.1], we know that \(k(B_{0})\geq 2\sqrt{p-1}\), and hence we may further assume that \(p\in\{3,5,7\}\), so that \(|P|\geq 25\).
As before, the cases of sporadic groups, the Tits group, \({}^{2}\operatorname{G}_{2}(3)^{\prime}\), and groups of Lie type with exceptional Schur multiplier can be seen using GAP.
Next suppose \(S=\operatorname{Alt}_{n}\) is a simple alternating group and let \(n=pm+r\) with \(m,r\) integers satisfying \(m\geq 1\) and \(0\leq r<p\). Then
\[k(B_{0})\geq\frac{1}{2}k(B_{0}(\operatorname{Sym}_{n}))=\frac{1}{2}k(B_{0}( \operatorname{Sym}_{pm}))\]
where the equality comes from [13, Theorem (1.10)]. Note that our assumption \(|P|\geq 25\) forces \(m\geq 2\) if \(p\geq 5\) and \(m\geq 3\) if \(p=3\). In these cases, we can explicitly find at least \(13\) partitions of \(pm\) with trivial \(p\)-core, so that \(k(B_{0}(\operatorname{Sym}_{pm}))\geq 13\) and \(k(B_{0})>6\).
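The partition counts used here can be verified mechanically. The sketch below computes \(p\)-cores via beta-numbers on a \(p\)-runner abacus and counts partitions of \(pm\) with trivial \(p\)-core; for the smallest relevant cases \((p,m)=(3,3),(5,2),(7,2)\) it returns 22, 20, and 35, all comfortably above 13. It is only an illustrative check.

```python
def partitions(n, max_part=None):
    """Generate all partitions of n as weakly decreasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def p_core(lam, p):
    """p-core of a partition, via beta-numbers pushed up a p-runner abacus."""
    l = len(lam)
    beta = [lam[i] + (l - 1 - i) for i in range(l)]
    runners = [[b for b in beta if b % p == r] for r in range(p)]
    new_beta = sorted((r + j * p for r in range(p)
                       for j in range(len(runners[r]))), reverse=True)
    core = [new_beta[i] - (l - 1 - i) for i in range(l)]
    return tuple(x for x in core if x > 0)

def count_trivial_p_core(n, p):
    return sum(1 for lam in partitions(n) if p_core(lam, p) == ())

for p, m in [(3, 3), (5, 2), (7, 2)]:
    print(p, m, count_trivial_p_core(p * m, p))
```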
Finally, let \(S\) be a simple group of Lie type defined over \(\mathbb{F}_{q}\) with \(q\) a power of some prime. If \(p\mid q\), then the condition \(P\) is abelian again leaves only \(S=\operatorname{PSL}_{2}(q)\), and the condition
\(|P|\geqslant 25\) forces \(q\geqslant 25\). Now, by [12, Theorem 3.3], we have \(k(B_{0})=k(S)-1>6\), using the well-known character table for \(S\).
Hence we assume \(S\) is a simple group of Lie type defined over \(\mathbb{F}_{q}\) with \(p\nmid q\). We remark that our proof in this remaining case follows the work in [13] closely, although we need to exhibit one more character here than we needed there.
Assume first that \(S\) is of exceptional type. Note that \({}^{2}\operatorname{B}_{2}(q)\) and \({}^{2}\operatorname{G}_{2}(q)\) have cyclic Sylow \(p\)-subgroups, \(B_{0}({}^{2}\operatorname{F}_{4}(q))\) has more than \(6\) characters, using [10] and \(B_{0}({}^{3}\operatorname{D}_{4}(q))\) has more than \(6\) characters using [10], so we further assume that \(S\) is not of Suzuki, Ree, or triality type. If the order \(d_{p}(q)\) of \(q\) modulo \(p\) is not a so-called regular number, then (with our previous assumptions) we see from [11, Table 2] that \(B_{0}(S)\) contains more than \(6\) characters. So, assume that \(d_{p}(q)\) is regular, so that every \(p^{\prime}\)-degree unipotent character lies in \(B_{0}(S)\) (see e.g. [13, Lemma 3.6]). For \(S=\operatorname{G}_{2}(q)\), we see using [10] that \(B_{0}(S)\) contains more than \(6\) characters. In the remaining exceptional groups \(\operatorname{F}_{4}(q)\), \(\operatorname{E}_{6}^{\pm}(q)\), \(\operatorname{E}_{7}(q)\), and \(\operatorname{E}_{8}(q)\), we see using [1, Section 13.9] that there are more than \(6\)\(p^{\prime}\)-degree unipotent characters when \(P\) is noncyclic and abelian and \(d_{p}(q)\) is regular, so we are done with the exceptional groups.
We are left with the case that \(S\) is a finite classical group. If \(S\) is \(\operatorname{PSL}_{n}^{\epsilon}(q)\) with \(\epsilon\in\{\pm\}\) and \(n\geqslant 2\), let \(e\) be \(d_{p}(\epsilon q)\). If \(S\) is \(\operatorname{P\Omega}_{2n}^{\epsilon}(q)\) with \(n\geqslant 4\), \(\operatorname{P\Omega}_{2n+1}(q)\) with \(n\geqslant 3\), or \(\operatorname{PSp}_{2n}(q)\) with \(n\geqslant 2\), let \(e\) be \(d_{p}(q^{2})\). Arguing as in [14, Sections 6 and 7], we have the number of unipotent characters in \(B_{0}(S)\) is at least \(k(e,m)\), \(k(2e,m)/2\), \(k(2e,m)\), and \(k(2e,m)\), in the cases if \(S=\operatorname{PSL}_{n}^{\epsilon}(q)\), \(\operatorname{P\Omega}_{2n}^{\epsilon}(q)\), \(\operatorname{P\Omega}_{2n+1}(q)\), or \(\operatorname{PSp}_{2n}(q)\), respectively. Here \(m\) is such that \(n=em+r\) with \(0\leqslant r<e\) and \(k(s,t)\) can be computed as in [11, Lemma 1]. Further, our assumption that \(P\) is abelian but not cyclic forces \(p\geqslant m>1\). With this and our assumption that \(P\) is not cyclic, we have the number of unipotent characters in \(B_{0}(S)\) is at least \(7\) unless possibly if \((e,m)\in\{(1,3),(1,4),(2,2)\}\) if \(S=\operatorname{PSL}_{n}^{\epsilon}(q)\) or \((e,m)\in\{(1,2),(1,3)\}\) otherwise. In the latter cases, since \(e=1\), we have \(n=m\), so that \(S=\operatorname{PSp}_{2n}(q)\) or \(\operatorname{P\Omega}_{2n+1}(q)\), and the case \((e,m)=(1,3)\) gives \(10\) unipotent characters in \(B_{0}(S)\). Then we may assume \(S=\operatorname{PSL}_{n}^{\epsilon}(q)\) with \((e,m)\in\{(1,3),(1,4),(2,2)\}\) or \(S=\operatorname{PSp}_{4}(q)\) with \((e,m)=(1,2)\). Let \(G=\operatorname{SL}_{n}^{\epsilon}(q)\) and \(\widetilde{G}=\operatorname{GL}_{n}^{\epsilon}(q)\) in the first cases and \(G=\widetilde{G}=\operatorname{Sp}_{4}(q)\) in the latter.
First let \((e,m)\neq(1,3)\), so that there are at least \(5\) unipotent characters in \(B_{0}(S)\). We may consider two semisimple characters \(\chi_{s_{1}},\chi_{s_{2}}\) of \(\widetilde{G}\) corresponding to semisimple elements \(s_{1},s_{2}\) of \(\widetilde{G}^{*}\in\{\operatorname{GL}_{n}^{\epsilon}(q),\operatorname{SO}_{5}(q)\}\) with nontrivial eigenvalues \(\{a,a^{-1},b,b^{-1}\}\), where \(a,b\in C_{(q^{2}-1)_{p}}\leqslant\mathbb{F}_{q^{2}}^{\times}\) satisfy \(a\neq b\) in the case of \(s_{1}\) and \(a=b\) in the case of \(s_{2}\). These necessarily give distinct characters on restriction to \(G\), since \(s_{1}z\) is not conjugate to \(s_{2}\) for any \(1\neq z\in\mathbf{Z}(\widetilde{G}^{*})\), and are trivial on \(\mathbf{Z}(G)\), as the \(s_{i}\) lie in \([\widetilde{G}^{*},\widetilde{G}^{*}]\), which yields two more members of \(\operatorname{Irr}(B_{0}(S))\).
Finally, let \((e,m)=(1,3)\), so we have \(e=1\), \(p>3\), and \(S=\operatorname{PSL}_{3}^{\epsilon}(q)\). Here \(B_{0}(S)\) contains all \(3\) unipotent characters and therefore \(B_{0}(G)\) contains all characters lying in Lusztig series indexed by semisimple elements \(s\in G^{*}\) with \(|s|\) a power of \(p\) (see [1, Theorem 9.12]). We may obtain three semisimple elements \(s_{1},s_{2},s_{3}\) with eigenvalues \(\{a,a,a^{-2}\}\), \(\{a,b,(ab)^{-1}\}\), and \(\{a,a^{-1},1\}\) with \(a,b\in C_{q-\epsilon}\); \(|a|=p=|b|\); and \(a\neq b\), whose Lusztig series give distinct restrictions to \(G\) and are trivial on \(\mathbf{Z}(G)\), arguing as before. Further, the series \(\mathcal{E}(\widetilde{G},s_{1})\) contains two characters, corresponding to the two unipotent characters of
\(\mathbf{C}_{\widetilde{G}}(s_{1})\cong\operatorname{GL}_{2}^{\epsilon}(q)\times \operatorname{GL}_{1}^{\epsilon}(q)\). This yields at least \(4\) additional characters in \(B_{0}(S)\), and we are done.
## 3. Principal blocks with six irreducible characters
The purpose of this section is to prove Theorems A and B.
### Preliminaries
When \(p\) is a prime, we write \(k_{p}\) for the \(p\)-part of an integer \(k\). Also, \(l(B)\) denotes the number of irreducible Brauer characters in a block \(B\).
**Lemma 3.1**.: _Let \(G\) be a finite group and \(p\) a prime. Let \(B_{0}\) denote the principal \(p\)-block of \(G\)._
1. \(k(B_{0})\geq|\mathbf{Z}(G)|_{p}l(B_{0})\)_._
2. \(l(B_{0})=1\) _if, and only if,_ \(G\) _has a normal_ \(p\)_-complement._
Proof.: Part (i) is a direct consequence of [25, Theorem 5.12]. Part (ii) is [25, Corollary 6.13].
**Lemma 3.2**.: _Let \(M\trianglelefteq
We note that the example of \(G=\operatorname{Sym}_{3}\) and \(N=\operatorname{Alt}_{3}=\langle(1\,2\,3)\rangle\) for \(p=3\) shows that the inequality in Lemma 3.5 is not always an equality.
### Proof of Theorem A
As we will see later, known results on height zero characters in principal blocks essentially reduce Theorem A to the following.
**Theorem 3.6**.: _Let \(G\) be a finite group of order divisible by \(p\in\{3,5,7\}\). Suppose that \(P\in\operatorname{Syl}_{p}(G)\) is abelian and the principal \(p\)-block \(B_{0}\) of \(G\) has precisely \(6\) irreducible characters. Then \(p=3\) and \(|P|=9\)._
Proof.: Assume that the statement is false and let \(G\) be a counterexample of minimal order. In particular, \(k(B_{0})=6\) and \(|P|\neq 9\). If \(|P|<9\) then \(P\) would be \(\mathsf{C}_{3}\), \(\mathsf{C}_{5}\), or \(\mathsf{C}_{7}\), and Dade's cyclic-defect theory [1, Theorem 1] quickly shows that \(k(B_{0})\neq 6\). We therefore indeed have
\[|P|>9.\]
Furthermore, by Lemma 2.3(ii), we have \(\mathbf{O}_{p^{\prime}}(G)=1\).
We now show that \(G\) is not \(p\)-solvable. Assume that \(G\) is \(p\)-solvable. Then, by Fong's theorem (see [2, Theorem 10.20]), \(G\) has a unique \(p\)-block - the principal one, and so \(k(G)=k(B_{0})=6\). An inspection of the list of finite groups with \(6\) conjugacy classes ([2, Table 1]) reveals no counterexamples. Therefore \(G\) is in fact not \(p\)-solvable.
Let \(N\lhd\,G\) be a minimal normal subgroup of \(G\). As \(\mathbf{O}_{p^{\prime}}(G)=1\), we have that \(p\) divides the order of \(N\), which therefore is either an elementary abelian \(p\)-group or a semisimple group.
(A) We claim that \(p\) does not divide \(|G:N|\).
Assume otherwise. Let \(\bar{B}_{0}:=B_{0}(G/N)\) be the principal block of \(G/N\). On one hand we have \(k(\bar{B}_{0})\leqslant 5\) by Lemma 2.3 (i) and (iii); on the other hand, \(k(\bar{B}_{0})\geqslant\lceil 2\sqrt{p-1}\rceil\geqslant 3\) by [1, Theorem 1.1]. In summary,
\[k(\bar{B}_{0})\in\{3,4,5\}.\]
Also notice that, by Lemma 2.3, we have that \(\operatorname{Irr}(B_{0})\cap\operatorname{Irr}(G/N)\) is a union of blocks of \(G/N\) including \(\bar{B}_{0}\). Hence, we can write
\[\operatorname{Irr}(B_{0})=(\operatorname{Irr}(B_{0})\cap\operatorname{Irr}(G/ N))\cup\left\{\chi_{1},\dots,\chi_{l}\right\},\]
with \(l\in\{1,2,3\}\), where the union is disjoint.
(A1) Assume that \(l=3\). Then \(\operatorname{Irr}(B_{0})\cap\operatorname{Irr}(G/N)=\operatorname{Irr}(\bar{ B}_{0})\) and \(k(\bar{B}_{0})=3\). It follows from [1, Theorem 3.1] that
\[p=3\text{ and }|PN:N|=3.\]
Moreover, the action of \(G\) on \(\operatorname{Irr}(B_{0}(N))\backslash\{\mathbf{1}_{N}\}\) defines at most \(3\) orbits.
Suppose first that \(N\) is elementary abelian. Then
\[P\mathbf{C}_{G}(P)=\mathbf{C}_{G}(P)\subseteq\mathbf{C}_{G}(N)=:M\lhd\,G.\]
By Lemma 3.2, \(B_{0}\) is the only block of \(G\) covering \(B_{0}(M)\) and \(1\leqslant k(G/M)\leqslant 3\), as \(\operatorname{Irr}(G/M)\subseteq\operatorname{Irr}(B_{0})\cap\operatorname{ Irr}(G/N)=\operatorname{Irr}(\bar{B}_{0})\). Since \(G/M\) is a \(3^{\prime}\)-group, we necessarily have \(|G/M|\leqslant 2\). If \(G=M\), then \(N\subseteq\mathbf{Z}(G)\) and by Lemma 3.1, \(6=k(B_{0})\geqslant|N|l(B_{0})\), implying that \(|N|\) must be \(3\), so that \(|P|=3|N|=9\), a contradiction. We therefore must
have \(|G/M|=2\). As the action of \(G/M\) on the nontrivial elements of \(N\) defines at most \(3\) orbits, we have \(|N|\leqslant 7\), and thus \(|N|=3\), which implies that \(|P|=9\), a contradiction again.
Now suppose that \(N\cong S^{t}\) for some non-abelian simple group \(S\) of order divisible by \(p=3\) and \(t\in\mathbb{Z}^{+}\). By [11, Theorem 2.2], \(\operatorname{Aut}(S)\) produces at least \(2\) orbits on \(\operatorname{Irr}(B_{0}(S))\backslash\{\mathbf{1}_{S}\}\). Since the action of \(G\) on \(\operatorname{Irr}(B_{0}(N))\backslash\{\mathbf{1}_{N}\}\) defines at most \(3\) orbits, we deduce that \(t=1\) and \(N=S\) is simple (otherwise, assuming that \(\alpha,\beta\in\operatorname{Irr}(B_{0}(S))\backslash\{\mathbf{1}_{S}\}\) lie in different \(\operatorname{Aut}(S)\)-orbits, then \(\alpha^{t}\), \(\alpha\times\mathbf{1}_{S}^{t-1}\), \(\beta^{t}\) and \(\beta\times\mathbf{1}_{S}^{t-1}\) would lie in different \(G\)-orbits). We then have \(|P:(P\cap S)|=|PS:S|=3\). The fact that \(G\) is a counterexample then implies that \(Q:=P\cap S\in\operatorname{Syl}_{3}(S)\) has order at least \(9\).
We now use Theorem 2.1, with the almost simple group \(G/\mathbf{C}_{G}(S)\) in place of \(A\), to arrive at one of the following cases.
1. The action of \(G\) on \(\operatorname{Irr}(B_{0}(S))\backslash\{\mathbf{1}_{S}\}\) defines at least \(4\) orbits, a contradiction.
2. \(S\in\{\operatorname{PSL}_{2}(q),\operatorname{PSL}_{3}(q),\operatorname{PSU}_ {3}(q)\}\) with \((3,q)=1\), and \(3\) divides \(|G:S\mathbf{C}_{G}(S)|\). Since \(|G:S|_{3}=3\), \(\mathbf{C}_{G}(S)\) has order not divisible by \(3\). Then \(\mathbf{C}_{G}(S)=1\), because \(\mathbf{O}_{3^{\prime}}(G)=1\) and \(G=A\) is almost simple. Theorem 2.1(b) yields \(k(B_{0})>6\), a contradiction.
3. \(S=\operatorname{Alt}_{6}\) and \(A:=G/\mathbf{C}_{G}(S)\) has a subgroup isomorphic to \(\operatorname{M}_{10}\). It follows that \(A\) is either \(M_{10}\) or the full automorphism group \(\operatorname{Aut}(S)\). In fact, as \(k(B_{0}(A))\leqslant k(B_{0})=6\) and \(k(B_{0}(\operatorname{Aut}(S)))>6\), we have \(A\cong\operatorname{M}_{10}\), in which case \(k(B_{0}(A))=k(B_{0})=6\). We conclude that \(3\) does not divide the order of \(\mathbf{C}_{G}(S)\) (otherwise \(B_{0}(\mathbf{C}_{G}(S))\) contains a nontrivial character, say \(\theta\), and by Lemma 2.3(iii), \(B_{0}(G)\) would have a character lying above \(\theta\), implying that \(k(B_{0}(A))<k(B_{0})\)). In particular, \(|G|_{3}=|\operatorname{Alt}_{6}|_{3}=9\) and \(G\) is not a counterexample.
(A2) Assume that \(l=2\). If \(\operatorname{Irr}(\bar{B}_{0})\subsetneq\operatorname{Irr}(B_{0})\cap \operatorname{Irr}(G/N)\) then \(\operatorname{Irr}(B_{0})\) would contain a block of \(p\)-defect zero of \(G/N\), but that is impossible as every character in \(\operatorname{Irr}(B_{0})\) has degree coprime to \(p\) by [12]. Thus \(\operatorname{Irr}(\bar{B}_{0})=\operatorname{Irr}(B_{0})\cap\operatorname{Irr }(G/N)\). In particular \(k(\bar{B}_{0})=4\) and, by [10, Theorem 1.1],
\[p=5\text{ and }|PN:N|=5.\]
In this case the action of \(G\) on \(\operatorname{Irr}(B_{0}(N))\backslash\{\mathbf{1}_{N}\}\) defines at most \(2\) orbits.
Suppose that \(N\) is elementary abelian. Then, as above, \(P\subseteq\mathbf{C}_{G}(N):=M\lneq G\). By Lemma 3.2, \(B_{0}\) is the only block covering \(B_{0}(M)\). Consequently, Lemma 2.3(iv) implies that \(\operatorname{Irr}(G/M)\subseteq\operatorname{Irr}(\bar{B}_{0})\) and so \(1\leqslant k(G/M)\leqslant 4\). Moreover, using Lemma 2.3 (iii) and (iv), considering a non-principal character \(\theta\in\operatorname{Irr}(B_{0}(M/N))\subseteq\operatorname{Irr}(B_{0}(M))\), we in fact have that \(k(G/M)<4\). If \(G=M\) then \(N\subseteq\mathbf{Z}(G)\) and, by Lemma 3.1(i), \(6=k(B_{0})\geqslant 5l(B_{0})\). It follows that \(l(B_{0})=1\) and, by Lemma 3.1(ii), \(G\) would be \(p\)-solvable, a contradiction. Hence the action of the nontrivial group \(G/M\) on \(N\) is faithful and defines at most \(2\) orbits on the nontrivial elements of \(N\). The only possibility is that \(|N|=5\) and \(|G/M|=2\). Since \(\operatorname{Irr}(B_{0}(M))\) consists of the irreducible constituents of \(\psi_{M}\) for \(\psi\in\operatorname{Irr}(B_{0})\), the fact that \(|G/M|=2\) forces \(k(B_{0}(M))\in\{3,6,9\}\). As \(p=5\), the main result of [10] implies that \(k(B_{0}(M))\geqslant 2\sqrt{p-1}=4\). Furthermore, \(k(B_{0}(M))\) cannot be \(6\) either, by the minimality of \(G\). So \(k(B_{0}(M))=9\). Lemma 3.1 then implies that \(l(B_{0}(M))=1\) and \(M\) is \(p\)-solvable. Thus \(G\) is \(p\)-solvable as well, a contradiction.
Suppose that \(N\cong S^{t}\) for some non-abelian simple group \(S\) of order divisible by \(p=5\) and \(t\in\mathbb{Z}^{+}\). If \(t>1\), then, as before, [16, Theorem 2.2] shows that \(G\) defines more than \(2\) orbits when acting on \(\operatorname{Irr}(B_{0}(N))\backslash\{\mathbf{1}_{N}\}\), violating the fact that \(l=2\). Hence \(N=S\lhd\,G\).
Assume first that \(S\neq P\Omega_{8}^{+}(q)\). By [11, Proposition 2.1(i)], there are some \(\mathbf{1}_{S}\neq\theta\in\operatorname{Irr}(B_{0}(S))\) that extends to \(\hat{\theta}\in\operatorname{Irr}(B_{0}(G))\) and \(\mathbf{1}_{S}\neq\varphi\in\operatorname{Irr}(B_{0}(S))\) with \(\varphi(1)\nmid\theta(1)\). Furthermore, both \(\theta\) and \(\varphi\) have degree coprime to \(p\). Recall that \(k(B_{0}(G/S))=k(B_{0}(G/N))\geqslant 3\). We therefore can write
\[\operatorname{Irr}(B_{0})=\{\mathbf{1}_{G},\alpha,\beta,\gamma,\chi_{1},\chi_ {2}\}\]
so that \(\alpha,\beta,\gamma\) contain \(S\) in their kernels, \(\chi_{1}=\hat{\theta}\), and \(\chi_{2}\) lies over \(\varphi\). In particular, \(\chi_{1}\) is the only character in \(\operatorname{Irr}(B_{0})\) that is above \(\theta\). This together with the facts that \(p\nmid\theta(1)\) and \(p\mid|G:S|\) would contradict Lemma 3.5.
We are left with the case \(S=\operatorname{P\Omega}_{8}^{+}(q)\). Then by [11, Proposition 2.1.(ii)], the set \(\operatorname{Irr}(B_{0}(S))\backslash\{\mathbf{1}_{S}\}\) contains two \(\operatorname{Aut}(S)\)-invariant members. Since \(k(B_{0}(S))\geqslant 2\sqrt{p-1}=4\) by [10, Theorem 1.2.(i)], it follows that the action of \(G\) on \(\operatorname{Irr}(B_{0}(S))\backslash\{\mathbf{1}_{S}\}\) defines at least \(3\) different orbits, which is again a contradiction.
(A3) Assume that \(l=1\). If \(\operatorname{Irr}(\bar{B}_{0})\subsetneq\operatorname{Irr}(B_{0})\cap \operatorname{Irr}(G/N)\) then arguing as in the second sentence of case (A2) we see that \(\operatorname{Irr}(B_{0})\) contains a block \(\bar{B}_{1}\) of \(G/N\) with \(k(\bar{B}_{1})=2\). But \(k(\bar{B}_{1})=2\) would force \(p=2\) by [10, Theorem A], contradicting our hypothesis on \(p\). Hence \(k(\bar{B}_{0})=5\). Using the main result of [11], we then have
\[p\in\{5,7\}\text{ and }|PN:N|=p.\]
In this case all the characters in \(\operatorname{Irr}(B_{0}(N))\backslash\{\mathbf{1}_{N}\}\) must be \(G\)-conjugate (and lie under \(\chi_{1}\)). In particular, the set of character degrees of \(\operatorname{Irr}(B_{0}(N))\) has size \(2\), and it follows from [16, Theorem A] that \(N\) is \(p\)-solvable. We therefore conclude that \(N\) is an elementary abelian \(p\)-group. Also, \(N\subseteq P\) and \(|P/N|=p\).
As before, \(B_{0}\) is the only block covering \(B_{0}(M)\), where \(M:=\mathbf{C}_{G}(N)\lhd\,G\). Moreover, \(k(G/M)<5\). If \(G=M\), then \(N\subseteq\mathbf{Z}(G)\) and Lemma 3.1(i) implies that \(6=k(B_{0})\geqslant|N|l(B_{0})\), forcing \(l(B_{0})=1\) as \(p\geqslant 5\), and so \(G\) would be \(p\)-solvable by Lemma 3.1(ii), a contradiction. Therefore the \(p^{\prime}\)-group \(G/M\in\{\mathbf{C}_{2},\mathbf{C}_{3},\operatorname{Sym}_{3},\mathbf{C}_{4}, \mathbf{C}_{2}\times\mathbf{C}_{2},\mathbf{D}_{10},\operatorname{Alt}_{4}\}\) acts faithfully on \(N\) and transitively on \(N\backslash\{1\}\). (See [12] for the list of finite groups of relatively small class number.) There are only two possible scenarios: \(|G/M|=4\) and \(|N|=5\) or \(G/M\cong\operatorname{Sym}_{3}\) and \(|N|=7\). The latter case in fact cannot happen as \(\operatorname{Aut}(\mathbf{C}_{7})\cong\mathbf{C}_{6}\). In the former case, since \(\operatorname{Irr}(G/M)\subseteq\operatorname{Irr}(\bar{B}_{0})\) and \(k(\bar{B}_{0})=5\), the principal block \(\bar{B}_{0}\) of \(G/N\) would have only \(2\) different character degrees. It again follows from the main result of [16] that \(G/N\), and therefore \(G\), would be \(p\)-solvable, a contradiction.
(B) We have shown that \(p\) does not divide \(|G:N|\). Therefore we have \(P\leqslant N\). In particular, as \(G\) is not \(p\)-solvable, \(N\) is isomorphic to a direct product \(S^{t}\) of \(t\) copies of a non-abelian simple group \(S\) of order divisible by \(p\).
Let \(M:=N\mathbf{C}_{G}(P)\). Then \(M\lhd\,G\) by the Frattini argument, and
\[k(B_{0}(M))=k(B_{0}(S))^{t}\]
by Theorem 3.3. Moreover, \(G/M\) transitively permutes the simple factors of \(N\). By Lemma 3.2, \(B_{0}\) is the only block of \(G\) covering \(B_{0}(M)\). In particular, \(\operatorname{Irr}(G|\phi)\subseteq\operatorname{Irr}(B_{0})\) for every \(\phi\in\operatorname{Irr}(B_{0}(M))\). Moreover, as \(k(B_{0})=6\), we have \(1\leq k(G/M)<6\).
If \(G=M\) then \(t=1\) and \(k(B_{0}(S))=k(B_{0}(G))=6\), and it follows from Theorem 2.2 that \(|P|=9\), contradicting the choice of \(G\) as a counterexample. If \(k(G/M)=5\) then all members of \(\operatorname{Irr}(B_{0}(M))\backslash\{\mathbf{1}_{M}\}\) lie under the same character in \(\operatorname{Irr}(B_{0})\). In particular, [10, Theorem A] implies that \(M\) is \(p\)-solvable, and so would be \(G\), which is not the case. We have shown that
\[k(G/M)\in\{2,3,4\}.\]
Recall that \(|P|>9\). Therefore, by previous results on possible Sylow structure of finite groups with up to \(5\) characters in the principal block (see Theorems 4.1-4.5 of [10]), we have \(k(B_{0}(M))\geq 6\). Note that \(P\in\operatorname{Syl}_{p}(M)\), and so the minimality of \(G\) as a counterexample further implies that
\[k(B_{0}(M))\geq 7.\]
Let us first handle the special case \((S,p)=(\operatorname{PSL}_{2}(3^{m}),3)\) for some \(m\geq 3\). If \(S=\operatorname{PSL}_{2}(27)\) then \(\operatorname{Aut}(S)\) has \(5\) orbits on \(\operatorname{Irr}(B_{0}(S))\backslash\{\mathbf{1}_{S}\}\) ([12]), so \(k(B_{0})\geq 5+k(G/M)\geq 7\) and we would be done. For \(S=\operatorname{PSL}_{2}(q)\) with \(q\geq 81\) we have \(k(B_{0}(S))=(q+3)/2\geq 42\) (the ordinary characters of \(B_{0}(S)\) are \(\operatorname{Irr}(S)\backslash\{\mathfrak{St}_{S}\}\) and \(k(\operatorname{PSL}_{2}(q))=(q+5)/2\) for odd \(q\)). We therefore have at least \(41\) nontrivial members in \(\operatorname{Irr}(B_{0}(M))\). With \(k(G/M)\leq 4\), we have \(|G/M|\leq 12\). So \(G\) produces at least \(\lceil 41/12\rceil=4\) orbits on \(\operatorname{Irr}(B_{0}(M))\backslash\{\mathbf{1}_{M}\}\). As \(k(B_{0})=6\), it follows that \(k(G/M)\leq 2\), which would imply that \(G\) now defines more than \(41/2\) orbits on \(\operatorname{Irr}(B_{0}(M))\backslash\{\mathbf{1}_{M}\}\), a contradiction.
We may assume from now on that \((S,p)\neq(\operatorname{PSL}_{2}(3^{m}),3)\) for every \(m\geq 3\). Then \(\operatorname{Irr}(B_{0}(S))\backslash\{\mathbf{1}_{S}\}\) contains an \(\operatorname{Aut}(S)\)-invariant member \(\phi\) by [1, Proposition 2.1] for \(p\geq 5\) and by [10, Proposition 3.7] for \(p=3\) and \(S\neq\operatorname{PSL}_{2}(3^{n})\) for \(n\geq 2\). (Note that the irreducible character of degree \(10\) of \(S=\operatorname{Alt}_{6}\cong\operatorname{PSL}_{2}(9)\) is \(\operatorname{Aut}(S)\)-invariant and lies in the principal \(3\)-block.) In particular, by Theorem 3.3, there exists some
\[\mathbf{1}_{M}\neq\psi\in\operatorname{Irr}(B_{0}(M))\]
that is \(G\)-invariant. More precisely, under the Alperin-Dade correspondence, \(\psi\) corresponds to \(\phi^{t}\) where \(\phi\in\operatorname{Irr}(B_{0}(S))\backslash\{\mathbf{1}_{S}\}\) is \(\operatorname{Aut}(S)\)-invariant.
(B1) Assume that \(k:=k(G/M)\in\{2,3\}\). Then all the Sylow subgroups of \(G/M\) are cyclic and \(\psi\) extends to \(G\) by Lemma 3.4. Therefore \(\operatorname{Irr}(B_{0})\) has \(2k\) members lying over the characters \(\mathbf{1}_{M}\) and \(\psi\) in \(\operatorname{Irr}(B_{0}(M))\). As \(k(B_{0}(M))\geq 7\), the remaining characters in \(\operatorname{Irr}(B_{0}(M))\), of which there are at least \(5\), produce at least \(\lceil 5/k\rceil\) additional irreducible characters in \(B_{0}\). We arrive at
\[k(B_{0})\geq 2k+\lceil 5/k\rceil,\]
which is greater than \(6\) when \(k\in\{2,3\}\), a contradiction.
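For concreteness, the two admissible values of \(k\) give
\[k=2:\ 2k+\lceil 5/k\rceil=4+3=7\qquad\text{and}\qquad k=3:\ 2k+\lceil 5/k\rceil=6+2=8,\]
both exceeding \(6\).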
(B2) Finally we consider \(k(G/M)=4\). Now \(\operatorname{Irr}(B_{0})\) contains four members lying above \(\mathbf{1}_{M}\) and at least one member above \(\psi\). Furthermore, the set \(\operatorname{Irr}(B_{0}(M))\backslash\{\mathbf{1}_{M},\psi\}\), which again has cardinality at least \(5\), would cover at least two \(G\)-orbits. As a result, we have
\[k(B_{0})\geq 4+1+2=7,\]
and this contradiction completes the proof.
The following result on \(2\)-blocks with metacyclic defect groups will be useful. The weaker case of maximal-class defect groups, which is what we really need, is due to Brauer [1] and Olsson [14].
**Lemma 3.7**.: _Let \(B\) be a \(2\)-block of a finite group with metacyclic defect group \(D\) of order at least \(8\). Then either \(k(B)\geq 7\) or one of the following holds:_
1. \(B\) _is nilpotent and_ \(k(B)=k(D)\)_._
2. \(D=\mathsf{D}_{8}\) _and_ \(k(B)=5\)_._
Proof.: This follows from [1, Theorem 8.1].
**Theorem 3.8**.: _Let \(G\) be a finite group, \(p\) a prime, and \(P\in\operatorname{Syl}_{p}(G)\). Suppose that the principal \(p\)-block of \(G\) has precisely six irreducible characters. Then \(p=3\) and \(|P|=9\)._
Proof.: Recall that \(B_{0}\) denotes the principal \(p\)-block of \(G\). First, as \(k(B_{0})=6>1\), we know that \(G\) has order divisible by \(p\). As before, let \(k_{0}(B_{0})\) denote the number of height zero characters in \(B_{0}\). Obviously \(k_{0}(B_{0})\leq 6\) and moreover \(k_{0}(B_{0})\geq 2\) by [11, Problem 3.11]. By [13, Theorems 4.2 and 4.3] we further have \(k_{0}(B_{0})\geq 4\). The case \(k_{0}(B_{0})=5\) cannot happen since otherwise, by [13, Theorem 1.2(C)], \(P\) is cyclic, and thus \(k(B_{0})=k_{0}(B_{0})=5\) by the main result of [11], which is a contradiction. We now have
\[k_{0}(B_{0})\in\{4,6\}.\]
Suppose that \(k_{0}(B_{0})=4\). Then it follows from [13, Theorem 1.2(B)] that \(p=2\) and \(P\) has maximal class (so \(P\) is either dihedral, semidihedral, or generalized quaternion). Since \(k_{0}(B_{0})<k(B_{0})\), \(P\) is nonabelian, again by [11]. In particular, \(P\) is a metacyclic group of order at least \(8\). Using Lemma 3.7, we deduce that \(B_{0}\) is nilpotent and \(6=k(B_{0})=k(P)\). However, according to [12, Table 1], there is no \(p\)-group with precisely six conjugacy classes.
We are left with the case \(k_{0}(B_{0})=k(B_{0})=6\), and so \(P\) is abelian by [11]. Using [1, Corollary 1.3.(i)], we deduce that \(p\) must be odd. On the other hand, the main result of [13] implies that \(p\leq k(B_{0})^{2}/4+1=10\), and we conclude that \(p\in\{3,5,7\}\). The result now follows from Theorem 3.6.
### Proof of Theorem B
**Theorem 3.9**.: _Let \(G\) be a finite group, \(p\) a prime, and \(P\in\operatorname{Syl}_{p}(G)\). Let \(B_{0}\) denote the principal \(p\)-block of \(G\). Then \(k(B_{0})=6\) if and only if precisely one of the following happens:_
1. \(P=\mathsf{C}_{9}\) _and_ \(|\mathbf{N}_{G}(P):\mathbf{C}_{G}(P)|=2\)_._
2. \(P=\mathsf{C}_{3}\times\mathsf{C}_{3}\) _and either_ \(\mathbf{N}_{G}(P)/\mathbf{C}_{G}(P)\in\{\mathsf{C}_{4},\mathsf{Q}_{8}\}\) _or_ \(\mathbf{N}_{G}(P)/\mathbf{C}_{G}(P)\cong\mathsf{C}_{2}\) _acts fixed-point freely on_ \(P\)_._
Proof.: In either implication of the statement, by Theorem A, we have \(p=3\) and \(|P|=9\). In particular, \(P\) is abelian and \(k_{0}(B_{0})=k(B_{0})\) by [11].
If \(P\) is cyclic then \(\mathbf{N}_{G}(P)/\mathbf{C}_{G}(P)\) is a \(3^{\prime}\)-subgroup of \(\operatorname{Aut}(P)\cong\mathsf{C}_{6}\). By Dade's cyclic-defect theory [1, Theorem 1], we have \(k(B_{0})=8/f+f\), where \(f:=|\mathbf{N}_{G}(P)/\mathbf{C}_{G}(P)|\). The only possibilities are that \(k(B_{0})=6\) with \(f=2\) or \(k(B_{0})=9\) with \(f=1\).
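Concretely, \(f\) is the order of a \(3^{\prime}\)-subgroup of \(\operatorname{Aut}(P)\cong\mathsf{C}_{6}\), so \(f\in\{1,2\}\), and the formula evaluates to
\[f=1:\ k(B_{0})=8+1=9,\qquad f=2:\ k(B_{0})=4+2=6.\]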
If \(P\) is elementary abelian, then Broue's abelian defect conjecture holds for \(B_{0}\) by [10]. In particular, we have \(k_{0}(B_{0})=k_{0}(B_{0}(\mathbf{N}_{G}(P)))\). Since every irreducible character of \(\mathbf{N}_{G}(P)\) has \(p^{\prime}\)-degree,
\[k_{0}(B_{0}(\mathbf{N}_{G}(P)))=k(B_{0}(\mathbf{N}_{G}(P)))=k(\mathbf{N}_{G}(P) /\mathbf{O}_{p^{\prime}}(\mathbf{N}_{G}(P))),\]
where the second equality follows from Lemma 2.3(ii). In summary,
\[k(B_{0})=k(\mathbf{N}_{G}(P)/\mathbf{O}_{p^{\prime}}(\mathbf{N}_{G}(P))).\]
Note that, as \(P\) is abelian, we have \(\mathbf{C}_{G}(P)\cong\mathbf{O}_{p^{\prime}}(\mathbf{N}_{G}(P))\times P\), so that \(\mathbf{N}_{G}(P)/\mathbf{O}_{p^{\prime}}(\mathbf{N}_{G}(P))\) is a semidirect product of the \(3^{\prime}\)-group
\[A:=\mathbf{N}_{G}(P)/\mathbf{C}_{G}(P)\]
acting faithfully (and coprimely) on \(P\). Since \(\operatorname{Aut}(P)\cong\operatorname{GL}_{2}(3)\), there are eight possibilities for the \(3^{\prime}\)-group \(A\leq\operatorname{GL}_{2}(3)\):
\[1,\mathsf{C}_{2},\mathsf{C}_{4},\mathsf{C}_{2}\times\mathsf{C}_{2},\mathsf{C}_ {8},\mathsf{D}_{8},\mathsf{Q}_{8},\text{ and }\mathsf{SD}_{16}.\]
(Here \(\mathsf{SD}_{16}\) is the semi-dihedral group of order 16.) While \(\mathsf{C}_{2}\) can either invert every element of \(P\) or invert just elements in one factor \(\mathsf{C}_{3}\) and fix elements of the other, each of the other possibilities can only act on \(P\) in a unique way. Using [1], we easily check that the corresponding semidirect products have 6 or 9 conjugacy classes. Indeed, a straightforward inspection of [11, Table 1] reveals that \(k(PA)=6\) if and only if \(A\in\{\mathsf{C}_{4},\mathsf{Q}_{8}\}\) or \(A=\mathsf{C}_{2}\) acts on \(P\) by inversion. This completes the proof.
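For readers who wish to reproduce this check without consulting the quoted tables, the following brute-force script is a hypothetical illustration (not part of the paper): it counts the conjugacy classes of \(P\rtimes A\) for the three actions singled out in Theorem 3.9, with matrix generators chosen for this sketch.

```python
# Brute-force count of conjugacy classes of (C3 x C3) : A for the actions in
# Theorem 3.9.  Elements are pairs (v, M) with v in F_3^2 and M in a matrix
# group A <= GL_2(3); multiplication is (v, M)(w, N) = (v + M w, M N) mod 3.
from itertools import product

def matmul(M, N):
    return tuple(tuple(sum(M[i][k] * N[k][j] for k in range(2)) % 3
                       for j in range(2)) for i in range(2))

def matvec(M, v):
    return tuple(sum(M[i][k] * v[k] for k in range(2)) % 3 for i in range(2))

def closure(gens):
    """Subgroup of GL_2(3) generated by the given matrices."""
    group, frontier = set(gens), list(gens)
    while frontier:
        M = frontier.pop()
        for N in list(group):
            for prod_ in (matmul(M, N), matmul(N, M)):
                if prod_ not in group:
                    group.add(prod_)
                    frontier.append(prod_)
    return group

I2 = ((1, 0), (0, 1))

def class_number(A):
    """Number of conjugacy classes of (C3 x C3) : A, computed by brute force."""
    G = [(v, M) for v in product(range(3), repeat=2) for M in A]
    def mul(g, h):
        (v, M), (w, N) = g, h
        Mw = matvec(M, w)
        return (tuple((v[i] + Mw[i]) % 3 for i in range(2)), matmul(M, N))
    e = ((0, 0), I2)
    inv = {g: next(h for h in G if mul(g, h) == e) for g in G}
    return len({frozenset(mul(mul(h, g), inv[h]) for h in G) for g in G})

neg = ((2, 0), (0, 2))   # -I: C2 acting on P by inversion (fixed-point freely)
i4  = ((0, 2), (1, 0))   # order 4, no nonzero fixed vectors: generates C4
j8  = ((1, 1), (1, 2))   # together with i4 generates Q8 inside SL_2(3)

for name, gens in [("C2 (inversion)", [neg]), ("C4", [i4]), ("Q8", [i4, j8])]:
    print(name, class_number(closure(gens + [I2])))   # each prints 6
```

Each of the three semidirect products has exactly six conjugacy classes, in agreement with the statement of Theorem 3.9.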
We conclude the paper with a summary of what is now known on the problem of classifying defect groups of blocks with a "relatively small" number of irreducible ordinary characters. In Table 1, as usual, we have used \(B\) for an arbitrary block and \(B_{0}\) for a principal one. Also, \(k(B)\) and \(l(B)\) denote the numbers of irreducible ordinary and Brauer characters, respectively, in \(B\). A defect group of \(B\) is denoted by \(D\) and, in the case of principal blocks, by \(P\). Moreover, a root of \(B\) is denoted by \(b\), that is, a block of \(D\mathbf{C}_{G}(D)\) with defect group \(D\) that induces \(B\); see [12, Theorem 9.7 and p. 198]. The second column presents all the possible defect-group structures (for a defect group \(D\)) of a block with the values of \(k(B)\) and \(l(B)\) given in the first column. The third column presents the "if and only if" conditions under which those values are achieved; a missing entry indicates that no further conditions are needed. As usual, \(T(b)\) denotes the stabilizer of the block \(b\), as in [12, p. 193].
|
2308.05079
|
Space-bounded quantum state testing via space-efficient quantum singular
value transformation
|
Driven by exploring the power of quantum computation with a limited number of
qubits, we present a novel complete characterization for space-bounded quantum
computation, which encompasses settings with one-sided error (unitary coRQL)
and two-sided error (BQL), approached from a quantum state testing perspective:
- The first family of natural complete problems for unitary coRQL, i.e.,
space-bounded quantum state certification for trace distance and
Hilbert-Schmidt distance;
- A new family of natural complete problems for BQL, i.e., space-bounded
quantum state testing for trace distance, Hilbert-Schmidt distance, and quantum
entropy difference.
In the space-bounded quantum state testing problem, we consider two
logarithmic-qubit quantum circuits (devices) denoted as $Q_0$ and $Q_1$, which
prepare quantum states $\rho_0$ and $\rho_1$, respectively, with access to
their ``source code''. Our goal is to decide whether $\rho_0$ is
$\epsilon_1$-close to or $\epsilon_2$-far from $\rho_1$ with respect to a
specified distance-like measure. Interestingly, unlike time-bounded state
testing problems, our results reveal that the space-bounded state testing
problems all correspond to the same class. Moreover, our algorithms on the
trace distance inspire an algorithmic Holevo-Helstrom measurement, implying
QSZK is in QIP(2) with a quantum linear-space honest prover.
Our results primarily build upon a space-efficient variant of the quantum
singular value transformation (QSVT) introduced by Gily\'en, Su, Low, and Wiebe
(STOC 2019), which is of independent interest. Our technique provides a unified
approach for designing space-bounded quantum algorithms. Specifically, we show
that implementing QSVT for any bounded polynomial that approximates a
piecewise-smooth function incurs only a constant overhead in terms of the space
required for special forms of the projected unitary encoding.
|
François Le Gall, Yupan Liu, Qisheng Wang
|
2023-08-09T17:16:19Z
|
http://arxiv.org/abs/2308.05079v2
|
# Space-bounded quantum state testing
###### Abstract
Driven by exploring the power of quantum computation with a limited number of qubits, we present a novel complete characterization for space-bounded quantum computation, which encompasses settings with one-sided error (unitary coRQL) and two-sided error (BQL), approached from a quantum (mixed) state testing perspective:
* The _first_ family of natural complete problems for unitary coRQL, namely _space-bounded quantum state certification_ for trace distance and Hilbert-Schmidt distance;
* A new family of (arguably simpler) natural complete problems for BQL, namely _space-bounded quantum state testing_ for trace distance, Hilbert-Schmidt distance, and (von Neumann) entropy difference.
In the space-bounded quantum state testing problem, we consider two logarithmic-qubit quantum circuits (devices) denoted as \(Q_{0}\) and \(Q_{1}\), which prepare quantum states \(\rho_{0}\) and \(\rho_{1}\), respectively, with access to their "source code". Our goal is to decide whether \(\rho_{0}\) is \(\epsilon_{1}\)-close to or \(\epsilon_{2}\)-far from \(\rho_{1}\) with respect to a specified distance-like measure. Interestingly, unlike time-bounded state testing problems, which exhibit computational hardness depending on the chosen distance-like measure (either QSZK-complete or BQP-complete), our results reveal that the space-bounded state testing problems, considering all three measures, are computationally as easy as preparing quantum states.
Our results primarily build upon _a space-efficient variant_ of the quantum singular value transformation (QSVT) introduced by Gilyen, Su, Low, and Wiebe (STOC 2019), which is of independent interest. Our technique provides a unified approach for designing space-bounded quantum algorithms. Specifically, we show that implementing QSVT for any bounded polynomial that approximates a piecewise-smooth function incurs only a constant overhead in terms of the space required for (special forms of) the projected unitary encoding.
###### Contents
* 1 Introduction
* 1.1 Main results
* 1.2 Background on space-bounded quantum computation
* 1.3 Time-bounded and space-bounded distribution and state testing
* 1.3.1 Time-bounded distribution and state testing
* 1.3.2 Space-bounded distribution and state testing
* 1.4 Proof technique: Space-efficient quantum singular value transformation
* 1.5 Proof overview: A general framework for quantum state testing
* 1.6 Discussion and open problems
* 1.7 Related works: more on quantum state testing problems
* 2 Preliminaries
* 2.1 Distances and divergences for quantum states
* 2.2 Space-bounded quantum computation
* 2.3 Near-minimax approximation by Chebyshev interpolation
* 2.4 Tools for space-bounded randomized and quantum algorithms
* 3 Space-efficient quantum singular value transformations
* 3.1 Space-efficient bounded polynomial approximations
* 3.1.1 Bounded functions
* 3.1.2 Piecewise-smooth functions
* 3.2 Applying Chebyshev interpolation to bitstring indexed encodings
* 3.3 Examples: the sign function and the normalized logarithmic function
* 3.4 Application: space-efficient error reduction for unitary quantum computations
* 4 Space-bounded quantum state testing
* 4.1 Space-bounded quantum state testing: a general framework
* 4.2 GapQSD\({}_{\log}\) is in BQL
* 4.3 GapQED\({}_{\log}\) and GapQJS\({}_{\log}\) are in BQL
* 4.4 \(\overline{\text{CertQSD}}_{\log}\) and \(\overline{\text{CertQHS}}_{\log}\) are in coRQ\({}_{\text{U}}\)L
* 4.4.1 \(\overline{\text{CertQSD}}_{\log}\) is in coRQ\({}_{\text{U}}\)L
* 4.4.2 \(\overline{\text{CertQHS}}_{\log}\) is in coRQ\({}_{\text{U}}\)L
* 4.5 BQL- and coRQ\({}_{\text{U}}\)L-hardness for space-bounded state testing problems
* 4.5.1 Hardness results for GapQSD\({}_{\log}\), GapQHS\({}_{\log}\), and their certification version
* 4.5.2 Hardness results for GapQJS\({}_{\log}\) and GapQED\({}_{\log}\)
* A Omitted proofs in space-efficient QSVT
* B Omitted proofs in space-bounded quantum state testing
## 1 Introduction
In recent years, exciting experimental advancements in quantum computing have been achieved, but concerns about their scalability persist. It thus becomes essential to characterize the computational power of feasible models of quantum computation that operate under restricted resources, such as _time_ (i.e., the number of gates in the circuit) and _space_ (i.e., the number of qubits on which the circuit acts). This paper specifically focuses on the latter aspect: what is the computational power of quantum computation with a limited number of qubits?
Previous studies [22, 23, 24] on complete problems of space-bounded quantum computation have primarily focused on well-conditioned versions of standard linear-algebraic problems [13, 14, 15] and have been limited to the two-sided error scenario. In contrast, we propose a novel family of complete problems that not only characterize the _one-sided error scenario (and extend to the two-sided scenario)_ but also arise from a quantum property testing perspective. Our new complete problems are arguably more natural and simpler, driven by recent intriguing challenges of verifying the intended functionality of quantum devices.
Consider the situation where a quantum device is designed to prepare a quantum (mixed) state \(\rho_{0}\), but a possibly malicious party could provide another quantum device that outputs a different \(n\)-qubit (mixed) state \(\rho_{1}\), claiming that \(\rho_{0}\approx_{\epsilon}\rho_{1}\). The problem of testing whether \(\rho_{0}\) is \(\epsilon_{1}\)-close to or \(\epsilon_{2}\)-far from \(\rho_{1}\) with respect to a specified distance-like measure, given the ability to produce copies of \(\rho_{0}\) and \(\rho_{1}\), is known as _quantum state testing_[25, Section 4]. Quantum state testing (resp., distribution testing) typically involves utilizing sample accesses to quantum states \(\rho_{0}\) and \(\rho_{1}\) (resp., distributions \(D_{0}\) and \(D_{1}\)) and determining the number of samples required to test the closeness between quantum states (resp., distributions). This problem is a quantum (non-commutative) generalization of classical property testing, which is a fundamental problem in theoretical computer science (see [16]), specifically (tolerant) distribution testing (see [17]). Moreover, this problem is an instance of the emerging field of quantum property testing (see [25]), which aims at designing quantum testers for the properties of quantum objects.
In this paper, we investigate quantum state testing problems where quantum states \(\rho_{0}\) and \(\rho_{1}\) are preparable by _computationally constrained resources_, specifically state-preparation circuits (viewed as the "source code" of devices) that are _(log)space-bounded_. Our main result conveys a conceptual message that testing quantum states prepared in bounded space is (computationally) as _easy_ as preparing these states in a space-bounded manner. Consequently, we can introduce the first family of natural \(\mathsf{coRQ}_{\mathsf{U}}\mathsf{L}\)-complete promise problems since Watrous [22] introduced unitary \(\mathsf{RQL}\) and \(\mathsf{coRQL}\) (known as \(\mathsf{RQ}_{\mathsf{U}}\mathsf{L}\) and \(\mathsf{coRQ}_{\mathsf{U}}\mathsf{L}\), respectively) in 2001, as well as a new family of natural \(\mathsf{BQL}\)-complete promise problems.
Our main technique is a _space-efficient variant_ of the quantum singular value transformation (QSVT) [10], distinguishing itself from prior works primarily focused on time-efficient QSVT. As time-efficient QSVT provides a unified framework for designing time-efficient quantum algorithms [10, 11], we believe our work indicates a unified approach to designing space-bounded quantum algorithms, potentially facilitating the discovery of new complete problems for \(\mathsf{BQL}\) and its one-sided error variants. Subsequently, we will first state our main results and then provide justifications for the significance of our results from various perspectives.
### Main results
We will commence by providing definitions for time- and space-bounded quantum circuits. We say that a quantum circuit \(Q\) is (_poly_)_time-bounded_ if \(Q\) is polynomial-size and acts on \(\mathrm{poly}(n)\) qubits. Likewise, we say that a quantum circuit \(Q\) is (_log_)_space-bounded_ if \(Q\) is polynomial-size and acts on \(O(\log n)\) qubits. It is worthwhile to note that primary complexity classes, e.g., \(\mathsf{BQL}\), \(\mathsf{coRQ}_{\mathsf{U}}\mathsf{L}\), and \(\mathsf{BPL}\), mentioned in this paper correspond to _promise problems_.
Complete characterizations of quantum logspace from state testing.While prior works [18, 19, 20] on \(\mathsf{BQL}\)-complete problems have mainly focused on well-conditioned versions of standard linear-algebraic problems (in \(\mathsf{DET}^{*}\)), our work takes a different perspective by exploring _quantum property testing_. Specifically, we investigate the problem of _space-bounded quantum state testing_, which aims to test the closeness between two quantum states that are preparable by (log)space-bounded quantum circuits (devices), with access to the corresponding "source code" of these devices.
We begin by considering a computational problem that serves as a "white-box" space-bounded counterpart of _quantum state certification_[1], equivalent to quantum state testing with one-sided error. Our first main theorem (Theorem 1.1) demonstrates the _first_ family of natural \(\mathsf{coRQ}_{\mathsf{U}}\mathsf{L}\)-complete problems in the context of space-bounded quantum state certification with respect to the trace distance (td) and the squared Hilbert-Schmidt distance (HS\({}^{2}\)).
**Theorem 1.1** (Informal of Theorem 4.5).: _The following (log)space-bounded quantum state certification problems are \(\mathsf{coRQ}_{\mathsf{U}}\mathsf{L}\)-complete: for any \(\alpha(n)\geq 1/\operatorname{poly}(n)\), decide whether_
1. \(\overline{\mathsf{CertQSD}}_{\log}\)_:_ \(\rho_{0}=\rho_{1}\) _or_ \(\operatorname{td}(\rho_{0},\rho_{1})\geq\alpha(n)\)_;_
2. \(\overline{\mathsf{CertQHS}}_{\log}\)_:_ \(\rho_{0}=\rho_{1}\) _or_ \(\operatorname{HS}^{2}(\rho_{0},\rho_{1})\geq\alpha(n)\)_;_
By extending the error requirement from one-sided to two-sided, we broaden the scope of space-bounded quantum state testing to include two more distance-like measures: the quantum entropy difference, denoted by \(\operatorname{S}(\rho_{0})-\operatorname{S}(\rho_{1})\), and the quantum Jensen-Shannon divergence (QJS\({}_{2}\)). As a result, we establish our second main theorem, introducing a new family of natural \(\mathsf{BQL}\)-complete problems:
**Theorem 1.2** (Informal of Theorem 4.6).: _The following (log)space-bounded quantum state testing problems are \(\mathsf{BQL}\)-complete: for any \(\alpha(n)\) and \(\beta(n)\) such that \(\alpha(n)-\beta(n)\geq 1/\operatorname{poly}(n)\), or for any \(g(n)\geq 1/\operatorname{poly}(n)\), decide whether_
1. \(\operatorname{GapQSD}_{\log}\)_:_ \(\operatorname{td}(\rho_{0},\rho_{1})\geq\alpha(n)\) _or_ \(\operatorname{td}(\rho_{0},\rho_{1})\leq\beta(n)\)_;_
2. \(\operatorname{GapQED}_{\log}\)_:_ \(\operatorname{S}(\rho_{0})-\operatorname{S}(\rho_{1})\geq g(n)\) _or_ \(\operatorname{S}(\rho_{1})-\operatorname{S}(\rho_{0})\geq g(n)\)_;_
3. \(\operatorname{GapQJS}_{\log}\)_:_ \(\operatorname{QJS}_{2}(\rho_{0},\rho_{1})\geq\alpha(n)\) _or_ \(\operatorname{QJS}_{2}(\rho_{0},\rho_{1})\leq\beta(n)\)_;_
4. \(\operatorname{GapQHS}_{\log}\)_:_ \(\operatorname{HS}^{2}(\rho_{0},\rho_{1})\geq\alpha(n)\) _or_ \(\operatorname{HS}^{2}(\rho_{0},\rho_{1})\leq\beta(n)\)_;_
Notably, Theorem 1.2(1) demonstrates that our algorithm for \(\operatorname{GapQSD}_{\log}\) exhibits a _polynomial advantage_ in space over the best-known classical algorithms [24], since Watrous implicitly showed in [24, Proposition 21] that \(\operatorname{GapQSD}_{\log}\) is contained in (classical) polylogarithmic space.1
Footnote 1: Notably, our algorithm for \(\operatorname{GapQSD}_{\log}\) provides an alternating proof for the original statement that \((\alpha,\beta)\)-QSD is in \(\mathsf{PSPACE}\) when \(\alpha(n)-\beta(n)\geq\exp(-\operatorname{poly}(n))\). In particular, Watrous [24] provided an algorithm in \(\mathsf{NC}\) to solve the Trace Norm Approximation problem on estimating \(\|X\|_{1}\) with polynomial precision, given that the polynomial-size matrix \(X\) enables evaluation of all entries in deterministic \(O(\log n)\) space.
Space-efficient quantum singular value transformation.Proving our main theorems mentioned above poses a significant challenge: establishing the containment in the relevant class (\(\mathsf{BQL}\) or \(\mathsf{coRQ}_{\mathsf{U}}\mathsf{L}\)), which is also the difficult direction for showing the known family of \(\mathsf{BQL}\)-complete problems [18, 19, 20].
Proving the containment for the one-sided error scenario is not an effortless task: such a task is not only already relatively complicated for \(\overline{\mathsf{CertQHS}}_{\log}\), but also additionally requires novel techniques for \(\overline{\mathsf{CertQSD}}_{\log}\). On the other hand, for two-sided error scenarios, while showing the containment is straightforward for \(\operatorname{GapQHS}_{\log}\), it still demands sophisticated techniques for all other problems, such as \(\operatorname{GapQSD}_{\log}\), \(\operatorname{GapQED}_{\log}\), and \(\operatorname{GapQJS}_{\log}\).
As explained in Section 1.4, our primary technical contribution and proof technique involve developing a space-efficient variant of the quantum singular value transformation (QSVT), which constitutes our third main theorem (Theorem 1.4).
### Background on space-bounded quantum computation
Watrous [20, 21] initiated research on space-bounded quantum computation and showed that fundamental properties, including closure under complement, hold for \(\mathsf{BQSPACE}[s(n)]\) with \(s(n)\geq\Omega(\log n)\). Watrous also investigated classical simulations of space-bounded quantum computation (with unbounded error), presenting deterministic simulations in \(O(s^{2}(n))\) space and unbounded-error randomized simulations in \(O(s(n))\) space. A decade later, van Melkebeek and Watson [21] provided a simultaneous \(\widehat{O}(t(n))\) time and \(O(s(n)+\log t(n))\) space unbounded-error randomized simulation for a bounded-error quantum algorithm in \(t(n)\) time and \(s(n)\) space. The complexity class corresponding to space-bounded quantum computation with \(s(n)=\Theta(\log(n))\) is known as \(\mathsf{BQL}\), or \(\mathsf{BQU}\mathsf{L}\) if only _unitary_ gates are permitted.
Significantly, several developments over the past two decades have shown that \(\mathsf{BQL}\) is well-defined, independent of the following factors in chronological order:
* **The choice of gateset**. The Solovay-Kitaev theorem [13] establishes that most quantum classes are gateset-independent, given that the gateset is closed under adjoint and all entries in gates have reasonable precision. The work of [21] presented a space-efficient counterpart of the Solovay-Kitaev theorem, implying that \(\mathsf{BQL}\) is also _gateset-independent_.
* **Error reduction**. Repeating \(\mathsf{BQU}\mathsf{L}\) sequentially necessitates reusing the workspace, making it unclear how to reduce errors for \(\mathsf{BQU}\mathsf{L}\) as intermediate measurements are not allowed. To address this issue, the work of [13] adapted the witness-preserving error reduction for \(\mathsf{QMA}\)[21] with several other ideas to the space-efficient setting.
* **Intermediate measurements**. In the space-bounded scenario, the principle of deferred measurement is not applicable since this approach leads to an exponential increase in space complexity. Initially, \(\mathsf{BQL}\) seemed more powerful than \(\mathsf{BQU}\mathsf{L}\) since we cannot directly demonstrate that \(\mathsf{BPL}\subseteq\mathsf{BQU}\mathsf{L}\). Recently, Fefferman and Remscrim [14] (as well as [14, 15]) proved the equivalence between \(\mathsf{BQL}\) and \(\mathsf{BQU}\mathsf{L}\), indicating a space-efficient approach to eliminating intermediate measurements.
\(\mathsf{BQL}\)**-complete problems.** Identifying natural complete problems for the class \(\mathsf{BQL}\) (or \(\mathsf{BQU}\mathsf{L}\)) is a crucial and intriguing question. Ta-Shma [12] proposed the first candidate \(\mathsf{BQL}\)-complete problem, building upon the work of Harrow, Hassidim, and Lloyd [12] which established a \(\mathsf{BQP}\)-complete problem for inverting a (polynomial-size) well-conditioned matrix. Specifically, Ta-Shma showed that inverting a well-conditioned matrix with polynomial precision is in \(\mathsf{BQL}\). Similarly, computing eigenvalues of an Hermitian matrix is also in \(\mathsf{BQL}\). These algorithms offer a quadratic space advantage over the best-known classical algorithms that saturate the classical simulation bound [20, 21, 22]. Fefferman and Lin [13] later improved upon this result to obtain the first natural \(\mathsf{BQU}\mathsf{L}\)-complete problem by ingeniously utilizing amplitude estimation to avoid intermediate measurements.
More recently, Fefferman and Remscrim [14] further extended this natural \(\mathsf{BQU}\mathsf{L}\)-complete problem (or \(\mathsf{BQL}\)-complete, equivalently) to a _family_ of natural \(\mathsf{BQL}\)-complete problems. They showed that a well-conditioned version of standard \(\mathsf{DET}^{*}\)-complete problems is \(\mathsf{BQL}\)-complete, where \(\mathsf{DET}^{*}\) denotes the class of problems that are \(\mathsf{NC}^{1}\) (Turing) reducible to intDET, including well-conditioned integer determinant (DET), well-conditioned matrix powering (MATPOW), and well-conditioned iterative matrix product (ITMATPROD), among others.
**\(\mathsf{RQ}_{\mathsf{U}}\mathsf{L}\)- and \(\mathsf{coRQ}_{\mathsf{U}}\mathsf{L}\)-complete problems.** Watrous [20] introduced the one-sided error counterpart of \(\mathsf{BQU}\mathsf{L}\), namely \(\mathsf{RQ}_{\mathsf{U}}\mathsf{L}\) and \(\mathsf{coRQ}_{\mathsf{U}}\mathsf{L}\), and developed error reduction techniques.
Moreover, Watrous proved that the undirected graph connectivity problem (USTCON) is in \(\mathsf{RQ}_{\mathsf{U}}\mathsf{L}\cap\mathsf{coRQ}_{\mathsf{U}}\mathsf{L}\) whereas Reingold [18] demonstrated that USTCON is in \(\mathsf{L}\) several years later. Recently, Fefferman and Remscrim [19] proposed a "verification" version of the well-conditioned iterative matrix product problem (vITMATPROD) as a _candidate_\(\mathsf{coRQL}\)-complete problem. However, although this problem is known to be \(\mathsf{coRQL}\)-hard, its containment remains _unresolved_. Specifically, vITMATPROD requires to decide whether a single entry in the product of polynomially many well-conditioned matrices is equal to zero.
### Time-bounded and space-bounded distribution and state testing
We summarize prior works and our main results for time-bounded2 and space-bounded distribution and state testing with respect to \(\ell_{1}\) norm, entropy difference, and \(\ell_{2}\) norm in Table 1.
Footnote 2: The problem of _time-bounded distribution (resp., state) testing_ aims to test the closeness between two distributions (resp., states) that are preparable by (poly)time-bounded circuits (devices), with access to the corresponding “source code” of these devices.
Interestingly, the sample complexity of testing the closeness of quantum states (resp., distributions) depends on the choice of distance-like measures,3 including the one-sided error counterpart known as _quantum state certification_[1]. In particular, for distance-like measures such as the \(\ell_{1}\) norm, called total variation distance in the case of distributions [10] and trace distance in the case of states [1], as well as classical entropy difference [12, 13] and its quantum analog [1, 14], the sample complexity of distribution and state testing is polynomial in the dimension \(N\). However, for distance-like measures such as the \(\ell_{2}\) norm, called Euclidean distance in the case of distributions [10] and Hilbert-Schmidt distance in the case of states [1], the sample complexity is _independent_ of dimension \(N\).
Footnote 3: It is noteworthy that the quantum entropy difference is not a distance.
As depicted in Table 1, this phenomenon that the required sample complexity for distribution and state testing, with polynomial precision and exponential dimension, depends on the choice of distance-like measure has reflections on time-bounded quantum state testing:
* For \(\ell_{1}\) norm and entropy difference, the time-bounded scenario is _seemingly much harder than_ preparing states or distributions since \(\mathsf{QSZK}\subseteq\mathsf{BQP}\) and \(\mathsf{SZK}\subseteq\mathsf{BPP}\) are unlikely.
* For \(\ell_{2}\) norm, the time-bounded scenario is _as easy as_ preparing states or distributions.
However, interestingly, a similar phenomenon _does not appear_ for space-bounded quantum state testing. Although no direct classical counterpart has been investigated before in a complexity-theoretic fashion, namely space-bounded distribution testing, there is another closely related model (a version of streaming distribution testing) that does not demonstrate an analogous phenomenon either, as we will discuss in Section 1.3.2.
| | \(\ell_{1}\) norm | \(\ell_{2}\) norm | Entropy |
| --- | --- | --- | --- |
| Classical, time-bounded | \(\mathsf{SZK}\)-complete5 [14, 15] | \(\mathsf{BPP}\)-complete (Folklore) | \(\mathsf{SZK}\)-complete [14, 15] |
| Quantum, time-bounded | \(\mathsf{QSZK}\)-complete6 [14, 15] | \(\mathsf{BQP}\)-complete [14, 15] | \(\mathsf{QSZK}\)-complete [16, 17] |
| Quantum, space-bounded | \(\mathsf{BQL}\)-complete (Theorem 1.2(1)) | \(\mathsf{BQL}\)-complete ([14] and Theorem 1.2(4)) | \(\mathsf{BQL}\)-complete (Theorem 1.2(2)) |

Table 1: Time- and space-bounded distribution or state testing.
#### 1.3.1 Time-bounded distribution and state testing
We review prior works on time-bounded state (resp., distribution) testing, with a particular focus on testing the closeness between states (resp., distributions) are preparable by (poly)time-bounded quantum (resp., classical) circuits (device), with access to the "source code" of corresponding devices. For time-bounded distribution testing, we also recommend a brief survey [13] by Goldreich and Vadhan.
\(\ell_{1}\) **norm scenarios.** Sahai and Vadhan [14] initiated the study of the time-bounded distribution testing problem, where distributions \(D_{0}\) and \(D_{1}\) are _efficiently samplable_, and the distance-like measure is the total variation distance. Their work named this problem Statistical Difference (SD). In particular, the promise problem \((\alpha,\beta)\)-SD asks whether \(D_{0}\) is \(\alpha\)-far from or \(\beta\)-close to \(D_{1}\) with respect to \(\|D_{0}-D_{1}\|_{\mathrm{TV}}\). Although sampling from the distribution is in BPP,4 testing the closeness between these distributions is SZK-complete [14, 14], where SZK is the class of promise problems possessing statistical zero-knowledge proofs. It is noteworthy that the SZK containment of \((\alpha,\beta)\)-SD for any \(\alpha(n)-\beta(n)\geq 1/\operatorname{poly}(n)\) is currently unknown.5 In addition, we note that SZK is contained in \(\mathsf{AM}\cap\mathsf{coAM}\)[15, 16].
Footnote 4: Rigorously speaking, as an instance in SD, sample-generating circuits are not necessarily (poly)time-uniform.
Footnote 5: The works of [14, 14] demonstrated that \((\alpha,\beta)\)-SD is in SZK for any constant \(\alpha^{2}-\beta>0\). The same technique works for the parameter regime \(\alpha^{2}(n)-\beta(n)\geq 1/(\log n)\). However, further improvement of the parameter regime requires new ideas, as clarified in [17]. Recently, the work of [1] improved the parameter regime to \(\alpha^{2}(n)-\beta(n)\geq 1/\operatorname{poly}(n)\) by utilizing a series of tailor-made reductions. Currently, we only know that \((\alpha,\beta)\)-SD for \(\alpha(n)-\beta(n)\geq 1/\operatorname{poly}(n)\) is also in \(\mathsf{AM}\cap\mathsf{coAM}\)[13].
Following the pioneering work [14], Watrous [16] introduced the time-bounded quantum state testing problem, where two quantum states \(\rho_{0}\) and \(\rho_{1}\) that are preparable by time-bounded quantum circuits \(Q_{0}\) and \(Q_{1}\), respectively, as well as the distance-like measure is the trace distance. This problem is known as the Quantum State Distinguishability (QSD), specifically, \((\alpha,\beta)\)-QSD asks whether \(\rho_{0}\) is \(\alpha\)-far from or \(\beta\)-close to \(\rho_{1}\) with respect to \(\operatorname{td}(\rho_{0},\rho_{1})\). Analogous to its classical counterpart, QSD is QSZK-complete [16, 17], whereas the QSZK containment for any \(\alpha(n)-\beta(n)\geq 1/\operatorname{poly}(n)\) remains an open question.6 Additionally, it is worth noting that QIP(2) contains QSZK [16, 17].
Footnote 6: Like SD and SZK, the techniques in [16, 17] show that \((\alpha,\beta)\)-QSD is in QSZK for \(\alpha^{2}(n)-\beta(n)\geq 1/O(\log n)\), and the same limitation also applies to the quantum settings. A recent result [18] following the line of work of [1] improved the parameter regime to \(\alpha^{2}(n)-\sqrt{2\ln 2\beta}(n)\geq 1/\operatorname{poly}(n)\), but the differences between classical and quantum distances make it challenging to push the bound further. In [16, Proposition 21], Watrous implicitly proved a PSPACE upper bound for the parameter regime \(\alpha(n)-\beta(n)\geq\exp(-\operatorname{poly}(n))\).
**Entropy difference scenarios.** Beyond \(\ell_{1}\) norm, another distance-like measure commonly considered in time-bounded quantum state testing (or distribution testing) is the (quantum) entropy difference, which also corresponds to the (quantum) Jensen-Shannon divergence. The promise problem Entropy Difference (ED), first introduced by Goldreich and Vadhan [14] following the work of [14], asks whether efficiently samplable distributions \(D_{0}\) and \(D_{1}\) satisfy \(\operatorname{H}(D_{0})-\operatorname{H}(D_{1})\geq g\) or \(\operatorname{H}(D_{1})-\operatorname{H}(D_{0})\geq g\) for \(g=1\). They demonstrated that ED is SZK-complete. Ben-Aroya, Schwartz, and Ta-Shma [1] further investigated the promise problem Quantum Entropy Difference (QED), which asks whether \(\operatorname{S}(\rho_{0})-\operatorname{S}(\rho_{1})\geq g\) or \(\operatorname{S}(\rho_{1})-\operatorname{S}(\rho_{0})\geq g\), for efficiently preparable quantum states \(\rho_{0}\) and \(\rho_{1}\) and \(g=1/2\). They showed that QED is QSZK-complete. Moreover, the SZK (resp., QSZK) containment for ED (resp., QED) automatically holds for any \(g(n)\geq 1/\operatorname{poly}(n)\).
Furthermore, Berman, Degwekar, Rothblum, and Vasudevan [1] demonstrated that the Jensen-Shannon divergence problem (JSP), asking whether \(\operatorname{JS}(D_{0},D_{1})\geq\alpha\) or \(\operatorname{JS}(D_{0},D_{1})\leq\beta\) for efficiently samplable distributions \(D_{0}\) and \(D_{1}\), is SZK-complete. Their work accomplished this result by reducing the problem to ED, and this containment applies to \(\alpha(n)-\beta(n)\geq 1/\operatorname{poly}(n)\). Recently, Liu [15] showed a quantum counterpart, referred to as the Quantum Jensen-Shannon Divergence Problem (QJSP), is QSZK-complete. Notably, the quantum
Jensen-Shannon divergence is a special instance of the Holevo \(\chi\) quantity [14].7
Footnote 7: In particular, the quantum Jensen-Shannon divergence coincides with the Holevo \(\chi\) quantity on size-2 ensembles with a uniform distribution, which arises in the Holevo bound [14]. See [13, Theorem 12.1].
\(\ell_{2}\) **norm scenarios.** For the quantum setting, it is straightforward that applying the SWAP test [1]8 to efficiently preparable quantum states \(\rho_{0}\) and \(\rho_{1}\) can lead to a \(\mathsf{BQP}\) containment, in particular, additive-error estimations of \(\operatorname{Tr}(\rho_{0}^{2})\), \(\operatorname{Tr}(\rho_{1}^{2})\), and \(\operatorname{Tr}(\rho_{0}\rho_{1})\) with polynomial precision. Recently, the work of [10] observed that time-bounded quantum state testing with respect to the squared Hilbert-Schmidt distance is \(\mathsf{BQP}\)-complete. For the classical setting, namely the squared Euclidean distance, the \(\mathsf{BPP}\)-completeness is relatively effortless.9
Footnote 8: We note that the SWAP test also applies to mixed states, see Proposition 9 in [12].
Footnote 9: Specifically, we achieve \(\mathsf{BPP}\) containment by following the approach in [12, Theorem 7.1]. On the other hand, the \(\mathsf{BPP}\) hardness owes to the fact that the squared Euclidean distance between the distribution \((p_{\mathrm{acc}},1-p_{\mathrm{acc}})\) from the output bit of any \(\mathsf{BPP}\) algorithm and the distribution \((1,0)\) is \((1-p_{\mathrm{acc}})^{2}\).
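To make the SWAP-test-based containment concrete, here is a small numerical sketch (hypothetical, not taken from the paper; the 4-dimensional random states and the density-matrix simulation are illustrative choices): it simulates the Hadamard, controlled-SWAP, Hadamard circuit on two random mixed states and confirms that the acceptance probability equals \((1+\operatorname{Tr}(\rho_{0}\rho_{1}))/2\), so that \(\operatorname{Tr}(\rho_{0}^{2})\), \(\operatorname{Tr}(\rho_{1}^{2})\), and \(\operatorname{Tr}(\rho_{0}\rho_{1})\) can all be read off from acceptance statistics.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_density_matrix(dim):
    """A random mixed state rho = A A^dagger / Tr(A A^dagger)."""
    A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    rho = A @ A.conj().T
    return rho / np.trace(rho)

def swap_test_accept_prob(rho0, rho1):
    """Acceptance probability of the SWAP test, simulated with the full
    Hadamard, controlled-SWAP, Hadamard circuit on a |0><0| control qubit."""
    d = rho0.shape[0]
    S = np.zeros((d * d, d * d))                       # SWAP on C^d (x) C^d
    for i in range(d):
        for j in range(d):
            S[i * d + j, j * d + i] = 1.0
    P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])  # control projectors
    CSWAP = np.kron(P0, np.eye(d * d)) + np.kron(P1, S)
    H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
    U = np.kron(H, np.eye(d * d)) @ CSWAP @ np.kron(H, np.eye(d * d))
    rho_in = np.kron(P0, np.kron(rho0, rho1))          # control starts in |0>
    rho_out = U @ rho_in @ U.conj().T
    return np.real(np.trace(np.kron(P0, np.eye(d * d)) @ rho_out))

rho0, rho1 = random_density_matrix(4), random_density_matrix(4)
overlap = np.real(np.trace(rho0 @ rho1))
assert abs(swap_test_accept_prob(rho0, rho1) - (1 + overlap) / 2) < 1e-9
print("Tr(rho0 rho1) =", round(overlap, 4),
      "Tr(rho0^2) =", round(np.real(np.trace(rho0 @ rho0)), 4),
      "Tr(rho1^2) =", round(np.real(np.trace(rho1 @ rho1)), 4))
```

Combining the three estimated traces then yields the squared Hilbert-Schmidt distance, up to the normalization convention adopted in the paper.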
#### 1.3.2 Space-bounded distribution and state testing
To the best of our knowledge, no prior work has specifically focused on space-bounded distribution testing from a complexity-theoretic perspective. Instead, we will review prior works that are (closely) related to this computational problem. Afterward, we will delve into space-bounded quantum state testing, which constitutes the main contribution of our work.
**Space-bounded distribution testing and related works.** We focus on a computational problem involving two \(\operatorname{poly}(n)\)-size classical circuits \(C_{0}\) and \(C_{1}\), which generate samples from the distributions \(D_{0}\) and \(D_{1}\) respectively. Each circuit contains a read-once polynomial-length random-coins tape.10 The input length and output length of the circuits are \(O(\log n)\). The task is to decide whether \(D_{0}\) is \(\alpha\)-far from or \(\beta\)-close to \(D_{1}\) with respect to some distance-like measure. Additionally, we can easily observe that space-bounded distribution testing with respect to the squared Euclidean distance (\(\ell_{2}\) norm) is \(\mathsf{BPL}\)-complete, much like its time-bounded counterpart.
Footnote 10: It is noteworthy that random coins are provided as _input_ to classical circuits \(C_{0}\) and \(C_{1}\) for generating samples from the corresponding distributions in the time-bounded scenario, such as SD and ED.
Several models related to space-bounded distribution testing have been investigated previously. Earlier streaming-algorithmic works [17, 18] utilize _entries_ of the distribution as the data stream, with entries given in different orders for different models. On the other hand, a later work [13] considered a data stream consisting of a sequence of i.i.d. samples drawn from distributions and studied low-space streaming algorithms for distribution testing.
Regarding (Shannon) entropy estimation, previous streaming algorithms considered worst-case ordered samples drawn from \(N\)-dimensional distributions and required \(\operatorname{poly}\log(N/\epsilon)\) space, where \(\epsilon\) is the additive error. Recently, Acharya, Bhadane, Indyk, and Sun [1] addressed the entropy estimation problem with i.i.d. samples drawn from distributions as the data stream and demonstrated the first \(O(\log(N/\epsilon))\) space streaming algorithm. The sample complexity, viewed as the time complexity, was subsequently improved in [1].
However, for the total variation distance (\(\ell_{1}\) norm), previous works focused on the trade-off between the sample complexity and the space complexity (memory constraints), achieving only a nearly-log-squared space streaming algorithm [15].
Notably, the main differences between the computational and streaming settings lie in how we access the sampling devices.11 In the computational problem, we have access to the "source code" of the devices and can potentially use them for purposes like "reverse engineering". Conversely, the streaming setting utilizes the sampling devices in a "black-box" manner, obtaining i.i.d. samples. As a result, a logspace streaming algorithm will result in a \(\mathsf{BPL}\) containment.12
**Space-bounded quantum state testing.** Among the prior works on streaming distribution testing, particularly entropy estimation, the key takeaway is that the space complexity of the corresponding computational problem is \(O(\log(N/\epsilon))\). This observation leads to a conjecture that the computational hardness of space-bounded distribution and state testing is _independent_ of the choice of commonplace distance-like measures. Our work, in turn, provides a positive answer for space-bounded quantum state testing.
Space-bounded state testing with respect to the squared Hilbert-Schmidt distance (\(\ell_{2}\) norm) is \(\mathsf{BQL}\)-complete, as shown in Theorem 1.2(4). Specifically, the \(\mathsf{BQL}\) containment follows from the SWAP test [1], similar to the time-bounded scenario. Moreover, proving \(\mathsf{BQL}\)-hardness, as well as \(\mathsf{coRQ}_{\mathsf{U}}\mathsf{L}\)-hardness for state certification, is also straightforward.13
Footnote 13: In particular, considering any \(\mathsf{BQL}\) circuit \(C_{x}\) that accepts with probability \(p_{\mathrm{acc}}=||1\rangle\langle 1|_{\mathrm{out}}C_{x}|\bar{0}\rangle|^{2}\), we can construct a new circuit \(C^{\prime}_{x}\) from \(C_{x}\) such that \(C^{\prime}_{x}\) accepts with probability \(||\bar{0}\rangle\langle\bar{0}|C^{\prime}_{x}|\bar{0}\rangle|^{2}=p_{\mathrm{acc}}^{2}=\mathrm{Tr}(\rho_{0}\rho_{1})=1-\mathrm{HS}^{2}(\rho_{0},\rho_{1})\), where pure states \(\rho_{0}=|\bar{0}\rangle\langle\bar{0}|\) and \(\rho_{1}=C^{\prime}_{x}|\bar{0}\rangle\langle\bar{0}|{C^{\prime}_{x}}^{\dagger}\). See Lemma 4.17 for details.
Regarding space-bounded state testing with respect to the trace distance (\(\ell_{1}\) norm), we note that [25, Proposition 21] implicitly established an \(\mathsf{NC}\) containment. The \(\mathsf{BQL}\)-hardness, as well as \(\mathsf{coRQ}_{\mathsf{U}}\mathsf{L}\)-hardness for state certification, is adapted from [10]. Similarly, we derive the \(\mathsf{BQL}\)-hardness for space-bounded state testing with respect to the quantum Jensen-Shannon divergence and the quantum entropy difference from previous works [14].
Finally, we devote the remainder of this section to our main technique (Theorem 1.4), and consequently, we present \(\mathsf{BQL}\) (resp., \(\mathsf{coRQ}_{\mathsf{U}}\mathsf{L}\)) containment for state testing (resp., certification) problems for other distance-like measures beyond the squared Hilbert-Schmidt distance.
### Proof technique: Space-efficient quantum singular value transformation
The quantum singular value transformation (QSVT) [11] is a powerful and efficient framework for manipulating the singular values \(\{\sigma_{i}\}_{i}\) of a linear operator \(A\), using a corresponding projected unitary encoding \(U\) of \(A=\tilde{\Pi}U\Pi\) for projectors \(\tilde{\Pi}\) and \(\Pi\). The singular value decomposition is \(A=\sum_{i}\sigma_{i}|\tilde{\psi}_{i}\rangle\langle\psi_{i}|\) where \(|\tilde{\psi}_{i}\rangle\) and \(|\psi_{i}\rangle\) are left and right singular vectors, respectively. QSVT has numerous applications in quantum algorithm design, and is even considered a grand unification of quantum algorithms [13]. To implement the transformation \(f^{\mathrm{(SV)}}(A)=f^{\mathrm{(SV)}}(\tilde{\Pi}U\Pi)\), we require a degree-\(d\) polynomial \(\hat{P}_{d}(x)\) that satisfies two conditions. Firstly, \(\hat{P}_{d}\) well-approximates \(f\) on the interval of interest \(\mathcal{I}\), with \(\max_{x\in\mathcal{I}\setminus\mathcal{I}_{\delta}}|\hat{P}_{d}(x)-f(x)|\leq\epsilon\), where \(\mathcal{I}_{\delta}\subseteq\mathcal{I}\subseteq[-1,1]\) and typically \(\mathcal{I}_{\delta}:=(-\delta,\delta)\). Secondly, \(\hat{P}_{d}\) is bounded, with \(\max_{x\in[-1,1]}|\hat{P}_{d}(x)|\leq 1\). The degree of \(\hat{P}_{d}\) depends on the precision parameters \(\delta\) and \(\epsilon\), with \(d=O(\delta^{-1}\log\epsilon^{-1})\), and all coefficients of \(\hat{P}_{d}\) can be computed efficiently.
According to [11], we can use an alternating phase modulation to implement \(\hat{P}_{d}^{\mathrm{(SV)}}(\tilde{\Pi}U\Pi)\),14 which requires a sequence of rotation angles \(\Phi\in\mathbb{R}^{d}\). For instance, consider \(\hat{P}_{d}(x)=T_{d}(x)\) where \(T_{d}(x)\) is the \(d\)-th Chebyshev polynomial (of the first kind), then we know that \(\phi_{1}=(1-d)\pi/2\) and \(\phi_{j}=\pi/2\) for all \(j\in\{2,3,\cdots,d\}\). QSVT techniques, including classical pre-processing and quantum circuit implementation, are generally _time-efficient_. Additionally, the quantum circuit implementation of QSVT is already _space-efficient_ because implementing QSVT with a degree-\(d\) bounded polynomial for any \(s(n)\)-qubit projected unitary encoding requires \(O(s(n))\) qubits, where \(s(n)\geq\Omega(\log n)\). However, the classical pre-processing in the QSVT techniques is typically not space-efficient. Indeed, prior works on classical pre-processing for QSVT, specifically angle-finding algorithms in [1, 2, 3], which have time complexity polynomially dependent on the degree \(d\), do not consider the space-efficiency. Therefore, the use of previous angle-finding algorithms may lead to an _exponential_ increase in space complexity. This raises a fundamental question on making the classical pre-processing space-efficient as well:
Footnote 14: This procedure is a generalization of quantum signal processing, as explained in [13, Section II.A].
**Problem 1.3** (Space-efficient QSVT).: Can we implement a degree-\(d\) QSVT for any \(s(n)\)-qubit projected unitary encoding with \(d\leq 2^{O(s(n))}\), using only \(O(s(n))\) space in both classical pre-processing and quantum circuit implementation?
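As a quick illustration of the angle sequence quoted above for Chebyshev polynomials, the following numerical sketch is hypothetical and not from the paper; it assumes the reflection-type signal operator \(R(x)\), one standard QSP convention, and checks that \(\Phi=((1-d)\pi/2,\pi/2,\ldots,\pi/2)\) indeed reproduces \(T_{d}(x)\) as the top-left entry of the phase-modulated product.

```python
import numpy as np

def reflection(x):
    """Signal operator R(x) = [[x, s], [s, -x]] with s = sqrt(1 - x^2)."""
    s = np.sqrt(1.0 - x * x)
    return np.array([[x, s], [s, -x]], dtype=complex)

def qsp_top_left(phis, x):
    """Top-left entry of e^{i phi_1 Z} R(x) e^{i phi_2 Z} R(x) ... e^{i phi_d Z} R(x)."""
    U = np.eye(2, dtype=complex)
    for phi in phis:
        U = U @ np.diag([np.exp(1j * phi), np.exp(-1j * phi)]) @ reflection(x)
    return U[0, 0]

d = 9
phis = [(1 - d) * np.pi / 2] + [np.pi / 2] * (d - 1)
for x in np.linspace(-0.99, 0.99, 11):
    T_d = np.cos(d * np.arccos(x))                 # T_d(x) = cos(d arccos x)
    assert abs(qsp_top_left(phis, x) - T_d) < 1e-9
print("phases ((1-d)pi/2, pi/2, ..., pi/2) reproduce T_d(x) for d =", d)
```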
**QSVT via Chebyshev interpolation.** Recently, Metger and Yuen [14] constructed bounded polynomial approximations of the sign and square root functions with exponential precision in polynomial space by utilizing Chebyshev interpolation, which offers a partial solution to Problem 1.3.15 The key ingredient behind their approach is the _near-minimax approximation by Chebyshev interpolation_[20]. More precisely, for any continuous function \(f\colon[-1,1]\to\mathbb{R}\), if there is a degree-\(d\) polynomial \(\hat{P}_{d}\) satisfying \(\max_{x\in[-1,1]}|f(x)-\hat{P}_{d}(x)|\leq\epsilon\), then we have a Chebyshev interpolation polynomial \(P_{d}(x):=\frac{c_{0}}{2}+\sum_{k=1}^{d}c_{k}T_{k}\) where \(c_{k}:=\frac{2}{\pi}\int_{-1}^{1}\frac{f(x)T_{k}(x)}{\sqrt{1-x^{2}}}\mathrm{d}x\) such that \(\max_{x\in[-1,1]}|P_{d}(x)-f(x)|\leq O(\epsilon\log d)\). As the angles for any Chebyshev polynomial \(T_{k}(x)\) are explicitly known, the implementation involves applying a Chebyshev polynomial to a _bitstring indexed encoding_, which additionally requires the projectors \(\tilde{\Pi}\) and \(\Pi\) to project onto subspaces spanned by corresponding subsets of \(\{|0\rangle,|1\rangle\}^{\otimes s}\),16 and implementing the Chebyshev interpolation polynomial by LCU techniques [1]. It is noteworthy that combining the aforementioned techniques causes a _super-quadratic dependence_ on the degree \(d\) in the query complexity to \(U\).
Footnote 15: To clarify, we can see from [14] that directly adapting their construction shows that implementing QSVT for any \(s(n)\)-qubit block-encoding with \(O(s(n))\)-bit precision requires \(\mathrm{poly}(s(n))\) classical and quantum space for any \(s(n)\geq\Omega(\log n)\). However, Problem 1.3 (space-efficient QSVT) seeks to reduce the dependence of \(s(n)\) in the space complexity from _polynomial_ to _linear_.
Footnote 16: To ensure that \(\tilde{\Pi}U\Pi\) admits a matrix representation, we require the basis of projectors \(\tilde{\Pi}\) and \(\Pi\) to have a well-defined order, leading us to focus exclusively on bitstring indexed encoding. Additionally, for simplicity, we assume no ancillary qubits are used here, and refer to Definition 3.1 for a formal definition.
A refined analysis indicates that applying a Chebyshev interpolation polynomial to a bitstring indexed encoding for any \(d\leq 2^{O(s(n))}\) and \(\epsilon\geq 2^{-O(s(n))}\) requires \(O(s(n))\) qubits and deterministic \(O(s(n))\) space, provided that an evaluation oracle \(\mathrm{Eval}_{P_{d}}\) estimates coefficients \(\{c_{k}\}_{k=0}^{d}\) of the Chebyshev interpolation polynomial with \(O(\log(\epsilon/d))\) precision. This result leads to the establishment of a space-efficient variant of QSVT:
**Theorem 1.4** (Space-efficient QSVT, informal of Theorem 3.4).: _Let \(f\colon\mathbb{R}\to\mathbb{R}\) be a continuous function bounded on \(\mathcal{I}\subseteq[-1,1]\). If there exists a degree-\(d\) polynomial \(\hat{P}_{d}\) that approximates \(h\colon[-1,1]\to\mathbb{R}\), where \(h\) approximates \(f\) only on \(\mathcal{I}\), such that \(\max_{x\in[-1,1]}|h(x)-\hat{P}_{d}(x)|\leq\epsilon\), then Chebyshev interpolation yields another degree-\(d\) polynomial \(P_{d}\) satisfying the following conditions: \(\max_{x\in\mathcal{I}}|f(x)-P_{d}(x)|\leq O(\epsilon\log d)\) and \(\max_{x\in[-1,1]}|P_{d}(x)|\leq 1\). Furthermore, we have an algorithm \(\mathcal{A}_{f}\) that computes any coefficient \(\{c_{k}\}_{k=0}^{d}\) of the Chebyshev interpolation polynomial \(P_{d}\) space-efficiently. The algorithm is deterministic for bounded \(f\), and bounded-error randomized for piecewise-smooth \(f\). Additionally, for any \(s(n)\)-qubit bitstring indexed encoding \(U\) of \(A=\tilde{\Pi}U\Pi\) with \(d\leq 2^{O(s(n))}\), we can implement the quantum singular value transformation \(P_{d}^{(\mathrm{SV})}(A)\) using \(O(d^{2}\|\mathbf{c}\|_{1})\) queries17 to \(U\) with \(O(s(n))\) qubits._

Footnote 17: The dependence of \(\|\mathbf{c}\|_{1}\) arises from renormalizing the bitstring indexed encoding via amplitude amplification. In addition, \(\|\mathbf{c}\|_{1}\) is generally upper-bounded by \(O(d)\) for all piecewise-smooth functions. However, for specific functions, such as the sign function, we can improve the upper bound to \(O(\log d)\).
Our techniques in Theorem 1.4 offer two advantages over the techniques proposed by [14]. Firstly, our techniques can handle any _piecewise-smooth function_, such as the normalized logarithmic function \(\ln_{\beta}(x):=\frac{\ln(1/x)}{2\ln(2/\beta)}\) on the interval \(\mathcal{I}=[\beta,1]\) for any \(\beta\geq 2^{-O(s(n))}\), whereas the techniques from [14] are restricted to functions that are bounded on the interval \(\mathcal{I}=[-1,1]\). Secondly, our technique incurs only a _constant overhead_ in terms of the space complexity of the bitstring indexed encoding \(U\), while the techniques from [14] incur a _poly-logarithmic overhead_.
In addition, it is noteworthy that applying the space-efficient QSVT with the sign function implies a unified approach to error reduction for the classes \(\mathsf{BQ}_{\mathsf{U}}\mathsf{L}\), \(\mathsf{coRQ}_{\mathsf{U}}\mathsf{L}\), and \(\mathsf{RQ}_{\mathsf{U}}\mathsf{L}\).
**Computing the coefficients.** We will implement the evaluation oracle \(\mathrm{Eval}_{P_{d}}\) to prove Theorem 1.4. To estimate the coefficients \(\{c_{k}\}_{k=0}^{d}\) resulting from Chebyshev interpolation for any
function \(f\) that is bounded on the interval \(\mathcal{I}=[-1,1]\), we can use standard numerical integration techniques,18 given that the second derivative of the integrand defining \(\{c_{k}\}_{k=0}^{d}\) is bounded by \(\mathrm{poly}(d)\).

Footnote 18: We remark that using a more efficient numerical integration technique, such as the exponentially convergent trapezoidal rule, may improve the required space complexity for computing coefficients by a constant factor.
However, implementing the evaluation oracle for piecewise-smooth functions \(f\) on an interval \(\mathcal{I}\subsetneq[-1,1]\) is relatively convoluted. We cannot simply apply Chebyshev interpolation to \(f\). Instead, we consider a low-degree Fourier approximation \(g\) resulting from implementing smooth functions to Hamiltonians [20, Appendix B]. We then make the error vanish outside \(\mathcal{I}\) by multiplying with a Gaussian error function, resulting in \(h\) which approximates \(f\)_only_ on \(\mathcal{I}\). Therefore, we can apply Chebyshev interpolation and our algorithm for bounded functions to \(h\) through a somewhat complicated calculation.
Finally, we need to compute the coefficients of the low-degree Fourier approximation \(g\). Interestingly, this step involves the _stochastic matrix powering problem_, which lies at the heart of space-bounded derandomization, e.g., [11, 12, 13]. We utilize space-bounded random walks on a directed graph to estimate the power of a stochastic matrix. Consequently, we can only develop a bounded-error randomized algorithm \(\mathcal{A}_{f}\) for piecewise-smooth functions.19
Footnote 19: The classical pre-processing in space-efficient QSVT is not part of the deterministic Turing machine producing the quantum circuit description in the BQL model (Definition 2.6). Instead, we treat it as a component of quantum computation, allowing the use of randomized algorithms since \(\mathsf{BPL}\subseteq\mathsf{BQL}\)[14].
### Proof overview: A general framework for quantum state testing
Our framework enables space-bounded quantum state testing, specifically for proving Theorem 1.1 and Theorem 1.2, and is based on the one-bit precision phase estimation [15], also known as the _Hadamard test_[16]. Prior works [17, 18] have employed (one-bit precision) phase estimation in space-bounded quantum computation.
To address quantum state testing problems, we reduce them to estimating \(\mathrm{Tr}(P_{d}(A)\rho)\), where \(\rho\) is a (mixed) quantum state prepared by a quantum circuit \(Q_{\rho}\), \(A\) is an Hermitian operator block-encoded in a unitary operator \(U_{A}\), and \(P_{d}\) is a space-efficiently computable degree-\(d\) polynomial. This approach has been applied in _time-bounded_ quantum state testing, including fidelity estimation [13] and subsequently trace distance estimation [11].
To implement a unitary operator \(U_{P_{d}(A)}\) that (approximately) block-encodes \(P_{d}(A)\) in a space-efficient manner, we require \(P_{d}\) to meet the conditions specified in Theorem 1.4. As illustrated in Figure 1, we denote the quantum circuit as \(\mathcal{T}(Q_{\rho},U_{A},P_{d})\), where we exclude the precision for simplicity. The measurement outcome of \(\mathcal{T}(Q_{\rho},U_{A},P_{d})\) will be \(0\) with a probability close to \(\frac{1+\mathrm{Tr}(P_{d}(A)\rho)}{2}\). This property allows us to estimate \(\mathrm{Tr}(P_{d}(A)\rho)\) within an additive error \(\epsilon\) using \(O(1/\epsilon^{2})\) sequential repetitions, resulting in a BQL containment.
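For intuition about the sample complexity of this estimation step, here is a minimal Python sketch (the one-qubit state \(\rho\), Hermitian operator \(A\), and low-degree odd polynomial are hypothetical examples, not taken from the paper): it computes the ideal acceptance probability \(\frac{1+\mathrm{Tr}(P_{d}(A)\rho)}{2}\) classically and recovers \(\mathrm{Tr}(P_{d}(A)\rho)\) from \(O(1/\epsilon^{2})\) repeated Bernoulli trials; it does not simulate the block-encoding circuit \(\mathcal{T}(Q_{\rho},U_{A},P_{d})\) itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy instance: a one-qubit mixed state rho and a Hermitian operator A (illustrative values).
rho = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)
A = np.array([[0.5, 0.1], [0.1, -0.3]], dtype=complex)

# A degree-3 odd polynomial applied to A via its eigendecomposition (A is Hermitian).
def poly(x):
    return 0.5 * x + 0.4 * x**3

evals, evecs = np.linalg.eigh(A)
P_A = evecs @ np.diag(poly(evals)) @ evecs.conj().T

target = np.trace(P_A @ rho).real        # the quantity Tr(P_d(A) rho)
p0 = (1 + target) / 2                    # ideal probability of measurement outcome 0

# O(1/eps^2) sequential repetitions yield an additive-error estimate of Tr(P_d(A) rho).
eps = 0.01
shots = int(np.ceil(2 / eps**2))
outcomes = rng.random(shots) < p0
estimate = 2 * outcomes.mean() - 1

print(target, estimate)
```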
Figure 1: General framework for quantum state testing \(\mathcal{T}(Q_{\rho},U_{A},P_{d})\).

As an example of the application, \(\mathcal{T}(Q_{i},U_{\frac{\rho_{0}-\rho_{1}}{2}},P_{d}^{\mathrm{sgn}})\) is utilized in GapQSD, where \(U_{\frac{\rho_{0}-\rho_{1}}{2}}\) is a block-encoding of \(\frac{\rho_{0}-\rho_{1}}{2}\), and \(P_{d}^{\mathrm{sgn}}\) is a space-efficient polynomial approximation of the sign function. Similarly, \(\mathcal{T}(Q_{i},U_{\rho_{i}},P^{\mathrm{ln}}_{d})\) is utilized in GapQED, where \(U_{\rho_{i}}\) is a block-encoding of \(\rho_{i}\) for \(i\in\{0,1\}\), and \(P^{\mathrm{ln}}_{d}\) is a space-efficient polynomial approximation of the normalized logarithmic function. Both \(P^{\mathrm{sgn}}_{d}\) and \(P^{\mathrm{ln}}_{d}\) can be obtained by employing Theorem 1.4.20
Footnote 20: In particular, \(P^{\mathrm{sgn}}_{d}\) is given in Corollary 3.7, and \(P^{\mathrm{ln}}_{d}\) is given in Corollary 3.10.
**Making the error one-sided.** The main challenge is constructing a unitary \(U\) of interest, such as \(\mathcal{T}(Q_{\rho},U_{A},P_{d})\), that accepts with a certain fixed probability \(p\) for _yes_ instances (\(\rho_{0}=\rho_{1}\)), while having an acceptance probability that polynomially deviates from \(p\) for _no_ instances. As an example, we consider \(\mathtt{CertQHS}_{\mathrm{log}}\) and express \(\mathrm{HS}^{2}(\rho_{0},\rho_{1})\) as a linear combination of \(\mathrm{Tr}(\rho_{0}^{2})\), \(\mathrm{Tr}(\rho_{1}^{2})\), and \(\mathrm{Tr}(\rho_{0}\rho_{1})\). We thus design a unitary quantum algorithm employing the LCU technique, which accepts with probability \(\left(\frac{1}{2}+\frac{1}{2}\mathrm{HS}^{2}(\rho_{0},\rho_{1})\right)^{2}\), which equals \(1/4\) for _yes_ instances. Applying the exact amplitude amplification [1, 1], we achieve perfect completeness, and the analysis demonstrates that the acceptance probability polynomially deviates from \(1\) for _no_ instances. By applying error reduction for \(\mathtt{coRQ}_{\mathsf{U}}\mathsf{L}\), the resulting algorithm is indeed in \(\mathtt{coRQ}_{\mathsf{U}}\mathsf{L}\).
Moving on to \(\overline{\mathtt{CertQSD}}_{\mathrm{log}}\), we consider the quantum circuit \(U_{i}=\mathcal{T}(Q_{i},U_{\frac{\rho_{0}-\rho_{1}}{2}},P^{\mathrm{sgn}}_{d})\) for \(i\in\{0,1\}\). Let \(p_{i}\) be the probability that the measurement outcome of \(U_{i}|\bar{0}\rangle\) in Figure 1 is 0. Since our space-efficient QSVT preserves parity, specifically the approximation polynomial \(P^{\mathrm{sgn}}_{d}\) satisfies \(P^{\mathrm{sgn}}_{d}(0)=0\),21 we obtain \(p_{0}=p_{1}=1/2\) for _yes_ instances (\(\rho_{0}=\rho_{1}\)). With a simple modification, \(U_{0}\) and \(U_{1}\) enable algorithm \(\mathcal{A}\) to meet the condition of exact amplitude amplification for _yes_ instances. Further analysis shows that \(\mathcal{A}\) accepts with probability polynomially away from \(1\) for _no_ instances. We thus can conclude a \(\mathtt{coRQ}_{\mathsf{U}}\mathsf{L}\) containment similar to \(\overline{\mathtt{CertQHS}}_{\mathrm{log}}\).
Footnote 21: Let \(f\) be any odd function such that the space-efficient QSVT associated with \(f\) can be implemented by Theorem 1.4. It follows that the corresponding approximation polynomial \(P^{(f)}_{d}\) is also odd. See Remark 3.12.
### Discussion and open problems
Since space-efficient quantum singular value transformation (QSVT) offers a unified framework for designing quantum logspace algorithms, it suggests a new direction to find applications of space-bounded quantum computation. An intriguing candidate is solving positive semidefinite programs (SDPs) with constant precision [1, 1]. A major challenge in achieving a \(\mathsf{BQL}\) containment for this problem is that iteratively applying the space-efficient QSVT super-constantly many times may lead to a bitstring indexed encoding requiring \(\omega(\log n)\) ancillary qubits, raising the question:
(i) Is it possible to have an approximation scheme (possibly under certain conditions) that introduces merely \(O(1)\) additional ancillary qubits in the bitstring indexed encoding per iteration, such that applying space-efficient QSVT \(\log n\) times results in a bitstring indexed encoding with at most \(O(\log n)\) ancillary qubits?
Furthermore, as quantum distances investigated in this work are all instances of a quantum analog of symmetric \(f\)-divergence, there is a natural question on other instances:
(ii) Can we demonstrate that space-bounded quantum state testing problems with respect to other quantum distances are also \(\mathsf{BQL}\)-complete, such as quantum analogs of squared Hellinger distance or quantum analogs of triangular discrimination [11]?
In addition, there is a question on improving the efficiency of the space-efficient QSVT:
(iii) Can we improve the query complexity of \(U\) and \(U^{\dagger}\) in the space-efficient QSVT implementation (e.g., for the sign function) from \(O(d^{2}\log d)\) to \(O(d)\)?
Notably, classical pre-processing in QSVT techniques usually involves finding the sequence of \(z\)-axis rotation angles, while our approach instead uses Chebyshev interpolation and the LCU technique. A solution thus involves developing a space-efficient angle-finding algorithm. An interesting direction from a recent work [14], which investigated QSVT with SU(2) rotations, may shed light on Question (iii) since finding SU(2) rotation angles appears easier.
### Related works: more on quantum state testing problems
Testing the spectrum of quantum states was studied in [14]: for example, whether a quantum state is maximally mixed or \(\epsilon\)-far away from it in trace distance can be tested using \(\Theta(N/\epsilon^{2})\) samples. Later, it was generalized in [1] to quantum state certification with respect to fidelity and trace distance. Estimating distinguishability measures of quantum states [10] is another topic, including the estimation of fidelity [13, 15, 16] and trace distance [13, 14].
Entropy estimation of quantum states has been widely studied in the literature. Given quantum purified access, it was shown in [11] that the von Neumann entropy \(\mathrm{S}(\rho)\) can be estimated within additive error \(\epsilon\) with query complexity \(\tilde{O}(N/\epsilon^{1.5})\). If we know the reciprocal \(\kappa\) of the minimum non-zero eigenvalue of \(\rho\), then \(\mathrm{S}(\rho)\) can be estimated with query complexity \(\tilde{O}(\kappa^{2}/\epsilon)\)[13]. We can estimate \(\mathrm{S}(\rho)\) within multiplicative error \(\epsilon\) with query complexity \(\tilde{O}(n^{\frac{1}{2}+\frac{1+\epsilon^{2}}{2\epsilon^{2}}})\)[12], provided that \(\mathrm{S}(\rho)=\Omega(\epsilon+1/\eta)\). If \(\rho\) is of rank \(r\), then \(\mathrm{S}(\rho)\) can be estimated with query complexity \(\tilde{O}(r^{2}/\epsilon^{2})\)[13]. Estimating the Renyi entropy \(S_{\alpha}(\rho)\) given quantum purified access was first studied in [12], and then was improved in [13, 14]. In addition, the work of [12] investigates the (conditional) hardness of GapQED with logarithmic depth or constant depth.
**Paper organization.** Our paper begins by introducing key concepts in Section 2, including quantum distance and divergences, space-bounded quantum computation, Chebyshev polynomials and interpolation, and a toolkit for space-bounded randomized and quantum computation. In Section 3, we demonstrate our space-efficient variant of quantum singular value transformation (Theorem 1.4) and offer examples for bounded functions and piecewise-smooth functions. We also provide a simple proof of space-efficient error reduction for unitary quantum computation. Then, in Section 4, we formally define space-bounded quantum state testing problems with four distance-like measures, and present the first family of natural \(\mathsf{coRQ}_{\mathsf{U}}\mathsf{L}\)-complete problems (Theorem 1.1), as well as a novel family of natural BQL-complete problems (Theorem 1.2).
## 2 Preliminaries
We assume that the reader is familiar with quantum computation and the theory of quantum information. For an introduction, the textbooks by [17] and [18] provide a good starting point, while for a more comprehensive survey on quantum complexity theory, refer to [19].
In addition, we adopt the convention that the logarithmic function \(\log\) has a base of \(2\), denoted by \(\log(x):=\log_{2}(x)\) for any \(x\in\mathbb{R}^{+}\). For the purpose of clarity, we will denote the operator norm as \(\|A\|:=\|A\|_{2\to 2}\). Moreover, for the sake of simplicity, we utilize the notation \(|\bar{0}\rangle\) to represent \(|0\rangle^{\otimes a}\) with \(a>1\).
### Distances and divergences for quantum states
We will provide an overview of relevant quantum distances and divergences, along with useful inequalities among different quantum distance-like measures. Additionally, we recommend [1, Section 3.1] for a nice survey on quantum distance and divergences.
**Definition 2.1** (Quantum distances and divergences).: For any quantum states \(\rho_{0}\) and \(\rho_{1}\), we define several distance-like measures and relevant quantities:
* **Trace distance**. \(\mathrm{td}(\rho_{0},\rho_{1}):=\frac{1}{2}\mathrm{Tr}|\rho_{0}-\rho_{1}|= \frac{1}{2}\mathrm{Tr}(((\rho_{0}-\rho_{1})^{\dagger}(\rho_{0}-\rho_{1}))^{1/2})\).
* **(Uhlmann) Fidelity**. \(\mathrm{F}(\rho_{0},\rho_{1}):=\mathrm{Tr}|\sqrt{\rho_{0}}\sqrt{\rho_{1}}|\).
* **Squared Hilbert-Schmidt distance**. \(\mathrm{HS}^{2}(\rho_{0},\rho_{1}):=\frac{1}{2}\mathrm{Tr}(\rho_{0}-\rho_{1}) ^{2}\).
* **von Neumann entropy**. \(\mathrm{S}(\rho):=-\mathrm{Tr}(\rho\ln\rho)\) for any quantum state \(\rho\).
* **Quantum Jensen-Shannon divergence**. \(\mathrm{QJS}(\rho_{0},\rho_{1}):=\mathrm{S}\big{(}\frac{\rho_{0}+\rho_{1}}{2} \big{)}-\frac{\mathrm{S}(\rho_{0})+\mathrm{S}(\rho_{1})}{2}\).
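For concreteness, the quantities in Definition 2.1 can be evaluated numerically for small explicit density matrices. The Python sketch below does so with NumPy/SciPy; the two example states are arbitrary choices, and the helpers simply transcribe the definitions above.

```python
import numpy as np
from scipy.linalg import sqrtm

def trace_distance(rho0, rho1):
    # td = (1/2) * sum of absolute eigenvalues of the Hermitian difference rho0 - rho1
    return 0.5 * np.abs(np.linalg.eigvalsh(rho0 - rho1)).sum()

def fidelity(rho0, rho1):
    # F = Tr|sqrt(rho0) sqrt(rho1)| = sum of singular values of sqrt(rho0) sqrt(rho1)
    M = sqrtm(rho0) @ sqrtm(rho1)
    return float(np.linalg.svd(M, compute_uv=False).sum().real)

def hs_squared(rho0, rho1):
    D = rho0 - rho1
    return 0.5 * np.trace(D @ D).real

def von_neumann_entropy(rho):
    # S(rho) = -Tr(rho ln rho); (numerically) zero eigenvalues are dropped
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-(ev * np.log(ev)).sum())

def qjs(rho0, rho1):
    mix = (rho0 + rho1) / 2
    return von_neumann_entropy(mix) - (von_neumann_entropy(rho0) + von_neumann_entropy(rho1)) / 2

rho0 = np.array([[0.8, 0.1], [0.1, 0.2]])
rho1 = np.array([[0.4, 0.0], [0.0, 0.6]])
print(trace_distance(rho0, rho1), fidelity(rho0, rho1), hs_squared(rho0, rho1), qjs(rho0, rho1))
```

These helpers also make it easy to sanity-check, on random states, the inequalities between the measures quoted below.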
The trace distance and the squared Hilbert-Schmidt distance reach the minimum of \(0\) when \(\rho_{0}\) equals \(\rho_{1}\), while the fidelity attains a maximum value of \(1\). Additionally, there are two equalities when at least one of the two states is a pure state:
* For a pure state \(\rho_{0}\) and a mixed state \(\rho_{1}\), \(\mathrm{F}^{2}(\rho_{0},\rho_{1})=\mathrm{Tr}(\rho_{0}\rho_{1})\).
* For two pure states \(\rho_{0}\) and \(\rho_{1}\), \(\mathrm{Tr}(\rho_{0}\rho_{1})=1-\mathrm{HS}^{2}(\rho_{0},\rho_{1})\).
Moreover, we have \(\mathrm{HS}^{2}(\rho_{0},\rho_{1})=\frac{1}{2}(\mathrm{Tr}(\rho_{0}^{2})+ \mathrm{Tr}(\rho_{1}^{2}))-\mathrm{Tr}(\rho_{0}\rho_{1})\). Additionally, Fuchs and van de Graaf [20] showed a well-known inequality between the trace distance and the fidelity:
**Lemma 2.2** (Trace distance vs. fidelity, adapted from [20]).: _For any states \(\rho_{0}\) and \(\rho_{1}\),_
\[1-\mathrm{F}(\rho_{0},\rho_{1})\leq\mathrm{td}(\rho_{0},\rho_{1})\leq\sqrt{1- \mathrm{F}^{2}(\rho_{0},\rho_{1})}.\]
The joint entropy theorem (Lemma 2.3) enhances our understanding of entropy in classical-quantum states and is necessary for our usages of the von Neumann entropy.
**Lemma 2.3** (Joint entropy theorem, adapted from Theorem 11.8(5) in [14]).: _Suppose \(p_{i}\) are probabilities corresponding to a distribution \(D\), \(|i\rangle\) are orthogonal states of a system \(A\), and \(\{\rho_{i}\}_{i}\) is any set of density operators for another system \(B\). Then \(\mathrm{S}\big{(}\sum_{i}p_{i}|i\rangle\langle i|\otimes\rho_{i}\big{)}= \mathrm{H}(D)+\sum_{i}p_{i}\mathrm{S}(\rho_{i})\)._
Let us now turn our attention to the quantum Jensen-Shannon divergence, which is defined in [13]. For simplicity, we define \(\mathrm{QJS}_{2}(\rho_{0},\rho_{1}):=\mathrm{QJS}(\rho_{0},\rho_{1})/\ln 2\) using the base-2 (matrix) logarithmic function. Notably, when considering size-2 ensembles with a uniform distribution, the renowned Holevo bound [13] (see Theorem 12.1 in [14]) indicates that the _quantum Shannon distinguishability_ studied in [20] is at most the quantum Jensen-Shannon divergence. Consequently, this observation yields inequalities between the trace distance and the quantum Jensen-Shannon divergence.22
Footnote 22: For a detailed proof of these inequalities, please refer to [13, Appendix B].
**Lemma 2.4** (Trace distance vs. quantum Jensen-Shannon divergence, adapted from [20, 13, 13]).: _For any quantum states \(\rho_{0}\) and \(\rho_{1}\), we have_
\[1-\mathrm{H}_{2}\left(\tfrac{1-\mathrm{td}(\rho_{0},\rho_{1})}{2}\right)\leq \mathrm{QJS}_{2}(\rho_{0},\rho_{1})\leq\mathrm{td}(\rho_{0},\rho_{1}).\]
_Here, the binary entropy \(\mathrm{H}_{2}(p):=-p\log(p)-(1-p)\log(1-p)\)._
### Space-bounded quantum computation
We say that a function \(s(n)\) is _space-constructible_ if there exists a deterministic space \(s(n)\) Turing machine that takes \(1^{n}\) as an input and outputs \(s(n)\) in the unary encoding. Moreover, we say that a function \(f(n)\) is \(s(n)\)-_space computable_ if there exists a deterministic space \(s(n)\) Turing machine that takes \(1^{n}\) as an input and outputs \(f(n)\). Our definitions of space-bounded quantum computation are formulated in terms of _quantum circuits_, whereas many prior works focused on _quantum Turing machines_[21, 22, 23]. For a discussion on the equivalence between space-bounded quantum computation using _quantum circuits_ and _quantum Turing machines_, we refer readers to [13, Appendix A] and [13, Section 2.2].
We begin by defining time-bounded and space-bounded quantum circuit families, and then proceed to the corresponding complexity class \(\mathsf{BQ}_{\mathsf{U}}\mathsf{SPACE}[s(n)]\). It is worth noting that we use the abbreviated notation \(C_{x}\) to denote that the circuit \(C_{|x|}\) takes input \(x\).
**Definition 2.5** (Time- and space-bounded quantum circuit families).: A (unitary) quantum circuit is a sequence of quantum gates, each of which belongs to some fixed gateset that is universal for quantum computation, such as \(\{\textsc{Hadamard},\textsc{CNOT},\textsc{T}\}\). For a promise problem \(\mathcal{L}=(\mathcal{L}_{\text{yes}},\mathcal{L}_{\text{no}})\), we say that a family of quantum circuits \(\{C_{x}:x\in\mathcal{L}\}\) is \(t(n)\)-time-bounded if there is a deterministic Turing machine that, on any input \(x\in\mathcal{L}\), runs in time \(O(t(|x|))\), and outputs a description of \(C_{x}\) such that \(C_{x}\) accepts (resp., rejects) if \(x\in\mathcal{L}_{\text{yes}}\) (resp., \(x\in\mathcal{L}_{\text{no}}\)). Similarly, we say that a family of quantum circuits \(\{C_{x}:x\in\mathcal{L}\}\) is \(s(n)\)-space-bounded if there is a deterministic Turing machine that, on any input \(x\in\mathcal{L}\), runs in space \(O(s(|x|))\) (and hence time \(2^{O(s(|x|))}\)), and outputs a description of \(C_{x}\) such that \(C_{x}\) accepts (resp., rejects) if \(x\in\mathcal{L}_{\text{yes}}\) (resp., \(x\in\mathcal{L}_{\text{no}}\)), and such that \(C_{x}\) acts on \(O(s(|x|))\) qubits and has \(2^{O(s(|x|))}\) gates.
**Definition 2.6** (\(\textsf{BQ}_{\textsc{U}}\textsf{SPACE}[s(n),a(n),b(n)]\), adapted from Definition 5 in [10]).: Let \(s\colon\mathbb{N}\to\mathbb{N}\) be a space-constructible function such that \(s(n)\geq\Omega(\log n)\). Let \(a(n)\) and \(b(n)\) be functions that are computable in deterministic space \(s(n)\). A promise problem \((\mathcal{L}_{\text{yes}},\mathcal{L}_{\text{no}})\) is in \(\textsf{BQ}_{\textsc{U}}\textsf{SPACE}[s(n),a(n),b(n)]\) if there exists a family of \(s(n)\)-space-bounded (unitary) quantum circuits \(\{C_{x}\}_{x\in\mathcal{L}}\), where \(n=|x|\), satisfying the following:
* The output qubit is measured in the computational basis after applying \(C_{x}\). We say that \(C_{x}\)_accepts_\(x\) if the measurement outcome is \(1\), whereas \(C_{x}\)_rejects_\(x\) if the outcome is \(0\).
* \(\Pr[C_{x}\text{ accepts }x]\geq a(|x|)\) if \(x\in\mathcal{L}_{\text{yes}}\), whereas \(\Pr[C_{x}\text{ accepts }x]\leq b(|x|)\) if \(x\in\mathcal{L}_{\text{no}}\).
We remark that Definition 2.6 is _gateset-independent_, given that the gateset is closed under adjoint and all entries in chosen gates have reasonable precision. This property is due to the space-efficient Solovay-Kitaev theorem presented in [20]. Moreover, we can achieve error reduction for \(\textsf{BQ}_{\textsc{U}}\textsf{SPACE}[s(n),a(n),b(n)]\) as long as \(a(n)-b(n)\geq 2^{-O(s(n))}\), which follows from [10] or our space-efficient QSVT-based construction in Section 3.4. We thereby define \(\textsf{BQ}_{\textsc{U}}\textsf{SPACE}[s(n)]:=\textsf{BQ}_{\textsc{U}}\textsf{SPACE}[s(n),2/3,1/3]\) to represent (two-sided) bounded-error unitary quantum space, and \(\textsf{BQ}_{\textsc{U}}\textsf{L}:=\textsf{BQ}_{\textsc{U}}\textsf{SPACE}[O(\log n)]\) to denote unitary quantum logspace.
We next consider general space-bounded quantum computation, which allows _intermediate quantum measurements_. As indicated in [1, Section 4.1], for any quantum channel \(\Phi\) mapping from density matrices on \(k_{1}\) qubits to density matrices on \(k_{2}\) qubits, we can exactly simulate this quantum channel \(\Phi\) by a unitary quantum circuit acting on \(2k_{1}+k_{2}\) qubits. Therefore, we extend Definition 2.5 to _general quantum circuits_, which allow local operations, such as intermediate measurements in the computational basis, resetting qubits to their initial states, and tracing out qubits. Now we proceed with a definition of \(\textsf{BQSPACE}[s(n)]\).
**Definition 2.7** (\(\textsf{BQSPACE}[s(n),a(n),b(n)]\), adapted from Definition 7 in [10]).: Let \(s\colon\mathbb{N}\to\mathbb{N}\) be a space-constructible function such that \(s(n)\geq\Omega(\log n)\). Let \(a(n)\) and \(b(n)\) be functions that are computable in deterministic space \(s(n)\). A promise problem \((\mathcal{L}_{\text{yes}},\mathcal{L}_{\text{no}})\) is in \(\textsf{BQSPACE}[s(n),a(n),b(n)]\) if there exists a family of \(s(n)\)-space-bounded general quantum circuits \(\{\Phi_{x}\}_{x\in\mathcal{L}}\), where \(n=|x|\), satisfying the following:
* The output qubit is measured in the computational basis after applying \(\Phi_{x}\). We say that \(\Phi_{x}\)_accepts_\(x\) if the measurement outcome is \(1\), whereas \(\Phi_{x}\)_rejects_\(x\) if the outcome is \(0\).
* \(\Pr[\Phi_{x}\text{ accepts }x]\geq a(|x|)\) if \(x\in\mathcal{L}_{\text{yes}}\), whereas \(\Pr[\Phi_{x}\text{ accepts }x]\leq b(|x|)\) if \(x\in\mathcal{L}_{\text{no}}\).
It is noteworthy that unitary quantum circuits, which correspond to unitary channels, are a specific instance of general quantum circuits that correspond to quantum channels. We thus infer that \(\textsf{BQ}_{\textsc{U}}\textsf{SPACE}[s(n)]\subseteq\textsf{BQSPACE}[s(n)]\) for any \(s(n)\geq\Omega(\log n)\). However, the opposite direction was a long-standing open problem. Recently, Fefferman and Remscrim [10] demonstrated a remarkable result that \(\textsf{BQSPACE}[s(n)]\subseteq\textsf{BQ}_{\textsc{U}}\textsf{SPACE}[O(s(n))]\). In addition, it is evident that \(\textsf{BQSPACE}[s(n)]\) can achieve error reduction since it admits sequential repetition simply by resetting working qubits. Therefore, we define \(\textsf{BQSPACE}[s(n)]:=\textsf{BQSPACE}[s(n),2/3,1/3]\)
to represent (two-sided) bounded-error general quantum space, and denote general quantum logspace by \(\mathsf{BQL}:=\mathsf{BQSPACE}[O(\log n)]\).
We now turn our attention to _one-sided_ bounded-error unitary quantum space \(\mathsf{RQ}_{\mathsf{U}}\mathsf{SPACE}[s(n)]\) and \(\mathsf{coRQ}_{\mathsf{U}}\mathsf{SPACE}[s(n)]\) for \(s(n)\geq\Omega(\log n)\). These complexity classes were first introduced by Watrous [20] and have been further discussed in [14]. We proceed with the definitions:
* \(\mathsf{RQ}_{\mathsf{U}}\mathsf{SPACE}[s(n),a(n)]:=\mathsf{BQ}_{\mathsf{U}} \mathsf{SPACE}[s(n),a(n),0]\);
* \(\mathsf{coRQ}_{\mathsf{U}}\mathsf{SPACE}[s(n),b(n)]:=\mathsf{BQ}_{\mathsf{U}} \mathsf{SPACE}[s(n),1,b(n)]\).
Note that \(\mathsf{RQ}_{\mathsf{U}}\mathsf{SPACE}[s(n),a(n)]\) and \(\mathsf{coRQ}_{\mathsf{U}}\mathsf{SPACE}[s(n),b(n)]\) can achieve error reduction, as shown in [20] or our space-efficient QSVT-based construction in Section 3.4. We define
\[\mathsf{RQ}_{\mathsf{U}}\mathsf{SPACE}[s(n)]:=\mathsf{BQ}_{\mathsf{U}} \mathsf{SPACE}\big{[}s(n),\tfrac{1}{2},0\big{]}\text{ and }\mathsf{coRQ}_{\mathsf{U}}\mathsf{SPACE}[s(n)]:=\mathsf{BQ}_{\mathsf{U}} \mathsf{SPACE}\big{[}s(n),1,\tfrac{1}{2}\big{]}\]
to represent one-sided bounded-error unitary quantum space, as well as logspace counterparts
\[\mathsf{RQ}_{\mathsf{U}}\mathsf{L}:=\mathsf{RQ}_{\mathsf{U}}\mathsf{SPACE}[O( \log n)]\text{ and }\mathsf{coRQ}_{\mathsf{U}}\mathsf{L}:=\mathsf{coRQ}_{\mathsf{U}} \mathsf{SPACE}[O(\log n)].\]
_Remark 2.8_ (\(\mathsf{RQ}_{\mathsf{U}}\mathsf{L}\) and \(\mathsf{coRQ}_{\mathsf{U}}\mathsf{L}\) are gateset-dependent).: We observe that changing the gateset in the space-efficient Solovay-Kitaev theorem [20] can cause errors, revealing the _gateset-dependence_ of unitary quantum space classes with one-sided bounded-error. To address this issue, we adopt a larger gateset \(\mathcal{G}\) for \(\mathsf{RQ}_{\mathsf{U}}\mathsf{SPACE}[s(n)]\) and \(\mathsf{coRQ}_{\mathsf{U}}\mathsf{SPACE}[s(n)]\), which includes any single-qubit gates whose amplitudes can be computed in deterministic \(O(s(n))\) space.
### Near-minimax approximation by Chebyshev interpolation
We will define Chebyshev polynomials and introduce Chebyshev interpolation, which is notable for providing _near-minimax approximations_. These concepts are essential to our space-efficient quantum singular value transformation techniques (Section 3).
**Definition 2.9** (Chebyshev polynomials).: The Chebyshev polynomials (of the first kind) \(T_{k}(x)\) are defined via the following recurrence relation: \(T_{0}(x):=1\), \(T_{1}(x):=x\), and \(T_{k+1}(x):=2xT_{k}(x)-T_{k-1}(x)\). For \(x\in[-1,1]\), an equivalent definition is \(T_{k}(\cos\theta)=\cos(k\theta)\).
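The two characterizations in Definition 2.9 are easy to check numerically; the short Python sketch below (the degree and test points are arbitrary) evaluates \(T_{k}\) via the recurrence and compares it with \(\cos(k\arccos x)\).

```python
import numpy as np

def T(k, x):
    # Chebyshev polynomials of the first kind via the recurrence of Definition 2.9
    t_prev, t_curr = np.ones_like(x), x
    if k == 0:
        return t_prev
    for _ in range(k - 1):
        t_prev, t_curr = t_curr, 2 * x * t_curr - t_prev
    return t_curr

xs = np.linspace(-1, 1, 7)
print(np.allclose(T(5, xs), np.cos(5 * np.arccos(xs))))   # T_k(cos t) = cos(k t)
```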
In order to use Chebyshev polynomials for interpolation, we first need to define an inner product between two functions, \(f\) and \(g\), as long as the following integral exists:
\[\langle f,g\rangle:=\frac{2}{\pi}\int_{-1}^{1}\frac{f(x)g(x)}{\sqrt{1-x^{2}}} \mathrm{d}x=\frac{2}{\pi}\int_{-\pi}^{0}f(\cos\theta)g(\cos\theta)\mathrm{d}\theta.\]
The Chebyshev polynomials form an orthonormal basis in this inner product space induced by \(\langle\cdot,\cdot\rangle\). As a result, any degree-\(d\) polynomial \(P_{d}\) can be represented as a linear combination of Chebyshev polynomials using a technique called _Chebyshev interpolation_, see [20, Section 6.5] for the details. In particular, \(P_{d}=\frac{1}{2}\langle T_{0},P_{d}\rangle+\sum_{k=1}^{d}\langle T_{k},P_{d} \rangle T_{k}\). It is noteworthy that Lemma 2.10 is first proven in [15].
**Lemma 2.10** (Near-minimax approximation by Chebyshev interpolation, adapted from Theorem 6.13 in [20]).: _For any continuous function \(f\colon[-1,1]\to\mathbb{R}\), if there exists an explicit degree-\(d\) polynomial \(\hat{P}_{d}\in\mathbb{R}[x]\) such that \(\max_{x\in[-1,1]}|f(x)-\hat{P}_{d}(x)|\leq\epsilon\), then we know that \(P_{d}=\frac{1}{2}\langle T_{0},f\rangle+\sum_{k=1}^{d}\langle T_{k},f\rangle T _{k}\) satisfies \(\max_{x\in[-1,1]}|f(x)-P_{d}(x)|\leq O(\epsilon\log d)\)._
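As a quick numerical illustration of Lemma 2.10, the following Python sketch (the test function, degree, and quadrature resolution are arbitrary choices) computes the coefficients \(\langle T_{k},f\rangle\) by a composite trapezium rule in the variable \(\theta=\arccos x\) and reports the uniform error of the resulting Chebyshev interpolant.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

f = lambda x: np.exp(x - 1)          # a continuous function bounded by 1 on [-1, 1]
d, m = 15, 50_000                    # interpolation degree and number of quadrature intervals

# c_k = <T_k, f> = (2/pi) * \int_{-pi}^{0} f(cos t) cos(k t) dt, by the composite trapezium rule
ts = np.linspace(-np.pi, 0, m + 1)
g = f(np.cos(ts))

def trapezium(vals):
    return (np.pi / m) * (vals[0] / 2 + vals[1:-1].sum() + vals[-1] / 2)

c = np.array([(2 / np.pi) * trapezium(g * np.cos(k * ts)) for k in range(d + 1)])

coeffs = c.copy()
coeffs[0] /= 2                       # P_d = c_0/2 + sum_{k>=1} c_k T_k
xs = np.linspace(-1, 1, 2001)
print("max |f - P_d| on [-1, 1]:", np.abs(f(xs) - C.chebval(xs, coeffs)).max())
```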
### Tools for space-bounded randomized and quantum algorithms
Our convention assumes that for any algorithm \(\mathcal{A}\) in bounded-error randomized time \(t(n)\) and space \(s(n)\), \(\mathcal{A}\) outputs the correct value with probability at least \(2/3\) (viewed as "success probability"). We first proceed with space-efficient success probability estimation.
**Lemma 2.11** (Space-efficient success probability estimation by sequential repetitions).: _Let \(\mathcal{A}\) be a randomized (resp., quantum) algorithm that outputs the correct value with probability \(p\), has time complexity \(t(n)\), and space complexity \(s(n)\). We can obtain an additive-error estimation \(\hat{p}\) such that \(|p-\hat{p}|\leq\epsilon\), where \(\epsilon\geq 2^{-O(s(n))}\). Moreover, this estimation can be computed in bounded-error randomized (resp., quantum) time \(O(\epsilon^{-2}t(n))\) and space \(O(s(n))\)._
Proof.: Consider an \(m\)-time sequential repetition of the algorithm \(\mathcal{A}\), and let \(X_{i}\) be a random variable indicating whether the \(i\)-th repetition succeeds, then we obtain a random variable \(X=\frac{1}{m}\sum_{i=1}^{m}X_{i}\) such that \(\mathbb{E}[X]=p\). Now let \(\hat{X}=\frac{1}{m}\sum_{i=1}^{m}\hat{X}_{i}\) be the additive-error estimation, where \(\hat{X}_{i}\) is the outcome of \(\mathcal{A}\) in the \(i\)-th repetition. By the Chernoff-Hoeffding bound (e.g., Theorem 4.12 in [10]), we know that \(\Pr\Bigl{[}|\hat{X}-p|\geq\epsilon\Bigr{]}\leq 2\exp(-2m\epsilon^{2})\). By choosing \(m=2\epsilon^{-2}\), we ensure that \(|\hat{X}-p|\leq\epsilon\) holds with probability at least \(2/3\).
Furthermore, the space complexity of our algorithm is \(O(s(n))\) since we can simply reuse the workspace. Also, the time complexity is \(m\cdot t(n)=O(\epsilon^{-2}t(n))\) as desired.
Notably, when applying Lemma 2.11 to a quantum algorithm, we introduce intermediate measurements to retain space complexity through reusing working qubits. While space-efficient success probability estimation without intermediate measurements is possible,23 we will use Lemma 2.11 for convenience, given that \(\mathsf{BQL}=\mathsf{BQ}_{\mathsf{U}}\mathsf{L}\)[11].
Footnote 23: Fefferman and Lin [11] noticed that one can achieve space-efficient success probability estimation for quantum algorithms without intermediate measurements via quantum amplitude estimation [1].
The SWAP test was originally proposed for pure states in [1]. Subsequently, in [10], it was demonstrated that the SWAP test can also be applied to mixed states.
**Lemma 2.12** (SWAP test for mixed states, adapted from [10, Proposition 9]).: _Suppose \(\rho_{0}\) and \(\rho_{1}\) are two \(n\)-qubit mixed quantum states. There is a \((2n+1)\)-qubit quantum circuit that outputs \(0\) with probability \(\frac{1+\operatorname{Tr}(\rho_{0}\rho_{1})}{2}\), using \(1\) sample of each \(\rho_{0}\) and \(\rho_{1}\) and \(O(n)\) one- and two-qubit quantum gates._
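A direct density-matrix simulation illustrates Lemma 2.12. The Python sketch below (the two example states are arbitrary, and the simulation is dense rather than gate-by-gate) applies Hadamard, controlled-SWAP, and Hadamard to a fresh ancilla and checks that the outcome-\(0\) probability equals \(\frac{1+\operatorname{Tr}(\rho_{0}\rho_{1})}{2}\).

```python
import numpy as np

def swap_operator(dim):
    # SWAP on two dim-dimensional registers: |i>|j> -> |j>|i>
    S = np.zeros((dim * dim, dim * dim))
    for i in range(dim):
        for j in range(dim):
            S[i * dim + j, j * dim + i] = 1
    return S

def swap_test_prob0(rho0, rho1):
    dim = rho0.shape[0]
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    I_regs = np.eye(dim * dim)
    P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
    # controlled-SWAP on (ancilla, register 0, register 1), sandwiched by Hadamards
    CSWAP = np.kron(P0, I_regs) + np.kron(P1, swap_operator(dim))
    U = np.kron(H, I_regs) @ CSWAP @ np.kron(H, I_regs)
    state = np.kron(np.diag([1.0, 0.0]), np.kron(rho0, rho1)).astype(complex)
    out = U @ state @ U.conj().T
    # probability of measuring the ancilla in |0>
    return np.trace(np.kron(P0, I_regs) @ out).real

rho0 = np.array([[0.8, 0.1], [0.1, 0.2]])
rho1 = np.array([[0.4, 0.0], [0.0, 0.6]])
print(swap_test_prob0(rho0, rho1), (1 + np.trace(rho0 @ rho1).real) / 2)
```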
A matrix \(B\) is said to be _sub-stochastic_ if all its entries are non-negative and the sum of entries in each row (respectively, column) is strictly less than \(1\). Moreover, a matrix \(B\) is _row-stochastic_ if all its entries are non-negative and the sum of entries in each row is equal to \(1\).
**Lemma 2.13** (Sub-stochastic matrix powering in bounded space).: _Let \(B\) be an \(l\times l\) sub-stochastic matrix, where each entry of \(B\) requires at most \(\ell\)-bit precision. Then, there exists an explicit randomized algorithm that computes the matrix power \(B^{k}[s,t]\) in \(\log(l+1)\) space and \(O(\ell k)\) time. Specifically, the algorithm accepts with probability \(B^{k}[s,t]\)._
Proof.: Our randomized algorithm leverages the equivalence between space-bounded randomized computation and Markov chains, see [11, Section 2.4] for a detailed introduction.
First, we construct a row-stochastic matrix \(\hat{B}\) from \(B\) by adding an additional column and row. Let \(\hat{B}[i,j]\) denote the entry at the \(i\)-th column and the \(j\)-th row of \(\hat{B}\). Specifically,
\[\hat{B}[i,j]:=\begin{cases}B[i,j],&\text{ if }1\leq i,j\leq l;\\ 1-\sum_{i^{\prime}=1}^{l}\hat{B}[i^{\prime},j],&\text{ if }i=l+1\text{ and }1\leq j\leq l+1;\\ 0,&\text{ if }1\leq i\leq l\text{ and }j=l+1.\end{cases}\]
Next, we view \(\hat{B}\) as a transition matrix of a Markov chain since \(\hat{B}\) is row-stochastic. We consequently have a random walk on the directed graph \(G=(V,E)\) where \(V=\{1,2,\cdots,l\}\cup\{\bot\}\) and \((u,v)\in E\) iff \(\hat{B}(u,v)>0\). In particular, the probability that a \(k\)-step random walk starting at node \(s\) ends at node \(t\) is exactly \(\hat{B}^{k}[s,t]=B^{k}[s,t]\). This is because a walker who visits the dummy node \(\bot\) will never reach other nodes.
Finally, since \(\hat{B}\) is an \((l+1)\times(l+1)\) matrix, the random walk can be simulated using \(\log(l+1)\) space. In addition, the overall time complexity is \(O(\ell k)\) since we simulate the dyadic rationals (with \(\ell\)-bit precision) of a single transition exactly by \(\ell\) coin flips.
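The random-walk estimator behind Lemma 2.13 can be prototyped directly. In the Python sketch below (the matrix \(B\), the power \(k\), and the number of walks are illustrative; rows of \(B\) index the starting node, the usual convention), each walk only stores the current node, mirroring the logarithmic space usage, and the empirical frequency is compared against \(B^{k}[s,t]\).

```python
import numpy as np

rng = np.random.default_rng(1)

def power_entry_by_walks(B, k, s, t, walks=100_000):
    """Estimate B^k[s, t] for a sub-stochastic B via k-step random walks.

    A dummy absorbing node collects the leftover probability of each row,
    turning B into a row-stochastic matrix as in the proof of Lemma 2.13.
    """
    l = B.shape[0]
    hatB = np.zeros((l + 1, l + 1))
    hatB[:l, :l] = B
    hatB[:l, l] = 1 - B.sum(axis=1)   # leftover mass goes to the dummy node
    hatB[l, l] = 1.0                  # the dummy node is absorbing
    hits = 0
    for _ in range(walks):
        v = s
        for _ in range(k):
            v = rng.choice(l + 1, p=hatB[v])
        hits += (v == t)
    return hits / walks

B = np.array([[0.3, 0.4],
              [0.2, 0.5]])
k, s, t = 3, 0, 1
print(power_entry_by_walks(B, k, s, t), np.linalg.matrix_power(B, k)[s, t])
```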
## 3 Space-efficient quantum singular value transformations
We begin by defining the _projected unitary encoding_ and its special forms, viz. the bitstring indexed encoding and the block-encoding, as well as notations on _singular value decomposition_ and _singular value transformation_.
**Definition 3.1** (Projected unitary encoding and its special forms, adapted from [1]).: We say that \(U\) is an \((\alpha,a,\epsilon)\)-projected unitary encoding of a linear operator \(A\) if \(\|A-\alpha\tilde{\Pi}U\Pi\|\leq\epsilon\), where \(U\) and orthogonal projectors \(\tilde{\Pi}\) and \(\Pi\) act on \(s+a\) qubits, and both \(\operatorname{rank}(\tilde{\Pi})\) and \(\operatorname{rank}(\Pi)\) are at least \(2^{a}\) (\(a\) is viewed as the number of ancillary qubits). Furthermore, we are interested in two special forms of the projected unitary encoding:
* **Bitstring indexed encoding.** We say that a projected unitary encoding is a _bitstring indexed encoding_ if the orthogonal projectors \(\tilde{\Pi}\) and \(\Pi\) project onto subspaces spanned by subsets \(\tilde{S},S\subseteq\{|0\rangle,|1\rangle\}^{\otimes(a+s)}\), respectively.24 In particular, for any \(|\tilde{s_{i}}\rangle\in\tilde{S}\) and \(|s_{j}\rangle\in S\), we have a matrix representation \(A_{\tilde{S},S}(i,j):=\langle\tilde{s}_{i}|U|s_{j}\rangle\) of \(A\). Footnote 24: Typically, to ensure these orthogonal projectors coincide with space-bounded quantum computation, we additionally require that the corresponding subsets \(\tilde{S}\) and \(S\) admit space-efficient set membership, namely deciding membership in these subsets is in deterministic \(O(s+a)\) space.
* **Block-encoding.** We say that a projected unitary encoding is a block-encoding if both orthogonal projectors are of the form \(\Pi=\tilde{\Pi}=|0\rangle\langle 0|^{\otimes a}\otimes I_{s}\). We use the shorthand \(A=(\langle\bar{0}|\otimes I_{s})U(|\bar{0}\rangle\otimes I_{s})\) for convenience.
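As a toy illustration of the block-encoding case, the Python sketch below builds, for an explicitly given matrix \(A\) with \(\|A\|\leq 1\), the standard two-block unitary dilation and checks that its top-left block recovers \(A\). This is only a dense-matrix sanity check of the definition, not the space-efficient circuit-level construction used in the paper; the example matrix is arbitrary.

```python
import numpy as np
from scipy.linalg import sqrtm

def block_encode(A):
    """A (1, 1, 0)-block-encoding of a square matrix A with ||A|| <= 1:
    U = [[A, sqrt(I - A A^dag)], [sqrt(I - A^dag A), -A^dag]],
    so that (<0| (x) I) U (|0> (x) I) = A."""
    n = A.shape[0]
    I = np.eye(n)
    top = np.hstack([A, sqrtm(I - A @ A.conj().T)])
    bot = np.hstack([sqrtm(I - A.conj().T @ A), -A.conj().T])
    return np.vstack([top, bot])

A = np.array([[0.3, 0.2], [0.1, -0.4]])
U = block_encode(A)
print(np.allclose(U @ U.conj().T, np.eye(4)))   # U is unitary
print(np.allclose(U[:2, :2], A))                # top-left block recovers A
```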
**Definition 3.2** (Singular value decomposition of a projected unitary, adapted from Definition 7 in [1]).: Given a projected unitary encoding of \(A\), denoted by \(U\), associated with orthogonal projectors \(\Pi\) and \(\tilde{\Pi}\) on a finite-dimensional Hilbert space \(\mathcal{H}_{U}\), namely \(A=\tilde{\Pi}U\Pi\), there exist orthonormal bases of \(\Pi\) and \(\tilde{\Pi}\): a basis \(\left\{|\psi_{i}\rangle:i\in[d]\right\}\), where \(d:=\operatorname{rank}(\Pi)\), of the subspace \(\operatorname{Img}(\Pi)=\operatorname{span}\left\{|\psi_{i}\rangle\right\}\), and a basis \(\left\{|\tilde{\psi}_{i}\rangle:i\in[\tilde{d}]\right\}\), where \(\tilde{d}:=\operatorname{rank}(\tilde{\Pi})\), of the subspace \(\operatorname{Img}(\tilde{\Pi})=\operatorname{span}\left\{|\tilde{\psi}_{i}\rangle\right\}\). These bases ensure the singular value decomposition \(\tilde{\Pi}U\Pi=\sum_{i=1}^{\min\{d,\tilde{d}\}}\sigma_{i}|\tilde{\psi}_{i}\rangle\langle\psi_{i}|\), where the singular values satisfy \(\sigma_{i}>\sigma_{j}\) for any \(i<j\in[\min\{d,\tilde{d}\}]\).
**Definition 3.3** (Singular value transformation by even or odd functions, adapted from Definition 9 in [1]).: Let \(f\colon\mathbb{R}\to\mathbb{C}\) be an even or odd function. We consider a linear operator \(A\in\mathbb{C}^{\tilde{d}\times d}\) satisfying the singular value decomposition \(A=\sum_{i=1}^{\min\{d,\tilde{d}\}}\sigma_{i}|\tilde{\psi}_{i}\rangle\langle\psi_{i}|\). We define the _singular value transformation_ corresponding to \(f\) as follows:
\[f^{\rm(SV)}(A):=\begin{cases}\sum_{i=1}^{\min\{d,\tilde{d}\}}f(\sigma_{i})| \tilde{\psi}_{i}\rangle\langle\psi_{i}|,&\text{for odd $f$},\\ \sum_{i=1}^{d}f(\sigma_{i})|\psi_{i}\rangle\langle\psi_{i}|,&\text{for even $f$}.\end{cases}\]
Here, for \(i\in\{\min\{d,\tilde{d}\}+1,\cdots,d-1,d\}\), we define \(\sigma_{i}:=0\).
It is worth noting that \(f^{\rm(SV)}(A)=f(A)\) when \(A\) is an Hermitian matrix.
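For intuition, the singular value transformation of Definition 3.3 can be carried out directly on a small explicit matrix via its SVD. The Python sketch below (the matrices and the odd polynomial \(T_{3}\) are illustrative) also checks the remark that \(f^{\rm(SV)}(A)=f(A)\) for a Hermitian \(A\) and an odd \(f\).

```python
import numpy as np

def svt(A, f, odd=True):
    # Singular value transformation f^(SV)(A) of a square matrix A (Definition 3.3),
    # computed classically from the singular value decomposition A = W diag(sigma) Vh.
    W, sigma, Vh = np.linalg.svd(A)
    if odd:
        return W @ np.diag(f(sigma)) @ Vh              # sum_i f(sigma_i)|tilde-psi_i><psi_i|
    return Vh.conj().T @ np.diag(f(sigma)) @ Vh        # sum_i f(sigma_i)|psi_i><psi_i|

T3 = lambda x: 4 * x**3 - 3 * x                        # odd Chebyshev polynomial T_3

A = np.array([[0.5, 0.1], [0.2, 0.3]])
print(svt(A, T3, odd=True))

# For a Hermitian matrix and an odd function, the SVT coincides with the matrix function.
H = np.array([[0.4, 0.1], [0.1, -0.2]])
ev, evec = np.linalg.eigh(H)
print(np.allclose(svt(H, T3, odd=True), evec @ np.diag(T3(ev)) @ evec.conj().T))
```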
With these definitions in place, we present the main (informal) theorem in this section:
**Theorem 3.4** (Space-efficient QSVT).: _Let \(f\colon\mathbb{R}\to\mathbb{R}\) be a continuous function bounded on the closed interval of interest \(\mathcal{I}\subseteq[-1,1]\). If there exists a degree-\(d\) polynomial \(\hat{P}_{d}\) that approximates \(h\colon[-1,1]\to\mathbb{R}\), where \(h\) approximates \(f\) only on \(\mathcal{I}\), such that \(\max_{x\in[-1,1]}|h(x)-\hat{P}_{d}(x)|\leq\epsilon\), then Chebyshev interpolation yields another degree-\(d\) polynomial \(P_{d}\) satisfying the following conditions: \(\max_{x\in\mathcal{I}}|f(x)-P_{d}(x)|\leq O(\epsilon\log d)\) and \(\max_{x\in[-1,1]}|P_{d}(x)|\leq 1\)._
_Moreover, we have space-efficient classical algorithms for computing any entry in the coefficient vector \(\mathbf{c}\) of the Chebyshev interpolation polynomial \(P_{d}\):_
* _If_ \(f\) _is a bounded function_,_25 _then any entry in the coefficient vector_ \(\mathbf{c}\) _can be computed in deterministic_ \(O(\log d)\) _space;_ Footnote 25: This conclusion also applies to a linear combination of bounded functions, provided that the coefficients are bounded and can be computed deterministically and space-efficiently.
* _If_ \(f\) _is a piecewise-smooth function, then any entry in the coefficient vector_ \(\mathbf{c}\) _can be computed in bounded-error randomized_ \(O(\log d)\) _space._
_Furthermore, for any \((1,a,0)\)-bitstring indexed encoding \(U\) of \(A=\tilde{\Pi}U\Pi\), acting on \(s+a\) qubits where \(a(n)\leq s(n)\), and any \(P_{d}\) with \(d\leq 2^{O(s(n))}\), we can implement the quantum singular value transformation \(P_{d}^{(\mathrm{SV})}(A)\) that acts on \(O(s(n))\) qubits by using \(O(d^{2}\|\mathbf{c}\|_{1})\) queries to \(U\)._
We remark that we can apply Theorem 3.4 to general forms of the projected unitary encoding \(U\) with orthogonal projectors \(\Pi\) and \(\tilde{\Pi}\), as long as such an encoding meets the conditions: (1) The basis of \(\Pi\) and \(\tilde{\Pi}\) admits a well-defined order; (2) Both controlled-\(\Pi\) and controlled-\(\tilde{\Pi}\) admit computationally efficient implementation. We note that bitstring indexed encoding defined in Definition 3.1 trivially meets the first condition, and a sufficient condition for the second condition is that the corresponding subsets \(S\) and \(\tilde{S}\) have space-efficient set membership.
Specifically, we elaborate on three main technical contributions that culminate in our space-efficient quantum singular value transformations (Theorem 3.4):
* We provide deterministic space-efficient polynomial approximations for _bounded_ functions (Lemma 3.5), including the sign function (Corollary 3.7). Our approach leads to a simple proof of space-efficient error reduction for unitary quantum computations (Section 3.4).
* We present bounded-error randomized space-efficient polynomial approximations for _piecewise-smooth_ functions (Theorem 3.8), such as the normalized logarithmic function (Corollary 3.10).
* We propose QSVT implementations using Chebyshev interpolation polynomials (Theorem 3.11), including those for the sign function (Corollary 3.16) and the normalized logarithmic function (Corollary 3.17).
### Space-efficient bounded polynomial approximations
We provide a systematic approach for constructing _space-efficient_ polynomial approximations of real-valued piecewise-smooth functions, which is a space-efficient counterpart of Corollary 23 in [10]. It is worth mentioning that our algorithm (Lemma 3.5) is _deterministic_ for continuous functions that are bounded on the interval \([-1,1]\). However, for general piecewise-smooth functions, we only introduce a _randomized_ algorithm (Theorem 3.8). In addition, please refer to Section 2.3 as a brief introduction to Chebyshev polynomials and Chebyshev interpolation.
#### 3.1.1 Bounded functions
We propose a space-efficient algorithm for computing the coefficients of a polynomial approximation with high accuracy for bounded functions. Our approach uses Chebyshev interpolation and numerical integration, building upon the methodology outlined in Lemma 2.10 of [11] with meticulous analysis.
**Lemma 3.5** (Space-efficient polynomial approximations for bounded functions).: _Consider a continuous function \(f\), and let \(\hat{P}_{d}^{(f)}\) be a degree-\(d\) polynomial with the same parity as \(f\), such that \(\max_{x\in[-1,1]}\lvert f(x)-\hat{P}_{d}^{(f)}(x)\rvert\leq\epsilon\), where \(f\) is bounded with \(\max_{x\in[-1,1]}\left\lvert f(x)\right\rvert\leq B\). By using
Chebyshev interpolation, we can obtain another degree-\(d\) polynomial \(P_{d}^{(f)}\) that has the same parity as \(\hat{P}_{d}^{(f)}\) and satisfies \(\max_{x\in[-1,1]}|f(x)-P_{d}^{(f)}(x)|\leq O(\epsilon\log d)\). This polynomial \(P_{d}^{(f)}\) is defined as a linear combination of Chebyshev polynomials \(T_{k}(\cos\theta)=\cos(k\theta)\):_
\[P_{d}^{(f)}(x)=\frac{c_{0}}{2}+\sum_{k=1}^{d}c_{k}T_{k}(x)\text{ where }c_{k}:=\frac{2}{\pi}\int_{-\pi}^{0}F_{k}(\theta)\mathrm{d}\theta\text{ and }F_{k}(\theta):=\cos(k\theta)f(\cos\theta).\]
_If the integrand \(F_{k}(\theta)\) satisfies \(\max_{\xi\in[-\pi,0]}|F_{k}^{\prime\prime}(\xi)|\leq O(d^{\gamma})\) for some constant \(\gamma\), then any entry of the coefficient vector \(\mathbf{c}=(c_{0},\cdots,c_{d})\) can be computed in deterministic time \(O(d^{(\gamma+1)/2}\epsilon^{-1/2}t(\ell))\) and space \(O(\log(d^{\gamma+1}\epsilon^{-1}B))\), where evaluating \(F(\theta)\) in \(\ell\)-bit precision is in deterministic time \(t(\ell)\) and space \(O(\ell)\) for \(\ell=O(\log(d^{(\gamma+1)/2}\epsilon^{-3/2}))\). Furthermore, the coefficient vector \(\mathbf{c}\) has a norm bounded by \(\|\mathbf{c}\|_{1}\leq O(Bd)\)._
Proof.: To apply Chebyshev interpolation to a bounded continuous function \(f(x)\), we begin with a degree-\(d\) polynomial \(\hat{P}_{d}^{(f)}\) such that \(\max_{x\in[-1,1]}|f(x)-\hat{P}_{d}^{(f)}(x)|\leq\epsilon\). By utilizing Lemma 2.10, we can construct a degree-\(d\) Chebyshev interpolation of \(f(x)\) denoted as \(P_{d}^{(f)}\). This interpolation is expressed as \(P_{d}^{(f)}=c_{0}/2+\sum_{k=1}^{d}c_{k}T_{k}\), where \(c_{k}=\frac{2}{\pi}\int_{-\pi}^{0}F_{k}(\theta)\mathrm{d}\theta\text{ and }F_{k}(\theta):=\cos(k\theta)f(\cos\theta)\), and additionally satisfies the error bound: \(\max_{x\in[-1,1]}\left|f(x)-P_{d}^{(f)}(x)\right|\leq O(\epsilon\log d)\).
**Computing the coefficients.** It is left to compute the coefficients \(c_{k}\) for \(0\leq k\leq d\). We can estimate the numerical integration using _the composite trapezium rule_, as described in [14, Section 7.5]. The application of this method yields the following result:
\[\int_{-\pi}^{0}\!\!F_{k}(x)\mathrm{d}x\approx\frac{\pi}{m}\Big{(}\frac{F_{k}( x_{0})}{2}+\sum_{l=1}^{m}F_{k}(x_{l})+\frac{F_{k}(x_{m})}{2}\Big{)}\text{ where }x_{l}:=-\pi+\frac{\pi l}{m}\text{ for }l=0,1,\cdots,m. \tag{3.1}\]
Moreover, we know the upper bound on the numerical errors for computing the coefficient \(c_{k}\):
\[\varepsilon_{d,k}^{(f)}:=\sum_{l=1}^{m}\Big{|}\int_{x_{l-1}}^{x_{l}}F_{k}(x)\mathrm{d}x-\frac{\pi}{2m}\cdot(F_{k}(x_{l-1})+F_{k}(x_{l}))\,\Big{|}\leq\frac{\pi^{3}}{12m^{2}}\max_{\xi\in[-\pi,0]}\big{|}F_{k}^{\prime\prime}(\xi)\big{|}\,. \tag{3.2}\]
To obtain an upper bound on the number of intervals \(m\), we need to ensure that the error of the numerical integration is within \(\varepsilon_{d}^{(f)}=\sum_{k=1}^{d}\varepsilon_{d,k}^{(f)}\leq\epsilon\). Plugging the assumption \(|F_{k}^{\prime\prime}(x)|\leq O(d^{\gamma})\) into Equation (3.2), by choosing an appropriate value of \(m=O(\epsilon^{-1/2}d^{(\gamma+1)/2})\), we establish that \(\varepsilon_{d}^{(f)}\leq O(d^{\gamma+1}/m^{2})\leq O(\epsilon)\). Moreover, to guarantee that the accumulated error is \(O(\epsilon)\) in Equation (3.1), we need to evaluate the integrand \(F(\theta)\) with \(\ell\)-bit precision, where \(\ell=O(\log(m/\epsilon))=O(\log(\epsilon^{-3/2}d^{(\gamma+1)/2}))\). In addition, since \(c_{k}=\frac{2}{\pi}\int_{-\pi}^{0}F_{k}(\theta)\mathrm{d}\theta\leq 2\cdot\max_{x\in[-1,1]}|f(x)|\leq 2B\), we know that the coefficient vector \(\mathbf{c}\) satisfies \(\|\mathbf{c}\|_{1}=\sum_{k=0}^{d}|c_{k}|\leq O(Bd)\).
**Analyzing time and space complexity.** The presented numerical integration algorithm is deterministic, and therefore, the time complexity for computing the integral is \(O(mt(\ell))\), where \(t(\ell)\) is the time complexity for evaluating the integrand \(F_{k}(\theta)\) within \(2^{-\ell}\) accuracy (i.e., \(\ell\)-bit precision) in \(O(\ell)\) space. The space complexity required for computing the numerical integration is the number of bits required to index the integral intervals and represent the resulting coefficients. To be specific, the space complexity is
\[\max\!\big{\{}O(\log m),O\big{(}\log\frac{m}{\epsilon}\big{)},\log\|\mathbf{c}\|_{\infty}\big{\}}\leq O\big{(}\!\max\!\big{\{}\log\!\big{(}\epsilon^{-\frac{3}{2}}d^{\frac{\gamma+1}{2}}\big{)},\log B\big{\}}\big{)}\leq O\big{(}\log\!\big{(}\epsilon^{-\frac{3}{2}}d^{\frac{\gamma+1}{2}}B\big{)}\big{)}.\]
Here, \(\|\mathbf{c}\|_{\infty}=\max\limits_{0\leq k\leq d}\frac{2}{\pi}|\int_{-\pi}^{0}\cos(k\theta)f(\cos\theta)\mathrm{d}\theta|\leq\max\limits_{0\leq k\leq d}\max\limits_{-\pi\leq\theta\leq 0}O(|f(\cos\theta)|)\leq O(B)\), and the last inequality is due to the fact that \(\Theta(\max\{\log A,\log B\})=\Theta(\log(AB))\) for any \(A,B>0\).
It is worth noting that evaluating a large family of functions, called holonomic functions, with \(\ell\)-bit precision requires only _deterministic_\(O(\ell)\) space:
_Remark 3.6_ (Space-efficient evaluation of holonomic functions).: Holonomic functions encompass several commonly used functions,26 such as polynomials, rational functions, sine and cosine functions (but not other trigonometric functions such as tangent or secant), exponential functions, logarithms (to any base), the Gaussian error function, and the normalized binomial coefficients. In [11, 10], these works have demonstrated that evaluating a holonomic function with \(\ell\)-bit precision is achievable in deterministic time \(\tilde{O}(\ell)\) and space \(O(\ell)\). Prior works achieved the same time complexity, but with a space complexity of \(O(\ell\log\ell)\).
Footnote 26: For a more detailed introduction, please refer to [11, Section 4.9.2].
We now present an example of bounded functions, specifically the sign function.
**Corollary 3.7** (Space-efficient approximation to the sign function).: _For any \(\delta,\epsilon>0\), there is an explicit odd polynomial \(P_{d}^{\mathrm{sgn}}:=\frac{c_{0}}{2}+\sum_{k=1}^{d}c_{k}T_{k}\in\mathbb{R}[x]\) of degree \(d\leq\tilde{C}_{\mathrm{sgn}}\delta^{-1}\log\epsilon^{-1}\), where \(\tilde{C}_{\mathrm{sgn}}\) is a universal constant. Any entry of the coefficient vector \(\mathbf{c}:=(c_{0},\cdots,c_{d})\) can be computed in deterministic time \(\tilde{O}\big{(}\epsilon^{-1/2}d^{2}\big{)}\) and space \(O(\log(\epsilon^{-1}d^{4}))\). Furthermore, the polynomial \(P_{d}^{\mathrm{sgn}}\) satisfies the following conditions:_
\[\forall x\in[-1,1]\setminus[-\delta,\delta],\big{|}\mathrm{sgn}(x )-P_{d}^{\mathrm{sgn}}(x)\big{|}\leq C_{\mathrm{sgn}}\epsilon\log d,\text{ where }C_{\mathrm{sgn}}\text{ is a universal constant,}\] \[\forall x\in[-1,1],\big{|}P_{d}^{\mathrm{sgn}}(x)\big{|}\leq 1.\]
_Additionally, the coefficient vector \(\mathbf{c}\) has a norm bounded by \(\|\mathbf{c}\|_{1}\leq\hat{C}_{\mathrm{sgn}}\log d\), where \(\hat{C}_{\mathrm{sgn}}\) is another universal constant. Without loss of generality, we assume that all constants \(C_{\mathrm{sgn}}\), \(\hat{C}_{\mathrm{sgn}}\), and \(\tilde{C}_{\mathrm{sgn}}\) are at least \(1\)._
Proof.: We start from a degree-\(d\) polynomial \(\hat{P}_{d}^{\mathrm{sgn}}\) that well-approximates \(\mathrm{sgn}(x)\).
**Proposition 3.7.1** (Polynomial approximation of the sign function, adapted from Lemma 10 and Corollary 4 in [10]).: _For any \(\delta>0\), \(x\in\mathbb{R}\), and \(\epsilon\in(0,\sqrt{2e\pi})\), let \(\kappa=\frac{2}{\delta}\log^{1/2}\left(\frac{\sqrt{2}}{\sqrt{\pi}\epsilon}\right)\). Then_
\[g_{\delta,\epsilon}(x):=\mathrm{erf}(\kappa x)\text{ satisfies that }|g_{\delta,\epsilon}(x)|\leq 1 \text{ and }\max_{|x|\geq\delta/2}|g_{\delta,\epsilon}(x)-\mathrm{sgn}(x)|\leq\epsilon.\]
_Moreover, there is an explicit odd polynomial \(\hat{P}_{d}^{\mathrm{sgn}}\in\mathbb{R}[x]\) of degree \(d=O(\sqrt{(\kappa^{2}+\log\epsilon^{-1})\log\epsilon^{-1}})\) such that \(\max_{x\in[-1,1]}\left|\hat{P}_{d}^{\mathrm{sgn}}(x)-\mathrm{erf}(\kappa x) \right|\leq\epsilon\)_
By applying Proposition 3.7.1, we obtain a polynomial \(\hat{P}_{d}^{\mathrm{sgn}}\) that well approximates the function \(\mathrm{erf}(\kappa x)\) where \(\kappa=O(\delta^{-1}\sqrt{\log\epsilon^{-1}})\). Consequently, this polynomial \(\hat{P}_{d}^{\mathrm{sgn}}\) has a degree of \(d\leq\tilde{C}_{\mathrm{sgn}}\delta^{-1}\log\epsilon^{-1}\), where \(\tilde{C}_{\mathrm{sgn}}\) is a universal constant. Note that the Gaussian error function is bounded, namely \(|\mathrm{erf}(\kappa x)|\leq 1\) for any \(x\). To utilize Lemma 3.5, it suffices to upper bound \(\max_{\xi\in[-\pi,0]}|F_{k}^{\prime\prime}(\xi)|\) for any \(0\leq k\leq d\), as specified in Fact 3.7.2 and the proof is deferred to Appendix A.1.1.
**Fact 3.7.2**.: _Let \(F_{k}(\theta):=\mathrm{erf}(\kappa\cos\theta)\cos(k\theta)\), \(\max_{0\leq k\leq d}\max_{\xi\in[-\pi,0]}|F_{k}^{\prime\prime}(\xi)|\leq\frac{ 2}{\sqrt{\pi}}\kappa+k^{2}+\frac{4}{\sqrt{\pi}}\kappa^{3}+\frac{4}{\sqrt{\pi}}k\kappa\)._
Since \(\kappa\leq O(d)\) and \(k\leq d\), Fact 3.7.2 indicates that \(\max_{\xi\in[-\pi,0]}|F_{k}^{\prime\prime}(\xi)|\leq O(d^{3})\) for any \(0\leq k\leq d\). Hence, Lemma 3.5 yields an approximation polynomial \(\tilde{P}_{d}^{\mathrm{sgn}}\) satisfying \(\max_{x\in[-1,1]}|\mathrm{erf}(\kappa x)-\tilde{P}_{d}^{\mathrm{sgn}}(x)|\leq O(\epsilon\log d)\), which additionally derives that

\[\max_{x\in[-1,1]\setminus[-\delta,\delta]}|\mathrm{sgn}(x)-\tilde{P}_{d}^{\mathrm{sgn}}(x)|\leq\epsilon+\max_{x\in[-1,1]}|\mathrm{erf}(\kappa x)-\tilde{P}_{d}^{\mathrm{sgn}}(x)|\leq C_{\mathrm{sgn}}\epsilon\log d.\]
Here, \(C_{\mathrm{sgn}}\) is a universal constant. Moreover, we specify the bound of \(\|\tilde{\mathbf{c}}^{\mathrm{sgn}}\|_{1}\) in Fact 3.7.3, and the proof is deferred to Appendix A.1.1:
**Fact 3.7.3** (Implicit in [14, Lemma 2.10]).: _For the coefficient vector \(\tilde{\mathbf{c}}^{\mathrm{sgn}}\) corresponding to a degree-\(d\) polynomial \(\tilde{P}_{d}^{\mathrm{sgn}}\), we have \(\|\tilde{\mathbf{c}}^{\mathrm{sgn}}\|_{1}\leq\hat{C}_{\mathrm{sgn}}\log d\) where \(\hat{C}_{\mathrm{sgn}}\) is a universal constant._
In addition, the coefficient vector \(\tilde{\mathbf{c}}^{\text{sgn}}\) can be computed in deterministic space \(O(\log(d\epsilon^{-1}))\). As the evaluation of the integrand \(F(\theta)\) requires \(\ell\)-bit precision where \(\ell=O(\log(\epsilon^{-3/2}d^{2}))\), together with Remark 3.6, \(\tilde{\mathbf{c}}^{\text{sgn}}\) can be computed in deterministic time \(\tilde{O}(\epsilon^{-1/2}d^{2})\).
Finally, we obtain that \(|\tilde{P}^{\text{sgn}}_{d}(x)|\leq 1+\epsilon\) for any \(x\in[-1,1]\) since \(|\text{sgn}(x)|\leq 1\) for any \(x\). We finish the proof by normalizing \(\tilde{P}^{\text{sgn}}_{d}\), in particular, considering \(P^{\text{sgn}}_{d}(x):=(1+\epsilon)^{-1}\tilde{P}^{\text{sgn}}_{d}\). It is straightforward to verify that \(P^{\text{sgn}}_{d}\) is an odd polynomial that satisfies all desired requirements.
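To see Corollary 3.7 in action numerically, the Python sketch below computes the Chebyshev coefficients of \(\mathrm{erf}(\kappa x)\) by the composite trapezium rule of Lemma 3.5, applies the final normalization step, and reports the error away from the origin together with \(\|\mathbf{c}\|_{1}\). The parameter values, quadrature resolution, and chosen degree are illustrative; this floating-point computation is only a sanity check, not the exact space-bounded procedure.

```python
import numpy as np
from scipy.special import erf

delta, eps = 0.2, 1e-2                      # illustrative values of delta and epsilon
kappa = (2 / delta) * np.sqrt(np.log(np.sqrt(2) / (np.sqrt(np.pi) * eps)))
d, m = 101, 200_000                         # degree on the order of delta^{-1} log(1/eps)

# c_k = (2/pi) * \int_{-pi}^{0} erf(kappa cos t) cos(k t) dt, composite trapezium rule
ts = np.linspace(-np.pi, 0, m + 1)
g = erf(kappa * np.cos(ts))

def trapezium(vals):
    return (np.pi / m) * (vals[0] / 2 + vals[1:-1].sum() + vals[-1] / 2)

c = np.array([(2 / np.pi) * trapezium(g * np.cos(k * ts)) for k in range(d + 1)])
c /= 1 + eps                                # normalization step from the end of the proof

def P_sgn(x):
    t = np.arccos(np.clip(x, -1, 1))
    return c[0] / 2 + sum(c[k] * np.cos(k * t) for k in range(1, d + 1))

xs = np.linspace(-1, 1, 4001)
outside = np.abs(xs) >= delta
print("error outside [-delta, delta]:", np.abs(np.sign(xs[outside]) - P_sgn(xs[outside])).max())
print("max |P_d^sgn| on [-1, 1]    :", np.abs(P_sgn(xs)).max())
print("||c||_1                     :", np.abs(c).sum())
```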
#### 3.1.2 Piecewise-smooth functions
We present a randomized algorithm for constructing bounded polynomial approximations of piecewise-smooth functions, which can be seen as a _space-efficient_ alternative to Corollary 23 in [10], as described in Theorem 3.8. Our algorithm leverages Lemma 3.5 and Lemma 3.9.
**Theorem 3.8** (Taylor series based space-efficient bounded polynomial approximations).: _Consider a real-valued function \(f\colon[-x_{0}-r-\delta,x_{0}+r+\delta]\to\mathbb{R}\) such that \(f(x_{0}+x)=\sum_{l=0}^{\infty}a_{l}x^{l}\) for all \(x\in[-r-\delta,r+\delta]\), where \(x_{0}\in[-1,1]\), \(r\in(0,2]\), \(\delta\in(0,r]\). Assume that \(\sum_{l=0}^{\infty}(r+\delta)^{l}|a_{l}|\leq B\) where \(B>0\). Let \(\epsilon\in(0,\frac{1}{2B}]\) such that \(B>\epsilon\), then there is a polynomial \(P\in\mathbb{R}[x]\) of degree \(O(\delta^{-1}\log(\epsilon^{-1}B))\), such that any entry of the coefficient vector \(\mathbf{c}^{(P)}\) can be computed in bounded-error randomized time \(\tilde{O}(\max\{(\delta^{\prime})^{-5}\epsilon^{-2}B^{2},d^{2}\epsilon^{-1/2}B\})\) and space \(O(\log(d^{4}(\delta^{\prime})^{-4}\epsilon^{-1}B))\) where \(\delta^{\prime}:=\frac{\delta}{2(r+\delta)}\), such that_
\[\|f(x)-P(x)\|_{[x_{0}-r,x_{0}+r]} \leq O(\epsilon\log d),\] \[\|P(x)\|_{[-1,1]} \leq O(\epsilon\log d)+\|f(x)\|_{[x_{0}-r-\delta/2,x_{0}+r+\delta/2]} \leq O(\epsilon\log d)+B,\] \[\|P(x)\|_{[-1,1]\setminus[x_{0}-r-\delta/2,x_{0}+r+\delta/2]} \leq O(\epsilon\log d).\]
_Furthermore, the coefficient vector \(\mathbf{c}^{(P)}\) of \(P\) has a norm bounded by \(\|\mathbf{c}^{(P)}\|_{1}\leq O(Bd)\)._
The main ingredient, and the primary challenge, for demonstrating Theorem 3.8 is to construct a low-weight approximation using Fourier series, as shown in Lemma 37 of [21], which requires computing the powers of sub-stochastic matrices in bounded space (Lemma 2.13).
**Lemma 3.9** (Space-efficient low-weight approximation by Fourier series).: _Let \(0<\delta,\epsilon<1\) and \(f\colon\mathbb{R}\to\mathbb{R}\) be a real-valued function such that \(|f(x)-\sum_{k=0}^{K}a_{k}x^{k}|\leq\epsilon/4\) for all \(x\in\mathcal{I}_{\delta}\), the interval \(\mathcal{I}_{\delta}:=[-1+\delta,1-\delta]\) and \(\|\mathbf{a}\|_{1}\leq O(\max\{\epsilon^{-1},\delta^{-1}\})\). Then there is a coefficient vector \(\mathbf{c}\in\mathbb{C}^{2M+1}\) such that_
* _For even functions,_ \(\Big{|}f(x)-\sum_{m=-M}^{M}c_{m}^{\text{(even)}}\!\cos(\pi xm)\Big{|}\leq\epsilon\) _for any_ \(x\in\mathcal{I}_{\delta}\)_;_
* _For odd functions,_ \(\Big{|}f(x)-\sum_{m=-M}^{M}c_{m}^{\text{(odd)}}\!\sin\!\left(\pi x\big{(}m\!+ \!\frac{1}{2}\big{)}\right)\Big{|}\leq\epsilon\) _for any_ \(x\in\mathcal{I}_{\delta}\)_;_
* _Otherwise,_ \(\Big{|}f(x)\!-\!\sum_{m=-M}^{M}\big{(}c_{m}^{\text{(even)}}\!\cos(\pi xm)\!+ \!c_{m}^{\text{(odd)}}\!\sin\!\left(\pi x\big{(}m\!+\!\frac{1}{2}\big{)}\right) \big{)}\Big{|}\!\leq\!\epsilon\) _for any_ \(x\in\mathcal{I}_{\delta}\)_._
_Here \(M:=\max\big{(}2\lceil\delta^{-1}\ln(4\|a\|_{1}\epsilon^{-1})\rceil,0\big{)}\) and \(\|\mathbf{c}\|_{1}\leq\|\mathbf{a}\|_{1}\). Moreover, the coefficient vector \(\mathbf{c}\) can be computed in bounded-error randomized time \(\tilde{O}(\delta^{-5}\epsilon^{-2})\) and space \(O(\log(\delta^{-4}\epsilon^{-1}))\)._
Proof.: We begin by defining \(\|f\|_{\infty}:=\sup\{|f(x)|:x\in[-1+\delta,1-\delta]\}\). It is worth noting that the truncation error of \(\sum_{k=0}^{K}a_{k}x^{k}\), as shown in [11, Theorem A.4], is at most \((1-\delta)^{K+1}\leq e^{-\delta(K+1)}\leq\epsilon\), implying that \(K\geq\Omega(\delta^{-1}\ln\epsilon^{-1})\). Without loss of generality, we can assume that \(\|\mathbf{a}\|_{1}\geq\epsilon/2\).27
Footnote 27: This is because if \(\|\mathbf{a}\|_{1}<\epsilon/2\), then \(\|f\|_{\infty}\leq\|f(x)-\sum_{k=0}^{K}a_{k}x^{k}\|_{\infty}+\|\sum_{k=0}^{K}a_{ k}x^{k}\|_{\infty}\leq\epsilon/4+\|\mathbf{a}\|_{1}<\epsilon\), implying that \(M=0\) and \(\mathbf{c}=0\).
**Construction of polynomial approximations.** Our construction involves three approximations, as described in Lemma 37 of [21]. We defer the detailed proofs of all three approximations to Appendix A.1.2.
The first approximation combines the assumed \(\sum_{k=0}^{K}a_{k}x^{k}\) with \(\arcsin(x)\)'s Taylor series.
**Proposition 3.9.1** (First approximation).: _Let \(\hat{f}_{1}(x)\!:=\!\sum\nolimits_{k=0}^{K}a_{k}x^{k}\) such that \(\|f-\hat{f}_{1}\|_{\infty}\leq\epsilon/4\). Then we know that \(\hat{f}_{1}(x)=\sum\nolimits_{k=0}^{K}a_{k}\sum\nolimits_{l=0}^{\infty}b_{l}^{ (k)}\sin^{l}\big{(}\frac{x\pi}{2}\big{)}\) where the coefficients \(b_{l}^{(k)}\) satisfy that_
\[b_{l}^{(k+1)}=\sum\limits_{l^{\prime}=0}^{l}b_{l^{\prime}}^{(k)}b_{l-l^{\prime}}^{(1)}\ \text{where}\ b_{l}^{(1)}=\begin{cases}0&\text{if}\ l\ \text{is even,}\\ \binom{l-1}{\frac{l-1}{2}}\frac{2^{1-l}}{l}\cdot\frac{2}{\pi}&\text{if}\ l\ \text{is odd.}\end{cases} \tag{3.3}\]
_Furthermore, the coefficients \(\{b_{l}^{(k)}\}\) satisfies the following: (1) \(\|\mathbf{b}^{(k)}\|_{1}=1\) for all \(k\geq 1\); (2) \(\mathbf{b}^{(k)}\) is entry-wise non-negative for all \(k\geq 1\); (3) \(b_{l}^{(k)}=0\) if \(l\) and \(k\) have different parities._
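As a concrete sanity check (an informal numerical illustration that we include for intuition; it is not part of the formal construction), one can verify Proposition 3.9.1 directly: since \(x=\frac{2}{\pi}\arcsin\big{(}\sin\big{(}\frac{\pi x}{2}\big{)}\big{)}\), the vector \(\mathbf{b}^{(k)}\) is the \(k\)-fold convolution of \(\mathbf{b}^{(1)}\), and the truncated series \(\sum_{l\leq L}b_{l}^{(k)}\sin^{l}\big{(}\frac{\pi x}{2}\big{)}\) should recover \(x^{k}\).

```python
import numpy as np
from math import comb, pi

L = 400                                       # truncation length, chosen ad hoc
# b^{(1)}: coefficients of x = (2/pi) * arcsin(sin(pi*x/2)); zero for even l
b1 = np.zeros(L + 1)
for l in range(1, L + 1, 2):
    b1[l] = (2 / pi) * comb(l - 1, (l - 1) // 2) * 2.0 ** (1 - l) / l

def b_vec(k):
    """b^{(k)}: k-fold convolution of b^{(1)}, truncated at length L."""
    b = b1.copy()
    for _ in range(k - 1):
        b = np.convolve(b, b1)[: L + 1]
    return b

xs = np.linspace(-0.8, 0.8, 9)
for k in (1, 2, 3):
    bk = b_vec(k)
    approx = sum(bk[l] * np.sin(pi * xs / 2) ** l for l in range(L + 1))
    assert np.max(np.abs(approx - xs ** k)) < 1e-6    # recovers x^k on [-0.8, 0.8]
    assert np.all(bk >= 0) and bk.sum() <= 1 + 1e-12  # non-negative, ell_1 norm <= 1
```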
The second approximation truncates the series at \(l=L\), and bounds the truncation error.
**Proposition 3.9.2** (Second approximation).: _Let \(\hat{f}_{2}(x):=\sum\nolimits_{k=0}^{K}a_{k}\sum\nolimits_{l=0}^{L}b_{l}^{(k) }\sin^{l}\!\big{(}\frac{x\pi}{2}\big{)}\) where \(L:=\lceil\delta^{-2}\ln(4\|\mathbf{a}\|_{1}\epsilon^{-1})\rceil\), then we have that \(\|\hat{f}_{1}-\hat{f}_{2}\|_{\infty}\leq\epsilon/4\)._
The third approximation approximates the functions \(\sin^{l}\!(x)\) in \(\hat{f}_{2}(x)\) using a tail bound of the binomial distribution. Notably, this construction not only quadratically improves the dependence on \(\delta\), but also ensures that the integrand's second derivative is _bounded_ when combined with Lemma 3.5.
**Proposition 3.9.3** (Third approximation).: _Let \(\hat{f}_{3}(x)\) be the following approximation of \(\hat{f}_{2}\), whose form depends on the parity of \(f\), with \(M\!=\!\lfloor\delta^{-1}\ln(4\|\mathbf{a}\|_{1}\epsilon^{-1})\rfloor\). Then we have \(\|\hat{f}_{2}\!-\!\hat{f}_{3}\|\!\leq\!\epsilon/2\), where_
\[\hat{f}_{3}^{(\text{even})}(x):=\sum\limits_{k=0}^{K}a_{k}\sum\limits_{l=0}^{L/2}(-1)^{l}2^{-2l}b_{2l}^{(k)}\sum\limits_{m=-\min\{l,M\}}^{\min\{l,M\}}(-1)^{l+m}\binom{2l}{l+m}\cos(\pi xm),\] \[\hat{f}_{3}^{(\text{odd})}(x):=\sum\limits_{k=0}^{K}a_{k}\sum\limits_{l=0}^{(L-1)/2}(-1)^{l}2^{-2l}b_{2l+1}^{(k)}\sum\limits_{m=0}^{\min\{l,M\}}(-1)^{l+m}\binom{2l+1}{l+m+1}\sin\!\Big{(}\pi x\Big{(}m+\frac{1}{2}\Big{)}\Big{)}.\]
_For \(f\) that is neither even nor odd, we take \(\hat{f}_{3}:=\hat{f}_{3}^{(\text{even})}+\hat{f}_{3}^{(\text{odd})}\), where the even (resp. odd) part collects the even-indexed (resp. odd-indexed) coefficients \(b_{l}^{(k)}\)._
We now present a bounded-error randomized algorithm for estimating \(b_{l}^{(k)}\). As \(\mathbf{b}^{(1)}\) is entrywise non-negative and \(\sum_{i=1}^{l}b_{i}^{(1)}<\|\mathbf{b}^{(1)}\|_{1}=1\) following Proposition 3.9.1, we can express the recursive formula in Equation (3.3) as the matrix
\[B_{1}^{k}:=\begin{pmatrix}b_{1}^{(1)}&b_{2}^{(1)}&\cdots&b_{l-1}^{(1)}&b_{l}^{( 1)}\\ 0&b_{1}^{(1)}&\cdots&b_{l-2}^{(1)}&b_{l-1}^{(1)}\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&\cdots&b_{1}^{(1)}&b_{2}^{(1)}\\ 0&0&\cdots&0&b_{1}^{(1)}\end{pmatrix}^{k}=\begin{pmatrix}b_{1}^{(k)}&b_{2}^{( k)}&\cdots&b_{l-1}^{(k)}&b_{l}^{(k)}\\ 0&b_{1}^{(k)}&\cdots&b_{l-2}^{(k)}&b_{l-1}^{(k)}\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&\cdots&b_{1}^{(k)}&b_{2}^{(k)}\\ 0&0&\cdots&0&b_{1}^{(k)}\end{pmatrix}:=B_{k}.\]
In addition, we approximate the sub-stochastic matrix \(B_{1}\) by dyadic rationals with \(\ell\)-bit precision, denoted as \(\hat{B}_{1}\). Utilizing Lemma 2.13, we can compute any entry \(\hat{B}_{1}^{k}[s,t]\) with a randomized algorithm that runs in \(O(\ell k)\) time and \(\log(l+1)\) space with acceptance probability \(\hat{B}_{1}^{k}[s,t]\). To evaluate \(\hat{B}_{1}^{k}[s,t]\) with an additive error of \(\epsilon\), we use the sequential repetitions outlined in Lemma 2.11. Specifically, we repeat the algorithm \(m=2\epsilon^{-2}\ln(KLM)=O(\epsilon^{-2}\log(\delta^{-4}))\) times, and each turn succeeds with probability at least \(1-1/(3KLM)\). Note that the number of the evaluation of \(b_{l}^{(k)}\) for computing \(\hat{f}_{3}(x)\) is \(O(KLM)\), and by the union bound, we can conclude that the success probability of evaluating all coefficients in \(\mathbf{c}\) is at least \(2/3\).
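For intuition, the acceptance-probability argument can be mimicked classically: for a non-negative matrix whose row sums are at most one, an entry of its \(k\)-th power is exactly the probability that a length-\(k\) random walk, which aborts with the leftover probability at every step, travels from the row index to the column index. The sketch below is an informal toy illustration of this sampling estimator (it is not the actual implementation behind Lemma 2.13, and the matrix is chosen arbitrarily).

```python
import numpy as np

rng = np.random.default_rng(0)

def power_entry_mc(B, k, s, t, shots=100_000):
    """Estimate B^k[s, t] for a sub-stochastic B (non-negative, row sums <= 1)
    by sampling length-k walks that abort with the leftover probability."""
    n = B.shape[0]
    hits = 0
    for _ in range(shots):
        i, alive = s, True
        for _ in range(k):
            p = np.append(B[i], max(0.0, 1.0 - B[i].sum()))   # last slot = abort
            j = rng.choice(n + 1, p=p / p.sum())
            if j == n:
                alive = False
                break
            i = j
        hits += alive and i == t
    return hits / shots

B = np.array([[0.3, 0.2, 0.1],     # toy upper-triangular sub-stochastic matrix
              [0.0, 0.3, 0.2],
              [0.0, 0.0, 0.3]])
exact = np.linalg.matrix_power(B, 4)[0, 2]
print(exact, power_entry_mc(B, 4, 0, 2))   # the two values agree to ~1e-3
```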
Finally, we complete the proof by analyzing the overall computational complexity. It is evident that our algorithm utilizes \(O(\ell+\log m)=O(\log(\delta^{-4}\epsilon^{-3}))\) space because indexing \(m\) repetitions requires additional \(O(\log m)\) bits. Moreover, since there are \(O(KLM)\) summands in \(\hat{f}_{3}(x)\), and evaluating \(b_{l}^{(k)}\) takes \(m\) repetitions with time complexity \(O(\ell K)\) for a single turn, the overall time complexity is \(O(KLM\cdot\ell K\cdot\epsilon^{-2}\log(KLM))=\tilde{O}(\delta^{-5}\epsilon^{ -2})\).
Now we present the proof of Theorem 3.8, which is a space-efficient and randomized algorithm for constructing bounded polynomial approximations of piecewise-smooth functions.
Proof of Theorem 3.8.: Our approach is based on Theorem 40 in [11] and Corollary 23 in [10]. Firstly, we obtain a Fourier approximation \(\hat{f}(x)\) of the given function \(f(x)\) by truncating it using Lemma 3.9. Next, we ensure that \(\hat{f}(x)\) is negligible outside the interval \([-x_{0}-r,x_{0}+r]\) by multiplying it with a suitable rectangle function, denoted as \(h(x)\). Finally, we derive a space-efficient polynomial approximation \(\hat{h}(x)\) of \(h(x)\) by applying Lemma 3.5.
**Construction of a bounded function.** Let us begin by defining a linear transformation \(L(x):=\frac{x-x_{0}}{r+\delta}\) that maps \([x_{0}-r-\delta,x_{0}+r+\delta]\) to \([-1,1]\). For convenience, we denote \(g(y):=f(L^{-1}(y))\) and \(b_{l}:=a_{l}(r+\delta)^{l}\), then it is evident that \(g(y):=\sum_{l=0}^{\infty}b_{l}y^{l}\) for any \(y\in[-1,1]\).
To construct a Fourier approximation by Lemma 3.9, we need to bound the truncation error \(\varepsilon_{J}^{(g)}\). We define \(\delta^{\prime}:=\frac{\delta}{2(r+\delta)}\) and \(J:=\lceil(\delta^{\prime})^{-1}\log(12B\epsilon^{-1})\rceil\). This ensures that the truncation error \(\varepsilon_{J}^{(g)}:=\big{|}g(y)-\sum_{j=0}^{J-1}b_{j}y^{j}\big{|}\) for any \(y\in[-1+\delta^{\prime},1-\delta^{\prime}]\) satisfies the following:
\[\varepsilon_{J}^{(g)}=\Big{|}\sum_{j=J}^{\infty}b_{j}y^{j}\Big{|}\leq\sum_{j=J }^{\infty}\big{|}b_{j}(1-\delta^{\prime})^{j}\big{|}\leq(1-\delta^{\prime})^{ J}\sum_{j=J}^{\infty}|b_{j}|\leq(1-\delta^{\prime})^{J}B\leq e^{-\delta^{ \prime}J}B\leq\frac{\epsilon}{12}:=\frac{\epsilon^{\prime}}{4}.\]
Afterward, let \(\hat{\mathbf{b}}:=(b_{0},b_{1},\cdots,b_{J-1})\), then we know that \(\|\hat{\mathbf{b}}\|_{1}\leq\|\mathbf{b}\|_{1}\leq B\) by the assumption. Now we utilize Lemma 3.9 and obtain the Fourier approximation \(\hat{g}(y)\):
\[\hat{g}(y)\!:=\!\begin{cases}\sum_{m=-M}^{M}c_{m}^{(\text{even})}\cos(\pi ym),& \text{if $f$ is even}\\ \sum_{m=-M}^{M}c_{m}^{(\text{odd})}\sin(\pi y\big{(}m\!+\!\tfrac{1}{2}\big{)} ),&\text{if $f$ is odd}\\ \sum_{m=-M}^{M}\Big{(}c_{m}^{(\text{even})}\!\cos(\pi ym)+c_{m}^{(\text{odd})} \!\sin\big{(}\pi y\big{(}m\!+\!\tfrac{1}{2}\big{)}\big{)}\Big{)},&\text{ otherwise}\end{cases}. \tag{3.5}\]
By appropriately choosing \(M=O\big{(}(\delta^{\prime})^{-1}\!\log\big{(}\|\hat{\mathbf{b}}\|_{1}/\epsilon^{ \prime}\big{)}\big{)}=O\big{(}r\delta^{-1}\!\log\big{(}B/\epsilon\big{)}\big{)}\), we obtain that the vectors of coefficients \(\mathbf{c}^{(\text{even})}\) and \(\mathbf{c}^{(\text{odd})}\) satisfy \(\|\mathbf{c}^{(\text{even})}\|_{1}\leq\|\hat{\mathbf{b}}\|_{1}\leq B\) and similarly \(\|\mathbf{c}^{(\text{odd})}\|_{1}\leq B\). Plugging \(f(x)=g(L(x))\) into Equation (3.5), we conclude that \(\hat{f}(x)=\hat{g}(L(x))\) is a Fourier approximation of \(f\) with an additive error of \(\epsilon/3\) on the interval \([x_{0}-r-\delta/2,x_{0}+r+\delta/2]\):
\[\hat{f}(x)=\hat{g}\Big{(}\frac{x\!-\!x_{0}}{r\!+\!\delta}\Big{)}=\begin{cases} \sum\limits_{m=-M}^{M}c_{m}^{(\text{even})}\!\!\cos\big{(}\pi m\big{(}\frac{x -x_{0}}{r+\delta}\big{)}\big{)},&\text{if $f$ is even}\\ \sum\limits_{m=-M}^{M}c_{m}^{(\text{odd})}\!\!\sin\!\big{(}\pi\big{(}m+\frac{ 1}{2}\big{)}\big{(}\frac{x-x_{0}}{r+\delta}\big{)}\big{)},&\text{if $f$ is odd}\\ \sum\limits_{m=-M}^{M}c_{m}^{(\text{even})}\!\!\cos\big{(}\pi m\big{(}\frac{x -x_{0}}{r+\delta}\big{)}\big{)}+c_{m}^{(\text{odd})}\sin\!\big{(}\pi\big{(}m+ \frac{1}{2}\big{)}\big{(}\frac{x-x_{0}}{r+\delta}\big{)}\big{)},&\text{otherwise} \end{cases}.\]
**Making the error negligible outside the interval.** Subsequently, we define the function \(h(x)=\hat{f}(x)\cdot R(x)\) such that it becomes negligible outside the interval of interest, i.e., \([x_{0}-r-\delta/2,x_{0}+r+\delta/2]\). Here, the approximate rectangle function \(R(x)\) is \(\tilde{\epsilon}\)-close to \(1\) on the interval \([x_{0}-r,x_{0}+r]\), and is \(\tilde{\epsilon}\)-close to \(0\) on the interval \([-1,1]\setminus[x_{0}-r-2\tilde{\delta},x_{0}+r+2\tilde{\delta}]\), where \(\tilde{\epsilon}:=\epsilon/(3B)\) and \(\tilde{\delta}:=\delta/4\). Moreover, \(|R(x)|\leq 1\) for any \(x\in[-1,1]\). Similar to Lemma 29 in [11], \(R(x)\) can be expressed as a linear combination of Gaussian error functions:
\[R(x)\!:=\!\frac{1}{2}\Big{[}\operatorname{erf}\big{(}\kappa(x\!-\!x_{0}\!+\!r \!+\!\delta^{\prime})\big{)}\!-\!\operatorname{erf}\big{(}\kappa(x\!-\!x_{0}\! -\!r\!-\!\delta^{\prime})\big{)}\Big{]}\text{where }\kappa\!:=\!\frac{2}{\delta^{ \prime}}\!\log^{\frac{1}{2}}\!\frac{\sqrt{2}}{\sqrt{\pi\epsilon^{\prime}}}\!= \!\frac{8}{\delta}\!\log^{\frac{1}{2}}\!\frac{\sqrt{18}B}{\sqrt{\pi\epsilon}}. \tag{3.6}\]
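As a quick numerical sanity check of these three properties (an informal illustration with arbitrarily chosen toy parameters, not part of the construction), one may evaluate \(R(x)\) directly:

```python
import numpy as np
from scipy.special import erf

x0, r, delta, B, eps = 0.1, 0.5, 0.2, 1.0, 1e-4   # hypothetical toy parameters
kappa = (8 / delta) * np.sqrt(np.log(np.sqrt(18 * B) / np.sqrt(np.pi * eps)))

def R(x):
    return 0.5 * (erf(kappa * (x - x0 + r + delta / 4))
                  - erf(kappa * (x - x0 - r - delta / 4)))

xs = np.linspace(-1, 1, 4001)
inside = np.abs(xs - x0) <= r                 # [x0 - r, x0 + r]
far_out = np.abs(xs - x0) >= r + delta / 2    # outside [x0 - r - delta/2, x0 + r + delta/2]
assert np.all(np.abs(R(xs[inside]) - 1) < eps)
assert np.all(np.abs(R(xs[far_out])) < eps)
assert np.all(np.abs(R(xs)) <= 1 + 1e-12)
```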
**Bounded polynomial approximation with Chebyshev interpolation.** We here present an algorithmic, space-efficient, randomized polynomial approximation method using Chebyshev interpolation to approximate the function \(h(x):=\hat{f}(x)\cdot R(x)\). As suggested in Proposition 3.8.1, we use an explicit polynomial approximation \(\hat{P}(x)\) of the bounded function \(h(x)\) of degree \(d=O(\delta^{-1}\log(B\epsilon^{-1}))\) that satisfies the conditions specified in Equation (3.7).
**Proposition 3.8.1** (Bounded polynomial approximations based on a local Taylor series, adapted from [11, Corollary 23]).: _Let \(x_{0}\in[-1,1]\), \(r\in(0,2]\), \(\delta\in(0,r]\), and let \(f\!:[-x_{0}-r-\delta,x_{0}+r+\delta]\to\mathbb{R}\) be such that \(f(x_{0}+x)\!:=\!\sum_{l=0}^{\infty}a_{l}x^{l}\) for all \(x\in[-r\!-\!\delta,\!r\!+\!\delta]\). Suppose \(B>0\) is such that \(\sum_{l=0}^{\infty}(r+\delta)^{l}|a_{l}|\leq B\). Let \(\epsilon\in\big{(}0,\frac{1}{2B}\big{]}\), there is an \(\epsilon/3\)-precise Fourier approximation \(\hat{f}(x)\) of \(f(x)\) on the interval \([x_{0}\!-\!r\!-\!\delta/2,x_{0}\!+\!r\!+\!\delta/2]\), where \(\hat{f}(x)\!:=\!\sum_{m=-M}^{M}\!\mathrm{Re}\Big{[}\tilde{c}_{m}e^{-\frac{i\pi m}{2(r+\delta)}x_{0}}e^{\frac{i\pi m}{2(r+\delta)}x}\Big{]}\) and \(\|\tilde{\mathbf{c}}\|_{1}\leq B\). We have a time-efficient polynomial \(P^{*}\in\mathbb{R}[x]\) of degree \(O(\delta^{-1}\log(B\epsilon^{-1}))\) s.t._
\[\|\hat{f}(x)R(x)-P^{*}(x)\|_{[x_{0}-r,x_{0}+r]} \leq\epsilon,\] \[\|P^{*}(x)\|_{[-1,1]} \leq\epsilon+\|\hat{f}(x)R(x)\|_{[x_{0}-r-\delta/2,x_{0}+r+\delta/ 2]}\leq\epsilon+B, \tag{3.7}\] \[\|P^{*}(x)\|_{[-1,1]\setminus[x_{0}-r-\delta/2,x_{0}+r+\delta/2]} \leq\epsilon.\]
To utilize Lemma 3.5, we need to bound the second derivative \(\max_{\xi\in[-\pi,0]}|F_{k}^{\prime\prime}(\xi)|\), where the integrand \(F_{k}(\cos\theta):=\cos(k\theta)h(\cos\theta)\) for any \(0\leq k\leq d\). We will calculate this upper bound directly in Fact 3.8.2, and the proof is deferred to Appendix A.1.3.
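To make the role of the integrands \(F_{k}\) concrete, the following informal sketch (our own illustration; it uses plain numerical quadrature in place of the space-efficient integration of Lemma 3.5, and the target function is an arbitrary stand-in for \(h\)) computes the Chebyshev coefficients \(c_{k}\approx\frac{2}{\pi}\int_{0}^{\pi}\cos(k\theta)\,h(\cos\theta)\,\mathrm{d}\theta\) and checks that the resulting series \(\frac{c_{0}}{2}+\sum_{k\geq 1}c_{k}T_{k}\) tracks \(h\) on \([-1,1]\).

```python
import numpy as np
from scipy.special import erf
from numpy.polynomial import chebyshev as C

def h(x):
    # smooth bounded stand-in for h(x) = f_hat(x) * R(x)
    return np.exp(-3 * (x - 0.1) ** 2) * 0.5 * (erf(8 * (x + 0.6)) - erf(8 * (x - 0.8)))

d, n = 40, 20_000
theta = (np.arange(n) + 0.5) * np.pi / n          # midpoint rule on [0, pi]
coeffs = np.array([(2 / np.pi) * np.sum(np.cos(k * theta) * h(np.cos(theta))) * np.pi / n
                   for k in range(d + 1)])
coeffs[0] /= 2                                    # the series is c_0/2 + sum_{k>=1} c_k T_k

xs = np.linspace(-1, 1, 1001)
print(np.max(np.abs(C.chebval(xs, coeffs) - h(xs))))   # small; decreases rapidly with d
```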
**Fact 3.8.2**.: _Consider the integrand \(F_{k}(\theta)\!=\!\sum_{m=-M}^{M}\!\frac{c_{m}}{2}\big{(}H_{k,m}^{(+)}\!-\!H_{k,m }^{(-)}\big{)}\) for any function \(f\) which is either even or odd. If \(f\) is even, we have that \(c_{m}=c_{m}^{(\text{even})}\) defined in Lemma 3.9, and_
\[H_{k,m}^{(\pm)}(\theta):=\cos\!\Big{(}\pi m\Big{(}\frac{\cos\theta-x_{0}}{r+ \delta}\Big{)}\Big{)}\cdot\cos(k\theta)\cdot\operatorname{erf}\Big{(}\kappa \Big{(}\cos\theta-x_{0}\pm r\pm\frac{\delta}{4}\Big{)}\Big{)}. \tag{3.8}\]
_Likewise, if \(f\) is odd, we know that \(c_{m}=c_{m}^{(\text{odd})}\) defined in Lemma 3.9, and_
\[H_{k,m}^{(\pm)}(\theta):=\sin\!\Big{(}\pi\Big{(}m+\frac{1}{2}\Big{)}\Big{(}\frac {\cos\theta-x_{0}}{r+\delta}\Big{)}\Big{)}\cdot\cos(k\theta)\cdot \operatorname{erf}\Big{(}\kappa\Big{(}\cos\theta-x_{0}\pm r\pm\frac{\delta}{4} \Big{)}\Big{)}. \tag{3.9}\]
_Moreover, the integrand is \(F_{k}(\theta)\!=\!\sum_{m=-M}^{M}\!\Big{(}\frac{c_{m}^{(\text{even})}}{2}\big{(}\hat{H}_{k,m}^{(+)}\!-\!\hat{H}_{k,m}^{(-)}\big{)}+\frac{c_{m}^{(\text{odd})}}{2}\big{(}\tilde{H}_{k,m}^{(+)}\!-\!\tilde{H}_{k,m}^{(-)}\big{)}\Big{)}\) when \(f\) is neither even nor odd, where \(\hat{H}_{k,m}^{(\pm)}\) and \(\tilde{H}_{k,m}^{(\pm)}\) follow from Equation (3.8) and Equation (3.9), respectively. Regardless of the parity of \(f\), we have that the second derivative satisfies \(|F_{k}^{\prime\prime}(\theta)|\leq O(Bd^{3})\)._
Together with Fact 3.8.2, we are ready to apply Lemma 3.5 to \(h(x)=\hat{f}(x)R(x)\), resulting in a degree-\(d\) polynomial \(P(x)\). Since \(P(x)\) is the minimax approximation of \(P^{*}(x)\) by Chebyshev interpolation and satisfies Equation (3.7), we can define intervals \(\mathcal{I}_{\text{int}}:=[x_{0}-r,x_{0}+r]\) and \(\mathcal{I}_{\text{ext}}:=[x_{0}-r-\delta/2,x_{0}+r+\delta/2]\) to obtain:
\[\begin{split}\|f(x)-P(x)\|_{\mathcal{I}_{\text{int}}}& \leq\|f(x)-h(x)\|_{\mathcal{I}_{\text{int}}}+\|h(x)-P(x)\|_{ \mathcal{I}_{\text{int}}}\leq\epsilon+O(\epsilon\log d)=O(\epsilon\log d),\\ \|P(x)-0\|_{\mathcal{I}_{\text{ext}}}&\leq\|P(x)-h( x)\|_{\mathcal{I}_{\text{ext}}}+\|h(x)-0\|_{\mathcal{I}_{\text{ext}}}\leq O( \epsilon\log d)+O(\epsilon)\leq O(\epsilon\log d).\end{split} \tag{3.10}\]
We can achieve the desired error bound by observing that Equation (3.10) implies \(|P(x)|_{[-1,1]}\leq O(\epsilon\log d)+|P(x)|_{[-1,1]\setminus\mathcal{I}_{ \text{ext}}}\leq O(\epsilon\log d)+B\). Moreover, we note that the norm of the coefficient vector \(\mathbf{c}^{(P)}\) of the polynomial \(P(x)\) is bounded by \(|\mathbf{c}^{(P)}|\leq O(Bd)\cdot(1+O(\epsilon\log d))=O(Bd)\), which follows directly from our utilization of Lemma 3.5.
**Analyzing time and space complexity.** The construction of \(\hat{f}(x)\) can be implemented in bounded-error randomized time \(\tilde{O}((\delta^{\prime})^{-5}\epsilon^{-2}B^{2})\) and space \(O(\log((\delta^{\prime})^{-4}\epsilon^{-1}B))\), given that this construction uses Lemma 3.9 with \(\delta^{\prime}=\frac{\delta}{2(r+\delta)}\in(0,\frac{1}{2}]\) and \(\epsilon^{\prime}=\frac{\epsilon}{3B}\). Having \(\hat{f}(x)\), we can construct a bounded polynomial approximation \(\hat{h}(x)\) deterministically using Lemma 3.5. This construction can be implemented in deterministic time \(\tilde{O}(d^{2}\epsilon^{-1/2}B)\) and space \(O(\log(d^{4}\epsilon^{-1}B))\) since the integrand \(F_{k}(\theta)\) is a product of a constant number of (compositions of) holonomic functions (Remark 3.6). Therefore, our construction can be implemented in bounded-error randomized time \(\tilde{O}(\max\left\{(\delta^{\prime})^{-5}\epsilon^{-2}B^{2},d^{2}\epsilon^{ -1/2}B\right\})\) and space \(O(\log(d^{4}(\delta^{\prime})^{-4}\epsilon^{-1}B))\).
With the aid of Theorem 3.8, we can provide a space-efficient polynomial approximation to the normalized logarithmic function utilized in Lemma 11 of [10].
**Corollary 3.10** (Space-efficient polynomial approximation to the normalized logarithmic function).: _Let \(\beta\in(0,1]\) and \(\epsilon\in(0,1/2)\), there is an even polynomial \(P\) of degree \(d\leq\tilde{C}_{\ln}\beta^{-1}\log\epsilon^{-1}\) where \(\tilde{C}_{\ln}\) is a universal constant such that_
\[\forall x\in[\beta,1],\Big{|}P(x)-\tfrac{\ln(1/x)}{2\ln(2/\beta)} \Big{|}\leq C_{\ln}\epsilon\log d,\text{ where }C_{\ln}\text{ is a universal constant},\] \[\forall x\in[-1,1], |P(x)|\leq 1.\]
_Moreover, the coefficient vector \(\mathbf{c}^{(P)}\) of \(P\) has a norm bounded by \(\|\mathbf{c}^{(P)}\|_{1}\leq\hat{C}_{\ln}d\), where \(\hat{C}_{\ln}\) is another universal constant. In addition, any entry of the coefficient vector \(\mathbf{c}^{(P)}\) can be computed in bounded-error randomized time \(\tilde{O}(\max\{\beta^{-5}\epsilon^{-2},d^{2}\epsilon^{-1/2}\})\) and space \(O(\log(d^{4}\beta^{-4}\epsilon^{-1}))\). Without loss of generality, we assume that all constants \(C_{\ln}\), \(\hat{C}_{\ln}\), and \(\tilde{C}_{\ln}\) are at least \(1\)._
Proof.: Consider the function \(f(x):=\frac{\ln(1/x)}{2\ln(2/\beta)}\). We apply Theorem 3.8 to \(f(x)\) by choosing the same parameters as in Lemma 11 of [10], specifically \(\epsilon^{\prime}=\epsilon/2\), \(x_{0}=1\), \(r=1-\beta\), \(\delta=\beta/2\), and \(B=1/2\).29 This results in a space-efficient randomized polynomial approximation \(\tilde{P}\in\mathbb{R}[x]\) of degree \(d=O(\delta^{-1}\log(\epsilon^{-1}B))\leq\tilde{C}_{\ln}\beta^{-1}\log \epsilon^{-1}\), where \(\tilde{C}_{\ln}\) is a universal constant. By appropriately choosing \(\eta\leq 1/2\) such that \(C_{\ln}^{\prime}\epsilon\log d=\eta/4\) for a universal constant \(C_{\ln}^{\prime}\), the approximation guarantees the following inequalities:
Footnote 29: As indicated in Lemma 11 of [10], since the Taylor series of \(f(x)\) at \(x=1\) is \(\frac{1}{2\ln(2/\beta)}\sum_{l=1}^{\infty}\frac{(-1)^{l}x^{l}}{l}\), we obtain that \(B=f\big{(}\frac{\beta}{2}-1\big{)}=\frac{1}{2\ln(2/\beta)}\sum_{l=1}^{\infty} \frac{(1-\beta/2)^{l}}{l}=-\frac{1}{2\ln(2/\beta)}\sum_{l=1}^{\infty}\frac{(-1)^ {l-1}}{l}(\beta/2-1)^{l}=-\frac{1}{2\ln(2/\beta)}\ln\frac{\beta}{2}=\frac{1}{2}\).
\[\|f(x)-\tilde{P}(x)\|_{[\beta,2-\beta]} \leq C_{\ln}^{\prime}\epsilon\log d=\frac{\eta}{4},\] \[\|\tilde{P}(x)\|_{[-1,1]} \leq B+C_{\ln}^{\prime}\epsilon\log d\leq\frac{1}{2}+\frac{\eta}{4},\] \[\|\tilde{P}(x)\|_{[-1,1]\setminus[\beta,2-\beta]} \leq C_{\ln}^{\prime}\epsilon\log d\leq\frac{\eta}{4}. \tag{3.11}\]
Additionally, the coefficient vector \(\mathbf{c}^{(\tilde{P})}\) of \(\tilde{P}\) satisfies that \(\|\mathbf{c}^{(\tilde{P})}\|_{1}\leq O(Bd)\leq\hat{C}_{\mathrm{ln}}d\) where \(\hat{C}_{\mathrm{ln}}\) is a universal constant. Since \(\delta^{\prime}=\frac{\delta}{2(r+\delta)}=\frac{\beta/2}{2(1-\beta+\beta/2)}=\frac{\beta}{4(1-\beta/2)}=\Theta(\beta)\), our utilization of Theorem 3.8 yields a bounded-error randomized algorithm that requires \(O(\log(d^{4}(\delta^{\prime})^{-4}\epsilon^{-1}B))=O(\log(d^{4}\beta^{-4}\epsilon^{-1}))\) space and \(\tilde{O}(\max\{(\delta^{\prime})^{-5}\epsilon^{-2}B^{2},d^{2}\epsilon^{-1/2}B\})=\tilde{O}(\max\{\beta^{-5}\epsilon^{-2},d^{2}\epsilon^{-1/2}\})\) time.
Furthermore, note that the real-valued function \(f(x)\) is only defined for \(x>0\), so \(\tilde{P}(x)\) is not an even polynomial in general. Instead, we consider \(P(x):=(1+\eta)^{-1}(\tilde{P}(x)+\tilde{P}(-x))\) for all \(x\in[-1,1]\). Together with Equation (3.11), we have derived that:
\[\|f(x)-P(x)\|_{[\beta,1]} \leq\big{\|}f(x)-\tfrac{1}{1+\eta}\tilde{P}(x)\big{\|}_{[\beta,1]}+\big{\|}\tfrac{1}{1+\eta}\tilde{P}(-x)\big{\|}_{[\beta,1]} \tag{3.12}\] \[\leq\big{\|}f(x)-\tilde{P}(x)\big{\|}_{[\beta,1]}+\big{\|}\tilde{P}(x)-\tfrac{1}{1+\eta}\tilde{P}(x)\big{\|}_{[\beta,1]}+\big{\|}\tfrac{1}{1+\eta}\tilde{P}(-x)\big{\|}_{[\beta,1]}\] \[\leq\tfrac{\eta}{4}+\tfrac{\eta}{1+\eta}\cdot\big{(}\tfrac{1}{2}+\tfrac{\eta}{4}\big{)}+\tfrac{1}{1+\eta}\cdot\tfrac{\eta}{4}\] \[=\tfrac{\eta}{4}+\tfrac{\eta(3+\eta)}{4(1+\eta)}\] \[\leq\tfrac{\eta}{4}+\tfrac{3\eta}{4}=\eta.\]
Here, the last line owes to the fact that \(\eta>0\). Consequently, Equation (3.12) implies that \(\|f(x)-P(x)\|_{[\beta,1]}\leq 4C_{\mathrm{ln}}^{\prime}\epsilon\log d:=C_{ \mathrm{ln}}\epsilon\log d\) for another universal constant \(C_{\mathrm{ln}}\). Notice \(P(x)\) is an even polynomial with \(\deg(P)\leq\tilde{C}_{\mathrm{ln}}\beta^{-1}\log\epsilon^{-1}\), Equation (3.11) yields that:
\[\|P(x)\|_{[-1,1]}=\|P(x)\|_{[0,1]}\leq\|\tfrac{1}{1+\eta}\tilde{P}(x)\|_{[0,1]}+\|\tfrac{1}{1+\eta}\tilde{P}(x)\|_{[-1,0]}\leq\tfrac{1}{1+\eta}\cdot\tfrac{1+\eta}{2}+\tfrac{1}{1+\eta}\cdot\tfrac{\eta}{2}\leq 1.\]
We now complete the proof by noticing \(\eta\leq 1/2\).
### Applying Chebyshev interpolation to bitstring indexed encodings
Equipped with space-efficient bounded polynomial approximations of piecewise-smooth functions, it suffices to implement Chebyshev interpolation on bitstring indexed encodings, as specified in Theorem 3.11. The proof follows from combining Lemma 3.13 and Lemma 3.14.
**Theorem 3.11** (Chebyshev interpolation applied to bitstring indexed encodings).: _Let \(A\) be an Hermitian matrix acting on \(s\) qubits, and let \(U\) be a \((1,a,\epsilon_{1})\)-bitstring indexed encoding of \(A\) that acts on \(s+a\) qubits. For any degree-\(d\) polynomial \(P_{d}(x)=\frac{c_{0}}{2}+\sum_{k=1}^{d}c_{k}T_{k}(x)\) where \(d\leq 2^{O(s(n))}\) and \(T_{k}\) is the \(k\)-th Chebyshev polynomial (of the first kind), equipped with an evaluation oracle \(\mathrm{Eval}\) that returns \(\hat{c}_{k}\) with precision \(\varepsilon:=O(\epsilon_{2}^{2}/d)\), then we have a \((1,a^{\prime},144d\sqrt{\epsilon_{1}}\|\mathbf{c}\|_{1}^{2}+36\epsilon_{2}\|\mathbf{c}\|_{1})\)-bitstring indexed encoding \(V\) of \(P_{d}(A)\) that acts on \(s+a^{\prime}\) qubits where \(a^{\prime}:=a+\lceil\log d\rceil+3\). This implementation requires \(O(d^{2}\|\mathbf{c}\|_{1})\) uses of \(U\), \(U^{\dagger}\), \(C_{\Pi}\mathrm{NOT}\), \(C_{\tilde{\Pi}}\mathrm{NOT}\), and \(O(d^{2}\|\mathbf{c}\|_{1})\) multi-controlled single-qubit gates.30 Moreover, we can compute the description of the resulting quantum circuit in deterministic time \(\tilde{O}(d^{2}\|\mathbf{c}\|_{1}\log(d/\epsilon_{2}))\) and space \(O(\max\{s(n),\log(d/\epsilon_{2}^{2})\})^{31}\), also \(O(d^{2}\|\mathbf{c}\|_{1})\) oracle calls to \(\mathrm{Eval}\) with precision \(\varepsilon\). Furthermore, our construction straightforwardly extends to any linear (possibly non-Hermitian) operator \(A\) by simply replacing \(P_{d}(A)\) with \(P_{d}^{(\mathrm{SV})}(A)\) defined in Definition 3.3._
Footnote 30: As indicated in Figure 3(c) of [18] (see also Lemma 19 in [18]), we replace the single-qubit gates used in Lemma 3.13 with multi-controlled (or “multiply controlled”) single-qubit gates.
_Remark 3.12_ (QSVT implementations of Chebyshev interpolation preserve the parity).: As shown in Proposition 3.13.1, we can implement the quantum singular value transformation \(T_{k}(A)\)_exactly_ for any Hermitian matrix that admits a bitstring indexed encoding, because we observe that the rotation angles corresponding to the \(k\)-th Chebyshev polynomials are either \(\pi/2\) or \((1-k)\pi/2\), indicating that \(T_{k}(0)=0\) for any odd \(k\). We then implement the QSVT corresponding to the Chebyshev interpolation polynomial \(P_{d}(x)=\sum_{l=0}^{(d-1)/2}c_{2l+1}T_{2l+1}(x)\), as described in Theorem 3.11, although the actual implementation results in a slightly different polynomial, \(\hat{P}_{d}(x)=\sum_{l=0}^{(d-1)/2}\hat{c}_{2l+1}T_{2l+1}(x)\). However, we still have \(\hat{P}_{d}(0)=0=P_{d}(0)\), indicating that the implementations in Theorem 3.11 preserve the parity.
We first demonstrate an approach, based on Lemma 3.12 in [14], that constructs Chebyshev polynomials of bitstring indexed encodings in a space-efficient manner.
**Lemma 3.13** (Chebyshev polynomials applied to bitstring indexed encodings).: _Let \(A\) be a linear operator acting on \(s\) qubits, and let \(U\) be a \((1,a,\epsilon)\)-bitstring indexed encoding of \(A\) that acts on \(s+a\) qubits. Then, for the \(k\)-th Chebyshev polynomial (of the first kind) \(T_{k}(x)\) of degree \(k\leq 2^{O(s)}\), there exists a new \((1,a+1,4k\sqrt{\epsilon})\)-bitstring indexed encoding \(V\) of \(T_{k}^{(\mathrm{SV})}(A)\) that acts on \(s+a+1\) qubits. This implementation requires \(k\) uses of \(U\), \(U^{\dagger}\), \(C_{\Pi}\mathrm{NOT}\), \(C_{\tilde{\Pi}}\mathrm{NOT}\), and \(k\) single-qubit gates. Moreover, we can compute the description of the resulting quantum circuit in deterministic time \(k\) and space \(O(s)\). Furthermore, consider \(A^{\prime}:=\tilde{\Pi}U\Pi\), where \(\tilde{\Pi}\) and \(\Pi\) are the corresponding orthogonal projectors of the bitstring indexed encoding \(U\). If \(A\) and \(A^{\prime}\) satisfy the conditions \(\left\|A-A^{\prime}\right\|+\left\|\frac{A+A^{\prime}}{2}\right\|^{2}\leq 1\) and \(\left\|\frac{A+A^{\prime}}{2}\right\|^{2}\leq\zeta\), then \(V\) is a \(\left(1,a+1,\frac{\sqrt{2}}{\sqrt{1-\zeta}}k\epsilon\right)\)-bitstring indexed encoding of \(T_{k}^{(\mathrm{SV})}(A)\)._
Proof.: As specified in Proposition 3.13.1, we first notice that we can derive the sequence of rotation angles corresponding to Chebyshev polynomials \(T_{k}(x)\) by directly factorizing them.
**Proposition 3.13.1** (Chebyshev polynomials in quantum signal processing, adapted from Lemma 6 in [13]).: _Let \(T_{k}\in\mathbb{R}[x]\) be the \(k\)-th Chebyshev polynomial (of the first kind). Consider the corresponding sequence of rotation angles \(\Phi\in\mathbb{R}^{k}\) such that \(\phi_{1}:=(1-k)\pi/2\), and \(\phi_{j}:=\pi/2\) for all \(j\in[k]\setminus\{1\}\), then we know that \(\prod_{j=1}^{k}\left[\begin{pmatrix}e^{i\phi_{j}}&0\\ 0&e^{-i\phi_{j}}\end{pmatrix}\begin{pmatrix}x&\sqrt{1-x^{2}}\\ \sqrt{1-x^{2}}&-x\end{pmatrix}\right]=\begin{pmatrix}T_{k}(x)&\cdot\\ \cdot&\cdot\end{pmatrix}\)._
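This factorization is straightforward to confirm numerically; the following informal sketch (our own illustration) multiplies out the corresponding signal-processing sequence for a few degrees and checks that the top-left entry equals \(T_{k}(x)\).

```python
import numpy as np

def qsp_product(k, x):
    """Product over j = 1..k of diag(e^{i phi_j}, e^{-i phi_j}) W(x), with the
    Chebyshev angles phi_1 = (1 - k) pi / 2 and phi_j = pi / 2 for j >= 2."""
    s = np.sqrt(1 - x ** 2)
    W = np.array([[x, s], [s, -x]], dtype=complex)
    phis = [(1 - k) * np.pi / 2] + [np.pi / 2] * (k - 1)
    M = np.eye(2, dtype=complex)
    for phi in phis:                              # j = 1 is the leftmost factor
        M = M @ np.diag([np.exp(1j * phi), np.exp(-1j * phi)]) @ W
    return M

for k in range(1, 8):
    for x in np.linspace(-0.95, 0.95, 7):
        T_k = np.cos(k * np.arccos(x))            # k-th Chebyshev polynomial at x
        assert abs(qsp_product(k, x)[0, 0] - T_k) < 1e-10
```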
Then we implement the quantum singular value transformation \(T_{k}^{(\mathrm{SV})}(A)\), utilizing an alternating phase modulation (Proposition 3.13.2) with the aforementioned sequence of rotation angles, denoted by \(V\).
**Proposition 3.13.2** (QSVT by alternating phase modulation, adapted from Theorem 10 and Figure 3 in [13]).: _Suppose \(P\in\mathbb{C}[x]\) is a polynomial, and let \(\Phi\in\mathbb{R}^{n}\) be the corresponding sequence of rotation angles. We can construct \(P^{(\mathrm{SV})}(\tilde{\Pi}U\Pi)=\begin{cases}\tilde{\Pi}U_{\Phi}\Pi,&\text{ if $n$ is odd}\\ \Pi U_{\Phi}\Pi,&\text{if $n$ is even}\end{cases}\) with a single ancillary qubit. Moreover, this implementation in [13, Figure 3] makes \(n\) uses of \(U\), \(U^{\dagger}\), \(C_{\Pi}\mathrm{NOT}\), \(C_{\tilde{\Pi}}\mathrm{NOT}\), and single-qubit gates._
Owing to the robustness of QSVT (Lemma 22 in [13], full version of [13]), we have that \(\left\|T_{k}^{(\mathrm{SV})}(U)-T_{k}^{(\mathrm{SV})}(U^{\prime})\right\|\leq 4k\sqrt{\left\|A-A^{\prime}\right\|}=4k\sqrt{\epsilon}\), where \(U^{\prime}\) is a \((1,a,0)\)-bitstring indexed encoding of \(A\). Moreover, with a tighter bound for \(A\) and \(A^{\prime}\), namely \(\left\|A-A^{\prime}\right\|+\left\|\frac{A+A^{\prime}}{2}\right\|^{2}\leq 1\), we can deduce that \(\left\|T_{k}^{(\mathrm{SV})}(U)-T_{k}^{(\mathrm{SV})}(U^{\prime})\right\|\leq k\frac{\sqrt{2}}{\sqrt{1-\left\|(A+A^{\prime})/2\right\|^{2}}}\|A-A^{\prime}\|\leq\frac{\sqrt{2}}{\sqrt{1-\zeta}}k\epsilon\) following [13, Lemma 23], indicating an improved dependence on \(\epsilon\). Finally, we can compute the description of the resulting quantum circuits in \(O(\log k)=O(s(n))\) space and \(O(k)\) time because of the implementation specified in Proposition 3.13.2.
We then proceed by presenting a linear combination of bitstring indexed encodings, which adapts the LCU technique proposed by Berry, Childs, Cleve, Kothari, and Somma in [1], and incorporates a space-efficient state preparation operator. We say that \(P_{\mathbf{y}}\) is an \(\epsilon\)-state preparation operator for \(\mathbf{y}\) if \(P_{\mathbf{y}}|\bar{0}\rangle:=\sum_{i=1}^{m}\sqrt{\hat{y}_{i}}|i\rangle\) for some \(\hat{\mathbf{y}}\) such that \(\|\mathbf{y}/\|\mathbf{y}\|_{1}-\hat{\mathbf{y}}\|_{1}\leq\epsilon\).
**Lemma 3.14** (Linear combinations of bitstring indexed encodings, adapted from Lemma 29 in [13]).: _Given a matrix \(A=\sum_{i=0}^{m-1}y_{i}A_{i}\) such that each linear operator \(A_{i}\)\((0\leq i<m)\) acts on \(s\) qubits with the corresponding \((\|\mathbf{y}\|_{1},a,\epsilon_{1})\)-bitstring indexed encoding \(U_{i}\) acting on \(s+a\) qubits associated with projections \(\tilde{\Pi}_{i}\) and \(\Pi_{i}\). Also each \(y_{i}\)\((0\leq i<m)\) can be expressed in \(O(s(n))\) bits with an evaluation oracle \(\mathrm{Eval}\) that returns \(\hat{y}_{i}\) with precision \(\varepsilon:=O(\epsilon_{2}^{2}/m)\). Then utilizing an \(\epsilon_{2}\)-state preparation operator \(P_{\mathbf{y}}\) for \(\mathbf{y}\) acting on \(O(\log m)\) qubits, and an
\((s+a+\lceil\log m\rceil)\)-qubit unitary \(W=\sum_{i=0}^{m-1}|i\rangle\langle i|\otimes U_{i}+\big{(}I-\sum_{i=0}^{m-1}|i\rangle\langle i|\big{)}\otimes I\), we can implement a \((\|\mathbf{y}\|_{1},a+\lceil\log m\rceil,\epsilon_{1}\|\mathbf{y}\|_{1}^{2}+\epsilon_{2}\|\mathbf{y}\|_{1})\)-bitstring indexed encoding of \(A\) acting on \(s+a+\lceil\log m\rceil\) qubits with a single use of \(W\), \(P_{\mathbf{y}}\), \(P_{\mathbf{y}}^{\dagger}\). In addition, the classical pre-processing can be implemented in deterministic time \(\tilde{O}(m^{2}\log(m/\epsilon_{2}))\) and space \(O(\log(m/\epsilon_{2}^{2}))\),31 as well as \(m^{2}\) oracle calls to \(\mathrm{Eval}\) with precision \(\varepsilon\)._
Footnote 31: It is noteworthy that we define \(\tilde{O}(f):=O(f\operatorname{poly}\log(f))\).
Proof.: For the \(\epsilon_{2}\)-state preparation operator \(P_{\mathbf{y}}\) such that \(P_{\mathbf{y}}|\bar{0}\rangle=\sum_{i=1}^{m}\sqrt{\hat{y}_{i}}|i\rangle\), we utilize a scheme introduced by Zalka [24] (also independently rediscovered in [11] and [12]). We make an additional analysis of the required classical computational complexity, and the proof can be found in Appendix A.2.
**Proposition 3.14.1** (Space-efficient state preparation, adapted from [24, 11, 12]).: _Given an \(l\)-qubit quantum state \(|\psi\rangle:=\sum_{i=1}^{m}\sqrt{\hat{y}_{i}}|i\rangle\), where \(l=\lceil\log m\rceil\) and \(\hat{y}_{i}\) are real amplitudes associated with an evaluation oracle \(\mathrm{Eval}(i,\varepsilon)\) that returns \(\hat{y}_{i}\) up to accuracy \(\varepsilon\), we can prepare \(|\psi\rangle\) up to accuracy \(\epsilon\) in deterministic time \(\tilde{O}(m^{2}\log(m/\epsilon))\) and space \(O(\log(m/\epsilon^{2}))\), together with \(m^{2}\) evaluation oracle calls with precision \(\varepsilon:=O(\epsilon^{2}/m)\)._
Now consider the bitstring indexed encoding \(\big{(}P_{\mathbf{y}}^{\dagger}\otimes I_{s}\big{)}W\big{(}P_{\mathbf{y}} \otimes I_{s}\big{)}\) of \(A\) acting on \(s+a+\lceil\log m\rceil\) qubits. Let \(y_{i}^{\prime}:=y_{i}/\|\mathbf{y}\|_{1}\), then we obtain the implementation error:
\[\big{\|}A-\|\mathbf{y}\|_{1}\big{(}|\bar{0}\rangle\langle\bar{0 }|\otimes\tilde{\Pi}\big{)}\big{(}P_{y}^{\dagger}\otimes I_{s}\big{)}W\big{(} P_{y}\otimes I_{s}\big{)}\left(|\bar{0}\rangle\langle\bar{0}|\otimes\Pi\right)\big{\|}\] \[= \big{\|}A-\|\mathbf{y}\|_{1}\sum_{i=0}^{m-1}\hat{y}_{i}\tilde{ \Pi}_{i}U_{i}\Pi_{i}\big{\|}\] \[\leq \big{\|}A-\|\mathbf{y}\|_{1}\sum_{i=0}^{m-1}y_{i}^{\prime}\tilde {\Pi}_{i}U_{i}\Pi_{i}\big{\|}+\|\mathbf{y}\|_{1}\sum_{i=0}^{m-1}(y_{i}^{\prime }-\hat{y}_{i})\|\tilde{\Pi}_{i}U_{i}\Pi_{i}\|\] \[\leq \|\mathbf{y}\|_{1}\sum_{i=0}^{m-1}y_{i}^{\prime}\|A_{i}-\tilde{ \Pi}_{i}U_{i}\Pi_{i}\|+\epsilon_{2}\|\mathbf{y}\|_{1}\] \[\leq \epsilon_{1}\|\mathbf{y}\|_{1}^{2}+\epsilon_{2}\|\mathbf{y}\|_{1}.\]
Here, the third line is due to the triangle inequality, the fourth line owes to Proposition 3.14.1, and the fifth line is because \(U_{i}\) is a \((1,a,\epsilon_{1})\)-bitstring indexed encoding of \(A_{i}\) for \(0\leq i<m\).
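The identity behind this calculation, \(\big{(}\langle\bar{0}|P_{\mathbf{y}}^{\dagger}\otimes I_{s}\big{)}W\big{(}P_{\mathbf{y}}|\bar{0}\rangle\otimes I_{s}\big{)}=\sum_{i}\hat{y}_{i}\tilde{\Pi}_{i}U_{i}\Pi_{i}\), can be checked directly on small matrices. The following informal sketch (our own illustration with \(m=2\) single-qubit unitaries, trivial projections, and an exact state-preparation unitary, i.e. \(\epsilon_{2}=0\)) does so.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_unitary(n):
    q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    return q

U0, U1 = random_unitary(2), random_unitary(2)   # encodings of A_0, A_1 (a = 0, trivial projections)
yhat = np.array([0.7, 0.3])                     # non-negative weights, ||yhat||_1 = 1

# exact state-preparation unitary: first column (sqrt(yhat_0), sqrt(yhat_1))
Py = np.array([[np.sqrt(yhat[0]), -np.sqrt(yhat[1])],
               [np.sqrt(yhat[1]),  np.sqrt(yhat[0])]])

# select operator W = |0><0| (x) U0 + |1><1| (x) U1, index register first
W = np.block([[U0, np.zeros((2, 2))], [np.zeros((2, 2)), U1]])

V = np.kron(Py.conj().T, np.eye(2)) @ W @ np.kron(Py, np.eye(2))
assert np.allclose(V[:2, :2], yhat[0] * U0 + yhat[1] * U1)   # <0|-block realizes the LCU
```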
To bring the bitstring indexed encoding resulting from Lemma 3.14 to the normalization \(\alpha=1\), we need to perform _a renormalization procedure_ that constructs a new encoding with the desired \(\alpha\). We achieve this by extending the proof strategy outlined by Gilyen [11, Page 52] for block-encodings to bitstring indexed encodings. The renormalization procedure is provided in Lemma 3.15, and the complete proof is available in Appendix A.2. Additionally, similar results have been established in [10, Lemma 3.10] and [14, Corollary 2.8].
**Lemma 3.15** (Renormalizing bitstring indexed encoding).: _Let \(U\) be an \((\alpha,a,\epsilon)\)-bitstring indexed encoding of \(A\), where \(\alpha>1\) and \(0<\epsilon<1\), and \(A\) is a linear operator acting on \(s(n)\) qubits. We can implement a quantum circuit \(V\), serving as a normalization of \(U\), such that \(V\) is a \((1,a+2,36\epsilon)\)-bitstring indexed encoding of \(A\). This implementation requires \(O(\alpha)\) uses of \(U\), \(U^{\dagger}\), \(C_{\Pi}\mathrm{NOT}\), \(C_{\tilde{\Pi}}\mathrm{NOT}\), and \(O(\alpha)\) single-qubit gates. Moreover, the description of the resulting quantum circuit can be computed in deterministic time \(O(\alpha)\) and space \(O(s)\)._
Finally, we combine Lemma 3.14 and Lemma 3.13 to proceed with the proof of Theorem 3.11.
Proof of Theorem 3.11.: By using Lemma 3.13, we have \(P_{d}(A)=\frac{c_{0}}{2}+\sum_{k=1}^{d}c_{k}T_{k}(A)\), where each \(T_{k}(A)\) corresponds to a \((1,a+1,4k\sqrt{\epsilon_{1}})\)-bitstring indexed encoding \(V_{k}\). Employing Lemma 3.14, we obtain a \((\|\mathbf{c}\|_{1},\hat{a},4d\sqrt{\epsilon_{1}}\|\mathbf{c}\|_{1}^{2}+\epsilon_{2}\|\mathbf{c}\|_{1})\)-bitstring indexed encoding \(\tilde{V}\) where \(\hat{a}:=a+\lceil\log d\rceil+1\). Moreover, by utilizing Lemma 3.15, we obtain a \((1,a^{\prime},144d\sqrt{\epsilon_{1}}\|\mathbf{c}\|_{1}^{2}+36\epsilon_{2}\|\mathbf{c}\|_{1})\)-bitstring indexed encoding \(V\) acting on \(s+a^{\prime}\) qubits where \(a^{\prime}:=\hat{a}+2=a+\lceil\log d\rceil+3\). A direct calculation demonstrates that this implementation makes \(\sum_{k=1}^{d}k\cdot O(\|\mathbf{c}\|_{1})=O(d^{2}\|\mathbf{c}\|_{1})\) uses of \(U\), \(U^{\dagger}\), \(C_{\Pi}\mathrm{NOT}\), \(\mathrm{C}_{\widehat{\Pi}}\mathrm{NOT}\), also \(\sum_{k=0}^{d}k\cdot O(\|\mathbf{c}\|_{1})=O(d^{2}\|\mathbf{c}\|_{1})\) uses of multi-controlled single-qubit gates. In addition, the descriptions of quantum circuits \(\{V_{k}\}_{k=0}^{d}\) can be computed in \(O(s(n))\) space and \(\sum_{k=0}^{d}k\cdot O(\|\mathbf{c}\|_{1})=O(d^{2}\|\mathbf{c}\|_{1})\) time. Therefore, the description of the quantum circuit \(V\) can be computed in deterministic time \(\max\{\tilde{O}(d^{2}\|\mathbf{c}\|_{1}\log(d/\epsilon_{2})),O(d^{2}\|\mathbf{c}\|_{1})\}=\tilde{O}(d^{2}\|\mathbf{c}\|_{1}\log(d/\epsilon_{2}))\) and space \(O(\max\{s(n),\log(d/\epsilon_{2}^{2})\})\), as well as \(O(d^{2}\|\mathbf{c}\|_{1})\) oracle calls to Eval with precision \(\varepsilon\).
Finally, we can extend our construction to any linear operator \(A\) by replacing \(P_{d}(A)\) with \(P_{d}^{(\mathrm{SV})}\) as defined in Definition 3.3, taking into account that the Chebyshev polynomial (of the first kind) \(T_{k}\) is either an even or an odd function.
### Examples: the sign function and the normalized logarithmic function
In this subsection, we provide explicit examples that illustrate the usage of the space-efficient quantum singular value transformation (QSVT) technique. We define two functions:
\[\mathrm{sgn}(x):=\begin{cases}1,&x>0\\ -1,&x<0\\ 0,&x=0\end{cases}\quad\text{ and }\quad\ln_{\beta}(x):=\frac{\ln(1/x)}{2\ln(2/ \beta)}.\]
In particular, the sign function is a bounded function, and we derive the corresponding bitstring indexed encoding with _deterministic_ space-efficient classical pre-processing in Corollary 3.7. On the other hand, the logarithmic function is a piecewise-smooth function that is bounded by \(1\), and we deduce the corresponding bitstring indexed encoding with _randomized_ space-efficient classical pre-processing in Corollary 3.10.
**Corollary 3.16** (Sign polynomial with space-efficient coefficients applied to bitstring indexed encodings).: _Let \(A\) be an Hermitian matrix that acts on \(s\) qubits, where \(s(n)\geq\Omega(\log(n))\). Let \(U\) be a \((1,a,\epsilon_{1})\)-bitstring indexed encoding of \(A\) that acts on \(s+a\) qubits. Then, for any \(d\leq 2^{O(s(n))}\) and \(\epsilon_{2}\geq 2^{-O(s(n))}\), we have an \(\left(1,a+\lceil\log d\rceil+3,144\hat{C}_{\mathrm{sgn}}^{2}d\epsilon_{1}^{1/ 2}\log^{2}(d)+36\hat{C}_{\mathrm{sgn}}\epsilon_{2}\log(d)\right)\)-bitstring indexed encoding \(V\) of \(P_{d}^{\mathrm{sgn}}(A)\), where \(P_{d}^{\mathrm{sgn}}\) is a space-efficient bounded polynomial approximation of the sign function specified in Corollary 3.7, and \(\hat{C}_{\mathrm{sgn}}\) is a universal constant. This implementation requires \(O(d^{2}\log d)\) uses of \(U\), \(U^{\dagger}\), \(C_{\Pi}\mathrm{NOT}\), \(C_{\widehat{\Pi}}\mathrm{NOT}\), and \(O(d^{2}\log d)\) multi-controlled single-qubit gates [30]. Moreover, we can compute the description of \(V\) in deterministic time \(\tilde{O}(\epsilon_{2}^{-1}d^{9/2})\) and space \(O(s(n))\). Furthermore, our construction straightforwardly extends to any non-Hermitian (_but linear_) matrix \(A\) by simply replacing \(P_{d}^{\mathrm{sgn}}(A)\) with \(P_{\mathrm{sgn},d}^{(\mathrm{SV})}(A)\) defined in the same way as Definition 3.3._
Proof.: In Corollary 3.7, we can express \(P_{d}^{\mathrm{sgn}}(x)\) as \(\frac{c_{0}}{2}+\sum_{k=1}^{d}c_{k}T_{k}(x)\), where \(d=O(\delta^{-1}\log\epsilon^{-1})\). For all \(x\in[-1,1]\setminus[-\delta,\delta]\), we have \(\lvert\mathrm{sgn}(x)-P_{d}^{\mathrm{sgn}}(x)\rvert\leq O(\epsilon\log d):=\epsilon_{2}.\) To implement Eval with precision \(\varepsilon\), we can compute the corresponding entry \(c_{i}\) of the coefficient vector, which requires \(O(\log(\varepsilon^{-1}d^{4}))=O(\log(\epsilon_{2}^{-2}d^{5}))\) space and \(\tilde{O}(\varepsilon^{-1/2}d^{2})=\tilde{O}(\epsilon_{2}^{-1}d^{5/2})\) time. Using Theorem 3.11, we can conclude that \(P_{d}^{\mathrm{sgn}}\) has a \(\left(1,a^{\prime},144d\hat{C}_{\mathrm{sgn}}^{2}\epsilon_{1}^{1/2}\log^{2}(d)+36\hat{C}_{\mathrm{sgn}}\epsilon_{2}\log d\right)\)-bitstring indexed encoding \(V\) that acts on \(s+a^{\prime}\) qubits, where \(a^{\prime}:=a+\lceil\log d\rceil+3\) and \(\lVert\mathbf{c}\rVert_{1}\leq\hat{C}_{\mathrm{sgn}}\log d\).
Furthermore, the quantum circuit of \(V\) makes \(O(d^{2}\log d)\) uses of \(U\), \(U^{\dagger}\), \(\mathrm{C}_{\Pi}\mathrm{NOT}\), and \(\mathrm{C}_{\widehat{\Pi}}\mathrm{NOT}\) as well as \(O(d^{2}\log d)\) multi-controlled single-qubit gates. We note that \(d\leq 2^{O(s(n))}\) and \(\epsilon_{2}\geq 2^{-O(s(n))}\). Moreover, we can compute the description of \(V\) in \(O(s(n))\) space since each oracle call to Eval with precision \(\varepsilon\) can be computed in \(O(\log(\epsilon_{2}^{-2}d^{5}))\) space. Additionally, the time complexity for computing the description of \(V\) is
\[\max\{\tilde{O}(d^{2}\log d\log(d/\epsilon_{2})),d^{2}\log d\cdot\tilde{O}( \epsilon_{2}^{-1}d^{5/2})\}=\tilde{O}(\epsilon_{2}^{-1}d^{9/2}).\qed\]
**Corollary 3.17** (Log polynomial with space-efficient coefficients applied to bitstring indexed encodings).: _Let \(A\) be an Hermitian matrix that acts on \(s\) qubits, where \(s(n)\geq\Omega(\log(n))\). Let \(U\) be a \((1,a,\epsilon_{1})\)-bitstring indexed encoding of \(A\) that acts on \(s+a\) qubits. Then, for any \(d\leq 2^{O(s(n))}\)
\(\epsilon_{2}\geq 2^{-O(s(n))}\), and \(\beta\geq 2^{-O(s(n))}\), we have a \((1,a+\lceil\log d\rceil+3,144\hat{C}_{\mathrm{ln}}^{2}\epsilon_{1}^{1/2}d^{3}+36\hat{C}_{\mathrm{ln}}\epsilon_{2}d)\)-bitstring indexed encoding \(V\) of \(P_{d}^{\mathrm{ln}}(A)\), where \(P_{d}^{\mathrm{ln}}\) is a space-efficient bounded polynomial approximation of the normalized log function specified in Corollary 3.10, and \(\hat{C}_{\mathrm{ln}}\) is a universal constant. This implementation requires \(O(d^{3})\) uses of \(U\), \(U^{\dagger}\), \(C_{\Pi}\mathrm{NOT}\), \(C_{\tilde{\Pi}}\mathrm{NOT}\), and \(O(d^{3})\) multi-controlled single-qubit gates [30]. Moreover, we can compute the description of the resulting quantum circuit in bounded-error randomized time \(\tilde{O}(\max\{\beta^{-5}\epsilon_{2}^{-4}d^{5},\epsilon_{2}^{-1}d^{11/2}\})\) and space \(O(s(n))\)._
Proof.: In Corollary 3.10, we can express \(P_{d}^{\mathrm{ln}}(x)\) as \(\frac{c_{0}}{2}+\sum_{k=1}^{d}c_{k}T_{k}(x)\), where \(d=O(\delta^{-1}\log\epsilon^{-1})\). For the function \(\ln_{\beta}(x)\), we have \(|\,\mathrm{ln}_{\beta}(x)-P_{d}^{\mathrm{ln}}(x)|\leq O(\epsilon\log d):=\epsilon_{2}\) for all \(x\in[\beta,1]\). To implement Eval with precision \(\varepsilon\), we can compute the corresponding entry \(c_{i}\) of the coefficient vector by a bounded-error randomized algorithm. This requires \(O(\log(\varepsilon^{-1}d^{4}\beta^{-4}))=O(\log(\beta^{-4}\epsilon_{2}^{-2}d^{5}))\) space and \(\tilde{O}(\max\{\beta^{-5}\epsilon^{-2},\varepsilon^{-1/2}d^{2}\})=\tilde{O}(\max\{\beta^{-5}\epsilon_{2}^{-4}d^{2},\epsilon_{2}^{-1}d^{5/2}\})\) time. Using Theorem 3.11, we conclude that \(P_{d}^{\mathrm{ln}}\) has a \((1,a^{\prime},144\hat{C}_{\mathrm{ln}}^{2}\epsilon_{1}^{1/2}d^{3}+36\hat{C}_{\mathrm{ln}}\epsilon_{2}d)\)-bitstring indexed encoding \(V\) that acts on \(s+a^{\prime}\) qubits, where \(a^{\prime}:=a+\lceil\log d\rceil+3\) and \(\|\mathbf{c}\|_{1}\leq\hat{C}_{\mathrm{ln}}d\).
Furthermore, the quantum circuit of \(V\) makes \(O(d^{3})\) uses of \(U\), \(U^{\dagger}\), \(C_{\Pi}\mathrm{NOT}\), and \(C_{\widehat{\Pi}}\mathrm{NOT}\) as well as \(O(d^{3})\) multi-controlled single-qubit gates. We note that \(d\leq 2^{O(s(n))}\), \(\epsilon_{2}\geq 2^{-O(s(n))}\), and \(\beta\geq 2^{-O(s(n))}\). Additionally, we can compute the description of \(V\) in \(O(s(n))\) space since each oracle call to Eval with precision \(\varepsilon\) can be computed in \(O(\log(\beta^{-4}\epsilon_{2}^{-2}d^{5}))\) space. The time complexity for computing the description of \(V\) is given by:
\[\max\{\tilde{O}(d^{3}\log(d/\epsilon_{2})),d^{3}\cdot\tilde{O}(\max\{\beta^{-5}\epsilon_{2}^{-4}d^{2},\epsilon_{2}^{-1}d^{5/2}\})\}=\tilde{O}(\max\{\beta^{-5}\epsilon_{2}^{-4}d^{5},\epsilon_{2}^{-1}d^{11/2}\}). \tag{3.13}\]
Finally, to guarantee that the probability that all \(O(d^{3})\) oracle calls to Eval succeed is at least \(2/3\), we use a \((4\ln d)\)-fold sequential repetition of Eval for each oracle call. Together with the Chernoff-Hoeffding bound and the union bound, the resulting randomized algorithm succeeds with probability at least \(1-d^{3}\cdot 2\exp(-4\ln d)\geq 2/3\). We further note that the time complexity specified in Equation (3.13) only increases by a \(4\ln d\) factor.
### Application: space-efficient error reduction for unitary quantum computations
We provide a unified space-efficient error reduction for unitary quantum computations. In particular, one-sided error scenarios (e.g., \(\mathsf{RQ}_{\mathrm{U}}\mathsf{L}\) and \(\mathsf{coRQ}_{\mathrm{U}}\mathsf{L}\)) have been proven in [20], and the two-sided error scenario (e.g., \(\mathsf{BQ}_{\mathrm{U}}\mathsf{L}\)) has been demonstrated in [11].
**Theorem 3.18** (Space-efficient error reduction for unitary quantum computations).: _Let \(s(n)\) be a space-constructible function, and let \(a(n)\), \(b(n)\), and \(l(n)\) be deterministic \(O(s(n))\) space computable functions such that \(a(n)-b(n)\geq 2^{-O(s(n))}\), we know that for any \(l(n)\leq O(s(n))\), there is \(d:=l(n)/\max\{\sqrt{a}-\sqrt{b},\sqrt{1-b}-\sqrt{1-a}\}\) such that_
\[\mathsf{BQ}_{\mathrm{U}}\mathsf{SPACE}[s(n),a(n),b(n)]\subseteq\mathsf{BQ}_{ \mathrm{U}}\mathsf{SPACE}\big{[}s(n)+\lceil\log d\rceil+1,1-2^{-l(n)},2^{-l(n)} \big{]}.\]
_Furthermore, for one-sided error scenarios, we have that for any \(l(n)\leq 2^{O(s(n))}\):_
\[\mathsf{RQ}_{\mathrm{U}}\mathsf{SPACE}[s(n),a(n)]\subseteq\mathsf{RQ}_{ \mathrm{U}}\mathsf{SPACE}\big{[}s(n)+\lceil\log d_{0}\rceil+1,1-2^{-l(n)} \big{]}\text{ where }d_{0}:=\frac{l(n)}{\max\{\sqrt{a},1-\sqrt{1-a}\}},\]
\[\mathsf{coRQ}_{\mathrm{U}}\mathsf{SPACE}[s(n),b(n)]\subseteq\mathsf{coRQ}_{ \mathrm{U}}\mathsf{SPACE}\big{[}s(n)+\lceil\log d_{1}\rceil+1,2^{-l(n)} \big{]}\text{ where }d_{1}:=\frac{l(n)}{\max\{1-\sqrt{b},\sqrt{1-b}\}}.\]
By choosing \(s(n)=\Theta(\log(n))\), we derive error reduction for logarithmic-space quantum computation in a unified approach:
**Corollary 3.19** (Error reduction for \(\mathsf{BQ}_{\mathrm{U}}\mathsf{L}\), \(\mathsf{RQ}_{\mathrm{U}}\mathsf{L}\), and \(\mathsf{coRQ}_{\mathrm{U}}\mathsf{L}\)).: _For deterministic logspace computable functions \(a(n)\), \(b(n)\), and \(l(n)\) satisfying \(a(n)-b(n)\geq 1/\operatorname{poly}(n)\) and \(l(n)\leq\)
\(O(\log n)\), we have the following inclusions:_
\[\mathsf{BQ}_{\mathrm{U}}\mathsf{L}[a(n),b(n)] \subseteq\mathsf{BQ}_{\mathrm{U}}\mathsf{L}[1-2^{-l(n)},2^{-l(n)}],\] \[\mathsf{RQ}_{\mathrm{U}}\mathsf{L}[a(n)] \subseteq\mathsf{RQ}_{\mathrm{U}}\mathsf{L}[1-2^{-l(n)}],\] \[\mathsf{coRQ}_{\mathrm{U}}\mathsf{L}[b(n)] \subseteq\mathsf{coRQ}_{\mathrm{U}}\mathsf{L}[2^{-l(n)}].\]
The construction specified in Theorem 3.18 crucially relies on Lemma 3.20, whose proof directly follows from Theorem 20 in [10] and is deferred to Appendix A.3.
**Lemma 3.20** (Space-efficient singular value discrimination).: _Let \(0\leq\alpha<\beta\leq 1\) and \(A:=\tilde{\Pi}U\Pi\) be a \((1,0,0)\)-bitstring indexed encoding where \(U\) acts on \(s\) qubits and \(s(n)\geq\Omega(\log n)\). Consider an unknown quantum state \(|\psi\rangle\), with the promise that it is a right singular vector of \(A\) with a singular value either at least \(\beta\) or at most \(\alpha\). We can distinguish the two cases with error probability at most \(\varepsilon:=O(\epsilon\log d)\) using a degree-\(d\) quantum singular value transformation where \(d=\frac{\log 1/\epsilon}{\max\{\beta-\alpha,\sqrt{1-\alpha^{2}}-\sqrt{1-\beta^{2}}\}}\). Moreover, we can make the error one-sided if \(\alpha=0\) or \(\beta=1\). In particular, the implementation requires \(O(d^{2}\log d)\) uses of \(U\), \(U^{\dagger}\), \(C_{\Pi}\mathrm{NOT}\), \(C_{\tilde{\Pi}}\mathrm{NOT}\), and \(O(d^{2}\log d)\) multi-controlled single-qubit gates. Also, we can compute the description of the implementation in deterministic time \(\tilde{O}(\varepsilon^{-1}d^{9/2})\) and space \(O(s(n))\)._
Finally, we provide the proof of Theorem 3.18, which closely relates to Theorem 38 in [10] (the full version of [10]).
Proof of Theorem 3.18.: It suffices to amplify the promise gap by QSVT. Note that the probability that a \(\mathsf{BQ}_{\mathrm{U}}\mathsf{SPACE}[s(n)]\) circuit \(C_{x}\) accepts is \(\Pr[C_{x}\text{ accepts}]=\||1\rangle\langle 1|_{\mathrm{out}}C_{x}|0^{k+m}\rangle\|_{2}^{2}\geq a\) for _yes_ instances, whereas \(\Pr[C_{x}\text{ accepts }]=\||1\rangle\langle 1|_{\mathrm{out}}C_{x}|0^{k+m}\rangle\|_{2}^{2}\leq b\) for _no_ instances. Then consider a \((1,0,0)\)-bitstring indexed encoding \(M_{x}:=\Pi_{\mathrm{out}}C_{x}\Pi_{\mathrm{in}}\) such that \(\|M_{x}\|\geq\sqrt{a}\) for _yes_ instances while \(\|M_{x}\|\leq\sqrt{b}\) for _no_ instances, where \(\Pi_{\mathrm{in}}:=|0\rangle\,\langle 0|^{\otimes k+m}\) and \(\Pi_{\mathrm{out}}:=|1\rangle\langle 1|_{\mathrm{out}}\otimes I_{m+k-1}\). Since \(\|M_{x}\|=\sigma_{\max}(M_{x})\) where \(\sigma_{\max}(M_{x})\) is the largest singular value of \(M_{x}\), it suffices to decide whether the largest singular value of \(M_{x}\) is at least \(\sqrt{a}\) or at most \(\sqrt{b}\). By setting \(\alpha:=\sqrt{b}\), \(\beta:=\sqrt{a}\), and \(\varepsilon:=2^{-l(n)}\), this task is a direct corollary of Lemma 3.20.
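The reduction from acceptance probability to the largest singular value used above is easy to see numerically: since \(\Pi_{\mathrm{in}}\) is a rank-one projector, \(\|M_{x}\|=\||1\rangle\langle 1|_{\mathrm{out}}C_{x}|0^{k+m}\rangle\|_{2}=\sqrt{\Pr[C_{x}\text{ accepts}]}\). The following informal sketch (our own illustration, with a Haar-random unitary standing in for \(C_{x}\)) confirms this.

```python
import numpy as np

rng = np.random.default_rng(2)
n_qubits = 3
dim = 2 ** n_qubits
C, _ = np.linalg.qr(rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim)))

zero = np.zeros(dim)
zero[0] = 1.0                                              # |0...0>
Pi_in = np.outer(zero, zero)                               # |0...0><0...0|
Pi_out = np.kron(np.diag([0.0, 1.0]), np.eye(dim // 2))    # |1><1| on the output qubit

p_accept = np.linalg.norm(Pi_out @ C @ zero) ** 2          # acceptance probability
sigma_max = np.linalg.svd(Pi_out @ C @ Pi_in, compute_uv=False)[0]
assert abs(sigma_max - np.sqrt(p_accept)) < 1e-12
```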
## 4 Space-bounded quantum state testing
We begin by defining the problem of quantum state testing in a space-bounded manner:
**Definition 4.1** (Space-bounded Quantum State Testing).: Given polynomial-size quantum circuits (devices) \(Q_{0}\) and \(Q_{1}\) that act on \(O(\log n)\) qubits and have a succinct description (the "source code" of devices), with \(r(n)\) specified output qubits, where \(r(n)\) is a deterministic logspace computable function such that \(0<r(n)\leq O(\log(n))\). Let \(\rho_{i}\) denote the mixed state obtained by running \(Q_{i}\) on the all-zero state \(|\tilde{0}\rangle\) and tracing out the non-output qubits.
We define a _space-bounded quantum state testing_ problem, with respect to a specified distance-like measure, to decide whether \(\rho_{0}\) and \(\rho_{1}\) are easily distinguished or almost indistinguishable. Likewise, we also define a _space-bounded quantum state certification_ problem to decide whether \(\rho_{0}\) and \(\rho_{1}\) are easily distinguished or _exactly_ indistinguishable.
We remark that space-bounded quantum state certification, defined in Definition 4.1, represents a "white-box" (log)space-bounded counterpart of quantum state certification [1].
_Remark 4.2_ (Lifting to exponential-size instances by succinct encodings).: For \(s(n)\) space-uniform quantum circuits \(Q_{0}\) and \(Q_{1}\) acting on \(O(s(n))\) qubits, if these circuits admit a succinct encoding,32 namely there is a deterministic \(O(s(n))\)-space Turing machine with time complexity
\(\operatorname{poly}(s(n))\) that uniformly generates the corresponding gate sequences, then Definition 4.1 can be extended to any \(s(n)\) satisfying \(\Omega(\log n)\leq s(n)\leq\operatorname{poly}(n)\).33
Footnote 33: It is noteworthy that Definition 4.1 (mostly) coincides with the case of \(s(n)=\Theta(\log n)\) and directly takes the corresponding gate sequence of \(Q_{0}\) and \(Q_{1}\) as an input.
Next, we define space-bounded quantum state testing problems, based on Definition 4.1, with respect to four commonplace distance-like measures.
**Definition 4.3** (Space-bounded Quantum State Distinguishability Problem, \(\operatorname{\textsc{GapQSD}}_{\log}\)).: Consider deterministic logspace computable functions \(\alpha(n)\) and \(\beta(n)\), satisfying \(0\leq\beta(n)<\alpha(n)\leq 1\) and \(\alpha(n)-\beta(n)\geq 1/\operatorname{poly}(n)\). Then the promise is that one of the following holds:
* _Yes_ instances: A pair of quantum circuits \((Q_{0},Q_{1})\) such that \(\operatorname{td}(\rho_{0},\rho_{1})\geq\alpha(n)\);
* _No_ instances: A pair of quantum circuits \((Q_{0},Q_{1})\) such that \(\operatorname{td}(\rho_{0},\rho_{1})\leq\beta(n)\).
Moreover, we also define the certification counterpart of \(\operatorname{\textsc{GapQSD}}_{\log}\), referred to as \(\operatorname{\textsc{CertQSD}}_{\log}\), given that \(\beta=0\). Specifically, \(\operatorname{\textsc{CertQSD}}_{\log}[\alpha(n)]:=\operatorname{\textsc{GapQSD }}_{\log}[\alpha(n),0]\).
Likewise, we can define \(\operatorname{\textsc{GapQJS}}_{\log}\) and \(\operatorname{\textsc{GapQHS}}_{\log}\), also the certification version \(\operatorname{\textsc{CertQHS}}_{\log}\), in a similar manner to Definition 4.3 by replacing the distance-like measure accordingly:
* \(\operatorname{\textsc{GapQJS}}_{\log}[\alpha(n),\beta(n)]\): Decide whether \(\operatorname{QJS}_{2}(\rho_{0},\rho_{1})\geq\alpha(n)\) or \(\operatorname{QJS}_{2}(\rho_{0},\rho_{1})\leq\beta(n)\);
* \(\operatorname{\textsc{GapQHS}}_{\log}[\alpha(n),\beta(n)]\): Decide whether \(\operatorname{HS}^{2}(\rho_{0},\rho_{1})\geq\alpha(n)\) or \(\operatorname{HS}^{2}(\rho_{0},\rho_{1})\leq\beta(n)\).
Furthermore, we use the notation \(\operatorname{\overline{\textsc{CertQSD}}}_{\log}\) to indicate the _complement_ of \(\operatorname{\textsc{CertQSD}}_{\log}\) with respect to the chosen parameter \(\alpha(n)\), and so does \(\operatorname{\overline{\textsc{CertQHS}}}_{\log}\).
**Definition 4.4** (Space-bounded Quantum Entropy Difference Problem, \(\operatorname{\textsc{GapQED}}_{\log}\)).: Consider a deterministic logspace computable function \(g:\mathbb{N}\to\mathbb{R}^{+}\), satisfying \(g(n)\geq 1/\operatorname{poly}(n)\). Then the promise is that one of the following cases holds:
* _Yes_ instance: A pair of quantum circuits \((Q_{0},Q_{1})\) such that \(\operatorname{S}(\rho_{0})-\operatorname{S}(\rho_{1})\geq g(n)\);
* _No_ instance: A pair of quantum circuits \((Q_{0},Q_{1})\) such that \(\operatorname{S}(\rho_{1})-\operatorname{S}(\rho_{0})\geq g(n)\).
**Novel complete characterizations for space-bounded quantum computation.** We now present the main theorems in this section and the paper. Theorem 4.5 establishes the first family of natural \(\operatorname{\textsc{coRQ}}_{\text{U}}\)L-complete problems. By relaxing the error requirement from one-sided to two-sided, Theorem 4.6 identifies a new family of natural \(\operatorname{\mathsf{BQL}}\)-complete problems on space-bounded quantum state testing.
**Theorem 4.5**.: _The computational hardness of the following \((\)log\()\)space-bounded quantum state certification problems, for any deterministic logspace computable \(\alpha(n)\geq 1/\operatorname{poly}(n)\), is as follows:_
1. \(\operatorname{\overline{\textsc{CertQSD}}}_{\log}[\alpha(n)]\) _is_ \(\mathsf{coRQ}_{\mathrm{U}}\mathsf{L}\)_-complete;_

2. \(\operatorname{\overline{\textsc{CertQHS}}}_{\log}[\alpha(n)]\) _is_ \(\mathsf{coRQ}_{\mathrm{U}}\mathsf{L}\)_-complete._
**Theorem 4.6**.: _The computational hardness of the following \((\)log\()\)space-bounded quantum state testing problems, where \(\alpha(n)-\beta(n)\geq 1/\operatorname{poly}(n)\) or \(g(n)\geq 1/\operatorname{poly}(n)\) as well as \(\alpha(n)\), \(\beta(n)\), \(g(n)\) can be computed in deterministic logspace, is as follows:_
1. \(\operatorname{\textsc{GapQSD}}_{\log}[\alpha(n),\beta(n)]\) _is_ \(\operatorname{\mathsf{BQL}}\)_-complete;_
2. \(\operatorname{\textsc{GapQED}}_{\log}[g(n)]\) _is_ \(\operatorname{\mathsf{BQL}}\)_-complete;_
3. \(\operatorname{\textsc{GapQJS}}_{\log}[\alpha(n),\beta(n)]\) _is_ \(\operatorname{\mathsf{BQL}}\)_-complete;_
4. \(\textsc{GapQHS}_{\log}[\alpha(n),\beta(n)]\) _is_ \(\mathsf{BQL}\)_-complete._
It is noteworthy that we can naturally extend Theorem 4.5 and Theorem 4.6 to their exponential-size up-scaling counterparts with \(2^{-O(s(n))}\)-precision, employing the extended version of Definition 4.1 outlined in Remark 4.2, thus achieving the complete characterizations for \(\mathsf{coRQ}_{\mathsf{U}}\mathsf{SPACE}[s(n)]\) and \(\mathsf{BQPSPACE}[s(n)]\), respectively.
In the remainder of this section, we first address problems with two-sided errors. Specifically, by employing a general framework for space-bounded quantum state testing demonstrated in Section 4.1, we demonstrate the \(\mathsf{BQL}\) containment of \(\textsc{GapQSD}_{\log}\) in Section 4.2, as well as the \(\mathsf{BQL}\) containment of \(\textsc{GapQED}_{\log}\) and \(\textsc{GapQJS}_{\log}\) in Section 4.3. Subsequently, in Section 4.4, we focus on making the error one-sided and establish the \(\mathsf{coRQ}_{\mathsf{U}}\mathsf{L}\) containment of \(\overline{\textsc{CertQSD}_{\log}}\) and \(\overline{\textsc{CertQHS}_{\log}}\). Additionally, we show the \(\mathsf{BQL}\) containment of \(\textsc{GapQHS}_{\log}\) in Appendix B. The corresponding hardness proof for all these problems is provided in Section 4.5.
### Space-bounded quantum state testing: a general framework
In this subsection, we introduce a general framework for quantum state testing that utilizes a quantum tester \(\mathcal{T}\). Specifically, the space-efficient tester \(\mathcal{T}\) succeeds (outputting the value "\(0\)") with probability \(x\), which is linearly dependent on some quantity closely related to the distance-like measure of interest. Consequently, we can obtain an additive-error estimation \(\widetilde{x}\) of \(x\) with high probability through sequential repetition (Lemma 2.11).
To construct \(\mathcal{T}\), we combine the one-bit precision phase estimation [10], commonly known as the Hadamard test [1], for block-encodings (Lemma 4.9), with our space-efficient quantum singular value transformation (QSVT) technique, which we describe in Section 3.
**Constructing a space-efficient quantum tester.** We now provide a formal definition and the detailed construction of the quantum tester \(\mathcal{T}\). The quantum circuit shown in Figure 2 defines the quantum tester \(\mathcal{T}(Q,U_{A},P_{d},\epsilon)\) using the following parameters with \(s(n)=\Theta(\log n)\):
* An \(s(n)\)-qubit quantum circuit \(Q\) prepares the purification of an \(r(n)\)-qubit quantum state \(\rho\) where \(\rho\) is the quantum state of interest;
* \(U_{A}\) is a \((1,s-r,0)\)-block-encoding of an \(r(n)\)-qubit Hermitian operator \(A\) where \(A\) relates to the quantum states of interest and \(r(n)\leq s(n)\);
* \(P_{d}\) is a degree-\(d\) bounded polynomial with a particular form \(P_{d}=\frac{c_{0}}{2}+\sum_{k=1}^{d}c_{k}T_{k}\in\mathbb{R}[x]\) where \(T_{k}\) is the \(k\)-th Chebyshev polynomial, with \(d\leq 2^{O(s(n))}\), such that the coefficients \(\mathbf{c}:=(c_{0},\cdots,c_{d})\) can be computed in bounded-error randomized space \(O(s(n))\);
* \(\epsilon\) is the precision parameter used in the estimation of \(x\), with \(\epsilon\geq 2^{-O(s(n))}\).
Figure 2: Quantum tester \(\mathcal{T}(Q,U_{A},P_{d},\epsilon)\): the circuit implementation.
Moreover, we define the corresponding estimation procedure, denoted as \(\hat{\mathcal{T}}(Q,U_{A},P_{d},\epsilon,\epsilon_{H},\delta)\), namely a quantum algorithm that computes an additive-error estimation \(\widetilde{x}\) of the output \(x\) from the tester \(\mathcal{T}(Q,U_{A},P_{d},\epsilon)\). Technically speaking, \(\hat{\mathcal{T}}\) outputs \(\widetilde{x}\) such that \(|x-\widetilde{x}|\leq\epsilon\|\mathbf{c}\|_{1}+\epsilon_{H}\) with probability at least \(1-\delta\). Now we will demonstrate that both the tester \(\mathcal{T}\) and the corresponding estimation procedure \(\hat{\mathcal{T}}\) are space-efficient:
**Lemma 4.7** (Quantum tester \(\mathcal{T}\) and estimation procedure \(\hat{\mathcal{T}}\) are space-efficient).: _The quantum tester \(\mathcal{T}(Q,U_{A},P_{d},\epsilon)\), as specified in Figure 2, accepts (outputting the value "\(0\)") with probability \(\frac{1}{2}(1+\mathrm{Re}(\mathrm{Tr}(P_{d}(A)\rho)))\pm\frac{1}{2}\epsilon\| \mathbf{c}\|_{1}\). Moreover, we can compute the quantum circuit description of \(\mathcal{T}\) in deterministic space \(O(s+\log(1/\epsilon))\) given the coefficient vector \(\mathbf{c}\) of \(P_{d}\). Furthermore, we can implement the corresponding estimation procedure \(\hat{\mathcal{T}}(Q,U_{A},P_{d},\epsilon,\epsilon_{H},\delta)\) in bounded-error quantum space \(O(s+\log(1/\epsilon)+\log(1/\epsilon_{H})+\log\log(1/\delta))\)._
We first provide two useful lemmas for implementing our quantum tester \(\mathcal{T}\). It is noteworthy that Lemma 4.8 originates from [10], while Lemma 4.9 is a specific version of one-bit precision phase estimation (or the Hadamard test) [11, 1].
**Lemma 4.8** (Purified density matrix, [11, Lemma 25]).: _Suppose \(\rho\) is an \(s\)-qubit density operator and \(U\) is an \((a+s)\)-qubit unitary operator such that \(U|0\rangle^{\otimes a}|0\rangle^{\otimes s}=|\rho\rangle\) and \(\rho=\mathrm{Tr}_{a}(|\rho\rangle\langle\rho|)\). Then, we can construct an \(O(a+s)\)-qubit quantum circuit \(\widetilde{U}\) that is an \((O(a+s),0)\)-block-encoding of \(\rho\), using \(O(1)\) queries to \(U\) and \(O(a+s)\) one- and two-qubit quantum gates._
**Lemma 4.9** (Hadamard test for block-encodings, adapted from [10, Lemma 9]).: _Suppose \(U\) is an \((a+s)\)-qubit unitary operator that is a block-encoding of \(s(n)\)-qubit operator \(A\). We can implement an \(O(a+s)\)-qubit quantum circuit that, on input \(s(n)\)-qubit quantum state \(\rho\), outputs \(0\) with probability \(\frac{1+\mathrm{Re}(\mathrm{Tr}(A\rho))}{2}\)._
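As a numerical sanity check (not part of the original argument), the following NumPy sketch simulates the Hadamard test of Lemma 4.9 on a density matrix and compares the outcome probability with \(\frac{1}{2}(1+\mathrm{Re}(\mathrm{Tr}(A\rho)))\), which is the \(P_{d}(x)=x\) special case of the acceptance probability in Lemma 4.7. The block-encoding used below is the standard one-ancilla construction \([[A,\sqrt{I-A^{2}}],[\sqrt{I-A^{2}},-A]]\) for a Hermitian contraction; it is an assumption of the sketch chosen purely for convenience of simulation, not the block-encoding produced elsewhere in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_density_matrix(dim):
    g = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    rho = g @ g.conj().T
    return rho / np.trace(rho)

def random_hermitian_contraction(dim):
    g = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    a = (g + g.conj().T) / 2
    return a / (np.linalg.norm(a, 2) + 1e-9)    # spectral norm at most 1

def block_encoding(a):
    # One-ancilla block-encoding of a Hermitian contraction A (an assumption of
    # this sketch): U = [[A, sqrt(I - A^2)], [sqrt(I - A^2), -A]] is unitary.
    dim = a.shape[0]
    w, v = np.linalg.eigh(np.eye(dim) - a @ a)
    comp = v @ np.diag(np.sqrt(np.clip(w, 0, None))) @ v.conj().T
    return np.block([[a, comp], [comp, -a]])

def hadamard_test_prob0(u, rho):
    # Control qubit in |+>, block-encoding ancilla in |0>, system in rho.
    dim, big = rho.shape[0], u.shape[0]
    anc_sys = np.zeros((big, big), dtype=complex)
    anc_sys[:dim, :dim] = rho                   # |0><0|_anc (x) rho
    state = np.kron(np.full((2, 2), 0.5), anc_sys)      # |+><+| (x) ...
    ctrl_u = np.block([[np.eye(big), np.zeros((big, big))],
                       [np.zeros((big, big)), u]])
    h = np.kron(np.array([[1, 1], [1, -1]]) / np.sqrt(2), np.eye(big))
    state = h @ ctrl_u @ state @ ctrl_u.conj().T @ h.conj().T
    return np.real(np.trace(state[:big, :big])) # probability of control outcome 0

dim = 4
rho, a = random_density_matrix(dim), random_hermitian_contraction(dim)
print(hadamard_test_prob0(block_encoding(a), rho),
      0.5 * (1 + np.real(np.trace(a @ rho))))   # the two values coincide
```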
Finally, we proceed with the actual proof of Lemma 4.7.
Proof of Lemma 4.7.: By applying Chebyshev interpolation on \(U_{A}\) (Theorem 3.11 with the choice of \(\epsilon_{1}=0\) and \(\epsilon_{2}=\epsilon/36\)), we can implement an \(O(s(n))\)-qubit quantum circuit \(U_{P_{d}(A)}\) that is a \((1,a,0)\)-block-encoding of \(A^{\prime}_{P_{d}}\) using \(O(d^{2}\|\mathbf{c}\|_{1})\) queries to \(U_{A}\), where \(a=s-r+\lceil\log d\rceil+3\) and \(A^{\prime}_{P_{d}}\) is specified in Theorem 3.11 satisfying \(\|P_{d}(A)-A^{\prime}_{P_{d}}\|\leq\epsilon\|\mathbf{c}\|_{1}\). Additionally, we can compute the quantum circuit description of \(U_{P_{d}(A)}\) in deterministic space \(O(s+\log(1/\epsilon))\) given the coefficient vector \(\mathbf{c}\) of \(P_{d}\). As the quantum tester \(\mathcal{T}(Q,U_{A},P_{d},\epsilon)\) is mainly based on the Hadamard test, by employing Lemma 4.9, we have that \(\mathcal{T}\) outputs \(0\) with probability
\[\mathrm{Pr}[x=0]=\frac{1}{2}\big{(}1+\mathrm{Re}(\mathrm{Tr}(A^{\prime}_{P_{d }}\rho))\big{)}=\frac{1}{2}(1+\mathrm{Re}(\mathrm{Tr}(P_{d}(A)\rho)))\pm\frac{ 1}{2}\epsilon\|\mathbf{c}\|_{1}.\]
It is left to construct the estimation procedure \(\hat{\mathcal{T}}\). As detailed in Lemma 2.11, we can obtain an estimation \(\widetilde{x}\) by sequentially repeating the quantum tester \(\mathcal{T}(Q,U_{A},P_{d},\epsilon)\) for \(O(1/\epsilon_{H}^{2})\) times. This repetition ensures that \(|\widetilde{x}-\mathrm{Re}(\mathrm{Tr}(A^{\prime}_{P_{d}}\rho))|\leq\epsilon_{H}\) holds with probability at least \(\Omega(1)\), and yields a further implication on \(P_{d}(A)\):
\[\mathrm{Pr}[|\widetilde{x}-\mathrm{Re}(\mathrm{Tr}(P_{d}(A)\rho))|\leq\epsilon \|\mathbf{c}\|_{1}+\epsilon_{H}]\geq\Omega(1).\]
We thus conclude the construction of the estimation procedure \(\hat{\mathcal{T}}(Q,U_{A},P_{d},\epsilon,\epsilon_{H},\delta)\) by utilizing \(O(\log(1/\delta)/\epsilon_{H}^{2})\) sequential repetitions of \(\mathcal{T}(Q,U_{A},P_{d},\epsilon)\). Similarly following Lemma 2.11, \(\hat{\mathcal{T}}(Q,U_{A},P_{d},\epsilon,\epsilon_{H},\delta)\) outputs an estimation \(\widetilde{x}\) that satisfies the following condition:
\[\mathrm{Pr}[|\widetilde{x}-\mathrm{Re}(\mathrm{Tr}(P_{d}(A)\rho))|\leq\epsilon \|\mathbf{c}\|_{1}+\epsilon_{H}]\geq 1-\delta.\]
In addition, a direct calculation indicates that we can implement \(\hat{\mathcal{T}}(Q,U_{A},P_{d},\epsilon,\epsilon_{H},\delta)\) in quantum space \(O(s+\log(1/\epsilon)+\log(1/\epsilon_{H})+\log\log(1/\delta))\) as desired.
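To make the quantity estimated by \(\hat{\mathcal{T}}\) concrete, the following NumPy sketch (an illustration, not part of the proof) evaluates a polynomial given in the Chebyshev form \(P_{d}=\frac{c_{0}}{2}+\sum_{k=1}^{d}c_{k}T_{k}\) on a Hermitian contraction \(A\) via the three-term recurrence, checks it against direct evaluation on the eigenvalues, and prints the ideal acceptance probability \(\frac{1}{2}(1+\mathrm{Re}(\mathrm{Tr}(P_{d}(A)\rho)))\) from Lemma 4.7. The random coefficient vector \(\mathbf{c}\) here is a placeholder; it is not the coefficient vector of the sign or logarithmic approximations used later.

```python
import numpy as np

rng = np.random.default_rng(1)

def cheb_poly_of_matrix(coeffs, a):
    # P_d(A) = c_0/2 * I + sum_{k>=1} c_k T_k(A) via T_{k+1} = 2 A T_k - T_{k-1}.
    dim = a.shape[0]
    t_prev, t_curr = np.eye(dim), a.copy()
    result = coeffs[0] / 2 * np.eye(dim)
    for c in coeffs[1:]:
        result = result + c * t_curr
        t_prev, t_curr = t_curr, 2 * a @ t_curr - t_prev
    return result

def cheb_poly_scalar(coeffs, x):
    ks = np.arange(1, len(coeffs))
    return coeffs[0] / 2 + np.sum(coeffs[1:] * np.cos(ks * np.arccos(x)))

dim, d = 4, 8
g = rng.normal(size=(dim, dim))
a = (g + g.T) / 2
a /= np.linalg.norm(a, 2) + 1e-9                # Hermitian contraction, ||A|| <= 1
g = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
rho = g @ g.conj().T
rho /= np.trace(rho)                            # random density matrix

coeffs = rng.normal(size=d + 1)
coeffs /= np.sum(np.abs(coeffs))                # ||c||_1 = 1, so |P_d| <= 1 on [-1, 1]

pd_a = cheb_poly_of_matrix(coeffs, a)
w, v = np.linalg.eigh(a)
pd_a_eig = v @ np.diag([cheb_poly_scalar(coeffs, x) for x in w]) @ v.conj().T
print(np.allclose(pd_a, pd_a_eig))              # the same operator either way

x = np.real(np.trace(pd_a @ rho))               # the quantity estimated by T-hat
print("ideal Pr[tester outputs 0] =", 0.5 * (1 + x))
```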
### GapQSD\({}_{\log}\) is in \(\mathsf{BQL}\)
In this subsection, we demonstrate Theorem 4.10 by constructing a quantum algorithm that incorporates testers \(\mathcal{T}(Q_{i},U_{\frac{\rho_{0}-\rho_{1}}{2}},P_{d}^{\mathrm{sgn}},\epsilon)\) for \(i\in\{0,1\}\), where the construction of testers utilizes the space-efficient QSVT associated with the sign function.
**Theorem 4.10**.: _For any functions \(\alpha(n)\) and \(\beta(n)\) that can be computed in deterministic logspace and satisfy \(\alpha(n)-\beta(n)\geq 1/\operatorname{poly}(n)\), we have that \(\textsc{GapQSD}_{\log}[\alpha(n),\beta(n)]\) is in \(\mathsf{BQL}\)._
Proof.: Inspired by time-efficient algorithms for the low-rank variant of GapQSD [23], we devise a space-efficient algorithm for \(\textsc{GapQSD}_{\log}\), presented formally in Algorithm 1.
```
Input : Quantum circuits \(Q_{i}\) that prepares the purification of \(\rho_{i}\) for \(i\in\{0,1\}\). Output : An additive-error estimation of \(\operatorname{td}(\rho_{0},\rho_{1})\). Params:\(\varepsilon\!:=\!\frac{\alpha-\beta}{4}\), \(\delta\!:=\!\frac{\varepsilon}{2^{r+3}}\), \(\epsilon\!:=\!\frac{\varepsilon}{2(C_{\mathrm{sgn}}+2C_{\mathrm{sgn}})C_{ \mathrm{sgn}}2^{r+3}}\cdot\frac{1}{2\log\left(2(C_{\mathrm{sgn}}+2C_{\mathrm{ sgn}})C_{\mathrm{sgn}}2^{r+3}/\varepsilon\right)}\), \(d\!:=\!\tilde{C}_{\mathrm{sgn}}\delta^{-1}\log\epsilon^{-1}\), \(\varepsilon_{H}\!:=\!\frac{\varepsilon}{4}\).
1. Construct block-encodings of \(\rho_{0}\) and \(\rho_{1}\), denoted by \(U_{\rho_{0}}\) and \(U_{\rho_{1}}\), respectively, using \(O(1)\) queries to \(Q_{0}\) and \(Q_{1}\) and \(O(s(n))\) ancillary qubits by Lemma 4.8;
2. Construct a block-encoding of \(\frac{\rho_{0}-\rho_{1}}{2}\), denoted by \(U_{\frac{\rho_{0}-\rho_{1}}{2}}\), using \(O(1)\) queries to \(U_{\rho_{0}}\) and \(U_{\rho_{1}}\) and \(O(s(n))\) ancillary qubits by Lemma 3.14;
3. _Let \(P_{d}^{\mathrm{sgn}}\) be the degree-\(d\) polynomial specified in Corollary 3.7 with parameters \(\delta\) and \(\epsilon\) such that its Chebyshev coefficients are computable in deterministic space \(O(\log(d/\epsilon))\)_;
4. Set \(x_{0}:=\tilde{\mathcal{T}}(Q_{0},U_{\frac{\rho_{0}-\rho_{1}}{2}},P_{d}^{ \mathrm{sgn}},\epsilon,\epsilon_{H},1/10)\), \(x_{1}:=\tilde{\mathcal{T}}(Q_{1},U_{\frac{\rho_{0}-\rho_{1}}{2}},P_{d}^{ \mathrm{sgn}},\epsilon,\epsilon_{H},1/10)\);
5. Compute \(x=(x_{0}-x_{1})/2\). Return "yes" if \(x>(\alpha+\beta)/2\), and "no" otherwise.
```
**Algorithm 1**Space-efficient algorithm for \(\textsc{GapQSD}_{\log}\).
Let us demonstrate the correctness of Algorithm 1 and analyze the computational complexity. We focus on the setting with \(s(n)=\Theta(\log n)\). We set \(\varepsilon:=(\alpha-\beta)/4\geq 2^{-O(s)}\) and assume that \(Q_{0}\) and \(Q_{1}\) are \(s(n)\)-qubit quantum circuits that prepare the purifications of \(\rho_{0}\) and \(\rho_{1}\), respectively. According to Lemma 4.8, we can construct \(O(s)\)-qubit quantum circuits \(U_{\rho_{0}}\) and \(U_{\rho_{1}}\) that encode \(\rho_{0}\) and \(\rho_{1}\) as \((1,O(s),0)\)-block-encodings, using \(O(1)\) queries to \(Q_{0}\) and \(Q_{1}\) as well as \(O(1)\) one- and two-qubit quantum gates. Next, we apply Lemma 3.14 to construct a \((1,O(s),0)\)-block-encoding \(U_{\frac{\rho_{0}-\rho_{1}}{2}}\) of \(\frac{\rho_{0}-\rho_{1}}{2}\), using \(O(1)\) queries to \(Q_{\rho_{0}}\) and \(Q_{\rho_{1}}\), as well as \(O(1)\) one- and two-qubit quantum gates.
Let \(\delta:=\frac{\varepsilon}{2^{r+3}}\), \(\epsilon:=\frac{\varepsilon}{2(C_{\mathrm{sgn}}+2C_{\mathrm{sgn}})C_{\mathrm{ sgn}}2^{r+3}}\cdot\frac{1}{2\log\left(2(\tilde{C}_{\mathrm{sgn}}+2C_{ \mathrm{sgn}})\tilde{C}_{\mathrm{sgn}}2^{r+3}/\varepsilon\right)}\) and \(d:=\tilde{C}_{\mathrm{sgn}}\delta^{-1}\log\epsilon^{-1}=2^{O(s)}\) where \(\tilde{C}_{\mathrm{sgn}}\) comes from Corollary 3.7. Let \(P_{d}^{\mathrm{sgn}}\in\mathbb{R}[x]\) be the polynomial specified in Corollary 3.7. Let \(\epsilon_{H}=\varepsilon/4\). By employing Corollary 3.16 and the corresponding estimation procedure \(\tilde{\mathcal{T}}(Q_{i},U_{\frac{\rho_{0}-\rho_{1}}{2}},P_{d}^{\mathrm{sgn }},\epsilon,\epsilon_{H},1/10)\) from Lemma 4.7, we obtain the values \(x_{i}\) for \(i\in\{0,1\}\), ensuring the following inequalities:
\[\Pr\biggl{[}\biggl{|}x_{i}-\operatorname{Tr}\left(P_{d}^{\mathrm{sgn}}\left( \frac{\rho_{0}-\rho_{1}}{2}\right)\rho_{i}\right)\biggr{|}\leq\hat{C}_{\mathrm{ sgn}}\epsilon\log d+\epsilon_{H}\biggr{]}\geq\frac{9}{10}\text{ for }i\in\{0,1\}. \tag{4.1}\]
Here, the implementation uses \(O(d^{2}\log d)\) queries to \(U_{\frac{\rho_{0}-\rho_{1}}{2}}\) and \(O(d^{2}\log d)\) multi-controlled single-qubit gates. Moreover, the circuit descriptions of \(\tilde{\mathcal{T}}(Q_{i},U_{\frac{\rho_{0}-\rho_{1}}{2}},P_{d}^{\mathrm{sgn}}, \epsilon,\epsilon_{H},1/10)\) can be computed in deterministic time \(\tilde{O}(d^{9/2}/\epsilon)\) and space \(O(s)\).
Now let \(x:=(x_{0}-x_{1})/2\). We will finish the correctness analysis of Algorithm 1 by showing \(\Pr[|x-\operatorname{td}(\rho_{0},\rho_{1})|\leq\varepsilon]>0.8\) through Equation (4.1). By considering the approximation error of \(P_{d}^{\mathrm{sgn}}\) in Corollary 3.7 and the QSVT implementation error in Corollary 3.16, we derive the following inequality in Proposition 4.10.1, and the proof is deferred to Appendix B.1:
**Proposition 4.10.1**.: \(\Pr\Bigl{[}|x-\operatorname{td}(\rho_{0},\rho_{1})|\leq\hat{C}_{\mathrm{sgn}} \epsilon\log d+\epsilon_{H}+2C_{\mathrm{sgn}}\epsilon\log d+2^{r+1}\delta \Bigr{]}>0.8\)
Consequently, it is left to show that \(\hat{C}_{\text{sgn}}\epsilon\log d+\epsilon_{H}+2C_{\text{sgn}}\epsilon\log d+2^{r+1}\delta\leq\varepsilon\) for the aforementioned choice of \(\delta\), \(\epsilon\), and \(\epsilon_{H}\). Note that \(\epsilon_{H}=\varepsilon/4\) and \(2^{r+1}\delta=\varepsilon/4\); we complete the correctness analysis by choosing \(\epsilon:=\delta^{\prime}/2\log(\delta^{\prime-1})\) with \(\delta^{\prime}:=\delta/\big{(}2(\hat{C}_{\text{sgn}}+2C_{\text{sgn}})\hat{C}_{\text{sgn}}\big{)}\leq 1/2\) and subsequently deriving the following inequality:
\[(\hat{C}_{\text{sgn}}+2C_{\text{sgn}})\epsilon\log d\leq(\hat{C}_{\text{sgn}} +2C_{\text{sgn}})\epsilon\log(\delta^{\prime-1}\log(\epsilon^{-1}))\leq(\hat{ C}_{\text{sgn}}+2C_{\text{sgn}})\delta^{\prime}\leq\varepsilon/2.\]
Here, the second inequality results from the fact that \(\gamma\log(\varepsilon^{-1}\log\gamma^{-1})\leq\varepsilon\) for \(0<\varepsilon\leq 1/2\), with \(\gamma:=\varepsilon/2\log(\varepsilon^{-1})\), and the last inequality owes to the chosen \(\delta^{\prime}\), along with the facts that \(\delta:=\varepsilon/2^{r+3}\leq\varepsilon\) and \(\hat{C}_{\text{sgn}}\geq 1\).
Finally, we analyze the computational resources required for Algorithm 1. According to Lemma 4.7, we can compute \(x\) in \(\mathsf{BQL}\), with the resulting algorithm requiring \(O(d^{2}\log d/\epsilon_{H}^{2})=\tilde{O}(2^{2r}/\varepsilon^{4})\) queries to \(Q_{0}\) and \(Q_{1}\). In addition, its circuit description can be computed in deterministic time \(\tilde{O}(d^{9/2}/\varepsilon)=\tilde{O}(2^{4.5r}/\varepsilon^{5.5})\).
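The correctness of Algorithm 1 ultimately rests on the identity \(\frac{1}{2}\big(\mathrm{Tr}(\mathrm{sgn}(\tfrac{\rho_{0}-\rho_{1}}{2})\rho_{0})-\mathrm{Tr}(\mathrm{sgn}(\tfrac{\rho_{0}-\rho_{1}}{2})\rho_{1})\big)=\mathrm{td}(\rho_{0},\rho_{1})\). The following NumPy sketch (illustrative only) checks this identity with an exact matrix sign function in place of the polynomial approximation \(P_{d}^{\mathrm{sgn}}\); the approximation and implementation errors analyzed above are therefore absent here by assumption.

```python
import numpy as np

rng = np.random.default_rng(2)

def random_density_matrix(dim):
    g = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    rho = g @ g.conj().T
    return rho / np.trace(rho)

def matrix_sign(h):
    # sgn applied to the eigenvalues of a Hermitian matrix.
    w, v = np.linalg.eigh(h)
    return v @ np.diag(np.sign(w)) @ v.conj().T

dim = 8
rho0, rho1 = random_density_matrix(dim), random_density_matrix(dim)

sgn = matrix_sign((rho0 - rho1) / 2)            # error-free stand-in for P_d^sgn
x0 = np.real(np.trace(sgn @ rho0))              # what T-hat(Q_0, ...) estimates
x1 = np.real(np.trace(sgn @ rho1))              # what T-hat(Q_1, ...) estimates

td = 0.5 * np.sum(np.abs(np.linalg.eigvalsh(rho0 - rho1)))
print((x0 - x1) / 2, td)                        # the two values coincide
```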
### GapQED\({}_{\log}\) and GapQJS\({}_{\log}\) are in \(\mathsf{BQL}\)
In this subsection, we will demonstrate Theorem 4.11 by devising a quantum algorithm that encompasses testers \(\mathcal{T}(Q_{i},U_{\rho_{i}},P_{d}^{\ln},\epsilon)\) for \(i\in\{0,1\}\), where the construction of testers employs the space-efficient QSVT associated with the normalized logarithmic function. Consequently, we can deduce that GapQJS\({}_{\log}\) is in \(\mathsf{BQL}\) via a reduction from GapQJS\({}_{\log}\) to GapQED\({}_{\log}\).
**Theorem 4.11**.: _For any deterministic logspace computable function \(g(n)\) that satisfies \(g(n)\geq 1/\operatorname{poly}(n)\), we have that GapQED\({}_{\log}[g(n)]\) is in \(\mathsf{BQL}\)._
Proof.: We begin with a formal algorithm in Algorithm 2.
```
Input : Quantum circuits \(Q_{i}\) that prepares the purification of \(\rho_{i}\) for \(i\in\{0,1\}\). Output : An additive-error estimation of \(\operatorname{S}(\rho_{0})-\operatorname{S}(\rho_{1})\). Params :\(\varepsilon:=\frac{q}{4}\), \(\beta:=\min\{\frac{\varepsilon}{2^{\tau+5}\ln(2^{\tau+4}/\varepsilon)},\frac{1 }{4}\}\), \(d:=\tilde{C}_{\ln}\cdot\frac{1}{\beta}\log\frac{1}{\epsilon}\), \(\epsilon:=\frac{\beta\varepsilon}{4C_{\ln}(C_{\ln}+C_{\ln})\ln(1/\beta)}\cdot \frac{1}{4\log\big{(}4C_{\ln}(C_{\ln}+C_{\ln})\ln(1/\beta)/(\beta\varepsilon) \big{)}}\), \(\epsilon_{H}:=\frac{\varepsilon}{8\ln(1/\beta)}\).
1. Construct block-encodings of \(\rho_{0}\) and \(\rho_{1}\), denoted by \(U_{\rho_{0}}\) and \(U_{\rho_{1}}\), respectively, using \(O(1)\) queries to \(Q_{0}\) and \(Q_{1}\) and \(O(s(n))\) ancillary qubits by Lemma 4.8;
2. Let \(P_{d}^{\ln}\) be the degree-\(d\) polynomial specified in Corollary 3.10 with parameters \(\beta\) and \(\epsilon\) such that its Chebyshev coefficients are computable in bounded-error randomized space \(O(\log(d/\epsilon))\);
3. Set \(x_{0}:=\hat{\mathcal{T}}(Q_{0},U_{\rho_{0}},P_{d}^{\ln},\epsilon,\epsilon_{H},1/10)\), \(x_{1}:=\hat{\mathcal{T}}(Q_{1},U_{\rho_{1}},P_{d}^{\ln},\epsilon,\epsilon_{H},1/10)\);
4. Compute \(x=(x_{0}-x_{1})\ln(2/\beta)\). Return "yes" if \(x>0\), and "no" otherwise.
```
**Algorithm 2**Space-efficient algorithm for GapQED\({}_{\log}\).
Let us now demonstrate the correctness and computational complexity of Algorithm 2. We concentrate on the scenario with \(s(n)=\Theta(\log n)\) and \(\varepsilon=g/4\geq 2^{-O(s)}\). Our strategy is to estimate the entropy of each of \(\rho_{0}\) and \(\rho_{1}\), respectively. We assume that \(Q_{0}\) and \(Q_{1}\) are \(s\)-qubit quantum circuits that prepare the purifications of \(\rho_{0}\) and \(\rho_{1}\), respectively. By Lemma 4.8, we can construct \((1,O(s),0)\)-block-encodings \(U_{\rho_{0}}\) and \(U_{\rho_{1}}\) of \(\rho_{0}\) and \(\rho_{1}\), respectively, using \(O(1)\) queries to \(Q_{0}\) and \(Q_{1}\) as well as \(O(1)\) one- and two-qubit quantum gates.
Let \(\beta=\min\{\frac{\varepsilon}{2^{\tau+5}\ln(2^{\tau+4}/\varepsilon)},\frac{1}{4}\}\), \(\epsilon:=\frac{\beta\varepsilon}{4C_{\ln}(C_{\ln}+C_{\ln})\ln(1/\beta)}\cdot \frac{1}{4\log\big{(}4\tilde{C}_{\ln}(\hat{C}_{\ln}+C_{\ln})\ln(1/\beta)/( \beta\varepsilon)\big{)}}\) and \(d=\tilde{C}_{\ln}\beta^{-1}\log\epsilon^{-1}=2^{O(s(n))}\) where \(\tilde{C}_{\ln}\) comes from Corollary 3.10. Let \(P_{d}^{\ln}\in\mathbb{R}[x]\) be the polynomial specified in Corollary 3.10. Let \(\epsilon_{H}=\frac{\varepsilon}{8\ln(1/\beta)}\). By utilizing Corollary 3.17 and the
corresponding estimation procedure \(\tilde{\mathcal{T}}(Q_{i},U_{\rho_{i}},P_{d}^{\mathrm{ln}},\epsilon,\epsilon_{H}, 1/10)\) from Lemma 4.7, we obtain the values \(x_{i}\) for \(i\in\{0,1\}\), ensuring the following inequalities:
\[\Pr\Bigl{[}\Bigl{|}x_{i}-\mathrm{Tr}\left(P_{d}^{\mathrm{ln}}(\rho_{i})\,\rho_{ i}\right)\Bigr{|}\leq\hat{C}_{\mathrm{ln}}\epsilon d+\epsilon_{H}\Bigr{]}\geq\frac{9}{1 0}\text{ for }i\in\{0,1\}. \tag{4.2}\]
Here, the implementation uses \(O(d^{3})\) queries to \(U_{\rho_{0}}\) and \(O(d^{3})\) multi-controlled single-qubit gates. Moreover, the circuit descriptions of \(\tilde{\mathcal{T}}(Q_{i},U_{\rho_{i}},P_{d}^{\mathrm{ln}},\epsilon,\epsilon_{ H},1/10)\) can be computed in bounded-error time \(\tilde{O}(d^{9}/\epsilon^{4})\) and space \(O(s)\).
We will finish the correctness analysis of Algorithm 2 by demonstrating \(\Pr[|x_{i}\ln(2/\beta)-\mathrm{S}(\rho_{i})|\leq\varepsilon]\geq 0.9\) through Equation (4.2). By considering the approximation error of \(P_{d}^{\ln}\) in Corollary 3.10 and the QSVT implementation error in Corollary 3.17, we derive the following inequality in Proposition 4.11.1, and the proof is deferred to Appendix B.1:
**Proposition 4.11.1**.: _The following inequality holds for \(i\in\{0,1\}\):_
\[\Pr\Bigl{[}\bigl{|}x_{i}\ln\left(\tfrac{2}{\beta}\right)-\mathrm{S}(\rho_{i}) \bigr{|}\leq 2\ln\left(\tfrac{1}{\beta}\right)\left(\hat{C}_{\mathrm{ln}} \epsilon d+\epsilon_{H}+C_{\mathrm{ln}}\epsilon\log d+2^{r+1}\beta\right) \Bigr{]}\geq\tfrac{9}{10}.\]
Consequently, it is left to show that \(2\ln\left(\tfrac{1}{\beta}\right)\left(\hat{C}_{\mathrm{ln}}\epsilon d+\epsilon_{H}+C_{\mathrm{ln}}\epsilon\log d+2^{r+1}\beta\right)\leq\varepsilon\) for the aforementioned choice of \(\beta\), \(\epsilon\), and \(\epsilon_{H}\). Note that \(2\ln(1/\beta)\epsilon_{H}=\varepsilon/4\) and \(2\ln(1/\beta)\cdot 2^{r+1}\beta\leq\varepsilon/4\); we complete the correctness analysis by choosing \(\epsilon:=\delta/\big{(}4\log(1/\delta)\big{)}\) with \(\delta:=\frac{\beta\varepsilon}{4\hat{C}_{\mathrm{ln}}(\hat{C}_{\mathrm{ln}}+C_{\mathrm{ln}})\ln(1/\beta)}\leq 1/2\), and subsequently deriving the following inequality:
\[2\ln(\beta^{-1})(\hat{C}_{\mathrm{ln}}\epsilon d+C_{\mathrm{ln} }\epsilon\log(d)) \leq 2\ln(\beta^{-1})(\hat{C}_{\mathrm{ln}}+C_{\mathrm{ln}})\epsilon d\] \[=2\ln(\beta^{-1})(\hat{C}_{\mathrm{ln}}+C_{\mathrm{ln}})\hat{C} _{\mathrm{ln}}\beta^{-1}\epsilon\log(\epsilon^{-1})\] \[\leq 2\ln(\beta^{-1})(\hat{C}_{\mathrm{ln}}+C_{\mathrm{ln}})\hat{C} _{\mathrm{ln}}\beta^{-1}\delta\] \[=\varepsilon/2.\]
Here, the first line is because of \(\log(d)\leq d\), the third line owes to the fact that \(\epsilon\log(\epsilon^{-1})\leq\delta\), and the last line is due to choice of \(\delta\).
Finally, we analyze the computational resources required for Algorithm 2. As per Lemma 4.7, we can compute \(x\) in \(\mathsf{BQL}\), with the resulting algorithm requiring \(O(d^{3}/\epsilon_{H}^{2})=\tilde{O}(2^{3r}/\varepsilon^{4})\) queries to \(Q_{0}\) and \(Q_{1}\). Furthermore, its circuit description can be computed in bounded-error randomized time \(\tilde{O}(d^{11}/\varepsilon^{4})=\tilde{O}(2^{11r}/\varepsilon^{15})\).
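For intuition, the following NumPy sketch (not part of the proof) mimics Algorithm 2 with an idealized normalized logarithm in place of \(P_{d}^{\ln}\): it assumes the normalization \(P^{\ln}(x)\approx\ln(1/x)/\ln(2/\beta)\) above the cut-off \(\beta\), which is consistent with the \(\ln(2/\beta)\) rescaling in Algorithm 2 but is an assumption of the sketch, and then compares \((x_{0}-x_{1})\ln(2/\beta)\) with \(\mathrm{S}(\rho_{0})-\mathrm{S}(\rho_{1})\).

```python
import numpy as np

rng = np.random.default_rng(3)

def random_density_matrix(dim):
    g = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    rho = g @ g.conj().T
    return rho / np.trace(rho)

def von_neumann_entropy(rho):
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return -np.sum(w * np.log(w))               # natural-log entropy S

beta = 1e-4                                     # eigenvalue cut-off, as in Algorithm 2
dim = 8
rho0, rho1 = random_density_matrix(dim), random_density_matrix(dim)

def x_value(rho):
    # Idealized Tr(P_d^ln(rho) rho), assuming P^ln(x) = ln(1/x) / ln(2/beta) above
    # the cut-off beta (an assumed normalization, consistent with Algorithm 2).
    w, v = np.linalg.eigh(rho)
    f = np.where(w > beta, np.log(1 / np.maximum(w, beta)) / np.log(2 / beta), 0.0)
    return np.real(np.trace((v @ np.diag(f) @ v.conj().T) @ rho))

x = (x_value(rho0) - x_value(rho1)) * np.log(2 / beta)
print(x, von_neumann_entropy(rho0) - von_neumann_entropy(rho1))
# The two values agree up to the contribution of eigenvalues below beta.
```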
**\(\textsc{GapQJS}_{\log}\) is in \(\mathsf{BQL}\).** It is noteworthy that we can achieve \(\textsc{GapQJS}_{\log}\in\mathsf{BQL}\) by employing the estimation procedure \(\hat{\mathcal{T}}\) in Algorithm 2 for _three corresponding states_, given that the quantum Jensen-Shannon divergence \(\mathrm{QJS}(\rho_{0},\rho_{1})\) is a linear combination of \(\mathrm{S}(\rho_{0})\), \(\mathrm{S}(\rho_{1})\), and \(\mathrm{S}\bigl{(}\tfrac{\rho_{0}+\rho_{1}}{2}\bigr{)}\). Nevertheless, the logspace Karp reduction from \(\textsc{GapQJS}_{\log}\) to \(\textsc{GapQED}_{\log}\) (Corollary 4.12) allows us to utilize \(\hat{\mathcal{T}}\) for only _two_ states. Furthermore, our construction is adapted from the time-bounded scenario [11, Lemma 4.3].
**Corollary 4.12**.: _For any functions \(\alpha(n)\) and \(\beta(n)\) that can be computed in deterministic logspace and satisfy \(\alpha(n)-\beta(n)\geq 1/\operatorname{poly}(n)\), we have that \(\textsc{Gap\_JS}_{\mathrm{log}}[\alpha(n),\beta(n)]\) is in \(\mathsf{BQL}\)._
Proof.: Let \(Q_{0}\) and \(Q_{1}\) be the given \(s(n)\)-qubit quantum circuits where \(s(n)=\Theta(\log n)\). Consider a classical-quantum mixed state on a classical register \(\mathsf{B}\) and a quantum register \(\mathsf{Y}\), denoted by \(\rho_{1}^{\prime}:=\tfrac{1}{2}|0\rangle\langle 0|\otimes\rho_{0}+\tfrac{1}{2}|1 \rangle\langle 1|\otimes\rho_{1}\), where \(\rho_{0}\) and \(\rho_{1}\) are the state obtained by running \(Q_{0}\) and \(Q_{1}\), respectively, and tracing out the non-output qubits. We utilize our reduction to output classical-quantum mixed states \(\rho_{0}^{\prime}\) and \(\rho_{1}^{\prime}\), which are the output of \((s(n)+2)\)-qubit quantum circuits
\(Q^{\prime}_{0}\) and \(Q^{\prime}_{1}\) (see Footnote 34), respectively, where \(\rho^{\prime}_{0}:=(p_{0}|0\rangle\langle 0|+p_{1}|1\rangle\langle 1|)\otimes(\tfrac{1}{2}\rho_{0}+\tfrac{1}{2}\rho_{1})\) and \(\mathsf{B}^{\prime}:=(p_{0},p_{1})\) is an independent random bit with entropy \(\mathrm{H}(\mathsf{B}^{\prime})=1-\tfrac{1}{2}[\alpha(n)+\beta(n)]\). Letting \(\mathrm{S}_{2}(\rho):=\mathrm{S}(\rho)/\ln 2\) for any quantum state \(\rho\), we then derive that:
Footnote 34: To construct \(Q^{\prime}_{1}\), we follow these steps: We start by applying a Hadamard gate on \(\mathsf{B}\) followed by a \(\textsc{CNOT}_{\mathsf{B}\to\mathsf{R}}\) gate, where \(\mathsf{B}\) and \(\mathsf{R}\) are single-qubit quantum registers initialized to \(|0\rangle\). Next, we apply the controlled-\(Q_{1}\) gate on the qubits from \(\mathsf{B}\) to \(\mathsf{S}\), where \(\mathsf{S}=(\mathsf{Y},\mathsf{Z})\) is an \(s(n)\)-qubit register initialized to \(|\bar{0}\rangle\). We then apply the \(X\) gate on \(\mathsf{B}\) followed by the controlled-\(Q_{0}\) gate on the qubits from \(\mathsf{B}\) to \(\mathsf{S}\), and we apply the \(X\) gate on \(\mathsf{B}\) again. Finally, we obtain \(\rho^{\prime}_{1}\) by tracing out \(\mathsf{R}\) and the qubits in \(\mathsf{Z}\). In addition, we can construct \(Q^{\prime}_{0}\) similarly.
\[\begin{split}\mathrm{S}_{2}(\rho^{\prime}_{0})-\mathrm{S}_{2}(\rho^{\prime}_{1})&=\mathrm{S}_{2}(\mathsf{B}^{\prime},\mathsf{Y})_{\rho^{\prime}_{0}}-\mathrm{S}_{2}(\mathsf{B},\mathsf{Y})_{\rho^{\prime}_{1}}\\ &=[\mathrm{H}(\mathsf{B}^{\prime})+\mathrm{S}_{2}(\mathsf{Y}|\mathsf{B}^{\prime})_{\rho^{\prime}_{0}}]-[\mathrm{H}(\mathsf{B})+\mathrm{S}_{2}(\mathsf{Y}|\mathsf{B})_{\rho^{\prime}_{1}}]\\ &=\mathrm{S}_{2}(\mathsf{Y})_{\rho^{\prime}_{0}}-\mathrm{S}_{2}(\mathsf{Y}|\mathsf{B})_{\rho^{\prime}_{1}}+\mathrm{H}(\mathsf{B}^{\prime})-\mathrm{H}(\mathsf{B})\\ &=\mathrm{S}_{2}(\mathsf{Y})_{\rho^{\prime}_{0}}-\mathrm{S}_{2}(\mathsf{Y}|\mathsf{B})_{\rho^{\prime}_{1}}-\tfrac{1}{2}[\alpha(n)+\beta(n)]\\ &=\mathrm{S}_{2}\left(\tfrac{1}{2}\rho_{0}+\tfrac{1}{2}\rho_{1}\right)-\tfrac{1}{2}[\mathrm{S}_{2}(\rho_{0})+\mathrm{S}_{2}(\rho_{1})]-\tfrac{1}{2}[\alpha(n)+\beta(n)]\\ &=\mathrm{QJS}_{2}(\rho_{0},\rho_{1})-\tfrac{1}{2}[\alpha(n)+\beta(n)].\end{split} \tag{4.3}\]
Here, the second line derives from the definition of quantum conditional entropy and acknowledges that both \(\mathsf{B}\) and \(\mathsf{B}^{\prime}\) are classical registers. The third line owes to the independence of \(\mathsf{B}^{\prime}\) as a random bit. Furthermore, the fifth line relies on the Joint entropy theorem (Lemma 2.3).
By plugging Equation (4.3) into the promise of GapQJS\({}_{\log}[\alpha(n),\beta(n)]\), we can define \(g(n):=\tfrac{\ln 2}{2}\big{(}\alpha(n-1)-\beta(n-1)\big{)}\) and conclude that:
* If \(\mathrm{QJS}_{2}(\rho_{0},\rho_{1})\geq\alpha(n)\), then \(\mathrm{S}(\rho^{\prime}_{0})-\mathrm{S}(\rho^{\prime}_{1})\geq\tfrac{\ln 2}{2} \big{(}\alpha(n)-\beta(n)\big{)}=g(n+1)\);
* If \(\mathrm{QJS}_{2}(\rho_{0},\rho_{1})\leq\beta(n)\), then \(\mathrm{S}(\rho^{\prime}_{0})-\mathrm{S}(\rho^{\prime}_{1})\leq-\tfrac{\ln 2 }{2}\big{(}\alpha(n)-\beta(n)\big{)}=-g(n+1)\).
As \(\rho^{\prime}_{1}\) and \(\rho^{\prime}_{0}\) are \(r^{\prime}(n)\)-qubit states where \(r^{\prime}(n):=r(n)+1\), the output length of the corresponding space-bounded quantum circuits \(Q^{\prime}_{0}\) and \(Q^{\prime}_{1}\) is \(r^{\prime}(n)\). Therefore, GapQJS\({}_{s(n)}[\alpha(n),\beta(n)]\) is logspace Karp reducible to GapQED\({}_{s+1}[g(n)]\) by mapping \((Q_{0},Q_{1})\) to \((Q^{\prime}_{0},Q^{\prime}_{1})\).
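The entropy bookkeeping in Equation (4.3) can be checked numerically. The sketch below (illustrative only) builds the block-diagonal states \(\rho^{\prime}_{0}\) and \(\rho^{\prime}_{1}\) for an arbitrary bias \(p_{0}\) of the classical register and verifies \(\mathrm{S}_{2}(\rho^{\prime}_{0})-\mathrm{S}_{2}(\rho^{\prime}_{1})=\mathrm{QJS}_{2}(\rho_{0},\rho_{1})-\big(1-\mathrm{H}(\mathsf{B}^{\prime})\big)\); in the reduction, \(p_{0}\) is chosen so that \(1-\mathrm{H}(\mathsf{B}^{\prime})=\frac{1}{2}[\alpha(n)+\beta(n)]\).

```python
import numpy as np

rng = np.random.default_rng(4)

def random_density_matrix(dim):
    g = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    rho = g @ g.conj().T
    return rho / np.trace(rho)

def entropy2(rho):
    # S_2: von Neumann entropy in bits.
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return -np.sum(w * np.log2(w))

dim = 4
rho0, rho1 = random_density_matrix(dim), random_density_matrix(dim)
avg = (rho0 + rho1) / 2
zeros = np.zeros((dim, dim))

p0 = 0.3                                        # any bias of the classical register B'
h_bprime = -p0 * np.log2(p0) - (1 - p0) * np.log2(1 - p0)

# rho'_1 = 1/2 |0><0| (x) rho0 + 1/2 |1><1| (x) rho1
rho1p = np.block([[rho0 / 2, zeros], [zeros, rho1 / 2]])
# rho'_0 = (p0 |0><0| + (1 - p0)|1><1|) (x) (rho0 + rho1)/2
rho0p = np.block([[p0 * avg, zeros], [zeros, (1 - p0) * avg]])

qjs2 = entropy2(avg) - 0.5 * (entropy2(rho0) + entropy2(rho1))
print(entropy2(rho0p) - entropy2(rho1p), qjs2 - (1 - h_bprime))   # equal values
```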
### \(\overline{\textsc{CertQSD}}_{\log}\) and \(\overline{\textsc{CertQHS}}_{\log}\) are in \(\mathsf{coRQ}_{\mathsf{U}}\mathsf{L}\)
To make the error one-sided, we adapt the Grover search when the number of solutions is one quarter [1], also known as the exact amplitude amplification [1].
**Lemma 4.13** (Exact amplitude amplification, adapted from [1, Equation 8]).: _Suppose \(U\) is a unitary of interest such that \(U|\bar{0}\rangle=\sin(\theta)|\psi_{0}\rangle+\cos(\theta)|\psi_{1}\rangle\), where \(|\psi_{0}\rangle\) and \(|\psi_{1}\rangle\) are normalized pure states and \(\langle\psi_{0}|\psi_{1}\rangle=0\). Let \(G=-U(I-2|\bar{0}\rangle\langle\bar{0}|)U^{\dagger}(I-2|\psi_{0}\rangle\langle \psi_{0}|)\) be the Grover operator. Then, for every integer \(j\geq 0\), we have \(G^{j}U|\bar{0}\rangle=\sin((2j+1)\theta)|\psi_{0}\rangle+\cos((2j+1)\theta)| \psi_{1}\rangle\). In particular, with a single application of \(G\), we obtain \(GU|\bar{0}\rangle=\sin(3\theta)|\psi_{0}\rangle+\cos(3\theta)|\psi_{1}\rangle\), signifying that \(GU|\bar{0}\rangle=|\psi_{0}\rangle\) when \(\sin(\theta)=1/2\)._
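A quick NumPy simulation (illustrative only, with a random unitary standing in for the unitary of interest \(U\)) of Lemma 4.13: the good component of \(G^{j}U|\bar{0}\rangle\) has magnitude \(|\sin((2j+1)\theta)|\), from which the exact amplification of \(\sin\theta=1/2\) to amplitude \(1\) after a single Grover iteration follows. The choice of the "good" subspace below is an assumption made for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(5)

dim = 8
g = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
u, _ = np.linalg.qr(g)                          # a random unitary standing in for U

zero = np.zeros(dim); zero[0] = 1.0             # |0...0>
psi = u @ zero                                  # U|0...0>

# Declare the first computational basis state to span the "good" subspace.
good = np.zeros((dim, dim)); good[0, 0] = 1.0   # projector |psi_0><psi_0|
theta = np.arcsin(np.linalg.norm(good @ psi))   # sin(theta) = good amplitude of U|0>

# Grover operator G = -U (I - 2|0><0|) U^dag (I - 2|psi_0><psi_0|).
refl_zero = np.eye(dim) - 2 * np.outer(zero, zero)
grover = -u @ refl_zero @ u.conj().T @ (np.eye(dim) - 2 * good)

for j in range(4):
    vec = np.linalg.matrix_power(grover, j) @ psi
    print(j, np.linalg.norm(good @ vec), abs(np.sin((2 * j + 1) * theta)))
```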
Notably, when dealing with the unitary of interest with the property specified in Lemma 4.13, which is typically a quantum algorithm with acceptance probability linearly dependent on the chosen distance-like measure (e.g., a tester \(\mathcal{T}\) from Lemma 4.7), Lemma 4.13 guarantees that the resulting algorithm \(\mathcal{A}\) accepts with probability exactly \(1\) for _yes_ instances (\(\rho_{0}=\rho_{1}\)). However, achieving \(\mathcal{A}\) to accept with probability polynomially deviating from \(1\) for _no_ instances requires additional efforts, leading to the \(\textsc{coRQ}_{\textsc{U}}\)L containment established through error reduction for \(\textsc{coRQ}_{\textsc{U}}\)L (Corollary 3.19). In a nutshell, demonstrating \(\textsc{coRQ}_{\textsc{U}}\)L containment entails satisfying the desired property, which is achieved differently for \(\overline{\textsc{CertQSD}}_{\log}\) and \(\overline{\textsc{CertQHS}}_{\log}\).
#### 4.4.1 \(\overline{\textsc{CertQSD}}_{\log}\) is in \(\mathsf{coRQ}_{\mathsf{U}}\mathsf{L}\)
Our algorithm in Theorem 4.14 relies on the quantum tester \(\mathcal{T}(Q_{i},U_{\frac{\rho_{0}-\rho_{1}}{2}},P_{d}^{\operatorname{sgn}},\epsilon)\) specified in Algorithm 1. Note that the exact implementation of the space-efficient QSVT associated with odd polynomials preserves the origin (Remark 3.12). Consequently, \(\mathcal{T}(Q_{i},U_{\frac{\rho_{0}-\rho_{1}}{2}},P_{d}^{\operatorname{sgn}},\epsilon)\) outputs \(0\) with probability exactly \(1/2\) when \(\rho_{0}=\rho_{1}\), enabling us to derive the \(\mathsf{coRQ}_{\mathsf{U}}\mathsf{L}\) containment through a relatively involved analysis for cases when \(\operatorname{td}(\rho_{0},\rho_{1})\geq\alpha\):
**Theorem 4.14**.: _For any deterministic logspace computable function \(\alpha(n)\geq 1/\operatorname{poly}(n)\), we have that \(\overline{\textsc{CertQSD}}_{\log}[\alpha(n)]\) is in \(\mathsf{coRQ}_{\mathsf{U}}\mathsf{L}\)._
Proof.: We first present a formal algorithm in Algorithm 3:
```
Input : Quantum circuits \(Q_{i}\) that prepares the purification of \(\rho_{i}\) for \(i\in\{0,1\}\). Output : Return "yes" if \(\rho_{0}=\rho_{1}\), and "no" otherwise. Params:\(\varepsilon:=\frac{\alpha}{2}\), \(\delta:=\frac{\varepsilon}{2^{r+3}}\), \(\epsilon:=\frac{\varepsilon}{2(\tilde{C}_{\operatorname{sgn}}+2\tilde{C}_{ \operatorname{sgn}})\tilde{C}_{\operatorname{sgn}}2^{r+3}}\cdot\frac{1}{2\log \big{(}2(\tilde{C}_{\operatorname{sgn}}+2\tilde{C}_{\operatorname{sgn}}) \tilde{C}_{\operatorname{sgn}}2^{r+3}/\varepsilon\big{)}}\), \(d:=\tilde{C}_{\operatorname{sgn}}\delta^{-1}\log\epsilon^{-1}\).
1. Construct block-encodings of \(\rho_{0}\) and \(\rho_{1}\), denoted by \(U_{\rho_{0}}\) and \(U_{\rho_{1}}\), respectively, using \(O(1)\) queries to \(Q_{0}\) and \(Q_{1}\) and \(O(s(n))\) ancillary qubits by Lemma 4.8;
2. Construct a block-encoding of \(\frac{\rho_{0}-\rho_{1}}{2}\), denoted by \(U_{\frac{\rho_{0}-\rho_{1}}{2}}\), using \(O(1)\) queries to \(U_{\rho_{0}}\) and \(U_{\rho_{1}}\) and \(O(s(n))\) ancillary qubits by Lemma 3.14;
3. Let \(P_{d}^{\operatorname{sgn}}\) be the degree-\(d\) odd polynomial specified in Corollary 3.7 with parameters \(\delta\) and \(\epsilon\) such that its Chebyshev coefficients are computable in deterministic space \(O(\log(d/\epsilon))\);
4. Let \(U_{0}:=\mathcal{T}(Q_{0},U_{\frac{\rho_{0}-\rho_{1}}{2}},P_{d}^{\operatorname{ sgn}},\epsilon)\) and \(U_{1}:=\mathcal{T}(Q_{1},U_{\frac{\rho_{0}-\rho_{1}}{2}},P_{d}^{\operatorname{ sgn}},\epsilon)\);
5. Let \(G_{i}:=-(H\otimes U_{i})(I-2|\bar{0}\rangle\langle\bar{0}|)(H\otimes U_{i}^{ \dagger})(I-2\Pi_{0})\) for \(i\in\{0,1\}\), where \(\Pi_{0}\) is the projector onto the subspace spanned by \(\{|0\rangle|0\rangle|\varphi\}\) over all \(|\varphi\rangle\);
6. Measure the first two qubits of \(G_{i}(H\otimes U_{i})|0\rangle|0\rangle|\bar{0}\rangle\), and let \(x_{i0}\) and \(x_{i1}\) be the outcomes, respectively. Return "yes" if \(x_{00}=x_{01}=x_{10}=x_{11}=0\), and "no" otherwise.
```
**Algorithm 3**Space-efficient algorithm for \(\operatorname{\textsc{CertQSD}}_{\log}\).
**Constructing the unitary of interest via the space-efficient QSVT.** We consider the setting with \(s(n)=\Theta(\log n)\) and \(\varepsilon=\alpha/2\). Suppose \(Q_{0}\) and \(Q_{1}\) are \(s(n)\)-qubit quantum circuits that prepare the purifications of \(\rho_{0}\) and \(\rho_{1}\), respectively. Similar to Algorithm 1, we first construct an \(O(s)\)-qubit quantum circuit \(U_{\frac{\rho_{0}-\rho_{1}}{2}}\) that is a \((1,O(s),0)\)-block-encoding of \(\frac{\rho_{0}-\rho_{1}}{2}\), using \(O(1)\) queries to \(Q_{\rho_{0}}\) and \(Q_{\rho_{1}}\) and \(O(1)\) one- and two-qubit quantum gates.
Let \(\delta=\frac{\varepsilon}{2^{r+3}}\), \(\epsilon:=\frac{\varepsilon}{2(\tilde{C}_{\operatorname{sgn}}+2\tilde{C}_{ \operatorname{sgn}})\tilde{C}_{\operatorname{sgn}}2^{r+3}}\cdot\frac{1}{2\log \big{(}2(\tilde{C}_{\operatorname{sgn}}+2\tilde{C}_{\operatorname{sgn}}) \tilde{C}_{\operatorname{sgn}}2^{r+3}/\varepsilon\big{)}}\) and \(d:=\tilde{C}_{\operatorname{sgn}}\delta^{-1}\log\epsilon^{-1}=2^{O(s)}\) where \(\tilde{C}_{\operatorname{sgn}}\) comes from Corollary 3.7. Let \(P_{d}^{\operatorname{sgn}}\in\mathbb{R}[x]\) be the odd polynomial specified in Corollary 3.7. Let \(U_{i}:=\mathcal{T}(Q_{i},U_{\frac{\rho_{0}-\rho_{1}}{2}},P_{d}^{\operatorname{ sgn}},\epsilon)\) for \(i\in\{0,1\}\), then we have the following equalities with \(0\leq p_{0},p_{1}\leq 1\):
\[U_{0}|0\rangle|\bar{0}\rangle =\sqrt{p_{0}}|0\rangle|\psi_{0}\rangle+\sqrt{1-p_{0}}|1\rangle| \psi_{1}\rangle,\] \[U_{1}|0\rangle|\bar{0}\rangle =\sqrt{p_{1}}|0\rangle|\phi_{0}\rangle+\sqrt{1-p_{1}}|1\rangle| \phi_{1}\rangle.\]
Let \(H\) be the Hadamard gate, then we derive the following equality for \(i\in\{0,1\}\):
\[(H\otimes U_{i})|0\rangle|0\rangle|\bar{0}\rangle=\sqrt{\frac{p_{i}}{2}}|0 \rangle|0\rangle|\psi_{0}\rangle+\underbrace{\sqrt{\frac{p_{i}}{2}}|0\rangle|1 \rangle|\psi_{0}\rangle+\sqrt{\frac{1-p_{i}}{2}}|1\rangle|0\rangle|\psi_{1} \rangle+\sqrt{\frac{1-p_{i}}{2}}|1\rangle|1\rangle|\psi_{1}\rangle}_{\sqrt{1- \frac{p_{i}}{2}}|\bot_{i}\rangle}.\]
**Making the error one-sided by exact amplitude amplification.** Consider the Grover operator \(G_{i}:=-(H\otimes U_{i})(I-2|\bar{0}\rangle\langle\bar{0}|)(H\otimes U_{i}^{ \dagger})(I-2\Pi_{0})\), where \(\Pi_{0}\) is the projector onto the subspace spanned by \(\{|0\rangle|0\rangle|\varphi\rangle\}\) over all \(|\varphi\rangle\). By employing the exact amplitude amplification (Lemma 4.13), we can obtain that:
\[G_{i}(H\otimes U_{i})|0\rangle|0\rangle|\bar{0}\rangle=\sin(3\theta_{i})|0 \rangle|0\rangle|\psi_{0}\rangle+\cos(3\theta_{i})|\bot_{i}\rangle\text{ where }\sin^{2}(\theta_{i}) \!=\!\tfrac{p_{i}}{2}\text{ when }\theta_{i}\!\in\!\left[0,\tfrac{\pi}{4}\right]\!. \tag{4.4}\]
Let \(x_{i0}\) and \(x_{i1}\) be the measurement outcomes of the first two qubits of \(G_{i}(H\otimes U_{i})|0\rangle|0\rangle|\bar{0}\rangle\) for \(i\in\{0,1\}\). Algorithm 3 returns "yes" if \(x_{00}=x_{01}=x_{10}=x_{11}=0\), and "no" otherwise. We will show the correctness of our algorithm as follows:
* For _yes_ instances \((\rho_{0}=\rho_{1})\), \(U_{P_{d}^{\text{sgn}}\left(\frac{\rho_{0}-\rho_{1}}{2}\right)}\) is a \((1,O(s),0)\)-block-encoding of the zero operator, following from Remark 3.12. Consequently, \(\mathcal{T}(Q_{i},U_{\frac{\rho_{0}-\rho_{1}}{2}},P_{d},\epsilon)\) outputs \(0\) with probability \(1/2\) for \(i\in\{0,1\}\), i.e., \(p_{0}=p_{1}=1/2\). As a result, we have \(\theta_{0}=\theta_{1}=\pi/6\) and \(\sin^{2}(3\theta_{0})=\sin^{2}(3\theta_{1})=1\). Substituting these values into Equation (4.4), we can conclude that \(x_{00}=x_{01}=x_{10}=x_{11}=0\) with certainty, which completes the analysis.
* For _no_ instances \((\operatorname{td}(\rho_{0},\rho_{1})\geq\alpha)\), \(U_{P_{d}^{\text{sgn}}\left(\frac{\rho_{0}-\rho_{1}}{2}\right)}\) is a \((1,O(s),0)\)-block-encoding of \(A\) satisfying \(\left\|A-P_{d}^{\text{sgn}}\left(\frac{\rho_{0}-\rho_{1}}{2}\right)\right\|\leq\hat{C}_{\text{sgn}}\epsilon\log d\). Let \(p_{i}\) be the probability that \(\mathcal{T}(Q_{i},U_{\frac{\rho_{0}-\rho_{1}}{2}},P_{d},\epsilon)\) outputs \(0\) for \(i\in\{0,1\}\); then \(p_{i}=\tfrac{1}{2}\big{(}1+\operatorname{Re}(\operatorname{Tr}(\rho_{i}A))\big{)}\) following from Lemma 4.7. A direct calculation similar to Proposition 4.10.1 indicates that: \[|(p_{0}-p_{1})-\operatorname{td}(\rho_{0},\rho_{1})|\leq\hat{C}_{\text{sgn}}\epsilon\log d+2C_{\text{sgn}}\epsilon\log d+2^{r+1}\delta.\] Under the choice of \(\delta\), \(\epsilon\), and \(d\) in the proof of Theorem 4.10, we obtain that \(|(p_{0}-p_{1})-\operatorname{td}(\rho_{0},\rho_{1})|\leq\varepsilon\), which yields that \(\max\{|p_{0}-1/2|,|p_{1}-1/2|\}\geq\varepsilon/2\). Since \(\operatorname{Pr}[x_{i0}=x_{i1}=0]=\sin^{2}(3\theta_{i})\) for \(i\in\{0,1\}\), Algorithm 3 will return "yes" with probability \(p_{\text{yes}}=\sin^{2}(3\theta_{0})\sin^{2}(3\theta_{1})\). We provide an upper bound for \(p_{\text{yes}}\) in Proposition 4.14.1, with the proof deferred to Appendix B.2.
**Constructing the unitary of interest via the SWAP test.** We consider the setting with \(s(n)=\Theta(\log n)\). Our main building block is the circuit implementation of the SWAP test (Lemma 2.12). Specifically, we utilize the subroutine \(\text{SWAP}(\rho_{i},\rho_{j})\) for \(i,j\in\{0,1\}\), which involves applying \(Q_{i}\) and \(Q_{j}\) to prepare quantum states \(\rho_{i}\) and \(\rho_{j}\), respectively, and then employing the SWAP test on these states \(\rho_{i}\) and \(\rho_{j}\). We denote by \(p_{ij}\) the probability that \(\text{SWAP}(\rho_{i},\rho_{j})\) outputs \(0\) based on the measurement outcome of the control qubit in the SWAP test. Following Lemma 2.12, we have \(p_{ij}=\frac{1}{2}\big{(}1+\text{Tr}(\rho_{i}\rho_{j})\big{)}\) for \(i,j\in\{0,1\}\).
We define \(T_{ij}:=\text{SWAP}(\rho_{i},\rho_{j})\) for \((i,j)\in\mathcal{I}:=\{(0,0),(1,1),(0,1)\}\), with the control qubit in \(\text{SWAP}(\rho_{i},\rho_{j})\) serving as the output qubit of \(T_{ij}\). By introducing another ancillary qubit, we construct \(T^{\prime}_{ij}:=\text{CNOT}(I\otimes T_{ij})\) for \((i,j)\in\mathcal{I}\), where the CNOT is controlled by the output qubit of \(T_{ij}\) and targets the new ancillary qubit. It is effortless to see that \(T^{\prime}_{ij}\) prepares the purification of \(\varrho(p_{ij})\) with \(\varrho(p_{ij}):=p_{ij}|0\rangle\langle 0|+(1-p_{ij})|1\rangle\langle 1|\) for \((i,j)\in\mathcal{I}\).
By applying Lemma 4.8, we can construct quantum circuits \(T^{\prime\prime}_{ij}\) for \((i,j)\in\mathcal{I}\) that serve as \((1,O(s),0)\)-block-encodings of \(\varrho(p_{ij})\), using \(O(1)\) queries to \(T^{\prime}_{ij}\) and \(O(1)\) one- and two-qubit quantum gates. Notably, \((X\otimes I)T^{\prime\prime}_{01}\), with \(X\) acting on the qubit of \(\varrho(p_{01})\), prepares the purification of \(X\varrho(p_{01})X^{\dagger}=p_{01}|1\rangle\langle 1|+(1-p_{01})|0\rangle\langle 0|=\varrho(1-p_{01})\), leading to the equality:
\[\varrho(\rho_{0},\rho_{1}):=\frac{1}{4}\varrho(p_{00})+\frac{1}{4}\varrho(p_ {11})+\frac{1}{2}\varrho(1-p_{01})=\varrho\left(\frac{1}{2}+\frac{\text{HS}^{ 2}(\rho_{0},\rho_{1})}{4}\right).\]
Consequently, we employ Lemma 3.14 to construct a unitary quantum circuit \(U\) that is a \((1,m,0)\)-block-encoding of \(\varrho\big{(}\frac{1}{2}+\frac{\text{HS}^{2}(\rho_{0},\rho_{1})}{4}\big{)}\) using \(O(1)\) queries to \(T^{\prime\prime}_{00}\), \(T^{\prime\prime}_{11}\), \((X\otimes I)T^{\prime\prime}_{01}\), and \(O(1)\) one- and two-qubit quantum gates, where \(m:=O(s)\). The construction ensures the following:
\[U|0\rangle|0\rangle^{\otimes m}=\underbrace{\left(\frac{1}{2}+\frac{\text{HS }^{2}(\rho_{0},\rho_{1})}{4}\right)}_{\sin(\theta)}|0\rangle|0\rangle^{ \otimes m}+\cos(\theta)|\bot\rangle,\text{ where }\langle 0|\langle 0|^{ \otimes m}|\bot\rangle=0. \tag{4.5}\]
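The value \(\frac{1}{2}+\frac{\mathrm{HS}^{2}(\rho_{0},\rho_{1})}{4}\) appearing in Equation (4.5) can be reproduced directly from the three SWAP-test acceptance probabilities. The NumPy sketch below (illustrative only) verifies \(\frac{1}{4}p_{00}+\frac{1}{4}p_{11}+\frac{1}{2}(1-p_{01})=\frac{1}{2}+\frac{\mathrm{HS}^{2}(\rho_{0},\rho_{1})}{4}\), assuming the normalization \(\mathrm{HS}^{2}(\rho_{0},\rho_{1})=\frac{1}{2}\mathrm{Tr}[(\rho_{0}-\rho_{1})^{2}]\), which is the normalization consistent with this equality.

```python
import numpy as np

rng = np.random.default_rng(6)

def random_density_matrix(dim):
    g = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    rho = g @ g.conj().T
    return rho / np.trace(rho)

def swap_test_p(ri, rj):
    # SWAP-test acceptance probability p_ij = (1 + Tr(rho_i rho_j)) / 2 (Lemma 2.12).
    return 0.5 * (1 + np.real(np.trace(ri @ rj)))

dim = 8
rho0, rho1 = random_density_matrix(dim), random_density_matrix(dim)

mix = (0.25 * swap_test_p(rho0, rho0) + 0.25 * swap_test_p(rho1, rho1)
       + 0.5 * (1 - swap_test_p(rho0, rho1)))   # "0"-probability of the mixture

# Assumed normalization: HS^2(rho0, rho1) = Tr[(rho0 - rho1)^2] / 2.
hs2 = 0.5 * np.real(np.trace((rho0 - rho1) @ (rho0 - rho1)))
print(mix, 0.5 + hs2 / 4)                       # the two values coincide
```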
**Making the error one-sided.** Let us consider the Grover operator \(G:=-U(I-2|\bar{0}\rangle\langle\bar{0}|)U^{\dagger}(I-2|\bar{0}\rangle\langle \bar{0}|)\). By applying Lemma 4.13, we derive that \(GU|0\rangle|0\rangle^{\otimes m}=\sin(3\theta)|0\rangle|0\rangle^{\otimes m}+ \cos(3\theta)|\bot\rangle\). Subsequently, we measure all qubits of \(GU|0\rangle|0\rangle^{\otimes m}\) in the computational basis, represented as \(x\in\{0,1\}^{m+1}\). Hence, Algorithm 4 returns "yes" if the outcome \(x\) is \(0^{m+1}\) and "no" otherwise. Algorithm 4 accepts with probability \(\sin^{2}(3\theta)\). Now we analyze the correctness of the algorithm:
* For _yes_ instances (\(\rho_{0}=\rho_{1}\)), we have \(\text{HS}^{2}(\rho_{0},\rho_{1})=0\). Following Equation (4.5), we obtain \(\sin(\theta)=1/2\) and thus \(\sin^{2}(3\theta)=1\). We conclude that Algorithm 4 will always return "yes".
* For _no_ instances, we have \(\mathrm{HS}^{2}(\rho_{0},\rho_{1})\geq\alpha\). According to Equation (4.5), we derive that: \[\sin(\theta)=\frac{1}{2}+\frac{\mathrm{HS}^{2}(\rho_{0},\rho_{1})}{4}\geq\frac{ 1}{2}+\frac{\alpha}{4}\text{ and }\frac{1}{4}\leq\sin^{2}(\theta)=\Big{(}\frac{1}{2}+\frac{ \mathrm{HS}^{2}(\rho_{0},\rho_{1})}{4}\Big{)}^{2}\leq\Big{(}\frac{1}{2}+\frac{ 1}{4}\Big{)}^{2}=\frac{9}{16}.\] (4.6) As a result, considering the fact that \(\sin^{2}(3\theta)=f(\sin^{2}(\theta))\) where \(f(x):=16x^{3}-24x^{2}+9x\), we require Proposition 4.15.1 and the proof is deferred to Appendix B.2: **Proposition 4.15.1**.: _The polynomial function \(f(x):=16x^{3}-24x^{2}+9x\) is monotonically decreasing in \([1/4,9/16]\). Moreover, we have \(f\big{(}\big{(}\frac{1}{2}+\frac{\alpha}{4}\big{)}^{2}\big{)}\leq 1-\frac{ \alpha^{2}}{2}\) for any \(0\leq\alpha\leq 1\)._ Combining Equation (4.6) and Proposition 4.15.1, we have that \(\sin^{2}(3\theta)=f(\sin^{2}(\theta))\leq f\big{(}\big{(}\frac{1}{2}+\frac{ \alpha}{4}\big{)}^{2}\big{)}\leq 1-\frac{\alpha^{2}}{2}\). Hence, Algorithm 4 will return "no" with probability at least \(\alpha^{2}/2\).
Regarding the computational complexity of Algorithm 4, this algorithm requires \(O(s(n))\) qubits and performs \(O(1)\) queries to \(Q_{0}\) and \(Q_{1}\). Finally, we finish the proof by applying error reduction for \(\mathsf{coRQ}_{\mathsf{U}}\mathsf{L}\) (Corollary 3.19) to Algorithm 4.
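The elementary facts about \(f(x)=16x^{3}-24x^{2}+9x\) used in the _no_-instance analysis (Proposition 4.15.1) are easy to confirm numerically; the following sketch (illustrative only) checks the identity \(\sin^{2}(3\theta)=f(\sin^{2}\theta)\), the monotone decrease of \(f\) on \([1/4,9/16]\), and the bound \(f\big((\tfrac{1}{2}+\tfrac{\alpha}{4})^{2}\big)\leq 1-\tfrac{\alpha^{2}}{2}\) on a grid of \(\alpha\in[0,1]\).

```python
import numpy as np

# sin^2(3*theta) as a polynomial in s = sin^2(theta): f(s) = 16 s^3 - 24 s^2 + 9 s.
f = lambda s: 16 * s**3 - 24 * s**2 + 9 * s

theta = np.linspace(0, np.pi / 2, 7)
print(np.allclose(np.sin(3 * theta) ** 2, f(np.sin(theta) ** 2)))

# f is decreasing on [1/4, 9/16]: its derivative 48 s^2 - 48 s + 9 is negative there.
s = np.linspace(0.25, 9 / 16, 1000)
print(np.all(np.diff(f(s)) <= 0))

# Rejection bound of Proposition 4.15.1: f((1/2 + a/4)^2) <= 1 - a^2/2 on [0, 1].
a = np.linspace(0, 1, 1000)
print(np.all(f((0.5 + a / 4) ** 2) <= 1 - a**2 / 2 + 1e-12))
```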
### \(\mathsf{BQL}\)- and \(\mathsf{coRQ}_{\mathsf{U}}\mathsf{L}\)-hardness for space-bounded state testing problems
We will prove that space-bounded state testing problems mentioned in Theorem 4.6 are \(\mathsf{BQ}_{\mathrm{U}}\mathsf{L}\)-hard, which implies their \(\mathsf{BQL}\)-hardness since \(\mathsf{BQL}=\mathsf{BQ}_{\mathrm{U}}\mathsf{L}\)[10]. Similarly, all space-bounded state certification problems mentioned in Theorem 4.5 are \(\mathtt{coRQ}_{\mathrm{U}}\mathsf{L}\)-hard.
#### 4.5.1 Hardness results for \(\textsc{GapQSD}_{\log}\), \(\textsc{GapQHS}_{\log}\), and their certification versions
Employing analogous constructions, we can establish the \(\mathsf{BQ}_{\mathrm{U}}\mathsf{L}\)-hardness of both \(\mathtt{GapQSD}_{\mathrm{log}}\) and \(\mathtt{GapQHS}_{\mathrm{log}}\). The former involves a single-qubit pure state and a single-qubit mixed state, while the latter involves two pure states.
**Lemma 4.16** (\(\overline{\mathtt{GapQSD}}_{\mathrm{log}}\) is \(\mathsf{BQ}_{\mathrm{U}}\mathsf{L}\)-hard).: _For any deterministic logspace computable functions \(a(n)\) and \(b(n)\) such that \(a(n)-b(n)\geq 1/\operatorname{poly}(n)\), we have that \(\overline{\mathtt{GapQSD}}_{\mathrm{log}}[1-\sqrt{a(n)},\sqrt{1-b(n)}]\) is \(\mathsf{BQ}_{\mathrm{U}}\mathsf{L}[a(n),b(n)]\)-hard._
Proof.: Consider a promise problem \((\mathcal{L}_{yes},\mathcal{L}_{no})\in\mathsf{BQ}_{\mathrm{U}}\mathsf{L}[a(n),b(n)]\); then we know that the acceptance probability satisfies \(\Pr[C_{x}\text{ accepts}]\geq a(n)\) if \(x\in\mathcal{L}_{yes}\), whereas \(\Pr[C_{x}\text{ accepts}]\leq b(n)\) if \(x\in\mathcal{L}_{no}\). Now we notice that the acceptance probability is the squared fidelity between a single-qubit pure state \(\rho_{0}\) and a single-qubit mixed state \(\rho_{1}\), which are generated by two logarithmic-qubit quantum circuits \(Q_{0}\) and \(Q_{1}\), respectively:
\[\begin{split}\Pr[C_{x}\text{ accepts}]&=\||1\rangle \langle 1|_{\mathrm{out}}C_{x}|\bar{0}\rangle\|_{2}^{2}\\ &=\!\mathrm{Tr}\left(|1\rangle\langle 1|_{\mathrm{out}}\Tr_{ \overline{\mathrm{out}}}\left(C_{x}|\bar{0}\rangle\langle\bar{0}|C_{x}^{ \dagger}\right)\right)\\ &=\!\mathrm{F}^{2}\left(|1\rangle\langle 1|_{\mathrm{out}},\Tr_{ \overline{\mathrm{out}}}\left(C_{x}|\bar{0}\rangle\langle\bar{0}|C_{x}^{ \dagger}\right)\right)\\ &:=\!\mathrm{F}^{2}(\rho_{0},\rho_{1}).\end{split} \tag{4.7}\]
In particular, the corresponding \(Q_{0}\) simply flips the designated output qubit, and the corresponding \(Q_{1}\) is exactly the circuit \(C_{x}\); we then prepare \(\rho_{0}\) and \(\rho_{1}\) by tracing out all non-output qubits. By utilizing Lemma 2.2, we derive that:
* For _yes_ instances, \(\mathrm{F}^{2}(\rho_{0},\rho_{1})\geq a(n)\) deduces that \(\mathrm{td}(\rho_{0},\rho_{1})\leq 1-\sqrt{a(n)}\);
* For _no_ instances, \(\mathrm{F}^{2}(\rho_{0},\rho_{1})\leq b(n)\) yields that \(\mathrm{td}(\rho_{0},\rho_{1})\geq\sqrt{1-b(n)}\).
Therefore, we demonstrate that \(\overline{\textsc{GapQSD}}_{\mathrm{log}}[1-\sqrt{a(n)},\sqrt{1-b(n)}]\) is \(\mathsf{BQ}_{\mathsf{U}}\mathsf{L}[a(n),b(n)]\)-hard.
To construct pure states, adapted from the construction in Lemma 4.16, we replace the final measurement in the \(\mathsf{BQL}\) circuit \(C_{x}\) with a quantum gate (CNOT) and design a new algorithm based on \(C_{x}\) with the final measurement on _all_ qubits in the computational basis.
**Lemma 4.17** (\(\overline{\textsc{GapQHS}}_{\log}\) is \(\mathsf{BQ}_{\mathsf{U}}\mathsf{L}\)-hard).: _For any deterministic logspace computable functions \(a(n)\) and \(b(n)\) such that \(a(n)-b(n)\geq 1/\operatorname{poly}(n)\), we have that \(\overline{\textsc{GapQHS}}_{\log}[1-a^{2}(n),1-b^{2}(n)]\) is \(\mathsf{BQ}_{\mathsf{U}}\mathsf{L}[a(n),b(n)]\)-hard._
Proof.: For any promise problem \((\mathcal{L}_{\text{yes}},\mathcal{L}_{\text{no}})\in\mathsf{BQ}_{\mathsf{U}}\mathsf{L}[a(n),b(n)]\), we have that the acceptance probability \(\Pr[C_{x}\text{ accepts}]\geq a(n)\) if \(x\in\mathcal{L}_{\text{yes}}\), whereas \(\Pr[C_{x}\text{ accepts}]\leq b(n)\) if \(x\in\mathcal{L}_{\text{no}}\). For convenience, let the output qubit be the register \(\mathsf{O}\). Now we construct a new quantum circuit \(C^{\prime}_{x}\) with an additional ancillary qubit on the register \(\mathsf{F}\) initialized to zero:
\[C^{\prime}_{x}:=C^{\dagger}_{x}X^{\dagger}_{\mathsf{O}}\textsc{CNOT}_{\mathsf{ O}\to\mathsf{F}}X_{\mathsf{O}}C_{x}.\]
And we say that \(C^{\prime}_{x}\) accepts if the measurement outcome of all qubits (namely the working qubit of \(C_{x}\) and \(\mathsf{F}\)) are all zero. Through a direct calculation, we obtain:
\[\begin{split}\Pr\bigl{[}C^{\prime}_{x}\text{ accepts}\bigr{]}& =\bigl{\|}(|\bar{0}\rangle\langle\bar{0}|\otimes|0\rangle\langle 0|_{ \mathsf{F}})C^{\dagger}_{x}X_{\mathsf{O}}\textsc{CNOT}_{\mathsf{O}\to\mathsf{F} }X_{\mathsf{O}}C_{x}(|\bar{0}\rangle\otimes|0\rangle_{\mathsf{F}})\bigr{\|}_{ 2}^{2}\\ &=\bigl{|}(\langle\bar{0}|\otimes\langle 0|_{\mathsf{F}}\rangle C^{ \dagger}_{x}(|1\rangle\langle 1|_{\mathsf{O}}\otimes I_{\mathsf{F}}+|0\rangle \langle 0|_{\mathsf{O}}\otimes X_{\mathsf{F}})C_{x}(|\bar{0}\rangle\otimes|0 \rangle_{\mathsf{F}})\bigr{|}^{2}\\ &=\bigl{|}\langle\bar{0}|C^{\dagger}_{x}|1\rangle\langle 1|_{ \mathsf{O}}C_{x}|\bar{0}\rangle\bigr{|}^{2}\\ &=\Pr^{2}\left[C_{x}\text{ accepts}\right].\end{split} \tag{4.8}\]
Here, the second line owes to \(\textsc{CNOT}_{\mathsf{O}\to\mathsf{F}}=|0\rangle\langle 0|_{\mathsf{O}} \otimes I_{\mathsf{F}}+|1\rangle\langle 1|_{\mathsf{O}}\otimes X_{\mathsf{F}}\), and the last line is because of Equation (4.7). Interestingly, by defining two pure states \(\rho_{0}:=|\bar{0}\rangle\langle\bar{0}|\otimes|0\rangle\langle 0|_{\mathsf{F}}\) and \(\rho_{1}:=C^{\prime}_{x}(|\bar{0}\rangle\langle\bar{0}|\otimes|0\rangle\langle 0 |_{\mathsf{F}})C^{\prime\dagger}_{x}\) corresponding to \(Q_{0}=I\) and \(Q_{1}=C^{\prime}_{x}\), respectively, we deduce the following from Equation (4.8):
\[\Pr\bigl{[}C^{\prime}_{x}\text{ accepts}\bigr{]}=\operatorname{Tr}(\rho_{0} \rho_{1})=1-\operatorname{HS}^{2}(\rho_{0},\rho_{1}). \tag{4.9}\]
Combining Equation (4.8) and Equation (4.9), we conclude that:
* For _yes_ instances, \(\Pr[C_{x}\text{ accepts}]\geq a(n)\) implies that \(\operatorname{HS}^{2}(\rho_{0},\rho_{1})\leq 1-a^{2}(n)\);
* For _no_ instances, \(\Pr[C_{x}\text{ accepts}]\leq b(n)\) yields that \(\operatorname{HS}^{2}(\rho_{0},\rho_{1})\geq 1-b^{2}(n)\).
We thus complete the proof that \(\overline{\textsc{GapQHS}}_{\log}[1-a^{2}(n),1-b^{2}(n)]\) is \(\mathsf{BQ}_{\mathsf{U}}\mathsf{L}[a(n),b(n)]\)-hard.
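The squaring effect of the construction, Equation (4.8), can be simulated directly. In the NumPy sketch below (illustrative only), a random unitary plays the role of \(C_{x}\), the output register \(\mathsf{O}\) is taken to be the first qubit, and the flag \(\mathsf{F}\) is an appended qubit; these placements are assumptions of the sketch. The all-zeros outcome probability of \(C^{\prime}_{x}\) matches \(\Pr^{2}[C_{x}\text{ accepts}]\).

```python
import numpy as np

rng = np.random.default_rng(7)

n = 3                                            # working qubits of C_x
dim = 2 ** n
g = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
c, _ = np.linalg.qr(g)                           # random unitary standing in for C_x

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])

zero_n = np.zeros(dim); zero_n[0] = 1.0          # |0...0> on the working qubits
proj_out1 = np.kron(P1, np.eye(dim // 2))        # |1><1| on the output qubit (MSB)
p = np.linalg.norm(proj_out1 @ (c @ zero_n)) ** 2        # Pr[C_x accepts]

# C'_x = C_x^dag X_O CNOT_{O->F} X_O C_x on (working qubits) (x) F.
c_full = np.kron(c, I2)
x_o = np.kron(np.kron(X, np.eye(dim // 2)), I2)
cnot_of = (np.kron(np.kron(P0, np.eye(dim // 2)), I2)
           + np.kron(np.kron(P1, np.eye(dim // 2)), X))
c_prime = c_full.conj().T @ x_o @ cnot_of @ x_o @ c_full

zero_all = np.zeros(2 * dim); zero_all[0] = 1.0  # |0...0>|0>_F
p_prime = abs(zero_all @ (c_prime @ zero_all)) ** 2      # all-zeros outcome of C'_x
print(p_prime, p ** 2)                           # Equation (4.8): the values coincide
```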
Our constructions in the proof of Lemma 4.16 and Lemma 4.17 are somewhat analogous to Theorem 12 and Theorem 13 in [10]. Then we proceed with a few direct corollaries of Lemma 4.16 and Lemma 4.17.
**Corollary 4.18** (\(\mathsf{BQ}_{\mathsf{U}}\mathsf{L}\)- and \(\mathsf{coRQ}_{\mathsf{U}}\mathsf{L}\)-hardness).: _For any functions \(a(n)\) and \(b(n)\) that are computable in deterministic logspace and satisfy \(a(n)-b(n)\geq 1/\operatorname{poly}(n)\), the following holds for some polynomial \(p(n)\) which can be computed in deterministic logspace:_
1. \(\textsc{GapQSD}_{\log}[\alpha(n),\beta(n)]\) _is_ \(\mathsf{BQ}_{\mathsf{U}}\mathsf{L}\)_-hard for_ \(\alpha\leq 1-1/p(n)\) _and_ \(\beta\geq 1/p(n)\)_;_
2. \(\overline{\textsc{CertQSD}}_{\log}[\gamma(n)]\) _is_ \(\mathsf{coRQ}_{\mathsf{U}}\mathsf{L}\)_-hard for_ \(\gamma\leq 1-1/p(n)\)_;_
3. \(\textsc{GapQHS}_{\log}[\alpha(n),\beta(n)]\) _is_ \(\mathsf{BQ}_{\mathsf{U}}\mathsf{L}\)_-hard for_ \(\alpha\leq 1-1/p(n)\) _and_ \(\beta\geq 1/p(n)\)_;_
4. \(\overline{\textsc{CertQHS}}_{\log}[\gamma(n)]\) _is_ \(\mathsf{coRQ}_{\mathsf{U}}\mathsf{L}\)_-hard for_ \(\gamma\leq 1-1/p(n)\)_._
Proof.: Firstly, it is important to note that \(\mathsf{BQ}_{\mathsf{U}}\mathsf{L}\) is closed under complement, as demonstrated in [25, Corollary 4.8]. By combining error reduction for \(\mathsf{BQ}_{\mathsf{U}}\mathsf{L}\) (Corollary 3.19) and Lemma 4.16 (resp., Lemma 4.17), we can derive the first statement (resp., the third statement).
Moreover, to obtain the second statement (resp., the fourth statement), we can utilize error reduction for \(\mathsf{coRQ}_{\mathsf{U}}\mathsf{L}\) (Corollary 3.19) and set \(a=1\) in Lemma 4.16 (resp., Lemma 4.17).
#### 4.5.2 Hardness results for \(\textsc{GapQJS}_{\log}\) and \(\textsc{GapQED}_{\log}\)
We demonstrate the \(\mathsf{BQ}_{\mathsf{U}}\mathsf{L}\)-hardness of \(\textsc{GapQJS}_{\log}\) by reducing \(\textsc{GapQSD}_{\log}\) to \(\textsc{GapQJS}_{\log}\), following a similar approach as shown in [11, Lemma 4.11].
**Lemma 4.19** (\(\textsc{GapQJS}_{\log}\) is \(\mathsf{BQ}_{\mathsf{U}}\mathsf{L}\)-hard).: _For any functions \(\alpha(n)\) and \(\beta(n)\) that are computable in deterministic logspace, we have that \(\textsc{GapQJS}_{\log}[\alpha(n),\beta(n)]\) is \(\mathsf{BQ}_{\mathsf{U}}\mathsf{L}\)-hard for \(\alpha(n)\leq 1-\sqrt{2}/\sqrt{p(n)}\) and \(\beta(n)\geq 1/p(n)\), where \(p(n)\) is some deterministic logspace computable polynomial._
Proof.: By employing Corollary 4.18, it suffices to reduce \(\textsc{GapQSD}_{\log}[1-1/p(n),1/p(n)]\) to \(\textsc{GapQJS}_{\log}[\alpha(n),\beta(n)]\). Consider logarithmic-qubit quantum circuits \(Q_{0}\) and \(Q_{1}\), which is an instance of \(\textsc{GapQSD}_{\log}\). We will obtain \(\rho_{k}\) by performing \(Q_{k}\) on \(|0^{n}\rangle\) and tracing out the non-output qubits for \(k\in\{0,1\}\). We then have the following:
* If \(\mathrm{td}(\rho_{0},\rho_{1})\geq 1-1/p(n)\), then Lemma 2.4 yields that \[\mathrm{QJS}_{2}(\rho_{0},\rho_{1})\geq 1-\mathrm{H}_{2}\left(\tfrac{1-\mathrm{td}(\rho_{0},\rho_{1})}{2}\right)\geq 1-\mathrm{H}_{2}\left(\tfrac{1}{2p(n)}\right)\geq 1-\tfrac{\sqrt{2}}{\sqrt{p(n)}}\geq\alpha(n),\] where the third inequality owes to \(\mathrm{H}_{2}(x)\leq 2\sqrt{x}\) for all \(x\in[0,1]\).
* If \(\mathrm{td}(\rho_{0},\rho_{1})\leq 1/p(n)\), then Lemma 2.4 indicates that \[\mathrm{QJS}_{2}(\rho_{0},\rho_{1})\leq\mathrm{td}(\rho_{0},\rho_{1})\leq \tfrac{1}{p(n)}\leq\beta(n).\]
Therefore, we can utilize the same quantum circuits \(Q_{0}\) and \(Q_{1}\), along with their corresponding quantum states \(\rho_{0}\) and \(\rho_{1}\), respectively, to establish a logspace Karp reduction from \(\textsc{GapQSD}_{\log}[1-1/p(n),1/p(n)]\) to \(\textsc{GapQJS}_{\log}[\alpha(n),\beta(n)]\), as required.
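The two inequalities from Lemma 2.4 invoked above, \(1-\mathrm{H}_{2}\big(\tfrac{1-\mathrm{td}}{2}\big)\leq\mathrm{QJS}_{2}(\rho_{0},\rho_{1})\leq\mathrm{td}(\rho_{0},\rho_{1})\), can be spot-checked on random states; the following sketch (illustrative only) does so for a few random four-dimensional density matrices.

```python
import numpy as np

rng = np.random.default_rng(8)

def random_density_matrix(dim):
    g = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    rho = g @ g.conj().T
    return rho / np.trace(rho)

def entropy2(rho):
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return -np.sum(w * np.log2(w))

def h2(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

dim = 4
for _ in range(5):
    rho0, rho1 = random_density_matrix(dim), random_density_matrix(dim)
    td = 0.5 * np.sum(np.abs(np.linalg.eigvalsh(rho0 - rho1)))
    qjs2 = entropy2((rho0 + rho1) / 2) - 0.5 * (entropy2(rho0) + entropy2(rho1))
    lower = 1 - h2((1 - td) / 2)
    print(f"{lower:.4f} <= {qjs2:.4f} <= {td:.4f}:", lower <= qjs2 <= td)
```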
By combining the reduction from \(\textsc{GapQSD}_{\log}\) to \(\textsc{GapQJS}_{\log}\) (Lemma 4.19) and the reduction from \(\textsc{GapQJS}_{\log}\) to \(\textsc{GapQED}_{\log}\) (Corollary 4.12), we demonstrate the \(\mathsf{BQ}_{\mathsf{U}}\mathsf{L}\)-hardness of \(\textsc{GapQED}_{\log}\) by reducing \(\textsc{GapQSD}_{\log}\) to \(\textsc{GapQED}_{\log}\).
**Corollary 4.20** (\(\textsc{GapQED}_{\log}\) is \(\mathsf{BQ}_{\mathsf{U}}\mathsf{L}\)-hard).: _For any function \(g(n)\) that is computable in deterministic logspace, we have that \(\textsc{GapQED}_{\log}[g(n)]\) is \(\mathsf{BQ}_{\mathsf{U}}\mathsf{L}\)-hard for \(g(n)\leq\tfrac{\ln 2}{2}\big{(}1-\tfrac{\sqrt{2}}{\sqrt{p(n-1)}}-\tfrac{1}{p(n-1)}\big{)}\), where \(p(n)\) is some polynomial that can be computed in deterministic logspace._
Proof.: By combining Corollary 4.18 and Lemma 4.19, we establish that \(\textsc{GapQJS}_{\log}[\alpha(n),\beta(n)]\) is \(\mathsf{BQ}_{\mathsf{U}}\mathsf{L}\)-hard for \(\alpha(n)\leq 1-\sqrt{2}/\sqrt{p(n)}\) and \(\beta(n)\geq 1/p(n)\), where \(p(n)\) is some deterministic logspace computable polynomial. The hard instances specified in Corollary 4.18 consist of \(s(n)\)-qubit quantum circuits \(Q_{0}\) and \(Q_{1}\) that prepares a purification of \(r(n)\)-qubit (mixed) quantum states \(\rho_{0}\) and \(\rho_{1}\), respectively, where \(1\leq r(n)\leq s(n)=\Theta(\log n)\).
Subsequently, by employing Corollary 4.12, we construct \((s+1)\)-qubit quantum circuits \(Q^{\prime}_{0}\) and \(Q^{\prime}_{1}\) that prepares a purification of \((r+1)\)-qubit quantum states \(\rho^{\prime}_{0}=\big{(}p|0\rangle\langle 0|+(1-p)|1\rangle\langle 1|\big{)} \otimes(\tfrac{1}{2}\rho_{0}+\tfrac{1}{2}\rho_{1})\) satisfying \(\mathrm{H}_{2}(p)=1-\tfrac{1}{2}\big{(}\alpha(n)+\beta(n)\big{)}\) and \(\rho^{\prime}_{1}=\tfrac{1}{2}|0\rangle\langle 0|\otimes\rho_{0}+\tfrac{1}{2}|1 \rangle\langle 1|\otimes\rho_{1}\), respectively. Following Corollary 4.12, \(\textsc{GapQED}_{\log}[g(n)]\) is \(\mathsf{BQ}_{\mathsf{U}}\mathsf{L}\)-hard as long as
\[g(n)=\tfrac{\ln 2}{2}\big{(}\alpha(n-1)-\beta(n-1)\big{)}\leq\tfrac{\ln 2}{2} \Big{(}1-\tfrac{\sqrt{2}}{\sqrt{p(n-1)}}-\tfrac{1}{p(n-1)}\Big{)}.\]
Therefore, \(\textsc{GapQSD}_{s(n)}[\alpha(n),\beta(n)]\) is logspace Karp reducible to \(\textsc{GapQED}_{s+1}[g(n)]\) by mapping \((Q_{0},Q_{1})\) to \((Q^{\prime}_{0},Q^{\prime}_{1})\).
## Acknowledgments
This work was partially supported by MEXT Q-LEAP grant No. JPMXS0120319794. FLG was also supported by JSPS KAKENHI grants Nos. JP19H04066, JP20H05966, JP20H00579, JP20H04139, and JP21H04879. YL was also supported by JST, the establishment of University
fellowships towards the creation of science technology innovation, Grant No. JPMJFS2125. We express our gratitude to anonymous reviewers for providing detailed suggestions on the space-efficient quantum singular value transformation and for suggesting to add discussion on space-bounded distribution testing. Circuit diagrams were drawn by the Quantikz package [14].
|
2303.09531
|
GLASU: A Communication-Efficient Algorithm for Federated Learning with
Vertically Distributed Graph Data
|
Vertical federated learning (VFL) is a distributed learning paradigm, where
computing clients collectively train a model based on the partial features of
the same set of samples they possess. Current research on VFL focuses on the
case when samples are independent, but it rarely addresses an emerging scenario
when samples are interrelated through a graph. For graph-structured data, graph
neural networks (GNNs) are competitive machine learning models, but a naive
implementation in the VFL setting causes a significant communication overhead.
Moreover, the analysis of the training is faced with a challenge caused by the
biased stochastic gradients. In this paper, we propose a model splitting method
that splits a backbone GNN across the clients and the server and a
communication-efficient algorithm, GLASU, to train such a model. GLASU adopts
lazy aggregation and stale updates to skip aggregation when evaluating the
model and skip feature exchanges during training, greatly reducing
communication. We offer a theoretical analysis and conduct extensive numerical
experiments on real-world datasets, showing that the proposed algorithm
effectively trains a GNN model, whose performance matches that of the backbone
GNN when trained in a centralized manner.
|
Xinwei Zhang, Mingyi Hong, Jie Chen
|
2023-03-16T17:47:55Z
|
http://arxiv.org/abs/2303.09531v1
|
GLASU: A Communication-Efficient Algorithm for Federated Learning with Vertically Distributed Graph Data
###### Abstract
Vertical federated learning (VFL) is a distributed learning paradigm, where computing clients collectively train a model based on the partial features of the same set of samples they possess. Current research on VFL focuses on the case when samples are independent, but it rarely addresses an emerging scenario when samples are interrelated through a graph. In this work, we train a graph neural network (GNN) through VFL, where each client owns a part of the node features and a different edge set. This data scenario incurs a significant communication overhead, not only because of the handling of distributed features but also due to neighborhood aggregation in a GNN. Moreover, the training analysis is faced with a challenge caused by the biased stochastic gradients. We propose a model-splitting method that splits a backbone GNN across the clients and the server and a communication-efficient algorithm, GLASU, to train such a model. GLASU adopts lazy aggregation and stale updates to skip communication in neighborhood aggregation and in model updates, respectively, greatly reducing communication while enjoying convergence guarantees. We conduct extensive numerical experiments on real-world datasets, showing that GLASU effectively trains a GNN that matches the accuracy of centralized training, while using only a fraction of the time due to communication saving.
## 1 Introduction
Vertical federated learning (VFL) is a newly developed machine learning scenario in distributed optimization, where clients share data with the same sample identity but each client possesses only a subset of the features for each sample. The goal is for the clients to collaboratively learn a model based on all features. Such a scenario appears in many applications, including healthcare, finance, and recommendation systems.
Most of the current VFL solutions [1, 2] treat the case where samples are independent, but omit their relational structure. However, the pairwise relationship between samples emerges in many occasions and it can be crucial in several learning scenarios, including the low-labeling-rate scenario in semi-supervised learning and the no-labeling scenario in self-supervised learning.
Consider, for example, a company that offers news recommendations to its subscribed users. Several departments may be maintaining a separate user graph in their own compute infrastructure: a professional network where users are connected through occupational ties; a personal network where users are connected through personal life interactions; a follower network where a user is a follower of another on social media, etc. Further, the user data in each graph may contain different features (e.g., occupation related, life related, and interest related, respectively). To offer personal recommendations, the company sets up a server that communicates with each client (each department's computer), to train a model that predicts multiple labels for each user without revealing each client's raw local data. See Figure 1 for an illustration.
One of the most effective machine learning models for such a prediction task is graph neural networks (GNNs) [3, 4, 5, 6, 7]. This model performs neighborhood aggregation in every feature transformation layer,
such that the prediction of a graph node is based on not only the information of this node but also that of its neighbors.
VFL on graph-structured data is not as well studied as that on other data, in part because of the challenges incurred by an enormous amount of communication. The communication overhead comes not only from the aggregation of the partial features/representations of a datum, but also from the neighborhood aggregation unique to GNNs. That is, communication occurs in each layer of the neural network, so that the latest representation of a neighboring node can be used to update the representation of the center node. One solution to reduce communication is that each client uses a local GNN to extract node representations from its own graph and the server aggregates these representations to make predictions [8]. The drawback of this method is that the partial features of a node outside one client's neighborhood are not used, even if this node appears in another client's neighborhood. Another solution is to simulate centralized training: intermediate representations of each node are aggregated by the server, from where neighborhood aggregation is performed [9]. This method suffers the communication overhead incurred in each layer computation.
In this work, we propose GLASU for communication-efficient VFL on graph data. The GNN model is split across the clients and the server, such that the clients can use a majority of existing GNNs as the backbone, while the server contains no model parameters. The server only aggregates and disseminates processed data (e.g., node embeddings) with the clients. The communication frequency between the clients and the server is mitigated through _lazy aggregation and stale updates_ (hence the name of the method). For an \(L\)-layer GNN, GLASU communicates partial node representations only in \(K\) layers and in every other \(Q\) iterations, enjoying the reduction of communication by a factor of \(QL/K\). GLASU can be considered as a framework that encompasses several well-known models and algorithms as special cases, including the work of [2] when the graphs are absent, the work of [8] when all aggregations but the final one are skipped (\(K=1\)), the work of [9] when no aggregations are skipped (\(K=L\)), and centralized training when only a single client exists.
With the enjoyable reduction in communication, another difficulty is the convergence analysis, which admits two challenges: the biased gradient caused by neighborhood sampling in training GNNs and the correlated updates due to the use of stale node representations. We conduct an analysis based on the error decomposition of the gradient, showing that the training admits a convergence rate of \(\mathcal{O}((TQ)^{-1})\), where \(T\) is the number of training rounds, each of which contains \(Q\) iterations.
We summarize the main contributions of this work below:
1. Model design: We propose a flexible, federated GNN architecture that is compatible with a majority of existing GNN models.
2. Algorithm design: We propose the communication-efficient GLASU algorithm to train the model. Therein, lazy aggregation saves communication for each joint inference round, through skipping some aggregation layers in the GNN; while stale updates further save communication by allowing the clients to use stale global information for multiple local model updates.
3. Theoretical analysis: We provide theoretical convergence analysis for GLASU by addressing the challenges of
Figure 1: Data isolation of vertically distributed graph-structured data over three clients.
biased stochastic gradient estimation caused by neighborhood sampling and correlated update steps caused by using stale global information. To the best of our knowledge, this is the first convergence analysis for federated learning with graph data.
4. Numerical results: We conduct extensive experiments on seven datasets, together with ablation studies, to demonstrate that GLASU can achieve a comparable performance as the centralized model on multiple datasets and multiple GNN backbones, and that GLASU effectively saves communication and reduces training time.
## 2 Problem, Background, and Related Works
### Problem Setup
Consider \(M\) clients, indexed by \(m=1,\ldots,M\), each of which holds a part of a graph with the node feature matrix \(\mathbf{X}\in\mathbb{R}^{N\times d}\) and the edge set \(\mathcal{E}\). Here, \(N\) is the number of nodes in the graph and \(d\) is the feature dimension. We assume that each client has the same node set and the same set of training labels, \(\mathbf{y}\), but a different edge set \(\mathcal{E}_{m}\) and a non-overlapping node feature matrix \(\mathbf{X}_{m}\in\mathbb{R}^{N\times d_{m}}\), such that \(\mathcal{E}=\bigcup_{m=1}^{M}\mathcal{E}_{m}\), \(\mathbf{X}=[\mathbf{X}_{1},\ldots,\mathbf{X}_{M}]\), and \(d=\sum_{m=1}^{M}d_{m}\). We denote the client dataset as \(\mathcal{D}_{m}=\{\mathbf{X}_{m},\mathcal{E}_{m},\mathbf{y}\}\) and the full dataset as \(\mathcal{D}=\{\mathbf{X},\mathcal{E},\mathbf{y}\}\). The task is for the clients to collaboratively infer the labels of nodes in the test set.
### Graph Convolutional Network
The graph convolution network (GCN) [3] is a typical example of the family of GNNs. Inside GCN, a graph convolution layer reads
\[\mathbf{H}[l+1]=\sigma\Big{(}\mathbf{A}(\mathcal{E})\cdot\mathbf{H}[l]\cdot \mathbf{W}[l]\Big{)}, \tag{1}\]
where \(\sigma(\cdot)\) denotes the point-wise nonlinear activation function, \(\mathbf{A}(\mathcal{E})\in\mathbb{R}^{N\times N}\) denotes the adjacency matrix defined by the edge set \(\mathcal{E}\) with proper normalization, \(\mathbf{H}[l]\in\mathbb{R}^{N\times d[l]}\) denotes the node representation matrix at layer \(l\), and \(\mathbf{W}[l]\in\mathbb{R}^{d[l]\times d[l+1]}\) denotes the weight matrix at the same layer. The initial node representation matrix \(\mathbf{H}[0]=\mathbf{X}\). The classifier is denoted as \(\hat{\mathbf{y}}=f(\mathbf{H}[L],\mathbf{W}[L])\) with weight matrix \(\mathbf{W}[L]\) and the loss function is denoted as \(\ell(\mathbf{y},\hat{\mathbf{y}})\). Therefore, the overall model parameter is \(\mathbf{W}=\{\mathbf{W}[0],\ldots,\mathbf{W}[L-1],\mathbf{W}[L]\}\).
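For concreteness, the following is a minimal NumPy sketch of the graph convolution layer in Eq. (1); the self-loops with symmetric adjacency normalization and the ReLU choice for \(\sigma\) are illustrative assumptions, not choices prescribed by the text.

```
import numpy as np

def gcn_layer(A, H, W):
    """One graph convolution layer: H[l+1] = sigma(A_hat @ H[l] @ W[l]).

    A: (N, N) binary adjacency, H: (N, d_l) node representations,
    W: (d_l, d_{l+1}) weights.  Uses self-loops with symmetric
    normalization and ReLU for sigma (illustrative assumptions).
    """
    N = A.shape[0]
    A_tilde = A + np.eye(N)                    # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
    A_hat = A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_hat @ H @ W, 0.0)      # sigma = ReLU

# toy example: a 4-node path graph, 3 input features, 2 output features
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
H1 = gcn_layer(A, rng.standard_normal((4, 3)), rng.standard_normal((3, 2)))
```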
Mini-batch training of GCN (and GNNs in general) faces a scalability challenge, because computing one or a few rows of \(\mathbf{H}[L]\) (i.e., the representations of a mini-batch) requires more and more rows of \(\mathbf{H}[L-1]\), \(\mathbf{H}[L-2]\),...recursively, in light of the multiplication with \(\mathbf{A}(\mathcal{E})\) in (1). This is known as the _explosive neighborhood problem_ unique to graph-structured data. Several sampling strategies were proposed in the past to mitigate the explosion; in this work, we adopt the layer-wise sampling proposed by FastGCN [5]. Starting from the output layer \(L\), which is associated with a mini-batch of training nodes, \(\mathcal{S}[L]\), we iterate over the layers backward such that at layer \(l\), we sample a subset of neighbors for \(\mathcal{S}[l+1]\), namely \(\mathcal{S}[l]\). In doing so, at each layer, we form a bipartite graph with edge set \(\mathcal{E}[l]=\{(i,j)|i\in\mathcal{S}[l+1],j\in\mathcal{S}[l]\}\). Then, each graph convolution layer becomes
\[\mathbf{H}[l+1][\mathcal{S}[l+1]]=\sigma\Big{(}\mathbf{A}(\mathcal{E}[l]) \cdot\mathbf{H}[l][\mathcal{S}[l]]\cdot\mathbf{W}[l]\Big{)}, \tag{2}\]
where \(\mathbf{A}(\mathcal{E}[l])\in\mathbb{R}^{|\mathcal{S}[l+1]|\times|\mathcal{S} [l]|}\) is a properly scaled submatrix of \(\mathbf{A}(\mathcal{E})\) and \(\mathbf{H}[l][\mathcal{S}[l]]\) denotes the rows of \(\mathbf{H}[l]\) corresponding to \(\mathcal{S}[l]\).
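A rough sketch of the backward, layer-wise neighborhood sampling described above is given below; the fanout parameter, the uniform sampling, and the omission of the proper rescaling of \(\mathbf{A}(\mathcal{E}[l])\) are simplifying assumptions.

```
import numpy as np

def layerwise_sample(A, batch, num_layers, fanout, rng):
    """Backward layer-wise sampling in the spirit of FastGCN.

    Returns node sets S[0], ..., S[L] and, for each layer l, the
    bipartite block A[S[l+1], S[l]] used in Eq. (2) (rescaling omitted).
    `fanout` (max nodes kept per layer) is an illustrative assumption.
    """
    S = [None] * (num_layers + 1)
    S[num_layers] = np.asarray(batch)
    blocks = [None] * num_layers
    for l in reversed(range(num_layers)):
        nbrs = np.nonzero(A[S[l + 1]].sum(axis=0))[0]   # neighbors of S[l+1]
        cand = np.union1d(nbrs, S[l + 1])               # keep the centers too
        k = min(fanout, cand.size)
        S[l] = rng.choice(cand, size=k, replace=False)  # uniform sampling
        blocks[l] = A[np.ix_(S[l + 1], S[l])]           # |S[l+1]| x |S[l]| slice
    return S, blocks

rng = np.random.default_rng(0)
A = (rng.random((20, 20)) < 0.2).astype(float)
A = np.maximum(A, A.T)                                  # symmetrize the toy graph
S, blocks = layerwise_sample(A, batch=[0, 1, 2], num_layers=2, fanout=6, rng=rng)
```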
### Related Works
**Vertical federated learning** is a learning paradigm where the features of the data are distributed across clients, who collaborate to train a model that incorporates all features [2, 1, 10, 11, 12, 13, 14]. Thus, the global model is split among clients, and the key challenge is the heavy communication cost of exchanging partial sample information when computing the loss and the gradient for each sample. Most works consider simple models (e.g., linear) because complex models incur multiple rounds of communication for prediction.
**Federated learning with graphs** includes four scenarios. The _graph-level_ scenario is _horizontal_, where each client possesses a collection of graphs and all clients collaborate to train a unified model [15, 16, 17, 18]. The task is to predict graph properties (such as molecular properties).
The _subgraph-level_ scenario could be either _vertical_ or _horizontal_. In the vertical scenario, each client holds a part of the node features, a part of the whole model, and additionally a subgraph of the global graph [8, 9]. The clients aim to collaboratively train a global model (combined from those of each client) to predict node properties (such as the category of a paper in a citation network). Our work addresses this scenario.
The _subgraph-level, horizontal_ scenario, on the other hand, considers training a GNN for node property prediction in a _distributed_ manner: a graph is partitioned and each client holds one partition [19, 20, 21, 22]. A challenge to address is the aggregation of information along edges crossing different clients. This scenario differs from the vertical scenario in that features are not partitioned among clients and the graph partitions do not overlap.
The fourth scenario is _node-level_: the clients are connected by a graph and thus each of them is treated as a node. In other words, the clients, rather than the data, are graph-structured. It is akin to _decentralized learning_, where clients communicate to each other via the graph to train a unified model [23, 24, 25, 26].
Due to the space limitation, please see Appendix A for in-depth discussions of the related works.
## 3 Proposed Approach
In this section, we present the proposed model and the training algorithm GLASU for federated learning on vertically distributed graph data. The neighborhood aggregation in GNNs poses communication challenges distinct from conventional VFL. To mitigate this challenge, we propose lazy aggregation and stale updates to effectively reduce the communication between the clients and the server, while maintaining comparable prediction performance as centralized models. For notational simplicity, we present the approach by using the full-graph notation (1) but note that the implementation involves neighborhood sampling, where a more precise notation should follow (2), and that one can easily change the backbone from GCN to other GNNs.
### GNN Model Splitting
We split the GNN model among the clients and the server, approximating a centralized model. Specifically, each GNN layer contains two sub-layers: the client GNN sub-layer and the server aggregation sub-layer. At the \(l\)-th layer, each client computes the local feature matrix
\[\mathbf{H}_{m}^{+}[l]=\sigma\Big{(}\mathbf{A}(\mathcal{E}_{m})\cdot\mathbf{H} _{m}[l]\cdot\mathbf{W}_{m}[l]\Big{)}\]
with the local weight matrix \(\mathbf{W}_{m}[l]\) and the local graph \(\mathcal{E}_{m}\), where we use the superscript \({}^{+}\) to denote local representations before aggregation. Then, the server aggregates the clients' representations and outputs \(\mathbf{H}[l+1]\) as
\[\mathbf{H}[l+1]=\text{Agg}(\mathbf{H}_{1}^{+}[l],\ldots,\mathbf{H}_{M}^{+}[l]),\]
where \(\text{Agg}(\cdot)\) is an aggregation function. In this paper, we only consider parameter-free aggregations, including averaging and concatenation. The server broadcasts the aggregated \(\mathbf{H}[l+1]\) to the clients so that computation proceeds to the next layer. In the final layer, each client computes a prediction. This layer is the same among clients because they receive the same \(\mathbf{H}[L]\).
Figure 2: Illustration of the split model on \(M=3\) clients with lazy aggregation. In the model, the second server aggregation layer is skipped and the graph size used by each layer gradually decreases, due to neighborhood aggregation (inverse of neighborhood sampling).
The two aggregation operations of our choice render a rather simple implementation of the server. They bring in two advantages: parameter-free and memory-less. Since the operations do not contain any learnable parameters, the server does not need to perform gradient computations. Moreover, in the backward pass, these operations do not require data from the forward pass to back-propagate the gradients (memory-less). Specifically, for averaging, the server back-propagates \(\frac{1}{M}\nabla_{\mathbf{H}[l+1]}\mathcal{L}\) to each client, where \(\mathcal{L}\) denotes the loss; while for concatenation, the server back-propagates the corresponding block of \(\nabla_{\mathbf{H}[l+1]}\mathcal{L}\).
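The following sketch illustrates one split GNN layer with server-side averaging and the corresponding memory-less backward rule described above; the client sub-layer, the helper names, and the random toy data are illustrative assumptions, not the paper's implementation.

```
import numpy as np

def client_sublayer(A_m, H_m, W_m):
    """Client m's local sub-layer: H_m^+ = sigma(A_m @ H_m @ W_m) (ReLU assumed)."""
    return np.maximum(A_m @ H_m @ W_m, 0.0)

def server_average(H_plus_list):
    """Parameter-free server aggregation by averaging, broadcast to all clients."""
    return np.mean(H_plus_list, axis=0)

def server_backward_average(grad_H_next, num_clients):
    """Memory-less backward rule for averaging: each client gets (1/M) of the gradient."""
    return [grad_H_next / num_clients for _ in range(num_clients)]

# toy forward pass for M = 3 clients sharing the same node set
rng = np.random.default_rng(1)
M, N, d_in, d_out = 3, 10, 4, 4
A = [(rng.random((N, N)) < 0.3).astype(float) for _ in range(M)]
H = [rng.random((N, d_in)) for _ in range(M)]
W = [rng.random((d_in, d_out)) for _ in range(M)]
H_plus = [client_sublayer(A[m], H[m], W[m]) for m in range(M)]
H_next = server_average(H_plus)                         # H[l+1], same for all clients
grads = server_backward_average(np.ones_like(H_next), M)
```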
We illustrate in Figure 2 the split of each GNN layer among the clients and the server. Note the difference of our approach from existing approaches. Our model splitting resembles federated split learning (SplitFed) [27]; but in SplitFed, each client can collaborate with the server to perform inference or model updates without accessing information from other clients, whereas in our case, all clients collectively perform the job. Our approach also differs from conventional VFL that splits the local feature processing and the final classifier among the clients and the server respectively, such that each model update requires a single U-shape communication [1]. In our case, due to the graph structure, each GNN layer contains one client-server interaction and the number of interactions is equal to the number of GNN layers (we will relax this in the following subsection).
### Lazy Aggregation
The development in the preceding subsection approximates a centralized model, but it is not communication friendly because each layer requires one round of client-server communication. We propose two communication-saving strategies in this subsection and the next. We first consider _lazy aggregation_, which skips aggregation in certain layers.
Instead of performing server aggregation at each layer, we specify a subset of \(K\) indices, \(\mathcal{I}=\{l_{1},\ldots,l_{K}\}\subset[L]\), such that aggregation is performed only at these layers. That is, at a layer \(l\in\mathcal{I}\), the server performs aggregation and broadcasts the aggregated representations to the clients, serving as the input to the next layer:
\[\mathbf{H}_{m}[l+1]=\mathbf{H}[l+1];\]
while at a layer \(l\notin\mathcal{I}\), each client uses the local representations as the input to the next layer:
\[\mathbf{H}_{m}[l+1]=\mathbf{H}_{m}^{+}[l].\]
By doing so, the amount of communication is reduced from \(\mathcal{O}(L)\) to \(\mathcal{O}(K)\).
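A schematic forward pass with lazy aggregation might look as follows; the averaging aggregator, the ReLU activation, and the omission of node-set synchronization (discussed next) are assumptions made to keep the sketch short.

```
import numpy as np

def lazy_forward(A, H0, W, agg_layers):
    """Forward pass over L layers for M clients with lazy aggregation.

    A[m] and W[m][l] are client m's adjacency and layer-l weights;
    `agg_layers` is the index set I where the server aggregates.
    Node-set synchronization and rescaling are omitted (assumptions).
    """
    M, L = len(A), len(W[0])
    H = list(H0)
    for l in range(L):
        H_plus = [np.maximum(A[m] @ H[m] @ W[m][l], 0.0) for m in range(M)]
        if l in agg_layers:                  # server aggregation layer
            H_agg = np.mean(H_plus, axis=0)
            H = [H_agg] * M                  # broadcast aggregated representations
        else:                                # skipped layer: keep local ones
            H = H_plus
    return H

rng = np.random.default_rng(2)
M, N, d, L = 3, 8, 4, 4
A = [(rng.random((N, N)) < 0.3).astype(float) for _ in range(M)]
H0 = [rng.random((N, d)) for _ in range(M)]
W = [[rng.random((d, d)) for _ in range(L)] for _ in range(M)]
out = lazy_forward(A, H0, W, agg_layers={1, 3})   # K = 2 of L = 4 layers
```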
There are subtleties caused by neighborhood sampling, similar to those faced by FastGCN (see Section 2.2). First, it requires additional rounds of communication to synchronize the sample indices, because whenever server aggregation is performed, it must be done on the same set of sampled nodes across clients. Hence, in the additional communication rounds, the server takes the union of the clients' index sets \(\mathcal{S}_{m}[l_{k}]\) and broadcasts \(\mathcal{S}[l_{k}]=\bigcup_{m=1}^{M}\mathcal{S}_{m}[l_{k}]\) to the clients. Second, when server aggregation is skipped at a layer \(l\notin\mathcal{I}\), each client can use its own set of sampled nodes, \(\mathcal{S}_{m}[l]\), which may differ from each other. Such a procedure is more flexible than conventional VFL where sample features are generally processed synchronously. The sampling procedure is summarized in Algorithm 2 in Appendix B.1.
### Stale Updates
To further reduce communication, we consider _stale updates_, which skip aggregation in certain iterations and use stale node representations to perform model updates. The key idea is to use the same mini-batch, including the sampled neighbors at each layer, for training \(Q\) iterations. In every other \(Q\) iterations, the clients store the aggregated representations at the server aggregation layers. Then, in the subsequent iterations, every server aggregation is replaced by a local aggregation between a client's up-to-date node representations and other clients' stale node representations. By doing so, the clients and the server only need to communicate once in every \(Q\) iterations.
Specifically, let a round of training contain \(Q\) iterations and use \(t\) to index the rounds. At the beginning of each round, the clients and the server jointly decide the set of nodes used for training at each layer. Then, they perform a joint inference on the representations \(\mathbf{H}_{m}^{t,+}[l]\) at every layer \(l\in\mathcal{I}\). Each client \(m\) will store the "all but \(m\)" representation \(\mathbf{H}_{-m}^{t}[l+1]\) through extracting such information from the aggregated representations \(\mathbf{H}_{m}^{t}[l+1]\):
\[\mathbf{H}_{-m}^{t}[l+1]=\text{Extract}(\mathbf{H}_{m}^{t}[l+1],\mathbf{H}_{m }^{t,+}[l]).\]
For example, when the server aggregation is averaging, the extraction is
\[\text{Extract}(\mathbf{H}_{m}^{t}[l+1],\mathbf{H}_{m}^{t,+}[l])=\mathbf{H}_{m}^{t}[l+1]-\frac{1}{M}\mathbf{H}_{m}^{t,+}[l]\,.\]
Afterward, the clients perform \(Q\) iterations of model updates, indexed by \(q=0,\ldots,Q-1\), on the local parameters \(\mathbf{W}_{m}^{t,q}\) in parallel, using the stored aggregated information \(\mathbf{H}_{-m}^{t}[l+1]\) to perform local computation, replacing server aggregation. The name "stale updates" comes from the fact that \(\mathbf{H}_{-m}^{t}[l+1]\) is computed by using stale model parameters \(\{\mathbf{W}_{m^{\prime}}^{t,0}\}_{m^{\prime}\neq m}\) at all iterations \(q\neq 0\). The extraction and the local updates are summarized in Algorithm 3 and Algorithm 4, respectively, in Appendix B.1.
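For the averaging aggregator, the extraction and the local replacement of server aggregation during the stale iterations can be sketched as below; the function names and toy shapes are illustrative, not part of the paper's code.

```
import numpy as np

def extract_stale(H_agg, H_plus_m, M):
    """Averaging case of Extract: H_{-m} = H_agg - (1/M) * H_m^+ ."""
    return H_agg - H_plus_m / M

def local_aggregate(H_stale_minus_m, H_plus_m_fresh, M):
    """During stale iterations, combine fresh local features with stored stale ones."""
    return H_stale_minus_m + H_plus_m_fresh / M

rng = np.random.default_rng(3)
M, N, d = 3, 6, 4
H_plus = [rng.random((N, d)) for _ in range(M)]     # client outputs at q = 0
H_agg = np.mean(H_plus, axis=0)                     # server aggregation at q = 0
H_minus_0 = extract_stale(H_agg, H_plus[0], M)      # client 0 stores this once
# a later iteration q > 0: client 0 recomputes only its own representation
H_plus_0_fresh = rng.random((N, d))
H_local = local_aggregate(H_minus_0, H_plus_0_fresh, M)   # no communication needed
```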
```
1:for\(t=0,\ldots,T\)do
2:Server/Client (Algorithm 2): Sample \(\{\mathcal{S}_{m}^{t}[l]\}_{l=0}^{L}\).
3:Client:\(\mathbf{W}_{m}^{t,0}=\begin{cases}\mathbf{W}_{m}^{t-1,Q},&t>0\\ \mathbf{W}_{m}^{0},&t=0\end{cases}.\)
4:Server/Client (Algorithm 3): \(\{\mathbf{H}_{-m}^{t}[l+1]\}_{l\in\mathcal{I}}=\)JointInference\((\mathbf{W}_{m}^{t,0},\mathcal{D}_{m},\{\mathcal{S}_{m}^{t}[l]\}_{l=0}^{L})\).
5:for\(q=0,\ldots,Q-1\)do
6:Client (Algorithm 4): \(\mathbf{W}_{m}^{t,q+1}=\)LocalUpdate\((\mathbf{W}_{m}^{t,q},\mathcal{D}_{m},\{\mathcal{S}_{m}^{t}[l]\}_{l=0}^{L},\{ \mathbf{H}_{-m}^{t}[l+1]\}_{l\in\mathcal{I}})\).
7:endfor
8:endfor
9:Output:\(\{\mathbf{W}_{m}^{T,Q}\}_{m=1}^{M}\)
```
**Algorithm 1** Training Procedure. All referenced algorithms are detailed in Appendix B.1.
### Summary
The overall training procedure is summarized in Algorithm 1. For communication savings, lazy aggregation brings in a factor of \(L/K\) and stale updates bring in a factor of \(Q\). Therefore, the overall saving factor is \(QL/K\).
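As a concrete (assumed) illustration using the configuration adopted later in the experiments, \(L=4\), \(K=2\), and \(Q=4\) give

\[\frac{QL}{K}=\frac{4\times 4}{2}=8\,,\]

i.e., GLASU performs \(K=2\) client-server exchanges per round of \(Q=4\) iterations, whereas a split model that aggregates at every layer of every iteration would perform \(QL=16\).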
Note that the algorithm assumes that all clients have the training labels. If the labels can be held by only one client (say, A), a slight modification by broadcasting the gradient with respect to the final-layer output possessed by A, suffices. See Appendix B.2 for details.
### Special Cases
It is interesting to note that GLASU encompasses several well-known methods as special cases.
**Conventional VFL.** VFL algorithms can be viewed as a special case of GLASU, where \(\mathbf{A}(\mathcal{E}_{m})=\mathbf{I}\) for all \(m\). In this case, no neighborhood sampling is needed and GLASU reduces to [2].
**Existing VFL algorithms for graphs.** The model of [8] is a special case of GLASU, with \(K=1\); i.e., no communication is performed except the final prediction layer. In this case, the clients omit the connections absent in the self subgraph but present in other clients' subgraphs. The model of [9] is also a special case of GLASU, with \(K=L\). This case requires communication at all layers and is less efficient.
**Centralized GNNs.** When there is a single client (\(M=1\)), our setting is the same as centralized GNN training. Specifically, by letting \(K=L\) and properly choosing the server aggregation function \(\text{Agg}(\cdot)\), our split model can achieve the same performance as a centralized GNN model. Of course, using lazy aggregation (\(K\neq L\)) and choosing the server aggregation function as concatenation or averaging will make the split model different from a centralized GNN.
### Privacy
GLASU enables privacy protection because it is compatible with existing privacy-preserving approaches.
**Secure Aggregation (SA)**[28, 29] is a form of secure multi-party computation approach used for aggregating information from a group of clients, without revealing the information of any individual. This can be achieved by homomorphic encryption [30, 29]. In our case, when the server aggregation is averaging, homomorphic encryption can be directly applied.
**Differential Privacy (DP)**[31] is a probabilistic protection approach. By injecting stochasticity into the local outputs, this approach guarantees that an attacker cannot distinguish the sample from the dataset up
to a certain probability. DP can be applied either solely or in combination with SA to our algorithm in the server-client communication, to offer privacy protection for the client data.
## 4 Convergence Analysis
In this section, we analyze the convergence behavior of GLASU under lazy aggregation and stale updates. To start the analysis, denote by \(\mathcal{S}^{t}=\left\{\mathcal{S}^{t}_{m}[l]\right\}_{l=1,m=1}^{L,M}\) the samples used at round \(t\) (which include all sampled nodes at different layers and clients); by \(S=\left|\mathcal{S}^{t}_{m}[L]\right|\) the batch size; and by \(\mathcal{L}(\mathbf{W};\mathcal{S})\) the training objective, which is evaluated at the overall set of model parameters across clients, \(\mathbf{W}=\left\{\mathbf{W}_{m}\right\}_{m=1}^{M}\), and a batch of samples, \(\mathcal{S}\).
A few assumptions are needed (see Appendix C.1 for formal statements). **A1**: The loss function \(\ell\) is \(G_{\ell}\)-smooth with \(L_{\ell}\)-Lipschitz gradient; and a client's prediction function \(f_{m}\) is \(G_{f}\)-smooth with \(L_{f}\)-Lipschitz gradient. **A2**: The training objective \(\mathcal{L}(\mathbf{W};\mathcal{D})\) is bounded below by a finite constant \(\mathcal{L}^{\star}\). **A3**: The samples \(\mathcal{S}^{t}\) are uniformly sampled from the neighbor set in each layer.
**Theorem 1**.: _Under assumptions A1-A3, by running Algorithm 1 with constant step size \(\eta\leq C_{0}^{-1}\cdot(1+2Q^{2}M)^{-1}\), with probability at least \(p=1-\delta\), the averaged squared gradient norm is bounded by:_
\[\frac{1}{TQ}\sum_{t=0}^{T-1}\sum_{q=0}^{Q-1}\mathbb{E}\left\|\nabla\mathcal{L }(\mathbf{W}^{t,q};\mathcal{D})\right\|^{2}\leq\frac{2\Delta_{\mathcal{L}}}{ \eta TQ}+\frac{28\eta M\cdot\left(C_{0}+\sqrt{M+1}Q\right)}{3}\sigma,\]
_where \(\Delta_{\mathcal{L}}=\mathcal{L}(\mathbf{W}^{0,0})-\mathcal{L}^{\star}\), \(C_{0}=G_{\ell}L_{f}+L_{\ell}G_{f}^{2}\), and \(\sigma>0\) is a function of \(\log(TQ/\delta),L_{f},L_{g},G_{f}\) and \(G_{g}\)._
_Remark 1_.: There are two key challenges in the analysis. (1) Owing to neighborhood sampling, the stochastic gradient is biased (i.e., \(\mathbb{E}_{\mathcal{S}}\,\nabla\mathcal{L}(\mathbf{W};\mathcal{S})\neq\nabla \mathcal{L}(\mathbf{W};\mathcal{D})\)). (2) The stale updates in one communication round are correlated, as they use the same mini-batch and samples. Hence, the general unbiasedness and independence assumptions on the stochastic gradients in the analysis of SGD-type of algorithms do not apply. We borrow the technique by [32] to bound the error of the stochastic gradient through the bias-variance decomposition and extend the analysis by [2] for VFL with correlated updates to establish our proof. For details, see Appendix C.
_Remark 2_.: To better expose the convergence rate, assuming that \(Q\) is upper bounded by \(\frac{C_{0}}{\sqrt{M+1}}\), one may set \(\eta=\sqrt{\frac{3\Delta_{\mathcal{L}}}{28MC_{0}\sigma TQ}}\) to balance the two terms of the bound in Theorem 1. Ignoring the logarithmic factor \(\log(TQ/\delta)\) in \(\sigma\), the resulting bound states that the squared gradient norm decreases as \(\mathcal{O}((TQ)^{-1})\). Note that this bound holds only when \(T\) is sufficiently large, because the choice of \(\eta\) must satisfy the condition of Theorem 1.
_Remark 3_.: Based on the preceding remark, we see that to achieve \(\epsilon\)-stationarity, the number of model updates is \(QT=\mathcal{O}(\frac{1}{\epsilon^{2}})\). That is, as long as \(Q\) obeys the upper bound, running more local updates (\(Q\)) reduces the amount of communications (\(T\)). To the best of our knowledge, this is the first result for VFL on graph data.
_Remark 4_.: While we have analyzed the impact of stale updates (\(Q\)), lazy aggregation (\(K\)) does not play a role in convergence, because it does not affect model updates. Instead, it affects model accuracy in a manner similar to how changing a neural network impacts the prediction accuracy.
_Remark 5_.: If we consider the impact of the number of clients, the factor \(M\) in the numerator of the bound indicates a slowdown when more clients participate training. Similar results are seen in FedBCD [2], but therein one can use a large batch size \(S\) to counter the slowdown. For graphs, however, \(S\) does not appear in the bound because of the biased gradient estimation. Nevertheless, we note that unlike other federated scenarios, in VFL, \(M\) is very small because it is limited by, e.g., the feature length.
## 5 Numerical Experiments
In this section, we conduct numerical experiments on a variety of datasets and demonstrate the effectiveness of GLASU in training with distributed graph data. We first compare its performance with related methods, including those making different assumptions on the data distribution and communication pattern. Then, we examine the communication saving owing to the use of lazy aggregation and stale updates. We further showcase the flexibility of GLASU through demonstrations with different GNN backbones and varying numbers of clients. The experiments are conducted on a distributed cluster with three Tesla V100 GPUs communicating over Ethernet.
### Datasets
We use seven datasets (in three groups) with varying sizes and data distributions: the Planetoid collection [33], the HeriGraph collection [34], and the Reddit dataset [4]. Each dataset in the HeriGraph collection (Suzhou, Venice, and Amsterdam) contains data readily distributed: three subgraphs and more than three feature blocks for each node. Hence, we use three clients, each of which handles one subgraph and one feature block. For the other four datasets (Cora, PubMed, and CiteSeer in the Planetoid collection; and Reddit), each contains one single graph and thus we manually construct subgraphs through randomly sampling the edges and splitting the input features into non-overlapping blocks, so that each client handles one subgraph and one feature block. The dataset statistics are summarized in Table 1 and more details are given in Appendix D.1.
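A rough sketch of the manual construction used for the single-graph datasets is given below: each client keeps a random subset of the edges and a non-overlapping block of the node features. The sampling probability and the contiguous feature blocks are assumptions of this sketch; the paper does not specify those details here.

```
import numpy as np

def vertical_split(edges, X, M, keep_prob, rng):
    """Split one graph into M clients: sampled edge sets plus feature blocks.

    edges: (E, 2) integer array of edges, X: (N, d) node features.
    Each client keeps each edge with probability `keep_prob` and a
    contiguous, non-overlapping block of the features (both assumptions).
    """
    feat_blocks = np.array_split(np.arange(X.shape[1]), M)
    clients = []
    for m in range(M):
        mask = rng.random(len(edges)) < keep_prob
        clients.append({"edges": edges[mask], "X": X[:, feat_blocks[m]]})
    return clients

rng = np.random.default_rng(4)
N, d, E = 100, 12, 400
edges = rng.integers(0, N, size=(E, 2))
X = rng.random((N, d))
clients = vertical_split(edges, X, M=3, keep_prob=0.5, rng=rng)
```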
### Accuracy
We compare GLASU with three training methods: (a) centralized training, where there is only a single client (\(M=1\)), which holds the whole dataset without any data distribution and communication; (b) standalone training [8], where each client trains a model with its local data only and they do not communicate; (c) simulated centralized training [9], where each client possesses the full graph but only the partial features, so that it simulates centralized training through server aggregation in each GNN layer. Methods (b) and (c) are typical VFL baselines; they are also special cases of our method (see Section 3.5). Except for centralized training, the number of clients \(M=3\). The number of training rounds, \(T\), and the learning rate \(\eta\) are optimized through grid search. See Appendix D.2 for details.
We use GCNII [7] as the backbone GNN. GCNII improves over GCN through including two skip connections, one with the current layer input and the other with the initial layer input. We set the number of layers \(L=4\) and the mini-batch size \(S=16\). For neighborhood sampling, the sample size is three neighbors per node on average. We set \(K=2\); i.e., lazy aggregation is performed in the middle and the last layer.
Table 2 reports the average classification accuracy of GLASU and the compared training methods, repeated five times. As expected, standalone training produces the worst results, because each client uses only local information and misses edges and node features present in other clients. The centralized training and its simulated version lead to similar performance, also as expected, because server aggregation (or its equivalent in centralized training) on each GNN layer takes effect. Our method GLASU, which skips half of the aggregations, yields
\begin{table}
\begin{tabular}{c|r r r r} \hline \hline Dataset & \# Nodes & \# Edges & \# Feat. & \# Class \\ \hline Cora & \(2,708\) & \(10,556\) & \(1,433\) & 7 \\ PubMed & \(19,717\) & \(88,648\) & \(500\) & 3 \\ CiteSeer & \(3,327\) & \(9,104\) & \(3,703\) & 6 \\ \hline Suzhou & \(3,137\) & \(916,496\) & \(979\) & 9 \\ Venice & \(2,951\) & \(534,513\) & \(979\) & 9 \\ Amsterdam & \(3,727\) & \(1,271,171\) & \(979\) & 9 \\ \hline Reddit & \(232,965\) & \(114,615,892\) & \(602\) & 41 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Datasets. Each of the HeriGraph datasets (Suzhou, Venice, Amsterdam) contains three naturally formed subgraphs. For other datasets, each contains one single graph and each client holds a sampled subgraph of it.
prediction accuracy rather comparable with these two methods. Stale updates (\(Q=4\)) generally perform slightly worse than no stale updates (\(Q=1\)), but occasionally better (see PubMed and Amsterdam). The gain from lazy aggregation and stale updates lies in the runtime, as will be demonstrated next.
### Communication Saving
To further investigate how the two proposed techniques affect the model performance and save the communication, we conduct a study on (a) the lazy aggregation parameter \(K\) and (b) the stale update parameter \(Q\).
**Lazy aggregation:** We use a 4-layer GCNII as the backbone and set \(K=1,2,4\). The aggregation layers are "uniform" across the model layers. That is, when \(K=1\), server aggregation is performed on the last layer; when \(K=2\), on the middle layer and the last layer; and when \(K=4\), on all layers. The test accuracy and runtime are listed in Table 3. We observe that the runtime decreases drastically when using fewer and fewer aggregation layers: from \(K=4\) to \(K=1\), the reduction is \(37.5\%\) for PubMed and \(58.2\%\) for Amsterdam. The accuracy is comparable in all cases.
**Stale updates:** We experiment with a few choices of \(Q\): 2, 4, 8, and 16. We report the time to reach the same test accuracy threshold in Table 4. We see that stale updates help speed up training by using fewer communication rounds, corroborating Remark 3 of the theory in Section 4. This trend occurs on the Amsterdam dataset even when taking \(Q\) as large as 16. The trend is also noticeable on PubMed, but at some point (\(Q=8\)) it is reverted, likely because it gets harder and harder to reach the accuracy threshold. We speculate that the target \(82\%\) can never be achieved at \(Q=16\). This observation is consistent with Remark 2 of the theory, requiring \(Q\) to be upper bounded to claim \(\mathcal{O}((TQ)^{-1})\) convergence.
### Flexibility
To demonstrate the flexibility of GLASU, we conduct experiments to show the performance under (a) different GNN backbones and (b) different numbers of clients, \(M\).
\begin{table}
\begin{tabular}{c|c c c|c c} \hline \hline Dataset & Cent. & StAl. & Sim. & GLASU-1 & GLASU-4 \\ \hline Cora & \(80.9\pm 0.6\) & \(74.6\pm 0.5\) & \(80.1\pm 1.2\) & \(81.0\pm 1.3\) & \(80.3\pm 1.2\) \\ PubMed & \(84.9\pm 0.6\) & \(77.2\pm 0.5\) & \(82.7\pm 1.2\) & \(82.3\pm 1.6\) & \(83.8\pm 1.8\) \\ CiteSeer & \(70.2\pm 0.8\) & \(64.4\pm 0.5\) & \(70.0\pm 1.2\) & \(70.0\pm 1.7\) & \(68.8\pm 3.3\) \\ \hline Suzhou & \(94.3\pm 0.3\) & \(51.6\pm 0.9\) & \(93.5\pm 0.6\) & \(92.7\pm 1.4\) & \(90.4\pm 0.8\) \\ Venice & \(95.7\pm 0.5\) & \(33.5\pm 2.1\) & \(93.1\pm 1.3\) & \(92.2\pm 0.6\) & \(91.0\pm 1.6\) \\ Amsterdam & \(94.6\pm 0.1\) & \(59.8\pm 1.0\) & \(95.5\pm 0.8\) & \(93.1\pm 0.8\) & \(94.9\pm 0.4\) \\ \hline Reddit & \(95.6\pm 0.1\) & \(87.3\pm 0.3\) & \(95.3\pm 0.7\) & \(95.7\pm 0.6\) & \(94.7\pm 1.1\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Test accuracy (%). The compared algorithms are Centralized training (Cent.); Standalone training (StAl.); Simulated centralized training (Sim.); GLASU with no stale updates, i.e., \(Q=1\) (GLASU-1); and GLASU with stale updates \(Q=4\) (GLASU-4).
\begin{table}
\begin{tabular}{c|c c c} \hline \hline \# Layer & \(K=4\) & \(K=2\) & \(K=1\) \\ \hline Accuracy & \(82.5\pm 1.0\) & \(83.8\pm 1.8\) & \(82.2\pm 0.7\) \\ Runtime & \(130\pm 12\) & \(96.6\pm 9.9\) & \(81.3\pm 6.5\) \\ Saving & \(-\) & \(25.7\) & \(37.5\) \\ \hline \hline \# Layer & \(K=4\) & \(K=2\) & \(K=1\) \\ \hline Accuracy & \(93.6\pm 0.7\) & \(94.9\pm 0.4\) & \(92.0\pm 1.7\) \\ Runtime & \(913\pm 76\) & \(544\pm 44\) & \(382\pm 35\) \\ Saving & \(-\) & \(40.4\) & \(58.2\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Test accuracy (%), runtime (seconds), and saving in runtime (%) under different numbers of lazy aggregation layers (\(K=4,2,1\)). The saving is with respect to \(K=4\). Top: PubMed; bottom: Amsterdam.
**Backbone model:** We compare three backbones: GCN, GAT [6], and GCNII, which are representative GNNs. The learning rate for each backbone is tuned to its best performance. The test accuracy over training rounds is plotted in Figure 3. We see that GLASU can take different GNNs as the backbone and reach similar prediction performance, even though the convergence curves are not all alike. For example, the convergence histories of GCN and GCNII are quite close, whereas that of GAT is noticeably less smooth.
**Number of clients:** We set \(M=3,5,7\) and investigate the change of performance for different training methods. Hyperparameters are tuned to achieve the optimal accuracy under a fixed number of epochs. Table 5 suggests that the performance of standalone training decreases as \(M\) increases, which is expected because each client has fewer features while server aggregation is not performed. Meanwhile, the performance of GLASU is not affected and it stays comparable with that of centralized training. We note that it is unrealistic to set \(M\) arbitrarily large, because \(M\) is limited by the feature length and also in practice, it is determined by data ownership.
## 6 Conclusion
We have presented a flexible model splitting approach for VFL with vertically distributed graph data and proposed a communication-efficient algorithm, GLASU, to train the resulting GNN. Due to the graph structure, VFL on GNNs incurs heavy communication and poses an extra challenge in the convergence analysis, as the stochastic gradients are no longer unbiased. To overcome these challenges, our approach uses lazy aggregation to skip server-client communication and stale global information to update local models, leading to significant communication reduction. Our analysis does not rely on unbiasedness assumptions for the stochastic gradients. We provide extensive experiments to show the flexibility of the model and the communication saving in training, without compromising model quality.
\begin{table}
\begin{tabular}{l|c c c c} \hline \hline \# Stale & \(Q=2\) & \(Q=4\) & \(Q=8\) & \(Q=16\) \\ \hline Accuracy & \(82.5\pm 1.6\) & \(82.0\pm 2.4\) & \(82.1\pm 0.3\) & N/A \\ Runtime & \(66.1\pm 5.0\) & \(43.8\pm 4.0\) & \(88.9\pm 7.4\) & \(>128\) \\ \hline \hline \# Stale & \(Q=2\) & \(Q=4\) & \(Q=8\) & \(Q=16\) \\ \hline Accuracy & \(89.2\pm 0.4\) & \(89.3\pm 0.7\) & \(90.7\pm 0.5\) & \(90.3\pm 1.1\) \\ Runtime & \(1323\pm 44\) & \(521\pm 44\) & \(324\pm 31\) & \(250\pm 24\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Test accuracy (%) and runtime (seconds) under different numbers of stale updates (\(Q=2,4,8,16\)) for the same accuracy threshold. Top: PubMed (threshold: 82%); bottom: Amsterdam (threshold: 89%).
Figure 3: Test accuracy under three backbone GNNs on PubMed.
|
2301.07365
|
Production of $X_b$ via $Υ(5S, 6S)$ radiative decays
|
We investigate the production of $X_b$ in the process $\Upsilon(5S,6S)\to
\gamma X_b$, where $X_b$ is assumed to be a $B {\bar B}^*$ molecular state. Two
kinds of meson loops of $B^{(*)}{\bar B}^{(*)}$ and $B_1^{\prime}{\bar
B}^{(*)}$ were considered. To explore the rescattering mechanism, we calculated
the relevant branching ratios using the effective Lagrangian based on the heavy
quark symmetry. The branching ratios for the $\Upsilon(5S\,,6S) \to \gamma X_b$
were found to be at the orders of $10^{-7} \sim 10^{-6}$. Such sizeable
branching ratios might be accessible at BelleII, which would provide important
clues to the inner structures of the exotic state $X_b$.
|
Xiao-Yun Wang, Zu-Xin Cai, Gang Li, Shi-Dong Liu, Chun-Sheng An, Ju-Jun Xie
|
2023-01-18T08:17:07Z
|
http://arxiv.org/abs/2301.07365v1
|
# Production of \(X_{b}\) via \(\Upsilon(5S,6S)\) radiative decays
###### Abstract
We investigate the production of \(X_{b}\) in the process \(\Upsilon(5S,6S)\to\gamma X_{b}\), where \(X_{b}\) is assumed to be a \(B\bar{B}^{*}\) molecular state. Two kinds of meson loops of \(B^{(*)}\bar{B}^{(*)}\) and \(B_{1}^{\prime}\bar{B}^{(*)}\) were considered. To explore the rescattering mechanism, we calculated the relevant branching ratios using the effective Lagrangian based on the heavy quark symmetry. The branching ratios for the \(\Upsilon(5S\,,6S)\to\gamma X_{b}\) were found to be at the orders of \(10^{-7}\sim 10^{-6}\). Such sizeable branching ratios might be accessible at BelleII, which would provide important clues to the inner structures of the exotic state \(X_{b}\).
pacs: 14.40.Pq, 13.20.Gd, 12.39.Fe
## I Introduction
In the past decades, many _XYZ_ states have been observed by experiments [1]. Some of them cannot be accommodated in the conventional quark model as \(Q\bar{Q}\) (\(Q=c\), \(b\)) and thus become excellent candidates for exotic states. In order to understand the nature of the _XYZ_ states, many studies on their productions and decays have been carried out (for recent reviews, see Refs. [2; 3; 4; 5; 6; 7; 8; 9]). In 2003, the Belle Collaboration discovered an exotic candidate \(X(3872)\) (also known as \(\chi_{c1}(3872)\)) in \(B^{+}\to K^{+}+J/\psi\pi^{+}\pi^{-}\) decay [10]. Subsequently, the \(X(3872)\) was confirmed by several other experiments [11; 12; 13; 14; 15]. Its quantum numbers were determined to be \(I^{G}(J^{PC})=0^{+}(1^{++})\)[16]. The \(X(3872)\) has two salient features: the very narrow total decay width (\(\Gamma_{X}<1.2\) MeV), when compared to the typical hadronic width, and the closeness of mass to the threshold of \(D^{0}\bar{D}^{*0}\) (\(M_{X(3872)}-M_{D^{0}}-M_{D^{*0}}=(-0.12\pm 0.24)\) MeV) [1]. These two features suggest that the \(X(3872)\) might be a \(\bar{D}D^{*}\) molecular state [17; 18].
A lot of theoretical effort has been made to understand the nature of \(X(3872)\) since its initial observation. It is natural to look for its counterpart with \(J^{PC}=1^{++}\) (denoted as \(X_{b}\) hereafter) in the bottom sector. These two states, which are related by heavy quark symmetry, should have some universal properties. The search for \(X_{b}\) could help discriminate between a compact multiquark configuration and a loosely bound hadronic molecule configuration. Since \(X_{b}\) is very heavy and its quantum numbers are \(J^{PC}=1^{++}\), a direct discovery is unlikely at the current electron-positron collision facilities, though the \(\Upsilon(5S,6S)\) radiative decays are possible at SuperKEKB [19]. In Ref. [20], a search for \(X_{b}\) in the \(\omega\Upsilon(1S)\) final states was presented, but no significant signal was observed. The production of \(X_{b}\) at the LHC and the Tevatron [21; 22] and of other exotic states at hadron colliders [23; 24; 25; 26; 27; 28] has been extensively investigated. In the bottomonium system, isospin is almost perfectly conserved, which may explain the
escape of \(X_{b}\) in the recent CMS search [29]. As a result, the radiative decays and isospin conserving decays are of high priority in searching \(X_{b}\)[30; 31; 32; 33]. In Ref. [30], we have studied the radiative decays \(X_{b}\to\gamma\Upsilon(nS)\) (\(n=1,2,3\)), with \(X_{b}\) being a candidate for the \(B\bar{B}^{*}\) molecular state, and the partial widths into \(\gamma X_{b}\) were found to be about 1 keV. In this work, we revisit the \(X_{b}\) production in \(\Upsilon(5S,6S)\to\gamma X_{b}\) using the nonrelativistic effective field theory (NREFT). As is well known, the intermediate meson loop (IML) transition is one of the important nonperturbative transition mechanisms [34; 35; 36]. Moreover, the recent studies on the productions and decays of exotic states [37; 38; 39; 40; 41; 42; 43; 44; 45; 46] lead to global agreement with the experimental data. Hence, to investigate the process \(\Upsilon(5S,6S)\to\gamma X_{b}\), we calculated the IML contributions from both the \(S\)- and \(P\)-wave intermediate bottomed mesons.
The rest of the paper is organized as follows. In Sec. II, we present the theoretical framework used in this work. Then in Sec. III the numerical results are presented, and a brief summary is given in Sec. IV.
## II Theoretical framework
### Triangle diagrams
Under the assumption that \(X_{b}\) is a \(B\bar{B}^{*}\) molecule, its production can be described by the triangle diagrams in Fig. 1. With the quantum numbers of \(1^{--}\), the initial bottomonium can couple to either two \(S\)-wave bottomed mesons in a \(P\)-wave, or one \(P\)-wave and one \(S\)-wave bottomed mesons in an \(S\)- or \(D\)-wave. The \(X_{b}\) couples to the \(B\bar{B}^{*}\) pair in an \(S\)-wave. Because the states considered here are close to the open bottomed mesons thresholds, the intermediate bottomed and antibottomed mesons are nonrelativistic. We are thus allowed to use a nonrelativistic power counting, the framework of which has been introduced to study the intermediate meson loop effects [45]. The three momentum scales as \(v\), the kinetic energy scales as \(v^{2}\), and each of the nonrelativistic propagator scales as \(v^{-2}\). The \(S\)-wave vertices are independent of the velocity, while the \(P\)-wave vertices scales as \(v\) or as the external momentum, depending on the process in question.
For the diagrams (a), (b), and (c) in Fig. 1, the vertices involving the initial bottomonium are in a \(P\)-wave. The momentum in these vertices is contracted with the final photon momentum \(q\) and thus should be counted as \(q\). The vertices involving the photon are also in a \(P\)-wave, which should be counted as \(q\). The decay amplitude scales as
\[\mathcal{A}_{A}\sim N_{A}\frac{v_{A}^{5}}{(v_{A}^{2})^{3}}\frac{q^{2}}{m_{B}^ {2}}=N_{A}\frac{E_{\gamma}^{2}}{v_{A}m_{B}^{2}}\,, \tag{1}\]
where \(E_{\gamma}\) is the external photon energy, \(N_{A}\) contains all the constant factors, and \(v_{A}\) is the average of the two velocities corresponding to the two cuts in the triangle diagram. While for the diagrams (d) and (e) in Fig. 1, all the vertices are in \(S\)-wave. Then the amplitude for the Figs. 1(d) and (e) scales as
\[\mathcal{A}_{B}\sim N_{B}\frac{v_{B}^{5}}{(v_{B}^{2})^{3}}\frac{E_{\gamma}}{m_ {B}}=N_{B}\frac{E_{\gamma}}{v_{B}m_{B}}\,. \tag{2}\]
### Effective interaction Lagrangians
To calculate the diagrams in Fig. 1, we employ the effective Lagrangians constructed in the heavy quark limit. In this limit, the \(S\)-wave heavy-light mesons form a spin multiplet \(H=(P,V)\) with \(s_{l}^{P}=1/2^{-}\), where \(P\) and \(V\) denote the pseudoscalar and vector heavy mesons, respectively, i.e., \(P(V)=(B^{(*)+},B^{(*)0},B_{s}^{(*)0})\). The \(s_{l}^{P}=1/2^{+}\) states are collected in \(S=(P_{0}^{*},P_{1}^{\prime})\) with \(P_{0}^{*}\) and \(P_{1}^{\prime}\) denoting the \(B_{0}^{*}\) and \(B_{1}^{\prime}\) states, respectively. In the two-component notation [47; 48], the spin multiplets are given by
\[H_{a} =\vec{V}_{a}\cdot\vec{\sigma}+P_{a}\,, \tag{3}\] \[S_{a} =\vec{P}_{1a}^{\prime}\cdot\vec{\sigma}+P_{0a}^{*}\,,\]
where \(\vec{\sigma}\) is the Pauli matrix, and \(a\) is the light flavor index. The fields for their charge conjugated mesons are
\[\bar{H}_{a} =-\vec{\bar{V}}_{a}\cdot\vec{\sigma}+\bar{P}_{a}\,, \tag{4}\] \[\bar{S}_{a} =-\vec{\bar{P}}_{1a}^{\prime}\cdot\vec{\sigma}+\bar{P}_{0a}^{*}\,.\]
Considering the parity, the charge conjugation, and the spin symmetry, the leading order Lagrangian for the coupling of the \(S\)-wave bottomonium fields to the bottomed and antibottomed mesons can be written as [47]
\[\mathcal{L}_{\Upsilon(5S)}=i\frac{g_{1}}{2}Tr[\bar{H}_{a}^{\dagger}\vec{ \sigma}\cdot\stackrel{{\leftrightarrow}}{{\partial}}H_{a}^{ \dagger}\Upsilon]+g_{2}Tr[\bar{H}_{a}^{\dagger}S_{a}^{\dagger}\Upsilon+\bar{S} _{a}^{\dagger}H_{a}^{\dagger}\Upsilon]+\text{H.c.} \tag{5}\]
Here \(A\stackrel{{\leftrightarrow}}{{\partial}}B=A(\partial B)-( \partial A)B\). The field for the \(S\)-wave \(\Upsilon\) and \(\eta_{b}\) is \(\Upsilon=\vec{\Upsilon}\cdot\vec{\sigma}+\eta_{b}\). \(g_{1}\) and \(g_{2}\) are the coupling constants of \(\Upsilon(5S)\) to a pair of \(1/2^{-}\) bottom mesons and a \(1/2^{-}\)-\(1/2^{+}\) pair of bottom mesons, respectively. We use \(g_{1}^{\prime}\) and \(g_{2}^{\prime}\) for the coupling constants of \(\Upsilon(6S)\). Using the experimental branching ratios and widths of \(\Upsilon(5S,6S)\)[1], we get the coupling constants \(g_{1}=0.1\) GeV\({}^{-3/2}\) and \(g_{1}^{\prime}=0.08\) GeV\({}^{-3/2}\). On the other hand, we take \(g_{2}=g_{2}^{\prime}=0.05\) GeV\({}^{-1/2}\), as used in the previous work [49].
To get the transition amplitude, we also need to know the photonic coupling to the bottomed mesons. The magnetic coupling of the photon to the \(S\)-wave bottomed mesons is described by the Lagrangian [48; 50]
\[\mathcal{L}_{HH\gamma}=\frac{e\beta}{2}Tr[H_{a}^{\dagger}H_{b}\vec{\sigma} \cdot\vec{B}Q_{ab}]+\frac{eQ^{\prime}}{2m_{Q}}Tr[H_{a}^{\dagger}\vec{\sigma} \cdot\vec{B}H_{a}]\,, \tag{6}\]
where \(Q=\text{diag}\{2/3,-1/3,-1/3\}\) is the light quark charge matrix, and \(Q^{\prime}\) is the heavy quark electric charge (in units of \(e\)). \(\beta\) is an effective coupling constant and, in this work, we take \(\beta\simeq 3.0\) GeV\({}^{-1}\), which is determined in the nonrelativistic constituent quark model and has been adopted in the study of radiative \(D^{*}\) decays [50]. In Eq. (6), the first term is the magnetic moment coupling of the light quarks, while the second one is the magnetic moment coupling of the heavy quark and hence is suppressed by \(1/m_{Q}\). The radiative transition of the \(1/2^{+}\) bottomed mesons to the \(1/2^{-}\) states may be parameterized as [51]
\[\mathcal{L}_{SH\gamma}=-\frac{ie\widetilde{\beta}}{2}Tr[H_{a}^{\dagger}S_{b} \vec{\sigma}\cdot\vec{E}Q_{ba}]\,, \tag{7}\]
where \(\widetilde{\beta}=0.42\) GeV\({}^{-1}\) is the same as used in Ref. [52].
The \(X_{b}\) is assumed to be an \(S\)-wave molecule with \(J^{PC}=1^{++}\), which is given by the superposition of \(B^{0}\bar{B}^{*0}+c.c\) and \(B^{-}\bar{B}^{*+}+c.c\) hadronic configurations:
\[|X_{b}\rangle=\frac{1}{2}[(|B^{0}\bar{B}^{*0}\rangle-|B^{*0}\bar{B}^{0}\rangle )+(|B^{+}B^{*-}\rangle-|B^{-}B^{*+}\rangle)]. \tag{8}\]
Therefore, we can parameterize the coupling of \(X_{b}\) to the bottomed mesons in terms of the following Lagrangian
\[\mathcal{L}=\frac{1}{2}X^{i\dagger}[x_{1}(B^{*0i}\bar{B}^{0}-B^{0}\bar{B}^{*0i})+ x_{2}(B^{*+i}B^{-}-B^{+}B^{*-i})]+\text{H.c.}\,, \tag{9}\]
where \(x_{i}\) denotes the coupling constant. Since the \(X_{b}\) is slightly below the \(S\)-wave \(B\bar{B}^{*}\) threshold, the effective coupling of this state is related to the probability of finding the \(B\bar{B}^{*}\) component in the physical wave function of the bound states and the binding energy, \(\epsilon_{X_{b}}=m_{B}+m_{B^{*}}-m_{X_{b}}\)[53; 54; 39]
\[x_{i}^{2}\equiv 16\pi(m_{B}+m_{B^{*}})^{2}c_{i}^{2}\sqrt{\frac{2\epsilon_{X_{b}}}{\mu}}\,, \tag{10}\]
where \(c_{i}=1/\sqrt{2}\) and \(\mu=m_{B}m_{B^{*}}/(m_{B}+m_{B^{*}})\) is the reduced mass. Here, it should be pointed out that the coupling constant \(x_{i}\) in Eq. (10) is based on the assumption that \(X_{b}\) is a shallow bound state where the potential binding the mesons is short-ranged.
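As a small numerical illustration of Eq. (10), the sketch below evaluates the effective coupling for one of the illustrative binding energies used later; the \(B\) and \(B^{*}\) masses are approximate PDG values and are assumptions of this example.

```
import numpy as np

m_B, m_Bstar = 5.27934, 5.32471        # GeV, approximate PDG masses (assumption)
eps_Xb = 0.025                         # GeV, illustrative binding energy (25 MeV)
mu = m_B * m_Bstar / (m_B + m_Bstar)   # reduced mass in Eq. (10)
c_i = 1.0 / np.sqrt(2.0)

x_sq = 16.0 * np.pi * (m_B + m_Bstar) ** 2 * c_i ** 2 * np.sqrt(2.0 * eps_Xb / mu)
x_i = np.sqrt(x_sq)                    # effective coupling in this convention
print(f"x_i ~ {x_i:.1f} GeV for eps_Xb = {eps_Xb * 1e3:.0f} MeV")
```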
The decay amplitudes of the triangle diagrams in Fig. 1 can be obtained and the explicit transition amplitudes for \(\Upsilon(5S,6S)\to\gamma X_{b}\) are presented in Appendix A. The partial decay widths of \(\Upsilon(5S,6S)\to\gamma X_{b}\) are given by
\[\Gamma(\Upsilon(5S,6S)\to\gamma X_{b})=\frac{E_{\gamma}|\mathcal{M}_{\Upsilon (5S,6S)\to\gamma X_{b}}|^{2}}{24\pi M_{\Upsilon(5S,6S)}^{2}}\,, \tag{11}\]
where \(E_{\gamma}\) is the photon energies in the \(\Upsilon(5S,6S)\) rest frame.
## III Numerical results
In Ref. [55], the authors predicted a large width of 238 MeV for \(B_{1}^{\prime}\). This large width effect was taken into account in our calculations by using the Breit-Wigner (BW) parameterization to approximate the spectral function of the broad \(1/2^{+}\) bottom meson. The explicit formula for \(B_{1}^{\prime}\) is
\[\mathcal{M}_{B_{1}^{\prime}}=\frac{1}{W_{B_{1}^{\prime}}}\int_{s_{ l}}^{s_{h}}\text{d}s\rho_{B_{1}^{\prime}}(s)\bar{\mathcal{M}}_{B_{1}^{\prime}}(s)\,, \tag{12}\]
where \(W_{B_{1}^{\prime}}=\int_{s_{l}}^{s_{h}}\text{d}s\rho_{B_{1}^{\prime}}(s)\) is the normalization factor, \(\bar{\mathcal{M}}_{B_{1}^{\prime}}(s)\) represents the loop amplitude of \(B_{1}^{\prime}\) calculated using \(s\) as the mass squared, \(s_{l}=(M_{B}+m_{\gamma})^{2}\), \(s_{h}=(M_{B_{1}^{\prime}}+\Gamma_{B_{1}^{\prime}})^{2}\), and \(\rho_{B_{1}^{\prime}}(s)\) is the spectral function of \(B_{1}^{\prime}\)
\[\rho_{B_{1}^{\prime}}(s)=\frac{1}{\pi}\operatorname{Im}\frac{-1}{s-M_{B_{1}^{ \prime}}^{2}+iM_{B_{1}^{\prime}}\Gamma_{B_{1}^{\prime}}}\,. \tag{13}\]
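The spectral-function averaging of Eqs. (12) and (13) can be approximated numerically as in the sketch below; the \(B_{1}^{\prime}\) mass, the integration grid, and the placeholder loop amplitude are illustrative assumptions of this example only.

```
import numpy as np

def rho(s, M, Gamma):
    """Breit-Wigner spectral function of Eq. (13)."""
    return (M * Gamma / np.pi) / ((s - M ** 2) ** 2 + (M * Gamma) ** 2)

def loop_amp(s):
    """Placeholder for the loop amplitude evaluated at mass squared s (illustrative)."""
    return 1.0 / s

M_B, M_B1p, Gamma_B1p = 5.27934, 5.73, 0.20    # GeV; B1' mass and width are assumptions
s_l = M_B ** 2                                  # (M_B + m_gamma)^2 with m_gamma = 0
s_h = (M_B1p + Gamma_B1p) ** 2
s = np.linspace(s_l, s_h, 2001)
ds = s[1] - s[0]

W_norm = np.sum(rho(s, M_B1p, Gamma_B1p)) * ds          # normalization of Eq. (12)
M_smeared = np.sum(rho(s, M_B1p, Gamma_B1p) * loop_amp(s)) * ds / W_norm
```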
Before proceeding to the numerical results, we first briefly review the predictions of the mass of \(X_{b}\). The existence of the \(X_{b}\) is predicted in both the tetraquark model [56] and those involving a molecular interpretation [57; 58; 59]. In Ref. [56], the mass of the lowest-lying \(1^{++}\)\(\bar{b}\bar{q}bq\) tetraquark is predicted to be 10504 MeV, while the mass of the \(B\bar{B}^{*}\) molecular state is predicted to be a few tens of MeV higher [57; 58; 59]. For example, in Ref. [57], the mass was predicted to be 10562 MeV, corresponding to a binding energy of 42 MeV, while with a binding energy of \((24^{+8}_{-9})\) MeV it was predicted to be \((10580^{+9}_{-8})\) MeV [59]. Therefore, treating \(X_{b}\) as a shallow bound state might be a good approximation, applicable if the binding energy is less than 50 MeV. In order to cover the range of the previous molecular and tetraquark predictions in Refs. [56; 57; 58; 59], we performed the calculations up to a binding energy of 100 MeV and chose several illustrative values of \(\epsilon_{X_{b}}=(5,10,25,50,100)\) MeV for discussion.
In Table 1, we list the contributions of \(\Upsilon(5S)\to\gamma X_{b}\) from \(B^{(*)}\bar{B}^{(*)}\) loops, \(B_{1}^{\prime}\bar{B}^{(*)}\) loops, and the total contributions. For the \(B_{1}^{\prime}\), we choose the \(\Gamma_{B_{1}^{\prime}}\) to be 0, 100 MeV and 200 MeV, respectively. It can be seen that the contributions
from \(B^{(*)}\bar{B}^{(*)}\) loops are about \(10^{-3}\) keV. For the contributions from \(B_{1}^{\prime}\bar{B}^{(*)}\) loops, the partial decay widths decrease with increasing width of \(B_{1}^{\prime}\). Without the width effects of \(B_{1}^{\prime}\), i.e., \(\Gamma_{B_{1}^{\prime}}=0\), the contributions from \(B_{1}^{\prime}\bar{B}^{(*)}\) loops are about \(10^{-2}\) keV, while with \(\Gamma_{B_{1}^{\prime}}=200\) MeV the contributions are about two orders of magnitude smaller. As seen, the total decay widths also decrease with increasing width of \(B_{1}^{\prime}\). The obtained partial widths range from \(10^{-3}\) to \(10^{-2}\) keV, indicating a sizeable branching fraction from about \(10^{-7}\) to \(10^{-6}\).
The results for \(\Upsilon(6S)\to\gamma X_{b}\) are summarized in Table 2. The contributions from \(B^{(*)}\bar{B}^{(*)}\) loops are about \(10^{-3}\) keV. Different from the case of \(\Upsilon(5S)\to\gamma X_{b}\), the contribution from \(B_{1}^{\prime}\bar{B}^{(*)}\) loops for \(\Upsilon(6S)\to\gamma X_{b}\) is not monotonic in the width of \(B_{1}^{\prime}\). This finding indicates that the \(B_{1}^{\prime}\) width has a smaller effect in \(\Upsilon(6S)\to\gamma X_{b}\) than in \(\Upsilon(5S)\to\gamma X_{b}\), which may be due to the fact that the mass of \(\Upsilon(5S)\) is closer to the threshold of \(B_{1}^{\prime}\bar{B}^{(*)}\) than that of \(\Upsilon(6S)\). It can be seen that the contributions from \(B_{1}^{\prime}\bar{B}^{(*)}\) loops range from \(10^{-4}\) to \(10^{-3}\) keV, which is about 1 order of magnitude smaller than for \(\Upsilon(5S)\). The total decay widths increase with increasing width of \(B_{1}^{\prime}\). Similar to the case of \(\Upsilon(5S)\to\gamma X_{b}\), the obtained partial widths for \(\Upsilon(6S)\to\gamma X_{b}\) are also about \(10^{-3}\) to \(10^{-2}\) keV, thereby corresponding to a branching fraction of about \(10^{-7}\).
In Fig. 2(a), we plot the decay widths and the branching ratios of \(\Upsilon(5S)\to\gamma X_{b}\) as a function of the binding energy with \(\Gamma_{B_{1}^{\prime}}=0\) MeV (solid line), \(\Gamma_{B_{1}^{\prime}}=100\) MeV (dash line), and \(\Gamma_{B_{1}^{\prime}}=200\) MeV (dotted line). The coupling constants of \(X_{b}\) in Eq. (10) and the threshold effects can simultaneously influence the binding energy dependence of the partial widths. With increasing the binding energy \(\epsilon_{X_{b}}\), the coupling strength of \(X_{b}\) increases, and the threshold effects decrease. Both the coupling strength of \(X_{b}\) and the threshold effects vary quickly in the small \(\epsilon_{X_{b}}\) region and slowly in the large \(\epsilon_{X_{b}}\) region. As a result, the partial width is relatively sensitive to the small \(\epsilon_{X_{b}}\), while at the large \(\epsilon_{X_{b}}\) region it keeps nearly constant. As seen, at the same binding energy, the partial widths with small \(\Gamma_{B_{1}^{\prime}}\) are larger
| Binding energy | \(B^{(*)}\bar{B}^{(*)}\) loops | \(B_{1}^{\prime}\bar{B}^{(*)}\) loops, \(\Gamma_{B_{1}^{\prime}}=0\) | \(\Gamma_{B_{1}^{\prime}}=100\) | \(\Gamma_{B_{1}^{\prime}}=200\) | Total, \(\Gamma_{B_{1}^{\prime}}=0\) | \(\Gamma_{B_{1}^{\prime}}=100\) | \(\Gamma_{B_{1}^{\prime}}=200\) |
|---|---|---|---|---|---|---|---|
| \(\epsilon_{X_{b}}=5\) MeV | \(1.52\times 10^{-3}\) | \(5.67\times 10^{-4}\) | \(1.11\times 10^{-3}\) | \(4.15\times 10^{-4}\) | \(8.19\times 10^{-4}\) | \(1.10\times 10^{-3}\) | \(1.50\times 10^{-3}\) |
| \(\epsilon_{X_{b}}=10\) MeV | \(2.22\times 10^{-3}\) | \(7.52\times 10^{-4}\) | \(1.27\times 10^{-3}\) | \(5.11\times 10^{-4}\) | \(1.25\times 10^{-3}\) | \(1.62\times 10^{-3}\) | \(2.20\times 10^{-3}\) |
| \(\epsilon_{X_{b}}=25\) MeV | \(3.87\times 10^{-3}\) | \(1.01\times 10^{-3}\) | \(1.41\times 10^{-3}\) | \(6.38\times 10^{-4}\) | \(2.40\times 10^{-3}\) | \(2.90\times 10^{-3}\) | \(3.80\times 10^{-3}\) |
| \(\epsilon_{X_{b}}=50\) MeV | \(6.39\times 10^{-3}\) | \(1.17\times 10^{-3}\) | \(1.45\times 10^{-3}\) | \(7.27\times 10^{-4}\) | \(4.39\times 10^{-3}\) | \(4.99\times 10^{-3}\) | \(6.19\times 10^{-3}\) |
| \(\epsilon_{X_{b}}=100\) MeV | \(1.21\times 10^{-2}\) | \(1.27\times 10^{-3}\) | \(1.46\times 10^{-3}\) | \(8.22\times 10^{-4}\) | \(9.24\times 10^{-3}\) | \(9.77\times 10^{-3}\) | \(1.15\times 10^{-2}\) |

Table 2: The predicted decay widths (in units of keV) of \(\Upsilon(6S)\to\gamma X_{b}\) for different binding energies. Here \(\Gamma_{B_{1}^{\prime}}\) is taken to be 0, 100, and 200 MeV, respectively.
In Fig. 2(b), the dependences of the decay widths and the branching ratios for \(\Upsilon(6S)\to\gamma X_{b}\) on the binding energy are shown. Similar to the case of \(\Upsilon(5S)\to\gamma X_{b}\), the partial width is rather sensitive to small \(\epsilon_{X_{b}}\), while in the large-\(\epsilon_{X_{b}}\) region it becomes nearly independent of the binding energy. At a given binding energy, the partial widths increase with \(\Gamma_{B_{1}^{\prime}}\), but only mildly: the predicted partial width for \(\Upsilon(6S)\to\gamma X_{b}\) is rather insensitive to the \(B_{1}^{\prime}\) width, in contrast to the case of \(\Upsilon(5S)\to\gamma X_{b}\). This indicates that the intermediate bottomed meson loop contribution to the process \(\Upsilon(6S)\to\gamma X_{b}\) is smaller than that to \(\Upsilon(5S)\to\gamma X_{b}\).
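The flattening of the curves in Fig. 2 at large binding energy can be understood from how the \(X_{b}\) coupling scales with \(\epsilon_{X_{b}}\). The sketch below assumes the leading-order relation for a pure \(B\bar{B}^{*}\) molecule, \(g_{X}^{2}\propto\sqrt{2\mu\epsilon_{X_{b}}}\) with \(\mu\) the reduced mass; this proportionality, the approximate meson masses, and the chosen binding energies are illustrative assumptions.

```python
# Why the curves in Fig. 2 are steep at small binding energy and flat at
# large binding energy: for a shallow S-wave bound state the effective
# coupling squared scales like sqrt(2*mu*eps) (leading-order molecular
# relation, assumed here), whose slope ~ sqrt(mu/(2*eps)) diverges as eps -> 0.
from math import sqrt

m_B, m_Bstar = 5.279, 5.325            # GeV, approximate B and B* masses
mu = m_B * m_Bstar / (m_B + m_Bstar)   # reduced mass of the B Bbar* pair

for eps in (0.005, 0.010, 0.025, 0.050, 0.100):  # binding energies in GeV
    value = sqrt(2 * mu * eps)         # ~ coupling-squared scaling
    slope = sqrt(mu / (2 * eps))       # its derivative with respect to eps
    print(f"eps = {1e3*eps:5.1f} MeV   sqrt(2*mu*eps) = {value:.3f}   slope = {slope:5.1f}")
# the slope drops by a factor of ~4.5 between eps = 5 MeV and eps = 100 MeV,
# mirroring the rapid variation at small eps and the near plateau at large eps
```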
## IV Summary
We have presented the production of \(X_{b}\) in the radiative decays of \(\Upsilon(5S,6S)\), where the \(X_{b}\) is assumed to be a molecular state of \(B\bar{B}^{*}\). The numerical calculations were performed with two kinds of intermediate bottomed meson loops: \(B^{(*)}\bar{B}^{(*)}\) loops coupled to \(\Upsilon(5S,6S)\) in a \(P\)-wave, and \(B_{1}^{\prime}\bar{B}^{(*)}\) loops coupled to \(\Upsilon(5S,6S)\) in an \(S\)-wave. Our results show that the partial widths of \(\Upsilon(5S,6S)\to\gamma X_{b}\) range from \(10^{-3}\) to \(10^{-2}\) keV, corresponding to branching ratios from \(10^{-7}\) to \(10^{-6}\). In Refs. [30; 31], we have studied the radiative decays and the hidden bottomonium decays of \(X_{b}\). Given that the branching ratios of the isospin-conserving process \(X_{b}\to\omega\Upsilon(1S)\) are expected to be relatively large, a search for \(\Upsilon(5S)\to\gamma X_{b}\to\gamma\omega\Upsilon(1S)\) may be feasible at the upgraded Belle II experiment. These studies may help us investigate the \(X_{b}\) more deeply. An experimental observation of \(X_{b}\) would provide further insight into the spectroscopy of exotic states and would help to probe the structure of the states connected by heavy quark symmetry.
Figure 2: The dependence of the decay widths of \(\Upsilon(5S)\to\gamma X_{b}\) (a) and \(\Upsilon(6S)\to\gamma X_{b}\) (b) on the binding energy for different \(B_{1}^{\prime}\) widths as indicated by the numbers in the graph. The right \(y\)-axis represents the corresponding branching ratio.
## Acknowledgements
This work is partly supported by the National Natural Science Foundation of China under Grant Nos. 12075133, 12105153, 12075288, 11735003, 11961141012, and 11835015, and by the Natural Science Foundation of Shandong Province under Grant Nos. ZR2021MA082 and ZR2022ZD26. It is also supported by the Taishan Scholar Project of Shandong Province (Grant No. tsqn202103062), the Higher Educational Youth Innovation Science and Technology Program of Shandong Province (Grant No. 2020KJJ004), the Chongqing Natural Science Foundation under Project No. cstc2021jcyj-msxmX0078, and the Youth Innovation Promotion Association CAS.
## Appendix A The transition amplitudes
Here we give the amplitudes for the transitions \(\Upsilon(5S,6S)\rightarrow\gamma X_{b}\). \(\epsilon_{1}\), \(\epsilon_{2}\), and \(\epsilon_{3}\) are the polarization vectors of the initial state \(\Upsilon(5S,6S)\), final photon \(\gamma\), and final state \(X_{b}\), respectively. The transition amplitudes shown in Figs. 1 (a)-(c) are
\[{\cal M}_{a} = -eg_{1}g_{X}\left(\beta Q+\frac{Q^{\prime}}{m_{Q}}\right)\epsilon_{ijk}q^{i}\epsilon_{2}^{j}\epsilon_{3}^{k}\,\epsilon_{1}\cdot q\,I_{a}^{(1)}(m_{B},m_{B},m_{B^{*}},q)\,, \tag{10}\]
\[{\cal M}_{b} = eg_{1}g_{X}\left(\beta Q-\frac{Q^{\prime}}{m_{Q}}\right)\epsilon_{ijk}\epsilon_{1}^{i}q^{j}\left(q\cdot\epsilon_{3}\,\epsilon_{2}^{k}-q^{k}\,\epsilon_{2}\cdot\epsilon_{3}\right)I_{b}^{(1)}(m_{B^{*}},m_{B},m_{B^{*}},q)\,, \tag{11}\]
\[{\cal M}_{c} = -eg_{1}g_{X}\left(\beta Q+\frac{Q^{\prime}}{m_{Q}}\right)\epsilon_{ijk}q^{i}\epsilon_{2}^{j}\left(\epsilon_{1}^{k}\,q\cdot\epsilon_{3}-q\cdot\epsilon_{1}\,\epsilon_{3}^{k}+q^{k}\,\epsilon_{1}\cdot\epsilon_{3}\right)I_{c}^{(1)}(m_{B^{*}},m_{B^{*}},m_{B},q)\,. \tag{12}\]
The transition amplitudes shown in Figs. 1 (d) and (e) are
\[{\cal M}_{d} = eQ\widetilde{\beta}g_{2}g_{X}\epsilon^{ijk}\epsilon_{1}^{i} \epsilon_{2}^{j}\epsilon_{3}^{k}E_{\gamma}I(m_{B_{1}^{\prime}},m_{B},m_{B^{*} },q)\,, \tag{13}\] \[{\cal M}_{e} = -eQ\widetilde{\beta}g_{2}g_{X}\epsilon^{ijk}\epsilon_{1}^{i} \epsilon_{2}^{j}\epsilon_{3}^{k}E_{\gamma}I(m_{B_{1}^{\prime}},m_{B^{*}},m_{B },q)\,. \tag{14}\]
In the above amplitudes, the basic three-point loop function \(I(q)\) is [45]
\[I(m_{1},m_{2},m_{3},q) = i\int\frac{{\rm d}^{d}l}{(2\pi)^{d}}\frac{1}{(l^{2}-m_{1}^{2}+i\epsilon)[(P-l)^{2}-m_{2}^{2}+i\epsilon][(l-q)^{2}-m_{3}^{2}+i\epsilon]} \tag{15}\]
\[= \frac{\mu_{12}\mu_{23}}{16\pi m_{1}m_{2}m_{3}}\frac{1}{\sqrt{a}}\left(\tan^{-1}\left(\frac{c^{\prime}-c}{2\sqrt{ac}}\right)+\tan^{-1}\left(\frac{2a+c^{\prime}-c}{2\sqrt{a(c^{\prime}-a)}}\right)\right).\]
Here \(\mu_{ij}=m_{i}m_{j}/(m_{i}+m_{j})\) are the reduced masses, \(b_{12}=m_{1}+m_{2}-M\), \(b_{23}=m_{2}+m_{3}+q^{0}-M\), and \(M\) represents the mass of the initial particle. \(a=\left(\mu_{23}/m_{3}\right)^{2}\vec{q}^{2}\), \(c=2\mu_{12}b_{12}\), and \(c^{\prime}=2\mu_{23}b_{23}+\mu_{23}\vec{q}^{2}/m_{3}\). \(m_{1}\), \(m_{2}\), and \(m_{3}\) denote the masses of the bottomed mesons on the up, down, and right sides of the triangle loop, respectively.
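For readers who wish to evaluate Eq. (15) numerically, the sketch below transcribes it directly into Python. Masses, the photon energy \(q^{0}\), and the momentum magnitude \(|\vec q|\) are in GeV; the small negative imaginary shift applied to \(c\) and \(c^{\prime}-a\) is an implementation choice that selects the branch implied by the \(+i\epsilon\) prescription. This is only a transcription of the printed formula, not the code behind the numbers in the tables.

```python
# Numerical transcription of the analytic three-point loop function, Eq. (15).
# All dimensionful arguments are in GeV; the result carries units of GeV^-2.
import cmath
from math import pi

def loop_I(m1, m2, m3, M, q0, q3, ieps=1e-12j):
    """Scalar loop I(m1, m2, m3, q) for initial mass M, photon energy q0,
    and photon three-momentum magnitude q3 (a sketch following Eq. (15))."""
    mu12 = m1 * m2 / (m1 + m2)
    mu23 = m2 * m3 / (m2 + m3)
    b12 = m1 + m2 - M
    b23 = m2 + m3 + q0 - M
    a = (mu23 / m3) ** 2 * q3 ** 2
    c = 2 * mu12 * b12
    cp = 2 * mu23 * b23 + mu23 * q3 ** 2 / m3
    pref = mu12 * mu23 / (16 * pi * m1 * m2 * m3 * cmath.sqrt(a))
    return pref * (cmath.atan((cp - c) / (2 * cmath.sqrt(a * (c - ieps))))
                   + cmath.atan((2 * a + cp - c) / (2 * cmath.sqrt(a * (cp - a - ieps)))))
```

For the \(S\)-wave loop of Fig. 1(d), for instance, one would call `loop_I` with \((m_{1},m_{2},m_{3})=(m_{B_{1}^{\prime}},m_{B},m_{B^{*}})\) and, for an on-shell photon, \(q^{0}=|\vec q|=E_{\gamma}\).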
The involved vector loop integral is defined as
\[q^{i}I^{(1)}(m_{1},m_{2},m_{3},q) = i\int\frac{{\rm d}^{d}l}{(2\pi)^{d}}\frac{l^{i}}{(l^{2}-m_{1}^{2}+i\epsilon)[(P-l)^{2}-m_{2}^{2}+i\epsilon][(l-q)^{2}-m_{3}^{2}+i\epsilon]}\,. \tag{16}\]
Using the technique of tensor reduction, we get
\[I^{(1)}(m_{1},m_{2},m_{3},q)\simeq\frac{\mu_{23}}{am_{3}}\left[B(c^{\prime}-a)- B(c)+\frac{1}{2}(c^{\prime}-c)I(m_{1},m_{2},m_{3},q)\right]\,, \tag{17}\]
where the function \(B(c)\) is
\[B(c)=-\frac{\mu_{12}\mu_{23}}{4m_{1}m_{2}m_{3}}\frac{\sqrt{c-i\epsilon}}{4\pi}. \tag{18}\]
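A matching sketch of the tensor-reduced integral of Eqs. (16)–(18) is given below; it reuses `loop_I` from the previous snippet (and therefore assumes that snippet has already been run), and again merely transcribes the printed formulas.

```python
# Vector loop integral I^(1) via the tensor reduction of Eq. (17), with the
# auxiliary function B(c) of Eq. (18). Same units and caveats as loop_I above.
def loop_B(c, m1, m2, m3, ieps=1e-12j):
    mu12 = m1 * m2 / (m1 + m2)
    mu23 = m2 * m3 / (m2 + m3)
    return -mu12 * mu23 / (4 * m1 * m2 * m3) * cmath.sqrt(c - ieps) / (4 * pi)

def loop_I1(m1, m2, m3, M, q0, q3, ieps=1e-12j):
    mu12 = m1 * m2 / (m1 + m2)
    mu23 = m2 * m3 / (m2 + m3)
    b12 = m1 + m2 - M
    b23 = m2 + m3 + q0 - M
    a = (mu23 / m3) ** 2 * q3 ** 2
    c = 2 * mu12 * b12
    cp = 2 * mu23 * b23 + mu23 * q3 ** 2 / m3
    return (mu23 / (a * m3)) * (loop_B(cp - a, m1, m2, m3, ieps)
                                - loop_B(c, m1, m2, m3, ieps)
                                + 0.5 * (cp - c) * loop_I(m1, m2, m3, M, q0, q3, ieps))
```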
It is worth mentioning that, once the nonrelativistic normalization of the bottomonium and bottomed meson fields is taken into account, each amplitude should be multiplied by a factor \(\sqrt{M_{i}M_{f}}m_{1}m_{2}m_{3}\), where \(M_{i}\) and \(M_{f}\) represent the masses of the initial and final particles, respectively.